[
{
"msg_contents": "Hi,\n\nI am getting \"ERROR: subplan \"SubPlan 1\" was not initialized\" error with\nbelow test case.\n\nCREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);\ncreate table tbl_null PARTITION OF tbl FOR VALUES IN (null);\ncreate table tbl_def PARTITION OF tbl DEFAULT;\ninsert into tbl values (8800,0,0);\ninsert into tbl values (1891,1,1);\ninsert into tbl values (3420,2,0);\ninsert into tbl values (9850,3,0);\ninsert into tbl values (7164,4,4);\nanalyze tbl;\nexplain (costs off) select count(*) from tbl t1 where (exists(select 1 from\ntbl t2 where t2.c1 = t1.c2) or c3 < 0);\n\npostgres=# explain (costs off) select count(*) from tbl t1 where\n(exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\nERROR: subplan \"SubPlan 1\" was not initialized\n\nThanks & Regards,\nRajkumar Raghuwanshi\n\nHi,I am getting \"ERROR: subplan \"SubPlan 1\" was not initialized\" error with below test case.CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);create table tbl_null PARTITION OF tbl FOR VALUES IN (null);create table tbl_def PARTITION OF tbl DEFAULT;insert into tbl values (8800,0,0);insert into tbl values (1891,1,1);insert into tbl values (3420,2,0);insert into tbl values (9850,3,0);insert into tbl values (7164,4,4);analyze tbl;explain (costs off) select count(*) from tbl t1 where (exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);postgres=# explain (costs off) select count(*) from tbl t1 where (exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);ERROR: subplan \"SubPlan 1\" was not initializedThanks & Regards,Rajkumar Raghuwanshi",
"msg_date": "Tue, 14 Sep 2021 17:19:15 +0530",
"msg_from": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em ter., 14 de set. de 2021 às 08:49, Rajkumar Raghuwanshi <\nrajkumar.raghuwanshi@enterprisedb.com> escreveu:\n\n> Hi,\n>\n> I am getting \"ERROR: subplan \"SubPlan 1\" was not initialized\" error with\n> below test case.\n>\n> CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);\n> create table tbl_null PARTITION OF tbl FOR VALUES IN (null);\n> create table tbl_def PARTITION OF tbl DEFAULT;\n> insert into tbl values (8800,0,0);\n> insert into tbl values (1891,1,1);\n> insert into tbl values (3420,2,0);\n> insert into tbl values (9850,3,0);\n> insert into tbl values (7164,4,4);\n> analyze tbl;\n> explain (costs off) select count(*) from tbl t1 where (exists(select 1\n> from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n>\n> postgres=# explain (costs off) select count(*) from tbl t1 where\n> (exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n> ERROR: subplan \"SubPlan 1\" was not initialized\n>\nNot sure if that helps, but below backtrace at Windows 64.\n\n00 postgres!ExecInitSubPlan(struct SubPlan * subplan = 0x00000000`021b4ed8,\nstruct PlanState * parent = 0x00000000`0219ff90)+0x93\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\nodeSubplan.c @ 804]\n01 postgres!ExecInitExprRec(struct Expr * node = 0x00000000`021b4ed8,\nstruct ExprState * state = 0x00000000`021a0ba0, unsigned int64 * resv =\n0x00000000`021a0ba8, bool * resnull = 0x00000000`021a0ba5)+0x1447\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execExpr.c @ 1424]\n02 postgres!ExecInitExprRec(struct Expr * node = 0x00000000`021b4ea8,\nstruct ExprState * state = 0x00000000`021a0ba0, unsigned int64 * resv =\n0x00000000`021a0ba8, bool * resnull = 0x00000000`021a0ba5)+0x1176\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execExpr.c @ 1364]\n03 postgres!ExecInitQual(struct List * qual = 0x00000000`021b5198, struct\nPlanState * parent = 0x00000000`0219ff90)+0x197\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execExpr.c @ 
256]\n04 postgres!ExecInitSeqScan(struct SeqScan * node = 0x00000000`021b3dd8,\nstruct EState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0x105\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\nodeSeqscan.c @ 171]\n05 postgres!ExecInitNode(struct Plan * node = 0x00000000`021b3dd8, struct\nEState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0x1bb\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execProcnode.c @ 209]\n06 postgres!ExecInitAppend(struct Append * node = 0x00000000`021b3c78,\nstruct EState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0x301\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\nodeAppend.c @ 232]\n07 postgres!ExecInitNode(struct Plan * node = 0x00000000`021b3c78, struct\nEState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0xf8\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execProcnode.c @ 181]\n08 postgres!ExecInitAgg(struct Agg * node = 0x00000000`021b4688, struct\nEState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0x559\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\nodeAgg.c @ 3383]\n09 postgres!ExecInitNode(struct Plan * node = 0x00000000`021b4688, struct\nEState * estate = 0x00000000`0219f2c8, int eflags = 0n17)+0x58a\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execProcnode.c @ 340]\n0a postgres!InitPlan(struct QueryDesc * queryDesc = 0x00000000`021b5e48,\nint eflags = 0n17)+0x490\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execMain.c @ 936]\n0b postgres!standard_ExecutorStart(struct QueryDesc * queryDesc =\n0x00000000`021b5e48, int eflags = 0n17)+0x242\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execMain.c @ 265]\n0c postgres!ExecutorStart(struct QueryDesc * queryDesc =\n0x00000000`021b5e48, int eflags = 0n1)+0x4a\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\executor\\execMain.c @ 144]\n0d postgres!ExplainOnePlan(struct PlannedStmt * plannedstmt =\n0x00000000`021b5db8, struct IntoClause * into = 
0x00000000`00000000, struct\nExplainState * es = 0x00000000`021831f8, char * queryString =\n0x00000000`00999348 \"explain (costs off) select count(*) from tbl t1 where\n(exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\", struct\nParamListInfoData * params = 0x00000000`00000000, struct QueryEnvironment *\nqueryEnv = 0x00000000`00000000, union _LARGE_INTEGER * planduration =\n0x00000000`007ff160 {5127}, struct BufferUsage * bufusage =\n0x00000000`00000000)+0x197\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\commands\\explain.c @ 582]\n0e postgres!ExplainOneQuery(struct Query * query = 0x00000000`0099a5d0, int\ncursorOptions = 0n2048, struct IntoClause * into = 0x00000000`00000000,\nstruct ExplainState * es = 0x00000000`021831f8, char * queryString =\n0x00000000`00999348 \"explain (costs off) select count(*) from tbl t1 where\n(exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\", struct\nParamListInfoData * params = 0x00000000`00000000, struct QueryEnvironment *\nqueryEnv = 0x00000000`00000000)+0x210\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\commands\\explain.c @ 413]\n0f postgres!ExplainQuery(struct ParseState * pstate = 0x00000000`0099de20,\nstruct ExplainStmt * stmt = 0x00000000`0099a410, struct ParamListInfoData *\nparams = 0x00000000`00000000, struct _DestReceiver * dest =\n0x00000000`0099dd90)+0x72f\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\commands\\explain.c @ 286]\n10 postgres!standard_ProcessUtility(struct PlannedStmt * pstmt =\n0x00000000`02197808, char * queryString = 0x00000000`00999348 \"explain\n(costs off) select count(*) from tbl t1 where (exists(select 1 from tbl t2\nwhere t2.c1 = t1.c2) or c3 < 0);\", bool readOnlyTree = false,\nProcessUtilityContext context = PROCESS_UTILITY_TOPLEVEL (0n0), struct\nParamListInfoData * params = 0x00000000`00000000, struct QueryEnvironment *\nqueryEnv = 0x00000000`00000000, struct _DestReceiver * dest =\n0x00000000`0099dd90, struct QueryCompletion * qc 
=\n0x00000000`007ff630)+0x8f1\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\tcop\\utility.c @ 846]\n11 postgres!ProcessUtility(struct PlannedStmt * pstmt =\n0x00000000`02197808, char * queryString = 0x00000000`00999348 \"explain\n(costs off) select count(*) from tbl t1 where (exists(select 1 from tbl t2\nwhere t2.c1 = t1.c2) or c3 < 0);\", bool readOnlyTree = false,\nProcessUtilityContext context = PROCESS_UTILITY_TOPLEVEL (0n0), struct\nParamListInfoData * params = 0x00000000`00000000, struct QueryEnvironment *\nqueryEnv = 0x00000000`00000000, struct _DestReceiver * dest =\n0x00000000`0099dd90, struct QueryCompletion * qc =\n0x00000000`007ff630)+0xb5\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\tcop\\utility.c @ 530]\n12 postgres!PortalRunUtility(struct PortalData * portal =\n0x00000000`0095cd38, struct PlannedStmt * pstmt = 0x00000000`02197808, bool\nisTopLevel = true, bool setHoldSnapshot = true, struct _DestReceiver * dest\n= 0x00000000`0099dd90, struct QueryCompletion * qc =\n0x00000000`007ff630)+0x135\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\tcop\\pquery.c @ 1157]\n13 postgres!FillPortalStore(struct PortalData * portal =\n0x00000000`0095cd38, bool isTopLevel = true)+0x105\n[C:\\dll\\postgres\\postgres_head\\src\\backend\\tcop\\pquery.c @ 1028]\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 14 Sep 2021 09:51:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> writes:\n> I am getting \"ERROR: subplan \"SubPlan 1\" was not initialized\" error with\n> below test case.\n\n> CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);\n> create table tbl_null PARTITION OF tbl FOR VALUES IN (null);\n> create table tbl_def PARTITION OF tbl DEFAULT;\n> insert into tbl values (8800,0,0);\n> insert into tbl values (1891,1,1);\n> insert into tbl values (3420,2,0);\n> insert into tbl values (9850,3,0);\n> insert into tbl values (7164,4,4);\n> analyze tbl;\n> explain (costs off) select count(*) from tbl t1 where (exists(select 1 from\n> tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n\n> postgres=# explain (costs off) select count(*) from tbl t1 where\n> (exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n> ERROR: subplan \"SubPlan 1\" was not initialized\n\nNice example. This is failing since 41efb8340. It happens because\nwe copy the AlternativeSubPlan for the EXISTS into the scan clauses\nfor each of t1's partitions. At setrefs.c time, when\nfix_alternative_subplan() looks at the first of these\nAlternativeSubPlans, it decides it likes the first subplan better,\nso it deletes SubPlan 2 from the root->glob->subplans list. But when\nit comes to the next copy (which is attached to a partition with a\ndifferent number of rows), it likes the second subplan better, so it\ndeletes SubPlan 1 from the root->glob->subplans list. Now we have\nSubPlan nodes in the tree with no referents in the global list of\nsubplans, so kaboom.\n\nThe easiest fix would just be to not try to delete unreferenced\nsubplans. The error goes away if I remove the \"lfirst(lc2) = NULL\"\nstatements from fix_alternative_subplan(). However, this is a bit\nannoying since then we will still pay the cost of initializing\nsubplans that (in most cases) will never be used. 
I'm going to\nlook into how painful it is to have setrefs.c remove unused subplans\nonly at the end, after it's seen all the AlternativeSubPlans.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Sep 2021 13:44:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "On Tue, Sep 14, 2021 at 10:44 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> writes:\n> > I am getting \"ERROR: subplan \"SubPlan 1\" was not initialized\" error with\n> > below test case.\n>\n> > CREATE TABLE tbl ( c1 int, c2 int, c3 int ) PARTITION BY LIST (c1);\n> > create table tbl_null PARTITION OF tbl FOR VALUES IN (null);\n> > create table tbl_def PARTITION OF tbl DEFAULT;\n> > insert into tbl values (8800,0,0);\n> > insert into tbl values (1891,1,1);\n> > insert into tbl values (3420,2,0);\n> > insert into tbl values (9850,3,0);\n> > insert into tbl values (7164,4,4);\n> > analyze tbl;\n> > explain (costs off) select count(*) from tbl t1 where (exists(select 1\n> from\n> > tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n>\n> > postgres=# explain (costs off) select count(*) from tbl t1 where\n> > (exists(select 1 from tbl t2 where t2.c1 = t1.c2) or c3 < 0);\n> > ERROR: subplan \"SubPlan 1\" was not initialized\n>\n> Nice example. This is failing since 41efb8340. It happens because\n> we copy the AlternativeSubPlan for the EXISTS into the scan clauses\n> for each of t1's partitions. At setrefs.c time, when\n> fix_alternative_subplan() looks at the first of these\n> AlternativeSubPlans, it decides it likes the first subplan better,\n> so it deletes SubPlan 2 from the root->glob->subplans list. But when\n> it comes to the next copy (which is attached to a partition with a\n> different number of rows), it likes the second subplan better, so it\n> deletes SubPlan 1 from the root->glob->subplans list. Now we have\n> SubPlan nodes in the tree with no referents in the global list of\n> subplans, so kaboom.\n>\n> The easiest fix would just be to not try to delete unreferenced\n> subplans. The error goes away if I remove the \"lfirst(lc2) = NULL\"\n> statements from fix_alternative_subplan(). 
However, this is a bit\n> annoying since then we will still pay the cost of initializing\n> subplans that (in most cases) will never be used. I'm going to\n> look into how painful it is to have setrefs.c remove unused subplans\n> only at the end, after it's seen all the AlternativeSubPlans.\n>\n> regards, tom lane\n>\n>\nHi,\nIn the fix, isUsedSubplan is used to tell whether any given subplan is used.\nSince only one subplan is used, I wonder if the array can be replaced by\nspecifying the subplan is used.\n\nCheers",
"msg_date": "Tue, 14 Sep 2021 13:00:44 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> In the fix, isUsedSubplan is used to tell whether any given subplan is used.\n> Since only one subplan is used, I wonder if the array can be replaced by\n> specifying the subplan is used.\n\nThat doesn't seem particularly more convenient. The point of the bool\narray is to merge the results from examination of (possibly) many\nAlternativeSubPlans.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Sep 2021 16:11:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em ter., 14 de set. de 2021 às 17:11, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Zhihong Yu <zyu@yugabyte.com> writes:\n> > In the fix, isUsedSubplan is used to tell whether any given subplan is\n> used.\n> > Since only one subplan is used, I wonder if the array can be replaced by\n> > specifying the subplan is used.\n>\n> That doesn't seem particularly more convenient. The point of the bool\n> array is to merge the results from examination of (possibly) many\n> AlternativeSubPlans.\n>\nImpressive quick fix, but IMHO I also think it's a bit excessive.\n\nI would like to ask if this alternative fix (attached) would also solve the\nproblem or not.\nApparently, it passes the proposed test and in regress.\n\npostgres=# create temp table exists_tbl (c1 int, c2 int, c3 int) partition\nby list (c1);\nCREATE TABLE\npostgres=# create temp table exists_tbl_null partition of exists_tbl for\nvalues in (null);\nCREATE TABLE\npostgres=# create temp table exists_tbl_def partition of exists_tbl default;\nCREATE TABLE\npostgres=# insert into exists_tbl select x, x/2, x+1 from\ngenerate_series(0,10) x;\nINSERT 0 11\npostgres=# analyze exists_tbl;\nANALYZE\npostgres=# explain (costs off)\npostgres-# explain (costs off);\nERROR: syntax error at or near \"explain\"\nLINE 2: explain (costs off);\n ^\npostgres=# explain (costs off)\npostgres-# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\n QUERY PLAN\n------------------------------------------------------\n Append\n -> Seq Scan on exists_tbl_null t1_1\n Filter: ((SubPlan 1) OR (c3 < 0))\n SubPlan 1\n -> Append\n -> Seq Scan on exists_tbl_null t2_1\n Filter: (t1_1.c1 = c2)\n -> Seq Scan on exists_tbl_def t2_2\n Filter: (t1_1.c1 = c2)\n -> Seq Scan on exists_tbl_def t1_2\n Filter: ((hashed SubPlan 2) OR (c3 < 0))\n SubPlan 2\n -> Append\n -> Seq Scan on exists_tbl_null t2_4\n -> Seq Scan on exists_tbl_def t2_5\n(15 rows)\n\n\npostgres=# select * from 
exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\n c1 | c2 | c3\n----+----+----\n 0 | 0 | 1\n 1 | 0 | 2\n 2 | 1 | 3\n 3 | 1 | 4\n 4 | 2 | 5\n 5 | 2 | 6\n(6 rows)\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 15 Sep 2021 11:46:40 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> I would like to ask if this alternative fix (attached) would also solve the\n> problem or not.\n\nIf I'm reading the patch correctly, that fixes it by failing to drop\nunused subplans at all --- the second loop you have has no external\neffect.\n\nWe could, in fact, not bother with removing the no-longer-referenced\nsubplans, and it probably wouldn't be all that awful. But the intent\nof the original patch was to save the executor startup time for such\nsubplans, so I wanted to preserve that goal if I could. The committed\npatch seems small enough and cheap enough to be worthwhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Sep 2021 11:00:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > I would like to ask if this alternative fix (attached) would also solve\n> the\n> > problem or not.\n>\n> If I'm reading the patch correctly, that fixes it by failing to drop\n> unused subplans at all --- the second loop you have has no external\n> effect.\n>\n> We could, in fact, not bother with removing the no-longer-referenced\n> subplans, and it probably wouldn't be all that awful. But the intent\n> of the original patch was to save the executor startup time for such\n> subplans, so I wanted to preserve that goal if I could. The committed\n> patch seems small enough and cheap enough to be worthwhile.\n>\n Understood, thanks for replying.\n\nregards,\nRanier Vilela\n\nEm qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> I would like to ask if this alternative fix (attached) would also solve the\n> problem or not.\n\nIf I'm reading the patch correctly, that fixes it by failing to drop\nunused subplans at all --- the second loop you have has no external\neffect.\n\nWe could, in fact, not bother with removing the no-longer-referenced\nsubplans, and it probably wouldn't be all that awful. But the intent\nof the original patch was to save the executor startup time for such\nsubplans, so I wanted to preserve that goal if I could. The committed\npatch seems small enough and cheap enough to be worthwhile. Understood, thanks for replying.regards,Ranier Vilela",
"msg_date": "Wed, 15 Sep 2021 13:18:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> We could, in fact, not bother with removing the no-longer-referenced\n> subplans, and it probably wouldn't be all that awful. But the intent\n> of the original patch was to save the executor startup time for such\n> subplans, so I wanted to preserve that goal if I could.\n>\n\nI'm sorry if I'm being persistent with this issue, but I'd like to give it\none last try before I let it go\nI modified the way the subplane deletion is done and it seems to me that\nthis really happens.\n\nI ran a quick dirty test to count the remaining subplanes.\n\ni = 0;\nforeach(lc, asplan->subplans)\n{\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n Cost curcost;\n\n curcost = curplan->startup_cost + num_exec * curplan->per_call_cost;\n if (bestplan == NULL || curcost <= bestcost)\n {\n bestplan = curplan;\n bestcost = curcost;\n }\n i++;\n}\nif (bestplan != NULL)\n{\n foreach(lc, asplan->subplans)\n {\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n if (curplan != bestplan)\n lfirst(lc) = NULL;\n }\n j = 0;\n foreach(lc, asplan->subplans)\n {\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n if (curplan != NULL)\n j++;\n }\n if (j != i)\n {\n ereport(ERROR,\n (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"too many subplans: total_plans=%d, remain_plans=%d\",\ni, j)));\n }\n}\n\nexplain (costs off)\npostgres-# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: too many subplans: total_plans=2, remain_plans=1\npostgres=# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: too many subplans: total_plans=2, remain_plans=1\n\nI think that works:\n lfirst(lc) = NULL;\n\nregards,\nRanier Vilela\n\nEm qua., 15 de set. 
de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\nWe could, in fact, not bother with removing the no-longer-referenced\nsubplans, and it probably wouldn't be all that awful. But the intent\nof the original patch was to save the executor startup time for such\nsubplans, so I wanted to preserve that goal if I could. I'm sorry if I'm being persistent with this issue, but I'd like to give it one last try before I let it goI modified the way the subplane deletion is done and it seems to me that this really happens.I ran a quick dirty test to count the remaining subplanes.\ti = 0;\tforeach(lc, asplan->subplans)\t{ SubPlan *curplan = (SubPlan *) lfirst(lc); Cost\t\tcurcost; curcost = curplan->startup_cost + num_exec * curplan->per_call_cost; if (bestplan == NULL || curcost <= bestcost) { bestplan = curplan; bestcost = curcost; } i++;\t}\tif (bestplan != NULL)\t{ foreach(lc, asplan->subplans) { SubPlan *curplan = (SubPlan *) lfirst(lc); if (curplan != bestplan) lfirst(lc) = NULL; } j = 0; foreach(lc, asplan->subplans) { SubPlan *curplan = (SubPlan *) lfirst(lc); if (curplan != NULL) j++; } if (j != i) { ereport(ERROR, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg(\"too many subplans: total_plans=%d, remain_plans=%d\", i, j))); }\t}explain (costs off)postgres-# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: too many subplans: total_plans=2, remain_plans=1postgres=# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: too many subplans: total_plans=2, remain_plans=1I think that works:\n lfirst(lc) = NULL;regards,Ranier Vilela",
"msg_date": "Wed, 15 Sep 2021 15:27:00 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> We could, in fact, not bother with removing the no-longer-referenced\n>> subplans, and it probably wouldn't be all that awful. But the intent\n>> of the original patch was to save the executor startup time for such\n>> subplans, so I wanted to preserve that goal if I could.\n\n> I'm sorry if I'm being persistent with this issue, but I'd like to give it\n> one last try before I let it go\n> I modified the way the subplane deletion is done and it seems to me that\n> this really happens.\n\nIt looks like what this fragment is doing is clobbering the List\nsubstructure of the AlternativeSubPlan node itself. That's not\ngoing to make any difference, since the whole point of the exercise\nis that the AlternativeSubPlan gets cut out of the finished tree.\nBut the list that we want to modify, in order to save the \nexecutor time, is the root->glob->subplans list (which ends\nup being PlannedStmt.subplans). And that's global to the\nquery, so we can't fix it correctly on the basis of a single\nAlternativeSubPlan.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Sep 2021 14:35:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em qua., 15 de set. de 2021 às 15:35, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >> We could, in fact, not bother with removing the no-longer-referenced\n> >> subplans, and it probably wouldn't be all that awful. But the intent\n> >> of the original patch was to save the executor startup time for such\n> >> subplans, so I wanted to preserve that goal if I could.\n>\n> > I'm sorry if I'm being persistent with this issue, but I'd like to give\n> it\n> > one last try before I let it go\n> > I modified the way the subplane deletion is done and it seems to me that\n> > this really happens.\n>\n> It looks like what this fragment is doing is clobbering the List\n> substructure of the AlternativeSubPlan node itself. That's not\n> going to make any difference, since the whole point of the exercise\n> is that the AlternativeSubPlan gets cut out of the finished tree.\n> But the list that we want to modify, in order to save the\n> executor time, is the root->glob->subplans list (which ends\n> up being PlannedStmt.subplans). 
And that's global to the\n> query, so we can't fix it correctly on the basis of a single\n> AlternativeSubPlan.\n>\nOk, I can see now.\nBut this leads me to the conclusion that AlternativeSubPlan *asplan\ndoes not seem to me to be a good approach for this function, better to deal\nwith it directly:\n\"root->glob->subplans\" which, it seems, works too.\n\ni = 0;\nforeach(lc, root->glob->subplans)\n{\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n Cost curcost;\n\n curcost = curplan->startup_cost + num_exec * curplan->per_call_cost;\n if (bestplan == NULL || curcost <= bestcost)\n {\n bestplan = curplan;\n bestcost = curcost;\n }\n i++;\n}\n\nif (bestplan != NULL)\n{\n foreach(lc, root->glob->subplans)\n {\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n if (curplan != bestplan)\n lfirst(lc) = NULL;\n }\n j = 0;\n foreach(lc, root->glob->subplans)\n {\n SubPlan *curplan = (SubPlan *) lfirst(lc);\n if (curplan != NULL)\n j++;\n }\n if (j != i)\n {\n ereport(ERROR,\n (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"too many subplans: total_plans=%d,\nremain_plans=%d\", i, j)));\n }\n}\n\npostgres=# explain (costs off)\npostgres-# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: too many subplans: total_plans=2, remain_plans=1\npostgres=# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: too many subplans: total_plans=2, remain_plans=1\n\nAnyway, thank you for the explanations.\n\nregards,\nRanier Vilela\n\nEm qua., 15 de set. de 2021 às 15:35, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> We could, in fact, not bother with removing the no-longer-referenced\n>> subplans, and it probably wouldn't be all that awful. 
But the intent\n>> of the original patch was to save the executor startup time for such\n>> subplans, so I wanted to preserve that goal if I could.\n\n> I'm sorry if I'm being persistent with this issue, but I'd like to give it\n> one last try before I let it go\n> I modified the way the subplane deletion is done and it seems to me that\n> this really happens.\n\nIt looks like what this fragment is doing is clobbering the List\nsubstructure of the AlternativeSubPlan node itself. That's not\ngoing to make any difference, since the whole point of the exercise\nis that the AlternativeSubPlan gets cut out of the finished tree.\nBut the list that we want to modify, in order to save the \nexecutor time, is the root->glob->subplans list (which ends\nup being PlannedStmt.subplans). And that's global to the\nquery, so we can't fix it correctly on the basis of a single\nAlternativeSubPlan.Ok, I can see now.But this leads me to the conclusion that AlternativeSubPlan *asplan does not seem to me to be a good approach for this function, better to deal with it directly:\"root->glob->subplans\" which, it seems, works too.\ti = 0;\tforeach(lc, root->glob->subplans)\t{ SubPlan *curplan = (SubPlan *) lfirst(lc); Cost\t\tcurcost; curcost = curplan->startup_cost + num_exec * curplan->per_call_cost; if (bestplan == NULL || curcost <= bestcost) { bestplan = curplan; bestcost = curcost; } i++;\t}\tif (bestplan != NULL)\t{ foreach(lc, root->glob->subplans) { SubPlan *curplan = (SubPlan *) lfirst(lc); if (curplan != bestplan) lfirst(lc) = NULL; } j = 0; foreach(lc, root->glob->subplans) { SubPlan *curplan = (SubPlan *) lfirst(lc); if (curplan != NULL) j++; } if (j != i) { ereport(ERROR, (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED), errmsg(\"too many subplans: total_plans=%d, remain_plans=%d\", i, j))); }\t}postgres=# explain (costs off)postgres-# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: too many subplans: 
total_plans=2, remain_plans=1postgres=# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: too many subplans: total_plans=2, remain_plans=1Anyway, thank you for the explanations.regards,Ranier Vilela",
"msg_date": "Wed, 15 Sep 2021 16:16:01 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
},
{
"msg_contents": "Em qua., 15 de set. de 2021 às 16:16, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 15 de set. de 2021 às 15:35, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us>\n>> escreveu:\n>> >> We could, in fact, not bother with removing the no-longer-referenced\n>> >> subplans, and it probably wouldn't be all that awful. But the intent\n>> >> of the original patch was to save the executor startup time for such\n>> >> subplans, so I wanted to preserve that goal if I could.\n>>\n>> > I'm sorry if I'm being persistent with this issue, but I'd like to give\n>> it\n>> > one last try before I let it go\n>> > I modified the way the subplane deletion is done and it seems to me that\n>> > this really happens.\n>>\n>> It looks like what this fragment is doing is clobbering the List\n>> substructure of the AlternativeSubPlan node itself. That's not\n>> going to make any difference, since the whole point of the exercise\n>> is that the AlternativeSubPlan gets cut out of the finished tree.\n>> But the list that we want to modify, in order to save the\n>> executor time, is the root->glob->subplans list (which ends\n>> up being PlannedStmt.subplans). 
And that's global to the\n>> query, so we can't fix it correctly on the basis of a single\n>> AlternativeSubPlan.\n>>\n> Ok, I can see now.\n> But this leads me to the conclusion that AlternativeSubPlan *asplan\n> does not seem to me to be a good approach for this function, better to\n> deal with it directly:\n> \"root->glob->subplans\" which, it seems, works too.\n>\nHmm, too fast and wrong, do not work.\n\npostgres=# explain (costs off)\npostgres-# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: unrecognized node type: 13\npostgres=# select * from exists_tbl t1\npostgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2)\nor c3 < 0);\nERROR: unrecognized node type: 13\n\nregards,\nRanier Vilela\n\nEm qua., 15 de set. de 2021 às 16:16, Ranier Vilela <ranier.vf@gmail.com> escreveu:Em qua., 15 de set. de 2021 às 15:35, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Em qua., 15 de set. de 2021 às 12:00, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>> We could, in fact, not bother with removing the no-longer-referenced\n>> subplans, and it probably wouldn't be all that awful. But the intent\n>> of the original patch was to save the executor startup time for such\n>> subplans, so I wanted to preserve that goal if I could.\n\n> I'm sorry if I'm being persistent with this issue, but I'd like to give it\n> one last try before I let it go\n> I modified the way the subplane deletion is done and it seems to me that\n> this really happens.\n\nIt looks like what this fragment is doing is clobbering the List\nsubstructure of the AlternativeSubPlan node itself. That's not\ngoing to make any difference, since the whole point of the exercise\nis that the AlternativeSubPlan gets cut out of the finished tree.\nBut the list that we want to modify, in order to save the \nexecutor time, is the root->glob->subplans list (which ends\nup being PlannedStmt.subplans). 
And that's global to the\nquery, so we can't fix it correctly on the basis of a single\nAlternativeSubPlan.Ok, I can see now.But this leads me to the conclusion that AlternativeSubPlan *asplan does not seem to me to be a good approach for this function, better to deal with it directly:\"root->glob->subplans\" which, it seems, works too.Hmm, too fast and wrong, do not work.postgres=# explain (costs off)postgres-# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: unrecognized node type: 13postgres=# select * from exists_tbl t1postgres-# where (exists(select 1 from exists_tbl t2 where t1.c1 = t2.c2) or c3 < 0);ERROR: unrecognized node type: 13regards,Ranier Vilela",
"msg_date": "Wed, 15 Sep 2021 16:22:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Getting ERROR \"subplan \"SubPlan 1\" was not initialized\" in EXISTS\n subplan when using for list partition."
}
] |
[
{
"msg_contents": "I've attached the patch for including this update in our sources. I'll\napply it on master after doing some sanity checks. The announcement can be\nfound here:\n\nhttp://blog.unicode.org/2021/09/announcing-unicode-standard-version-140.html\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 14 Sep 2021 17:30:27 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Unicode 14.0.0 update"
},
{
"msg_contents": "On Tue, Sep 14, 2021 at 05:30:27PM -0400, John Naylor wrote:\n> I've attached the patch for including this update in our sources. I'll\n> apply it on master after doing some sanity checks. The announcement can be\n> found here:\n> \n> http://blog.unicode.org/2021/09/announcing-unicode-standard-version-140.html\n\nThanks for picking this up!\n--\nMichael",
"msg_date": "Wed, 15 Sep 2021 12:34:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Unicode 14.0.0 update"
}
] |
[
{
"msg_contents": "Hi\n\nAttached a small fix to remove double check when field_name is not NULL in be-secure-openssl.c.\nThe double check is introduced in 13cfa02f7 for \"Improve error handling in backend OpenSSL implementation\".\n\nRegards,\nTang",
"msg_date": "Wed, 15 Sep 2021 08:06:37 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Remove double check when field_name is not NULL in\n be-secure-openssl.c"
},
{
"msg_contents": "> On 15 Sep 2021, at 10:06, tanghy.fnst@fujitsu.com wrote:\n\n> Attached a small fix to remove double check when field_name is not NULL in be-secure-openssl.c.\n> The double check is introduced in 13cfa02f7 for \"Improve error handling in backend OpenSSL implementation\".\n\nThe proposal removes a second == NULL check on field_name in the case where\nOBJ_nid2sn() returns an ASN1_OBJECT. This is not in a hot path, and the ASM\ngenerated is equal under optimization levels so I don't see the value in the\ncode churn and the potential for collisions during backpatching around here.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n",
"msg_date": "Wed, 15 Sep 2021 11:53:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove double check when field_name is not NULL in\n be-secure-openssl.c"
},
{
"msg_contents": "On Wednesday, September 15, 2021 6:54 PM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>The proposal removes a second == NULL check on field_name in the case where\n>OBJ_nid2sn() returns an ASN1_OBJECT. This is not in a hot path, and the ASM\n>generated is equal under optimization levels so I don't see the value in the\n>code churn and the potential for collisions during backpatching around here.\n\nThanks for your kindly explanation.\nGot it.\n\nRegards,\nTang\n\n\n",
"msg_date": "Wed, 15 Sep 2021 10:01:20 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Remove double check when field_name is not NULL in\n be-secure-openssl.c"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nAlphabetical order of triggers sometimes makes me write a_Recalc or z_Calc\nto be sure it´ll be the first or the last trigger with same event of that\ntable\n\nOracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\nexecution order. Firebird has POSITION, which I like it more.\n\nWhat do you think about it, do you know an old/abandoned patch that was not\ncommitted ?\n\nCREATE TRIGGER RECALC_THAT BEFORE UPDATE POSITION 1 ON ORDERS...\nCREATE TRIGGER DO_OTHER_CALC BEFORE UPDATE POSITION 2 ON ORDERS...\n\nRegards,\nMarcos\n\nHi Hackers, Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc to be sure it´ll be the first or the last trigger with same event of that table \n\nOracle and SQL Server have FOLLOWS and PRECEDES when defining trigger execution order. Firebird has POSITION, which I like it more.What do you think about it, do you know an old/abandoned patch that was not committed ?CREATE TRIGGER RECALC_THAT BEFORE UPDATE POSITION 1 ON ORDERS...CREATE TRIGGER DO_OTHER_CALC BEFORE UPDATE POSITION 2 ON ORDERS...Regards, Marcos",
"msg_date": "Wed, 15 Sep 2021 07:28:37 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Trigger position"
},
{
"msg_contents": "> On 15 Sep 2021, at 12:28, Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> CREATE TRIGGER RECALC_THAT BEFORE UPDATE POSITION 1 ON ORDERS...\n> CREATE TRIGGER DO_OTHER_CALC BEFORE UPDATE POSITION 2 ON ORDERS...\n\nFor those not familiar with Firebird: triggers are executing in alphabetical\norder within a position number, so it multiple triggers are defined for\nPOSITION 1 then they are individually executed alphabetically.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 15 Sep 2021 12:59:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "Marcos Pegoraro <marcos@f10.com.br> writes:\n> Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc\n> to be sure it´ll be the first or the last trigger with same event of that\n> table\n\n> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n> execution order. Firebird has POSITION, which I like it more.\n\nColor me skeptical: doesn't that introduce more complication without\nfundamentally solving anything? You still don't know which position\nnumbers other triggers have used, so it seems like this is just a\ndifferent way to spell the same problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Sep 2021 07:40:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "This problem can raise ... there is a trigger foo using position 1, please\nchoose another\n\nAtenciosamente,\n\n\n\n\nEm qua., 15 de set. de 2021 às 07:59, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 15 Sep 2021, at 12:28, Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> > CREATE TRIGGER RECALC_THAT BEFORE UPDATE POSITION 1 ON ORDERS...\n> > CREATE TRIGGER DO_OTHER_CALC BEFORE UPDATE POSITION 2 ON ORDERS...\n>\n> For those not familiar with Firebird: triggers are executing in\n> alphabetical\n> order within a position number, so it multiple triggers are defined for\n> POSITION 1 then they are individually executed alphabetically.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n\nThis problem can raise ... there is a trigger foo using position 1, please choose anotherAtenciosamente, Em qua., 15 de set. de 2021 às 07:59, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 15 Sep 2021, at 12:28, Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> CREATE TRIGGER RECALC_THAT BEFORE UPDATE POSITION 1 ON ORDERS...\n> CREATE TRIGGER DO_OTHER_CALC BEFORE UPDATE POSITION 2 ON ORDERS...\n\nFor those not familiar with Firebird: triggers are executing in alphabetical\norder within a position number, so it multiple triggers are defined for\nPOSITION 1 then they are individually executed alphabetically.\n\n--\nDaniel Gustafsson https://vmware.com/",
"msg_date": "Wed, 15 Sep 2021 09:10:36 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "This way would be interesting for those are migrating from these databases\ntoo. But ok, I´ll forget it.\n\nEm qua., 15 de set. de 2021 às 08:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Marcos Pegoraro <marcos@f10.com.br> writes:\n> > Alphabetical order of triggers sometimes makes me write a_Recalc or\n> z_Calc\n> > to be sure it´ll be the first or the last trigger with same event of that\n> > table\n>\n> > Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n> > execution order. Firebird has POSITION, which I like it more.\n>\n> Color me skeptical: doesn't that introduce more complication without\n> fundamentally solving anything? You still don't know which position\n> numbers other triggers have used, so it seems like this is just a\n> different way to spell the same problem.\n>\n> regards, tom lane\n>\n\nThis way would be interesting for those are migrating from these databases too. But ok, I´ll forget it.Em qua., 15 de set. de 2021 às 08:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Marcos Pegoraro <marcos@f10.com.br> writes:\n> Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc\n> to be sure it´ll be the first or the last trigger with same event of that\n> table\n\n> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n> execution order. Firebird has POSITION, which I like it more.\n\nColor me skeptical: doesn't that introduce more complication without\nfundamentally solving anything? You still don't know which position\nnumbers other triggers have used, so it seems like this is just a\ndifferent way to spell the same problem.\n\n regards, tom lane",
"msg_date": "Wed, 15 Sep 2021 09:12:07 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "On 9/15/21 1:40 PM, Tom Lane wrote:\n> Marcos Pegoraro <marcos@f10.com.br> writes:\n>> Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc\n>> to be sure it´ll be the first or the last trigger with same event of that\n>> table\n> \n>> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n>> execution order. Firebird has POSITION, which I like it more.\n> \n> Color me skeptical: doesn't that introduce more complication without\n> fundamentally solving anything? You still don't know which position\n> numbers other triggers have used, so it seems like this is just a\n> different way to spell the same problem.\n\nI guess one advantage is that it would make the intent of the DDL author \nmore clear to a reader and that it also makes it more clear to people \nnew to PostgreSQL that trigger order is something that is important to \nreason about.\n\nIf those small advantages are worth the complication is another question \n(I am skpetical), but if we would implement this I prefer the Firebird \nsolution over the Oralce/MSSQL solution since the Firebird solution is \nsimpler while achieving the same thing plus that the Firefird solution \nseems like it would be obviously backwards compatible with our current \nsolution.\n\nAndreas\n\n\n",
"msg_date": "Wed, 15 Sep 2021 14:35:30 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "Correct, we need a field tgposition on pg_trigger and when it´s null we\nfollow normal ordering\nselect * from pg_trigger where tgrelid = X and tgtype = Y order by\ntgposition nulls last, tgname\n\nregards,\nMarcos\n\nEm qua., 15 de set. de 2021 às 09:35, Andreas Karlsson <andreas@proxel.se>\nescreveu:\n\n> On 9/15/21 1:40 PM, Tom Lane wrote:\n> > Marcos Pegoraro <marcos@f10.com.br> writes:\n> >> Alphabetical order of triggers sometimes makes me write a_Recalc or\n> z_Calc\n> >> to be sure it´ll be the first or the last trigger with same event of\n> that\n> >> table\n> >\n> >> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n> >> execution order. Firebird has POSITION, which I like it more.\n> >\n> > Color me skeptical: doesn't that introduce more complication without\n> > fundamentally solving anything? You still don't know which position\n> > numbers other triggers have used, so it seems like this is just a\n> > different way to spell the same problem.\n>\n> I guess one advantage is that it would make the intent of the DDL author\n> more clear to a reader and that it also makes it more clear to people\n> new to PostgreSQL that trigger order is something that is important to\n> reason about.\n>\n> If those small advantages are worth the complication is another question\n> (I am skpetical), but if we would implement this I prefer the Firebird\n> solution over the Oralce/MSSQL solution since the Firebird solution is\n> simpler while achieving the same thing plus that the Firefird solution\n> seems like it would be obviously backwards compatible with our current\n> solution.\n>\n> Andreas\n>\n\nCorrect, we need a field tgposition on pg_trigger and when it´s null we follow normal orderingselect * from pg_trigger where tgrelid = X and tgtype = Y order by tgposition nulls last, tgnameregards, MarcosEm qua., 15 de set. 
de 2021 às 09:35, Andreas Karlsson <andreas@proxel.se> escreveu:On 9/15/21 1:40 PM, Tom Lane wrote:\n> Marcos Pegoraro <marcos@f10.com.br> writes:\n>> Alphabetical order of triggers sometimes makes me write a_Recalc or z_Calc\n>> to be sure it´ll be the first or the last trigger with same event of that\n>> table\n> \n>> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n>> execution order. Firebird has POSITION, which I like it more.\n> \n> Color me skeptical: doesn't that introduce more complication without\n> fundamentally solving anything? You still don't know which position\n> numbers other triggers have used, so it seems like this is just a\n> different way to spell the same problem.\n\nI guess one advantage is that it would make the intent of the DDL author \nmore clear to a reader and that it also makes it more clear to people \nnew to PostgreSQL that trigger order is something that is important to \nreason about.\n\nIf those small advantages are worth the complication is another question \n(I am skpetical), but if we would implement this I prefer the Firebird \nsolution over the Oralce/MSSQL solution since the Firebird solution is \nsimpler while achieving the same thing plus that the Firefird solution \nseems like it would be obviously backwards compatible with our current \nsolution.\n\nAndreas",
"msg_date": "Wed, 15 Sep 2021 09:49:17 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "On 2021-Sep-15, Marcos Pegoraro wrote:\n\n> This problem can raise ... there is a trigger foo using position 1, please\n> choose another\n\nThis is reminiscent of the old BASIC programming language, where you\neventually learn to choose line numbers that aren't consecutive, so that\nif you later have to add lines in between you have some room to do so.\n(This happens when modifying a program sufficient times you are forced\nto renumber old lines where you want to add new lines that no longer fit\nin the sequence.) It's a pretty bad system.\n\nIn a computer system, alphabet letters are just a different way to\npresent numbers, so you just choose ASCII letters that match what you\nwant. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n\"AB_nope_this_is_second\" and you'll be fine; you can do\n\"AAB_oops_really_second\" afterwards, and so on. The integer numbering\nsystem doesn't seem very useful/flexible when seen in this light.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)\n\n\n",
"msg_date": "Wed, 15 Sep 2021 10:51:12 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "When I was writing my initial email I was remembering exactly this, my\nfirst basic programs.\nI would like this feature more because I sometimes have a mess of triggers\nwhen this trigger function is fired on several tables and it needs to be\nthe first on this table but not on that table. And usually trigger names\nhave same names as their functions, so for this table I have to have a\ndifferent name just to be fired first.\n\n\nEm qua., 15 de set. de 2021 às 10:51, Alvaro Herrera <\nalvherre@alvh.no-ip.org> escreveu:\n\n> On 2021-Sep-15, Marcos Pegoraro wrote:\n>\n> > This problem can raise ... there is a trigger foo using position 1,\n> please\n> > choose another\n>\n> This is reminiscent of the old BASIC programming language, where you\n> eventually learn to choose line numbers that aren't consecutive, so that\n> if you later have to add lines in between you have some room to do so.\n> (This happens when modifying a program sufficient times you are forced\n> to renumber old lines where you want to add new lines that no longer fit\n> in the sequence.) It's a pretty bad system.\n>\n> In a computer system, alphabet letters are just a different way to\n> present numbers, so you just choose ASCII letters that match what you\n> want. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n> \"AB_nope_this_is_second\" and you'll be fine; you can do\n> \"AAB_oops_really_second\" afterwards, and so on. The integer numbering\n> system doesn't seem very useful/flexible when seen in this light.\n>\n> --\n> Álvaro Herrera Valdivia, Chile —\n> https://www.EnterpriseDB.com/\n> \"Nunca confiaré en un traidor. 
Ni siquiera si el traidor lo he creado yo\"\n> (Barón Vladimir Harkonnen)\n>\n\nWhen I was writing my initial email I was remembering exactly this, my first basic programs.I would like this feature more because I sometimes have a mess of triggers when this trigger function is fired on several tables and it needs to be the first on this table but not on that table. And usually trigger names have same names as their functions, so for this table I have to have a different name just to be fired first.Em qua., 15 de set. de 2021 às 10:51, Alvaro Herrera <alvherre@alvh.no-ip.org> escreveu:On 2021-Sep-15, Marcos Pegoraro wrote:\n\n> This problem can raise ... there is a trigger foo using position 1, please\n> choose another\n\nThis is reminiscent of the old BASIC programming language, where you\neventually learn to choose line numbers that aren't consecutive, so that\nif you later have to add lines in between you have some room to do so.\n(This happens when modifying a program sufficient times you are forced\nto renumber old lines where you want to add new lines that no longer fit\nin the sequence.) It's a pretty bad system.\n\nIn a computer system, alphabet letters are just a different way to\npresent numbers, so you just choose ASCII letters that match what you\nwant. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n\"AB_nope_this_is_second\" and you'll be fine; you can do\n\"AAB_oops_really_second\" afterwards, and so on. The integer numbering\nsystem doesn't seem very useful/flexible when seen in this light.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)",
"msg_date": "Wed, 15 Sep 2021 12:10:34 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "On Wed, Sep 15, 2021, at 10:51 AM, Alvaro Herrera wrote:\n> In a computer system, alphabet letters are just a different way to\n> present numbers, so you just choose ASCII letters that match what you\n> want. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n> \"AB_nope_this_is_second\" and you'll be fine; you can do\n> \"AAB_oops_really_second\" afterwards, and so on. The integer numbering\n> system doesn't seem very useful/flexible when seen in this light.\n... or renumber all trigger positions in a single transaction. I agree that\nletters are more flexible than numbers but some users are number-oriented.\n\nI'm afraid an extra mechanism to determine the order to fire triggers will\nconfuse programmers if someone decides to use both. Besides that, we have to\nexpend a few cycles to determine the exact trigger execution order.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Sep 15, 2021, at 10:51 AM, Alvaro Herrera wrote:In a computer system, alphabet letters are just a different way topresent numbers, so you just choose ASCII letters that match what youwant. You can use \"AA_first_trigger\", \"BB_second_trigger\",\"AB_nope_this_is_second\" and you'll be fine; you can do\"AAB_oops_really_second\" afterwards, and so on. The integer numberingsystem doesn't seem very useful/flexible when seen in this light.... or renumber all trigger positions in a single transaction. I agree thatletters are more flexible than numbers but some users are number-oriented.I'm afraid an extra mechanism to determine the order to fire triggers willconfuse programmers if someone decides to use both. Besides that, we have toexpend a few cycles to determine the exact trigger execution order.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 15 Sep 2021 12:13:29 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
    "msg_contents": "st 15. 9. 2021 v 17:14 odesílatel Euler Taveira <euler@eulerto.com> napsal:\n\n> On Wed, Sep 15, 2021, at 10:51 AM, Alvaro Herrera wrote:\n>\n> In a computer system, alphabet letters are just a different way to\n> present numbers, so you just choose ASCII letters that match what you\n> want. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n> \"AB_nope_this_is_second\" and you'll be fine; you can do\n> \"AAB_oops_really_second\" afterwards, and so on. The integer numbering\n> system doesn't seem very useful/flexible when seen in this light.\n>\n> ... or renumber all trigger positions in a single transaction. I agree that\n> letters are more flexible than numbers but some users are number-oriented.\n>\n> I'm afraid an extra mechanism to determine the order to fire triggers will\n> confuse programmers if someone decides to use both. Besides that, we have\n> to\n> expend a few cycles to determine the exact trigger execution order.\n>\n\nTriggers that depend on execution order are pretty hell. It is a clean\nsignal of some crazy design and overusing of triggers.\n\nPersonally I prefer to don't have any similar feature just as a strong\nsignal for developers - Don't do this. Unfortunately (but good for\nbusiness) . A lot of migrated applications from Oracle use this terrible\nstyle. I like PL/SQL, but the most ugly code that I saw was in PL/SQL. So\nthis feature can be necessary for migrations from Oracle, but I don't see\nreasons to be more visible.\n\nRegards\n\nPavel\n\n\n>\n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n>\n>",
"msg_date": "Wed, 15 Sep 2021 17:23:34 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "On 2021-Sep-15, Pavel Stehule wrote:\n\n> Triggers that depend on execution order are pretty hell. It is a clean\n> signal of some crazy design and overusing of triggers.\n\nYeah. The only case I've seen where order of triggers was important\n(beyond the \"before\" / \"after\" classification) is where you have\nsomething that you need to ensure runs before the FK triggers.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"XML!\" Exclaimed C++. \"What are you doing here? You're not a programming\nlanguage.\"\n\"Tell that to the people who use me,\" said XML.\nhttps://burningbird.net/the-parable-of-the-languages/\n\n\n",
"msg_date": "Wed, 15 Sep 2021 12:30:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
"msg_contents": "On 09/15/21 06:28, Marcos Pegoraro wrote:\n\n> Oracle and SQL Server have FOLLOWS and PRECEDES when defining trigger\n> execution order. Firebird has POSITION, which I like it more.\n\nBetween those two, I think my vote would come down the other way,\nassuming FOLLOWS and PRECEDES work the way I am guessing they do:\nyou would be specifying the firing order between triggers whose\nrelative order you care about, and leaving it unspecified between\ntriggers whose relative order doesn't matter.\n\nI find that an appealing general solution that allows the machine\nto find a satisfactory order, and is less fussy than trying to manually\ncreate a total order for all of the triggers (even those whose relative\norder may not matter) by arbitrarily fussing with names or integers.\n\nIt resembles similar constructs in lots of other things, like the way\ngrammar precedences are specified [0] in SDF.\n\nIt may be objected that this makes a trigger order that is less\nfully determined in advance, and can lead to issues that are harder\nto reason out if you forgot to specify a relative order that matters.\n\nBut balancing that is that it may be easier in general to reason about\njust the relative orders that matter, undistracted by any that don't.\nIn some settings, leaving unspecified the ones that don't may increase\nopportunities for optimization. (Not that I have any specific optimizations\nin mind for this setting.)\n\nOne could even think about a test mode that would deliberately randomize\nthe relative order between triggers where it hasn't been specified.\n\nRegards,\n-Chap\n\n\n[0]\nhttps://www.metaborg.org/en/latest/source/langdev/meta/lang/sdf3/reference.html#priorities\n\n\n",
"msg_date": "Wed, 15 Sep 2021 11:31:16 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Trigger position"
},
{
    "msg_contents": "We can run triggers using position only, this way we don´t have these few\ncycles to determine ordering.\nOn creation time we populate position, even if it's not set, so for the\nfirst time position will match trigger names. When user changes a trigger\nposition we sum 1 to the followers.\n\nregards,\nMarcos\n\nEm qua., 15 de set. de 2021 às 12:13, Euler Taveira <euler@eulerto.com>\nescreveu:\n\n> On Wed, Sep 15, 2021, at 10:51 AM, Alvaro Herrera wrote:\n>\n> In a computer system, alphabet letters are just a different way to\n> present numbers, so you just choose ASCII letters that match what you\n> want. You can use \"AA_first_trigger\", \"BB_second_trigger\",\n> \"AB_nope_this_is_second\" and you'll be fine; you can do\n> \"AAB_oops_really_second\" afterwards, and so on. The integer numbering\n> system doesn't seem very useful/flexible when seen in this light.\n>\n> ... or renumber all trigger positions in a single transaction. I agree that\n> letters are more flexible than numbers but some users are number-oriented.\n>\n> I'm afraid an extra mechanism to determine the order to fire triggers will\n> confuse programmers if someone decides to use both. Besides that, we have\n> to\n> expend a few cycles to determine the exact trigger execution order.\n>\n>\n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n>\n>",
"msg_date": "Wed, 15 Sep 2021 12:55:57 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: Trigger position"
}
]
[
{
    "msg_contents": "\nGreetings\n\n\nThe Release Management Team (Michael Paquier, Peter Geoghegan and\nmyself) in consultation with the release team proposes the following\nrelease schedule:\n\n* PostgreSQL 14 Release Candidate 1 (RC1) will be released on September 23, 2021.\n\n* In the absence of any critical issues, PostgreSQL 14 will become generally available on September 30, 2021.\n\nAll commits and fixes intended for this release should be made before September 23, 2021 AoE.\n\nWe would like to thank all the contributors, reviewers and committers for their work on this release, and for making this a fairly smooth process.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 15 Sep 2021 08:56:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 8:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> The Release Management Team (Michael Paquier, Peter Geoghegan and\n> myself) in consultation with the release team proposes the following\n> release schedule:\n>\n> * PostgreSQL 14 Release Candidate 1 (RC1) will be released on September 23, 2021.\n>\n> * In the absence of any critical issues, PostgreSQL 14 will become generally available on September 30, 2021.\n>\n> All commits and fixes intended for this release should be made before September 23, 2021 AoE.\n\nPresumably this needs to be a couple days earlier, right? Tom would\nprobably stamp on Monday, so I guess fixes should be in by Sunday at\nthe very latest to allow for a full buildfarm cycle.\n\nAlso, I really like the fact that we're looking to release in\nSeptember! I think that's nicer than when it slips into October.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Sep 2021 10:20:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "\nOn 9/15/21 10:20 AM, Robert Haas wrote:\n> On Wed, Sep 15, 2021 at 8:56 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> The Release Management Team (Michael Paquier, Peter Geoghegan and\n>> myself) in consultation with the release team proposes the following\n>> release schedule:\n>>\n>> * PostgreSQL 14 Release Candidate 1 (RC1) will be released on September 23, 2021.\n>>\n>> * In the absence of any critical issues, PostgreSQL 14 will become generally available on September 30, 2021.\n>>\n>> All commits and fixes intended for this release should be made before September 23, 2021 AoE.\n> Presumably this needs to be a couple days earlier, right? Tom would\n> probably stamp on Monday, so I guess fixes should be in by Sunday at\n> the very latest to allow for a full buildfarm cycle.\n\n\nGood point. Let's say Sunday 19th. There are in fact very few open\nitems, so the release 14 branch should already be fairly quiet.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 15 Sep 2021 10:33:56 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The Release Management Team (Michael Paquier, Peter Geoghegan and\n> myself) in consultation with the release team proposes the following\n> release schedule:\n> * PostgreSQL 14 Release Candidate 1 (RC1) will be released on September 23, 2021.\n> * In the absence of any critical issues, PostgreSQL 14 will become generally available on September 30, 2021.\n\nWe don't yet have a list-of-major-features for the v14 release notes.\nAnybody care to propose one?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 18 Sep 2021 13:37:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
    "msg_contents": "On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n> We don't yet have a list-of-major-features for the v14 release notes.\n> Anybody care to propose one?\n\n . Allow extended statistics on column expressions;\n . Memoize node which can improve speed of nested loop joins;\n . Allow use of LZ4 compression for faster access to TOASTed fields;\n . JSONB and H-store types may be subscripted, as may be participating data types provided by extensions.\n . Many improvements to performance of VACUUM;\n\nMaybe these??\nImprove the performance of updates/deletes on partitioned tables when only a few partitions are affected (Amit Langote, Tom Lane)\nAdd SQL-standard SEARCH and CYCLE clauses for common table expressions (Peter Eisentraut)\nAllow REINDEX to process all child tables or indexes of a partitioned relation (Justin Pryzby, Michael Paquier)\n\nBTW I wondered if this should be mentioned as an incompatible change:\n\ncommit 3d351d916b20534f973eda760cde17d96545d4c4\n    Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 19 Sep 2021 11:32:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On 9/19/21 12:32 PM, Justin Pryzby wrote:\n> On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n>> We don't yet have a list-of-major-features for the v14 release notes.\n>> Anybody care to propose one?\n> \n> . Allow extended statistics on column expressions;\n> . Memoize node which can improve speed of nested loop joins;\n> . Allow use of LZ4 compression for faster access to TOASTed fields;\n> . JSONB and H-store types may be subscripted, as may be participating data types provided by extensions.\n> . Many improvements to performance of VACUUM;\n> \n> Maybe these??\n\nI would propose a few different ones. I'm looking at the overall breadth\nof user impact as I propose these and the reactions I've seen in the field.\n\n- General performance improvements for databases with multiple\nconnections (the MVCC snapshot work).\n\n- The reduction in bloat on frequently updated B-trees; that was a\nlongstanding complaint against PostgreSQL that was resolved.\n\n- I agree with the JSON improvements; I'd bucket this in data types and\ninclude the support of multiranges.\n\n- Logical decoding / replication received some significant performance\nimprovements\n\n- Many improvements in query parallelism. One that stands out is how\nparallel queries can be leveraged using FDWs now, in particular the\npostgres_fdw.\n\n- I agree with VACUUM suggestion as well.\n\nI can try proposing some wording on this in a bit; I'm working on the\noverdue draft of the press release, and thought I'd chime in here first.\n\nThanks,\n\nJonathan",
"msg_date": "Sun, 19 Sep 2021 17:45:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 3:15 AM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> On 9/19/21 12:32 PM, Justin Pryzby wrote:\n> > On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n> >> We don't yet have a list-of-major-features for the v14 release notes.\n> >> Anybody care to propose one?\n> >\n> > . Allow extended statistics on column expressions;\n> > . Memoize node which can improve speed of nested loop joins;\n> > . Allow use of LZ4 compression for faster access to TOASTed fields;\n> > . JSONB and H-store types may be subscripted, as may be participating data types provided by extensions.\n> > . Many improvements to performance of VACUUM;\n> >\n> > Maybe these??\n>\n> I would propose a few different ones. I'm looking at the overall breadth\n> of user impact as I propose these and the reactions I've seen in the field.\n>\n> - General performance improvements for databases with multiple\n> connections (the MVCC snapshot work).\n>\n> - The reduction in bloat on frequently updated B-trees; that was a\n> longstanding complaint against PostgreSQL that was resolved.\n>\n> - I agree with the JSON improvements; I'd bucket this in data types and\n> include the support of multiranges.\n>\n> - Logical decoding / replication received some significant performance\n> improvements\n>\n> - Many improvements in query parallelism. One that stands out is how\n> parallel queries can be leveraged using FDWs now, in particular the\n> postgres_fdw.\n>\n> - I agree with VACUUM suggestion as well.\n>\n\n+1 to this list. One enhancement which we might want to consider is:\nImprove the performance of updates/deletes on partitioned tables when\nonly a few partitions are affected (Amit Langote, Tom Lane)\n\nI think this will be quite useful for customers using partitions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Sep 2021 08:15:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
    "msg_contents": "Observability-related improvements are also good and very important for the\nfuture of DBA operations -- compute_query_id, new pg_stat_**, etc.\n\nThings like new knob idle_session_timeout and restore_command change not\nrequiring a restart will be very noticeable too.\n\nOn Sun, Sep 19, 2021 at 2:45 PM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 9/19/21 12:32 PM, Justin Pryzby wrote:\n> > On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n> >> We don't yet have a list-of-major-features for the v14 release notes.\n> >> Anybody care to propose one?\n> >\n> > . Allow extended statistics on column expressions;\n> > . Memoize node which can improve speed of nested loop joins;\n> > . Allow use of LZ4 compression for faster access to TOASTed fields;\n> > . JSONB and H-store types may be subscripted, as may be participating\n> data types provided by extensions.\n> > . Many improvements to performance of VACUUM;\n> >\n> > Maybe these??\n>\n> I would propose a few different ones. I'm looking at the overall breadth\n> of user impact as I propose these and the reactions I've seen in the field.\n>\n> - General performance improvements for databases with multiple\n> connections (the MVCC snapshot work).\n>\n> - The reduction in bloat on frequently updated B-trees; that was a\n> longstanding complaint against PostgreSQL that was resolved.\n>\n> - I agree with the JSON improvements; I'd bucket this in data types and\n> include the support of multiranges.\n>\n> - Logical decoding / replication received some significant performance\n> improvements\n>\n> - Many improvements in query parallelism. One that stands out is how\n> parallel queries can be leveraged using FDWs now, in particular the\n> postgres_fdw.\n>\n> - I agree with VACUUM suggestion as well.\n>\n> I can try proposing some wording on this in a bit; I'm working on the\n> overdue draft of the press release, and thought I'd chime in here first.\n>\n> Thanks,\n>\n> Jonathan\n>\n>",
"msg_date": "Sun, 19 Sep 2021 23:33:18 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On 9/20/21 2:33 AM, Nikolay Samokhvalov wrote:\n> Observability-related improvements are also good and very important for\n> the future of DBA operations -- compute_query_id, new pg_stat_**, etc.\n> \n> Things like new knob idle_session_timeout and restore_command change not\n> requiring a restart will be very noticeable too.\n\nI agree on the observability enhancements (the PR draft gives a bunch of\ncoverage on this) and the usefulness on the knobs.\n\nI think this also highlights that there are a lot of helpful features in\nPostgreSQL 14 -- it may be tough to distill them all down into a list\nfor the release notes themselves. I think typically we try pick 5-7\nfeatures to highlight, and we're at about 10 or so proposed.\n\nOn the flip side and going off-script, do we need to select only a few\nfeatures in the release notes? We can let the press release provide the\ngeneral highlights and use that as a spring board to pick out particular\nfeatures.\n\nThanks,\n\nJonathan",
"msg_date": "Mon, 20 Sep 2021 22:23:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "\nOn 21/09/21 14:23, Jonathan S. Katz wrote:\n> On 9/20/21 2:33 AM, Nikolay Samokhvalov wrote:\n>> Observability-related improvements are also good and very important for\n>> the future of DBA operations -- compute_query_id, new pg_stat_**, etc.\n>>\n>> Things like new knob idle_session_timeout and restore_command change not\n>> requiring a restart will be very noticeable too.\n> I agree on the observability enhancements (the PR draft gives a bunch of\n> coverage on this) and the usefulness on the knobs.\n>\n> I think this also highlights that there are a lot of helpful features in\n> PostgreSQL 14 -- it may be tough to distill them all down into a list\n> for the release notes themselves. I think typically we try pick 5-7\n> features to highlight, and we're at about 10 or so proposed.\n>\n> On the flip side and going off-script, do we need to select only a few\n> features in the release notes? We can let the press release provide the\n> general highlights and use that as a spring board to pick out particular\n> features.\n>\n> Thanks,\n>\n> Jonathan\n>\nI suggest that if there are 7 or more, then possibly you should group \nthem under 2 or 3 headings.\n\nThat way it will not look quite so intimidating, and people have a \nframework to give them perspective. Also makes it easier for people to \nfocus on the highlights that they might consider the most important to \nthemselves.\n\n\nCheers,\nGavin\n\n\n",
"msg_date": "Tue, 21 Sep 2021 18:49:36 +1200",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/19/21 12:32 PM, Justin Pryzby wrote:\n>> On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n>>> We don't yet have a list-of-major-features for the v14 release notes.\n>>> Anybody care to propose one?\n\n> I can try proposing some wording on this in a bit; I'm working on the\n> overdue draft of the press release, and thought I'd chime in here first.\n\nI looked over Jonathan's draft press release [1] and tried to boil it down\nto our usual ten-or-so bullet points for the release notes' introductory\nparagraph. I ended up with this (didn't bother with markup yet):\n\n-----\nStored procedures can now return data via OUT parameters.\n\nThe SQL-standard SEARCH and CYCLE options for common table expressions\nhave been implemented.\n\nRange types have been extended by adding multiranges, which allow\nrepresentation of noncontiguous data ranges.\n\nSubscripting can now be applied to any data type for which it is a useful\nnotation, not only arrays. In this release, JSONB and hstore have gained\nsubscripting operators.\n\nNumerous performance improvements have been made for parallel queries,\nheavily-concurrent workloads, partitioned tables, logical replication, and\nvacuuming. Notably, foreign data wrappers can now make use of query\nparallelism.\n\nB-tree index updates are managed more efficiently, reducing index bloat.\n\nExtended statistics can now be collected on expressions, allowing\nbetter planning results for complex queries.\n\nlibpq now has the ability to pipeline multiple queries, which can boost\nthroughput over high-latency connections.\n\nTOAST data can optionally be compressed with LZ4 instead of the traditional\npglz algorithm.\n-----\n\nI'm not entirely sure that the TOAST item should make the cut,\nbut I feel fairly good about the rest of this list. 
Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/c1e72deb-8f8b-694a-1dc1-12ce671f8b8f%40postgresql.org\n\n\n",
"msg_date": "Wed, 22 Sep 2021 11:12:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 9/19/21 12:32 PM, Justin Pryzby wrote:\n> >> On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n> >>> We don't yet have a list-of-major-features for the v14 release notes.\n> >>> Anybody care to propose one?\n>\n> > I can try proposing some wording on this in a bit; I'm working on the\n> > overdue draft of the press release, and thought I'd chime in here first.\n>\n> I looked over Jonathan's draft press release [1] and tried to boil it down\n> to our usual ten-or-so bullet points for the release notes' introductory\n> paragraph. I ended up with this (didn't bother with markup yet):\n>\n> -----\n> Stored procedures can now return data via OUT parameters.\n>\n> The SQL-standard SEARCH and CYCLE options for common table expressions\n> have been implemented.\n>\n> Range types have been extended by adding multiranges, which allow\n> representation of noncontiguous data ranges.\n>\n> Subscripting can now be applied to any data type for which it is a useful\n> notation, not only arrays. In this release, JSONB and hstore have gained\n> subscripting operators.\n>\n> Numerous performance improvements have been made for parallel queries,\n> heavily-concurrent workloads, partitioned tables, logical replication, and\n> vacuuming. 
Notably, foreign data wrappers can now make use of query\n> parallelism.\n\n\"foreign data wrappers and stored procedures/functions\" maybe?\n\n> B-tree index updates are managed more efficiently, reducing index bloat.\n>\n> Extended statistics can now be collected on expressions, allowing\n> better planning results for complex queries.\n>\n> libpq now has the ability to pipeline multiple queries, which can boost\n> throughput over high-latency connections.\n>\n> TOAST data can optionally be compressed with LZ4 instead of the traditional\n> pglz algorithm.\n> -----\n>\n> I'm not entirely sure that the TOAST item should make the cut,\n\nI think it should be t here.\n\n> but I feel fairly good about the rest of this list. Thoughts?\n\nI have a feeling emergency mode vacuum fits on that list. Not in the\npress release, but in the major features list of the release notes.\n\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 22 Sep 2021 17:15:06 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
    "msg_contents": "Hello,\n\n> Stored procedures can now return data via OUT parameters.\n>\n> The SQL-standard SEARCH and CYCLE options for common table expressions\n> have been implemented.\n\nI think that from the application developer point of view very important feature:\n* Allow SQL-language functions and procedures to use SQL-standard function bodies\n\nCompiling query, tracking dependencies - very important.\n\nPavel Luzanov\nPostgres Professional: https://postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 22 Sep 2021 18:59:12 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On 9/22/21 11:15 AM, Magnus Hagander wrote:\n> On Wed, Sep 22, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>>> On 9/19/21 12:32 PM, Justin Pryzby wrote:\n>>>> On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n>>>>> We don't yet have a list-of-major-features for the v14 release notes.\n>>>>> Anybody care to propose one?\n>>\n>>> I can try proposing some wording on this in a bit; I'm working on the\n>>> overdue draft of the press release, and thought I'd chime in here first.\n>>\n>> I looked over Jonathan's draft press release [1] and tried to boil it down\n>> to our usual ten-or-so bullet points for the release notes' introductory\n>> paragraph. I ended up with this (didn't bother with markup yet):\n>>\n>> -----\n>> Stored procedures can now return data via OUT parameters.\n>>\n>> The SQL-standard SEARCH and CYCLE options for common table expressions\n>> have been implemented.\n>>\n>> Range types have been extended by adding multiranges, which allow\n>> representation of noncontiguous data ranges.\n>>\n>> Subscripting can now be applied to any data type for which it is a useful\n>> notation, not only arrays. In this release, JSONB and hstore have gained\n>> subscripting operators.\n>>\n>> Numerous performance improvements have been made for parallel queries,\n>> heavily-concurrent workloads, partitioned tables, logical replication, and\n>> vacuuming. 
Notably, foreign data wrappers can now make use of query\n>> parallelism.\n> \n> \"foreign data wrappers and stored procedures/functions\" maybe?\n\n+1\n\n>> B-tree index updates are managed more efficiently, reducing index bloat.\n>>\n>> Extended statistics can now be collected on expressions, allowing\n>> better planning results for complex queries.\n>>\n>> libpq now has the ability to pipeline multiple queries, which can boost\n>> throughput over high-latency connections.\n>>\n>> TOAST data can optionally be compressed with LZ4 instead of the traditional\n>> pglz algorithm.\n>> -----\n>>\n>> I'm not entirely sure that the TOAST item should make the cut,\n> \n> I think it should be t here.\n\nLeaning towards keeping it. If we subbed it, I'd suggest a statement on\nthe monitoring/observability improvements.\n\n>> but I feel fairly good about the rest of this list. Thoughts?\n> \n> I have a feeling emergency mode vacuum fits on that list. Not in the\n> press release, but in the major features list of the release notes.\n\nGiven some recent news I saw floating around, I'd agree with this.\n\nMy suggestion on ordering:\n\n- Numerous performance ...\n- B-tree...\n- Subscripting ...\n- Range types ...\n- Stored ...\n- Extended ...\n- SEARCH / CYCLE ...\n- libpq ...\n- TOAST ...\n(- emergency mode vacuum ...)\n\nJonathan",
"msg_date": "Wed, 22 Sep 2021 12:00:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "\"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> On 9/22/21 11:15 AM, Magnus Hagander wrote:\n>> On Wed, Sep 22, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Numerous performance improvements have been made for parallel queries,\n>>> heavily-concurrent workloads, partitioned tables, logical replication, and\n>>> vacuuming. Notably, foreign data wrappers can now make use of query\n>>> parallelism.\n\n>> \"foreign data wrappers and stored procedures/functions\" maybe?\n\n> +1\n\nI thought the point about FDWs was important because actual work (by\nFDW authors) is needed to make anything happen. The extra parallelism\ninside plpgsql functions doesn't require user effort, so I don't see\nthat it needs to be called out separately.\n\n>> I have a feeling emergency mode vacuum fits on that list. Not in the\n>> press release, but in the major features list of the release notes.\n\n> Given some recent news I saw floating around, I'd agree with this.\n\nMeh ... if it didn't make the press release's longer list, why is\nit critical here?\n\n> My suggestion on ordering:\n\nMy thought was \"SQL features first, then performance\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Sep 2021 12:30:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 12:00:07PM -0400, Jonathan S. Katz wrote:\n> On 9/22/21 11:15 AM, Magnus Hagander wrote:\n> > On Wed, Sep 22, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> >>> On 9/19/21 12:32 PM, Justin Pryzby wrote:\n> >>>> On Sat, Sep 18, 2021 at 01:37:19PM -0400, Tom Lane wrote:\n> >>>>> We don't yet have a list-of-major-features for the v14 release notes.\n> >>>>> Anybody care to propose one?\n> >>\n> >>> I can try proposing some wording on this in a bit; I'm working on the\n> >>> overdue draft of the press release, and thought I'd chime in here first.\n> >>\n> >> I looked over Jonathan's draft press release [1] and tried to boil it down\n> >> to our usual ten-or-so bullet points for the release notes' introductory\n> >> paragraph. I ended up with this (didn't bother with markup yet):\n> >>\n> >> -----\n> >> Stored procedures can now return data via OUT parameters.\n> >>\n> >> The SQL-standard SEARCH and CYCLE options for common table expressions\n> >> have been implemented.\n> >>\n> >> Range types have been extended by adding multiranges, which allow\n> >> representation of noncontiguous data ranges.\n> >>\n> >> Subscripting can now be applied to any data type for which it is a useful\n> >> notation, not only arrays. In this release, JSONB and hstore have gained\n> >> subscripting operators.\n> >>\n> >> Numerous performance improvements have been made for parallel queries,\n> >> heavily-concurrent workloads, partitioned tables, logical replication, and\n> >> vacuuming. 
Notably, foreign data wrappers can now make use of query\n> >> parallelism.\n> > \n> > \"foreign data wrappers and stored procedures/functions\" maybe?\n> \n> +1\n> \n> >> B-tree index updates are managed more efficiently, reducing index bloat.\n> >>\n> >> Extended statistics can now be collected on expressions, allowing\n> >> better planning results for complex queries.\n> >>\n> >> libpq now has the ability to pipeline multiple queries, which can boost\n> >> throughput over high-latency connections.\n> >>\n> >> TOAST data can optionally be compressed with LZ4 instead of the traditional\n> >> pglz algorithm.\n> >> -----\n> >>\n> >> I'm not entirely sure that the TOAST item should make the cut,\n> > \n> > I think it should be t here.\n> \n> Leaning towards keeping it. If we subbed it, I'd suggest a statement on\n> the monitoring/observability improvements.\n> \n> >> but I feel fairly good about the rest of this list. Thoughts?\n> > \n> > I have a feeling emergency mode vacuum fits on that list. Not in the\n> > press release, but in the major features list of the release notes.\n> \n> Given some recent news I saw floating around, I'd agree with this.\n> \n> My suggestion on ordering:\n> \n> - Numerous performance ...\n> - B-tree...\n> - Subscripting ...\n> - Range types ...\n> - Stored ...\n> - Extended ...\n> - SEARCH / CYCLE ...\n> - libpq ...\n> - TOAST ...\n> (- emergency mode vacuum ...)\n\nMaybe group the features together into types of features, similar to v11/v12:\nhttps://www.postgresql.org/docs/12/release-12.html\n\nSQL features: SEARCH/CYLCE, subcripting, range, ...\nPerformance improvements in btree, vacuum, toast...\n...\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 22 Sep 2021 13:20:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On 9/22/21 12:30 PM, Tom Lane wrote:\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n>> On 9/22/21 11:15 AM, Magnus Hagander wrote:\n\n>>> I have a feeling emergency mode vacuum fits on that list. Not in the\n>>> press release, but in the major features list of the release notes.\n> \n>> Given some recent news I saw floating around, I'd agree with this.\n> \n> Meh ... if it didn't make the press release's longer list, why is\n> it critical here?\n\nMaybe it should have. I can add it to the \"vacuum improvements\" sentence.\n\nJonathan",
"msg_date": "Wed, 22 Sep 2021 16:04:06 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 6:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 9/22/21 11:15 AM, Magnus Hagander wrote:\n> >> On Wed, Sep 22, 2021 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> Numerous performance improvements have been made for parallel queries,\n> >>> heavily-concurrent workloads, partitioned tables, logical replication, and\n> >>> vacuuming. Notably, foreign data wrappers can now make use of query\n> >>> parallelism.\n>\n> >> \"foreign data wrappers and stored procedures/functions\" maybe?\n>\n> > +1\n>\n> I thought the point about FDWs was important because actual work (by\n> FDW authors) is needed to make anything happen. The extra parallelism\n> inside plpgsql functions doesn't require user effort, so I don't see\n> that it needs to be called out separately.\n\nTrue, but I'm willing to guess we have a lot more people who are using\nstored procs with return query and who are going to be very happy\nabout them now being much faster in cases where parallelism worked,\nthan we have people who are writing FDWs..\n\nThat said, I'm not suggesting we remove the mention of the FDWs, just\nthat we keep both.\n\n\n> >> I have a feeling emergency mode vacuum fits on that list. Not in the\n> >> press release, but in the major features list of the release notes.\n>\n> > Given some recent news I saw floating around, I'd agree with this.\n>\n> Meh ... if it didn't make the press release's longer list, why is\n> it critical here?\n\nMy take on that is audience. It's an important feature for existing\nusers of PostgreSQL, and an important change over how the system\nbehaved before. They are more likely to read the release notes. 
The\npress release is more about reaching people who are not already using\npostgres, or who are so but more tangentially.\n\nMaybe that audience take is wrong though, but it is what I based the idea on :)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 22 Sep 2021 22:44:35 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Wed, Sep 22, 2021 at 6:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I thought the point about FDWs was important because actual work (by\n>> FDW authors) is needed to make anything happen. The extra parallelism\n>> inside plpgsql functions doesn't require user effort, so I don't see\n>> that it needs to be called out separately.\n\n> True, but I'm willing to guess we have a lot more people who are using\n> stored procs with return query and who are going to be very happy\n> about them now being much faster in cases where parallelism worked,\n> than we have people who are writing FDWs..\n\nCertainly. But my sentence about \"Numerous performance improvements\"\nalready mashes down dozens of other it-just-works-better-now\nperformance improvements. Wny call out that one in particular?\n\nIf I had to pick out just one, I might actually lean towards mentioning\n86dc90056, which might change users' calculus about how many partitions\nthey can use. (But I may be biased about that.)\n\n> My take on that is audience. It's an important feature for existing\n> users of PostgreSQL, and an important change over how the system\n> behaved before. They are more likely to read the release notes. The\n> press release is more about reaching people who are not already using\n> postgres, or who are so but more tangentially.\n\nPerhaps, but on those grounds, the business about reducing B-tree bloat\ndoesn't belong in the press release either. Anyway I'm not sure the\naudiences are so different --- if I thought they were, I'd not have\nstarted from the press release.\n\nMy feeling is that the initial summary in the release notes is meant\nto be a 10000-meter overview of what's new in the release. As soon\nas you get past that bullet list, you find yourself right down in the\nweeds, so an overview is good to have. 
The press release can afford\nto fly a little lower than 10000 meters, though not by all that much.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Sep 2021 17:01:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 2:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Certainly. But my sentence about \"Numerous performance improvements\"\n> already mashes down dozens of other it-just-works-better-now\n> performance improvements. Wny call out that one in particular?\n\nRC 1 is supposed to be released in less than 24 hours. ISTM that we're\nalmost out of time.\n\nIs some kind of simple compromise possible?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Sep 2021 18:24:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Sep 22, 2021 at 2:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Certainly. But my sentence about \"Numerous performance improvements\"\n>> already mashes down dozens of other it-just-works-better-now\n>> performance improvements. Wny call out that one in particular?\n\n> RC 1 is supposed to be released in less than 24 hours. ISTM that we're\n> almost out of time.\n\nUmmm ... RC1 was wrapped on Monday. It will go out with the \"TO BE ADDED\"\nplaceholder for this list. I'm not panicked about time --- we just need\nto finalize this text by Sunday-ish.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Sep 2021 22:06:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 7:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Ummm ... RC1 was wrapped on Monday. It will go out with the \"TO BE ADDED\"\n> placeholder for this list. I'm not panicked about time --- we just need\n> to finalize this text by Sunday-ish.\n\nI assumed that the web team had the discretion to keep the website\nversion of the release notes a bit more consistent then what you'd get\nfrom the RC1 tarball.\n\nI'm not going to make a fuss about it, but it would have been nice if\nwe'd kept with the usual schedule for the major features list.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Sep 2021 19:43:01 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On 9/22/21 10:43 PM, Peter Geoghegan wrote:\n> On Wed, Sep 22, 2021 at 7:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Ummm ... RC1 was wrapped on Monday. It will go out with the \"TO BE ADDED\"\n>> placeholder for this list. I'm not panicked about time --- we just need\n>> to finalize this text by Sunday-ish.\n> \n> I assumed that the web team had the discretion to keep the website\n> version of the release notes a bit more consistent then what you'd get\n> from the RC1 tarball.\n\nNope, they get loaded when the tarball is loaded as part of the release\nprocess.\n\nJonathan",
"msg_date": "Wed, 22 Sep 2021 22:52:04 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 11:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Magnus Hagander <magnus@hagander.net> writes:\n> > On Wed, Sep 22, 2021 at 6:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I thought the point about FDWs was important because actual work (by\n> >> FDW authors) is needed to make anything happen. The extra parallelism\n> >> inside plpgsql functions doesn't require user effort, so I don't see\n> >> that it needs to be called out separately.\n>\n> > True, but I'm willing to guess we have a lot more people who are using\n> > stored procs with return query and who are going to be very happy\n> > about them now being much faster in cases where parallelism worked,\n> > than we have people who are writing FDWs..\n>\n> Certainly. But my sentence about \"Numerous performance improvements\"\n> already mashes down dozens of other it-just-works-better-now\n> performance improvements. Wny call out that one in particular?\n\nJust my guestimate that that one is going to be one of the more\npopular ones. But it is a guess. And I'm not feeling strongly enough\nabout it to argue that one - if you feel the one there now is more\nimportant,t hen we go with it.\n\n\n> If I had to pick out just one, I might actually lean towards mentioning\n> 86dc90056, which might change users' calculus about how many partitions\n> they can use. (But I may be biased about that.)\n\nAh, so you posting that led me to re-read the whole thing again, this\ntime directly after caffeine.\n\nIt starts mentioning parallel query and it finishes with parallel\nquery. At a quick glance that gave me the impression the whole\nparagraph was about \"things related to improvements of parallel\nquery\", given how it started.\n\nKnowing that, I'd leave the \"numerous improvements for parallel\nqueries\" and just drop the specific mention of FDWs personally. 
We we\nwant to call out something in particular at the end, I agree that\n86dc90056 is probably a better choice than either of the other two for\nbeing the called-out one.\n\n\n> > My take on that is audience. It's an important feature for existing\n> > users of PostgreSQL, and an important change over how the system\n> > behaved before. They are more likely to read the release notes. The\n> > press release is more about reaching people who are not already using\n> > postgres, or who are so but more tangentially.\n>\n> Perhaps, but on those grounds, the business about reducing B-tree bloat\n> doesn't belong in the press release either. Anyway I'm not sure the\n> audiences are so different --- if I thought they were, I'd not have\n> started from the press release.\n\nTrue on the btree point.\n\n\n> My feeling is that the initial summary in the release notes is meant\n> to be a 10000-meter overview of what's new in the release. As soon\n> as you get past that bullet list, you find yourself right down in the\n> weeds, so an overview is good to have. The press release can afford\n> to fly a little lower than 10000 meters, though not by all that much.\n\nI'm not really sure of the value of having two different set of\nsummaries if they're basically targeting the same group. But now is\nnot the time to bikeshed about the overall structure I think, so I'm\nfine keeping it that way. And if that is the target audience then yes,\nit makes sense not to have something in the release notes summary\nthat's not in the press release. I would then argue for *including*\nthe emergency mode vacuum in the press release.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 23 Sep 2021 14:37:53 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 12:00 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> - Numerous performance ...\n> - B-tree...\n> - Subscripting ...\n> - Range types ...\n> - Stored ...\n> - Extended ...\n> - SEARCH / CYCLE ...\n> - libpq ...\n> - TOAST ...\n> (- emergency mode vacuum ...)\n\nMy opinion is that this is awfully long for a list of major features.\nBut Tom said 10 or so was typical, so perhaps I am all wet.\n\nStill, this kind of seems like a laundry list to me. I'd argue for\ncutting range types, extended statistics, SEARCH / CYCLE, TOAST, and\nemergency mode vacuum. They're all nice, and I'm glad we have them,\nbut they're also things that only people who are deeply steeped in\nPostgreSQL already seem likely to appreciate. Better scalability, less\nbloat, working OUT parameters, and query pipelining have benefits\nanyone can understand.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Sep 2021 13:06:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Still, this kind of seems like a laundry list to me. I'd argue for\n> cutting range types, extended statistics, SEARCH / CYCLE, TOAST, and\n> emergency mode vacuum. They're all nice, and I'm glad we have them,\n> but they're also things that only people who are deeply steeped in\n> PostgreSQL already seem likely to appreciate. Better scalability, less\n> bloat, working OUT parameters, and query pipelining have benefits\n> anyone can understand.\n\nBut of course it's a laundry list, and it's aimed at people steeped\nin Postgres, because who else is going to be reading release notes?\n\nAnyway, after re-reading the list, I concur that the TOAST item\nshouldn't make the cut, so I took that out along with the explicit\nmention of FDWs, and added something about emergency vacuum.\nPushed at\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=765f677f364100072160e7af37288eb1df2ff355\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 11:40:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Release 14 Schedule"
}
] |
[
{
"msg_contents": "So I've been looking at issues we used to have in production some time\nago which eventually lead us to migrating away from partial indexes in\nsome cases. In the end, I'm surprised how easy this (or at least a\nsimilar case) was to reproduce. The attached program does some\nUPDATEs where around every third update deletes the row from the\npartial index since it doesn't match indpred anymore. In that case\nthe row is immediately UPDATEd back to match the index WHERE clause\nagain. This roughly emulates what some of our processes do in\nproduction.\n\nToday, running the program for a few minutes (until the built-in\n262144 iteration limit), I usually end up with a partial index through\nwhich producing the only row takes milliseconds on a cold cache, and\nover a millisecond on a hot one. Finding the row through the primary\nkey is still fast, because the bloat there gets cleaned up. As far as\nI can tell, after the index has gotten into this state, there's no way\nto clean it up except VACUUMing the entire table or a REINDEX. Both\nsolutions are pretty bad.\n\nMy working theory was that this has to do with the fact that\nHeapTupleSatisfiesMVCC doesn't set the HEAP_XMAX_COMMITTED bit here,\nbut I'm not so sure anymore. Has anyone seen something like this? If\nthat really is what's happening here, then I can see why we wouldn't\nwant to slow down SELECTs with expensive visibility checks. But that\nreally leaves me wishing for something like VACUUM INDEX partial_idx.\nOtherwise your elephant just keeping getting slower and slower until\nyou get called at 2 AM to play REINDEX.\n\n(I've tested this on 9.6, v11 and v13. 13 seems to be a bit better\nhere, but not \"fixed\", I think.)\n\n\n.m",
"msg_date": "Wed, 15 Sep 2021 17:18:17 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Partial index \"microvacuum\""
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 7:18 AM Marko Tiikkaja <marko@joh.to> wrote:\n> So I've been looking at issues we used to have in production some time\n> ago which eventually lead us to migrating away from partial indexes in\n> some cases. In the end, I'm surprised how easy this (or at least a\n> similar case) was to reproduce.\n\n> (I've tested this on 9.6, v11 and v13. 13 seems to be a bit better\n> here, but not \"fixed\", I think.)\n\nWhat about v14? There were significant changes to the\nmicrovacuum/index deletion stuff in that release:\n\nhttps://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 15 Sep 2021 09:25:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Partial index \"microvacuum\""
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 7:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> What about v14? There were significant changes to the\n> microvacuum/index deletion stuff in that release:\n>\n> https://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION\n\nHuh. Interesting. I'm sorry, I wasn't aware of this work and didn't\nhave version 14 at hand. But it looks like both the partial index as\nwell as the secondary index on (id::text) get cleaned up nicely there.\nI even tried a version where I have a snapshot open for the entire\nrun, and the subsequents SELECTs clean the bloat up. I'll need to\nread up on the details a bit to understand exactly what changed, but\nit appears that at least this particular pattern has already been\nfixed.\n\nThank you so much for your work on this!\n\n\n.m\n\n\n",
"msg_date": "Thu, 16 Sep 2021 14:45:06 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: Partial index \"microvacuum\""
},
{
"msg_contents": "On Thu, Sep 16, 2021 at 4:45 AM Marko Tiikkaja <marko@joh.to> wrote:\n> Huh. Interesting. I'm sorry, I wasn't aware of this work and didn't\n> have version 14 at hand. But it looks like both the partial index as\n> well as the secondary index on (id::text) get cleaned up nicely there.\n\nThat's great.\n\nI understand why other hackers see partial indexes as a special case,\nbut I don't really see them that way. The only substantive difference\nis the considerations for HOT safety in your scenario, versus a\nscenario with an equivalent non-partial index. By equivalent I mean an\nindex that is the same in every way, but doesn't have a predicate. And\nwith the same workload. In other words, an index that really should\nhave been partial (because the \"extra\" index tuples are useless in\npractice), but for whatever reason wasn't defined that way.\n\nIf you look at what's going on at the level of the constantly modified\nleaf pages in each scenario, then you'll see no differences -- none at\nall. The problem of VACUUM running infrequently is really no worse\nwith the partial index. VACUUM runs infrequently relative to the small\nuseful working set in *either* scenario. The useless extra index\ntuples in the non-partial-index scenario only *hide* the problem --\nobviously they're not protective in any way.\n\n> I even tried a version where I have a snapshot open for the entire\n> run, and the subsequents SELECTs clean the bloat up. I'll need to\n> read up on the details a bit to understand exactly what changed, but\n> it appears that at least this particular pattern has already been\n> fixed.\n\nBottom-up index deletion tends to help even when a snapshot holds back\ncleanup. For example:\n\nhttps://www.postgresql.org/message-id/CAGnEbogATZS1mWMVX8FzZHMXzuDEcb10AnVwwhCtXtiBpg3XLQ@mail.gmail.com\n\nIt's hard to explain exactly why this happens. The short version is\nthat there is a synergy between deduplication and bottom-up index\ndeletion. 
As bottom-up index deletion starts to fail (because it\nfundamentally isn't possible to delete any more index tuples on the\npage due to the basic invariants for cleanup not allowing it),\ndeduplication \"takes over for the page\". Deduplication can \"absorb\"\nextra versions from non-hot updates. A deduplication pass could easily\nbuy us enough time for the old snapshot to naturally go away. Next\ntime around a bottom-up index deletion pass is attempted for the same\npage, we'll probably find something to delete.\n\nJust accepting version-driven page splits was always a permanent\nsolution to a temporary problem.\n\n> Thank you so much for your work on this!\n\nThanks Marko!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 16 Sep 2021 09:19:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Partial index \"microvacuum\""
}
] |
[
{
"msg_contents": "Memory allocation appeared be O(1) WRT the number of statistics objects, which\nwas not expected to me. This is true in v13 (and probably back to v10).\n\nIt seems to work fine to reset the memory context within the loop, so long as\nthe statslist is allocated in the parent context.\n\n|DROP TABLE t; CREATE TABLE t AS SELECT i, i+1 AS a, i+2 AS b, i+3 AS c, i+4 AS d, i+5 AS e FROM generate_series(1,99999)i;\n\n|SELECT format('CREATE STATISTICS sta%s (ndistinct) ON a,(1+b),(2+c),(3+d),(4+e) FROM t', a) FROM generate_series(1,9)a\\gexec\n|SET log_statement_stats=on; SET client_min_messages=debug; ANALYZE t;\n|=> 369432 kB max resident size\n\n|SELECT format('CREATE STATISTICS sta%s (ndistinct) ON a,b,c,d,e FROM t', a) FROM generate_series(1,33)a\\gexec\n|SET log_statement_stats=on; SET client_min_messages=debug; ANALYZE t;\n|=> 1284368 kB max resident size",
"msg_date": "Wed, 15 Sep 2021 15:09:28 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "mem context is not reset between extended stats"
},
{
"msg_contents": "On 9/15/21 10:09 PM, Justin Pryzby wrote:\n> Memory allocation appeared be O(1) WRT the number of statistics objects, which\n> was not expected to me. This is true in v13 (and probably back to v10).\n> \n> It seems to work fine to reset the memory context within the loop, so long as\n> the statslist is allocated in the parent context.\n> \n\nYeah, and I agree this fix seems reasonable. Thanks for looking!\n\nIn principle we don't expect too many extended statistics on a single \ntable, but building a single statistics may use quite a bit of memory, \nso it makes sense to release it early ...\n\nBut while playing with this a bit more, I discovered a much worse issue. \nConsider this:\n\n create table t (a text, b text, c text, d text,\n e text, f text, g text, h text);\n\n insert into t select x, x, x, x, x, x, x, x from (\n select md5(mod(i,100)::text) as x\n from generate_series(1,30000) s(i)) foo;\n\n\n create statistics s (dependencies) on a, b, c, d, e, f, g, h from t;\n\n analyze t;\n\nThis ends up eating insane amounts of memory - on my laptop it eats \n~2.5GB and then crashes with OOM. This happens because each call to \ndependency_degree does build_sorted_items, which detoasts the values. \nAnd resetting the context can't fix that, because this happens while \nbuilding a single statistics object.\n\nIMHO the right fix is to run dependency_degree in a separate context, \nand reset it after each dependency. This releases the detoasted values, \nwhich are otherwise hard to deal with.\n\nThis does not mean we should not do what your patch does too. That does \naddress various other \"leaks\" (for example MCV calls build_sorted_items \ntoo, but only once so it does not have this same issue).\n\nThese issues exist pretty much since PG10, which is where extended stats \nwere introduced, so we'll have to backpatch it. 
But there's no rush and \nI don't want to interfere with rc1 at the moment.\n\nAttached are two patches - 0001 is your patch (seems fine, but I looked \nonly very briefly) and 0002 is the context reset I proposed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 21 Sep 2021 02:15:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: mem context is not reset between extended stats"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 02:15:45AM +0200, Tomas Vondra wrote:\n> On 9/15/21 10:09 PM, Justin Pryzby wrote:\n> > Memory allocation appeared be O(1) WRT the number of statistics objects, which\n> > was not expected to me. This is true in v13 (and probably back to v10).\n\nOf course I meant to say that it's O(N) and not O(1) :)\n\n> In principle we don't expect too many extended statistics on a single table,\n\nYes, but note that expression statistics make it more reasonable to have\nmultiple extended stats objects. I noticed this while testing a patch to build\n(I think) 7 stats objects on each of our current month's partitions.\nautovacuum was repeatedly killed on this vm after using using 2+GB RAM,\nprobably in part because there were multiple autovacuum workers handling the\nmost recent batch of inserted tables.\n\nFirst, I tried to determine what specifically was leaking so badly, and\neventually converged to this patch. Maybe there's additional subcontexts which\nwould be useful, but the minimum is to reset between objects.\n\n> These issues exist pretty much since PG10, which is where extended stats\n> were introduced, so we'll have to backpatch it. But there's no rush and I\n> don't want to interfere with rc1 at the moment.\n\nAck that. It'd be *nice* if if the fix were included in v14.0, but I don't\nknow the rules about what can change after rc1.\n\n> Attached are two patches - 0001 is your patch (seems fine, but I looked only\n> very briefly) and 0002 is the context reset I proposed.\n\nI noticed there seems to be a 3rd patch available, which might either be junk\nfor testing or a cool new feature I'll hear about later ;)\n\n> From 204f4602b218ec13ac1e3fa501a7f94adc8a4ea1 Mon Sep 17 00:00:00 2001\n> From: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Tue, 21 Sep 2021 01:14:11 +0200\n> Subject: [PATCH 1/3] reset context\n\ncheers,\n-- \nJustin\n\n\n",
"msg_date": "Mon, 20 Sep 2021 20:37:45 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: mem context is not reset between extended stats"
},
{
"msg_contents": "On 9/21/21 3:37 AM, Justin Pryzby wrote:\n> On Tue, Sep 21, 2021 at 02:15:45AM +0200, Tomas Vondra wrote:\n>> On 9/15/21 10:09 PM, Justin Pryzby wrote:\n>>> Memory allocation appeared be O(1) WRT the number of statistics objects, which\n>>> was not expected to me. This is true in v13 (and probably back to v10).\n> \n> Of course I meant to say that it's O(N) and not O(1) :)\n> \n\nSure, I got that ;-)\n\n>> In principle we don't expect too many extended statistics on a single table,\n> \n> Yes, but note that expression statistics make it more reasonable to have\n> multiple extended stats objects. I noticed this while testing a patch to build\n> (I think) 7 stats objects on each of our current month's partitions.\n> autovacuum was repeatedly killed on this vm after using using 2+GB RAM,\n> probably in part because there were multiple autovacuum workers handling the\n> most recent batch of inserted tables.\n> \n> First, I tried to determine what specifically was leaking so badly, and\n> eventually converged to this patch. Maybe there's additional subcontexts which\n> would be useful, but the minimum is to reset between objects.\n> \n\nAgreed.\n\nI don't think there's much we could release, given the current design,\nbecause we evaluate (and process) all expressions at once. We might\nevaluate/process them one by one (and release the memory), but only when\nno other statistics kinds are requested. That seems pretty futile.\n\n\n>> These issues exist pretty much since PG10, which is where extended stats\n>> were introduced, so we'll have to backpatch it. But there's no rush and I\n>> don't want to interfere with rc1 at the moment.\n> \n> Ack that. It'd be *nice* if if the fix were included in v14.0, but I don't\n> know the rules about what can change after rc1.\n> \n\nIMO this is a bugfix, and I'll get it into 14.0 (and backpatch). 
But I\ndon't want to interfere with the rc1 tagging and release, so I'll do\nthat later this week.\n\n>> Attached are two patches - 0001 is your patch (seems fine, but I looked only\n>> very briefly) and 0002 is the context reset I proposed.\n> \n> I noticed there seems to be a 3rd patch available, which might either be junk\n> for testing or a cool new feature I'll hear about later ;)\n> \n\nHaha! Nope, that was just an experiment with doubling the repalloc()\nsizes in functional dependencies, instead of growing them in tiny\nchunks. but it does not make a measurable difference, so I haven't\nincluded that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 21 Sep 2021 13:28:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: mem context is not reset between extended stats"
},
{
"msg_contents": "Hi,\n\nI've pushed both of these patches, with some minor tweaks (freeing the \nstatistics list, and deleting the new context), and backpatched them all \nthe way to 10.\n\nThanks for the report & patch, Justin!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Sep 2021 19:04:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: mem context is not reset between extended stats"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nHere's another crash caught by sqlsmith.\n\n\"\"\"\ndrop table if exists fkpart3_pk5 cascade;\ndrop table if exists inet_tbl;\n\ncreate table fkpart3_pk5 (\n a integer not null primary key\n)\npartition by range (a);\n\ncreate table fkpart3_pk51 partition of fkpart3_pk5\n\tfor values from (4000) to (4500);\n\ncreate table inet_tbl (\n c cidr,\n i inet\n);\n\nselect\n 1 as c0\nfrom\n\t(select null::integer as c9,\n\t ref_0.a as c24\n\t from fkpart3_pk5 as ref_0\n \t) as subq_0\n \tright join public.inet_tbl as sample_0 on (cast(null as cidr) = c)\nwhere subq_0.c9 <= subq_0.c24\n\"\"\"\n\n\nAttached the backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Wed, 15 Sep 2021 18:09:59 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "right join with partitioned table crash"
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> Here's another crash caught by sqlsmith.\n\nFun. Looks like it fails back to v12, but not in v11,\nso it's some optimization we added in v12 that's at fault.\n\n(That being the case, this isn't a blocker for 14rc1,\nthough of course it'd be nice if we fix it in time for that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Sep 2021 19:53:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: right join with partitioned table crash"
},
{
"msg_contents": "On Wed, Sep 15, 2021 at 07:53:49PM -0400, Tom Lane wrote:\n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> > Here's another crash caught by sqlsmith.\n> \n> Fun. Looks like it fails back to v12, but not in v11,\n> so it's some optimization we added in v12 that's at fault.\n\nIt seems to be a regression (?) in 12.6 (2021-02-11), from\n| 1cce024fd2 Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 15 Sep 2021 23:42:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: right join with partitioned table crash"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Sep 15, 2021 at 07:53:49PM -0400, Tom Lane wrote:\n>> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n>>> Here's another crash caught by sqlsmith.\n\n>> Fun. Looks like it fails back to v12, but not in v11,\n>> so it's some optimization we added in v12 that's at fault.\n\n> It seems to be a regression (?) in 12.6 (2021-02-11), from\n> | 1cce024fd2 Fix pull_varnos' miscomputation of relids set for a PlaceHolderVar.\n\nYeah, that patch still had a hole in it. Fix pushed,\nthanks for the report!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Sep 2021 15:42:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: right join with partitioned table crash"
}
] |
[
{
"msg_contents": "After the recent case where the SQL/JSON patches had an error that only\nexhibited when pg_stat_statement was preloaded, I decided to see if\nthere were any other such cases in our committed code. So I tested it\nout with a modified buildfarm client with this line added in the initdb\nstep where it's adding the extra_config to postgresql.conf:\n\n print $handle \"shared_preload_libraries = 'pg_stat_statements'\\n\";\n\nThe good news is that it didn't actually find anything amiss. The bad\nnews is that it generated a bunch of diffs along these lines:\n\n\n EXPLAIN (verbose, costs off) SELECT * FROM functest_sri1();\n- QUERY PLAN \n---------------------------------------\n+ QUERY PLAN \n+---------------------------------------\n Seq Scan on temp_func_test.functest3\n Output: functest3.a\n-(2 rows)\n+ Query Identifier: 4255315482610697537\n+(3 rows)\n\n\nISTM there's probably a good case for suppressing the \"Query Identifier\"\nlines if costs are off. The main reason for saying \"costs off\" is to\nhave predictable results, AFAICT, and clearly the query identifier is\nnot predictable. It would be a trivial change in explain.c. Another\npossibility would be to add another option to supporess the query\nidentifier separately, but that seems like overkill.\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 16 Sep 2021 15:08:18 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "testing with pg_stat_statements"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 3:08 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> So I tested it\n> out with a modified buildfarm client with this line added in the initdb\n> step where it's adding the extra_config to postgresql.conf:\n>\n> print $handle \"shared_preload_libraries = 'pg_stat_statements'\\n\";\n>\n> The good news is that it didn't actually find anything amiss.\n\nThat's something I also regularly do when working on patches, so I\nconfirm that there's no problem with pg_stat_statements, at least to\nmy knowledge.\n\n> The bad\n> news is that it generated a bunch of diffs along these lines:\n>\n>\n> EXPLAIN (verbose, costs off) SELECT * FROM functest_sri1();\n> - QUERY PLAN\n> ---------------------------------------\n> + QUERY PLAN\n> +---------------------------------------\n> Seq Scan on temp_func_test.functest3\n> Output: functest3.a\n> -(2 rows)\n> + Query Identifier: 4255315482610697537\n> +(3 rows)\n>\n>\n> ISTM there's probably a good case for suppressing the \"Query Identifier\"\n> lines if costs are off. The main reason for saying \"costs off\" is to\n> have predictable results, AFAICT, and clearly the query identifier is\n> not predictable. It would be a trivial change in explain.c. Another\n> possibility would be to add another option to supporess the query\n> identifier separately, but that seems like overkill.\n\nYes that's something that I also find annoying. I'm not sure if\nremoving the queryid when costs is disabled is the best way forward,\nbut I have no strong objection.\n\nNote that we do have a test in explain.sql to make sure that a queryid\nis computed and outputed in explain plans, but it doesn't rely on\ncosts being disabled so that wouldn't be a problem.\n\n\n",
"msg_date": "Fri, 17 Sep 2021 09:25:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: testing with pg_stat_statements"
}
] |
[
{
"msg_contents": "Hi,\n\nFound by llvm scan build.\nArgument with 'nonnull' attribute passed null pl/plpgsql/src/pl_comp.c\nresolve_column_ref\n\nProceed?\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 16 Sep 2021 16:11:53 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible fault with resolve column name (plpgsql)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Found by llvm scan build.\n> Argument with 'nonnull' attribute passed null pl/plpgsql/src/pl_comp.c\n> resolve_column_ref\n\nThis is somewhere between pointless and counterproductive. colname won't\nbe used unless the switch has set nnames_field (and the identified number\nof names matches that). If that logic somehow went wrong, I'd *want*\nthe later strcmp to dump core, not possibly give a false match.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 16 Sep 2021 16:05:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible fault with resolve column name (plpgsql)"
},
{
"msg_contents": "Em qui., 16 de set. de 2021 às 17:05, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Found by llvm scan build.\n> > Argument with 'nonnull' attribute passed null pl/plpgsql/src/pl_comp.c\n> > resolve_column_ref\n>\n> This is somewhere between pointless and counterproductive.\n\nNot if you've ever used llvm scan, but it's pretty accurate in identifying\nwhat the condition might occur.\n\n\n> colname won't\n> be used unless the switch has set nnames_field (and the identified number\n> of names matches that).\n\n13\n← <#Path12>\nAssuming field 'type' is equal to T_String\n→ <#Path14>\n\n22\n← <#Path21>\nAssuming 'nnames' is equal to 'nnames_field'\n→ <#Path23>\n\nIf that logic somehow went wrong, I'd *want*\n> the later strcmp to dump core, not possibly give a false match.\n>\nIn this case, strcmp will fail silently, without any coredump.\n\nIf we have a record, and the field is T_String, always have a true match?\n\nregards,\nRanier Vilela\n\nEm qui., 16 de set. de 2021 às 17:05, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Found by llvm scan build.\n> Argument with 'nonnull' attribute passed null pl/plpgsql/src/pl_comp.c\n> resolve_column_ref\n\nThis is somewhere between pointless and counterproductive.Not if you've ever used llvm scan, but it's pretty accurate in identifying what the condition might occur. colname won't\nbe used unless the switch has set nnames_field (and the identified number\nof names matches that). \n13←Assuming field 'type' is equal to T_String→ \n22←Assuming 'nnames' is equal to 'nnames_field'→\n If that logic somehow went wrong, I'd *want*\nthe later strcmp to dump core, not possibly give a false match.In this case, strcmp will fail silently, without any coredump.If we have a record, and the field is T_String, always have a true match?regards,Ranier Vilela",
"msg_date": "Thu, 16 Sep 2021 19:58:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible fault with resolve column name (plpgsql)"
}
] |
[
{
"msg_contents": "Hi,\n\nIn postgres_fdw, pgfdw_xact_callback() and pgfdw_subxact_callback() do\nalmost the same thing to rollback remote toplevel- and sub-transaction.\nBut their such rollback logics are implemented separately and\nin different way. Which would decrease the readability and maintainability,\nI think. So how about making the common function so that those callback\nfunctions can just use it? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 17 Sep 2021 11:31:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Refactoring postgres_fdw code to rollback remote transaction"
},
{
"msg_contents": "On Thu, Sep 16, 2021 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> Hi,\n>\n> In postgres_fdw, pgfdw_xact_callback() and pgfdw_subxact_callback() do\n> almost the same thing to rollback remote toplevel- and sub-transaction.\n> But their such rollback logics are implemented separately and\n> in different way. Which would decrease the readability and maintainability,\n> I think. So how about making the common function so that those callback\n> functions can just use it? Patch attached.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>\nHi,\n\n+ goto fail; /* Trouble clearing prepared statements */\n\nThe label fail can be removed. Under the above condition,\nentry->changing_xact_state is still true. You can directly return.\n\nCheers\n\nOn Thu, Sep 16, 2021 at 7:31 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:Hi,\n\nIn postgres_fdw, pgfdw_xact_callback() and pgfdw_subxact_callback() do\nalmost the same thing to rollback remote toplevel- and sub-transaction.\nBut their such rollback logics are implemented separately and\nin different way. Which would decrease the readability and maintainability,\nI think. So how about making the common function so that those callback\nfunctions can just use it? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATIONHi,+ goto fail; /* Trouble clearing prepared statements */The label fail can be removed. Under the above condition, entry->changing_xact_state is still true. You can directly return.Cheers",
"msg_date": "Thu, 16 Sep 2021 19:40:06 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw code to rollback remote transaction"
},
{
"msg_contents": "On 2021/09/17 11:40, Zhihong Yu wrote:\n> + goto fail; /* Trouble clearing prepared statements */\n> \n> The label fail can be removed. Under the above condition, entry->changing_xact_state is still true. You can directly return.\n\nThanks for the review! Yes, you're right. Attached the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 17 Sep 2021 11:58:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw code to rollback remote transaction"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 8:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/09/17 11:40, Zhihong Yu wrote:\n> > + goto fail; /* Trouble clearing prepared statements */\n> >\n> > The label fail can be removed. Under the above condition, entry->changing_xact_state is still true. You can directly return.\n>\n> Thanks for the review! Yes, you're right. Attached the updated version of the patch.\n\n+1 for the code refactoring (1 file changed, 75 insertions(+), 102\ndeletions(-)).\n\nThe v2 patch looks good to me as is.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 17 Sep 2021 12:03:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactoring postgres_fdw code to rollback remote transaction"
},
{
"msg_contents": "\n\nOn 2021/09/17 15:33, Bharath Rupireddy wrote:\n> On Fri, Sep 17, 2021 at 8:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2021/09/17 11:40, Zhihong Yu wrote:\n>>> + goto fail; /* Trouble clearing prepared statements */\n>>>\n>>> The label fail can be removed. Under the above condition, entry->changing_xact_state is still true. You can directly return.\n>>\n>> Thanks for the review! Yes, you're right. Attached the updated version of the patch.\n> \n> +1 for the code refactoring (1 file changed, 75 insertions(+), 102\n> deletions(-)).\n> \n> The v2 patch looks good to me as is.\n\nThanks for the review! Barring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 22 Sep 2021 00:16:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw code to rollback remote transaction"
},
{
"msg_contents": "\n\nOn 2021/09/22 0:16, Fujii Masao wrote:\n> Thanks for the review! Barring any objection, I will commit the patch.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 22 Sep 2021 23:50:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactoring postgres_fdw code to rollback remote transaction"
}
] |
[
{
"msg_contents": "Hi,\n\nLogical replication is configured on one instance in version 10.18. Timeout\nerrors occur regularly and the worker process exit with an exit code 1\n\n2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=foo,client=[local]\nLOG: duration: 1281408.171 ms statement: COPY schem.tab (col1, col2) FROM\nstdin;\n2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic\nanalyze of table \"foo.schem.tab\" system usage: CPU: user: 4.13 s, system:\n0.55 s, elapsed: 9.58 s\n2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR:\nterminating logical replication worker due to timeout\n2021-09-16 12:07:50 CEST [12546]: [11-1] user=,db=,client= LOG: worker\nprocess: logical replication worker for subscription 24106654 (PID 3770)\nexited with exit code 1\n2021-09-16 12:07:50 CEST [13872]: [1-1] user=,db=,client= LOG: logical\nreplication apply worker for subscription \"subxxxx\" has started\n2021-09-16 12:07:50 CEST [13873]: [1-1]\nuser=repuser,db=foo,client=127.0.0.1 LOG: received replication command:\nIDENTIFY_SYSTEM\n\nWhy this happen?\n\nThanks a lot for your help\n\nFabrice\n\nHi,Logical replication is configured on one instance in version 10.18. 
Timeout errors occur regularly and the worker process exit with an exit code 12021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=foo,client=[local] LOG: duration: 1281408.171 ms statement: COPY schem.tab (col1, col2) FROM stdin;2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"foo.schem.tab\" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeout2021-09-16 12:07:50 CEST [12546]: [11-1] user=,db=,client= LOG: worker process: logical replication worker for subscription 24106654 (PID 3770) exited with exit code 12021-09-16 12:07:50 CEST [13872]: [1-1] user=,db=,client= LOG: logical replication apply worker for subscription \"subxxxx\" has started2021-09-16 12:07:50 CEST [13873]: [1-1] user=repuser,db=foo,client=127.0.0.1 LOG: received replication command: IDENTIFY_SYSTEMWhy this happen?Thanks a lot for your helpFabrice",
"msg_date": "Fri, 17 Sep 2021 11:59:08 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 3:29 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> Hi,\n>\n> Logical replication is configured on one instance in version 10.18. Timeout errors occur regularly and the worker process exit with an exit code 1\n>\n> 2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=foo,client=[local] LOG: duration: 1281408.171 ms statement: COPY schem.tab (col1, col2) FROM stdin;\n> 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"foo.schem.tab\" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s\n> 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeout\n> 2021-09-16 12:07:50 CEST [12546]: [11-1] user=,db=,client= LOG: worker process: logical replication worker for subscription 24106654 (PID 3770) exited with exit code 1\n> 2021-09-16 12:07:50 CEST [13872]: [1-1] user=,db=,client= LOG: logical replication apply worker for subscription \"subxxxx\" has started\n> 2021-09-16 12:07:50 CEST [13873]: [1-1] user=repuser,db=foo,client=127.0.0.1 LOG: received replication command: IDENTIFY_SYSTEM\n>\n\nCan you share the publisher-side log as well?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Sep 2021 15:56:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "the publisher and the subscriber run on the same postgres instance.\n\nRegards,\nFabrice\n\nOn Fri, Sep 17, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Fri, Sep 17, 2021 at 3:29 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > Hi,\n> >\n> > Logical replication is configured on one instance in version 10.18.\n> Timeout errors occur regularly and the worker process exit with an exit\n> code 1\n> >\n> > 2021-09-16 12:06:50 CEST [24881]: [1-1]\n> user=postgres,db=foo,client=[local] LOG: duration: 1281408.171 ms\n> statement: COPY schem.tab (col1, col2) FROM stdin;\n> > 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG:\n> automatic analyze of table \"foo.schem.tab\" system usage: CPU: user: 4.13 s,\n> system: 0.55 s, elapsed: 9.58 s\n> > 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR:\n> terminating logical replication worker due to timeout\n> > 2021-09-16 12:07:50 CEST [12546]: [11-1] user=,db=,client= LOG: worker\n> process: logical replication worker for subscription 24106654 (PID 3770)\n> exited with exit code 1\n> > 2021-09-16 12:07:50 CEST [13872]: [1-1] user=,db=,client= LOG: logical\n> replication apply worker for subscription \"subxxxx\" has started\n> > 2021-09-16 12:07:50 CEST [13873]: [1-1]\n> user=repuser,db=foo,client=127.0.0.1 LOG: received replication command:\n> IDENTIFY_SYSTEM\n> >\n>\n> Can you share the publisher-side log as well?\n>\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nthe publisher and the subscriber run on the same postgres instance.Regards,FabriceOn Fri, Sep 17, 2021 at 12:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Sep 17, 2021 at 3:29 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> Hi,\n>\n> Logical replication is configured on one instance in version 10.18. 
Timeout errors occur regularly and the worker process exit with an exit code 1\n>\n> 2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=foo,client=[local] LOG: duration: 1281408.171 ms statement: COPY schem.tab (col1, col2) FROM stdin;\n> 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"foo.schem.tab\" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s\n> 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeout\n> 2021-09-16 12:07:50 CEST [12546]: [11-1] user=,db=,client= LOG: worker process: logical replication worker for subscription 24106654 (PID 3770) exited with exit code 1\n> 2021-09-16 12:07:50 CEST [13872]: [1-1] user=,db=,client= LOG: logical replication apply worker for subscription \"subxxxx\" has started\n> 2021-09-16 12:07:50 CEST [13873]: [1-1] user=repuser,db=foo,client=127.0.0.1 LOG: received replication command: IDENTIFY_SYSTEM\n>\n\nCan you share the publisher-side log as well?\n\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 17 Sep 2021 16:38:41 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Sep 17, 2021 at 8:08 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> the publisher and the subscriber run on the same postgres instance.\n>\n\nOkay, but there is no log corresponding to operations being performed\nby the publisher. By looking at current logs it is not very clear to\nme what might have caused this. Did you try increasing\nwal_sender_timeout and wal_receiver_timeout?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sun, 19 Sep 2021 09:55:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi Amit,\n\nWe can replay the problem: we load a table of several Gb in the schema of\nthe publisher, this generates the worker's timeout after one minute from\nthe end of this load. The table on which this load is executed is not\nreplicated.\n\n2021-09-16 12:06:50 CEST [24881]: [1-1]\nuser=postgres,db=db012a00,client=[local] LOG: duration: 1281408.171 ms\nstatement: COPY db.table (col1, col2) FROM stdin;\n\n2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic\nanalyze of table \"db.table \" system usage: CPU: user: 4.13 s, system: 0.55\ns, elapsed: 9.58 s\n\n2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR:\nterminating logical replication worker due to timeout\n\nBefore increasing value for wal_sender_timeout and wal_receiver_timeout I\nthought to further investigate the mechanisms leading to this timeout.\n\nThanks for your help\n\nFabrice\n\n\n\nOn Sun, Sep 19, 2021 at 6:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Sep 17, 2021 at 8:08 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > the publisher and the subscriber run on the same postgres instance.\n> >\n>\n> Okay, but there is no log corresponding to operations being performed\n> by the publisher. By looking at current logs it is not very clear to\n> me what might have caused this. Did you try increasing\n> wal_sender_timeout and wal_receiver_timeout?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nHi Amit, We can replay the problem: we load a table of several Gb in the schema of the publisher, this generates the worker's timeout after one minute from the end of this load. 
The table on which this load is executed is not replicated.2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=db012a00,client=[local] LOG: duration: 1281408.171 ms statement: COPY db.table (col1, col2) FROM stdin;2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"db.table \" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeoutBefore increasing value for wal_sender_timeout and wal_receiver_timeout I thought to further investigate the mechanisms leading to this timeout.Thanks for your helpFabriceOn Sun, Sep 19, 2021 at 6:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Sep 17, 2021 at 8:08 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> the publisher and the subscriber run on the same postgres instance.\n>\n\nOkay, but there is no log corresponding to operations being performed\nby the publisher. By looking at current logs it is not very clear to\nme what might have caused this. Did you try increasing\nwal_sender_timeout and wal_receiver_timeout?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 20 Sep 2021 12:40:30 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 4:10 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> Hi Amit,\n>\n> We can replay the problem: we load a table of several Gb in the schema of the publisher, this generates the worker's timeout after one minute from the end of this load. The table on which this load is executed is not replicated.\n>\n> 2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=db012a00,client=[local] LOG: duration: 1281408.171 ms statement: COPY db.table (col1, col2) FROM stdin;\n>\n> 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"db.table \" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s\n>\n> 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeout\n>\n> Before increasing value for wal_sender_timeout and wal_receiver_timeout I thought to further investigate the mechanisms leading to this timeout.\n>\n\nThe basic problem here seems to be that WAL Sender is not able to send\na keepalive or any other message for the configured\nwal_receiver_timeout. I am not sure how that can happen but can you\nonce try by switching autovacuum = off? I wanted to ensure that\nWALSender is not blocked due to the background process autovacuum.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Sep 2021 17:21:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 5:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 20, 2021 at 4:10 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n> >\n> > Hi Amit,\n> >\n> > We can replay the problem: we load a table of several Gb in the schema of the publisher, this generates the worker's timeout after one minute from the end of this load. The table on which this load is executed is not replicated.\n> >\n> > 2021-09-16 12:06:50 CEST [24881]: [1-1] user=postgres,db=db012a00,client=[local] LOG: duration: 1281408.171 ms statement: COPY db.table (col1, col2) FROM stdin;\n> >\n> > 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG: automatic analyze of table \"db.table \" system usage: CPU: user: 4.13 s, system: 0.55 s, elapsed: 9.58 s\n> >\n> > 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR: terminating logical replication worker due to timeout\n> >\n> > Before increasing value for wal_sender_timeout and wal_receiver_timeout I thought to further investigate the mechanisms leading to this timeout.\n> >\n>\n> The basic problem here seems to be that WAL Sender is not able to send\n> a keepalive or any other message for the configured\n> wal_receiver_timeout. I am not sure how that can happen but can you\n> once try by switching autovacuum = off? I wanted to ensure that\n> WALSender is not blocked due to the background process autovacuum.\n>\n\nThe other thing we can try out is to check the data in pg_locks on\npublisher during one minute after the large copy is finished. This we\ncan try out both with and without autovacuum.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Sep 2021 18:04:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "By passing the autovacuum parameter to off the problem did not occur right\nafter loading the table as in our previous tests. However, the timeout\noccurred later. We have seen the accumulation of .snap files for several Gb.\n\n...\n-rw-------. 1 postgres postgres 16791226 Sep 20 15:26\nxid-1238444701-lsn-2D2B-F5000000.snap\n-rw-------. 1 postgres postgres 16973268 Sep 20 15:26\nxid-1238444701-lsn-2D2B-F6000000.snap\n-rw-------. 1 postgres postgres 16790984 Sep 20 15:26\nxid-1238444701-lsn-2D2B-F7000000.snap\n-rw-------. 1 postgres postgres 16988112 Sep 20 15:26\nxid-1238444701-lsn-2D2B-F8000000.snap\n-rw-------. 1 postgres postgres 16864593 Sep 20 15:26\nxid-1238444701-lsn-2D2B-F9000000.snap\n-rw-------. 1 postgres postgres 16902167 Sep 20 15:26\nxid-1238444701-lsn-2D2B-FA000000.snap\n-rw-------. 1 postgres postgres 16914638 Sep 20 15:26\nxid-1238444701-lsn-2D2B-FB000000.snap\n-rw-------. 1 postgres postgres 16782471 Sep 20 15:26\nxid-1238444701-lsn-2D2B-FC000000.snap\n-rw-------. 
1 postgres postgres 16963667 Sep 20 15:27\nxid-1238444701-lsn-2D2B-FD000000.snap\n...\n\n\n\n2021-09-20 17:11:29 CEST [12687]: [1283-1] user=,db=,client= LOG:\ncheckpoint starting: time\n2021-09-20 17:11:31 CEST [12687]: [1284-1] user=,db=,client= LOG:\ncheckpoint complete: wrote 13 buffers (0.0%); 0 WAL file(s) added, 0\nremoved, 0 recycled; write=1.713 s, sync=0.001 s, total=1.718 s\n; sync files=12, longest=0.001 s, average=0.001 s; distance=29 kB,\nestimate=352191 kB\n2021-09-20 17:12:43 CEST [59986]: [2-1] user=,db=,client= ERROR:\nterminating logical replication worker due to timeout\n2021-09-20 17:12:43 CEST [12546]: [1068-1] user=,db=,client= LOG: worker\nprocess: logical replication worker for subscription 24215702 (PID 59986)\nexited with exit code 1\n2021-09-20 17:12:43 CEST [39945]: [1-1] user=,db=,client= LOG: logical\nreplication apply worker for subscription \"sub\" has started\n2021-09-20 17:12:43 CEST [39946]: [1-1] user=repuser,db=db,client=127.0.0.1\nLOG: received replication command: IDENTIFY_SYSTEM\n\nRegards,\n\nFabrice\n\n\n\nOn Mon, Sep 20, 2021 at 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Sep 20, 2021 at 4:10 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > Hi Amit,\n> >\n> > We can replay the problem: we load a table of several Gb in the schema\n> of the publisher, this generates the worker's timeout after one minute from\n> the end of this load. 
The table on which this load is executed is not\n> replicated.\n> >\n> > 2021-09-16 12:06:50 CEST [24881]: [1-1]\n> user=postgres,db=db012a00,client=[local] LOG: duration: 1281408.171 ms\n> statement: COPY db.table (col1, col2) FROM stdin;\n> >\n> > 2021-09-16 12:07:11 CEST [12161]: [1-1] user=,db=,client= LOG:\n> automatic analyze of table \"db.table \" system usage: CPU: user: 4.13 s,\n> system: 0.55 s, elapsed: 9.58 s\n> >\n> > 2021-09-16 12:07:50 CEST [3770]: [2-1] user=,db=,client= ERROR:\n> terminating logical replication worker due to timeout\n> >\n> > Before increasing value for wal_sender_timeout and wal_receiver_timeout\n> I thought to further investigate the mechanisms leading to this timeout.\n> >\n>\n> The basic problem here seems to be that WAL Sender is not able to send\n> a keepalive or any other message for the configured\n> wal_receiver_timeout. I am not sure how that can happen but can you\n> once try by switching autovacuum = off? I wanted to ensure that\n> WALSender is not blocked due to the background process autovacuum.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Mon, 20 Sep 2021 18:13:24 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 9:43 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> By passing the autovacuum parameter to off the problem did not occur right after loading the table as in our previous tests. However, the timeout occurred later. We have seen the accumulation of .snap files for several Gb.\n>\n> ...\n> -rw-------. 1 postgres postgres 16791226 Sep 20 15:26 xid-1238444701-lsn-2D2B-F5000000.snap\n> -rw-------. 1 postgres postgres 16973268 Sep 20 15:26 xid-1238444701-lsn-2D2B-F6000000.snap\n> -rw-------. 1 postgres postgres 16790984 Sep 20 15:26 xid-1238444701-lsn-2D2B-F7000000.snap\n> -rw-------. 1 postgres postgres 16988112 Sep 20 15:26 xid-1238444701-lsn-2D2B-F8000000.snap\n> -rw-------. 1 postgres postgres 16864593 Sep 20 15:26 xid-1238444701-lsn-2D2B-F9000000.snap\n> -rw-------. 1 postgres postgres 16902167 Sep 20 15:26 xid-1238444701-lsn-2D2B-FA000000.snap\n> -rw-------. 1 postgres postgres 16914638 Sep 20 15:26 xid-1238444701-lsn-2D2B-FB000000.snap\n> -rw-------. 1 postgres postgres 16782471 Sep 20 15:26 xid-1238444701-lsn-2D2B-FC000000.snap\n> -rw-------. 1 postgres postgres 16963667 Sep 20 15:27 xid-1238444701-lsn-2D2B-FD000000.snap\n> ...\n>\n\nOkay, still not sure why the publisher is not sending keep_alive\nmessages in between spilling such a big transaction. If you see, we\nhave logic in WalSndLoop() wherein each time after sending data we\ncheck whether we need to send a keep-alive message via function\nWalSndKeepaliveIfNecessary(). I think to debug this problem further\nyou need to add some logs in function WalSndKeepaliveIfNecessary() to\nsee why it is not sending keep_alive messages when all these files are\nbeing created.\n\nDid you change the default value of\nwal_sender_timeout/wal_receiver_timeout? What is the value of those\nvariables in your environment? Did you see the message \"terminating\nwalsender process due to replication timeout\" in your server logs?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Sep 2021 12:08:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "If I understand, the instruction to send keep alive by the wal sender has\nnot been reached in the for loop, for what reason?\n...\n* Check for replication timeout. */\n WalSndCheckTimeOut();\n\n/* Send keepalive if the time has come */\n WalSndKeepaliveIfNecessary();\n...\n\nThe data load is performed on a table which is not replicated, I do not\nunderstand why the whole transaction linked to an insert is copied to snap\nfiles given that table does not take part of the logical replication.\nWe are going to do a test by modifying parameters\nwal_sender_timeout/wal_receiver_timeout from 1' to 5'. The problem is that\nthese parameters are global and changing them will also impact the physical\nreplication.\n\nConcerning the walsender timeout, when the worker is started again after a\ntimeout, it will trigger a new walsender associated with it.\n\npostgres 55680 12546 0 Sep20 ? 00:00:02 postgres: aq: bgworker:\nlogical replication worker for subscription 24651602\npostgres 55681 12546 0 Sep20 ? 00:00:00 postgres: aq: wal sender\nprocess repuser 127.0.0.1(57930) idle\n\nKind Regards\n\nFabrice\n\nOn Tue, Sep 21, 2021 at 8:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Sep 20, 2021 at 9:43 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > By passing the autovacuum parameter to off the problem did not occur\n> right after loading the table as in our previous tests. However, the\n> timeout occurred later. We have seen the accumulation of .snap files for\n> several Gb.\n> >\n> > ...\n> > -rw-------. 1 postgres postgres 16791226 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-F5000000.snap\n> > -rw-------. 1 postgres postgres 16973268 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-F6000000.snap\n> > -rw-------. 1 postgres postgres 16790984 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-F7000000.snap\n> > -rw-------. 1 postgres postgres 16988112 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-F8000000.snap\n> > -rw-------. 
1 postgres postgres 16864593 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-F9000000.snap\n> > -rw-------. 1 postgres postgres 16902167 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-FA000000.snap\n> > -rw-------. 1 postgres postgres 16914638 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-FB000000.snap\n> > -rw-------. 1 postgres postgres 16782471 Sep 20 15:26\n> xid-1238444701-lsn-2D2B-FC000000.snap\n> > -rw-------. 1 postgres postgres 16963667 Sep 20 15:27\n> xid-1238444701-lsn-2D2B-FD000000.snap\n> > ...\n> >\n>\n> Okay, still not sure why the publisher is not sending keep_alive\n> messages in between spilling such a big transaction. If you see, we\n> have logic in WalSndLoop() wherein each time after sending data we\n> check whether we need to send a keep-alive message via function\n> WalSndKeepaliveIfNecessary(). I think to debug this problem further\n> you need to add some logs in function WalSndKeepaliveIfNecessary() to\n> see why it is not sending keep_alive messages when all these files are\n> being created.\n>\n> Did you change the default value of\n> wal_sender_timeout/wal_receiver_timeout? What is the value of those\n> variables in your environment? Did you see the message \"terminating\n> walsender process due to replication timeout\" in your server logs?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 21 Sep 2021 10:22:32 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 1:52 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> If I understand, the instruction to send keep alive by the wal sender has not been reached in the for loop, for what reason?\n> ...\n> * Check for replication timeout. */\n> WalSndCheckTimeOut();\n>\n> /* Send keepalive if the time has come */\n> WalSndKeepaliveIfNecessary();\n> ...\n>\n\nAre you sure that these functions have not been called? Or the case is\nthat these are called but due to some reason the keep-alive is not\nsent? IIUC, these are called after processing each WAL record so not\nsure how is it possible in your case that these are not reached?\n\n> The data load is performed on a table which is not replicated, I do not understand why the whole transaction linked to an insert is copied to snap files given that table does not take part of the logical replication.\n>\n\nIt is because we don't know till the end of the transaction (where we\nstart sending the data) whether the table will be replicated or not. I\nthink specifically for this purpose the new 'streaming' feature\nintroduced in PG-14 will help us to avoid writing data of such tables\nto snap/spill files. See 'streaming' option in Create Subscription\ndocs [1].\n\n> We are going to do a test by modifying parameters wal_sender_timeout/wal_receiver_timeout from 1' to 5'. The problem is that these parameters are global and changing them will also impact the physical replication.\n>\n\nDo you mean you are planning to change from 1 minute to 5 minutes? 
I\nagree with the global nature of parameters and I think your approach\nto finding out the root cause is good here because otherwise, under\nsome similar or more heavy workload, it might lead to the same\nsituation.\n\n> Concerning the walsender timeout, when the worker is started again after a timeout, it will trigger a new walsender associated with it.\n>\n\nRight, I know that but I was curious to know if the walsender has\nexited before walreceiver.\n\n[1] - https://www.postgresql.org/docs/devel/sql-createsubscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Sep 2021 15:22:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "> IIUC, these are called after processing each WAL record so not\nsure how is it possible in your case that these are not reached?\n\nI don't know, as you say, to highlight the problem we would have to debug\nthe WalSndKeepaliveIfNecessary function\n\n> I was curious to know if the walsender has exited before walreceiver\n\nDuring the last tests we made we didn't observe any timeout of the wal\nsender process.\n\n> Do you mean you are planning to change from 1 minute to 5 minutes?\n\nWe set wal_sender_timeout/wal_receiver_timeout to 5' and launch new test.\nThe result is surprising and rather positive there is no timeout any more\nin the log and the 20Gb of snap files are removed in less than 5 minutes.\nHow to explain that behaviour, why the snap files are consumed suddenly so\nquickly.\nI choose the value arbitrarily for wal_sender_timeout/wal_receiver_timeout\nparameters, are theses values appropriate from your point of view?\n\nBest Regards\n\nFabrice\n\n\n\nOn Tue, Sep 21, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Tue, Sep 21, 2021 at 1:52 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > If I understand, the instruction to send keep alive by the wal sender\n> has not been reached in the for loop, for what reason?\n> > ...\n> > * Check for replication timeout. */\n> > WalSndCheckTimeOut();\n> >\n> > /* Send keepalive if the time has come */\n> > WalSndKeepaliveIfNecessary();\n> > ...\n> >\n>\n> Are you sure that these functions have not been called? Or the case is\n> that these are called but due to some reason the keep-alive is not\n> sent? 
IIUC, these are called after processing each WAL record so not\n> sure how is it possible in your case that these are not reached?\n>\n> > The data load is performed on a table which is not replicated, I do not\n> understand why the whole transaction linked to an insert is copied to snap\n> files given that table does not take part of the logical replication.\n> >\n>\n> It is because we don't know till the end of the transaction (where we\n> start sending the data) whether the table will be replicated or not. I\n> think specifically for this purpose the new 'streaming' feature\n> introduced in PG-14 will help us to avoid writing data of such tables\n> to snap/spill files. See 'streaming' option in Create Subscription\n> docs [1].\n>\n> > We are going to do a test by modifying parameters\n> wal_sender_timeout/wal_receiver_timeout from 1' to 5'. The problem is that\n> these parameters are global and changing them will also impact the physical\n> replication.\n> >\n>\n> Do you mean you are planning to change from 1 minute to 5 minutes? 
I\n> agree with the global nature of parameters and I think your approach\n> to finding out the root cause is good here because otherwise, under\n> some similar or more heavy workload, it might lead to the same\n> situation.\n>\n> > Concerning the walsender timeout, when the worker is started again after\n> a timeout, it will trigger a new walsender associated with it.\n> >\n>\n> Right, I know that but I was curious to know if the walsender has\n> exited before walreceiver.\n>\n> [1] - https://www.postgresql.org/docs/devel/sql-createsubscription.html\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 21 Sep 2021 17:41:50 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 9:12 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> > IIUC, these are called after processing each WAL record so not\n> sure how is it possible in your case that these are not reached?\n>\n> I don't know, as you say, to highlight the problem we would have to debug the WalSndKeepaliveIfNecessary function\n>\n> > I was curious to know if the walsender has exited before walreceiver\n>\n> During the last tests we made we didn't observe any timeout of the wal sender process.\n>\n> > Do you mean you are planning to change from 1 minute to 5 minutes?\n>\n> We set wal_sender_timeout/wal_receiver_timeout to 5' and launch new test. The result is surprising and rather positive there is no timeout any more in the log and the 20Gb of snap files are removed in less than 5 minutes.\n> How to explain that behaviour, why the snap files are consumed suddenly so quickly.\n>\n\nI think it is because we decide that the data in those snap files\ndoesn't need to be sent at xact end, so we remove them.\n\n> I choose the value arbitrarily for wal_sender_timeout/wal_receiver_timeout parameters, are theses values appropriate from your point of view?\n>\n\nIt is difficult to say what is the appropriate value for these\nparameters unless in some way we debug WalSndKeepaliveIfNecessary() to\nfind why it didn't send keep alive when it is expected. Would you be\nable to make code changes and test or if you want I can make changes\nand send the patch if you can test it? If not, is it possible that in\nsome way you send a reproducible test?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Sep 2021 14:32:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "If you would like I can test the patch you send to me.\n\nRegards\n\nFabrice\n\nOn Wed, Sep 22, 2021 at 11:02 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Tue, Sep 21, 2021 at 9:12 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > > IIUC, these are called after processing each WAL record so not\n> > sure how is it possible in your case that these are not reached?\n> >\n> > I don't know, as you say, to highlight the problem we would have to\n> debug the WalSndKeepaliveIfNecessary function\n> >\n> > > I was curious to know if the walsender has exited before walreceiver\n> >\n> > During the last tests we made we didn't observe any timeout of the wal\n> sender process.\n> >\n> > > Do you mean you are planning to change from 1 minute to 5 minutes?\n> >\n> > We set wal_sender_timeout/wal_receiver_timeout to 5' and launch new\n> test. The result is surprising and rather positive there is no timeout any\n> more in the log and the 20Gb of snap files are removed in less than 5\n> minutes.\n> > How to explain that behaviour, why the snap files are consumed suddenly\n> so quickly.\n> >\n>\n> I think it is because we decide that the data in those snap files\n> doesn't need to be sent at xact end, so we remove them.\n>\n> > I choose the value arbitrarily for\n> wal_sender_timeout/wal_receiver_timeout parameters, are theses values\n> appropriate from your point of view?\n> >\n>\n> It is difficult to say what is the appropriate value for these\n> parameters unless in some way we debug WalSndKeepaliveIfNecessary() to\n> find why it didn't send keep alive when it is expected. Would you be\n> able to make code changes and test or if you want I can make changes\n> and send the patch if you can test it? 
If not, is it possible that in\n> some way you send a reproducible test?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Wed, 22 Sep 2021 18:15:58 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 9:46 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> If you would like I can test the patch you send to me.\n>\n\nOkay, please find an attached patch for additional logs. I would like\nto see the logs during the time when walsender appears to be writing\nto files. We might need to add more logs to find the exact problem but\nlet's start with this.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 23 Sep 2021 19:20:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Thanks for your patch, we are going to set up a lab in order to debug the\nfunction.\nRegards\nFabrice\n\nOn Thu, Sep 23, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Sep 22, 2021 at 9:46 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > If you would like I can test the patch you send to me.\n> >\n>\n> Okay, please find an attached patch for additional logs. I would like\n> to see the logs during the time when walsender appears to be writing\n> to files. We might need to add more logs to find the exact problem but\n> let's start with this.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Thu, 23 Sep 2021 18:03:56 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Friday, September 24, 2021 12:04 AM, Fabrice Chapuis <fabrice636861@gmail.com<mailto:fabrice636861@gmail.com>> wrote:\r\n\r\n>\r\n\r\n> Thanks for your patch, we are going to set up a lab in order to debug the function.\r\n\r\n\r\n\r\nHi\r\n\r\n\r\n\r\nI tried to reproduce this timeout problem on version10.18 but failed.\r\n\r\nIn my trial, I inserted large amounts of data at publisher, which took more than 1 minute to replicate.\r\n\r\nAnd with the patch provided by Amit, I saw that the frequency of invoking\r\n\r\nWalSndKeepaliveIfNecessary function is raised after I inserted data.\r\n\r\n\r\n\r\nThe test script is attached. Maybe you can try it on your machine and check if this problem could happen.\r\n\r\nIf I miss something in the script, please let me know.\r\n\r\nOf course, it will be better if you can provide your script to reproduce the problem.\r\n\r\n\r\n\r\nRegards\r\n\r\nTang",
"msg_date": "Thu, 30 Sep 2021 01:15:18 +0000",
"msg_from": "=?utf-8?B?VGFuZywgSGFpeWluZy/llJAg5rW36Iux?= <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Thanks Tang for your script.\nOur debugging environment will be ready soon. I will test your script and\nwe will try to reproduce the problem by integrating the patch provided by\nAmit. As soon as I have results I will let you know.\n\nRegards\n\nFabrice\n\nOn Thu, Sep 30, 2021 at 3:15 AM Tang, Haiying/唐 海英 <tanghy.fnst@fujitsu.com>\nwrote:\n\n> On Friday, September 24, 2021 12:04 AM, Fabrice Chapuis <\n> fabrice636861@gmail.com> wrote:\n>\n> >\n>\n> > Thanks for your patch, we are going to set up a lab in order to debug\n> the function.\n>\n>\n>\n> Hi\n>\n>\n>\n> I tried to reproduce this timeout problem on version10.18 but failed.\n>\n> In my trial, I inserted large amounts of data at publisher, which took\n> more than 1 minute to replicate.\n>\n> And with the patch provided by Amit, I saw that the frequency of invoking\n>\n> WalSndKeepaliveIfNecessary function is raised after I inserted data.\n>\n>\n>\n> The test script is attached. Maybe you can try it on your machine and\n> check if this problem could happen.\n>\n> If I miss something in the script, please let me know.\n>\n> Of course, it will be better if you can provide your script to reproduce\n> the problem.\n>\n>\n>\n> Regards\n>\n> Tang\n>\n>\n",
"msg_date": "Fri, 8 Oct 2021 09:33:40 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hello,\nOur lab is ready now. Amit, I compile Postgres 10.18 with your patch.Tang,\nI used your script to configure logical replication between 2 databases and\nto generate 10 million entries in an unreplicated foo table. On a\nstandalone instance no error message appears in log.\nI activate the physical replication between 2 nodes, and I got following\nerror:\n\n2021-11-10 10:49:12.297 CET [12126] LOG: attempt to send keep alive message\n2021-11-10 10:49:12.297 CET [12126] STATEMENT: START_REPLICATION 0/3000000\nTIMELINE 1\n2021-11-10 10:49:15.127 CET [12064] FATAL: terminating logical replication\nworker due to administrator command\n2021-11-10 10:49:15.127 CET [12036] LOG: worker process: logical\nreplication worker for subscription 16413 (PID 12064) exited with exit code\n1\n2021-11-10 10:49:15.155 CET [12126] LOG: attempt to send keep alive message\n\nThis message look like strange because no admin command have been executed\nduring data load.\nI did not find any error related to the timeout.\nThe message coming from the modification made with the patch comes back all\nthe time: attempt to send keep alive message. But there is no \"sent keep\nalive message\".\n\nWhy logical replication worker exit when physical replication is configured?\n\nThanks for your help\n\nFabrice\n\n\n\nOn Fri, Oct 8, 2021 at 9:33 AM Fabrice Chapuis <fabrice636861@gmail.com>\nwrote:\n\n> Thanks Tang for your script.\n> Our debugging environment will be ready soon. I will test your script and\n> we will try to reproduce the problem by integrating the patch provided by\n> Amit. 
As soon as I have results I will let you know.\n>\n> Regards\n>\n> Fabrice\n>\n> On Thu, Sep 30, 2021 at 3:15 AM Tang, Haiying/唐 海英 <\n> tanghy.fnst@fujitsu.com> wrote:\n>\n>> On Friday, September 24, 2021 12:04 AM, Fabrice Chapuis <\n>> fabrice636861@gmail.com> wrote:\n>>\n>> >\n>>\n>> > Thanks for your patch, we are going to set up a lab in order to debug\n>> the function.\n>>\n>>\n>>\n>> Hi\n>>\n>>\n>>\n>> I tried to reproduce this timeout problem on version10.18 but failed.\n>>\n>> In my trial, I inserted large amounts of data at publisher, which took\n>> more than 1 minute to replicate.\n>>\n>> And with the patch provided by Amit, I saw that the frequency of invoking\n>>\n>> WalSndKeepaliveIfNecessary function is raised after I inserted data.\n>>\n>>\n>>\n>> The test script is attached. Maybe you can try it on your machine and\n>> check if this problem could happen.\n>>\n>> If I miss something in the script, please let me know.\n>>\n>> Of course, it will be better if you can provide your script to reproduce\n>> the problem.\n>>\n>>\n>>\n>> Regards\n>>\n>> Tang\n>>\n>>\n",
"msg_date": "Thu, 11 Nov 2021 18:44:51 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 11:15 PM Fabrice Chapuis\n<fabrice636861@gmail.com> wrote:\n>\n> Hello,\n> Our lab is ready now. Amit, I compile Postgres 10.18 with your patch.Tang, I used your script to configure logical replication between 2 databases and to generate 10 million entries in an unreplicated foo table. On a standalone instance no error message appears in log.\n> I activate the physical replication between 2 nodes, and I got following error:\n>\n> 2021-11-10 10:49:12.297 CET [12126] LOG: attempt to send keep alive message\n> 2021-11-10 10:49:12.297 CET [12126] STATEMENT: START_REPLICATION 0/3000000 TIMELINE 1\n> 2021-11-10 10:49:15.127 CET [12064] FATAL: terminating logical replication worker due to administrator command\n> 2021-11-10 10:49:15.127 CET [12036] LOG: worker process: logical replication worker for subscription 16413 (PID 12064) exited with exit code 1\n> 2021-11-10 10:49:15.155 CET [12126] LOG: attempt to send keep alive message\n>\n> This message look like strange because no admin command have been executed during data load.\n> I did not find any error related to the timeout.\n> The message coming from the modification made with the patch comes back all the time: attempt to send keep alive message. But there is no \"sent keep alive message\".\n>\n> Why logical replication worker exit when physical replication is configured?\n>\n\nI am also not sure why that happened may be due to\nmax_worker_processes reaching its limit. This can happen because it\nseems you configured both publisher and subscriber in the same\ncluster. Tang, did you also see the same problem?\n\nBTW, why are you bringing physical standby configuration into the\ntest? Does in your original setup where you observe the problem the\nphysical standbys were there?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Nov 2021 11:53:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Friday, November 12, 2021 2:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Thu, Nov 11, 2021 at 11:15 PM Fabrice Chapuis\r\n> <fabrice636861@gmail.com> wrote:\r\n> >\r\n> > Hello,\r\n> > Our lab is ready now. Amit, I compile Postgres 10.18 with your patch.Tang, I\r\n> used your script to configure logical replication between 2 databases and to\r\n> generate 10 million entries in an unreplicated foo table. On a standalone instance\r\n> no error message appears in log.\r\n> > I activate the physical replication between 2 nodes, and I got following error:\r\n> >\r\n> > 2021-11-10 10:49:12.297 CET [12126] LOG: attempt to send keep alive\r\n> message\r\n> > 2021-11-10 10:49:12.297 CET [12126] STATEMENT: START_REPLICATION\r\n> 0/3000000 TIMELINE 1\r\n> > 2021-11-10 10:49:15.127 CET [12064] FATAL: terminating logical replication\r\n> worker due to administrator command\r\n> > 2021-11-10 10:49:15.127 CET [12036] LOG: worker process: logical replication\r\n> worker for subscription 16413 (PID 12064) exited with exit code 1\r\n> > 2021-11-10 10:49:15.155 CET [12126] LOG: attempt to send keep alive\r\n> message\r\n> >\r\n> > This message look like strange because no admin command have been executed\r\n> during data load.\r\n> > I did not find any error related to the timeout.\r\n> > The message coming from the modification made with the patch comes back all\r\n> the time: attempt to send keep alive message. But there is no \"sent keep alive\r\n> message\".\r\n> >\r\n> > Why logical replication worker exit when physical replication is configured?\r\n> >\r\n> \r\n> I am also not sure why that happened may be due to\r\n> max_worker_processes reaching its limit. This can happen because it\r\n> seems you configured both publisher and subscriber in the same\r\n> cluster. 
Tang, did you also see the same problem?\r\n> \r\n\r\nNo.\r\nI used the default max_worker_processes value, ran logical replication and\r\nphysical replication at the same time. I also changed the data in table on\r\npublisher. But didn't see the same problem.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Fri, 12 Nov 2021 09:22:11 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "I made a mistake in the configuration of my test script, in fact I cannot\nreproduce the problem at the moment.\nYes, on the original environment there is physical replication, that's why\nfor the lab I configured 2 nodes with physical replication.\nI'll try new tests next week\nRegards\n\nOn Fri, Nov 12, 2021 at 7:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Nov 11, 2021 at 11:15 PM Fabrice Chapuis\n> <fabrice636861@gmail.com> wrote:\n> >\n> > Hello,\n> > Our lab is ready now. Amit, I compile Postgres 10.18 with your\n> patch.Tang, I used your script to configure logical replication between 2\n> databases and to generate 10 million entries in an unreplicated foo table.\n> On a standalone instance no error message appears in log.\n> > I activate the physical replication between 2 nodes, and I got following\n> error:\n> >\n> > 2021-11-10 10:49:12.297 CET [12126] LOG: attempt to send keep alive\n> message\n> > 2021-11-10 10:49:12.297 CET [12126] STATEMENT: START_REPLICATION\n> 0/3000000 TIMELINE 1\n> > 2021-11-10 10:49:15.127 CET [12064] FATAL: terminating logical\n> replication worker due to administrator command\n> > 2021-11-10 10:49:15.127 CET [12036] LOG: worker process: logical\n> replication worker for subscription 16413 (PID 12064) exited with exit code\n> 1\n> > 2021-11-10 10:49:15.155 CET [12126] LOG: attempt to send keep alive\n> message\n> >\n> > This message look like strange because no admin command have been\n> executed during data load.\n> > I did not find any error related to the timeout.\n> > The message coming from the modification made with the patch comes back\n> all the time: attempt to send keep alive message. But there is no \"sent\n> keep alive message\".\n> >\n> > Why logical replication worker exit when physical replication is\n> configured?\n> >\n>\n> I am also not sure why that happened may be due to\n> max_worker_processes reaching its limit. 
This can happen because it\n> seems you configured both publisher and subscriber in the same\n> cluster. Tang, did you also see the same problem?\n>\n> BTW, why are you bringing physical standby configuration into the\n> test? Does in your original setup where you observe the problem the\n> physical standbys were there?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n",
"msg_date": "Fri, 12 Nov 2021 17:57:41 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hello Amit,\n\nI was able to reproduce the timeout problem in the lab.\nAfter loading more than 20 millions of rows in a table which is not\nreplicated (insert command ends without error), errors related to logical\nreplication processes appear in the postgres log.\nApproximately every 5 minutes worker process is restarted. The snap files\nin the slot directory are still present. The replication system seems to be\nblocked. Why these snap files are not removed. What do they contain?\nI will recompile postgres with your patch to debug.\n\n2021-12-22 14:54:21.506 CET [64939] STATEMENT: START_REPLICATION SLOT\n\"sub008_s000000\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s000000\"')\n2021-12-22 15:01:20.908 CET [64938] ERROR: terminating logical replication\nworker due to timeout\n2021-12-22 15:01:20.911 CET [61827] LOG: worker process: logical\nreplication worker for subscription 26994 (PID 64938) exited with exit code\n1\n2021-12-22 15:01:20.923 CET [65037] LOG: logical replication apply worker\nfor subscription \"sub008_s000000\" has started\n2021-12-22 15:01:20.932 CET [65038] ERROR: replication slot\n\"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:20.932 CET [65038] STATEMENT: START_REPLICATION SLOT\n\"sub008_s000000\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s000000\"')\n2021-12-22 15:01:20.932 CET [65037] ERROR: could not start WAL streaming:\nERROR: replication slot \"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:20.933 CET [61827] LOG: worker process: logical\nreplication worker for subscription 26994 (PID 65037) exited with exit code\n1\n2021-12-22 15:01:25.944 CET [65039] LOG: logical replication apply worker\nfor subscription \"sub008_s000000\" has started\n2021-12-22 15:01:25.951 CET [65040] ERROR: replication slot\n\"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:25.951 CET [65040] STATEMENT: START_REPLICATION SLOT\n\"sub008_s000000\" LOGICAL 
17/27240748 (proto_version '1', publication_names\n'\"pub008_s000000\"')\n2021-12-22 15:01:25.951 CET [65039] ERROR: could not start WAL streaming:\nERROR: replication slot \"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:25.952 CET [61827] LOG: worker process: logical\nreplication worker for subscription 26994 (PID 65039) exited with exit code\n1\n2021-12-22 15:01:30.962 CET [65041] LOG: logical replication apply worker\nfor subscription \"sub008_s000000\" has started\n2021-12-22 15:01:30.970 CET [65042] ERROR: replication slot\n\"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:30.970 CET [65042] STATEMENT: START_REPLICATION SLOT\n\"sub008_s000000\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s000000\"')\n2021-12-22 15:01:30.970 CET [65041] ERROR: could not start WAL streaming:\nERROR: replication slot \"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:30.971 CET [61827] LOG: worker process: logical\nreplication worker for subscription 26994 (PID 65041) exited with exit code\n1\n2021-12-22 15:01:35.982 CET [65043] LOG: logical replication apply worker\nfor subscription \"sub008_s000000\" has started\n2021-12-22 15:01:35.990 CET [65044] ERROR: replication slot\n\"sub008_s000000\" is active for PID 64939\n2021-12-22 15:01:35.990 CET [65044] STATEMENT: START_REPLICATION SLOT\n\"sub008_s000000\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s000000\"')\n\n-rw-------. 1 postgres postgres 16270723 Dec 22 16:02\nxid-14312-lsn-23-99000000.snap\n-rw-------. 1 postgres postgres 16145717 Dec 22 16:02\nxid-14312-lsn-23-9A000000.snap\n-rw-------. 
1 postgres postgres 10889437 Dec 22 16:02\nxid-14312-lsn-23-9B000000.snap\n[postgres@s729058a debug]$ ls -ltr pg_replslot/sub008_s012a00/ | wc -l\n1420\n\nOn Fri, Nov 12, 2021 at 5:57 PM Fabrice Chapuis <fabrice636861@gmail.com>\nwrote:\n\n> I made a mistake in the configuration of my test script, in fact I cannot\n> reproduce the problem at the moment.\n> Yes, on the original environment there is physical replication, that's why\n> for the lab I configured 2 nodes with physical replication.\n> I'll try new tests next week\n> Regards\n>\n> On Fri, Nov 12, 2021 at 7:23 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Thu, Nov 11, 2021 at 11:15 PM Fabrice Chapuis\n>> <fabrice636861@gmail.com> wrote:\n>> >\n>> > Hello,\n>> > Our lab is ready now. Amit, I compile Postgres 10.18 with your\n>> patch.Tang, I used your script to configure logical replication between 2\n>> databases and to generate 10 million entries in an unreplicated foo table.\n>> On a standalone instance no error message appears in log.\n>> > I activate the physical replication between 2 nodes, and I got\n>> following error:\n>> >\n>> > 2021-11-10 10:49:12.297 CET [12126] LOG: attempt to send keep alive\n>> message\n>> > 2021-11-10 10:49:12.297 CET [12126] STATEMENT: START_REPLICATION\n>> 0/3000000 TIMELINE 1\n>> > 2021-11-10 10:49:15.127 CET [12064] FATAL: terminating logical\n>> replication worker due to administrator command\n>> > 2021-11-10 10:49:15.127 CET [12036] LOG: worker process: logical\n>> replication worker for subscription 16413 (PID 12064) exited with exit code\n>> 1\n>> > 2021-11-10 10:49:15.155 CET [12126] LOG: attempt to send keep alive\n>> message\n>> >\n>> > This message look like strange because no admin command have been\n>> executed during data load.\n>> > I did not find any error related to the timeout.\n>> > The message coming from the modification made with the patch comes back\n>> all the time: attempt to send keep alive message. 
But there is no \"sent\n>> keep alive message\".\n>> >\n>> > Why logical replication worker exit when physical replication is\n>> configured?\n>> >\n>>\n>> I am also not sure why that happened may be due to\n>> max_worker_processes reaching its limit. This can happen because it\n>> seems you configured both publisher and subscriber in the same\n>> cluster. Tang, did you also see the same problem?\n>>\n>> BTW, why are you bringing physical standby configuration into the\n>> test? Does in your original setup where you observe the problem the\n>> physical standbys were there?\n>>\n>> --\n>> With Regards,\n>> Amit Kapila.\n>>\n>\n",
"msg_date": "Wed, 22 Dec 2021 16:20:03 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Dec 22, 2021 at 8:50 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> Hello Amit,\n>\n> I was able to reproduce the timeout problem in the lab.\n> After loading more than 20 millions of rows in a table which is not replicated (insert command ends without error), errors related to logical replication processes appear in the postgres log.\n> Approximately every 5 minutes worker process is restarted. The snap files in the slot directory are still present. The replication system seems to be blocked. Why these snap files are not removed. What do they contain?\n>\n\nThese contain changes of insert. I think these are not removed for\nyour case as your long transaction is never finished. As mentioned\nearlier, for such cases, it is better to use 'streaming' feature\nreleased as part of PG-14 but anyway here we are trying to debug\ntimeout problem.\n\n> I will recompile postgres with your patch to debug.\n>\n\nOkay, that might help.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Dec 2021 16:21:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "I put the instance with high level debug mode.\nI try to do some log interpretation: After having finished writing the\nmodifications generated by the insert in the snap files,\nthen these files are read (restored). One minute after this work starts,\nthe worker process exit with an error code = 1.\nI see that keepalive messages were sent before the work process work leave.\n\n2021-12-28 10:50:01.894 CET [55792] LOCATION: WalSndKeepalive,\nwalsender.c:3365\n...\n2021-12-28 10:50:31.854 CET [55792] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: StartTransaction(1)\nname: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0\n2021-12-28 10:50:31.907 CET [55792] LOCATION: ShowTransactionStateRec,\nxact.c:5075\n2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: spill 2271 changes in\nXID 14312 to disk\n2021-12-28 10:50:31.907 CET [55792] LOCATION: ReorderBufferSerializeTXN,\nreorderbuffer.c:2245\n2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n*2021-12-28 10:50:32.110 CET [55792] DEBUG: 00000: restored 4096/22603999\nchanges from disk*\n2021-12-28 10:50:32.110 CET [55792] LOCATION: ReorderBufferIterTXNNext,\nreorderbuffer.c:1156\n2021-12-28 10:50:32.110 CET [55792] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2021-12-28 10:50:32.138 CET [55792] DEBUG: 00000: restored 4096/22603999\nchanges from disk\n...\n\n*2021-12-28 10:50:35.341 CET [55794] DEBUG: 00000: sending replication\nkeepalive2021-12-28 10:50:35.341 CET [55794] 
LOCATION: WalSndKeepalive,\nwalsender.c:3365*\n...\n*2021-12-28 10:51:31.995 CET [55791] ERROR: XX000: terminating logical\nreplication worker due to timeout*\n\n*2021-12-28 10:51:31.995 CET [55791] LOCATION: LogicalRepApplyLoop,\nworker.c:1267*\n\nCould this function in* Apply main loop* in worker.c help to find a\nsolution?\n\nrc = WaitLatchOrSocket(MyLatch,\nWL_SOCKET_READABLE | WL_LATCH_SET |\nWL_TIMEOUT | WL_POSTMASTER_DEATH,\nfd, wait_time,\nWAIT_EVENT_LOGICAL_APPLY_MAIN);\nThanks for your help\n\nFabrice\n\nOn Thu, Dec 23, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Wed, Dec 22, 2021 at 8:50 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > Hello Amit,\n> >\n> > I was able to reproduce the timeout problem in the lab.\n> > After loading more than 20 millions of rows in a table which is not\n> replicated (insert command ends without error), errors related to logical\n> replication processes appear in the postgres log.\n> > Approximately every 5 minutes worker process is restarted. The snap\n> files in the slot directory are still present. The replication system seems\n> to be blocked. Why these snap files are not removed. What do they contain?\n> >\n>\n> These contain changes of insert. I think these are not removed for\n> your case as your long transaction is never finished. As mentioned\n> earlier, for such cases, it is better to use 'streaming' feature\n> released as part of PG-14 but anyway here we are trying to debug\n> timeout problem.\n>\n> > I will recompile postgres with your patch to debug.\n> >\n>\n> Okay, that might help.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Wed, 29 Dec 2021 12:32:41 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Dec 29, 2021 at 5:02 PM Fabrice Chapuis <fabrice636861@gmail.com>\nwrote:\n\n> I put the instance with high level debug mode.\n> I try to do some log interpretation: After having finished writing the\n> modifications generated by the insert in the snap files,\n> then these files are read (restored). One minute after this work starts,\n> the worker process exit with an error code = 1.\n> I see that keepalive messages were sent before the work process work leave.\n>\n> 2021-12-28 10:50:01.894 CET [55792] LOCATION: WalSndKeepalive,\n> walsender.c:3365\n> ...\n> 2021-12-28 10:50:31.854 CET [55792] STATEMENT: START_REPLICATION SLOT\n> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n> '\"pub008_s012a00\"')\n> 2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: StartTransaction(1)\n> name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0\n> 2021-12-28 10:50:31.907 CET [55792] LOCATION: ShowTransactionStateRec,\n> xact.c:5075\n> 2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n> '\"pub008_s012a00\"')\n> 2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: spill 2271 changes in\n> XID 14312 to disk\n> 2021-12-28 10:50:31.907 CET [55792] LOCATION: ReorderBufferSerializeTXN,\n> reorderbuffer.c:2245\n> 2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n> '\"pub008_s012a00\"')\n> *2021-12-28 10:50:32.110 CET [55792] DEBUG: 00000: restored 4096/22603999\n> changes from disk*\n> 2021-12-28 10:50:32.110 CET [55792] LOCATION: ReorderBufferIterTXNNext,\n> reorderbuffer.c:1156\n> 2021-12-28 10:50:32.110 CET [55792] STATEMENT: START_REPLICATION SLOT\n> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n> '\"pub008_s012a00\"')\n> 2021-12-28 10:50:32.138 CET [55792] DEBUG: 00000: restored 
4096/22603999\n> changes from disk\n> ...\n>\n> *2021-12-28 10:50:35.341 CET [55794] DEBUG: 00000: sending replication\n> keepalive2021-12-28 10:50:35.341 CET [55794] LOCATION: WalSndKeepalive,\n> walsender.c:3365*\n> ...\n> *2021-12-28 10:51:31.995 CET [55791] ERROR: XX000: terminating logical\n> replication worker due to timeout*\n>\n> *2021-12-28 10:51:31.995 CET [55791] LOCATION: LogicalRepApplyLoop,\n> worker.c:1267*\n>\n>\nIt is still not clear to me why the problem happened? IIUC, after restoring\n4096 changes from snap files, we send them to the subscriber, and then\napply worker should apply those one by one. Now, is it taking one minute to\nrestore 4096 changes due to which apply worker is timed out?\n\nCould this function in* Apply main loop* in worker.c help to find a\n> solution?\n>\n> rc = WaitLatchOrSocket(MyLatch,\n> WL_SOCKET_READABLE | WL_LATCH_SET |\n> WL_TIMEOUT | WL_POSTMASTER_DEATH,\n> fd, wait_time,\n> WAIT_EVENT_LOGICAL_APPLY_MAIN);\n>\n\nCan you explain why you think this will help in solving your current\nproblem?\n\n--\nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 7 Jan 2022 15:56:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Can you explain why you think this will help in solving your current\nproblem?\n\nIndeed you are right, this function won't help; we have to look elsewhere.\n\nIt is still not clear to me why the problem happened? IIUC, after restoring\n4096 changes from snap files, we send them to the subscriber, and then\napply worker should apply those one by one. Now, is it taking one minute to\nrestore 4096 changes due to which apply worker is timed out?\n\nNow I can easily reproduce the problem.\nIn a first phase, snap files are generated and stored in pg_replslot. This\nprocess ends when 1420 files are present in pg_replslot (this is in relation\nwith statements that must be replayed from WAL). In the pg_stat_replication\nview, the state field is set to *catchup*.\nIn a 2nd phase, the snap files must be decoded. However, after one minute\n(wal_receiver_timeout parameter set to 1 minute) the worker process stops\nwith a timeout.\n\nI can put a debug point to check if a timeout is sent to the worker\nprocess. Do you have any other clue?\n\nThank you for your help\n\nFabrice\n\n\n\n\nOn Fri, Jan 7, 2022 at 11:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Dec 29, 2021 at 5:02 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n>\n>> I put the instance with high level debug mode.\n>> I try to do some log interpretation: After having finished writing the\n>> modifications generated by the insert in the snap files,\n>> then these files are read (restored). 
One minute after this work starts,\n>> the worker process exit with an error code = 1.\n>> I see that keepalive messages were sent before the work process work\n>> leave.\n>>\n>> 2021-12-28 10:50:01.894 CET [55792] LOCATION: WalSndKeepalive,\n>> walsender.c:3365\n>> ...\n>> 2021-12-28 10:50:31.854 CET [55792] STATEMENT: START_REPLICATION SLOT\n>> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n>> '\"pub008_s012a00\"')\n>> 2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: StartTransaction(1)\n>> name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0\n>> 2021-12-28 10:50:31.907 CET [55792] LOCATION: ShowTransactionStateRec,\n>> xact.c:5075\n>> 2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n>> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n>> '\"pub008_s012a00\"')\n>> 2021-12-28 10:50:31.907 CET [55792] DEBUG: 00000: spill 2271 changes in\n>> XID 14312 to disk\n>> 2021-12-28 10:50:31.907 CET [55792] LOCATION: ReorderBufferSerializeTXN,\n>> reorderbuffer.c:2245\n>> 2021-12-28 10:50:31.907 CET [55792] STATEMENT: START_REPLICATION SLOT\n>> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n>> '\"pub008_s012a00\"')\n>> *2021-12-28 10:50:32.110 CET [55792] DEBUG: 00000: restored\n>> 4096/22603999 changes from disk*\n>> 2021-12-28 10:50:32.110 CET [55792] LOCATION: ReorderBufferIterTXNNext,\n>> reorderbuffer.c:1156\n>> 2021-12-28 10:50:32.110 CET [55792] STATEMENT: START_REPLICATION SLOT\n>> \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n>> '\"pub008_s012a00\"')\n>> 2021-12-28 10:50:32.138 CET [55792] DEBUG: 00000: restored 4096/22603999\n>> changes from disk\n>> ...\n>>\n>> *2021-12-28 10:50:35.341 CET [55794] DEBUG: 00000: sending replication\n>> keepalive2021-12-28 10:50:35.341 CET [55794] LOCATION: WalSndKeepalive,\n>> walsender.c:3365*\n>> ...\n>> *2021-12-28 10:51:31.995 CET [55791] ERROR: XX000: terminating 
logical\n>> replication worker due to timeout*\n>>\n>> *2021-12-28 10:51:31.995 CET [55791] LOCATION: LogicalRepApplyLoop,\n>> worker.c:1267*\n>>\n>>\n> It is still not clear to me why the problem happened? IIUC, after\n> restoring 4096 changes from snap files, we send them to the subscriber, and\n> then apply worker should apply those one by one. Now, is it taking one\n> minute to restore 4096 changes due to which apply worker is timed out?\n>\n> Could this function in* Apply main loop* in worker.c help to find a\n>> solution?\n>>\n>> rc = WaitLatchOrSocket(MyLatch,\n>> WL_SOCKET_READABLE | WL_LATCH_SET |\n>> WL_TIMEOUT | WL_POSTMASTER_DEATH,\n>> fd, wait_time,\n>> WAIT_EVENT_LOGICAL_APPLY_MAIN);\n>>\n>\n> Can you explain why you think this will help in solving your current\n> problem?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 11 Jan 2022 15:43:29 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 8:13 PM Fabrice Chapuis <fabrice636861@gmail.com>\nwrote:\n\n> Can you explain why you think this will help in solving your current\n> problem?\n>\n> Indeed your are right this function won't help, we have to look elsewhere.\n>\n> It is still not clear to me why the problem happened? IIUC, after\n> restoring 4096 changes from snap files, we send them to the subscriber, and\n> then apply worker should apply those one by one. Now, is it taking one\n> minute to restore 4096 changes due to which apply worker is timed out?\n>\n> Now I can easily reproduce the problem.\n> In a first phase, snap files are generated and stored in pg_replslot.\n> This process end when1420 files are present in pg_replslots (this is in\n> relation with statements that must be replayed from WAL). In the\n> pg_stat_replication view, the state field is set to *catchup*.\n> In a 2nd phase, the snap files must be decoded. However after one minute\n> (wal_receiver_timeout parameter set to 1 minute) the worker process stop\n> with a timeout.\n>\n>\nWhat exactly do you mean by the first and second phase in the above\ndescription?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 12 Jan 2022 16:24:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "first phase: postgres reads WAL files and generates 1420 snap files.\nsecond phase: I guess, but on this point maybe you can clarify, postgres\nhas to decode the snap files and remove them if no statement must be\napplied on a replicated table.\nIt is from this point that the worker process exits after the 1 minute timeout.\n\nOn Wed, Jan 12, 2022 at 11:54 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Tue, Jan 11, 2022 at 8:13 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n>\n>> Can you explain why you think this will help in solving your current\n>> problem?\n>>\n>> Indeed your are right this function won't help, we have to look elsewhere.\n>>\n>> It is still not clear to me why the problem happened? IIUC, after\n>> restoring 4096 changes from snap files, we send them to the subscriber, and\n>> then apply worker should apply those one by one. Now, is it taking one\n>> minute to restore 4096 changes due to which apply worker is timed out?\n>>\n>> Now I can easily reproduce the problem.\n>> In a first phase, snap files are generated and stored in pg_replslot.\n>> This process end when1420 files are present in pg_replslots (this is in\n>> relation with statements that must be replayed from WAL). In the\n>> pg_stat_replication view, the state field is set to *catchup*.\n>> In a 2nd phase, the snap files must be decoded. 
However after one minute\n>> (wal_receiver_timeout parameter set to 1 minute) the worker process stop\n>> with a timeout.\n>>\n>>\n> What exactly do you mean by the first and second phase in the above\n> description?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Thu, 13 Jan 2022 11:13:02 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 3:43 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> first phase: postgres read WAL files and generate 1420 snap files.\n> second phase: I guess, but on this point maybe you can clarify, postgres has to decode the snap files and remove them if no statement must be applied on a replicated table.\n> It is from this point that the worker process exit after 1 minute timeout.\n>\n\nOkay, I think the problem could be that because we are skipping all\nthe changes of transaction there is no communication sent to the\nsubscriber and it eventually timed out. Actually, we try to send\nkeep-alive at transaction boundaries like when we call\npgoutput_commit_txn. The pgoutput_commit_txn will call\nOutputPluginWrite->WalSndWriteData. I think to tackle the problem we\nneed to try to send such keepalives via WalSndUpdateProgress and\ninvoke that in pgoutput_change when we skip sending the change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 13 Jan 2022 19:29:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "If I can follow you, I have to make the following changes:\n\n1. In walsender.c:\n\nstatic void\nWalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn,\nTransactionId xid)\n{\nstatic TimestampTz sendTime = 0;\nTimestampTz now = GetCurrentTimestamp();\n\n/* Keep the worker process alive */\nWalSndKeepalive(true);\n/*\n* Track lag no more than once per WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS to\n* avoid flooding the lag tracker when we commit frequently.\n*/\n#define WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS 1000\nif (!TimestampDifferenceExceeds(sendTime, now,\nWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS))\nreturn;\n\nLagTrackerWrite(lsn, now);\nsendTime = now;\n}\n\nI set the *requestReply* parameter to true, is that correct?\n\n2. In pgoutput.c\n\n/*\n * Sends the decoded DML over wire.\n *\n * This is called both in streaming and non-streaming modes.\n */\nstatic void\npgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\nRelation relation, ReorderBufferChange *change)\n{\nPGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\nMemoryContext old;\nRelationSyncEntry *relentry;\nTransactionId xid = InvalidTransactionId;\nRelation ancestor = NULL;\n\nWalSndUpdateProgress(ctx, txn->origin_lsn, change->txn->xid);\n\nif (!is_publishable_relation(relation))\nreturn;\n...\n\nMake a call to *WalSndUpdateProgress* in function *pgoutput_change*.\n\nFor info: the information in the log after reproducing the problem.\n\n2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary,\nwalsender.c:3389\n2022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2022-01-13 11:19:46.340 CET [82233] LOG: 00000: attempt to send keep alive\nmessage\n2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary,\nwalsender.c:3389\n2022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" 
LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2022-01-13 11:19:46.340 CET [82233] LOG: 00000: attempt to send keep alive\nmessage\n2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary,\nwalsender.c:3389\n2022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT\n\"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names\n'\"pub008_s012a00\"')\n2022-01-13 11:20:46.418 CET [82232] ERROR: XX000: terminating logical\nreplication worker due to timeout\n2022-01-13 11:20:46.418 CET [82232] LOCATION: LogicalRepApplyLoop,\nworker.c:1267\n2022-01-13 11:20:46.421 CET [82224] LOG: 00000: worker process: logical\nreplication worker for subscription 26994 (PID 82232) exited with exit code\n1\n2022-01-13 11:20:46.421 CET [82224] LOCATION: LogChildExit,\npostmaster.c:3625\n\nThanks a lot for your help.\n\nFabrice\n\nOn Thu, Jan 13, 2022 at 2:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Jan 13, 2022 at 3:43 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > first phase: postgres read WAL files and generate 1420 snap files.\n> > second phase: I guess, but on this point maybe you can clarify, postgres\n> has to decode the snap files and remove them if no statement must be\n> applied on a replicated table.\n> > It is from this point that the worker process exit after 1 minute\n> timeout.\n> >\n>\n> Okay, I think the problem could be that because we are skipping all\n> the changes of transaction there is no communication sent to the\n> subscriber and it eventually timed out. Actually, we try to send\n> keep-alive at transaction boundaries like when we call\n> pgoutput_commit_txn. The pgoutput_commit_txn will call\n> OutputPluginWrite->WalSndWriteData. 
I think to tackle the problem we\n> need to try to send such keepalives via WalSndUpdateProgress and\n> invoke that in pgoutput_change when we skip sending the change.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nIf I can follow you, I have to make the following changes:1. In walsender.c:static voidWalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid){\tstatic TimestampTz sendTime = 0;\tTimestampTz now = GetCurrentTimestamp(); \t/* Keep the worker process alive */\tWalSndKeepalive(true);/*\t * Track lag no more than once per WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS to\t * avoid flooding the lag tracker when we commit frequently.\t */#define WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS\t1000\tif (!TimestampDifferenceExceeds(sendTime, now,\t\t\t\t\t\t\t\t\tWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS))\t\treturn;\tLagTrackerWrite(lsn, now);\tsendTime = now;}I put requestReply parameter to true, is that correct?2. In pgoutput.c/* * Sends the decoded DML over wire. * * This is called both in streaming and non-streaming modes. 
*/static voidpgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\t\t\t\tRelation relation, ReorderBufferChange *change){\tPGOutputData *data = (PGOutputData *) ctx->output_plugin_private;\tMemoryContext old;\tRelationSyncEntry *relentry;\tTransactionId xid = InvalidTransactionId;\tRelation\tancestor = NULL; \tWalSndUpdateProgress(ctx, txn->origin_lsn, change->txn->xid);\tif (!is_publishable_relation(relation))\t\treturn;...Make a call to WalSndUpdateProgress in function pgoutput_change.For info: the information in the log after reproducing the problem.2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary, walsender.c:33892022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names '\"pub008_s012a00\"')2022-01-13 11:19:46.340 CET [82233] LOG: 00000: attempt to send keep alive message2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary, walsender.c:33892022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names '\"pub008_s012a00\"')2022-01-13 11:19:46.340 CET [82233] LOG: 00000: attempt to send keep alive message2022-01-13 11:19:46.340 CET [82233] LOCATION: WalSndKeepaliveIfNecessary, walsender.c:33892022-01-13 11:19:46.340 CET [82233] STATEMENT: START_REPLICATION SLOT \"sub008_s012a00\" LOGICAL 17/27240748 (proto_version '1', publication_names '\"pub008_s012a00\"')2022-01-13 11:20:46.418 CET [82232] ERROR: XX000: terminating logical replication worker due to timeout2022-01-13 11:20:46.418 CET [82232] LOCATION: LogicalRepApplyLoop, worker.c:12672022-01-13 11:20:46.421 CET [82224] LOG: 00000: worker process: logical replication worker for subscription 26994 (PID 82232) exited with exit code 12022-01-13 11:20:46.421 CET [82224] LOCATION: LogChildExit, postmaster.c:3625Thanks a lot for your help.FabriceOn Thu, Jan 13, 2022 at 2:59 PM Amit Kapila 
<amit.kapila16@gmail.com> wrote:\n\nOn Thu, Jan 13, 2022 at 3:43 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> first phase: postgres read WAL files and generate 1420 snap files.\n> second phase: I guess, but on this point maybe you can clarify, postgres has to decode the snap files and remove them if no statement must be applied on a replicated table.\n> It is from this point that the worker process exit after 1 minute timeout.\n>\n\nOkay, I think the problem could be that because we are skipping all\nthe changes of transaction there is no communication sent to the\nsubscriber and it eventually timed out. Actually, we try to send\nkeep-alive at transaction boundaries like when we call\npgoutput_commit_txn. The pgoutput_commit_txn will call\nOutputPluginWrite->WalSndWriteData. I think to tackle the problem we\nneed to try to send such keepalives via WalSndUpdateProgress and\ninvoke that in pgoutput_change when we skip sending the change.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 14 Jan 2022 11:17:07 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> If I can follow you, I have to make the following changes:\n>\n\nNo, not like that but we can try that way as well to see if that helps\nto avoid your problem. Am, I understanding correctly even after\nmodification, you are seeing the problem. Can you try by calling\nWalSndKeepaliveIfNecessary() instead of WalSndKeepalive()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 14 Jan 2022 17:32:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "if it takes little work for you, can you please send me a piece of code\nwith the change needed to do the test\n\nThanks\n\nRegards,\n\nFabrice\n\nOn Fri, Jan 14, 2022 at 1:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > If I can follow you, I have to make the following changes:\n> >\n>\n> No, not like that but we can try that way as well to see if that helps\n> to avoid your problem. Am, I understanding correctly even after\n> modification, you are seeing the problem. Can you try by calling\n> WalSndKeepaliveIfNecessary() instead of WalSndKeepalive()?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nif it takes little work for you, can you please send me a piece of code with the change needed to do the testThanks Regards,FabriceOn Fri, Jan 14, 2022 at 1:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> If I can follow you, I have to make the following changes:\n>\n\nNo, not like that but we can try that way as well to see if that helps\nto avoid your problem. Am, I understanding correctly even after\nmodification, you are seeing the problem. Can you try by calling\nWalSndKeepaliveIfNecessary() instead of WalSndKeepalive()?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 14 Jan 2022 16:12:13 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hello Amit,\n\n\n\n\n\n\n\nIf it takes little work for you, can you please send me a piece of code\nwith the change needed to do the test\n\nThanks\n\nRegards,\n\nFabrice\n\n\nOn Fri, Jan 14, 2022 at 1:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> >\n> > If I can follow you, I have to make the following changes:\n> >\n>\n> No, not like that but we can try that way as well to see if that helps\n> to avoid your problem. Am, I understanding correctly even after\n> modification, you are seeing the problem. Can you try by calling\n> WalSndKeepaliveIfNecessary() instead of WalSndKeepalive()?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nHello Amit,If it takes little work for you, can you please send me a piece of codewith the change needed to do the test\nThanks\nRegards,\nFabrice\nOn Fri, Jan 14, 2022 at 1:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Fri, Jan 14, 2022 at 3:47 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\n>\n> If I can follow you, I have to make the following changes:\n>\n\nNo, not like that but we can try that way as well to see if that helps\nto avoid your problem. Am, I understanding correctly even after\nmodification, you are seeing the problem. Can you try by calling\nWalSndKeepaliveIfNecessary() instead of WalSndKeepalive()?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 19 Jan 2022 14:53:16 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 9:53 PM Fabrice Chapuis fabrice636861@gmail.com<mailto:fabrice636861@gmail.com> wrote:\r\n> Hello Amit,\r\n> If it takes little work for you, can you please send me a piece of code\r\n> with the change needed to do the test\r\n\r\nI wrote a patch(Send-keepalive.patch, please refer to attachment) according to\r\nAmit's suggestions. But after I did some simple test about this patch by the\r\ntest script \"test.sh\"(please refer to attachment), I found the timeout problem\r\nhas not been fixed by this patch.\r\n\r\nSo I add some logs(please refer to Add-some-logs-to-debug.patch) to confirm newly\r\nadded WalSndKeepaliveIfNecessary() send keepalive message or not.\r\n\r\nAfter applying the Send-keepalive.patch and Add-some-logs-to-debug.patch, I\r\nfound that the added message \"send keep alive message\" was not printed in\r\npublisher-side log.\r\n\r\n[publisher-side log]:\r\n2022-01-20 15:21:50.057 CST [2400278] LOG: checkpoint complete: wrote 61 buffers (0.4%); 0 WAL file(s) added, 0 removed, 0 recycled; write=9.838 s, sync=0.720 s, total=10.559 s; sync files=4, longest=0.563 s, average=0.180 s; distance=538053 kB, estimate=543889 kB\r\n2022-01-20 15:21:50.977 CST [2400278] LOG: checkpoints are occurring too frequently (11 seconds apart)\r\n2022-01-20 15:21:50.977 CST [2400278] HINT: Consider increasing the configuration parameter \"max_wal_size\".\r\n2022-01-20 15:21:50.988 CST [2400278] LOG: checkpoint starting: wal\r\n2022-01-20 15:21:52.853 CST [2400404] LOG: begin load changes\r\n2022-01-20 15:21:52.853 CST [2400404] STATEMENT: START_REPLICATION SLOT \"sub\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub\"')\r\n2022-01-20 15:22:52.969 CST [2410649] ERROR: replication slot \"sub\" is active for PID 2400404\r\n2022-01-20 15:22:52.969 CST [2410649] STATEMENT: START_REPLICATION SLOT \"sub\" LOGICAL 0/0 (proto_version '3', publication_names '\"pub\"')\r\n2022-01-20 15:22:57.980 CST [2410657] ERROR: replication slot 
\"sub\" is active for PID 2400404\r\n\r\n[subscriber-side log]:\r\n2022-01-20 15:16:10.975 CST [2400335] LOG: checkpoint starting: time\r\n2022-01-20 15:16:16.052 CST [2400335] LOG: checkpoint complete: wrote 51 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=4.830 s, sync=0.135 s, total=5.078 s; sync files=39, longest=0.079 s, average=0.004 s; distance=149 kB, estimate=149 kB\r\n2022-01-20 15:22:52.738 CST [2400400] ERROR: terminating logical replication worker due to timeout\r\n2022-01-20 15:22:52.738 CST [2400332] LOG: background worker \"logical replication worker\" (PID 2400400) exited with exit code 1\r\n2022-01-20 15:22:52.740 CST [2410648] LOG: logical replication apply worker for subscription \"sub\" has started\r\n2022-01-20 15:22:52.969 CST [2410648] ERROR: could not start WAL streaming: ERROR: replication slot \"sub\" is active for PID 2400404\r\n2022-01-20 15:22:52.970 CST [2400332] LOG: background worker \"logical replication worker\" (PID 2410648) exited with exit code 1\r\n2022-01-20 15:22:57.977 CST [2410656] LOG: logical replication apply worker for subscription \"sub\" has started\r\n\r\nIt seems WalSndKeepaliveIfNecessary did not send keepalive message in testing. I\r\nam still doing some research about the cause.\r\n\r\nAttach the patches and test script mentioned above, in case someone wants to try.\r\nIf I miss something, please let me know.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Thu, 20 Jan 2022 09:05:09 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 2:35 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 9:53 PM Fabrice Chapuis fabrice636861@gmail.com wrote:\n>\n> > Hello Amit,\n>\n> > If it takes little work for you, can you please send me a piece of code\n>\n> > with the change needed to do the test\n>\n>\n>\n> I wrote a patch(Send-keepalive.patch, please refer to attachment) according to\n>\n> Amit's suggestions. But after I did some simple test about this patch by the\n>\n> test script \"test.sh\"(please refer to attachment), I found the timeout problem\n>\n> has not been fixed by this patch.\n>\n>\n>\n> So I add some logs(please refer to Add-some-logs-to-debug.patch) to confirm newly\n>\n> added WalSndKeepaliveIfNecessary() send keepalive message or not.\n>\n>\n>\n> After applying the Send-keepalive.patch and Add-some-logs-to-debug.patch, I\n>\n> found that the added message \"send keep alive message\" was not printed in\n>\n> publisher-side log.\n>\n\nIt might be not reaching the actual send_keep_alive logic in\nWalSndKeepaliveIfNecessary because of below code:\n{\n...\n/*\n* Don't send keepalive messages if timeouts are globally disabled or\n* we're doing something not partaking in timeouts.\n*/\nif (wal_sender_timeout <= 0 || last_reply_timestamp <= 0)\nreturn;\n..\n}\n\nI think you can add elog before the above return and before updating\nprogress in the below code:\ncase REORDER_BUFFER_CHANGE_INSERT:\n if (!relentry->pubactions.pubinsert)\n+ {\n+ OutputPluginUpdateProgress(ctx);\n return;\n\nThis will help us to rule out one possibility.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Jan 2022 18:48:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> It might be not reaching the actual send_keep_alive logic in\r\n> WalSndKeepaliveIfNecessary because of below code:\r\n> {\r\n> ...\r\n> /*\r\n> * Don't send keepalive messages if timeouts are globally disabled or\r\n> * we're doing something not partaking in timeouts.\r\n> */\r\n> if (wal_sender_timeout <= 0 || last_reply_timestamp <= 0) return; ..\r\n> }\r\n> \r\n> I think you can add elog before the above return and before updating progress\r\n> in the below code:\r\n> case REORDER_BUFFER_CHANGE_INSERT:\r\n> if (!relentry->pubactions.pubinsert)\r\n> + {\r\n> + OutputPluginUpdateProgress(ctx);\r\n> return;\r\n> \r\n> This will help us to rule out one possibility.\r\n\r\nThanks for your advices!\r\n\r\nAccording to your advices, I applied 0001,0002 and 0003 to run the test script.\r\nWhen subscriber timeout, I filter publisher-side log:\r\n$ grep \"before invoking update progress\" pub.log | wc -l\r\n60373557\r\n$ grep \"return because wal_sender_timeout or last_reply_timestamp\" pub.log | wc -l\r\n0\r\n$ grep \"return because waiting_for_ping_response\" pub.log | wc -l\r\n0\r\n\r\nBased on this result, I think function WalSndKeepaliveIfNecessary was invoked,\r\nbut function WalSndKeepalive was not invoked because (last_processing >=\r\nping_time) is false.\r\nSo I tried to see changes about last_processing and last_reply_timestamp\r\n(because ping_time is based on last_reply_timestamp).\r\n\r\nI found last_processing and last_reply_timestamp is set in function\r\nProcessRepliesIfAny.\r\nlast_processing is set to the time when function ProcessRepliesIfAny is\r\ninvoked.\r\nOnly when publisher receive a response from subscriber, last_reply_timestamp is\r\nset to last_processing and the flag waiting_for_ping_response is reset to\r\nfalse.\r\n\r\nWhen we are during the loop to skip all the changes of transaction, IIUC, we do\r\nnot invoke function ProcessRepliesIfAny. 
So I think last_processing and\r\nlast_reply_timestamp will not be changed in this loop.\r\nTherefore I think about our use case, we should modify the condition of\r\ninvoking WalSndKeepalive.(please refer to\r\n0004-Simple-modification-of-timing.patch, and note that this is only a patch\r\nfor testing).\r\nAt the same time I modify the input of WalSndKeepalive from true to false. This\r\nis because when input is true, waiting_for_ping_response is set to true in\r\nWalSndKeepalive. As mentioned above, ProcessRepliesIfAny is not invoked in the\r\nloop, so I think waiting_for_ping_response will not be reset to false and\r\nkeepalive messages will not be sent.\r\n\r\nI tested after applying patches(0001 and 0004), I found the timeout was not\r\nprinted in subscriber-side log. And the added messages \"begin load changes\" and\r\n\"commit the log\" were printed in publisher-side log:\r\n$ grep -ir \"begin load changes\" pub.log\r\n2022-01-21 11:17:06.934 CST [2577699] LOG: begin load changes\r\n$ grep -ir \"commit the log\" pub.log\r\n2022-01-21 11:21:15.564 CST [2577699] LOG: commit the log\r\n\r\nAttach the patches and test script mentioned above, in case someone wants to\r\ntry.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 21 Jan 2022 09:51:27 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Thanks for your patch, it also works well when executing our use case, the\ntimeout no longer appears in the logs. Is it necessary now to refine this\npatch and make as few changes as possible in order for it to be released?\n\nOn Fri, Jan 21, 2022 at 10:51 AM wangw.fnst@fujitsu.com <\nwangw.fnst@fujitsu.com> wrote:\n\n> On Thu, Jan 20, 2022 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > It might be not reaching the actual send_keep_alive logic in\n> > WalSndKeepaliveIfNecessary because of below code:\n> > {\n> > ...\n> > /*\n> > * Don't send keepalive messages if timeouts are globally disabled or\n> > * we're doing something not partaking in timeouts.\n> > */\n> > if (wal_sender_timeout <= 0 || last_reply_timestamp <= 0) return; ..\n> > }\n> >\n> > I think you can add elog before the above return and before updating\n> progress\n> > in the below code:\n> > case REORDER_BUFFER_CHANGE_INSERT:\n> > if (!relentry->pubactions.pubinsert)\n> > + {\n> > + OutputPluginUpdateProgress(ctx);\n> > return;\n> >\n> > This will help us to rule out one possibility.\n>\n> Thanks for your advices!\n>\n> According to your advices, I applied 0001,0002 and 0003 to run the test\n> script.\n> When subscriber timeout, I filter publisher-side log:\n> $ grep \"before invoking update progress\" pub.log | wc -l\n> 60373557\n> $ grep \"return because wal_sender_timeout or last_reply_timestamp\" pub.log\n> | wc -l\n> 0\n> $ grep \"return because waiting_for_ping_response\" pub.log | wc -l\n> 0\n>\n> Based on this result, I think function WalSndKeepaliveIfNecessary was\n> invoked,\n> but function WalSndKeepalive was not invoked because (last_processing >=\n> ping_time) is false.\n> So I tried to see changes about last_processing and last_reply_timestamp\n> (because ping_time is based on last_reply_timestamp).\n>\n> I found last_processing and last_reply_timestamp is set in function\n> ProcessRepliesIfAny.\n> last_processing is set to the time when function 
ProcessRepliesIfAny is\n> invoked.\n> Only when publisher receive a response from subscriber,\n> last_reply_timestamp is\n> set to last_processing and the flag waiting_for_ping_response is reset to\n> false.\n>\n> When we are during the loop to skip all the changes of transaction, IIUC,\n> we do\n> not invoke function ProcessRepliesIfAny. So I think last_processing and\n> last_reply_timestamp will not be changed in this loop.\n> Therefore I think about our use case, we should modify the condition of\n> invoking WalSndKeepalive.(please refer to\n> 0004-Simple-modification-of-timing.patch, and note that this is only a\n> patch\n> for testing).\n> At the same time I modify the input of WalSndKeepalive from true to\n> false. This\n> is because when input is true, waiting_for_ping_response is set to true in\n> WalSndKeepalive. As mentioned above, ProcessRepliesIfAny is not invoked\n> in the\n> loop, so I think waiting_for_ping_response will not be reset to false and\n> keepalive messages will not be sent.\n>\n> I tested after applying patches(0001 and 0004), I found the timeout was\n> not\n> printed in subscriber-side log. And the added messages \"begin load\n> changes\" and\n> \"commit the log\" were printed in publisher-side log:\n> $ grep -ir \"begin load changes\" pub.log\n> 2022-01-21 11:17:06.934 CST [2577699] LOG: begin load changes\n> $ grep -ir \"commit the log\" pub.log\n> 2022-01-21 11:21:15.564 CST [2577699] LOG: commit the log\n>\n> Attach the patches and test script mentioned above, in case someone wants\n> to\n> try.\n>\n> Regards,\n> Wang wei\n>\n",
"msg_date": "Fri, 21 Jan 2022 14:17:40 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "I keep your patch 0001 and I add these two calls in function\nWalSndUpdateProgress without modifying WalSndKeepaliveIfNecessary, it works\ntoo.\nWhat do your think of this patch?\n\nstatic void\nWalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn,\nTransactionId xid)\n{\n static TimestampTz sendTime = 0;\n TimestampTz now = GetCurrentTimestamp();\n\n ProcessRepliesIfAny();\n WalSndKeepaliveIfNecessary();\n\n\n\n /*\n * Track lag no more than once per\nWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS to\n * avoid flooding the lag tracker when we commit frequently.\n */\n...\nRegards\n\nFabrice\n\nOn Fri, Jan 21, 2022 at 2:17 PM Fabrice Chapuis <fabrice636861@gmail.com>\nwrote:\n\n> Thanks for your patch, it also works well when executing our use case, the\n> timeout no longer appears in the logs. Is it necessary now to refine this\n> patch and make as few changes as possible in order for it to be released?\n>\n> On Fri, Jan 21, 2022 at 10:51 AM wangw.fnst@fujitsu.com <\n> wangw.fnst@fujitsu.com> wrote:\n>\n>> On Thu, Jan 20, 2022 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> > It might be not reaching the actual send_keep_alive logic in\n>> > WalSndKeepaliveIfNecessary because of below code:\n>> > {\n>> > ...\n>> > /*\n>> > * Don't send keepalive messages if timeouts are globally disabled or\n>> > * we're doing something not partaking in timeouts.\n>> > */\n>> > if (wal_sender_timeout <= 0 || last_reply_timestamp <= 0) return; ..\n>> > }\n>> >\n>> > I think you can add elog before the above return and before updating\n>> progress\n>> > in the below code:\n>> > case REORDER_BUFFER_CHANGE_INSERT:\n>> > if (!relentry->pubactions.pubinsert)\n>> > + {\n>> > + OutputPluginUpdateProgress(ctx);\n>> > return;\n>> >\n>> > This will help us to rule out one possibility.\n>>\n>> Thanks for your advices!\n>>\n>> According to your advices, I applied 0001,0002 and 0003 to run the test\n>> script.\n>> When subscriber timeout, I filter 
publisher-side log:\n>> $ grep \"before invoking update progress\" pub.log | wc -l\n>> 60373557\n>> $ grep \"return because wal_sender_timeout or last_reply_timestamp\"\n>> pub.log | wc -l\n>> 0\n>> $ grep \"return because waiting_for_ping_response\" pub.log | wc -l\n>> 0\n>>\n>> Based on this result, I think function WalSndKeepaliveIfNecessary was\n>> invoked,\n>> but function WalSndKeepalive was not invoked because (last_processing >=\n>> ping_time) is false.\n>> So I tried to see changes about last_processing and last_reply_timestamp\n>> (because ping_time is based on last_reply_timestamp).\n>>\n>> I found last_processing and last_reply_timestamp is set in function\n>> ProcessRepliesIfAny.\n>> last_processing is set to the time when function ProcessRepliesIfAny is\n>> invoked.\n>> Only when publisher receive a response from subscriber,\n>> last_reply_timestamp is\n>> set to last_processing and the flag waiting_for_ping_response is reset to\n>> false.\n>>\n>> When we are during the loop to skip all the changes of transaction, IIUC,\n>> we do\n>> not invoke function ProcessRepliesIfAny. So I think last_processing and\n>> last_reply_timestamp will not be changed in this loop.\n>> Therefore I think about our use case, we should modify the condition of\n>> invoking WalSndKeepalive.(please refer to\n>> 0004-Simple-modification-of-timing.patch, and note that this is only a\n>> patch\n>> for testing).\n>> At the same time I modify the input of WalSndKeepalive from true to\n>> false. This\n>> is because when input is true, waiting_for_ping_response is set to true in\n>> WalSndKeepalive. As mentioned above, ProcessRepliesIfAny is not invoked\n>> in the\n>> loop, so I think waiting_for_ping_response will not be reset to false and\n>> keepalive messages will not be sent.\n>>\n>> I tested after applying patches(0001 and 0004), I found the timeout was\n>> not\n>> printed in subscriber-side log. 
And the added messages \"begin load\n>> changes\" and\n>> \"commit the log\" were printed in publisher-side log:\n>> $ grep -ir \"begin load changes\" pub.log\n>> 2022-01-21 11:17:06.934 CST [2577699] LOG: begin load changes\n>> $ grep -ir \"commit the log\" pub.log\n>> 2022-01-21 11:21:15.564 CST [2577699] LOG: commit the log\n>>\n>> Attach the patches and test script mentioned above, in case someone wants\n>> to\n>> try.\n>>\n>> Regards,\n>> Wang wei\n>>\n>\n",
"msg_date": "Fri, 21 Jan 2022 18:15:18 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 10:45 PM Fabrice Chapuis\n<fabrice636861@gmail.com> wrote:\n>\n> I keep your patch 0001 and I add these two calls in function WalSndUpdateProgress without modifying WalSndKeepaliveIfNecessary, it works too.\n> What do your think of this patch?\n>\n\nI think this will also work. Here, the point was to just check what is\nthe exact problem and the possible approach to solve it, the actual\npatch might be different from these ideas. So, let me try to summarize\nthe problem and the possible approach to solve it so that others can\nalso share their opinion.\n\nHere, the problem is that we don't send keep-alive messages for a long\ntime while processing large transactions during logical replication\nwhere we don't send any data of such transactions (say because the\ntable modified in the transaction is not published). We do try to send\nthe keep_alive if necessary at the end of the transaction (via\nWalSndWriteData()) but by that time the subscriber-side can timeout\nand exit.\n\nNow, one idea to solve this problem could be that whenever we skip\nsending any change we do try to update the plugin progress via\nOutputPluginUpdateProgress(for walsender, it will invoke\nWalSndUpdateProgress), and there it tries to process replies and send\nkeep_alive if necessary as we do when we send some data via\nOutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\ndon't know whether it is a good idea to invoke such a mechanism for\nevery change we skip to send or we should do it after we skip sending\nsome threshold of continuous changes. I think later would be\npreferred. Also, we might want to introduce a new parameter\nsend_keep_alive to this API so that there is flexibility to invoke\nthis mechanism as we don't need to invoke it while we are actually\nsending data and before that, we just update the progress via this\nAPI.\n\nThoughts?\n\nNote: I have added Simon and Petr J. 
to this thread as they introduced\nthe API OutputPluginUpdateProgress in commit 024711bb54 and know this\npart of code/design well but ideas suggestions from everyone are\nwelcome.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jan 2022 16:41:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 22, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Now, one idea to solve this problem could be that whenever we skip\r\n> sending any change we do try to update the plugin progress via\r\n> OutputPluginUpdateProgress(for walsender, it will invoke\r\n> WalSndUpdateProgress), and there it tries to process replies and send\r\n> keep_alive if necessary as we do when we send some data via\r\n> OutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\r\n> don't know whether it is a good idea to invoke such a mechanism for\r\n> every change we skip to send or we should do it after we skip sending\r\n> some threshold of continuous changes. I think later would be\r\n> preferred. Also, we might want to introduce a new parameter\r\n> send_keep_alive to this API so that there is flexibility to invoke\r\n> this mechanism as we don't need to invoke it while we are actually\r\n> sending data and before that, we just update the progress via this\r\n> API.\r\n\r\nI tried out the patch according to your advice.\r\nI found if I invoke ProcessRepliesIfAny and WalSndKeepaliveIfNecessary in\r\nfunction OutputPluginUpdateProgress, the running time of the newly added\r\nfunction OutputPluginUpdateProgress invoked in pgoutput_change brings notable\r\noverhead:\r\n--11.34%--pgoutput_change\r\n | \r\n |--8.94%--OutputPluginUpdateProgress\r\n | | \r\n | --8.70%--WalSndUpdateProgress\r\n | | \r\n | |--7.44%--ProcessRepliesIfAny\r\n\r\nSo I tried another way of sending keepalive message to the standby machine\r\nbased on the timeout without asking for a reply(see attachment), the running\r\ntime of the newly added function OutputPluginUpdateProgress invoked in\r\npgoutput_change also brings slight overhead:\r\n--3.63%--pgoutput_change\r\n | \r\n |--1.40%--get_rel_sync_entry\r\n | | \r\n | --1.14%--hash_search\r\n | \r\n --1.08%--OutputPluginUpdateProgress\r\n | \r\n --0.85%--WalSndUpdateProgress\r\n\r\nBased on above, I think the 
second idea that sending some threshold of\r\ncontinuous changes might be better, I will do some research about this\r\napproach.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 26 Jan 2022 03:37:28 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Thanks for your new fix, Wang.\n\nTimestampTz ping_time = TimestampTzPlusMilliseconds(sendTime,\nwal_sender_timeout / 2);\n\nShouldn't we use receiver_timeout in place of wal_sender_timeout because the\nproblem comes from the consumer?\n\nOn Wed, Jan 26, 2022 at 4:37 AM wangw.fnst@fujitsu.com <\nwangw.fnst@fujitsu.com> wrote:\n\n> On Thu, Jan 22, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > Now, one idea to solve this problem could be that whenever we skip\n> > sending any change we do try to update the plugin progress via\n> > OutputPluginUpdateProgress(for walsender, it will invoke\n> > WalSndUpdateProgress), and there it tries to process replies and send\n> > keep_alive if necessary as we do when we send some data via\n> > OutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\n> > don't know whether it is a good idea to invoke such a mechanism for\n> > every change we skip to send or we should do it after we skip sending\n> > some threshold of continuous changes. I think later would be\n> > preferred. 
Also, we might want to introduce a new parameter\n> > send_keep_alive to this API so that there is flexibility to invoke\n> > this mechanism as we don't need to invoke it while we are actually\n> > sending data and before that, we just update the progress via this\n> > API.\n>\n> I tried out the patch according to your advice.\n> I found if I invoke ProcessRepliesIfAny and WalSndKeepaliveIfNecessary in\n> function OutputPluginUpdateProgress, the running time of the newly added\n> function OutputPluginUpdateProgress invoked in pgoutput_change brings\n> notable\n> overhead:\n> --11.34%--pgoutput_change\n> |\n> |--8.94%--OutputPluginUpdateProgress\n> | |\n> | --8.70%--WalSndUpdateProgress\n> | |\n> | |--7.44%--ProcessRepliesIfAny\n>\n> So I tried another way of sending keepalive message to the standby machine\n> based on the timeout without asking for a reply(see attachment), the\n> running\n> time of the newly added function OutputPluginUpdateProgress invoked in\n> pgoutput_change also brings slight overhead:\n> --3.63%--pgoutput_change\n> |\n> |--1.40%--get_rel_sync_entry\n> | |\n> | --1.14%--hash_search\n> |\n> --1.08%--OutputPluginUpdateProgress\n> |\n> --0.85%--WalSndUpdateProgress\n>\n> Based on above, I think the second idea that sending some threshold of\n> continuous changes might be better, I will do some research about this\n> approach.\n>\n> Regards,\n> Wang wei\n>\n",
"msg_date": "Fri, 28 Jan 2022 12:35:30 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Sat, Jan 28, 2022 at 19:36 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\r\n> Shouldn't we use receiver_timeout in place of wal_sender_timeout because the\r\n> problem comes from the consumer?\r\nThanks for your review.\r\n\r\nIMO, because this is a bug fix on the publisher side, and the keepalive message\r\nis already sent based on wal_sender_timeout in the existing code, we should keep\r\nit consistent with the existing code.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 8 Feb 2022 02:59:31 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 11:37 AM I wrote:\r\n> On Sat, Jan 22, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > Now, one idea to solve this problem could be that whenever we skip\r\n> > sending any change we do try to update the plugin progress via\r\n> > OutputPluginUpdateProgress(for walsender, it will invoke\r\n> > WalSndUpdateProgress), and there it tries to process replies and send\r\n> > keep_alive if necessary as we do when we send some data via\r\n> > OutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\r\n> > don't know whether it is a good idea to invoke such a mechanism for\r\n> > every change we skip to send or we should do it after we skip sending\r\n> > some threshold of continuous changes. I think later would be\r\n> > preferred. Also, we might want to introduce a new parameter\r\n> > send_keep_alive to this API so that there is flexibility to invoke\r\n> > this mechanism as we don't need to invoke it while we are actually\r\n> > sending data and before that, we just update the progress via this\r\n> > API.\r\n> ......\r\n> Based on above, I think the second idea that sending some threshold of\r\n> continuous changes might be better, I will do some research about this\r\n> approach.\r\nBased on the second idea, I wrote a new patch(see attachment).\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 8 Feb 2022 02:59:34 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for making a patch.\r\nI applied your patch and confirmed that codes passed regression test.\r\nI put a short reviewing:\r\n\r\n```\r\n+\tstatic int skipped_changes_count = 0;\r\n+\t/*\r\n+\t * Conservatively, at least 150,000 changes can be skipped in 1s.\r\n+\t *\r\n+\t * Because we use half of wal_sender_timeout as the threshold, and the unit\r\n+\t * of wal_sender_timeout in process is ms, the final threshold is\r\n+\t * wal_sender_timeout * 75.\r\n+\t */\r\n+\tint skipped_changes_threshold = wal_sender_timeout * 75;\r\n```\r\n\r\nI'm not sure but could you tell me the background of this calculation? \r\nIs this assumption reasonable?\r\n\r\n```\r\n@@ -654,20 +663,62 @@ pgoutput_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n \t{\r\n \t\tcase REORDER_BUFFER_CHANGE_INSERT:\r\n \t\t\tif (!relentry->pubactions.pubinsert)\r\n+\t\t\t{\r\n+\t\t\t\tif (++skipped_changes_count >= skipped_changes_threshold)\r\n+\t\t\t\t{\r\n+\t\t\t\t\tOutputPluginUpdateProgress(ctx, true);\r\n+\r\n+\t\t\t\t\t/*\r\n+\t\t\t\t\t * After sending keepalive message, reset\r\n+\t\t\t\t\t * skipped_changes_count.\r\n+\t\t\t\t\t */\r\n+\t\t\t\t\tskipped_changes_count = 0;\r\n+\t\t\t\t}\r\n \t\t\t\treturn;\r\n+\t\t\t}\r\n \t\t\tbreak;\r\n```\r\n\r\nIs the if-statement needed? 
In the walsender case OutputPluginUpdateProgress() leads WalSndUpdateProgress(),\r\nand the function also has the threshold for ping-ing.\r\n\r\n```\r\nstatic void\r\n-WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid)\r\n+WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn, TransactionId xid, bool send_keep_alive)\r\n {\r\n-\tstatic TimestampTz sendTime = 0;\r\n+\tstatic TimestampTz trackTime = 0;\r\n \tTimestampTz now = GetCurrentTimestamp();\r\n \r\n+\tif (send_keep_alive)\r\n+\t{\r\n+\t\t/*\r\n+\t\t * If half of wal_sender_timeout has lapsed without send message standby,\r\n+\t\t * send a keep-alive message to the standby.\r\n+\t\t */\r\n+\t\tstatic TimestampTz sendTime = 0;\r\n+\t\tTimestampTz ping_time = TimestampTzPlusMilliseconds(sendTime,\r\n+\t\t\t\t\t\t\t\t\t\t\twal_sender_timeout / 2);\r\n+\t\tif (now >= ping_time)\r\n+\t\t{\r\n+\t\t\tWalSndKeepalive(false);\r\n+\r\n+\t\t\t/* Try to flush pending output to the client */\r\n+\t\t\tif (pq_flush_if_writable() != 0)\r\n+\t\t\t\tWalSndShutdown();\r\n+\t\t\tsendTime = now;\r\n+\t\t}\r\n+\t}\r\n+\r\n```\r\n\r\n* +1 about renaming to trackTime.\r\n* `/2` might be magic number. How about following? Renaming is very welcome:\r\n\r\n```\r\n+#define WALSND_LOGICAL_PING_FACTOR 0.5\r\n+ static TimestampTz sendTime = 0;\r\n+ TimestampTz ping_time = TimestampTzPlusMilliseconds(sendTime,\r\n+ wal_sender_timeout * WALSND_LOGICAL_PING_FACTOR)\r\n```\r\n\r\nCould you add a commitfest entry for cfbot?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 8 Feb 2022 09:18:28 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Thanks for your patch, it works well in my test lab.\nI added the definition *extern int wal_sender_timeout;* in the\n*output_plugin.h* file so that compilation works.\nI tested the patch for version 10 which is currently in production on our\nsystems.\nThe functions below are only in the master branch:\npgoutput_prepare_txn,\npgoutput_commit_prepared_txn,\npgoutput_rollback_prepared_txn,\npgoutput_stream_commit,\npgoutput_stream_prepare_txn\n\nWill the patch be proposed retroactively to versions 13-12-11-10?\n\nBest regards,\n\nFabrice\n\nOn Tue, Feb 8, 2022 at 3:59 AM wangw.fnst@fujitsu.com <\nwangw.fnst@fujitsu.com> wrote:\n\n> On Wed, Jan 26, 2022 at 11:37 AM I wrote:\n> > On Sat, Jan 22, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > Now, one idea to solve this problem could be that whenever we skip\n> > > sending any change we do try to update the plugin progress via\n> > > OutputPluginUpdateProgress(for walsender, it will invoke\n> > > WalSndUpdateProgress), and there it tries to process replies and send\n> > > keep_alive if necessary as we do when we send some data via\n> > > OutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\n> > > don't know whether it is a good idea to invoke such a mechanism for\n> > > every change we skip to send or we should do it after we skip sending\n> > > some threshold of continuous changes. I think later would be\n> > > preferred. 
Also, we might want to introduce a new parameter\n> > > send_keep_alive to this API so that there is flexibility to invoke\n> > > this mechanism as we don't need to invoke it while we are actually\n> > > sending data and before that, we just update the progress via this\n> > > API.\n> > ......\n> > Based on above, I think the second idea that sending some threshold of\n> > continuous changes might be better, I will do some research about this\n> > approach.\n> Based on the second idea, I wrote a new patch(see attachment).\n>\n> Regards,\n> Wang wei\n>\n",
"msg_date": "Wed, 9 Feb 2022 10:41:12 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tues, Feb 08, 2022 at 17:18 PM Kuroda, Hayato <kuroda.hayato@fujitsu.com> wrote:\r\n> I applied your patch and confirmed that codes passed regression test.\r\n> I put a short reviewing:\r\nThanks for your test and review.\r\n\r\n> ```\r\n> +\tstatic int skipped_changes_count = 0;\r\n> +\t/*\r\n> +\t * Conservatively, at least 150,000 changes can be skipped in 1s.\r\n> +\t *\r\n> +\t * Because we use half of wal_sender_timeout as the threshold, and\r\n> the unit\r\n> +\t * of wal_sender_timeout in process is ms, the final threshold is\r\n> +\t * wal_sender_timeout * 75.\r\n> +\t */\r\n> +\tint skipped_changes_threshold = wal_sender_timeout * 75;\r\n> ```\r\n> \r\n> I'm not sure but could you tell me the background of this calculation?\r\n> Is this assumption reasonable?\r\nAccording to our discussion, we need to send keepalive messages to subscriber\r\nwhen skipping changes.\r\nOne approach is that **for each skipped change**, we try to send keepalive\r\nmessage by calculating whether a timeout will occur based on the current time\r\nand the last time the keepalive was sent. But this will brings slight overhead.\r\nSo I want to try another approach: after **constantly skipping some changes**,\r\nwe try to send keepalive message by calculating whether a timeout will occur\r\nbased on the current time and the last time the keepalive was sent.\r\n\r\nIMO, we should send keepalive message after skipping a certain number of\r\nchanges constantly.\r\nAnd I want to calculate the threshold dynamically by using a fixed value to\r\navoid adding too much code.\r\nIn addition, different users have different machine performance, and users can\r\nmodify wal_sender_timeout, so the threshold should be dynamically calculated\r\naccording to wal_sender_timeout.\r\n\r\nBased on these, I have tested on machines with different configurations. 
I took\r\nthe test results on the machine with the lowest configuration.\r\n[results]\r\nThe number of changes that can be skipped per second : 537087 (Average)\r\nTo be safe, I set the value to 150000.\r\n(wal_sender_timeout / 2 / 1000 * 150000 = wal_sender_timeout * 75)\r\n\r\nThe spec of the test server to get the threshold is:\r\nCPU information : Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz\r\nMemory information : 816188 kB\r\n\r\n> ```\r\n> @@ -654,20 +663,62 @@ pgoutput_change(LogicalDecodingContext *ctx,\r\n> ReorderBufferTXN *txn,\r\n> \t{\r\n> \t\tcase REORDER_BUFFER_CHANGE_INSERT:\r\n> \t\t\tif (!relentry->pubactions.pubinsert)\r\n> +\t\t\t{\r\n> +\t\t\t\tif (++skipped_changes_count >=\r\n> skipped_changes_threshold)\r\n> +\t\t\t\t{\r\n> +\t\t\t\t\tOutputPluginUpdateProgress(ctx, true);\r\n> +\r\n> +\t\t\t\t\t/*\r\n> +\t\t\t\t\t * After sending keepalive message,\r\n> reset\r\n> +\t\t\t\t\t * skipped_changes_count.\r\n> +\t\t\t\t\t */\r\n> +\t\t\t\t\tskipped_changes_count = 0;\r\n> +\t\t\t\t}\r\n> \t\t\t\treturn;\r\n> +\t\t\t}\r\n> \t\t\tbreak;\r\n> ```\r\n> \r\n> Is the if-statement needed? In the walsender case\r\n> OutputPluginUpdateProgress() leads WalSndUpdateProgress(), and the\r\n> function also has the threshold for ping-ing.\r\nAs mentioned above, we need to skip some changes continuously before\r\ncalculating whether it will time out.\r\nIf there is no if-statement here, every time a change is skipped, the timeout\r\nwill be checked. 
This brings extra overhead.\r\n\r\n> ```\r\n> static void\r\n> -WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn,\r\n> TransactionId xid)\r\n> +WalSndUpdateProgress(LogicalDecodingContext *ctx, XLogRecPtr lsn,\r\n> +TransactionId xid, bool send_keep_alive)\r\n> {\r\n> -\tstatic TimestampTz sendTime = 0;\r\n> +\tstatic TimestampTz trackTime = 0;\r\n> \tTimestampTz now = GetCurrentTimestamp();\r\n> \r\n> +\tif (send_keep_alive)\r\n> +\t{\r\n> +\t\t/*\r\n> +\t\t * If half of wal_sender_timeout has lapsed without send\r\n> message standby,\r\n> +\t\t * send a keep-alive message to the standby.\r\n> +\t\t */\r\n> +\t\tstatic TimestampTz sendTime = 0;\r\n> +\t\tTimestampTz ping_time =\r\n> TimestampTzPlusMilliseconds(sendTime,\r\n> +\r\n> \twal_sender_timeout / 2);\r\n> +\t\tif (now >= ping_time)\r\n> +\t\t{\r\n> +\t\t\tWalSndKeepalive(false);\r\n> +\r\n> +\t\t\t/* Try to flush pending output to the client */\r\n> +\t\t\tif (pq_flush_if_writable() != 0)\r\n> +\t\t\t\tWalSndShutdown();\r\n> +\t\t\tsendTime = now;\r\n> +\t\t}\r\n> +\t}\r\n> +\r\n> ```\r\n> \r\n> * +1 about renaming to trackTime.\r\n> * `/2` might be magic number. How about following? Renaming is very welcome:\r\n> \r\n> ```\r\n> +#define WALSND_LOGICAL_PING_FACTOR 0.5\r\n> + static TimestampTz sendTime = 0;\r\n> + TimestampTz ping_time = TimestampTzPlusMilliseconds(sendTime,\r\n> +\r\n> +wal_sender_timeout * WALSND_LOGICAL_PING_FACTOR)\r\n> ```\r\nIn the existing code, similar operations on wal_sender_timeout use the style of\r\n(wal_sender_timeout / 2), e.g. function WalSndKeepaliveIfNecessary. So I think\r\nit should be consistent in this patch.\r\nBut I think it is better to use magic number too. Maybe we could improve it in\r\na new thread.\r\n\r\n> Could you add a commitfest entry for cfbot?\r\nThanks for the reminder, I will add it soon.\r\n\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 15 Feb 2022 05:18:20 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Feb 8, 2022 at 1:59 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 11:37 AM I wrote:\n> > On Sat, Jan 22, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Now, one idea to solve this problem could be that whenever we skip\n> > > sending any change we do try to update the plugin progress via\n> > > OutputPluginUpdateProgress(for walsender, it will invoke\n> > > WalSndUpdateProgress), and there it tries to process replies and send\n> > > keep_alive if necessary as we do when we send some data via\n> > > OutputPluginWrite(for walsender, it will invoke WalSndWriteData). I\n> > > don't know whether it is a good idea to invoke such a mechanism for\n> > > every change we skip to send or we should do it after we skip sending\n> > > some threshold of continuous changes. I think later would be\n> > > preferred. Also, we might want to introduce a new parameter\n> > > send_keep_alive to this API so that there is flexibility to invoke\n> > > this mechanism as we don't need to invoke it while we are actually\n> > > sending data and before that, we just update the progress via this\n> > > API.\n> > ......\n> > Based on above, I think the second idea that sending some threshold of\n> > continuous changes might be better, I will do some research about this\n> > approach.\n> Based on the second idea, I wrote a new patch(see attachment).\n\nHi Wang,\n\nSome comments:\n I see you only track skipped Inserts/Updates and Deletes. What about\nDDL operations that are skipped, what about truncate.\nWhat about changes made to unpublished tables? I wonder if you could\ncreate a test script that only did DDL operations\nand truncates, would this timeout happen?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 18 Feb 2022 13:50:42 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Feb 18, 2022 at 10:51 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> Some comments:\r\nThanks for your review.\r\n\r\n> I see you only track skipped Inserts/Updates and Deletes. What about\r\n> DDL operations that are skipped, what about truncate.\r\n> What about changes made to unpublished tables? I wonder if you could\r\n> create a test script that only did DDL operations\r\n> and truncates, would this timeout happen?\r\nAccording to your suggestion, I tested with DDL and truncate.\r\nWhile testing, I ran only 20,000 DDLs and 10,000 truncations in one\r\ntransaction.\r\nIf I set wal_sender_timeout and wal_receiver_timeout to 30s, it will time out.\r\nAnd if I use the default values, it will not time out.\r\nIMHO there should not be long transactions that only contain DDL and\r\ntruncation. I'm not quite sure, do we need to handle this kind of use case?\r\n\r\nAttach the test details.\r\n[publisher-side]\r\nconfigure:\r\n wal_sender_timeout = 30s or 60s\r\n wal_receiver_timeout = 30s or 60s\r\nsql:\r\n create table tbl (a int primary key, b text);\r\n create table tbl2 (a int primary key, b text);\r\n create publication pub for table tbl;\r\n\r\n[subscriber-side]\r\nconfigure:\r\n wal_sender_timeout = 30s or 60s\r\n wal_receiver_timeout = 30s or 60s\r\nsql:\r\n create table tbl (a int primary key, b text);\"\r\n create subscription sub connection 'dbname=postgres user=postgres' publication pub;\r\n\r\n[Execute sql in publisher-side]\r\nIn a transaction, execute the following SQL 10,000 times in a loop:\r\n alter table tbl2 rename column b to c;\r\n truncate table tbl2;\r\n alter table tbl2 rename column c to b;\r\n\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 22 Feb 2022 03:47:08 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Feb 22, 2022 at 9:17 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Feb 18, 2022 at 10:51 AM Ajin Cherian <itsajin@gmail.com> wrote:\n> > Some comments:\n> Thanks for your review.\n>\n> > I see you only track skipped Inserts/Updates and Deletes. What about\n> > DDL operations that are skipped, what about truncate.\n> > What about changes made to unpublished tables? I wonder if you could\n> > create a test script that only did DDL operations\n> > and truncates, would this timeout happen?\n> According to your suggestion, I tested with DDL and truncate.\n> While testing, I ran only 20,000 DDLs and 10,000 truncations in one\n> transaction.\n> If I set wal_sender_timeout and wal_receiver_timeout to 30s, it will time out.\n> And if I use the default values, it will not time out.\n> IMHO there should not be long transactions that only contain DDL and\n> truncation. I'm not quite sure, do we need to handle this kind of use case?\n>\n\nI think it is better to handle such cases as well and changes related\nto unpublished tables as well. BTW, it seems Kuroda-San has also given\nsome comments [1] which I am not sure are addressed.\n\nI think instead of keeping the skipping threshold w.r.t\nwal_sender_timeout, we can use some conservative number like 10000 or\nso which we are sure won't impact performance and won't lead to\ntimeouts.\n\n*\n+ /*\n+ * skipped_changes_count is reset when processing changes that do not need to\n+ * be skipped.\n+ */\n+ skipped_changes_count = 0\n\nWhen the skipped_changes_count is reset, the sendTime should also be\nreset. Can we reset it whenever the UpdateProgress function is called\nwith send_keep_alive as false?\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB5866BD2248EF82FF432FE599F52D9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 23 Feb 2022 14:25:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for explaining the background of the patch.\r\n\r\n> According to our discussion, we need to send keepalive messages to subscriber\r\n> when skipping changes.\r\n> One approach is that **for each skipped change**, we try to send keepalive\r\n> message by calculating whether a timeout will occur based on the current time\r\n> and the last time the keepalive was sent. But this will brings slight overhead.\r\n> So I want to try another approach: after **constantly skipping some changes**,\r\n> we try to send keepalive message by calculating whether a timeout will occur\r\n> based on the current time and the last time the keepalive was sent.\r\n\r\nYou mean that calls to system functions like GetCurrentTimestamp() should be reduced,\r\nright? I'm not sure how much it affects performance, but it seems reasonable.\r\n\r\n> IMO, we should send keepalive message after skipping a certain number of\r\n> changes constantly.\r\n> And I want to calculate the threshold dynamically by using a fixed value to\r\n> avoid adding too much code.\r\n> In addition, different users have different machine performance, and users can\r\n> modify wal_sender_timeout, so the threshold should be dynamically calculated\r\n> according to wal_sender_timeout.\r\n\r\nYour experiment seems reasonable, but the background cannot be understood from the\r\ncode comments. I prefer a static threshold because it's simpler, as Amit also\r\nsaid in the following:\r\n\r\nhttps://www.postgresql.org/message-id/CAA4eK1%2B-p_K_j%3DNiGGD6tCYXiJH0ypT4REX5PBKJ4AcUoF3gZQ%40mail.gmail.com\r\n\r\n> In the existing code, similar operations on wal_sender_timeout use the style of\r\n> (wal_sender_timeout / 2), e.g. function WalSndKeepaliveIfNecessary. So I think\r\n> it should be consistent in this patch.\r\n> But I think it is better to use magic number too. Maybe we could improve it in\r\n> a new thread.\r\n\r\nI confirmed the code and +1 to yours. 
We should treat it in another thread if needed.\r\n\r\nBTW, this patch cannot be applied to the current master.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 24 Feb 2022 08:06:29 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Feb 22, 2022 at 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your review.\r\n\r\n> On Tue, Feb 22, 2022 at 9:17 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Fri, Feb 18, 2022 at 10:51 AM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> > > Some comments:\r\n> > Thanks for your review.\r\n> >\r\n> > > I see you only track skipped Inserts/Updates and Deletes. What about\r\n> > > DDL operations that are skipped, what about truncate.\r\n> > > What about changes made to unpublished tables? I wonder if you could\r\n> > > create a test script that only did DDL operations\r\n> > > and truncates, would this timeout happen?\r\n> > According to your suggestion, I tested with DDL and truncate.\r\n> > While testing, I ran only 20,000 DDLs and 10,000 truncations in one\r\n> > transaction.\r\n> > If I set wal_sender_timeout and wal_receiver_timeout to 30s, it will time out.\r\n> > And if I use the default values, it will not time out.\r\n> > IMHO there should not be long transactions that only contain DDL and\r\n> > truncation. I'm not quite sure, do we need to handle this kind of use case?\r\n> >\r\n> \r\n> I think it is better to handle such cases as well and changes related\r\n> to unpublished tables as well. BTW, it seems Kuroda-San has also given\r\n> some comments [1] which I am not sure are addressed.\r\nAdd handling of related use cases.\r\n\r\n> I think instead of keeping the skipping threshold w.r.t\r\n> wal_sender_timeout, we can use some conservative number like 10000 or\r\n> so which we are sure won't impact performance and won't lead to\r\n> timeouts.\r\nYes, it would be better. Set the threshold conservatively to 10000.\r\n\r\n> *\r\n> + /*\r\n> + * skipped_changes_count is reset when processing changes that do not need\r\n> to\r\n> + * be skipped.\r\n> + */\r\n> + skipped_changes_count = 0\r\n> \r\n> When the skipped_changes_count is reset, the sendTime should also be\r\n> reset. 
Can we reset it whenever the UpdateProgress function is called\r\n> with send_keep_alive as false?\r\nFixed.\r\n\r\nAttached a new patch that addresses the following improvements suggested in the\r\ncomments so far:\r\n1. Consider other changes that need to be skipped (truncate, DDL and the function\r\ncall pg_logical_emit_message). [suggestion by Ajin, Amit]\r\n(Added a new function SendKeepaliveIfNecessary that tries to send a keepalive message.)\r\n2. Set the threshold conservatively to a static value of 10000. [suggestion by Amit, Kuroda-San]\r\n3. Reset sendTime in function WalSndUpdateProgress when send_keep_alive is\r\nfalse. [suggestion by Amit]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 28 Feb 2022 07:40:51 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thur, Feb 24, 2022 at 4:06 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\nThanks for your review.\r\n\r\n> > According to our discussion, we need to send keepalive messages to\r\n> > subscriber when skipping changes.\r\n> > One approach is that **for each skipped change**, we try to send\r\n> > keepalive message by calculating whether a timeout will occur based on\r\n> > the current time and the last time the keepalive was sent. But this will brings\r\n> slight overhead.\r\n> > So I want to try another approach: after **constantly skipping some\r\n> > changes**, we try to send keepalive message by calculating whether a\r\n> > timeout will occur based on the current time and the last time the keepalive\r\n> was sent.\r\n> \r\n> You meant that calling system calls like GetCurrentTimestamp() should be\r\n> reduced, right? I'm not sure how it affects but it seems reasonable.\r\nYes. There is no need to invoke frequently, and it will bring overhead.\r\n\r\n> > IMO, we should send keepalive message after skipping a certain number\r\n> > of changes constantly.\r\n> > And I want to calculate the threshold dynamically by using a fixed\r\n> > value to avoid adding too much code.\r\n> > In addition, different users have different machine performance, and\r\n> > users can modify wal_sender_timeout, so the threshold should be\r\n> > dynamically calculated according to wal_sender_timeout.\r\n> \r\n> Your experiment seems not bad, but the background cannot be understand\r\n> from code comments. I prefer a static threshold because it's more simple, which\r\n> as Amit said in the following too:\r\n> \r\n> https://www.postgresql.org/message-id/CAA4eK1%2B-\r\n> p_K_j%3DNiGGD6tCYXiJH0ypT4REX5PBKJ4AcUoF3gZQ%40mail.gmail.com\r\nYes, you are right. Fixed.\r\nAnd I set the threshold to 10000.\r\n\r\n> BTW, this patch cannot be applied to current master.\r\nThanks for reminder. 
I have rebased it.\r\nKindly have a look at the new patch shared in [1].\r\n\r\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275FEB9F83081F1C87539B99E019%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Mon, 28 Feb 2022 07:42:34 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\n> Attached a new patch that addresses following improvements I have got so far as\r\n> comments:\r\n> 1. Consider other changes that need to be skipped(truncate, DDL and function\r\n> calls pg_logical_emit_message). [suggestion by Ajin, Amit]\r\n> (Add a new function SendKeepaliveIfNecessary for trying to send keepalive\r\n> message.)\r\n> 2. Set the threshold conservatively to a static value of 10000.[suggestion by Amit,\r\n> Kuroda-San]\r\n> 3. Reset sendTime in function WalSndUpdateProgress when send_keep_alive is\r\n> false. [suggestion by Amit]\r\n\r\nThank you for providing a good patch! I'll check it in more detail later,\r\nbut it applies to my code and passes check-world.\r\nI have some minor comments:\r\n\r\n```\r\n+ * Try to send keepalive message\r\n```\r\n\r\nMaybe missing \"a\"?\r\n\r\n```\r\n+ /*\r\n+ * After continuously skipping SKIPPED_CHANGES_THRESHOLD changes, try to send a\r\n+ * keepalive message.\r\n+ */\r\n```\r\n\r\nThis comment does not follow the preferred style:\r\nhttps://www.postgresql.org/docs/devel/source-format.html\r\n\r\n```\r\n@@ -683,12 +683,12 @@ OutputPluginWrite(struct LogicalDecodingContext *ctx, bool last_write)\r\n * Update progress tracking (if supported).\r\n */\r\n void\r\n-OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx)\r\n+OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx, bool send_keep_alive)\r\n```\r\n\r\nThis function is no longer doing just tracking.\r\nCould you update the code comment above?\r\n\r\n```\r\n\tif (!is_publishable_relation(relation))\r\n\t\treturn;\r\n```\r\n\r\nI'm not sure, but it seems that the function exits immediately if the relation\r\nis a sequence, view, temporary table and so on. Is that OK? Can it never happen?\r\n\r\n```\r\n+ SendKeepaliveIfNecessary(ctx, false);\r\n```\r\n\r\nI think a comment is needed above this which clarifies the sending of a keepalive message.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 28 Feb 2022 10:58:15 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 6:58 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang,\r\n> \r\n> > Attached a new patch that addresses following improvements I have got\r\n> > so far as\r\n> > comments:\r\n> > 1. Consider other changes that need to be skipped(truncate, DDL and\r\n> > function calls pg_logical_emit_message). [suggestion by Ajin, Amit]\r\n> > (Add a new function SendKeepaliveIfNecessary for trying to send\r\n> > keepalive\r\n> > message.)\r\n> > 2. Set the threshold conservatively to a static value of\r\n> > 10000.[suggestion by Amit, Kuroda-San] 3. Reset sendTime in function\r\n> > WalSndUpdateProgress when send_keep_alive is false. [suggestion by\r\n> > Amit]\r\n> \r\n> Thank you for giving a good patch! I'll check more detail later, but it can be\r\n> applied my codes and passed check world.\r\n> I put some minor comments:\r\nThanks for your comments.\r\n\r\n> ```\r\n> + * Try to send keepalive message\r\n> ```\r\n> \r\n> Maybe missing \"a\"?\r\nFixed. Add missing \"a\".\r\n\r\n> ```\r\n> + /*\r\n> + * After continuously skipping SKIPPED_CHANGES_THRESHOLD changes, try\r\n> to send a\r\n> + * keepalive message.\r\n> + */\r\n> ```\r\n> \r\n> This comments does not follow preferred style:\r\n> https://www.postgresql.org/docs/devel/source-format.html\r\nFixed. Correct wrong comment style.\r\n\r\n> ```\r\n> @@ -683,12 +683,12 @@ OutputPluginWrite(struct LogicalDecodingContext *ctx,\r\n> bool last_write)\r\n> * Update progress tracking (if supported).\r\n> */\r\n> void\r\n> -OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx)\r\n> +OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx, bool\r\n> +send_keep_alive)\r\n> ```\r\n> \r\n> This function is no longer doing just tracking.\r\n> Could you update the code comment above?\r\nFixed. 
Update the comment above function OutputPluginUpdateProgress.\r\n\r\n> ```\r\n> \tif (!is_publishable_relation(relation))\r\n> \t\treturn;\r\n> ```\r\n> \r\n> I'm not sure but it seems that the function exits immediately if relation is a\r\n> sequence, view, temporary table and so on. Is it OK? Does it never happen?\r\nI did some checks to confirm this, and there are several\r\nsituations that can cause a timeout. For example, if I insert a lot of data into\r\ntable sql_features in a long transaction, the subscriber side will time out.\r\nAlthough I think users should not modify these tables arbitrarily, it could\r\nhappen. To be conservative, I think this use case should be addressed as well.\r\nFixed. Invoke function SendKeepaliveIfNecessary before returning.\r\n\r\n> ```\r\n> + SendKeepaliveIfNecessary(ctx, false);\r\n> ```\r\n> \r\n> I think a comment is needed above which clarifies sending a keepalive message.\r\nFixed. Before invoking function SendKeepaliveIfNecessary, add the corresponding\r\ncomment.\r\n\r\nAttached the new patch. [suggestion by Kuroda-San]\r\n1. Fix the typo.\r\n2. Improve comment style.\r\n3. Fix missing consideration.\r\n4. Add comments to clarify the above functions and function calls.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 2 Mar 2022 02:06:17 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 2, 2022 at 1:06 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n...\n> Attach the new patch. [suggestion by Kuroda-San]\n\nIt is difficult to read the thread and to keep track of who reviewed\nwhat, and which patch is the latest, etc., when every patch name is the same.\n\nCan you please introduce a version number for future patch attachments?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 3 Mar 2022 09:55:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\n> Attach the new patch. [suggestion by Kuroda-San]\r\n> 1. Fix the typo.\r\n> 2. Improve comment style.\r\n> 3. Fix missing consideration.\r\n> 4. Add comments to clarifies above functions and function calls.\r\n\r\nThank you for updating the patch! I confirmed they were fixed.\r\n\r\n```\r\n case REORDER_BUFFER_CHANGE_INVALIDATION:\r\n- /* Execute the invalidation messages locally */\r\n- ReorderBufferExecuteInvalidations(\r\n- change->data.inval.ninvalidations,\r\n- change->data.inval.invalidations);\r\n- break;\r\n+ {\r\n+ LogicalDecodingContext *ctx = rb->private_data;\r\n+\r\n+ Assert(!ctx->fast_forward);\r\n+\r\n+ /* Set output state. */\r\n+ ctx->accept_writes = true;\r\n+ ctx->write_xid = txn->xid;\r\n+ ctx->write_location = change->lsn;\r\n```\r\n\r\nSome codes were added in ReorderBufferProcessTXN() for treating DDL, \r\n\r\n\r\n\r\n\r\nI'm also happy if you give the version number :-).\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n> -----Original Message-----\r\n> From: Wang, Wei/王 威 <wangw.fnst@fujitsu.com>\r\n> Sent: Wednesday, March 2, 2022 11:06 AM\r\n> To: Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com>\r\n> Cc: Fabrice Chapuis <fabrice636861@gmail.com>; Simon Riggs\r\n> <simon.riggs@enterprisedb.com>; Petr Jelinek\r\n> <petr.jelinek@enterprisedb.com>; Tang, Haiying/唐 海英\r\n> <tanghy.fnst@fujitsu.com>; Amit Kapila <amit.kapila16@gmail.com>;\r\n> PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Ajin Cherian\r\n> <itsajin@gmail.com>\r\n> Subject: RE: Logical replication timeout problem\r\n> \r\n> On Mon, Feb 28, 2022 at 6:58 PM Kuroda, Hayato/黒田 隼人\r\n> <kuroda.hayato@fujitsu.com> wrote:\r\n> > Dear Wang,\r\n> >\r\n> > > Attached a new patch that addresses following improvements I have got\r\n> > > so far as\r\n> > > comments:\r\n> > > 1. Consider other changes that need to be skipped(truncate, DDL and\r\n> > > function calls pg_logical_emit_message). 
[suggestion by Ajin, Amit]\r\n> > > (Add a new function SendKeepaliveIfNecessary for trying to send\r\n> > > keepalive\r\n> > > message.)\r\n> > > 2. Set the threshold conservatively to a static value of\r\n> > > 10000.[suggestion by Amit, Kuroda-San] 3. Reset sendTime in function\r\n> > > WalSndUpdateProgress when send_keep_alive is false. [suggestion by\r\n> > > Amit]\r\n> >\r\n> > Thank you for giving a good patch! I'll check more detail later, but it can be\r\n> > applied my codes and passed check world.\r\n> > I put some minor comments:\r\n> Thanks for your comments.\r\n> \r\n> > ```\r\n> > + * Try to send keepalive message\r\n> > ```\r\n> >\r\n> > Maybe missing \"a\"?\r\n> Fixed. Add missing \"a\".\r\n> \r\n> > ```\r\n> > + /*\r\n> > + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\r\n> changes, try\r\n> > to send a\r\n> > + * keepalive message.\r\n> > + */\r\n> > ```\r\n> >\r\n> > This comments does not follow preferred style:\r\n> > https://www.postgresql.org/docs/devel/source-format.html\r\n> Fixed. Correct wrong comment style.\r\n> \r\n> > ```\r\n> > @@ -683,12 +683,12 @@ OutputPluginWrite(struct LogicalDecodingContext\r\n> *ctx,\r\n> > bool last_write)\r\n> > * Update progress tracking (if supported).\r\n> > */\r\n> > void\r\n> > -OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx)\r\n> > +OutputPluginUpdateProgress(struct LogicalDecodingContext *ctx, bool\r\n> > +send_keep_alive)\r\n> > ```\r\n> >\r\n> > This function is no longer doing just tracking.\r\n> > Could you update the code comment above?\r\n> Fixed. Update the comment above function OutputPluginUpdateProgress.\r\n> \r\n> > ```\r\n> > \tif (!is_publishable_relation(relation))\r\n> > \t\treturn;\r\n> > ```\r\n> >\r\n> > I'm not sure but it seems that the function exits immediately if relation is a\r\n> > sequence, view, temporary table and so on. Is it OK? Does it never happen?\r\n> I did some checks to confirm this. 
After my confirmation, there are several\r\n> situations that can cause a timeout. For example, if I insert many date into\r\n> table sql_features in a long transaction, subscriber-side will time out.\r\n> Although I think users should not modify these tables arbitrarily, it could\r\n> happen. To be conservative, I think this use case should be addressed as well.\r\n> Fixed. Invoke function SendKeepaliveIfNecessary before return.\r\n> \r\n> > ```\r\n> > + SendKeepaliveIfNecessary(ctx, false);\r\n> > ```\r\n> >\r\n> > I think a comment is needed above which clarifies sending a keepalive\r\n> message.\r\n> Fixed. Before invoking function SendKeepaliveIfNecessary, add the\r\n> corresponding\r\n> comment.\r\n> \r\n> Attach the new patch. [suggestion by Kuroda-San]\r\n> 1. Fix the typo.\r\n> 2. Improve comment style.\r\n> 3. Fix missing consideration.\r\n> 4. Add comments to clarifies above functions and function calls.\r\n> \r\n> Regards,\r\n> Wang wei\r\n",
"msg_date": "Thu, 3 Mar 2022 09:18:46 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\n> Some codes were added in ReorderBufferProcessTXN() for treating DDL,\r\n\r\nMy mailer went wrong, so I'll put my comments again. Sorry.\r\n\r\nSome code was added in ReorderBufferProcessTXN() for handling DDL,\r\nbut I doubt that updating accept_writes is needed.\r\nIn my understanding, the parameter is read by OutputPluginPrepareWrite() in order to align messages.\r\nThey should have a header - like 'w' - before their body. But here only a keepalive message is sent,\r\nwith no meaningful changes, so I think it might not be needed.\r\nI commented out the line and tested like you did [1], and no timeouts or errors were found.\r\nDo you have any reason for keeping it?\r\n\r\nhttps://www.postgresql.org/message-id/OS3PR01MB6275A95FD44DC6C46058EA389E3B9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 4 Mar 2022 08:25:42 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 4, 2022 at 4:26 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n>\r\nThanks for your test and comments.\r\n\r\n> Some codes were added in ReorderBufferProcessTXN() for treating DDL,\r\n> but I doubted updating accept_writes is needed.\r\n> IMU, the parameter is read by OutputPluginPrepareWrite() in order align\r\n> messages.\r\n> They should have a header - like 'w' - before their body. But here only a\r\n> keepalive message is sent,\r\n> no meaningful changes, so I think it might be not needed.\r\n> I commented out the line and tested like you did [1], and no timeout and errors\r\n> were found.\r\n> Do you have any reasons for that?\r\n> \r\n> https://www.postgresql.org/message-\r\n> id/OS3PR01MB6275A95FD44DC6C46058EA389E3B9%40OS3PR01MB6275.jpnprd0\r\n> 1.prod.outlook.com\r\nYes, you are right. We should not set accept_writes to true here.\r\nAnd I looked into the function WalSndUpdateProgress. I found function\r\nWalSndUpdateProgress try to record the time of some message(by function\r\nLagTrackerWrite) sent to subscriber, such as in function pgoutput_commit_txn.\r\nThen, when publisher receives the reply message from the subscriber(function\r\nProcessStandbyReplyMessage), publisher invokes LagTrackerRead to calculate the\r\ndelay time(refer to view pg_stat_replication).\r\nReferring to the purpose of LagTrackerWrite, I think it is no need to log time\r\nwhen sending keepalive messages here.\r\nSo when the parameter send_keep_alive of function WalSndUpdateProgress is true,\r\nskip the recording time.\r\n\r\n> I'm also happy if you give the version number :-).\r\nIntroduce version information, starting from version 1.\r\n\r\nAttach the new patch.\r\n1. Fix wrong variable setting and skip unnecessary time records.[suggestion by Kuroda-San and me.]\r\n2. Introduce version information.[suggestion by Peter, Kuroda-San]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 8 Mar 2022 01:25:13 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 12:25 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n> Attach the new patch.\n> 1. Fix wrong variable setting and skip unnecessary time records.[suggestion by Kuroda-San and me.]\n> 2. Introduce version information.[suggestion by Peter, Kuroda-San]\n>\n> Regards,\n> Wang wei\n\nSome comments.\n\n1. The comment on top of SendKeepaliveIfNecessary\n\n    Try to send a keepalive message if too many changes was skipped.\n\nchange to\n\nTry to send a keepalive message if too many changes were skipped.\n\n2. In pgoutput_change:\n\n+ /* Reset the counter for skipped changes. */\n+ SendKeepaliveIfNecessary(ctx, false);\n+\n\nThis reset is called too early; this function might go on to skip\nchanges because of the row filter, so the\nreset fits better once we know for sure that a change is sent out. You\nwill also need to send a keepalive\nwhen the change is skipped due to the row filter.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 8 Mar 2022 14:53:43 +1100",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 8, 2022 at 10:25 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Mar 4, 2022 at 4:26 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> >\n> Thanks for your test and comments.\n>\n> > Some codes were added in ReorderBufferProcessTXN() for treating DDL,\n> > but I doubted updating accept_writes is needed.\n> > IMU, the parameter is read by OutputPluginPrepareWrite() in order align\n> > messages.\n> > They should have a header - like 'w' - before their body. But here only a\n> > keepalive message is sent,\n> > no meaningful changes, so I think it might be not needed.\n> > I commented out the line and tested like you did [1], and no timeout and errors\n> > were found.\n> > Do you have any reasons for that?\n> >\n> > https://www.postgresql.org/message-\n> > id/OS3PR01MB6275A95FD44DC6C46058EA389E3B9%40OS3PR01MB6275.jpnprd0\n> > 1.prod.outlook.com\n> Yes, you are right. We should not set accept_writes to true here.\n> And I looked into the function WalSndUpdateProgress. I found function\n> WalSndUpdateProgress try to record the time of some message(by function\n> LagTrackerWrite) sent to subscriber, such as in function pgoutput_commit_txn.\n> Then, when publisher receives the reply message from the subscriber(function\n> ProcessStandbyReplyMessage), publisher invokes LagTrackerRead to calculate the\n> delay time(refer to view pg_stat_replication).\n> Referring to the purpose of LagTrackerWrite, I think it is no need to log time\n> when sending keepalive messages here.\n> So when the parameter send_keep_alive of function WalSndUpdateProgress is true,\n> skip the recording time.\n>\n> > I'm also happy if you give the version number :-).\n> Introduce version information, starting from version 1.\n>\n> Attach the new patch.\n> 1. Fix wrong variable setting and skip unnecessary time records.[suggestion by Kuroda-San and me.]\n> 2. 
Introduce version information.[suggestion by Peter, Kuroda-San]\n\nI've looked at the patch and have a question:\n\n+void\n+SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped)\n+{\n+ static int skipped_changes_count = 0;\n+\n+ /*\n+ * skipped_changes_count is reset when processing changes that do not\n+ * need to be skipped.\n+ */\n+ if (!skipped)\n+ {\n+ skipped_changes_count = 0;\n+ return;\n+ }\n+\n+ /*\n+ * After continuously skipping SKIPPED_CHANGES_THRESHOLD\nchanges, try to send a\n+ * keepalive message.\n+ */\n+ #define SKIPPED_CHANGES_THRESHOLD 10000\n+\n+ if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\n+ {\n+ /* Try to send a keepalive message. */\n+ OutputPluginUpdateProgress(ctx, true);\n+\n+ /* After trying to send a keepalive message, reset the flag. */\n+ skipped_changes_count = 0;\n+ }\n+}\n\nSince we send a keepalive after continuously skipping 10000 changes,\nthe originally reported issue can still occur if skipping 10000\nchanges took more than the timeout and the walsender didn't send any\nchange while that, is that right?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 8 Mar 2022 16:52:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for updating the patch! Good self-review.\r\n\r\n> And I looked into the function WalSndUpdateProgress. I found function\r\n> WalSndUpdateProgress try to record the time of some message(by function\r\n> LagTrackerWrite) sent to subscriber, such as in function pgoutput_commit_txn.\r\n\r\nYeah, I think you are right.\r\n\r\n> Then, when publisher receives the reply message from the subscriber(function\r\n> ProcessStandbyReplyMessage), publisher invokes LagTrackerRead to calculate\r\n> the\r\n> delay time(refer to view pg_stat_replication).\r\n> Referring to the purpose of LagTrackerWrite, I think it is no need to log time\r\n> when sending keepalive messages here.\r\n> So when the parameter send_keep_alive of function WalSndUpdateProgress is\r\n> true,\r\n> skip the recording time.\r\n\r\nI also read that code. LagTracker records the elapsed time between sending a commit\r\nfrom the publisher and receiving the reply from the subscriber, right? It seems good.\r\n\r\nDo we need to add a test for this? I think it can be added to 100_bugs.pl.\r\nActually, I tried to write a PoC, but I have not finished implementing it.\r\nI'll send it when it is done.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 8 Mar 2022 08:47:32 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tues, Mar 8, 2022 at 11:54 PM Ajin Cherian <itsajin@gmail.com> wrote:\r\n> Some comments.\r\nThanks for your comments.\r\n\r\n> 1. The comment on top of SendKeepaliveIfNecessary\r\n> \r\n> Try to send a keepalive message if too many changes was skipped.\r\n> \r\n> change to\r\n> \r\n> Try to send a keepalive message if too many changes wer skipped.\r\nFixed. Change 'was' to 'were'.\r\n\r\n> 2. In pgoutput_change:\r\n> \r\n> + /* Reset the counter for skipped changes. */\r\n> + SendKeepaliveIfNecessary(ctx, false);\r\n> +\r\n> \r\n> This reset is called too early, this function might go on to skip\r\n> changes because of the row filter, so this\r\n> reset fits better once we know for sure that a change is sent out. You\r\n> will also need to send keep alive\r\n> when the change is skipped due to the row filter.\r\nFixed. Add a flag 'is_send' to record whether the change is sent, then reset\r\nthe counter or try to send a keepalive message based on the flag 'is_send'.\r\n\r\nAttach the new patch.\r\n1. Fix typo in comment on top of SendKeepaliveIfNecessary.[suggestion by Ajin.]\r\n2. Add handling of cases filtered out by row filter.[suggestion by Ajin.]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 9 Mar 2022 02:25:15 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've looked at the patch and have a question:\r\nThanks for your review and comments.\r\n\r\n> +void\r\n> +SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped) {\r\n> + static int skipped_changes_count = 0;\r\n> +\r\n> + /*\r\n> + * skipped_changes_count is reset when processing changes that do not\r\n> + * need to be skipped.\r\n> + */\r\n> + if (!skipped)\r\n> + {\r\n> + skipped_changes_count = 0;\r\n> + return;\r\n> + }\r\n> +\r\n> + /*\r\n> + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\r\n> changes, try to send a\r\n> + * keepalive message.\r\n> + */\r\n> + #define SKIPPED_CHANGES_THRESHOLD 10000\r\n> +\r\n> + if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\r\n> + {\r\n> + /* Try to send a keepalive message. */\r\n> + OutputPluginUpdateProgress(ctx, true);\r\n> +\r\n> + /* After trying to send a keepalive message, reset the flag. */\r\n> + skipped_changes_count = 0;\r\n> + }\r\n> +}\r\n> \r\n> Since we send a keepalive after continuously skipping 10000 changes, the\r\n> originally reported issue can still occur if skipping 10000 changes took more than\r\n> the timeout and the walsender didn't send any change while that, is that right?\r\nYes, theoretically so.\r\nBut after testing, I think this value should be conservative enough not to reproduce\r\nthis bug.\r\nAfter the previous discussion[1], it is currently considered that it is better\r\nto directly set a conservative threshold than to calculate the threshold based\r\non wal_sender_timeout.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275FEB9F83081F1C87539B99E019%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Wed, 9 Mar 2022 02:26:14 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Mar 8, 2022 at 4:48 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Thank you for updating the patch! Good self-reviewing.\r\nThanks for your comments.\r\n\r\n> > And I looked into the function WalSndUpdateProgress. I found function\r\n> > WalSndUpdateProgress try to record the time of some message(by\r\n> > function\r\n> > LagTrackerWrite) sent to subscriber, such as in function pgoutput_commit_txn.\r\n> \r\n> Yeah, I think you are right.\r\n> \r\n> > Then, when publisher receives the reply message from the\r\n> > subscriber(function ProcessStandbyReplyMessage), publisher invokes\r\n> > LagTrackerRead to calculate the delay time(refer to view\r\n> > pg_stat_replication).\r\n> > Referring to the purpose of LagTrackerWrite, I think it is no need to\r\n> > log time when sending keepalive messages here.\r\n> > So when the parameter send_keep_alive of function WalSndUpdateProgress\r\n> > is true, skip the recording time.\r\n> \r\n> I also read them. LagTracker records the elapsed time between sending commit\r\n> from publisher and receiving reply from subscriber, right? It seems good.\r\nYes.\r\n\r\n> Do we need adding a test for them? I think it can be added to 100_bugs.pl.\r\n> Actually I tried to send PoC, but it does not finish to implement that.\r\n> I'll send if it is done.\r\nI'm not sure if it is worth it.\r\nA test that reproduces this bug might take some time and might risk\r\nslowing down the buildfarm, so I am not sure whether others would want\r\nsuch a test.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Wed, 9 Mar 2022 02:27:35 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang,\r\n\r\nThank you for updating!\r\n\r\n> > Do we need adding a test for them? I think it can be added to 100_bugs.pl.\r\n> > Actually I tried to send PoC, but it does not finish to implement that.\r\n> > I'll send if it is done.\r\n> I'm not sure if it is worth it.\r\n> Because the reproduced test of this bug might take some time and might risk\r\n> making the build farm slow, so I am not sure if others would like the\r\n> reproduced test of this bug.\r\n\r\nI understand from your reply that it may be difficult to stabilize and\r\nminimize the test. I withdraw the above.\r\nI have some comments for v2, mainly cosmetic ones.\r\n\r\n1. pgoutput_change\r\n```\r\n+ bool is_send = true;\r\n```\r\n\r\nMy first impression is that is_send should be initialized to false,\r\nand it should change to true when OutputPluginWrite() is called.\r\n\r\n\r\n2. pgoutput_change\r\n```\r\n+ {\r\n+ is_send = false;\r\n+ break;\r\n+ }\r\n```\r\n\r\nThere are too many indents here; I think they should be removed.\r\nSee the above comment.\r\n\r\n3. WalSndUpdateProgress\r\n```\r\n+ /*\r\n+ * If half of wal_sender_timeout has lapsed without send message standby,\r\n+ * send a keep-alive message to the standby.\r\n+ */\r\n```\r\n\r\nThe comment seems inconsistent with the others.\r\nHere it is \"keep-alive\", but other parts use \"keepalive\".\r\n\r\n4. ReorderBufferProcessTXN\r\n```\r\n+ change->data.inval.ninvalidations,\r\n+ change->data.inval.invalidations);\r\n```\r\n\r\nMaybe these lines break the 80-column rule.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 9 Mar 2022 03:52:03 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 9, 2022 at 11:26 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tue, Mar 8, 2022 at 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've looked at the patch and have a question:\n> Thanks for your review and comments.\n>\n> > +void\n> > +SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped) {\n> > + static int skipped_changes_count = 0;\n> > +\n> > + /*\n> > + * skipped_changes_count is reset when processing changes that do not\n> > + * need to be skipped.\n> > + */\n> > + if (!skipped)\n> > + {\n> > + skipped_changes_count = 0;\n> > + return;\n> > + }\n> > +\n> > + /*\n> > + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\n> > changes, try to send a\n> > + * keepalive message.\n> > + */\n> > + #define SKIPPED_CHANGES_THRESHOLD 10000\n> > +\n> > + if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\n> > + {\n> > + /* Try to send a keepalive message. */\n> > + OutputPluginUpdateProgress(ctx, true);\n> > +\n> > + /* After trying to send a keepalive message, reset the flag. */\n> > + skipped_changes_count = 0;\n> > + }\n> > +}\n> >\n> > Since we send a keepalive after continuously skipping 10000 changes, the\n> > originally reported issue can still occur if skipping 10000 changes took more than\n> > the timeout and the walsender didn't send any change while that, is that right?\n> Yes, theoretically so.\n> But after testing, I think this value should be conservative enough not to reproduce\n> this bug.\n\nBut it really depends on the workload, the server condition, and the\ntimeout value, right? The logical decoding might involve disk I/O much\nto spill/load intermediate data and the system might be under the\nhigh-load condition. 
Why don't we check both the count and the time?\nThat is, I think we can send a keep-alive either if we skipped 10000\nchanges or if we didn't sent anything for wal_sender_timeout / 2.\n\nAlso, the patch changes the current behavior of wal senders; with the\npatch, we send keep-alive messages even when wal_sender_timeout = 0.\nBut I'm not sure it's a good idea. The subscriber's\nwal_receiver_timeout might be lower than wal_sender_timeout. Instead,\nI think it's better to periodically check replies and send a reply to\nthe keep-alive message sent from the subscriber if necessary, for\nexample, every 10000 skipped changes.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 9 Mar 2022 15:45:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi, I have been following this discussion for a while because I believe we\nare hit by this pretty hard.\n\nThis sounds very reasonable to me:\n\n\"Why don't we check both the count and the time?\nThat is, I think we can send a keep-alive either if we skipped 10000\nchanges or if we didn't sent anything for wal_sender_timeout / 2\"\n\nWill gladly test what ends up as an acceptable patch for this, hoping for\nthe best and thanks for looking into this.\n\nDen ons 9 mars 2022 kl 07:45 skrev Masahiko Sawada <sawada.mshk@gmail.com>:\n\n> On Wed, Mar 9, 2022 at 11:26 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Tue, Mar 8, 2022 at 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > > I've looked at the patch and have a question:\n> > Thanks for your review and comments.\n> >\n> > > +void\n> > > +SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped) {\n> > > + static int skipped_changes_count = 0;\n> > > +\n> > > + /*\n> > > + * skipped_changes_count is reset when processing changes\n> that do not\n> > > + * need to be skipped.\n> > > + */\n> > > + if (!skipped)\n> > > + {\n> > > + skipped_changes_count = 0;\n> > > + return;\n> > > + }\n> > > +\n> > > + /*\n> > > + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\n> > > changes, try to send a\n> > > + * keepalive message.\n> > > + */\n> > > + #define SKIPPED_CHANGES_THRESHOLD 10000\n> > > +\n> > > + if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\n> > > + {\n> > > + /* Try to send a keepalive message. */\n> > > + OutputPluginUpdateProgress(ctx, true);\n> > > +\n> > > + /* After trying to send a keepalive message, reset\n> the flag. 
*/\n> > > + skipped_changes_count = 0;\n> > > + }\n> > > +}\n> > >\n> > > Since we send a keepalive after continuously skipping 10000 changes,\n> the\n> > > originally reported issue can still occur if skipping 10000 changes\n> took more than\n> > > the timeout and the walsender didn't send any change while that, is\n> that right?\n> > Yes, theoretically so.\n> > But after testing, I think this value should be conservative enough not\n> to reproduce\n> > this bug.\n>\n> But it really depends on the workload, the server condition, and the\n> timeout value, right? The logical decoding might involve disk I/O much\n> to spill/load intermediate data and the system might be under the\n> high-load condition. Why don't we check both the count and the time?\n> That is, I think we can send a keep-alive either if we skipped 10000\n> changes or if we didn't sent anything for wal_sender_timeout / 2.\n>\n> Also, the patch changes the current behavior of wal senders; with the\n> patch, we send keep-alive messages even when wal_sender_timeout = 0.\n> But I'm not sure it's a good idea. The subscriber's\n> wal_receiver_timeout might be lower than wal_sender_timeout. 
Instead,\n> I think it's better to periodically check replies and send a reply to\n> the keep-alive message sent from the subscriber if necessary, for\n> example, every 10000 skipped changes.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n>\n>",
"msg_date": "Mon, 14 Mar 2022 23:23:47 +0100",
"msg_from": "=?UTF-8?Q?Bj=C3=B6rn_Harrtell?= <bjorn.harrtell@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 9, 2022 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> On Wed, Mar 9, 2022 at 10:26 AM I wrote:\r\n> > On Tue, Mar 8, 2022 at 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> > > I've looked at the patch and have a question:\r\n> > Thanks for your review and comments.\r\n> >\r\n> > > +void\r\n> > > +SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped) {\r\n> > > + static int skipped_changes_count = 0;\r\n> > > +\r\n> > > + /*\r\n> > > + * skipped_changes_count is reset when processing changes that do\r\n> not\r\n> > > + * need to be skipped.\r\n> > > + */\r\n> > > + if (!skipped)\r\n> > > + {\r\n> > > + skipped_changes_count = 0;\r\n> > > + return;\r\n> > > + }\r\n> > > +\r\n> > > + /*\r\n> > > + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\r\n> > > changes, try to send a\r\n> > > + * keepalive message.\r\n> > > + */\r\n> > > + #define SKIPPED_CHANGES_THRESHOLD 10000\r\n> > > +\r\n> > > + if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\r\n> > > + {\r\n> > > + /* Try to send a keepalive message. */\r\n> > > + OutputPluginUpdateProgress(ctx, true);\r\n> > > +\r\n> > > + /* After trying to send a keepalive message, reset the flag. */\r\n> > > + skipped_changes_count = 0;\r\n> > > + }\r\n> > > +}\r\n> > >\r\n> > > Since we send a keepalive after continuously skipping 10000 changes, the\r\n> > > originally reported issue can still occur if skipping 10000 changes took more\r\n> than\r\n> > > the timeout and the walsender didn't send any change while that, is that\r\n> right?\r\n> > Yes, theoretically so.\r\n> > But after testing, I think this value should be conservative enough not to\r\n> reproduce\r\n> > this bug.\r\n> \r\n> But it really depends on the workload, the server condition, and the\r\n> timeout value, right? 
The logical decoding might involve disk I/O much\r\n> to spill/load intermediate data and the system might be under the\r\n> high-load condition. Why don't we check both the count and the time?\r\n> That is, I think we can send a keep-alive either if we skipped 10000\r\n> changes or if we didn't sent anything for wal_sender_timeout / 2.\r\nYes, you are right.\r\nDo you mean that when skipping every change, check if it has been more than\r\n(wal_sender_timeout / 2) without sending anything?\r\nIIUC, I tried to send keep-alive messages based on time before[1], but after\r\ntesting, I found that it will brings slight overhead. So I am not sure, in a\r\nfunction(pgoutput_change) that is invoked frequently, should this kind of\r\noverhead be introduced?\r\n\r\n> Also, the patch changes the current behavior of wal senders; with the\r\n> patch, we send keep-alive messages even when wal_sender_timeout = 0.\r\n> But I'm not sure it's a good idea. The subscriber's\r\n> wal_receiver_timeout might be lower than wal_sender_timeout. Instead,\r\n> I think it's better to periodically check replies and send a reply to\r\n> the keep-alive message sent from the subscriber if necessary, for\r\n> example, every 10000 skipped changes.\r\nSorry, I could not follow what you said. I am not sure, do you mean the\r\nfollowing?\r\n1. When we didn't sent anything for (wal_sender_timeout / 2) or we skipped\r\n10000 changes continuously, we will invoke the function WalSndKeepalive in the\r\nfunction WalSndUpdateProgress, and send a keepalive message to the subscriber\r\nwith requesting an immediate reply.\r\n2. If after sending a keepalive message, and then 10000 changes are skipped\r\ncontinuously again. In this case, we need to handle the reply from the\r\nsubscriber-side when processing the 10000th change. 
The handling approach is to\r\nreply to the confirmation message from the subscriber.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275DFFDAC7A59FA148931529E209%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nPlease let me know if I understand wrong.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Wed, 16 Mar 2022 02:57:11 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 11:57 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Mar 9, 2022 at 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> Thanks for your comments.\n>\n> > On Wed, Mar 9, 2022 at 10:26 AM I wrote:\n> > > On Tue, Mar 8, 2022 at 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > wrote:\n> > > > I've looked at the patch and have a question:\n> > > Thanks for your review and comments.\n> > >\n> > > > +void\n> > > > +SendKeepaliveIfNecessary(LogicalDecodingContext *ctx, bool skipped) {\n> > > > + static int skipped_changes_count = 0;\n> > > > +\n> > > > + /*\n> > > > + * skipped_changes_count is reset when processing changes that do\n> > not\n> > > > + * need to be skipped.\n> > > > + */\n> > > > + if (!skipped)\n> > > > + {\n> > > > + skipped_changes_count = 0;\n> > > > + return;\n> > > > + }\n> > > > +\n> > > > + /*\n> > > > + * After continuously skipping SKIPPED_CHANGES_THRESHOLD\n> > > > changes, try to send a\n> > > > + * keepalive message.\n> > > > + */\n> > > > + #define SKIPPED_CHANGES_THRESHOLD 10000\n> > > > +\n> > > > + if (++skipped_changes_count >= SKIPPED_CHANGES_THRESHOLD)\n> > > > + {\n> > > > + /* Try to send a keepalive message. */\n> > > > + OutputPluginUpdateProgress(ctx, true);\n> > > > +\n> > > > + /* After trying to send a keepalive message, reset the flag. */\n> > > > + skipped_changes_count = 0;\n> > > > + }\n> > > > +}\n> > > >\n> > > > Since we send a keepalive after continuously skipping 10000 changes, the\n> > > > originally reported issue can still occur if skipping 10000 changes took more\n> > than\n> > > > the timeout and the walsender didn't send any change while that, is that\n> > right?\n> > > Yes, theoretically so.\n> > > But after testing, I think this value should be conservative enough not to\n> > reproduce\n> > > this bug.\n> >\n> > But it really depends on the workload, the server condition, and the\n> > timeout value, right? 
The logical decoding might involve disk I/O much\n> > to spill/load intermediate data and the system might be under the\n> > high-load condition. Why don't we check both the count and the time?\n> > That is, I think we can send a keep-alive either if we skipped 10000\n> > changes or if we didn't sent anything for wal_sender_timeout / 2.\n> Yes, you are right.\n> Do you mean that when skipping every change, check if it has been more than\n> (wal_sender_timeout / 2) without sending anything?\n> IIUC, I tried to send keep-alive messages based on time before[1], but after\n> testing, I found that it will brings slight overhead. So I am not sure, in a\n> function(pgoutput_change) that is invoked frequently, should this kind of\n> overhead be introduced?\n>\n> > Also, the patch changes the current behavior of wal senders; with the\n> > patch, we send keep-alive messages even when wal_sender_timeout = 0.\n> > But I'm not sure it's a good idea. The subscriber's\n> > wal_receiver_timeout might be lower than wal_sender_timeout. Instead,\n> > I think it's better to periodically check replies and send a reply to\n> > the keep-alive message sent from the subscriber if necessary, for\n> > example, every 10000 skipped changes.\n> Sorry, I could not follow what you said. I am not sure, do you mean the\n> following?\n> 1. When we didn't sent anything for (wal_sender_timeout / 2) or we skipped\n> 10000 changes continuously, we will invoke the function WalSndKeepalive in the\n> function WalSndUpdateProgress, and send a keepalive message to the subscriber\n> with requesting an immediate reply.\n> 2. If after sending a keepalive message, and then 10000 changes are skipped\n> continuously again. In this case, we need to handle the reply from the\n> subscriber-side when processing the 10000th change. The handling approach is to\n> reply to the confirmation message from the subscriber.\n\nAfter more thought, can we check only wal_sender_timeout without\nskip-count? 
That is, in WalSndUpdateProgress(), if we have received\nany reply from the subscriber in last (wal_sender_timeout / 2), we\ndon't need to do anything in terms of keep-alive. If not, we do\nProcessRepliesIfAny() (and probably WalSndCheckTimeOut()?) then\nWalSndKeepalivesIfNecessary(). That way, we can send keep-alive\nmessages every (wal_sender_timeout / 2). And since we don't call them\nfor every change, we would not need to worry about the overhead much.\nActually, WalSndWriteData() does similar things; even in the case\nwhere we don't skip consecutive changes (i.e., sending consecutive\nchanges to the subscriber), we do ProcessRepliesIfAny() at least every\n(wal_sender_timeout / 2). I think this would work in most common cases\nwhere the user sets both wal_sender_timeout and wal_receiver_timeout\nto the same value.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Mar 2022 23:07:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 7:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 11:57 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> > > But it really depends on the workload, the server condition, and the\n> > > timeout value, right? The logical decoding might involve disk I/O much\n> > > to spill/load intermediate data and the system might be under the\n> > > high-load condition. Why don't we check both the count and the time?\n> > > That is, I think we can send a keep-alive either if we skipped 10000\n> > > changes or if we didn't sent anything for wal_sender_timeout / 2.\n> > Yes, you are right.\n> > Do you mean that when skipping every change, check if it has been more than\n> > (wal_sender_timeout / 2) without sending anything?\n> > IIUC, I tried to send keep-alive messages based on time before[1], but after\n> > testing, I found that it will brings slight overhead. So I am not sure, in a\n> > function(pgoutput_change) that is invoked frequently, should this kind of\n> > overhead be introduced?\n> >\n> > > Also, the patch changes the current behavior of wal senders; with the\n> > > patch, we send keep-alive messages even when wal_sender_timeout = 0.\n> > > But I'm not sure it's a good idea. The subscriber's\n> > > wal_receiver_timeout might be lower than wal_sender_timeout. Instead,\n> > > I think it's better to periodically check replies and send a reply to\n> > > the keep-alive message sent from the subscriber if necessary, for\n> > > example, every 10000 skipped changes.\n> > Sorry, I could not follow what you said. I am not sure, do you mean the\n> > following?\n> > 1. When we didn't sent anything for (wal_sender_timeout / 2) or we skipped\n> > 10000 changes continuously, we will invoke the function WalSndKeepalive in the\n> > function WalSndUpdateProgress, and send a keepalive message to the subscriber\n> > with requesting an immediate reply.\n> > 2. 
If after sending a keepalive message, and then 10000 changes are skipped\n> > continuously again. In this case, we need to handle the reply from the\n> > subscriber-side when processing the 10000th change. The handling approach is to\n> > reply to the confirmation message from the subscriber.\n>\n> After more thought, can we check only wal_sender_timeout without\n> skip-count? That is, in WalSndUpdateProgress(), if we have received\n> any reply from the subscriber in last (wal_sender_timeout / 2), we\n> don't need to do anything in terms of keep-alive. If not, we do\n> ProcessRepliesIfAny() (and probably WalSndCheckTimeOut()?) then\n> WalSndKeepalivesIfNecessary(). That way, we can send keep-alive\n> messages every (wal_sender_timeout / 2). And since we don't call them\n> for every change, we would not need to worry about the overhead much.\n>\n\nBut won't that lead to a call to GetCurrentTimestamp() for each change\nwe skip? IIUC from previous replies that lead to a slight slowdown in\nprevious tests of Wang-San.\n\n> Actually, WalSndWriteData() does similar things;\n>\n\nThat also every time seems to be calling GetCurrentTimestamp(). I\nthink it might be okay when we are sending the change but not sure if\nthe overhead of the same is negligible when we are skipping the\nchanges.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 12:27:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 12:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 7:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > After more thought, can we check only wal_sender_timeout without\n> > skip-count? That is, in WalSndUpdateProgress(), if we have received\n> > any reply from the subscriber in last (wal_sender_timeout / 2), we\n> > don't need to do anything in terms of keep-alive. If not, we do\n> > ProcessRepliesIfAny() (and probably WalSndCheckTimeOut()?) then\n> > WalSndKeepalivesIfNecessary(). That way, we can send keep-alive\n> > messages every (wal_sender_timeout / 2). And since we don't call them\n> > for every change, we would not need to worry about the overhead much.\n> >\n>\n> But won't that lead to a call to GetCurrentTimestamp() for each change\n> we skip? IIUC from previous replies that lead to a slight slowdown in\n> previous tests of Wang-San.\n>\n\nIf the above is true then I think we can use a lower skip_count say 10\nalong with a timeout mechanism to send keepalive message. This will\nhelp us to alleviate the overhead Wang-San has shown.\n\nBTW, I think there could be one other advantage of using\nProcessRepliesIfAny() (as you are suggesting) is that it can help to\nrelease sync waiters if there are any. I feel that would be the case\nfor the skip_empty_transactions patch [1] which uses\nWalSndUpdateProgress to send keepalive messages after skipping empty\ntransactions.\n\n[1] - https://www.postgresql.org/message-id/CAFPTHDYvRSyT5ppYSPsH4Ozs0_W62-nffu0%3DmY1%2BsVipF%3DUN-g%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 15:44:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
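The cost concern in this exchange — a GetCurrentTimestamp() call for every skipped change — and Amit's suggestion of a small skip count combined with a time check amount to an amortized clock read: only every N skipped changes does the caller actually consult the clock. A standalone illustration, not PostgreSQL source; the helper name and the interval of 100 are assumptions (the thread later measures 10 vs. 100).

```c
#include <stdbool.h>

#define CHECK_INTERVAL 100      /* read the clock only every N skipped changes */

/*
 * Illustrative helper (not PostgreSQL source): returns true once per
 * CHECK_INTERVAL consecutive skipped changes.  Only then would the caller
 * pay for GetCurrentTimestamp() and decide about a keepalive; the other
 * calls are a cheap counter increment.
 */
bool
time_check_due(int *skipped_count)
{
    if (++(*skipped_count) < CHECK_INTERVAL)
        return false;           /* fast path: no clock read */
    *skipped_count = 0;         /* due: reset and let the caller check the time */
    return true;
}
```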
{
"msg_contents": "On Thu, Mar 17, 2022 at 7:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 17, 2022 at 12:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 16, 2022 at 7:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > After more thought, can we check only wal_sender_timeout without\n> > > skip-count? That is, in WalSndUpdateProgress(), if we have received\n> > > any reply from the subscriber in last (wal_sender_timeout / 2), we\n> > > don't need to do anything in terms of keep-alive. If not, we do\n> > > ProcessRepliesIfAny() (and probably WalSndCheckTimeOut()?) then\n> > > WalSndKeepalivesIfNecessary(). That way, we can send keep-alive\n> > > messages every (wal_sender_timeout / 2). And since we don't call them\n> > > for every change, we would not need to worry about the overhead much.\n> > >\n> >\n> > But won't that lead to a call to GetCurrentTimestamp() for each change\n> > we skip? IIUC from previous replies that lead to a slight slowdown in\n> > previous tests of Wang-San.\n> >\n> If the above is true then I think we can use a lower skip_count say 10\n> along with a timeout mechanism to send keepalive message. This will\n> help us to alleviate the overhead Wang-San has shown.\n\nUsing both sounds reasonable to me. I'd like to see how much the\noverhead is alleviated by using skip_count 10 (or 100).\n\n> BTW, I think there could be one other advantage of using\n> ProcessRepliesIfAny() (as you are suggesting) is that it can help to\n> release sync waiters if there are any. I feel that would be the case\n> for the skip_empty_transactions patch [1] which uses\n> WalSndUpdateProgress to send keepalive messages after skipping empty\n> transactions.\n\n+1\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/OS3PR01MB6275DFFDAC7A59FA148931529E209%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 17 Mar 2022 20:51:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 7:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> On Thu, Mar 17, 2022 at 7:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Thu, Mar 17, 2022 at 12:27 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Mar 16, 2022 at 7:38 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > After more thought, can we check only wal_sender_timeout without\r\n> > > > skip-count? That is, in WalSndUpdateProgress(), if we have\r\n> > > > received any reply from the subscriber in last (wal_sender_timeout\r\n> > > > / 2), we don't need to do anything in terms of keep-alive. If not,\r\n> > > > we do\r\n> > > > ProcessRepliesIfAny() (and probably WalSndCheckTimeOut()?) then\r\n> > > > WalSndKeepalivesIfNecessary(). That way, we can send keep-alive\r\n> > > > messages every (wal_sender_timeout / 2). And since we don't call\r\n> > > > them for every change, we would not need to worry about the overhead\r\n> much.\r\n> > > >\r\n> > >\r\n> > > But won't that lead to a call to GetCurrentTimestamp() for each\r\n> > > change we skip? IIUC from previous replies that lead to a slight\r\n> > > slowdown in previous tests of Wang-San.\r\n> > >\r\n> > If the above is true then I think we can use a lower skip_count say 10\r\n> > along with a timeout mechanism to send keepalive message. This will\r\n> > help us to alleviate the overhead Wang-San has shown.\r\n> \r\n> Using both sounds reasonable to me. I'd like to see how much the overhead is\r\n> alleviated by using skip_count 10 (or 100).\r\n> \r\n> > BTW, I think there could be one other advantage of using\r\n> > ProcessRepliesIfAny() (as you are suggesting) is that it can help to\r\n> > release sync waiters if there are any. 
I feel that would be the case\r\n> > for the skip_empty_transactions patch [1] which uses\r\n> > WalSndUpdateProgress to send keepalive messages after skipping empty\r\n> > transactions.\r\n> \r\n> +1\r\nI modified the patch according to your and Amit-San's suggestions.\r\nIn addition, after testing, I found that when the threshold is 10, it brings\r\nslight overhead.\r\nSo I tried changing it to 100; after testing, the results look good to me.\r\n10 : 1.22%--UpdateProgress\r\n100 : 0.16%--UpdateProgress\r\n\r\nPlease refer to the attachment.\r\n\r\nAttached the new patch.\r\n1. Refactor the way to send keepalive messages.\r\n   [suggestion by Sawada-San, Amit-San.]\r\n2. Modify the value of flag is_send initialization to make it look more\r\n   reasonable. [suggestion by Kuroda-San.]\r\n3. Improve new function names.\r\n   (From SendKeepaliveIfNecessary to UpdateProgress.)\r\n\r\nRegards,\r\nWang wei",

"msg_date": "Fri, 18 Mar 2022 05:13:02 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 9, 2022 at 11:52 AM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Thank you for updating!\r\nThanks for your comments.\r\n\r\n> 1. pgoutput_change\r\n> ```\r\n> + bool is_send = true;\r\n> ```\r\n> \r\n> My first impression is that is_send should be initialized to false, and it will change\r\n> to true when OutputPluginWrite() is called.\r\n> \r\n> \r\n> 2. pgoutput_change\r\n> ```\r\n> + {\r\n> + is_send = false;\r\n> + break;\r\n> + }\r\n> ```\r\n> \r\n> Here are too many indents, but I think they should be removed.\r\n> See above comment.\r\nFixed. Initialize is_send to false.\r\n\r\n> 3. WalSndUpdateProgress\r\n> ```\r\n> + /*\r\n> + * If half of wal_sender_timeout has lapsed without send message\r\n> standby,\r\n> + * send a keep-alive message to the standby.\r\n> + */\r\n> ```\r\n> \r\n> The comment seems inconsistency with others.\r\n> Here is \"keep-alive\", but other parts are \"keepalive\".\r\nSince this part of the code was refactored, this inconsistent comment was\r\nremoved.\r\n\r\n> 4. ReorderBufferProcessTXN\r\n> ```\r\n> + change-\r\n> >data.inval.ninvalidations,\r\n> +\r\n> + change->data.inval.invalidations);\r\n> ```\r\n> \r\n> Maybe these lines break 80-columns rule.\r\nThanks for reminder. I will run pg_ident later.\r\n\r\nKindly have a look at new patch shared in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275C67F14954E05CE5D04399E139%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Fri, 18 Mar 2022 05:13:48 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 10:43 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thu, Mar 17, 2022 at 7:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> Attach the new patch.\n>\n\n*\n case REORDER_BUFFER_CHANGE_INVALIDATION:\n- /* Execute the invalidation messages locally */\n- ReorderBufferExecuteInvalidations(\n- change->data.inval.ninvalidations,\n- change->data.inval.invalidations);\n- break;\n+ {\n+ LogicalDecodingContext *ctx = rb->private_data;\n+\n+ /* Try to send a keepalive message. */\n+ UpdateProgress(ctx, true);\n\nCalling UpdateProgress() here appears adhoc to me especially because\nit calls OutputPluginUpdateProgress which appears to be called only\nfrom plugin API. Am, I missing something? Also why the same handling\nis missed in other similar messages like\nREORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID where we don't call any\nplug-in API?\n\nI am not sure what is a good way to achieve this but one idea that\noccurred to me was shall we invent a new callback\nReorderBufferSkipChangeCB similar to ReorderBufferApplyChangeCB and\nthen pgoutput can register its API where we can have the logic similar\nto what you have in UpdateProgress()? If we do so, then all the\ncuurent callers of UpdateProgress in pgoutput can also call that API.\nWhat do you think?\n\n* Why don't you have a quick exit like below code in WalSndWriteData?\n/* Try taking fast path unless we get too close to walsender timeout. */\nif (now < TimestampTzPlusMilliseconds(last_reply_timestamp,\n wal_sender_timeout / 2) &&\n!pq_is_send_pending())\n{\nreturn;\n}\n\n* Can we rename variable 'is_send' to 'change_sent'?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 16:20:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
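The quick exit Amit quotes from WalSndWriteData can be reduced to a guard like the following. A standalone sketch under simplifying assumptions — plain millisecond timestamps instead of TimestampTz, and a boolean in place of pq_is_send_pending(); only the shape of the condition is taken from the message above.

```c
#include <stdbool.h>

/*
 * Illustrative guard (not PostgreSQL source): take the fast path and skip
 * reply processing and keepalive work while we are still within half of the
 * sender timeout of the last reply and no output is pending.
 */
bool
can_take_fast_path(long now_ms, long last_reply_ms, long sender_timeout_ms,
                   bool send_pending)
{
    return now_ms < last_reply_ms + sender_timeout_ms / 2 && !send_pending;
}
```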
{
"msg_contents": "On Fri, Mar 18, 2022 at 4:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 18, 2022 at 10:43 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Thu, Mar 17, 2022 at 7:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > Attach the new patch.\n> >\n>\n> *\n> case REORDER_BUFFER_CHANGE_INVALIDATION:\n> - /* Execute the invalidation messages locally */\n> - ReorderBufferExecuteInvalidations(\n> - change->data.inval.ninvalidations,\n> - change->data.inval.invalidations);\n> - break;\n> + {\n> + LogicalDecodingContext *ctx = rb->private_data;\n> +\n> + /* Try to send a keepalive message. */\n> + UpdateProgress(ctx, true);\n>\n> Calling UpdateProgress() here appears adhoc to me especially because\n> it calls OutputPluginUpdateProgress which appears to be called only\n> from plugin API. Am, I missing something? Also why the same handling\n> is missed in other similar messages like\n> REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID where we don't call any\n> plug-in API?\n>\n> I am not sure what is a good way to achieve this but one idea that\n> occurred to me was shall we invent a new callback\n> ReorderBufferSkipChangeCB similar to ReorderBufferApplyChangeCB and\n> then pgoutput can register its API where we can have the logic similar\n> to what you have in UpdateProgress()? If we do so, then all the\n> cuurent callers of UpdateProgress in pgoutput can also call that API.\n> What do you think?\n>\n\nAnother idea could be that we leave the DDL case for now as anyway\nthere is very less chance of timeout for skipping DDLs and we may\nlater need to even backpatch this bug-fix which would be another\nreason to not make such invasive changes. We can handle the DDL case\nif required separately.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 11:00:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 1:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> On Fri, Mar 18, 2022 at 4:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Mar 18, 2022 at 10:43 AM wangw.fnst@fujitsu.com\r\n> > <wangw.fnst@fujitsu.com> wrote:\r\n> > >\r\n> > > On Thu, Mar 17, 2022 at 7:52 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > >\r\n> > > Attach the new patch.\r\n> > >\r\n> >\r\n> > *\r\n> > case REORDER_BUFFER_CHANGE_INVALIDATION:\r\n> > - /* Execute the invalidation messages locally */\r\n> > - ReorderBufferExecuteInvalidations(\r\n> > - change->data.inval.ninvalidations,\r\n> > - change->data.inval.invalidations);\r\n> > - break;\r\n> > + {\r\n> > + LogicalDecodingContext *ctx = rb->private_data;\r\n> > +\r\n> > + /* Try to send a keepalive message. */\r\n> > + UpdateProgress(ctx, true);\r\n> >\r\n> > Calling UpdateProgress() here appears adhoc to me especially because\r\n> > it calls OutputPluginUpdateProgress which appears to be called only\r\n> > from plugin API. Am, I missing something? Also why the same handling\r\n> > is missed in other similar messages like\r\n> > REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID where we don't call\r\n> any\r\n> > plug-in API?\r\nYes, you are right.\r\nAnd I invoke in case REORDER_BUFFER_CHANGE_INVALIDATION because I think every\r\nDDL will modify the catalog then get into this case. So I only invoke function\r\nUpdateProgress here to handle DDL.\r\n\r\n> > I am not sure what is a good way to achieve this but one idea that\r\n> > occurred to me was shall we invent a new callback\r\n> > ReorderBufferSkipChangeCB similar to ReorderBufferApplyChangeCB and\r\n> > then pgoutput can register its API where we can have the logic similar\r\n> > to what you have in UpdateProgress()? 
If we do so, then all the\r\n> > cuurent callers of UpdateProgress in pgoutput can also call that API.\r\n> > What do you think?\r\n> >\r\n> Another idea could be that we leave the DDL case for now as anyway\r\n> there is very less chance of timeout for skipping DDLs and we may\r\n> later need to even backpatch this bug-fix which would be another\r\n> reason to not make such invasive changes. We can handle the DDL case\r\n> if required separately.\r\nYes, I think a new callback function would be nice.\r\nYes, as you said, maybe we could fix the usecase that found the problem in the\r\nfirst place. Then make further modifications on the master branch.\r\nModify the patch. Currently only DML related code remains.\r\n\r\n> > * Why don't you have a quick exit like below code in WalSndWriteData?\r\n> > /* Try taking fast path unless we get too close to walsender timeout. */ if (now\r\n> > < TimestampTzPlusMilliseconds(last_reply_timestamp,\r\n> > wal_sender_timeout / 2) &&\r\n> > !pq_is_send_pending())\r\n> > {\r\n> > return;\r\n> > }\r\nFixed. I missed this so adding it in the new patch.\r\n\r\n> > * Can we rename variable 'is_send' to 'change_sent'?\r\nImprove the the name of this variable.(From 'is_send' to 'change_sent')\r\n\r\nAttach the new patch. [suggestion by Amit-San.]\r\n1. Remove DDL related code. Handle the DDL case later separately if need.\r\n2. Fix a missing.(In function WalSndUpdateProgress)\r\n3. Improve variable names. (From 'is_send' to 'change_sent')\r\n4. Fix some comments.(Above and inside the function WalSndUpdateProgress.)\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 22 Mar 2022 01:55:32 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Mar 22, 2022 at 7:25 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patch.\n>\n\nIt seems by mistake you have removed the changes from pgoutput_message\nand pgoutput_truncate functions. I have added those back.\nAdditionally, I made a few other changes: (a) moved the function\nUpdateProgress to pgoutput.c as it is not used outside it, (b) change\nthe new parameter in plugin API from 'send_keep_alive' to 'last_write'\nto make it look similar to WalSndPrepareWrite and WalSndWriteData, (c)\nmade a number of changes in WalSndUpdateProgress API, it is better to\nmove keep-alive code after lag track code because we do process\nreplies at that time and there it will compute the lag; (d)\nchanged/added comments in the code.\n\nDo let me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 24 Mar 2022 16:02:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> It seems by mistake you have removed the changes from pgoutput_message\r\n> and pgoutput_truncate functions. I have added those back.\r\n> Additionally, I made a few other changes: (a) moved the function\r\n> UpdateProgress to pgoutput.c as it is not used outside it, (b) change\r\n> the new parameter in plugin API from 'send_keep_alive' to 'last_write'\r\n> to make it look similar to WalSndPrepareWrite and WalSndWriteData, (c)\r\n> made a number of changes in WalSndUpdateProgress API, it is better to\r\n> move keep-alive code after lag track code because we do process\r\n> replies at that time and there it will compute the lag; (d)\r\n> changed/added comments in the code.\r\n\r\nLGTM, but the patch cannot be applied to current HEAD.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 25 Mar 2022 03:20:18 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your kind update.\r\n\r\n> It seems by mistake you have removed the changes from pgoutput_message\r\n> and pgoutput_truncate functions. I have added those back.\r\n> Additionally, I made a few other changes: (a) moved the function\r\n> UpdateProgress to pgoutput.c as it is not used outside it, (b) change\r\n> the new parameter in plugin API from 'send_keep_alive' to 'last_write'\r\n> to make it look similar to WalSndPrepareWrite and WalSndWriteData, (c)\r\n> made a number of changes in WalSndUpdateProgress API, it is better to\r\n> move keep-alive code after lag track code because we do process\r\n> replies at that time and there it will compute the lag; (d)\r\n> changed/added comments in the code.\r\n> \r\n> Do let me know what you think of the attached?\r\nIt looks good to me. I just rebased it because of the change in the header (75b1521).\r\nI tested it and the result looks good.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 25 Mar 2022 05:23:05 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 2:23 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thur, Mar 24, 2022 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> Thanks for your kindly update.\n>\n> > It seems by mistake you have removed the changes from pgoutput_message\n> > and pgoutput_truncate functions. I have added those back.\n> > Additionally, I made a few other changes: (a) moved the function\n> > UpdateProgress to pgoutput.c as it is not used outside it, (b) change\n> > the new parameter in plugin API from 'send_keep_alive' to 'last_write'\n> > to make it look similar to WalSndPrepareWrite and WalSndWriteData, (c)\n> > made a number of changes in WalSndUpdateProgress API, it is better to\n> > move keep-alive code after lag track code because we do process\n> > replies at that time and there it will compute the lag; (d)\n> > changed/added comments in the code.\n> >\n> > Do let me know what you think of the attached?\n> It looks good to me. Just rebase it because the change in header(75b1521).\n> I tested it and the result looks good to me.\n\nSince commit 75b1521 added decoding of sequence to logical\nreplication, the patch needs to have pgoutput_sequence() call\nupdate_progress().\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 25 Mar 2022 15:19:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 11:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Mar 25, 2022 at 2:23 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n>\n> Since commit 75b1521 added decoding of sequence to logical\n> replication, the patch needs to have pgoutput_sequence() call\n> update_progress().\n>\n\nYeah, I also think this needs to be addressed. But apart from this, I\nwant to know your and others' opinions on the following two points:\na. Both this and the patch discussed in the nearby thread [1] add an\nadditional parameter to\nWalSndUpdateProgress/OutputPluginUpdateProgress and it seems to me\nthat both are required. The additional parameter 'last_write' added by\nthis patch indicates: \"If the last write is skipped then try (if we\nare close to wal_sender_timeout) to send a keepalive message to the\nreceiver to avoid timeouts.\". This means it can be used after any\n'write' message. OTOH, the parameter 'skipped_xact' added by another\npatch [1] indicates that if we have skipped sending anything for a\ntransaction, then we send a keepalive for synchronous replication to avoid\nany delays in such a transaction. Does this sound reasonable or can\nyou think of a better way to deal with it?\nb. Do we want to backpatch the patch in this thread? I am reluctant to\nbackpatch because it changes the exposed API, which can have an impact,\nand second, there exists a workaround (the user can increase\nwal_sender_timeout/wal_receiver_timeout).\n\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716BB24409D4B69206615B1941A9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 25 Mar 2022 14:02:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 2:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Fri, Mar 25, 2022 at 2:23 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Thur, Mar 24, 2022 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > Thanks for your kindly update.\r\n> >\r\n> > > It seems by mistake you have removed the changes from\r\n> pgoutput_message\r\n> > > and pgoutput_truncate functions. I have added those back.\r\n> > > Additionally, I made a few other changes: (a) moved the function\r\n> > > UpdateProgress to pgoutput.c as it is not used outside it, (b) change\r\n> > > the new parameter in plugin API from 'send_keep_alive' to 'last_write'\r\n> > > to make it look similar to WalSndPrepareWrite and WalSndWriteData, (c)\r\n> > > made a number of changes in WalSndUpdateProgress API, it is better to\r\n> > > move keep-alive code after lag track code because we do process\r\n> > > replies at that time and there it will compute the lag; (d)\r\n> > > changed/added comments in the code.\r\n> > >\r\n> > > Do let me know what you think of the attached?\r\n> > It looks good to me. Just rebase it because the change in header(75b1521).\r\n> > I tested it and the result looks good to me.\r\n> \r\n> Since commit 75b1521 added decoding of sequence to logical\r\n> replication, the patch needs to have pgoutput_sequence() call\r\n> update_progress().\r\nThanks for your comments.\r\n\r\nYes, you are right.\r\nAdd missing handling of pgoutput_sequence.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 25 Mar 2022 10:19:32 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Dear Wang-san,\r\n\r\nThank you for updating!\r\n...but it also cannot be applied to current HEAD\r\nbecause of commit 923def9a533.\r\n\r\nYour patch seems to conflict with the addition of an argument to logicalrep_write_insert().\r\nThat commit allows specifying columns to publish by skipping some columns in logicalrep_write_tuple(),\r\nwhich is called from logicalrep_write_insert() and logicalrep_write_update().\r\n\r\nDo we have to consider some special case for that?\r\nI thought a timeout may occur if users have a huge table and publish few columns,\r\nbut that is a corner case.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 28 Mar 2022 01:55:40 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 9:56 AM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> Dear Wang-san,\r\nThanks for your comments.\r\n\r\n> Thank you for updating!\r\n> ...but it also cannot be applied to current HEAD\r\n> because of the commit 923def9a533.\r\n> \r\n> Your patch seems to conflict the adding an argument of\r\n> logicalrep_write_insert().\r\n> It allows specifying columns to publish by skipping some columns in\r\n> logicalrep_write_tuple()\r\n> which is called from logicalrep_write_insert() and logicalrep_write_update().\r\nThanks for your kind reminder.\r\nRebased the patch.\r\n\r\n> Do we have to consider something special case for that?\r\n> I thought timeout may occur if users have huge table and publish few columns,\r\n> but it is corner case.\r\nI think we do not need to deal with this use case.\r\nThe maximum number of table columns allowed by PG is 1600\r\n(macro MaxHeapAttributeNumber), and after looping through all columns in the\r\nfunction logicalrep_write_tuple, the function OutputPluginWrite will be invoked\r\nimmediately to actually send the data to the subscriber. This refreshes the\r\nlast time the subscriber received a message.\r\nSo I think this loop will not cause timeout issues.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 28 Mar 2022 06:11:08 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 11:41 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Mar 28, 2022 at 9:56 AM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n>\n> > Do we have to consider something special case for that?\n> > I thought timeout may occur if users have huge table and publish few columns,\n> > but it is corner case.\n> I think maybe we do not need to deal with this use case.\n> The maximum number of table columns allowed by PG is 1600\n> (macro MaxHeapAttributeNumber), and after loop through all columns in the\n> function logicalrep_write_tuple, the function OutputPluginWrite will be invoked\n> immediately to actually send the data to the subscriber. This refreshes the\n> last time the subscriber received a message.\n> So I think this loop will not cause timeout issues.\n>\n\nRight, I also don't think it can be a source of timeout.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 28 Mar 2022 11:57:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Dear Amit, Wang,\r\n\r\n> > I think maybe we do not need to deal with this use case.\r\n> > The maximum number of table columns allowed by PG is 1600\r\n> > (macro MaxHeapAttributeNumber), and after loop through all columns in the\r\n> > function logicalrep_write_tuple, the function OutputPluginWrite will be invoked\r\n> > immediately to actually send the data to the subscriber. This refreshes the\r\n> > last time the subscriber received a message.\r\n> > So I think this loop will not cause timeout issues.\r\n> >\r\n> \r\n> Right, I also don't think it can be a source of timeout.\r\n\r\nOK. I have no comments for this version.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Tue, 29 Mar 2022 01:29:59 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 2:11 AM I wrote:\r\n> Rebase the patch.\r\n\r\nAfter reviewing another patch [1], I think this patch should also add a loop in\r\nfunction WalSndUpdateProgress like what is done in function WalSndWriteData.\r\nSo I updated the patch to be consistent with the existing code and the patch\r\nmentioned above.\r\n\r\nAttach the new patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716946347F607F4CFB02FCE941D9%40OS0PR01MB5716.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 29 Mar 2022 01:44:57 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 5:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 25, 2022 at 11:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Mar 25, 2022 at 2:23 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> >\n> > Since commit 75b1521 added decoding of sequence to logical\n> > replication, the patch needs to have pgoutput_sequence() call\n> > update_progress().\n> >\n>\n> Yeah, I also think this needs to be addressed. But apart from this, I\n> want to know your and other's opinion on the following two points:\n> a. Both this and the patch discussed in the nearby thread [1] add an\n> additional parameter to\n> WalSndUpdateProgress/OutputPluginUpdateProgress and it seems to me\n> that both are required. The additional parameter 'last_write' added by\n> this patch indicates: \"If the last write is skipped then try (if we\n> are close to wal_sender_timeout) to send a keepalive message to the\n> receiver to avoid timeouts.\". This means it can be used after any\n> 'write' message. OTOH, the parameter 'skipped_xact' added by another\n> patch [1] indicates if we have skipped sending anything for a\n> transaction then sendkeepalive for synchronous replication to avoid\n> any delays in such a transaction. Does this sound reasonable or can\n> you think of a better way to deal with it?\n\nThese current approaches look good to me.\n\n> b. Do we want to backpatch the patch in this thread? I am reluctant to\n> backpatch because it changes the exposed API which can have an impact\n> and second there exists a workaround (user can increase\n> wal_sender_timeout/wal_receiver_timeout).\n\nYeah, we should avoid API changes between minor versions. I feel it's\nbetter to fix it also for back-branches but probably we need another\nfix for them. The issue reported on this thread seems quite\nconfusable; it looks like a network problem but is not true. 
Also, the\nuser who faced this issue has to increase wal_sender_timeout due to\nthe decoded data size, which also means delaying the detection of network\nproblems. It seems an unrelated trade-off.\n\nRegards,\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 29 Mar 2022 14:07:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 9:45 AM I wrote:\r\n> Attach the new patch.\r\n\r\nRebased the patch because of commit d5a9d86d in current HEAD.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 30 Mar 2022 07:54:16 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 1:24 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tues, Mar 29, 2022 at 9:45 AM I wrote:\n> > Attach the new patch.\n>\n> Rebase the patch because the commit d5a9d86d in current HEAD.\n>\n\nThanks, this looks good to me apart from a minor indentation change\nwhich I'll take care of before committing. I am planning to push this\nday after tomorrow on Friday unless there are any other major\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Mar 2022 14:29:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 30, 2022 3:54 PM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> Rebase the patch because the commit d5a9d86d in current HEAD.\r\n> \r\n\r\nThanks for your patch. I tried it and confirmed that the timeout problem no\r\nlonger occurs after applying it, while I could still reproduce the problem on HEAD.\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Thu, 31 Mar 2022 02:26:06 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Mar 30, 2022 at 6:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 30, 2022 at 1:24 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Mar 29, 2022 at 9:45 AM I wrote:\n> > > Attach the new patch.\n> >\n> > Rebase the patch because the commit d5a9d86d in current HEAD.\n> >\n>\n> Thanks, this looks good to me apart from a minor indentation change\n> which I'll take care of before committing. I am planning to push this\n> day after tomorrow on Friday unless there are any other major\n> comments.\n\nThe patch basically looks good to me. But the only concern to me is\nthat once we get the patch committed, we will have to call\nupdate_progress() at all paths in callbacks that process changes.\nWhich seems poor maintainability.\n\nOn the other hand, possible another solution would be to add a new\ncallback that is called e.g., every 1000 changes so that walsender\ndoes its job such as timeout handling while processing the decoded\ndata in reorderbuffer.c. The callback is set only if the walsender\ndoes logical decoding, otherwise NULL. With this idea, other plugins\nwill also be able to benefit without changes. But I’m not really sure\nit’s a good design, and adding a new callback introduces complexity.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 31 Mar 2022 21:24:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 5:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Wed, Mar 30, 2022 at 6:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 30, 2022 at 1:24 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tues, Mar 29, 2022 at 9:45 AM I wrote:\n> > > > Attach the new patch.\n> > >\n> > > Rebase the patch because the commit d5a9d86d in current HEAD.\n> > >\n> >\n> > Thanks, this looks good to me apart from a minor indentation change\n> > which I'll take care of before committing. I am planning to push this\n> > day after tomorrow on Friday unless there are any other major\n> > comments.\n>\n> The patch basically looks good to me. But the only concern to me is\n> that once we get the patch committed, we will have to call\n> update_progress() at all paths in callbacks that process changes.\n> Which seems poor maintainability.\n>\n> On the other hand, possible another solution would be to add a new\n> callback that is called e.g., every 1000 changes so that walsender\n> does its job such as timeout handling while processing the decoded\n> data in reorderbuffer.c. The callback is set only if the walsender\n> does logical decoding, otherwise NULL. With this idea, other plugins\n> will also be able to benefit without changes. But I’m not really sure\n> it’s a good design, and adding a new callback introduces complexity.\n>\n\nYeah, same here. I have also mentioned another way to expose an API\nfrom reorderbuffer [1] by introducing a skip API but just not sure if\nthat or this API is generic enough to make it adding worth. Also, note\nthat the current patch makes the progress recording of large\ntransactions somewhat better when most of the changes are skipped. 
We\ncan further extend it to make it true for other cases as well but that\nprobably can be done separately if required as that is not required\nfor this bug-fix.\n\nI intend to commit this patch today but I think it is better to wait\nfor a few more days to see if anybody has any opinion on this matter.\nI'll push this on Tuesday unless we decide to do something different\nhere.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BfQjndoBOFUn9Wy0hhm3MLyUWEpcT9O7iuCELktfdBiQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 07:30:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 31, 2022, at 9:24 AM, Masahiko Sawada wrote:\n> The patch basically looks good to me. But the only concern to me is\n> that once we get the patch committed, we will have to call\n> update_progress() at all paths in callbacks that process changes.\n> Which seems poor maintainability.\nI didn't like the current fix for the same reason. We need a robust feedback\nsystem for logical replication. We had this discussion in the \"skip empty\ntransactions\" thread [1].\n\n> On the other hand, possible another solution would be to add a new\n> callback that is called e.g., every 1000 changes so that walsender\n> does its job such as timeout handling while processing the decoded\n> data in reorderbuffer.c. The callback is set only if the walsender\n> does logical decoding, otherwise NULL. With this idea, other plugins\n> will also be able to benefit without changes. But I’m not really sure\n> it’s a good design, and adding a new callback introduces complexity.\nNo new callback is required.\n\nIn the current code, each output plugin callback is responsible to call\nOutputPluginUpdateProgress. It is up to the output plugin author to add calls\nto this function. The lack of a call in a callback might cause issues like what\nwas described in the initial message.\n\nThe functions CreateInitDecodingContext and CreateDecodingContext receives the\nupdate_progress function as a parameter. These functions are called in 2\nplaces: (a) streaming replication protocol (CREATE_REPLICATION_SLOT) and (b)\nSQL logical decoding functions (pg_logical_*_changes). Case (a) uses\nWalSndUpdateProgress as a progress function. Case (b) does not have one because\nit is not required -- local decoding/communication. 
There is no custom update\nprogress routine for each output plugin which leads me to the question:\ncouldn't we encapsulate the update progress call into the callback functions?\nIf so, we could have an output plugin parameter to inform which callbacks we\nwould like to call the update progress routine. This would simplify the code,\nmake it less error prone and wouldn't impose a burden on maintainability.\n\n[1] https://www.postgresql.org/message-id/20200309183018.tzkzwu635sd366ej%40alap3.anarazel.de\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 31 Mar 2022 23:03:03 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 7:33 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Mar 31, 2022, at 9:24 AM, Masahiko Sawada wrote:\n>\n> On the other hand, possible another solution would be to add a new\n> callback that is called e.g., every 1000 changes so that walsender\n> does its job such as timeout handling while processing the decoded\n> data in reorderbuffer.c. The callback is set only if the walsender\n> does logical decoding, otherwise NULL. With this idea, other plugins\n> will also be able to benefit without changes. But I’m not really sure\n> it’s a good design, and adding a new callback introduces complexity.\n>\n> No new callback is required.\n>\n> In the current code, each output plugin callback is responsible to call\n> OutputPluginUpdateProgress. It is up to the output plugin author to add calls\n> to this function. The lack of a call in a callback might cause issues like what\n> was described in the initial message.\n>\n\nThis is exactly our initial analysis and we have tried a patch on\nthese lines and it has a noticeable overhead. See [1]. Calling this\nfor each change or each skipped change can bring noticeable overhead\nthat is why we decided to call it after a certain threshold (100) of\nskipped changes. Now, surely as mentioned in my previous reply we can\nmake it generic such that instead of calling this (update_progress\nfunction as in the patch) for skipped cases, we call it always. Will\nthat make it better?\n\n> The functions CreateInitDecodingContext and CreateDecodingContext receives the\n> update_progress function as a parameter. These functions are called in 2\n> places: (a) streaming replication protocol (CREATE_REPLICATION_SLOT) and (b)\n> SQL logical decoding functions (pg_logical_*_changes). Case (a) uses\n> WalSndUpdateProgress as a progress function. Case (b) does not have one because\n> it is not required -- local decoding/communication. 
There is no custom update\n> progress routine for each output plugin which leads me to the question:\n> couldn't we encapsulate the update progress call into the callback functions?\n>\n\nSorry, I don't get your point. What exactly do you mean by this?\nAFAIS, currently we call this output plugin API in pgoutput functions\nonly, do you intend to get it invoked from a different place?\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275DFFDAC7A59FA148931529E209%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 07:57:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Mar 31, 2022, at 11:27 PM, Amit Kapila wrote:\n> This is exactly our initial analysis and we have tried a patch on\n> these lines and it has a noticeable overhead. See [1]. Calling this\n> for each change or each skipped change can bring noticeable overhead\n> that is why we decided to call it after a certain threshold (100) of\n> skipped changes. Now, surely as mentioned in my previous reply we can\n> make it generic such that instead of calling this (update_progress\n> function as in the patch) for skipped cases, we call it always. Will\n> that make it better?\nThat's what I have in mind but using a different approach.\n\n> > The functions CreateInitDecodingContext and CreateDecodingContext receives the\n> > update_progress function as a parameter. These functions are called in 2\n> > places: (a) streaming replication protocol (CREATE_REPLICATION_SLOT) and (b)\n> > SQL logical decoding functions (pg_logical_*_changes). Case (a) uses\n> > WalSndUpdateProgress as a progress function. Case (b) does not have one because\n> > it is not required -- local decoding/communication. There is no custom update\n> > progress routine for each output plugin which leads me to the question:\n> > couldn't we encapsulate the update progress call into the callback functions?\n> >\n> \n> Sorry, I don't get your point. What exactly do you mean by this?\n> AFAIS, currently we call this output plugin API in pgoutput functions\n> only, do you intend to get it invoked from a different place?\nIt seems I didn't make myself clear. The callbacks I'm referring to the\n*_cb_wrapper functions. After every ctx->callbacks.foo_cb() call into a\n*_cb_wrapper() function, we have something like:\n\nif (ctx->progress & PGOUTPUT_PROGRESS_FOO)\n NewUpdateProgress(ctx, false);\n\nThe NewUpdateProgress function would contain a logic similar to the\nupdate_progress() from the proposed patch. 
(A different function name here just\nto avoid confusion.)\n\nThe output plugin is responsible to set ctx->progress with the callback\nvariables (for example, PGOUTPUT_PROGRESS_CHANGE for change_cb()) that we would\nlike to run NewUpdateProgress.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 31 Mar 2022 23:57:39 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 8:28 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Mar 31, 2022, at 11:27 PM, Amit Kapila wrote:\n>\n> This is exactly our initial analysis and we have tried a patch on\n> these lines and it has a noticeable overhead. See [1]. Calling this\n> for each change or each skipped change can bring noticeable overhead\n> that is why we decided to call it after a certain threshold (100) of\n> skipped changes. Now, surely as mentioned in my previous reply we can\n> make it generic such that instead of calling this (update_progress\n> function as in the patch) for skipped cases, we call it always. Will\n> that make it better?\n>\n> That's what I have in mind but using a different approach.\n>\n> > The functions CreateInitDecodingContext and CreateDecodingContext receives the\n> > update_progress function as a parameter. These functions are called in 2\n> > places: (a) streaming replication protocol (CREATE_REPLICATION_SLOT) and (b)\n> > SQL logical decoding functions (pg_logical_*_changes). Case (a) uses\n> > WalSndUpdateProgress as a progress function. Case (b) does not have one because\n> > it is not required -- local decoding/communication. There is no custom update\n> > progress routine for each output plugin which leads me to the question:\n> > couldn't we encapsulate the update progress call into the callback functions?\n> >\n>\n> Sorry, I don't get your point. What exactly do you mean by this?\n> AFAIS, currently we call this output plugin API in pgoutput functions\n> only, do you intend to get it invoked from a different place?\n>\n> It seems I didn't make myself clear. The callbacks I'm referring to the\n> *_cb_wrapper functions. 
After every ctx->callbacks.foo_cb() call into a\n> *_cb_wrapper() function, we have something like:\n>\n> if (ctx->progress & PGOUTPUT_PROGRESS_FOO)\n> NewUpdateProgress(ctx, false);\n>\n> The NewUpdateProgress function would contain a logic similar to the\n> update_progress() from the proposed patch. (A different function name here just\n> to avoid confusion.)\n>\n> The output plugin is responsible to set ctx->progress with the callback\n> variables (for example, PGOUTPUT_PROGRESS_CHANGE for change_cb()) that we would\n> like to run NewUpdateProgress.\n>\n\nThis sounds like a conflicting approach to what we currently do.\nCurrently, OutputPluginUpdateProgress() is called from the xact\nrelated pgoutput functions like pgoutput_commit_txn(),\npgoutput_prepare_txn(), pgoutput_commit_prepared_txn(), etc. So, if we\nfollow what you are saying then for some of the APIs like\npgoutput_change/_message/_truncate, we need to set the parameter to\ninvoke NewUpdateProgress() which will internally call\nOutputPluginUpdateProgress(), and for the remaining APIs, we will call\nin the corresponding pgoutput_* function. I feel if we want to make it\nmore generic than the current patch, it is better to directly call\nwhat you are referring to here as NewUpdateProgress() in all remaining\nAPIs like pgoutput_change/_truncate, etc.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Apr 2022 09:38:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 12:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Apr 1, 2022 at 8:28 AM Euler Taveira <euler@eulerto.com> wrote:\r\n> >\r\n> > On Thu, Mar 31, 2022, at 11:27 PM, Amit Kapila wrote:\r\n> >\r\n> > This is exactly our initial analysis and we have tried a patch on\r\n> > these lines and it has a noticeable overhead. See [1]. Calling this\r\n> > for each change or each skipped change can bring noticeable overhead\r\n> > that is why we decided to call it after a certain threshold (100) of\r\n> > skipped changes. Now, surely as mentioned in my previous reply we can\r\n> > make it generic such that instead of calling this (update_progress\r\n> > function as in the patch) for skipped cases, we call it always. Will\r\n> > that make it better?\r\n> >\r\n> > That's what I have in mind but using a different approach.\r\n> >\r\n> > > The functions CreateInitDecodingContext and CreateDecodingContext\r\n> receives the\r\n> > > update_progress function as a parameter. These functions are called in 2\r\n> > > places: (a) streaming replication protocol (CREATE_REPLICATION_SLOT) and\r\n> (b)\r\n> > > SQL logical decoding functions (pg_logical_*_changes). Case (a) uses\r\n> > > WalSndUpdateProgress as a progress function. Case (b) does not have one\r\n> because\r\n> > > it is not required -- local decoding/communication. There is no custom\r\n> update\r\n> > > progress routine for each output plugin which leads me to the question:\r\n> > > couldn't we encapsulate the update progress call into the callback functions?\r\n> > >\r\n> >\r\n> > Sorry, I don't get your point. What exactly do you mean by this?\r\n> > AFAIS, currently we call this output plugin API in pgoutput functions\r\n> > only, do you intend to get it invoked from a different place?\r\n> >\r\n> > It seems I didn't make myself clear. The callbacks I'm referring to the\r\n> > *_cb_wrapper functions. 
After every ctx->callbacks.foo_cb() call into a\r\n> > *_cb_wrapper() function, we have something like:\r\n> >\r\n> > if (ctx->progress & PGOUTPUT_PROGRESS_FOO)\r\n> > NewUpdateProgress(ctx, false);\r\n> >\r\n> > The NewUpdateProgress function would contain a logic similar to the\r\n> > update_progress() from the proposed patch. (A different function name here\r\n> just\r\n> > to avoid confusion.)\r\n> >\r\n> > The output plugin is responsible to set ctx->progress with the callback\r\n> > variables (for example, PGOUTPUT_PROGRESS_CHANGE for change_cb())\r\n> that we would\r\n> > like to run NewUpdateProgress.\r\n> >\r\n> \r\n> This sounds like a conflicting approach to what we currently do.\r\n> Currently, OutputPluginUpdateProgress() is called from the xact\r\n> related pgoutput functions like pgoutput_commit_txn(),\r\n> pgoutput_prepare_txn(), pgoutput_commit_prepared_txn(), etc. So, if we\r\n> follow what you are saying then for some of the APIs like\r\n> pgoutput_change/_message/_truncate, we need to set the parameter to\r\n> invoke NewUpdateProgress() which will internally call\r\n> OutputPluginUpdateProgress(), and for the remaining APIs, we will call\r\n> in the corresponding pgoutput_* function. I feel if we want to make it\r\n> more generic than the current patch, it is better to directly call\r\n> what you are referring to here as NewUpdateProgress() in all remaining\r\n> APIs like pgoutput_change/_truncate, etc.\r\nThanks for your comments.\r\n\r\nAccording to your suggestion, improve the patch to make it more generic.\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 6 Apr 2022 05:39:05 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 11:09 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Apr 1, 2022 at 12:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Fri, Apr 1, 2022 at 8:28 AM Euler Taveira <euler@eulerto.com> wrote:\n> > >\n> > > It seems I didn't make myself clear. The callbacks I'm referring to the\n> > > *_cb_wrapper functions. After every ctx->callbacks.foo_cb() call into a\n> > > *_cb_wrapper() function, we have something like:\n> > >\n> > > if (ctx->progress & PGOUTPUT_PROGRESS_FOO)\n> > > NewUpdateProgress(ctx, false);\n> > >\n> > > The NewUpdateProgress function would contain a logic similar to the\n> > > update_progress() from the proposed patch. (A different function name here\n> > just\n> > > to avoid confusion.)\n> > >\n> > > The output plugin is responsible to set ctx->progress with the callback\n> > > variables (for example, PGOUTPUT_PROGRESS_CHANGE for change_cb())\n> > that we would\n> > > like to run NewUpdateProgress.\n> > >\n> >\n> > This sounds like a conflicting approach to what we currently do.\n> > Currently, OutputPluginUpdateProgress() is called from the xact\n> > related pgoutput functions like pgoutput_commit_txn(),\n> > pgoutput_prepare_txn(), pgoutput_commit_prepared_txn(), etc. So, if we\n> > follow what you are saying then for some of the APIs like\n> > pgoutput_change/_message/_truncate, we need to set the parameter to\n> > invoke NewUpdateProgress() which will internally call\n> > OutputPluginUpdateProgress(), and for the remaining APIs, we will call\n> > in the corresponding pgoutput_* function. 
I feel if we want to make it\n> > more generic than the current patch, it is better to directly call\n> > what you are referring to here as NewUpdateProgress() in all remaining\n> > APIs like pgoutput_change/_truncate, etc.\n> Thanks for your comments.\n>\n> According to your suggestion, improve the patch to make it more generic.\n> Attach the new patch.\n>\n\n typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\nLogicalDecodingContext *lr,\n XLogRecPtr Ptr,\n TransactionId xid,\n- bool skipped_xact\n+ bool skipped_xact,\n+ bool last_write\n\nIn this approach, I don't think we need an additional parameter\nlast_write. Let's do the work related to keepalive without a\nparameter, do you see any problem with that?\n\nAlso, let's try to evaluate how it impacts lag functionality for large\ntransactions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 11:28:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 11:09 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > According to your suggestion, improve the patch to make it more generic.\n> > Attach the new patch.\n> >\n>\n> typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\n> LogicalDecodingContext *lr,\n> XLogRecPtr Ptr,\n> TransactionId xid,\n> - bool skipped_xact\n> + bool skipped_xact,\n> + bool last_write\n>\n> In this approach, I don't think we need an additional parameter\n> last_write. Let's do the work related to keepalive without a\n> parameter, do you see any problem with that?\n>\n\nI think this patch doesn't take into account that we call\nOutputPluginUpdateProgress() from APIs like pgoutput_commit_txn(). I\nthink we should always call the new function update_progress from\nthose existing call sites and arrange the function such that when\ncalled from xact end APIs like pgoutput_commit_txn(), it always call\nOutputPluginUpdateProgress and make changes_count as 0.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 14:01:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 1:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\nOn Wed, Apr 6, 2022 at 4:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\r\n> LogicalDecodingContext *lr,\r\n> XLogRecPtr Ptr,\r\n> TransactionId xid,\r\n> - bool skipped_xact\r\n> + bool skipped_xact,\r\n> + bool last_write\r\n> \r\n> In this approach, I don't think we need an additional parameter last_write. Let's\r\n> do the work related to keepalive without a parameter, do you see any problem\r\n> with that?\r\nI agree with you. Modify this point.\r\n\r\n> I think this patch doesn't take into account that we call\r\n> OutputPluginUpdateProgress() from APIs like pgoutput_commit_txn(). I\r\n> think we should always call the new function update_progress from\r\n> those existing call sites and arrange the function such that when\r\n> called from xact end APIs like pgoutput_commit_txn(), it always call\r\n> OutputPluginUpdateProgress and make changes_count as 0.\r\nImprove it.\r\nAdd two new input to function update_progress.(skipped_xact and end_xact).\r\nModify the function invoke from OutputPluginUpdateProgress to update_progress.\r\n\r\n> Also, let's try to evaluate how it impacts lag functionality for large transactions?\r\nI think this patch will not affect lag functionality. It will updates the lag\r\nfield of view pg_stat_replication more frequently.\r\nIIUC, when invoking function WalSndUpdateProgress, it will store the lsn of\r\nchange and invoking time in lag_tracker. Then when invoking function\r\nProcessStandbyReplyMessage, it will calculate the lag field according to the\r\nmessage from subscriber and the information in lag_tracker. This patch does\r\nnot modify this logic, but only increases the frequency of invoking.\r\nPlease let me know if I understand wrong.\r\n\r\nAttach the new patch.\r\n1. 
Remove the new function input parameters in this patch(parameter last_write\r\nof WalSndUpdateProgress). [suggestion by Amit-San]\r\n2. Also invoke function update_progress in other xact end APIs like\r\npgoutput_commit_txn. [suggestion by Amit-San]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 6 Apr 2022 12:59:55 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 6:30 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 1:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Apr 6, 2022 at 4:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> Thanks for your comments.\n>\n> > typedef void (*LogicalOutputPluginWriterUpdateProgress) (struct\n> > LogicalDecodingContext *lr,\n> > XLogRecPtr Ptr,\n> > TransactionId xid,\n> > - bool skipped_xact\n> > + bool skipped_xact,\n> > + bool last_write\n> >\n> > In this approach, I don't think we need an additional parameter last_write. Let's\n> > do the work related to keepalive without a parameter, do you see any problem\n> > with that?\n> I agree with you. Modify this point.\n>\n> > I think this patch doesn't take into account that we call\n> > OutputPluginUpdateProgress() from APIs like pgoutput_commit_txn(). I\n> > think we should always call the new function update_progress from\n> > those existing call sites and arrange the function such that when\n> > called from xact end APIs like pgoutput_commit_txn(), it always call\n> > OutputPluginUpdateProgress and make changes_count as 0.\n> Improve it.\n> Add two new input to function update_progress.(skipped_xact and end_xact).\n> Modify the function invoke from OutputPluginUpdateProgress to update_progress.\n>\n> > Also, let's try to evaluate how it impacts lag functionality for large transactions?\n> I think this patch will not affect lag functionality. It will updates the lag\n> field of view pg_stat_replication more frequently.\n> IIUC, when invoking function WalSndUpdateProgress, it will store the lsn of\n> change and invoking time in lag_tracker. Then when invoking function\n> ProcessStandbyReplyMessage, it will calculate the lag field according to the\n> message from subscriber and the information in lag_tracker. 
This patch does\n> not modify this logic, but only increases the frequency of invoking.\n> Please let me know if I understand wrong.\n>\n\nNo, your understanding seems correct to me. But what I want to check\nis if calling the progress function more often has any impact on\nlag-related fields in pg_stat_replication? I think you need to check\nthe impact of large transaction replay.\n\nOne comment:\n+static void\n+update_progress(LogicalDecodingContext *ctx, bool skipped_xact, bool end_xact)\n+{\n+ static int changes_count = 0;\n+\n+ if (end_xact)\n+ {\n+ /* Update progress tracking at xact end. */\n+ OutputPluginUpdateProgress(ctx, skipped_xact);\n+ changes_count = 0;\n+ }\n+ /*\n+ * After continuously processing CHANGES_THRESHOLD changes, update progress\n+ * which will also try to send a keepalive message if required.\n\nI think you can simply return after making changes_count = 0. There\nshould be an empty line before starting the next comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Apr 2022 11:04:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 7, 2022 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\nThanks for your comments.\r\n\r\n> One comment:\r\n> +static void\r\n> +update_progress(LogicalDecodingContext *ctx, bool skipped_xact, bool\r\n> end_xact)\r\n> +{\r\n> + static int changes_count = 0;\r\n> +\r\n> + if (end_xact)\r\n> + {\r\n> + /* Update progress tracking at xact end. */\r\n> + OutputPluginUpdateProgress(ctx, skipped_xact);\r\n> + changes_count = 0;\r\n> + }\r\n> + /*\r\n> + * After continuously processing CHANGES_THRESHOLD changes, update\r\n> progress\r\n> + * which will also try to send a keepalive message if required.\r\n> \r\n> I think you can simply return after making changes_count = 0. There\r\n> should be an empty line before starting the next comment.\r\nImprove as suggested.\r\nBTW, there is a conflict in current HEAD when applying v12 because of the\r\ncommit 2c7ea57. Also rebase it.\r\n\r\nAttach the new patch.\r\n1. Make some improvements to the new function update_progress. [suggestion by Amit-San]\r\n2. Rebase the patch because the commit 2c7ea57 in current HEAD.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 8 Apr 2022 05:09:50 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 7, 2022 at 1:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Apr 6, 2022 at 6:30 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wed, Apr 6, 2022 at 1:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > On Wed, Apr 6, 2022 at 4:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > > Also, let's try to evaluate how it impacts lag functionality for large\r\n> transactions?\r\n> > I think this patch will not affect lag functionality. It will updates the lag\r\n> > field of view pg_stat_replication more frequently.\r\n> > IIUC, when invoking function WalSndUpdateProgress, it will store the lsn of\r\n> > change and invoking time in lag_tracker. Then when invoking function\r\n> > ProcessStandbyReplyMessage, it will calculate the lag field according to the\r\n> > message from subscriber and the information in lag_tracker. This patch does\r\n> > not modify this logic, but only increases the frequency of invoking.\r\n> > Please let me know if I understand wrong.\r\n> >\r\n> \r\n> No, your understanding seems correct to me. But what I want to check\r\n> is if calling the progress function more often has any impact on\r\n> lag-related fields in pg_stat_replication? I think you need to check\r\n> the impact of large transaction replay.\r\nThanks for the explanation.\r\n\r\nAfter doing some checks, I found that the v13 patch makes the calculations of\r\nlag functionality inaccurate.\r\n\r\nIn short, v13 patch lets us try to track lag more frequently and try to send a\r\nkeepalive message to subscribers. 
But in order to prevent flooding the lag\r\ntracker, we cannot track lag more than once within\r\nWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS (see function WalSndUpdateProgress).\r\nThis means we may lose information that needs to be tracked.\r\nFor example, suppose there is a large transaction with lsn from lsn1 to lsn3.\r\nIn HEAD, when we calculate the lag time for lsn3, the lag time of lsn3 is\r\n(now - lsn3.time).\r\nBut with v13 patch, when we calculate the lag time for lsn3, because there\r\nmay be no information for lsn3 but there is information for lsn2 in lag_tracker, the\r\nlag time of lsn3 is (now - lsn2.time). (see function LagTrackerRead)\r\nTherefore, if we lose the information that needs to be tracked, the lag time\r\nbecomes large and inaccurate.\r\n\r\nSo I skip tracking lag during a transaction just like the current HEAD.\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 11 Apr 2022 06:38:59 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 2:39 PM I wrote:\r\n> Attach the new patch.\r\nAlso, sharing test results and details.\r\n\r\nTo check that the lsn information used for the calculation is what we expected,\r\nI got some information by adding logs in the function LagTrackerRead.\r\n\r\nSummary of test results:\r\n- In current HEAD and current HEAD with v14 patch, we could find the\r\n information of the same lsn as received from subscriber-side in lag_tracker.\r\n- In current HEAD with v13 patch, we could hardly find the information of the\r\n same lsn in lag_tracker.\r\n\r\nAttach the details:\r\n[The log by HEAD]\r\nthe lsn we received from subscriber | the lsn whose time we used to calculate in lag_tracker\r\n382826584 | 382826584\r\n743884840 | 743884840\r\n1104943232 | 1104943232\r\n1468949424 | 1468949424\r\n1469521216 | 1469521216\r\n\r\n[The log by HEAD with v14 patch]\r\nthe lsn we received from subscriber | the lsn whose time we used to calculate in lag_tracker\r\n382826584 | 382826584\r\n743890672 | 743890672\r\n1105074264 | 1105074264\r\n1469127040 | 1469127040\r\n1830591240 | 1830591240\r\n\r\n[The log by HEAD with v13 patch]\r\nthe lsn we received from subscriber | the lsn whose time we used to calculate in lag_tracker\r\n382826584 | 359848728 \r\n743884840 | 713808560 \r\n1105010640 | 1073978544\r\n1468517536 | 1447850160\r\n1469516328 | 1469516328\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Mon, 11 Apr 2022 07:33:09 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> So I skip tracking lag during a transaction just like the current HEAD.\n> Attach the new patch.\n>\n\nThanks, please find the updated patch where I have slightly modified\nthe comments.\n\nSawada-San, Euler, do you have any opinion on this approach? I\npersonally still prefer the approach implemented in v10 [1] especially\ndue to the latest finding by Wang-San that we can't update the\nlag-tracker apart from when it is invoked at the transaction end.\nHowever, I am fine if we like this approach more.\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275E0C2B4D9E488AD7CBA209E1F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 13 Apr 2022 16:15:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > So I skip tracking lag during a transaction just like the current HEAD.\n> > Attach the new patch.\n> >\n>\n> Thanks, please find the updated patch where I have slightly modified\n> the comments.\n>\n> Sawada-San, Euler, do you have any opinion on this approach? I\n> personally still prefer the approach implemented in v10 [1] especially\n> due to the latest finding by Wang-San that we can't update the\n> lag-tracker apart from when it is invoked at the transaction end.\n> However, I am fine if we like this approach more.\n\nThank you for updating the patch.\n\nThe current patch looks much better than v10 which requires to call to\nupdate_progress() every path.\n\nRegarding v15 patch, I'm concerned a bit that the new function name,\nupdate_progress(), is too generic. How about\nupdate_replation_progress() or something more specific name?\n\n---\n+ if (end_xact)\n+ {\n+ /* Update progress tracking at xact end. */\n+ OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n+ changes_count = 0;\n+ return;\n+ }\n+\n+ /*\n+ * After continuously processing CHANGES_THRESHOLD changes,\nwe try to send\n+ * a keepalive message if required.\n+ *\n+ * We don't want to try sending a keepalive message after\nprocessing each\n+ * change as that can have overhead. Testing reveals that there is no\n+ * noticeable overhead in doing it after continuously\nprocessing 100 or so\n+ * changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n+ changes_count = 0;\n+ }\n\nCan we merge two if branches since we do the same things? Or did you\nseparate them for better readability?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 14 Apr 2022 21:20:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 13, 2022, at 7:45 AM, Amit Kapila wrote:\n> On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > So I skip tracking lag during a transaction just like the current HEAD.\n> > Attach the new patch.\n> >\n> \n> Thanks, please find the updated patch where I have slightly modified\n> the comments.\n> \n> Sawada-San, Euler, do you have any opinion on this approach? I\n> personally still prefer the approach implemented in v10 [1] especially\n> due to the latest finding by Wang-San that we can't update the\n> lag-tracker apart from when it is invoked at the transaction end.\n> However, I am fine if we like this approach more.\nIt seems v15 is simpler and less error prone than v10. v10 has a mix of\nOutputPluginUpdateProgress() and the new function update_progress(). The v10\nalso calls update_progress() for every change action in pgoutput_change(). It\nis not a good approach for maintainability -- new changes like sequences need\nextra calls. However, as you mentioned there should handle the track lag case.\n\nBoth patches change the OutputPluginUpdateProgress() so it cannot be\nbackpatched. Are you planning to backpatch it? If so, the boolean variable\n(last_write or end_xacts depending of which version you are considering) could\nbe added to LogicalDecodingContext. (You should probably consider this approach\nfor skipped_xact too)\n\n+ * For a large transaction, if we don't send any change to the downstream for a\n+ * long time then it can timeout. This can happen when all or most of the\n+ * changes are either not published or got filtered out.\n\nWe should probable mention that \"long time\" is wal_receiver_timeout on\nsubscriber.\n\n+ * change as that can have overhead. 
Testing reveals that there is no\n+ * noticeable overhead in doing it after continuously processing 100 or so\n+ * changes.\n\nTests revealed that ...\n\n+ * We don't have a mechanism to get the ack for any LSN other than end xact\n+ * lsn from the downstream. So, we track lag only for end xact lsn's.\n\ns/lsn/LSN/ and s/lsn's/LSNs/\n\nI would say \"end of transaction LSN\".\n\n+ * If too many changes are processed then try to send a keepalive message to\n+ * receiver to avoid timeouts.\n\nIn logical replication, if too many changes are processed then try to send a\nkeepalive message. It might avoid a timeout in the subscriber.\n\nDoes this same issue occur for long transactions? I mean keep a long\ntransaction open and execute thousands of transactions.\n\nBEGIN;\nINSERT INTO foo (a) VALUES(1);\n-- wait a few hours while executing 10^x transactions\nINSERT INTO foo (a) VALUES(2);\nCOMMIT;\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Apr 2022 09:21:27 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 5:52 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Apr 13, 2022, at 7:45 AM, Amit Kapila wrote:\n>\n> Sawada-San, Euler, do you have any opinion on this approach? I\n> personally still prefer the approach implemented in v10 [1] especially\n> due to the latest finding by Wang-San that we can't update the\n> lag-tracker apart from when it is invoked at the transaction end.\n> However, I am fine if we like this approach more.\n>\n> It seems v15 is simpler and less error prone than v10. v10 has a mix of\n> OutputPluginUpdateProgress() and the new function update_progress(). The v10\n> also calls update_progress() for every change action in pgoutput_change(). It\n> is not a good approach for maintainability -- new changes like sequences need\n> extra calls.\n>\n\nOkay, let's use the v15 approach as Sawada-San also seems to have a\npreference for that.\n\n> However, as you mentioned there should handle the track lag case.\n>\n> Both patches change the OutputPluginUpdateProgress() so it cannot be\n> backpatched. Are you planning to backpatch it? If so, the boolean variable\n> (last_write or end_xacts depending of which version you are considering) could\n> be added to LogicalDecodingContext.\n>\n\nIf we add it to LogicalDecodingContext then I think we have to always\nreset the variable after its use which will make it look ugly and\nerror-prone. I was not thinking to backpatch it because of the API\nchange but I guess if we want to backpatch then we can add it to\nLogicalDecodingContext for back-branches. I am not sure if that will\nlook committable but surely we can try.\n\n> (You should probably consider this approach\n> for skipped_xact too)\n>\n\nAs mentioned, I think it will be more error-prone and we already have\nother xact related parameters in that and similar APIs. So, I am not\nsure why you want to prefer that?\n\n>\n> Does this same issue occur for long transactions? 
I mean keep a long\n> transaction open and execute thousands of transactions.\n>\n\nNo, this problem won't happen for such cases because we will only try\nto send it at the commit time. Note that this problem happens only\nwhen we don't send anything to the subscriber till a timeout happens.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 09:29:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 5:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 13, 2022 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > So I skip tracking lag during a transaction just like the current HEAD.\n> > > Attach the new patch.\n> > >\n> >\n> > Thanks, please find the updated patch where I have slightly modified\n> > the comments.\n> >\n> > Sawada-San, Euler, do you have any opinion on this approach? I\n> > personally still prefer the approach implemented in v10 [1] especially\n> > due to the latest finding by Wang-San that we can't update the\n> > lag-tracker apart from when it is invoked at the transaction end.\n> > However, I am fine if we like this approach more.\n>\n> Thank you for updating the patch.\n>\n> The current patch looks much better than v10 which requires to call to\n> update_progress() every path.\n>\n> Regarding v15 patch, I'm concerned a bit that the new function name,\n> update_progress(), is too generic. How about\n> update_replation_progress() or something more specific name?\n>\n\nDo you intend to say update_replication_progress()? The word\n'replation' doesn't make sense to me. I am fine with this suggestion.\n\n>\n> ---\n> + if (end_xact)\n> + {\n> + /* Update progress tracking at xact end. */\n> + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n> + changes_count = 0;\n> + return;\n> + }\n> +\n> + /*\n> + * After continuously processing CHANGES_THRESHOLD changes,\n> we try to send\n> + * a keepalive message if required.\n> + *\n> + * We don't want to try sending a keepalive message after\n> processing each\n> + * change as that can have overhead. 
Testing reveals that there is no\n> + * noticeable overhead in doing it after continuously\n> processing 100 or so\n> + * changes.\n> + */\n> +#define CHANGES_THRESHOLD 100\n> + if (++changes_count >= CHANGES_THRESHOLD)\n> + {\n> + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n> + changes_count = 0;\n> + }\n>\n> Can we merge two if branches since we do the same things? Or did you\n> separate them for better readability?\n>\n\nI think it is fine to merge the two checks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 09:31:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 14, 2022 at 5:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 13, 2022 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n> > > <wangw.fnst@fujitsu.com> wrote:\n> > > >\n> > > > So I skip tracking lag during a transaction just like the current HEAD.\n> > > > Attach the new patch.\n> > > >\n> > >\n> > > Thanks, please find the updated patch where I have slightly modified\n> > > the comments.\n> > >\n> > > Sawada-San, Euler, do you have any opinion on this approach? I\n> > > personally still prefer the approach implemented in v10 [1] especially\n> > > due to the latest finding by Wang-San that we can't update the\n> > > lag-tracker apart from when it is invoked at the transaction end.\n> > > However, I am fine if we like this approach more.\n> >\n> > Thank you for updating the patch.\n> >\n> > The current patch looks much better than v10 which requires to call to\n> > update_progress() every path.\n> >\n> > Regarding v15 patch, I'm concerned a bit that the new function name,\n> > update_progress(), is too generic. How about\n> > update_replation_progress() or something more specific name?\n> >\n>\n> Do you intend to say update_replication_progress()? The word\n> 'replation' doesn't make sense to me. I am fine with this suggestion.\n\nYeah, that was a typo. I meant update_replication_progress().\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:35:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 9:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 14, 2022 at 5:52 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Wed, Apr 13, 2022, at 7:45 AM, Amit Kapila wrote:\n> >\n> > Sawada-San, Euler, do you have any opinion on this approach? I\n> > personally still prefer the approach implemented in v10 [1] especially\n> > due to the latest finding by Wang-San that we can't update the\n> > lag-tracker apart from when it is invoked at the transaction end.\n> > However, I am fine if we like this approach more.\n> >\n> > It seems v15 is simpler and less error prone than v10. v10 has a mix of\n> > OutputPluginUpdateProgress() and the new function update_progress(). The v10\n> > also calls update_progress() for every change action in pgoutput_change(). It\n> > is not a good approach for maintainability -- new changes like sequences need\n> > extra calls.\n> >\n>\n> Okay, let's use the v15 approach as Sawada-San also seems to have a\n> preference for that.\n>\n> > However, as you mentioned there should handle the track lag case.\n> >\n> > Both patches change the OutputPluginUpdateProgress() so it cannot be\n> > backpatched. Are you planning to backpatch it? If so, the boolean variable\n> > (last_write or end_xacts depending of which version you are considering) could\n> > be added to LogicalDecodingContext.\n> >\n>\n> If we add it to LogicalDecodingContext then I think we have to always\n> reset the variable after its use which will make it look ugly and\n> error-prone. I was not thinking to backpatch it because of the API\n> change but I guess if we want to backpatch then we can add it to\n> LogicalDecodingContext for back-branches. 
I am not sure if that will\n> look committable but surely we can try.\n>\n\nEven, if we want to add the variable in the struct in back-branches,\nwe need to ensure not to change the size of the struct as it is\nexposed, see email [1] for a similar mistake we made in another case.\n\n[1] - https://www.postgresql.org/message-id/2358496.1649168259%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 10:05:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 00:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Mon, Apr 18, 2022 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Thu, Apr 14, 2022 at 5:50 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Wed, Apr 13, 2022 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > > On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\r\n> > > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > > >\r\n> > > > > So I skip tracking lag during a transaction just like the current HEAD.\r\n> > > > > Attach the new patch.\r\n> > > > >\r\n> > > >\r\n> > > > Thanks, please find the updated patch where I have slightly\r\n> > > > modified the comments.\r\n> > > >\r\n> > > > Sawada-San, Euler, do you have any opinion on this approach? I\r\n> > > > personally still prefer the approach implemented in v10 [1]\r\n> > > > especially due to the latest finding by Wang-San that we can't\r\n> > > > update the lag-tracker apart from when it is invoked at the transaction end.\r\n> > > > However, I am fine if we like this approach more.\r\n> > >\r\n> > > Thank you for updating the patch.\r\n> > >\r\n> > > The current patch looks much better than v10 which requires to call\r\n> > > to\r\n> > > update_progress() every path.\r\n> > >\r\n> > > Regarding v15 patch, I'm concerned a bit that the new function name,\r\n> > > update_progress(), is too generic. How about\r\n> > > update_replation_progress() or something more specific name?\r\n> > >\r\n> >\r\n> > Do you intend to say update_replication_progress()? The word\r\n> > 'replation' doesn't make sense to me. I am fine with this suggestion.\r\n> \r\n> Yeah, that was a typo. I meant update_replication_progress().\r\nThanks for your comments.\r\n\r\n> > > Regarding v15 patch, I'm concerned a bit that the new function name,\r\n> > > update_progress(), is too generic. 
How about\r\n> > > update_replation_progress() or something more specific name?\r\nImprove as suggested. Change the name from update_progress to\r\nupdate_replication_progress.\r\n\r\n> > > ---\r\n> > > + if (end_xact)\r\n> > > + {\r\n> > > + /* Update progress tracking at xact end. */\r\n> > > + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\r\n> > > + changes_count = 0;\r\n> > > + return;\r\n> > > + }\r\n> > > +\r\n> > > + /*\r\n> > > + * After continuously processing CHANGES_THRESHOLD changes,\r\n> > > we try to send\r\n> > > + * a keepalive message if required.\r\n> > > + *\r\n> > > + * We don't want to try sending a keepalive message after\r\n> > > processing each\r\n> > > + * change as that can have overhead. Testing reveals that there is no\r\n> > > + * noticeable overhead in doing it after continuously\r\n> > > processing 100 or so\r\n> > > + * changes.\r\n> > > + */\r\n> > > +#define CHANGES_THRESHOLD 100\r\n> > > + if (++changes_count >= CHANGES_THRESHOLD)\r\n> > > + {\r\n> > > + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\r\n> > > + changes_count = 0;\r\n> > > + }\r\n> > > \r\n> > > Can we merge two if branches since we do the same things? Or did you\r\n> > > separate them for better readability?\r\nImprove as suggested. Merge two if-branches.\r\n\r\nAttach the new patch.\r\n1. Rename the new function(update_progress) to update_replication_progress. [suggestion by Sawada-San]\r\n2. Merge two if-branches in new function update_replication_progress. [suggestion by Sawada-San.]\r\n3. Improve comments to make them clear. [suggestions by Euler-San.]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Mon, 18 Apr 2022 06:16:40 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 8:21 PM Euler Taveira <euler@eulerto.com> wrote:\n>\nThanks for your comments.\n\n> + * For a large transaction, if we don't send any change to the downstream for a\n> + * long time then it can timeout. This can happen when all or most of the\n> + * changes are either not published or got filtered out.\n> \n> We should probable mention that \"long time\" is wal_receiver_timeout on\n> subscriber.\nImprove as suggested.\nAdd \"(exceeds the wal_receiver_timeout of standby)\" to explain what \"long time\"\nmeans.\n\n> + * change as that can have overhead. Testing reveals that there is no\n> + * noticeable overhead in doing it after continuously processing 100 or so\n> + * changes.\n> \n> Tests revealed that ...\nImprove as suggested.\n\n> + * We don't have a mechanism to get the ack for any LSN other than end xact\n> + * lsn from the downstream. So, we track lag only for end xact lsn's.\n> \n> s/lsn/LSN/ and s/lsn's/LSNs/\n> \n> I would say \"end of transaction LSN\".\nImprove as suggested.\n\n> + * If too many changes are processed then try to send a keepalive message to\n> + * receiver to avoid timeouts.\n> \n> In logical replication, if too many changes are processed then try to send a\n> keepalive message. It might avoid a timeout in the subscriber.\nImprove as suggested.\n\nKindly have a look at new patch shared in [1].\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB627561344A2C7ECF68E41D7E9EF39%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\nRegards,\nWang wei\n\n\n",
"msg_date": "Mon, 18 Apr 2022 06:19:15 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 3:16 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Apr 18, 2022 at 00:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Mon, Apr 18, 2022 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 14, 2022 at 5:50 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > wrote:\n> > > >\n> > > > On Wed, Apr 13, 2022 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > > >\n> > > > > On Mon, Apr 11, 2022 at 12:09 PM wangw.fnst@fujitsu.com\n> > > > > <wangw.fnst@fujitsu.com> wrote:\n> > > > > >\n> > > > > > So I skip tracking lag during a transaction just like the current HEAD.\n> > > > > > Attach the new patch.\n> > > > > >\n> > > > >\n> > > > > Thanks, please find the updated patch where I have slightly\n> > > > > modified the comments.\n> > > > >\n> > > > > Sawada-San, Euler, do you have any opinion on this approach? I\n> > > > > personally still prefer the approach implemented in v10 [1]\n> > > > > especially due to the latest finding by Wang-San that we can't\n> > > > > update the lag-tracker apart from when it is invoked at the transaction end.\n> > > > > However, I am fine if we like this approach more.\n> > > >\n> > > > Thank you for updating the patch.\n> > > >\n> > > > The current patch looks much better than v10 which requires to call\n> > > > to\n> > > > update_progress() every path.\n> > > >\n> > > > Regarding v15 patch, I'm concerned a bit that the new function name,\n> > > > update_progress(), is too generic. How about\n> > > > update_replation_progress() or something more specific name?\n> > > >\n> > >\n> > > Do you intend to say update_replication_progress()? The word\n> > > 'replation' doesn't make sense to me. I am fine with this suggestion.\n> >\n> > Yeah, that was a typo. 
I meant update_replication_progress().\n> Thanks for your comments.\n>\n> > > > Regarding v15 patch, I'm concerned a bit that the new function name,\n> > > > update_progress(), is too generic. How about\n> > > > update_replation_progress() or something more specific name?\n> Improve as suggested. Change the name from update_progress to\n> update_replication_progress.\n>\n> > > > ---\n> > > > + if (end_xact)\n> > > > + {\n> > > > + /* Update progress tracking at xact end. */\n> > > > + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n> > > > + changes_count = 0;\n> > > > + return;\n> > > > + }\n> > > > +\n> > > > + /*\n> > > > + * After continuously processing CHANGES_THRESHOLD changes,\n> > > > we try to send\n> > > > + * a keepalive message if required.\n> > > > + *\n> > > > + * We don't want to try sending a keepalive message after\n> > > > processing each\n> > > > + * change as that can have overhead. Testing reveals that there is no\n> > > > + * noticeable overhead in doing it after continuously\n> > > > processing 100 or so\n> > > > + * changes.\n> > > > + */\n> > > > +#define CHANGES_THRESHOLD 100\n> > > > + if (++changes_count >= CHANGES_THRESHOLD)\n> > > > + {\n> > > > + OutputPluginUpdateProgress(ctx, skipped_xact, end_xact);\n> > > > + changes_count = 0;\n> > > > + }\n> > > >\n> > > > Can we merge two if branches since we do the same things? Or did you\n> > > > separate them for better readability?\n> Improve as suggested. Merge two if-branches.\n>\n> Attach the new patch.\n> 1. Rename the new function(update_progress) to update_replication_progress. [suggestion by Sawada-San]\n> 2. Merge two if-branches in new function update_replication_progress. [suggestion by Sawada-San.]\n> 3. Improve comments to make them clear. 
[suggestions by Euler-San.]\n\nThank you for updating the patch.\n\n+ * For a large transaction, if we don't send any change to the downstream for a\n+ * long time(exceeds the wal_receiver_timeout of standby) then it can timeout.\n+ * This can happen when all or most of the changes are either not published or\n+ * got filtered out.\n\n+ */\n+ if(end_xact || ++changes_count >= CHANGES_THRESHOLD)\n+ {\n\nWe need a whitespace before '(' at above two places. The rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:32:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 19, 2022 at 9:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Thank you for updating the patch.\r\nThanks for your comments.\r\n\r\n> + * For a large transaction, if we don't send any change to the\r\n> + downstream for a\r\n> + * long time(exceeds the wal_receiver_timeout of standby) then it can\r\n> timeout.\r\n> + * This can happen when all or most of the changes are either not\r\n> + published or\r\n> + * got filtered out.\r\n> \r\n> + */\r\n> + if(end_xact || ++changes_count >= CHANGES_THRESHOLD) {\r\n> \r\n> We need a whitespace before '(' at above two places. The rest looks good to me.\r\nFix these.\r\n\r\nAttach the new patch.\r\n1. Fix wrong formatting. [suggestion by Sawada-San]\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 19 Apr 2022 01:52:15 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 00:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Apr 18, 2022 at 9:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Thu, Apr 14, 2022 at 5:52 PM Euler Taveira <euler@eulerto.com> wrote:\r\n> > >\r\n> > > On Wed, Apr 13, 2022, at 7:45 AM, Amit Kapila wrote:\r\n> > >\r\n> > > Sawada-San, Euler, do you have any opinion on this approach? I\r\n> > > personally still prefer the approach implemented in v10 [1]\r\n> > > especially due to the latest finding by Wang-San that we can't\r\n> > > update the lag-tracker apart from when it is invoked at the transaction end.\r\n> > > However, I am fine if we like this approach more.\r\n> > >\r\n> > > It seems v15 is simpler and less error prone than v10. v10 has a mix\r\n> > > of\r\n> > > OutputPluginUpdateProgress() and the new function update_progress().\r\n> > > The v10 also calls update_progress() for every change action in\r\n> > > pgoutput_change(). It is not a good approach for maintainability --\r\n> > > new changes like sequences need extra calls.\r\n> > >\r\n> >\r\n> > Okay, let's use the v15 approach as Sawada-San also seems to have a\r\n> > preference for that.\r\n> >\r\n> > > However, as you mentioned there should handle the track lag case.\r\n> > >\r\n> > > Both patches change the OutputPluginUpdateProgress() so it cannot be\r\n> > > backpatched. Are you planning to backpatch it? If so, the boolean\r\n> > > variable (last_write or end_xacts depending of which version you are\r\n> > > considering) could be added to LogicalDecodingContext.\r\n> > >\r\n> >\r\n> > If we add it to LogicalDecodingContext then I think we have to always\r\n> > reset the variable after its use which will make it look ugly and\r\n> > error-prone. I was not thinking to backpatch it because of the API\r\n> > change but I guess if we want to backpatch then we can add it to\r\n> > LogicalDecodingContext for back-branches. 
I am not sure if that will\r\n> > look committable but surely we can try.\r\n> >\r\n> \r\n> Even, if we want to add the variable in the struct in back-branches, we need to\r\n> ensure not to change the size of the struct as it is exposed, see email [1] for a\r\n> similar mistake we made in another case.\r\n> \r\n> [1] - https://www.postgresql.org/message-\r\n> id/2358496.1649168259%40sss.pgh.pa.us\r\nThanks for your comments.\r\n\r\nI did some checks about adding the new variable in LogicalDecodingContext.\r\nI found that because of padding, if we add the new variable at the end of\r\nstructure, it does not make the structure size change. I verified this in\r\nREL_10~REL_14.\r\n\r\nSo as suggested by Euler-San and Amit-San, I wrote the patch for REL_14. Attach\r\nthis patch. To prevent patch confusion, the patch for HEAD is also attached.\r\nThe patch for REL_14:\r\n REL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch\r\nThe patch for HEAD:\r\n v17-0001-Fix-the-logical-replication-timeout-during-large.patch\r\n\r\nThe following is the details of checks.\r\nOn gcc/Linux/x86-64, in REL_14, by looking at the size of each member variable\r\nin the structure LogicalDecodingContext, I found that there are three parts\r\npadding behind the following member variables:\r\n- 7 bytes after fast_forward\r\n- 4 bytes after prepared_write\r\n- 4 bytes after write_xid\r\n\r\nIf we add the new variable at the end of structure (bool takes one byte), it\r\nmeans we will only consume one byte of padding after member write_xid. And\r\nthen, at the end of the struct, 3 padding are still required. 
For easy\r\nunderstanding, please refer to the following simple calculation.\r\n(In REL14, the size of structure LogicalDecodingContext is 304 bytes.)\r\nBefore adding new variable (In REL14):\r\n8+8+8+8+8+1+168+8+8+8+8+8+8+8+8+1+1+1+1+8+4 = 289 (if padding is not considered)\r\n +7 +4 +4 = +15 (the padding)\r\nSo, the size of structure LogicalDecodingContext is 289+15=304.\r\nAfter adding new variable (In REL14 with patch):\r\n8+8+8+8+8+1+168+8+8+8+8+8+8+8+8+1+1+1+1+8+4+1 = 290 (if padding is not considered)\r\n +7 +4 +3 = +14 (the padding)\r\nSo, the size of structure LogicalDecodingContext is 290+14=304.\r\n\r\nBTW, the size of structure LogicalDecodingContext in REL_10~REL_13 is 184, 200,\r\n200,200 respectively. And I found that at the end of the structure\r\nLogicalDecodingContext, there are always the following members:\r\n```\r\n XLogRecPtr write_location; --> 8\r\n TransactionId write_xid; --> 4\r\n --> There are 4 padding after write_xid.\r\n```\r\nIt means at the end of structure LogicalDecodingContext, there are 4 bytes\r\npadding. So, if we add a new bool type variable (It takes one byte) at the end\r\nof the structure LogicalDecodingContext, I think in the current REL_10~REL_14,\r\nbecause of padding, the size of the structure LogicalDecodingContext will not\r\nchange.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 20 Apr 2022 02:46:49 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 11:46 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Apr 18, 2022 at 00:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Apr 18, 2022 at 9:29 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 14, 2022 at 5:52 PM Euler Taveira <euler@eulerto.com> wrote:\n> > > >\n> > > > On Wed, Apr 13, 2022, at 7:45 AM, Amit Kapila wrote:\n> > > >\n> > > > Sawada-San, Euler, do you have any opinion on this approach? I\n> > > > personally still prefer the approach implemented in v10 [1]\n> > > > especially due to the latest finding by Wang-San that we can't\n> > > > update the lag-tracker apart from when it is invoked at the transaction end.\n> > > > However, I am fine if we like this approach more.\n> > > >\n> > > > It seems v15 is simpler and less error prone than v10. v10 has a mix\n> > > > of\n> > > > OutputPluginUpdateProgress() and the new function update_progress().\n> > > > The v10 also calls update_progress() for every change action in\n> > > > pgoutput_change(). It is not a good approach for maintainability --\n> > > > new changes like sequences need extra calls.\n> > > >\n> > >\n> > > Okay, let's use the v15 approach as Sawada-San also seems to have a\n> > > preference for that.\n> > >\n> > > > However, as you mentioned there should handle the track lag case.\n> > > >\n> > > > Both patches change the OutputPluginUpdateProgress() so it cannot be\n> > > > backpatched. Are you planning to backpatch it? If so, the boolean\n> > > > variable (last_write or end_xacts depending of which version you are\n> > > > considering) could be added to LogicalDecodingContext.\n> > > >\n> > >\n> > > If we add it to LogicalDecodingContext then I think we have to always\n> > > reset the variable after its use which will make it look ugly and\n> > > error-prone. 
I was not thinking to backpatch it because of the API\n> > > change but I guess if we want to backpatch then we can add it to\n> > > LogicalDecodingContext for back-branches. I am not sure if that will\n> > > look committable but surely we can try.\n> > >\n> >\n> > Even, if we want to add the variable in the struct in back-branches, we need to\n> > ensure not to change the size of the struct as it is exposed, see email [1] for a\n> > similar mistake we made in another case.\n> >\n> > [1] - https://www.postgresql.org/message-\n> > id/2358496.1649168259%40sss.pgh.pa.us\n> Thanks for your comments.\n>\n> I did some checks about adding the new variable in LogicalDecodingContext.\n> I found that because of padding, if we add the new variable at the end of\n> structure, it dose not make the structure size change. I verified this in\n> REL_10~REL_14.\n>\n> So as suggested by Euler-San and Amit-San, I wrote the patch for REL_14. Attach\n> this patch. To prevent patch confusion, the patch for HEAD is also attached.\n> The patch for REL_14:\n> REL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch\n> The patch for HEAD:\n> v17-0001-Fix-the-logical-replication-timeout-during-large.patch\n>\n> The following is the details of checks.\n> On gcc/Linux/x86-64, in REL_14, by looking at the size of each member variable\n> in the structure LogicalDecodingContext, I found that there are three parts\n> padding behind the following member variables:\n> - 7 bytes after fast_forward\n> - 4 bytes after prepared_write\n> - 4 bytes after write_xid\n>\n> If we add the new variable at the end of structure (bool takes one byte), it\n> means we will only consume one byte of padding after member write_xid. And\n> then, at the end of the struct, 3 padding are still required. 
For easy\n> understanding, please refer to the following simple calculation.\n> (In REL14, the size of structure LogicalDecodingContext is 304 bytes.)\n> Before adding new variable (In REL14):\n> 8+8+8+8+8+1+168+8+8+8+8+8+8+8+8+1+1+1+1+8+4 = 289 (if padding is not considered)\n> +7 +4 +4 = +15 (the padding)\n> So, the size of structure LogicalDecodingContext is 289+15=304.\n> After adding new variable (In REL14 with patch):\n> 8+8+8+8+8+1+168+8+8+8+8+8+8+8+8+1+1+1+1+8+4+1 = 290 (if padding is not considered)\n> +7 +4 +3 = +14 (the padding)\n> So, the size of structure LogicalDecodingContext is 290+14=304.\n>\n> BTW, the size of structure LogicalDecodingContext in REL_10~REL_13 is 184, 200,\n> 200,200 respectively. And I found that at the end of the structure\n> LogicalDecodingContext, there are always the following members:\n> ```\n> XLogRecPtr write_location; --> 8\n> TransactionId write_xid; --> 4\n> --> There are 4 padding after write_xid.\n> ```\n\nI'm concerned that this 4-byte padding at the end of the struct could\ndepend on platforms (there might be no padding in 32-bit platforms?).\nIt seems to me that it's better to put it after fast_forward where the\nnew field should fall within the padding space.\n\nBTW the changes in\nREL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch,\nadding end_xact to LogicalDecodingContext, seems good to me and it\nmight be better than the approach of v17 patch from plugin developers’\nperspective? This is because they won’t need to pass true/false to\nend_xact of OutputPluginUpdateProgress(). Furthermore, if we do what\nwe do in update_replication_progress() in\nOutputPluginUpdateProgress(), what plugins need to do will be just to\ncall OutputPluginUpdate() in every callback and they don't need to\nhave the CHANGES_THRESHOLD logic. What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 20 Apr 2022 16:20:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 12:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 20, 2022 at 11:46 AM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> > ```\n>\n> I'm concerned that this 4-byte padding at the end of the struct could\n> depend on platforms (there might be no padding in 32-bit platforms?).\n>\n\nGood point, but ...\n\n> It seems to me that it's better to put it after fast_forward where the\n> new field should fall within the padding space.\n>\n\nCan we add the variable in between the existing variables in the\nstructure in the back branches? Normally, we add at the end to avoid\nany breakage of existing apps. See commit 56e366f675 and discussion at\n[1]. That is related to enum but I think we follow the same for\nstructures.\n\n[1] - https://www.postgresql.org/message-id/7dab0929-a966-0c0a-4726-878fced2fe00%40enterprisedb.com\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 14:38:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 20, 2022 at 12:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 20, 2022 at 11:46 AM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > > ```\n> >\n> > I'm concerned that this 4-byte padding at the end of the struct could\n> > depend on platforms (there might be no padding in 32-bit platforms?).\n> >\n>\n> Good point, but ...\n>\n> > It seems to me that it's better to put it after fast_forward where the\n> > new field should fall within the padding space.\n> >\n>\n> Can we add the variable in between the existing variables in the\n> structure in the back branches?\n>\n\nI think it should be fine if it falls in the padding space. We have\ndone similar changes recently in back-branches [1]. I think it would\nbe then better to have it in the same place in HEAD as well?\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=10520f4346876aad4941797c2255a21bdac74739\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 20 Apr 2022 15:42:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 20, 2022 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 20, 2022 at 12:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 20, 2022 at 11:46 AM wangw.fnst@fujitsu.com\n> > > <wangw.fnst@fujitsu.com> wrote:\n> > > > ```\n> > >\n> > > I'm concerned that this 4-byte padding at the end of the struct could\n> > > depend on platforms (there might be no padding in 32-bit platforms?).\n> > >\n> >\n> > Good point, but ...\n> >\n> > > It seems to me that it's better to put it after fast_forward where the\n> > > new field should fall within the padding space.\n> > >\n> >\n> > Can we add the variable in between the existing variables in the\n> > structure in the back branches?\n> >\n>\n> I think it should be fine if it falls in the padding space. We have\n> done similar changes recently in back-branches [1].\n\nYes.\n\n> I think it would\n> be then better to have it in the same place in HEAD as well?\n\nAs far as I can see in the v17 patch, which is for HEAD, we don't add\na variable to LogicalDecodingContext, but did you refer to another\npatch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 20 Apr 2022 21:51:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 6:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Apr 20, 2022 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Apr 20, 2022 at 12:51 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Wed, Apr 20, 2022 at 11:46 AM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > > ```\r\n> > >\r\n> > > I'm concerned that this 4-byte padding at the end of the struct could\r\n> > > depend on platforms (there might be no padding in 32-bit platforms?).\r\n> > >\r\n> >\r\n> > Good point, but ...\r\n> >\r\n> > > It seems to me that it's better to put it after fast_forward where the\r\n> > > new field should fall within the padding space.\r\n> > >\r\n> >\r\n> > Can we add the variable in between the existing variables in the\r\n> > structure in the back branches?\r\n> >\r\n> \r\n> I think it should be fine if it falls in the padding space. We have\r\n> done similar changes recently in back-branches [1]. I think it would\r\n> be then better to have it in the same place in HEAD as well?\r\n> \r\n> [1] -\r\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=10520f4346\r\n> 876aad4941797c2255a21bdac74739\r\nThanks for your comments.\r\n\r\nThe comments by Sawada-San sound reasonable to me.\r\nAfter doing check, I found that padding in HEAD is the same as in REL14.\r\nSo I change the approach of patch for HEAD just like the patch for REL14.\r\n\r\nOn Wed, Apr 20, 2022 at 3:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I'm concerned that this 4-byte padding at the end of the struct could\r\n> depend on platforms (there might be no padding in 32-bit platforms?).\r\n> It seems to me that it's better to put it after fast_forward where the\r\n> new field should fall within the padding space.\r\nFixed. 
Added the new variable after fast_forward.\r\n\r\n> BTW the changes in\r\n> REL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch,\r\n> adding end_xact to LogicalDecodingContext, seems good to me and it\r\n> might be better than the approach of v17 patch from plugin developers’\r\n> perspective? This is because they won’t need to pass true/false to\r\n> end_xact of OutputPluginUpdateProgress(). Furthermore, if we do what\r\n> we do in update_replication_progress() in\r\n> OutputPluginUpdateProgress(), what plugins need to do will be just to\r\n> call OutputPluginUpdate() in every callback and they don't need to\r\n> have the CHANGES_THRESHOLD logic. What do you think?\r\nChanged the approach of the patch for HEAD. (The size of the structure does not\r\nchange.)\r\nAlso moved the logic of the function update_replication_progress into\r\nOutputPluginUpdateProgress.\r\n\r\nAttached the patches. [suggestions by Sawada-San]\r\n1. Change the position of the new variable in the structure.\r\n2. Change the approach of the patch for HEAD.\r\n3. Delete the new function update_replication_progress.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Thu, 21 Apr 2022 02:14:41 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 20, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I think it would\n> > be then better to have it in the same place in HEAD as well?\n>\n> As far as I can see in the v17 patch, which is for HEAD, we don't add\n> a variable to LogicalDecodingContext, but did you refer to another\n> patch?\n>\n\nNo, I thought it is better to follow the same approach in HEAD as\nwell. Do you see any problem with it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Apr 2022 07:49:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 20, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 20, 2022 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I think it would\n> > > be then better to have it in the same place in HEAD as well?\n> >\n> > As far as I can see in the v17 patch, which is for HEAD, we don't add\n> > a variable to LogicalDecodingContext, but did you refer to another\n> > patch?\n> >\n>\n> No, I thought it is better to follow the same approach in HEAD as\n> well. Do you see any problem with it?\n\nNo, that makes sense to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Apr 2022 13:59:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Apr 21, 2022 at 10:15 AM I wrote:\r\n> The comments by Sawada-San sound reasonable to me.\r\n> After doing check, I found that padding in HEAD is the same as in REL14.\r\n> So I change the approach of patch for HEAD just like the patch for REL14.\r\n\r\nAlso attaching the back-branch patches for REL10~REL13.\r\n(The REL12 and REL11 patches are the same, so only one patch is posted for those\r\ntwo branches.)\r\n\r\nThe patch for HEAD:\r\n  HEAD_v18-0001-Fix-the-logical-replication-timeout-during-large.patch\r\nThe patch for REL14:\r\n  REL14_v2-0001-Fix-the-logical-replication-timeout-during-large-.patch\r\nThe patch for REL13:\r\n  REL13_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch\r\nThe patch for REL12 and REL11:\r\n  REL12-REL11_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch\r\nThe patch for REL10:\r\n  REL10_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch\r\n\r\nBTW, after checking, I found that the padding in REL11~REL13 is similar to HEAD\r\nand REL14 (7 bytes of padding after fast_forward). But in REL10, the padding is\r\ndifferent. There are three padded regions after the following member variables:\r\n- 4 bytes after options\r\n- 6 bytes after prepared_write\r\n- 4 bytes after write_xid\r\nSo, in the patches for branches REL11~HEAD, I added the new variable after\r\nfast_forward. In the patch for branch REL10, I added the new variable after\r\nprepared_write.\r\nFor each version, the size of the structure does not change after applying the\r\npatch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Thu, 21 Apr 2022 09:50:57 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wednesday, April 20, 2022 3:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> BTW the changes in\r\n> REL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch,\r\n> adding end_xact to LogicalDecodingContext, seems good to me and it\r\n> might be better than the approach of v17 patch from plugin developers’\r\n> perspective? This is because they won’t need to pass true/false to\r\n> end_xact of OutputPluginUpdateProgress(). Furthermore, if we do what\r\n> we do in update_replication_progress() in\r\n> OutputPluginUpdateProgress(), what plugins need to do will be just to\r\n> call OutputPluginUpdate() in every callback and they don't need to\r\n> have the CHANGES_THRESHOLD logic. What do you think?\r\n\r\nHi Sawada-san, Wang\r\n\r\nI was looking at the patch and noticed that we moved some logic from\r\nupdate_replication_progress() to OutputPluginUpdateProgress(), as you\r\nsuggested.\r\n\r\nI have a question about this change. In the patch we added a\r\nrestriction in the function OutputPluginUpdateProgress() like below:\r\n\r\n+ /*\r\n+ * If we are at the end of transaction LSN, update progress tracking.\r\n+ * Otherwise, after continuously processing CHANGES_THRESHOLD changes, we\r\n+ * try to send a keepalive message if required.\r\n+ */\r\n+ if (ctx->end_xact || ++changes_count >= CHANGES_THRESHOLD)\r\n+ {\r\n+ ctx->update_progress(ctx, ctx->write_location, ctx->write_xid,\r\n+ skipped_xact);\r\n+ changes_count = 0;\r\n+ }\r\n\r\nAfter the patch, we can no longer always invoke update_progress() when the\r\ncaller is in the middle of a transaction (i.e., end_xact = false). The behavior\r\nof the public function OutputPluginUpdateProgress() is changed. Is it OK to\r\nchange this in back-branches?\r\n\r\nBecause OutputPluginUpdateProgress() is a public function for extension\r\ndevelopers, I am a little concerned about the behavior change here.\r\n\r\nBesides, the checks on 'end_xact' and 'CHANGES_THRESHOLD' seem specific to\r\npgoutput. I am not sure whether plugin authors also need this\r\nlogic (they might want to change the strategy), so I'd like to confirm it with\r\nyou.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Thu, 28 Apr 2022 10:01:31 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 3:21 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n\nI think it is better to keep the new variable 'end_xact' at the end of\nthe struct where it belongs for HEAD. In back branches, we can keep it\nat the place as you have. Apart from that, I have made some cosmetic\nchanges and changed a few comments in the attached. Let's use this to\nprepare patches for back-branches.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 28 Apr 2022 15:55:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Apr 21, 2022 at 3:21 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> \r\n> I think it is better to keep the new variable 'end_xact' at the end of\r\n> the struct where it belongs for HEAD. In back branches, we can keep it\r\n> at the place as you have. Apart from that, I have made some cosmetic\r\n> changes and changed a few comments in the attached. Let's use this to\r\n> prepare patches for back-branches.\r\nThanks for your review and improvements.\r\n\r\nI improved the back-branch patches according to your modifications.\r\nAttaching the back-branch patches for REL10~REL14.\r\n(Also attaching the patch for HEAD; I did not make any changes to it.)\r\n\r\nBTW, I see Hou-san has shared some points. After that discussion, I will update\r\nthe patches if required.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 29 Apr 2022 05:35:58 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Apr 28, 2022 at 7:01 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, April 20, 2022 3:21 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > BTW the changes in\n> > REL_14_v1-0001-Fix-the-logical-replication-timeout-during-large-.patch,\n> > adding end_xact to LogicalDecodingContext, seems good to me and it\n> > might be better than the approach of v17 patch from plugin developers’\n> > perspective? This is because they won’t need to pass true/false to\n> > end_xact of OutputPluginUpdateProgress(). Furthermore, if we do what\n> > we do in update_replication_progress() in\n> > OutputPluginUpdateProgress(), what plugins need to do will be just to\n> > call OutputPluginUpdate() in every callback and they don't need to\n> > have the CHANGES_THRESHOLD logic. What do you think?\n>\n> Hi Sawada-san, Wang\n>\n> I was looking at the patch and noticed that we moved some logic from\n> update_replication_progress() to OutputPluginUpdateProgress() like\n> your suggestion.\n>\n> I have a question about this change. In the patch we added some\n> restriction in this function OutputPluginUpdateProgress() like below:\n>\n> + /*\n> + * If we are at the end of transaction LSN, update progress tracking.\n> + * Otherwise, after continuously processing CHANGES_THRESHOLD changes, we\n> + * try to send a keepalive message if required.\n> + */\n> + if (ctx->end_xact || ++changes_count >= CHANGES_THRESHOLD)\n> + {\n> + ctx->update_progress(ctx, ctx->write_location, ctx->write_xid,\n> + skipped_xact);\n> + changes_count = 0;\n> + }\n>\n> After the patch, we won't be able to always invoke the update_progress() if the\n> caller are at the middle of transaction(e.g. end_xact = false). The bebavior of the\n> public function OutputPluginUpdateProgress() is changed. 
I am thinking is it ok to\n> change this at back-branches ?\n>\n> Because OutputPluginUpdateProgress() is a public function for the extension\n> developer, just a little concerned about the behavior change here.\n\nGood point.\n\nAs you pointed out, it would not be good if we change the behavior of\nOutputPluginUpdateProgress() in back branches. Also, after more\nthought, it is not a good idea even for HEAD since there might be\nbackground workers that use logical decoding and the timeout checking\nmight not be relevant at all with them.\n\nBTW, I think you're concerned about the plugins that call\nOutputPluginUpdateProgress() at the middle of the transaction (i.e.,\nend_xact = false). We have the following change that makes the\nwalsender not update the progress at the middle of the transaction. Do\nyou think it is okay since it's not common usage to call\nOutputPluginUpdateProgress() at the middle of the transaction by the\nplugin that is used by the walsender?\n\n #define WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS 1000\n- if (!TimestampDifferenceExceeds(sendTime, now,\n+ if (end_xact && TimestampDifferenceExceeds(sendTime, now,\n WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS))\n- return;\n+ {\n+ LagTrackerWrite(lsn, now);\n+ sendTime = now;\n+ }\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 May 2022 11:03:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 2, 2022 at 7:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> On Thu, Apr 28, 2022 at 7:01 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Hi Sawada-san, Wang\n> >\n> > I was looking at the patch and noticed that we moved some logic from\n> > update_replication_progress() to OutputPluginUpdateProgress() like\n> > your suggestion.\n> >\n> > I have a question about this change. In the patch we added some\n> > restriction in this function OutputPluginUpdateProgress() like below:\n> >\n> > + /*\n> > + * If we are at the end of transaction LSN, update progress tracking.\n> > + * Otherwise, after continuously processing CHANGES_THRESHOLD changes, we\n> > + * try to send a keepalive message if required.\n> > + */\n> > + if (ctx->end_xact || ++changes_count >= CHANGES_THRESHOLD)\n> > + {\n> > + ctx->update_progress(ctx, ctx->write_location, ctx->write_xid,\n> > + skipped_xact);\n> > + changes_count = 0;\n> > + }\n> >\n> > After the patch, we won't be able to always invoke the update_progress() if the\n> > caller are at the middle of transaction(e.g. end_xact = false). The bebavior of the\n> > public function OutputPluginUpdateProgress() is changed. I am thinking is it ok to\n> > change this at back-branches ?\n> >\n> > Because OutputPluginUpdateProgress() is a public function for the extension\n> > developer, just a little concerned about the behavior change here.\n>\n> Good point.\n>\n> As you pointed out, it would not be good if we change the behavior of\n> OutputPluginUpdateProgress() in back branches. 
Also, after more\n> thought, it is not a good idea even for HEAD since there might be\n> background workers that use logical decoding and the timeout checking\n> might not be relevant at all with them.\n>\n\nSo, shall we go back to the previous approach of using a separate\nfunction update_replication_progress?\n\n> BTW, I think you're concerned about the plugins that call\n> OutputPluginUpdateProgress() at the middle of the transaction (i.e.,\n> end_xact = false). We have the following change that makes the\n> walsender not update the progress at the middle of the transaction. Do\n> you think it is okay since it's not common usage to call\n> OutputPluginUpdateProgress() at the middle of the transaction by the\n> plugin that is used by the walsender?\n>\n\nWe have done that purposefully as otherwise, the lag tracker shows\nincorrect information. See email [1]. The reason is that we always get\nack from subscribers for transaction end. Also, prior to this patch we\nnever call the lag tracker recording apart from the transaction end,\nso as a bug fix we shouldn't try to change it.\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755D216245199554DDC8DB9EEA9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 May 2022 08:02:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 7:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Apr 28, 2022 at 7:01 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Hi Sawada-san, Wang\n> > >\n> > > I was looking at the patch and noticed that we moved some logic from\n> > > update_replication_progress() to OutputPluginUpdateProgress() like\n> > > your suggestion.\n> > >\n> > > I have a question about this change. In the patch we added some\n> > > restriction in this function OutputPluginUpdateProgress() like below:\n> > >\n> > > + /*\n> > > + * If we are at the end of transaction LSN, update progress tracking.\n> > > + * Otherwise, after continuously processing CHANGES_THRESHOLD changes, we\n> > > + * try to send a keepalive message if required.\n> > > + */\n> > > + if (ctx->end_xact || ++changes_count >= CHANGES_THRESHOLD)\n> > > + {\n> > > + ctx->update_progress(ctx, ctx->write_location, ctx->write_xid,\n> > > + skipped_xact);\n> > > + changes_count = 0;\n> > > + }\n> > >\n> > > After the patch, we won't be able to always invoke the update_progress() if the\n> > > caller are at the middle of transaction(e.g. end_xact = false). The bebavior of the\n> > > public function OutputPluginUpdateProgress() is changed. I am thinking is it ok to\n> > > change this at back-branches ?\n> > >\n> > > Because OutputPluginUpdateProgress() is a public function for the extension\n> > > developer, just a little concerned about the behavior change here.\n> >\n> > Good point.\n> >\n> > As you pointed out, it would not be good if we change the behavior of\n> > OutputPluginUpdateProgress() in back branches. 
Also, after more\n> > thought, it is not a good idea even for HEAD since there might be\n> > background workers that use logical decoding and the timeout checking\n> > might not be relevant at all with them.\n> >\n>\n> So, shall we go back to the previous approach of using a separate\n> function update_replication_progress?\n\nOk, agreed.\n\n>\n> > BTW, I think you're concerned about the plugins that call\n> > OutputPluginUpdateProgress() at the middle of the transaction (i.e.,\n> > end_xact = false). We have the following change that makes the\n> > walsender not update the progress at the middle of the transaction. Do\n> > you think it is okay since it's not common usage to call\n> > OutputPluginUpdateProgress() at the middle of the transaction by the\n> > plugin that is used by the walsender?\n> >\n>\n> We have done that purposefully as otherwise, the lag tracker shows\n> incorrect information. See email [1]. The reason is that we always get\n> ack from subscribers for transaction end. Also, prior to this patch we\n> never call the lag tracker recording apart from the transaction end,\n> so as a bug fix we shouldn't try to change it.\n\nMake sense.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 May 2022 11:36:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > So, shall we go back to the previous approach of using a separate\n> > function update_replication_progress?\n>\n> Ok, agreed.\n>\n\nAttached, please find the updated patch accordingly. Currently, I have\nprepared it for HEAD, if you don't see any problem with this, we can\nprepare the back-branch patches based on this.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 4 May 2022 15:48:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, May 4, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > So, shall we go back to the previous approach of using a separate\n> > > function update_replication_progress?\n> >\n> > Ok, agreed.\n> >\n>\n> Attached, please find the updated patch accordingly. Currently, I have\n> prepared it for HEAD, if you don't see any problem with this, we can\n> prepare the back-branch patches based on this.\n\nThank you for updating the patch. Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 6 May 2022 10:53:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, May 6, 2022 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Wed, May 4, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > > >\r\n> > > > So, shall we go back to the previous approach of using a separate\r\n> > > > function update_replication_progress?\r\n> > >\r\n> > > Ok, agreed.\r\n> > >\r\n> >\r\n> > Attached, please find the updated patch accordingly. Currently, I have\r\n> > prepared it for HEAD, if you don't see any problem with this, we can\r\n> > prepare the back-branch patches based on this.\r\n> \r\n> Thank you for updating the patch. Looks good to me.\r\nThanks for your review.\r\n\r\nImproved the back-branch patches according to the discussion.\r\nMoved the CHANGES_THRESHOLD logic from OutputPluginUpdateProgress to the\r\nnew function update_replication_progress.\r\nIn addition, reformatted all patches with pgindent.\r\n\r\nAttached the patches.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Fri, 6 May 2022 07:11:56 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, May 6, 2022 at 12:42 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, May 6, 2022 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Wed, May 4, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> > wrote:\n> > > >\n> > > > On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > > >\n> > > > >\n> > > > > So, shall we go back to the previous approach of using a separate\n> > > > > function update_replication_progress?\n> > > >\n> > > > Ok, agreed.\n> > > >\n> > >\n> > > Attached, please find the updated patch accordingly. Currently, I have\n> > > prepared it for HEAD, if you don't see any problem with this, we can\n> > > prepare the back-branch patches based on this.\n> >\n> > Thank you for updating the patch. Looks good to me.\n> Thanks for your review.\n>\n> Improve the back-branch patches according to the discussion.\n>\n\nThanks. The patch LGTM. I'll push and back-patch this after the\ncurrent minor release is done unless there are more comments related\nto this work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 May 2022 12:17:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 9, 2022 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 6, 2022 at 12:42 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Fri, May 6, 2022 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > On Wed, May 4, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 2, 2022 at 8:07 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> > > wrote:\n> > > > >\n> > > > > On Mon, May 2, 2022 at 11:32 AM Amit Kapila <amit.kapila16@gmail.com>\n> > > wrote:\n> > > > > >\n> > > > > >\n> > > > > > So, shall we go back to the previous approach of using a separate\n> > > > > > function update_replication_progress?\n> > > > >\n> > > > > Ok, agreed.\n> > > > >\n> > > >\n> > > > Attached, please find the updated patch accordingly. Currently, I have\n> > > > prepared it for HEAD, if you don't see any problem with this, we can\n> > > > prepare the back-branch patches based on this.\n> > >\n> > > Thank you for updating the patch. Looks good to me.\n> > Thanks for your review.\n> >\n> > Improve the back-branch patches according to the discussion.\n> >\n>\n\nThe patches look good to me too.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 9 May 2022 17:46:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 9, 2022, at 3:47 AM, Amit Kapila wrote:\n> Thanks. The patch LGTM. I'll push and back-patch this after the\n> current minor release is done unless there are more comments related\n> to this work.\nLooks sane to me. (I only tested the HEAD version)\n\n+ bool end_xact = ctx->end_xact;\n\nDo you really need a new variable here? It has the same name and the new one\nisn't changed during the execution.\n\nDoes this issue deserve a test? A small wal_receiver_timeout. Although, I'm not\nsure how stable the test will be.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 09 May 2022 10:30:52 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 9, 2022 at 7:01 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, May 9, 2022, at 3:47 AM, Amit Kapila wrote:\n>\n> Thanks. The patch LGTM. I'll push and back-patch this after the\n> current minor release is done unless there are more comments related\n> to this work.\n>\n> Looks sane to me. (I only tested the HEAD version)\n>\n> + bool end_xact = ctx->end_xact;\n>\n> Do you really need a new variable here? It has the same name and the new one\n> isn't changed during the execution.\n>\n\nI think both ways should be okay. I thought the proposed way is okay\nbecause it is used in more than one place and is probably slightly\neasier to follow by having a separate variable.\n\n> Does this issue deserve a test? A small wal_receiver_timeout. Although, I'm not\n> sure how stable the test will be.\n>\n\nYes, the main part is how to write a stable test because estimating\nhow many changes are enough for the configured wal_receiver_timeout to\npass on all the buildfarm machines is tricky. Also, I am not sure how\nimportant is to test this behavior because based on this theory we\nshould have tests for all kinds of timeouts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 May 2022 08:52:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 9, 2022 at 11:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, May 9, 2022 at 7:01 PM Euler Taveira <euler@eulerto.com> wrote:\r\n> >\r\n> > On Mon, May 9, 2022, at 3:47 AM, Amit Kapila wrote:\r\n> >\r\n> > Thanks. The patch LGTM. I'll push and back-patch this after the\r\n> > current minor release is done unless there are more comments related\r\n> > to this work.\r\n> > ......\r\n> > Does this issue deserve a test? A small wal_receiver_timeout. Although, I'm\r\n> not\r\n> > sure how stable the test will be.\r\n> >\r\n> \r\n> Yes, the main part is how to write a stable test because estimating\r\n> how many changes are enough for the configured wal_receiver_timeout to\r\n> pass on all the buildfarm machines is tricky. Also, I am not sure how\r\n> important is to test this behavior because based on this theory we\r\n> should have tests for all kinds of timeouts.\r\nYes, I think we cannot guarantee the stability of this test.\r\nIn addition, if we set wal_receiver_timeout too small, it may cause timeouts\r\nunrelated to this bug. And if we set a bigger wal_receiver_timeout and use a\r\nlarger transaction in order to minimize the impact of machine performance, I\r\nthink this might take some time and might risk making the buildfarm slow.\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Tue, 10 May 2022 03:36:55 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, May 9, 2022 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> The patches look good to me too.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 May 2022 17:03:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hello Amit,\n\nIn version 14.4 the timeout problem for logical replication happens again\ndespite the patch provided for this issue in this version. When bulky\nmaterialized views are reloaded it broke logical replication. It is\npossible to solve this problem by using your new \"streaming\" option.\nHave you ever had this issue reported to you?\n\nRegards\n\nFabrice\n\n2022-10-10 17:19:02 CEST [538424]: [17-1]\nuser=postgres,db=dbxxxa00,client=[local] CONTEXT: SQL statement \"REFRESH\nMATERIALIZED VIEW sxxxa00.table_base\"\n PL/pgSQL function refresh_materialized_view(text) line 5 at EXECUTE\n2022-10-10 17:19:02 CEST [538424]: [18-1]\nuser=postgres,db=dbxxxa00,client=[local] STATEMENT: select\nrefresh_materialized_view('sxxxa00.table_base');\n2022-10-10 17:19:02 CEST [538424]: [19-1]\nuser=postgres,db=dbxxxa00,client=[local] LOG: duration: 264815.652 ms\nstatement: select refresh_materialized_view('sxxxa00.table_base');\n2022-10-10 17:19:27 CEST [559156]: [1-1] user=,db=,client= LOG: automatic\nvacuum of table \"dbxxxa00.sxxxa00.table_base\": index scans: 0\n pages: 0 removed, 296589 remain, 0 skipped due to pins, 0 skipped\nfrozen\n tuples: 0 removed, 48472622 remain, 0 are dead but not yet\nremovable, oldest xmin: 1501528\n index scan not needed: 0 pages from table (0.00% of total) had 0\ndead item identifiers removed\n I/O timings: read: 1.494 ms, write: 0.000 ms\n avg read rate: 0.028 MB/s, avg write rate: 107.952 MB/s\n buffer usage: 593301 hits, 77 misses, 294605 dirtied\n WAL usage: 296644 records, 46119 full page images, 173652718 bytes\n system usage: CPU: user: 17.26 s, system: 0.29 s, elapsed: 21.32 s\n2022-10-10 17:19:28 CEST [559156]: [2-1] user=,db=,client= LOG: automatic\nanalyze of table \"dbxxxa00.sxxxa00.table_base\"\n I/O timings: read: 0.043 ms, write: 0.000 ms\n avg read rate: 0.026 MB/s, avg write rate: 0.026 MB/s\n buffer usage: 30308 hits, 2 misses, 2 dirtied\n system usage: CPU: user: 0.54 s, system: 0.00 s, elapsed: 0.59 
s\n2022-10-10 17:19:34 CEST [3898111]: [6840-1] user=,db=,client= LOG:\ncheckpoint complete: wrote 1194 buffers (0.0%); 0 WAL file(s) added, 0\nremoved, 0 recycled; write=269.551 s, sync=0.002 s, total=269.560 s; sync\nfiles=251, longest=0.00\n1 s, average=0.001 s; distance=583790 kB, estimate=583790 kB\n2022-10-10 17:20:02 CEST [716163]: [2-1] user=,db=,client= ERROR:\nterminating logical replication worker due to timeout\n2022-10-10 17:20:02 CEST [3897921]: [13-1] user=,db=,client= LOG:\nbackground worker \"logical replication worker\" (PID 716163) exited with\nexit code 1\n2022-10-10 17:20:02 CEST [561346]: [1-1] user=,db=,client= LOG: logical\nreplication apply worker for subscription \"subxxx_sxxxa00\" has started\n\nOn Fri, Apr 1, 2022 at 6:09 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Fri, Apr 1, 2022 at 8:28 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > On Thu, Mar 31, 2022, at 11:27 PM, Amit Kapila wrote:\n> >\n> > This is exactly our initial analysis and we have tried a patch on\n> > these lines and it has a noticeable overhead. See [1]. Calling this\n> > for each change or each skipped change can bring noticeable overhead\n> > that is why we decided to call it after a certain threshold (100) of\n> > skipped changes. Now, surely as mentioned in my previous reply we can\n> > make it generic such that instead of calling this (update_progress\n> > function as in the patch) for skipped cases, we call it always. Will\n> > that make it better?\n> >\n> > That's what I have in mind but using a different approach.\n> >\n> > > The functions CreateInitDecodingContext and CreateDecodingContext\n> receives the\n> > > update_progress function as a parameter. These functions are called in\n> 2\n> > > places: (a) streaming replication protocol (CREATE_REPLICATION_SLOT)\n> and (b)\n> > > SQL logical decoding functions (pg_logical_*_changes). Case (a) uses\n> > > WalSndUpdateProgress as a progress function. 
Case (b) does not have\n> one because\n> > > it is not required -- local decoding/communication. There is no custom\n> update\n> > > progress routine for each output plugin which leads me to the question:\n> > > couldn't we encapsulate the update progress call into the callback\n> functions?\n> > >\n> >\n> > Sorry, I don't get your point. What exactly do you mean by this?\n> > AFAIS, currently we call this output plugin API in pgoutput functions\n> > only, do you intend to get it invoked from a different place?\n> >\n> > It seems I didn't make myself clear. The callbacks I'm referring to the\n> > *_cb_wrapper functions. After every ctx->callbacks.foo_cb() call into a\n> > *_cb_wrapper() function, we have something like:\n> >\n> > if (ctx->progress & PGOUTPUT_PROGRESS_FOO)\n> > NewUpdateProgress(ctx, false);\n> >\n> > The NewUpdateProgress function would contain a logic similar to the\n> > update_progress() from the proposed patch. (A different function name\n> here just\n> > to avoid confusion.)\n> >\n> > The output plugin is responsible to set ctx->progress with the callback\n> > variables (for example, PGOUTPUT_PROGRESS_CHANGE for change_cb()) that\n> we would\n> > like to run NewUpdateProgress.\n> >\n>\n> This sounds like a conflicting approach to what we currently do.\n> Currently, OutputPluginUpdateProgress() is called from the xact\n> related pgoutput functions like pgoutput_commit_txn(),\n> pgoutput_prepare_txn(), pgoutput_commit_prepared_txn(), etc. So, if we\n> follow what you are saying then for some of the APIs like\n> pgoutput_change/_message/_truncate, we need to set the parameter to\n> invoke NewUpdateProgress() which will internally call\n> OutputPluginUpdateProgress(), and for the remaining APIs, we will call\n> in the corresponding pgoutput_* function. 
I feel if we want to make it\n> more generic than the current patch, it is better to directly call\n> what you are referring to here as NewUpdateProgress() in all remaining\n> APIs like pgoutput_change/_truncate, etc.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 18 Oct 2022 16:35:02 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 22:35 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\r\n> Hello Amit,\r\n>\r\n> In version 14.4 the timeout problem for logical replication happens again despite\r\n> the patch provided for this issue in this version. When bulky materialized views\r\n> are reloaded it broke logical replication. It is possible to solve this problem by\r\n> using your new \"streaming\" option.\r\n> Have you ever had this issue reported to you?\r\n>\r\n> Regards\r\n>\r\n> Fabrice\r\n>\r\n> 2022-10-10 17:19:02 CEST [538424]: [17-1]\r\n> user=postgres,db=dbxxxa00,client=[local] CONTEXT: SQL statement \"REFRESH\r\n> MATERIALIZED VIEW sxxxa00.table_base\"\r\n> PL/pgSQL function refresh_materialized_view(text) line 5 at EXECUTE\r\n> 2022-10-10 17:19:02 CEST [538424]: [18-1]\r\n> user=postgres,db=dbxxxa00,client=[local] STATEMENT: select\r\n> refresh_materialized_view('sxxxa00.table_base');\r\n> 2022-10-10 17:19:02 CEST [538424]: [19-1]\r\n> user=postgres,db=dbxxxa00,client=[local] LOG: duration: 264815.652\r\n> ms statement: select refresh_materialized_view('sxxxa00.table_base');\r\n> 2022-10-10 17:19:27 CEST [559156]: [1-1] user=,db=,client= LOG: automatic\r\n> vacuum of table \"dbxxxa00.sxxxa00.table_base\": index scans: 0\r\n> pages: 0 removed, 296589 remain, 0 skipped due to pins, 0 skipped frozen\r\n> tuples: 0 removed, 48472622 remain, 0 are dead but not yet removable,\r\n> oldest xmin: 1501528\r\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item\r\n> identifiers removed\r\n> I/O timings: read: 1.494 ms, write: 0.000 ms\r\n> avg read rate: 0.028 MB/s, avg write rate: 107.952 MB/s\r\n> buffer usage: 593301 hits, 77 misses, 294605 dirtied\r\n> WAL usage: 296644 records, 46119 full page images, 173652718 bytes\r\n> system usage: CPU: user: 17.26 s, system: 0.29 s, elapsed: 21.32 s\r\n> 2022-10-10 17:19:28 CEST [559156]: [2-1] user=,db=,client= LOG: automatic\r\n> analyze of table 
\"dbxxxa00.sxxxa00.table_base\"\r\n> I/O timings: read: 0.043 ms, write: 0.000 ms\r\n> avg read rate: 0.026 MB/s, avg write rate: 0.026 MB/s\r\n> buffer usage: 30308 hits, 2 misses, 2 dirtied\r\n> system usage: CPU: user: 0.54 s, system: 0.00 s, elapsed: 0.59 s\r\n> 2022-10-10 17:19:34 CEST [3898111]: [6840-1] user=,db=,client= LOG: checkpoint\r\n> complete: wrote 1194 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled;\r\n> write=269.551 s, sync=0.002 s, total=269.560 s; sync files=251, longest=0.00\r\n> 1 s, average=0.001 s; distance=583790 kB, estimate=583790 kB\r\n> 2022-10-10 17:20:02 CEST [716163]: [2-1] user=,db=,client= ERROR: terminating\r\n> logical replication worker due to timeout\r\n> 2022-10-10 17:20:02 CEST [3897921]: [13-1] user=,db=,client= LOG: background\r\n> worker \"logical replication worker\" (PID 716163) exited with exit code 1\r\n> 2022-10-10 17:20:02 CEST [561346]: [1-1] user=,db=,client= LOG: logical\r\n> replication apply worker for subscription \"subxxx_sxxxa00\" has started\r\n\r\nThanks for reporting!\r\n\r\nThere is one thing I want to confirm:\r\nIs the statement `select refresh_materialized_view('sxxxa00.table_base');`\r\nexecuted on the publisher-side?\r\n\r\nIf so, I think the reason for this timeout problem could be that during DDL\r\n(`REFRESH MATERIALIZED VIEW`), lots of temporary data is generated due to\r\nrewrite. Since these temporary data will not be processed by the pgoutput \r\nplugin, our previous fix for DML had no impact on this case.\r\nI think setting \"streaming\" option to \"on\" could work around this problem.\r\n\r\nI tried to write a draft patch (see attachment) on REL_14_4 to fix this.\r\nI tried it locally and it seems to work.\r\nCould you please confirm whether this problem is fixed after applying this\r\ndraft patch?\r\n\r\nIf this draft patch works, I will improve it and try to fix this problem.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Wed, 19 Oct 2022 08:15:26 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Yes, the refresh of the MV is on the publisher side.\nThanks for your draft patch, I'll try it.\nI'll get back to you as soon as possible.\n\nOne question: why is the refresh of the MV a DDL and not a DML?\n\nRegards\n\nFabrice\n\nOn Wed, 19 Oct 2022, 10:15 wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com>\nwrote:\n\n> On Tue, Oct 18, 2022 at 22:35 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> > Hello Amit,\n> >\n> > In version 14.4 the timeout problem for logical replication happens\n> again despite\n> > the patch provided for this issue in this version. When bulky\n> materialized views\n> > are reloaded it broke logical replication. It is possible to solve this\n> problem by\n> > using your new \"streaming\" option.\n> > Have you ever had this issue reported to you?\n> >\n> > Regards\n> >\n> > Fabrice\n> >\n> > 2022-10-10 17:19:02 CEST [538424]: [17-1]\n> > user=postgres,db=dbxxxa00,client=[local] CONTEXT: SQL statement \"REFRESH\n> > MATERIALIZED VIEW sxxxa00.table_base\"\n> > PL/pgSQL function refresh_materialized_view(text) line 5 at\n> EXECUTE\n> > 2022-10-10 17:19:02 CEST [538424]: [18-1]\n> > user=postgres,db=dbxxxa00,client=[local] STATEMENT: select\n> > refresh_materialized_view('sxxxa00.table_base');\n> > 2022-10-10 17:19:02 CEST [538424]: [19-1]\n> > user=postgres,db=dbxxxa00,client=[local] LOG: duration: 264815.652\n> > ms statement: select refresh_materialized_view('sxxxa00.table_base');\n> > 2022-10-10 17:19:27 CEST [559156]: [1-1] user=,db=,client= LOG:\n> automatic\n> > vacuum of table \"dbxxxa00.sxxxa00.table_base\": index scans: 0\n> > pages: 0 removed, 296589 remain, 0 skipped due to pins, 0\n> skipped frozen\n> > tuples: 0 removed, 48472622 remain, 0 are dead but not yet\n> removable,\n> > oldest xmin: 1501528\n> > index scan not needed: 0 pages from table (0.00% of total) had 0\n> dead item\n> > identifiers removed\n> > I/O timings: read: 1.494 ms, write: 0.000 ms\n> > avg read rate: 0.028 MB/s, avg write rate: 107.952 MB/s\n> > 
buffer usage: 593301 hits, 77 misses, 294605 dirtied\n> > WAL usage: 296644 records, 46119 full page images, 173652718\n> bytes\n> > system usage: CPU: user: 17.26 s, system: 0.29 s, elapsed: 21.32\n> s\n> > 2022-10-10 17:19:28 CEST [559156]: [2-1] user=,db=,client= LOG:\n> automatic\n> > analyze of table \"dbxxxa00.sxxxa00.table_base\"\n> > I/O timings: read: 0.043 ms, write: 0.000 ms\n> > avg read rate: 0.026 MB/s, avg write rate: 0.026 MB/s\n> > buffer usage: 30308 hits, 2 misses, 2 dirtied\n> > system usage: CPU: user: 0.54 s, system: 0.00 s, elapsed: 0.59 s\n> > 2022-10-10 17:19:34 CEST [3898111]: [6840-1] user=,db=,client= LOG:\n> checkpoint\n> > complete: wrote 1194 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0\n> recycled;\n> > write=269.551 s, sync=0.002 s, total=269.560 s; sync files=251,\n> longest=0.00\n> > 1 s, average=0.001 s; distance=583790 kB, estimate=583790 kB\n> > 2022-10-10 17:20:02 CEST [716163]: [2-1] user=,db=,client= ERROR:\n> terminating\n> > logical replication worker due to timeout\n> > 2022-10-10 17:20:02 CEST [3897921]: [13-1] user=,db=,client= LOG:\n> background\n> > worker \"logical replication worker\" (PID 716163) exited with exit code 1\n> > 2022-10-10 17:20:02 CEST [561346]: [1-1] user=,db=,client= LOG: logical\n> > replication apply worker for subscription \"subxxx_sxxxa00\" has started\n>\n> Thanks for reporting!\n>\n> There is one thing I want to confirm:\n> Is the statement `select refresh_materialized_view('sxxxa00.table_base');`\n> executed on the publisher-side?\n>\n> If so, I think the reason for this timeout problem could be that during DDL\n> (`REFRESH MATERIALIZED VIEW`), lots of temporary data is generated due to\n> rewrite. 
Since these temporary data will not be processed by the pgoutput\n> plugin, our previous fix for DML had no impact on this case.\n> I think setting \"streaming\" option to \"on\" could work around this problem.\n>\n> I tried to write a draft patch (see attachment) on REL_14_4 to fix this.\n> I tried it locally and it seems to work.\n> Could you please confirm whether this problem is fixed after applying this\n> draft patch?\n>\n> If this draft patch works, I will improve it and try to fix this problem.\n>\n> Regards,\n> Wang wei\n>",
"msg_date": "Thu, 20 Oct 2022 07:46:50 +0200",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thurs, Oct 20, 2022 at 13:47 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\r\n> Yes the refresh of MV is on the Publisher Side.\r\n> Thanks for your draft patch, I'll try it\r\n> I'll back to you as soonas possible\r\n\r\nThanks a lot.\r\n\r\n> One question: why the refresh of the MV is a DDL not a DML?\r\n\r\nSince in the source, the type of command `REFRESH MATERIALIZED VIEW` is\r\n`CMD_UTILITY`, I think this command is DDL (see CmdType in file nodes.h).\r\n\r\nBTW, after trying to search for DML in the pg-doc, I found the relevant\r\ndescription in the below link:\r\nhttps://www.postgresql.org/docs/devel/logical-replication-publication.html\r\n\r\nRegards,\r\nWang wei\r\n",
"msg_date": "Thu, 20 Oct 2022 07:08:36 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Hello Wang,\nI tested the draft patch in my lab for Postgres 14.4; the refresh of the\nmaterialized view ran without generating the timeout on the worker.\nDo you plan to propose this patch at the next commit fest?\n\nRegards,\nFabrice\n\nOn Wed, Oct 19, 2022 at 10:15 AM wangw.fnst@fujitsu.com <\nwangw.fnst@fujitsu.com> wrote:\n\n> On Tue, Oct 18, 2022 at 22:35 PM Fabrice Chapuis <fabrice636861@gmail.com>\n> wrote:\n> > Hello Amit,\n> >\n> > In version 14.4 the timeout problem for logical replication happens\n> again despite\n> > the patch provided for this issue in this version. When bulky\n> materialized views\n> > are reloaded it broke logical replication. It is possible to solve this\n> problem by\n> > using your new \"streaming\" option.\n> > Have you ever had this issue reported to you?\n> >\n> > Regards\n> >\n> > Fabrice\n> >\n> > 2022-10-10 17:19:02 CEST [538424]: [17-1]\n> > user=postgres,db=dbxxxa00,client=[local] CONTEXT: SQL statement \"REFRESH\n> > MATERIALIZED VIEW sxxxa00.table_base\"\n> > PL/pgSQL function refresh_materialized_view(text) line 5 at\n> EXECUTE\n> > 2022-10-10 17:19:02 CEST [538424]: [18-1]\n> > user=postgres,db=dbxxxa00,client=[local] STATEMENT: select\n> > refresh_materialized_view('sxxxa00.table_base');\n> > 2022-10-10 17:19:02 CEST [538424]: [19-1]\n> > user=postgres,db=dbxxxa00,client=[local] LOG: duration: 264815.652\n> > ms statement: select refresh_materialized_view('sxxxa00.table_base');\n> > 2022-10-10 17:19:27 CEST [559156]: [1-1] user=,db=,client= LOG:\n> automatic\n> > vacuum of table \"dbxxxa00.sxxxa00.table_base\": index scans: 0\n> > pages: 0 removed, 296589 remain, 0 skipped due to pins, 0\n> skipped frozen\n> > tuples: 0 removed, 48472622 remain, 0 are dead but not yet\n> removable,\n> > oldest xmin: 1501528\n> > index scan not needed: 0 pages from table (0.00% of total) had 0\n> dead item\n> > identifiers removed\n> > I/O timings: read: 1.494 ms, write: 0.000 ms\n> > avg read rate: 0.028 MB/s, avg 
write rate: 107.952 MB/s\n> > buffer usage: 593301 hits, 77 misses, 294605 dirtied\n> > WAL usage: 296644 records, 46119 full page images, 173652718\n> bytes\n> > system usage: CPU: user: 17.26 s, system: 0.29 s, elapsed: 21.32\n> s\n> > 2022-10-10 17:19:28 CEST [559156]: [2-1] user=,db=,client= LOG:\n> automatic\n> > analyze of table \"dbxxxa00.sxxxa00.table_base\"\n> > I/O timings: read: 0.043 ms, write: 0.000 ms\n> > avg read rate: 0.026 MB/s, avg write rate: 0.026 MB/s\n> > buffer usage: 30308 hits, 2 misses, 2 dirtied\n> > system usage: CPU: user: 0.54 s, system: 0.00 s, elapsed: 0.59 s\n> > 2022-10-10 17:19:34 CEST [3898111]: [6840-1] user=,db=,client= LOG:\n> checkpoint\n> > complete: wrote 1194 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0\n> recycled;\n> > write=269.551 s, sync=0.002 s, total=269.560 s; sync files=251,\n> longest=0.00\n> > 1 s, average=0.001 s; distance=583790 kB, estimate=583790 kB\n> > 2022-10-10 17:20:02 CEST [716163]: [2-1] user=,db=,client= ERROR:\n> terminating\n> > logical replication worker due to timeout\n> > 2022-10-10 17:20:02 CEST [3897921]: [13-1] user=,db=,client= LOG:\n> background\n> > worker \"logical replication worker\" (PID 716163) exited with exit code 1\n> > 2022-10-10 17:20:02 CEST [561346]: [1-1] user=,db=,client= LOG: logical\n> > replication apply worker for subscription \"subxxx_sxxxa00\" has started\n>\n> Thanks for reporting!\n>\n> There is one thing I want to confirm:\n> Is the statement `select refresh_materialized_view('sxxxa00.table_base');`\n> executed on the publisher-side?\n>\n> If so, I think the reason for this timeout problem could be that during DDL\n> (`REFRESH MATERIALIZED VIEW`), lots of temporary data is generated due to\n> rewrite. 
Since these temporary data will not be processed by the pgoutput\n> plugin, our previous fix for DML had no impact on this case.\n> I think setting \"streaming\" option to \"on\" could work around this problem.\n>\n> I tried to write a draft patch (see attachment) on REL_14_4 to fix this.\n> I tried it locally and it seems to work.\n> Could you please confirm whether this problem is fixed after applying this\n> draft patch?\n>\n> If this draft patch works, I will improve it and try to fix this problem.\n>\n> Regards,\n> Wang wei\n>",
"msg_date": "Fri, 4 Nov 2022 11:13:02 +0100",
"msg_from": "Fabrice Chapuis <fabrice636861@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Nov 4, 2022 at 18:13 PM Fabrice Chapuis <fabrice636861@gmail.com> wrote:\r\n> Hello Wang,\r\n> \r\n> I tested the draft patch in my lab for Postgres 14.4, the refresh of the\r\n> materialized view ran without generating the timeout on the worker.\r\n> Do you plan to propose this patch at the next commit fest.\r\n\r\nThanks for your confirmation!\r\nI will add this thread to the commit fest soon.\r\n\r\nThe following is the problem analysis and fix approach:\r\nI think the problem is that when a DDL in a transaction generates lots\r\nof temporary data due to rewrite rules, this temporary data will not be\r\nprocessed by the pgoutput plugin. Therefore, the previous fix (f95d53e) for\r\nDML had no impact on this case.\r\n\r\nTo fix this, I think we need to try to send the keepalive messages after each\r\nchange is processed by walsender, not in the pgoutput plugin.\r\n\r\nAttached is the patch.\r\n\r\nRegards,\r\nWang wei",
"msg_date": "Tue, 8 Nov 2022 03:04:33 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Hi Wang,\nThanks for working on this. One of our customer faced a similar\nsituation when running BDR with PostgreSQL.\n\nI tested your patch and it solves the problem.\n\nPlease find some review comments below\n\nOn Tue, Nov 8, 2022 at 8:34 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n>\n> Attach the patch.\n>\n\n+/*\n+ * Helper function for ReorderBufferProcessTXN for updating progress.\n+ */\n+static inline void\n+ReorderBufferUpdateProgress(ReorderBuffer *rb, ReorderBufferTXN *txn,\n+ ReorderBufferChange *change)\n+{\n+ LogicalDecodingContext *ctx = rb->private_data;\n+ static int changes_count = 0;\n\nIt's not easy to know that a variable is static when reading the code which\nuses it. So it's easy to interpret code wrong. I would probably track it\nthrough logical decoding context itself OR through a global variable like other\nplaces where we track the last timestamps. But there's more below on this.\n\n+\n+ if (!ctx->update_progress)\n+ return;\n+\n+ Assert(!ctx->fast_forward);\n+\n+ /* set output state */\n+ ctx->accept_writes = false;\n+ ctx->write_xid = txn->xid;\n+ ctx->write_location = change->lsn;\n+ ctx->end_xact = false;\n\nThis patch reverts many of the changes of the previous commit which tried to\nfix this issue i.e. 55558df2374. end_xact was introduced by the same commit but\nwithout much explanation of that in the commit message. Its only user,\nWalSndUpdateProgress(), is probably making a wrong assumption as well.\n\n * We don't have a mechanism to get the ack for any LSN other than end\n * xact LSN from the downstream. So, we track lag only for end of\n * transaction LSN.\n\nIIUC, WAL sender tracks the LSN of the last WAL record read in sentPtr which is\nsent downstream through a keep alive message. Downstream may acknowledge this\nLSN. 
So we do get an ack for any LSN, not just the commit LSN.\n\nSo I propose removing end_xact as well.\n\n+\n+ /*\n+ * We don't want to try sending a keepalive message after processing each\n+ * change as that can have overhead. Tests revealed that there is no\n+ * noticeable overhead in doing it after continuously processing 100 or so\n+ * changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n\nI think a time-based threshold makes more sense. What if the timeout was\nnearing and those 100 changes just took a little more time, causing a timeout? We\nalready have a time-based threshold in WalSndKeepaliveIfNecessary(). And that\nfunction is invoked after reading every WAL record in WalSndLoop(). So it does\nnot look like it's an expensive function. If it is expensive we might want to\nworry about WalSndLoop as well. Does it make more sense to remove this\nthreshold?\n\n+\n+ /*\n+ * After continuously processing CHANGES_THRESHOLD changes, we\n+ * try to send a keepalive message if required.\n+ */\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\n+ changes_count = 0;\n+ }\n+}\n+\n\nOn the other thread, I mentioned that we don't have a TAP test for it.\nI agree with\nAmit's opinion there that it's hard to create a test which will time out\neverywhere. I think what we need is a way to control the time required for\ndecoding a transaction.\n\nA rough idea is to induce a small sleep after decoding every change. The amount\nof sleep * number of changes will help us estimate and control the amount of\ntime taken to decode a transaction. Then we create a transaction which will\ntake longer than the timeout threshold to decode. But that's\nsignificant code. I\ndon't think PostgreSQL has a facility to induce a delay at a particular place\nin the code.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 6 Jan 2023 12:35:31 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 12:35 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> +\n> + /*\n> + * We don't want to try sending a keepalive message after processing each\n> + * change as that can have overhead. Tests revealed that there is no\n> + * noticeable overhead in doing it after continuously processing 100 or so\n> + * changes.\n> + */\n> +#define CHANGES_THRESHOLD 100\n>\n> I think a time based threashold makes more sense. What if the timeout was\n> nearing and those 100 changes just took little more time causing a timeout? We\n> already have a time based threashold in WalSndKeepaliveIfNecessary(). And that\n> function is invoked after reading every WAL record in WalSndLoop(). So it does\n> not look like it's an expensive function. If it is expensive we might want to\n> worry about WalSndLoop as well. Does it make more sense to remove this\n> threashold?\n>\n\nWe have previously tried this for every change [1] and it brings\nnoticeable overhead. In fact, even doing it for every 10 changes also\nhad some overhead which is why we reached this threshold number. I\ndon't think it can lead to timeout due to skipping changes but sure if\nwe see any such report we can further fine-tune this setting or will\ntry to make it time-based but for now I feel it would be safe to use\nthis threshold.\n\n> +\n> + /*\n> + * After continuously processing CHANGES_THRESHOLD changes, we\n> + * try to send a keepalive message if required.\n> + */\n> + if (++changes_count >= CHANGES_THRESHOLD)\n> + {\n> + ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\n> + changes_count = 0;\n> + }\n> +}\n> +\n>\n> On the other thread, I mentioned that we don't have a TAP test for it.\n> I agree with\n> Amit's opinion there that it's hard to create a test which will timeout\n> everywhere. 
I think what we need is a way to control the time required for\n> decoding a transaction.\n>\n> A rough idea is to induce a small sleep after decoding every change. The amount\n> of sleep * number of changes will help us estimate and control the amount of\n> time taken to decode a transaction. Then we create a transaction which will\n> take longer than the timeout threshold to decode. But that's\n> significant code. I\n> don't think PostgreSQL has a facility to induce a delay at a particular place\n> in the code.\n>\n\nYeah, I don't know how to induce such a delay while decoding changes.\n\nOne more thing, I think it would be better to expose a new callback\nAPI via the reorder buffer as suggested previously [2], similar to other\nreorder buffer APIs, instead of directly using the reorderbuffer API to\ninvoke the plugin API.\n\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB6275DFFDAC7A59FA148931529E209%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1%2BfQjndoBOFUn9Wy0hhm3MLyUWEpcT9O7iuCELktfdBiQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 9 Jan 2023 10:33:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 15:06 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\r\n> Hi Wang,\r\n> Thanks for working on this. One of our customer faced a similar\r\n> situation when running BDR with PostgreSQL.\r\n> \r\n> I tested your patch and it solves the problem.\r\n> \r\n> Please find some review comments below\r\n\r\nThanks for your testing and comments.\r\n\r\n> +/*\r\n> + * Helper function for ReorderBufferProcessTXN for updating progress.\r\n> + */\r\n> +static inline void\r\n> +ReorderBufferUpdateProgress(ReorderBuffer *rb, ReorderBufferTXN *txn,\r\n> + ReorderBufferChange *change)\r\n> +{\r\n> + LogicalDecodingContext *ctx = rb->private_data;\r\n> + static int changes_count = 0;\r\n> \r\n> It's not easy to know that a variable is static when reading the code which\r\n> uses it. So it's easy to interpret code wrong. I would probably track it\r\n> through logical decoding context itself OR through a global variable like other\r\n> places where we track the last timestamps. But there's more below on this.\r\n\r\nI'm not sure if we need to add global variables or member variables for a\r\ncumulative count that is only used here. How would you feel if I add some\r\ncomments when declaring this static variable?\r\n\r\n> +\r\n> + if (!ctx->update_progress)\r\n> + return;\r\n> +\r\n> + Assert(!ctx->fast_forward);\r\n> +\r\n> + /* set output state */\r\n> + ctx->accept_writes = false;\r\n> + ctx->write_xid = txn->xid;\r\n> + ctx->write_location = change->lsn;\r\n> + ctx->end_xact = false;\r\n> \r\n> This patch reverts many of the changes of the previous commit which tried to\r\n> fix this issue i.e. 55558df2374. end_xact was introduced by the same commit but\r\n> without much explanation of that in the commit message. Its only user,\r\n> WalSndUpdateProgress(), is probably making a wrong assumption as well.\r\n> \r\n> * We don't have a mechanism to get the ack for any LSN other than end\r\n> * xact LSN from the downstream. 
So, we track lag only for end of\r\n> * transaction LSN.\r\n> \r\n> IIUC, WAL sender tracks the LSN of the last WAL record read in sentPtr which is\r\n> sent downstream through a keep alive message. Downstream may\r\n> acknowledge this\r\n> LSN. So we do get ack for any LSN, not just commit LSN.\r\n> \r\n> So I propose removing end_xact as well.\r\n\r\nWe didn't track the lag during a transaction because it could make the\r\ncalculations of lag functionality inaccurate. If we track every lsn, it could\r\nfail to record important lsn information because of\r\nWALSND_LOGICAL_LAG_TRACK_INTERVAL_MS (see function WalSndUpdateProgress).\r\nPlease see details in [1] and [2].\r\n\r\nRegards,\r\nWang Wei\r\n\r\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755D216245199554DDC8DB9EEA9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n[2] - https://www.postgresql.org/message-id/OS3PR01MB627514AE0B3040D8F55A68B99EEA9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\r\n",
"msg_date": "Mon, 9 Jan 2023 10:38:06 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 13:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n>\r\n\r\nThanks for your comments.\r\n\r\n> One more thing, I think it would be better to expose a new callback\r\n> API via reorder buffer as suggested previously [2] similar to other\r\n> reorder buffer APIs instead of directly using reorderbuffer API to\r\n> invoke plugin API.\r\n\r\nYes, I agree. I think it would be better to add a new callback API on the HEAD.\r\nSo, I improved the fix approach:\r\nIntroduce a new optional callback to update the process. This callback function\r\nis invoked at the end inside the main loop of the function\r\nReorderBufferProcessTXN() for each change. In this way, I think it seems that\r\nsimilar timeout problems could be avoided.\r\n\r\nBTW, I did the performance test for this patch. When running the SQL that\r\nreproduces the problem (refresh the materialized view in sync logical\r\nreplication mode), the running time of new function pgoutput_update_progress is\r\nless than 0.1% of the total time. I think this result looks OK.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Wed, 11 Jan 2023 10:41:47 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 4:08 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Fri, Jan 6, 2023 at 15:06 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>\n> I'm not sure if we need to add global variables or member variables for a\n> cumulative count that is only used here. How would you feel if I add some\n> comments when declaring this static variable?\n\nI see WalSndUpdateProgress::sendTime is static already. So this seems\nfine. A comment will help sure.\n\n>\n> > +\n> > + if (!ctx->update_progress)\n> > + return;\n> > +\n> > + Assert(!ctx->fast_forward);\n> > +\n> > + /* set output state */\n> > + ctx->accept_writes = false;\n> > + ctx->write_xid = txn->xid;\n> > + ctx->write_location = change->lsn;\n> > + ctx->end_xact = false;\n> >\n> > This patch reverts many of the changes of the previous commit which tried to\n> > fix this issue i.e. 55558df2374. end_xact was introduced by the same commit but\n> > without much explanation of that in the commit message. Its only user,\n> > WalSndUpdateProgress(), is probably making a wrong assumption as well.\n> >\n> > * We don't have a mechanism to get the ack for any LSN other than end\n> > * xact LSN from the downstream. So, we track lag only for end of\n> > * transaction LSN.\n> >\n> > IIUC, WAL sender tracks the LSN of the last WAL record read in sentPtr which is\n> > sent downstream through a keep alive message. Downstream may\n> > acknowledge this\n> > LSN. So we do get ack for any LSN, not just commit LSN.\n> >\n> > So I propose removing end_xact as well.\n>\n> We didn't track the lag during a transaction because it could make the\n> calculations of lag functionality inaccurate. If we track every lsn, it could\n> fail to record important lsn information because of\n> WALSND_LOGICAL_LAG_TRACK_INTERVAL_MS (see function WalSndUpdateProgress).\n> Please see details in [1] and [2].\n\nLagTrackerRead() interpolates to reduce the inaccuracy. 
I don't\nunderstand why we need to track the end LSN only. But I don't think\nthat affects this fix. So I am fine if we want to leave end_xact\nthere.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 16 Jan 2023 21:57:50 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 4:11 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 9, 2023 at 13:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Thanks for your comments.\n>\n> > One more thing, I think it would be better to expose a new callback\n> > API via reorder buffer as suggested previously [2] similar to other\n> > reorder buffer APIs instead of directly using reorderbuffer API to\n> > invoke plugin API.\n>\n> Yes, I agree. I think it would be better to add a new callback API on the HEAD.\n> So, I improved the fix approach:\n> Introduce a new optional callback to update the process. This callback function\n> is invoked at the end inside the main loop of the function\n> ReorderBufferProcessTXN() for each change. In this way, I think it seems that\n> similar timeout problems could be avoided.\n\nI am a bit worried about the indirections that the wrappers and hooks\ncreate. Output plugins call OutputPluginUpdateProgress() in callbacks\nbut I don't see why ReorderBufferProcessTXN() needs a callback to\ncall OutputPluginUpdateProgress. I don't think output plugins are\ngoing to do anything special with that callback than just call\nOutputPluginUpdateProgress. Every output plugin will need to implement\nit and if they do not they will face the timeout problem. That would\nbe unnecessary. Instead ReorderBufferUpdateProgress() in your first\npatch was more direct and readable. That way the fix works for any\noutput plugin. In fact, I am wondering whether we could have a call in\nReorderBufferProcessTxn() at the end of transaction\n(commit/prepare/commit prepared/abort prepared) instead of the\ncorresponding output plugin callbacks calling\nOutputPluginUpdateProgress().\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 16 Jan 2023 22:06:43 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 10:06 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Jan 11, 2023 at 4:11 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Mon, Jan 9, 2023 at 13:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > Thanks for your comments.\n> >\n> > > One more thing, I think it would be better to expose a new callback\n> > > API via reorder buffer as suggested previously [2] similar to other\n> > > reorder buffer APIs instead of directly using reorderbuffer API to\n> > > invoke plugin API.\n> >\n> > Yes, I agree. I think it would be better to add a new callback API on the HEAD.\n> > So, I improved the fix approach:\n> > Introduce a new optional callback to update the process. This callback function\n> > is invoked at the end inside the main loop of the function\n> > ReorderBufferProcessTXN() for each change. In this way, I think it seems that\n> > similar timeout problems could be avoided.\n>\n> I am a bit worried about the indirections that the wrappers and hooks\n> create. Output plugins call OutputPluginUpdateProgress() in callbacks\n> but I don't see why ReorderBufferProcessTXN() needs a callback to\n> call OutputPluginUpdateProgress.\n>\n\nYeah, I think we can do it as we are doing the previous approach but\nwe need an additional wrapper (update_progress_cb_wrapper()) as the\ncurrent patch has so that we can add error context information. This\nis similar to why we have a wrapper for all other callbacks like\nchange_cb_wrapper.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 15:34:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> >\n> > I am a bit worried about the indirections that the wrappers and hooks\n> > create. Output plugins call OutputPluginUpdateProgress() in callbacks\n> > but I don't see why ReorderBufferProcessTXN() needs a callback to\n> > call OutputPluginUpdateProgress.\n> >\n>\n> Yeah, I think we can do it as we are doing the previous approach but\n> we need an additional wrapper (update_progress_cb_wrapper()) as the\n> current patch has so that we can add error context information. This\n> is similar to why we have a wrapper for all other callbacks like\n> change_cb_wrapper.\n>\n\nUltimately OutputPluginUpdateProgress() will be called - which in turn\nwill call ctx->update_progress. I don't see wrappers around\nOutputPluginWrite or OutputPluginPrepareWrite. But I see that those\ntwo are called always from output plugin, so indirectly those are\ncalled through a wrapper. I also see that update_progress_cb_wrapper()\nis similar, as far as wrapper is concerned, to\nReorderBufferUpdateProgress() in the earlier patch.\nReorderBufferUpdateProgress() looks more readable than the wrapper.\n\nIf we want to keep the wrapper at least we should use a different\nvariable name. update_progress is also there LogicalDecodingContext\nand will be indirectly called from ReorderBuffer::update_progress.\nSomebody might think that there's some recursion involved there.\nThat's a mighty confusion.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 17 Jan 2023 18:41:46 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 6:41 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > I am a bit worried about the indirections that the wrappers and hooks\n> > > create. Output plugins call OutputPluginUpdateProgress() in callbacks\n> > > but I don't see why ReorderBufferProcessTXN() needs a callback to\n> > > call OutputPluginUpdateProgress.\n> > >\n> >\n> > Yeah, I think we can do it as we are doing the previous approach but\n> > we need an additional wrapper (update_progress_cb_wrapper()) as the\n> > current patch has so that we can add error context information. This\n> > is similar to why we have a wrapper for all other callbacks like\n> > change_cb_wrapper.\n> >\n>\n> Ultimately OutputPluginUpdateProgress() will be called - which in turn\n> will call ctx->update_progress.\n>\n\nNo, update_progress_cb_wrapper() should directly call\nctx->update_progress(). The key reason to have a\nupdate_progress_cb_wrapper() is that it allows us to add error context\ninformation (see the usage of output_plugin_error_callback).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:58:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 13:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jan 17, 2023 at 6:41 PM Ashutosh Bapat\r\n> <ashutosh.bapat.oss@gmail.com> wrote:\r\n> >\r\n> > On Tue, Jan 17, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > > >\r\n> > > > I am a bit worried about the indirections that the wrappers and hooks\r\n> > > > create. Output plugins call OutputPluginUpdateProgress() in callbacks\r\n> > > > but I don't see why ReorderBufferProcessTXN() needs a callback to\r\n> > > > call OutputPluginUpdateProgress.\r\n> > > >\r\n> > >\r\n> > > Yeah, I think we can do it as we are doing the previous approach but\r\n> > > we need an additional wrapper (update_progress_cb_wrapper()) as the\r\n> > > current patch has so that we can add error context information. This\r\n> > > is similar to why we have a wrapper for all other callbacks like\r\n> > > change_cb_wrapper.\r\n> > >\r\n> >\r\n> > Ultimately OutputPluginUpdateProgress() will be called - which in turn\r\n> > will call ctx->update_progress.\r\n> >\r\n> \r\n> No, update_progress_cb_wrapper() should directly call\r\n> ctx->update_progress(). The key reason to have a\r\n> update_progress_cb_wrapper() is that it allows us to add error context\r\n> information (see the usage of output_plugin_error_callback).\r\n\r\nI think it makes sense. This also avoids the need for every output plugin to\r\nimplement the callback. So I tried to improve the patch based on this approach.\r\n\r\nAnd I tried to add some comments for this new callback to distinguish it from\r\nctx->update_progress.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Wed, 18 Jan 2023 08:19:08 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:49 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 13:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Jan 17, 2023 at 6:41 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 17, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > I am a bit worried about the indirections that the wrappers and hooks\n> > > > > create. Output plugins call OutputPluginUpdateProgress() in callbacks\n> > > > > but I don't see why ReorderBufferProcessTXN() needs a callback to\n> > > > > call OutputPluginUpdateProgress.\n> > > > >\n> > > >\n> > > > Yeah, I think we can do it as we are doing the previous approach but\n> > > > we need an additional wrapper (update_progress_cb_wrapper()) as the\n> > > > current patch has so that we can add error context information. This\n> > > > is similar to why we have a wrapper for all other callbacks like\n> > > > change_cb_wrapper.\n> > > >\n> > >\n> > > Ultimately OutputPluginUpdateProgress() will be called - which in turn\n> > > will call ctx->update_progress.\n> > >\n> >\n> > No, update_progress_cb_wrapper() should directly call\n> > ctx->update_progress(). The key reason to have a\n> > update_progress_cb_wrapper() is that it allows us to add error context\n> > information (see the usage of output_plugin_error_callback).\n>\n> I think it makes sense. This also avoids the need for every output plugin to\n> implement the callback. So I tried to improve the patch based on this approach.\n>\n> And I tried to add some comments for this new callback to distinguish it from\n> ctx->update_progress.\n\nComments don't help when using cscope or some such code browsing tool.\nBetter to use a different variable name.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 18 Jan 2023 17:37:19 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 5:37 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 1:49 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > On Wed, Jan 18, 2023 at 13:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Tue, Jan 17, 2023 at 6:41 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 17, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > >\n> > > > > > I am a bit worried about the indirections that the wrappers and hooks\n> > > > > > create. Output plugins call OutputPluginUpdateProgress() in callbacks\n> > > > > > but I don't see why ReorderBufferProcessTXN() needs a callback to\n> > > > > > call OutputPluginUpdateProgress.\n> > > > > >\n> > > > >\n> > > > > Yeah, I think we can do it as we are doing the previous approach but\n> > > > > we need an additional wrapper (update_progress_cb_wrapper()) as the\n> > > > > current patch has so that we can add error context information. This\n> > > > > is similar to why we have a wrapper for all other callbacks like\n> > > > > change_cb_wrapper.\n> > > > >\n> > > >\n> > > > Ultimately OutputPluginUpdateProgress() will be called - which in turn\n> > > > will call ctx->update_progress.\n> > > >\n> > >\n> > > No, update_progress_cb_wrapper() should directly call\n> > > ctx->update_progress(). The key reason to have a\n> > > update_progress_cb_wrapper() is that it allows us to add error context\n> > > information (see the usage of output_plugin_error_callback).\n> >\n> > I think it makes sense. This also avoids the need for every output plugin to\n> > implement the callback. 
So I tried to improve the patch based on this approach.\n> >\n> > And I tried to add some comments for this new callback to distinguish it from\n> > ctx->update_progress.\n>\n> Comments don't help when using cscope or some such code browsing tool.\n> Better to use a different variable name.\n>\n\n+ /*\n+ * Callback to be called when updating progress during sending data of a\n+ * transaction (and its subtransactions) to the output plugin.\n+ */\n+ ReorderBufferUpdateProgressCB update_progress;\n\nAre you suggesting changing the name of the above variable? If so, how\nabout apply_progress, progress, or updateprogress? If you don't like\nany of these then feel free to suggest something else. If we change\nthe variable name then accordingly, we need to update\nReorderBufferUpdateProgressCB as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:00:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> + */\n> + ReorderBufferUpdateProgressCB update_progress;\n>\n> Are you suggesting changing the name of the above variable? If so, how\n> about apply_progress, progress, or updateprogress? If you don't like\n> any of these then feel free to suggest something else. If we change\n> the variable name then accordingly, we need to update\n> ReorderBufferUpdateProgressCB as well.\n>\n\nI would liked to have all the callback names renamed with prefix\n\"rbcb_xxx\" so that they have very less chances of conflicting with\nsimilar names in the code base. But it's probably late to do that :).\n\nHow are update_txn_progress since the CB is supposed to be used only\nwithin a transaction? or update_progress_txn?\nupdate_progress_cb_wrapper needs a change of name as well.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:13:23 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 4:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 6:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > + */\n> > + ReorderBufferUpdateProgressCB update_progress;\n> >\n> > Are you suggesting changing the name of the above variable? If so, how\n> > about apply_progress, progress, or updateprogress? If you don't like\n> > any of these then feel free to suggest something else. If we change\n> > the variable name then accordingly, we need to update\n> > ReorderBufferUpdateProgressCB as well.\n> >\n>\n> I would liked to have all the callback names renamed with prefix\n> \"rbcb_xxx\" so that they have very less chances of conflicting with\n> similar names in the code base. But it's probably late to do that :).\n>\n> How are update_txn_progress since the CB is supposed to be used only\n> within a transaction? or update_progress_txn?\n>\n\nPersonally, I would prefer 'apply_progress' as it would be similar to\na few other callbacks like apply_change, apply_truncate, or as is\nproposed by patch update_progress again because it is similar to\nexisting callbacks like commit_prepared. If you and others don't like\nany of those then we can go for 'update_progress_txn' as well. Anybody\nelse has an opinion on this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Jan 2023 17:07:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Here are some review comments for patch v3-0001.\n\n======\nCommit message\n\n1.\nThe problem is when there is a DDL in a transaction that generates lots of\ntemporary data due to rewrite rules, these temporary data will not be processed\nby the pgoutput - plugin. Therefore, the previous fix (f95d53e) for DML had no\nimpact on this case.\n\n~\n\n1a.\nIMO this comment needs to give a bit of background about the original\nproblem here, rather than just starting with \"The problem is\" which is\ndescribing the flaws of the previous fix.\n\n~\n\n1b.\n\"pgoutput - plugin\" -> \"pgoutput plugin\" ??\n\n~~~\n\n2.\n\nTo fix this, we introduced a new ReorderBuffer callback -\n'ReorderBufferUpdateProgressCB'. This callback is called to try to update the\nprocess after each change has been processed during sending data of a\ntransaction (and its subtransactions) to the output plugin.\n\nIIUC it's not really \"after each change\" - shouldn't this comment\nmention something about the CHANGES_THRESHOLD 100?\n\n======\nsrc/backend/replication/logical/logical.c\n\n3. forward declaration\n\n+/* update progress callback */\n+static void update_progress_cb_wrapper(ReorderBuffer *cache,\n+ ReorderBufferTXN *txn,\n+ ReorderBufferChange *change);\n\nI felt this function wrapper name was a bit misleading... AFAIK every\nother wrapper really does just wrap their respective functions. But\nthis one seems a bit different because it calls the wrapped function\nONLY if some threshold is exceeded. IMO maybe this function could have\nsome name that conveys this better:\n\ne.g. update_progress_cb_wrapper_with_threshold\n\n~~~\n\n4. 
update_progress_cb_wrapper\n\n+/*\n+ * Update progress callback\n+ *\n+ * Try to update progress and send a keepalive message if too many changes were\n+ * processed when processing txn.\n+ *\n+ * For a large transaction, if we don't send any change to the downstream for a\n+ * long time (exceeds the wal_receiver_timeout of standby) then it can timeout.\n+ * This can happen when all or most of the changes are either not published or\n+ * got filtered out.\n+ */\n\nSUGGESTION (instead of the \"Try to update\" sentence)\nSend a keepalive message whenever more than <CHANGES_THRESHOLD>\nchanges are encountered while processing a transaction.\n\n~~~\n\n5.\n\n+static void\n+update_progress_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,\n+ ReorderBufferChange *change)\n+{\n+ LogicalDecodingContext *ctx = cache->private_data;\n+ LogicalErrorCallbackState state;\n+ ErrorContextCallback errcallback;\n+ static int changes_count = 0; /* Static variable used to accumulate\n+ * the number of changes while\n+ * processing txn. */\n+\n\nIMO this may be more readable if the static 'changes_count' local var\nwas declared first and separated from the other vars by a blank line.\n\n~~~\n\n6.\n\n+ /*\n+ * We don't want to try sending a keepalive message after processing each\n+ * change as that can have overhead. Tests revealed that there is no\n+ * noticeable overhead in doing it after continuously processing 100 or so\n+ * changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n\n6a.\nI think it might be better to define this right at the top of the\nfunction adjacent to the 'changes_count' variable (e.g. 
a bit like the\noriginal HEAD code looked)\n\n~\n\n6b.\nSUGGESTION (for the comment)\nSending keepalive messages after every change has some overhead, but\ntesting showed there is no noticeable overhead if keepalive is only\nsent after every ~100 changes.\n\n~~~\n\n7.\n\n+\n+ /*\n+ * After continuously processing CHANGES_THRESHOLD changes, we\n+ * try to send a keepalive message if required.\n+ */\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\n+ changes_count = 0;\n+ }\n+\n\n7a.\nSUGGESTION (for comment)\nSend a keepalive message after every CHANGES_THRESHOLD changes.\n\n~\n\n7b.\nWould it be neater to just call OutputPluginUpdateProgress here instead?\n\ne.g.\nBEFORE\nctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\nAFTER\nOutputPluginUpdateProgress(ctx, false);\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 Jan 2023 13:10:23 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 7:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v3-0001.\n>\n> ======\n> src/backend/replication/logical/logical.c\n>\n> 3. forward declaration\n>\n> +/* update progress callback */\n> +static void update_progress_cb_wrapper(ReorderBuffer *cache,\n> + ReorderBufferTXN *txn,\n> + ReorderBufferChange *change);\n>\n> I felt this function wrapper name was a bit misleading... AFAIK every\n> other wrapper really does just wrap their respective functions. But\n> this one seems a bit different because it calls the wrapped function\n> ONLY if some threshold is exceeded. IMO maybe this function could have\n> some name that conveys this better:\n>\n> e.g. update_progress_cb_wrapper_with_threshold\n>\n\nI am wondering whether it would be better to move the threshold logic\nto the caller. Previously this logic was inside the function because\nit was being invoked from multiple places but now that won't be the\ncase. Also, then your concern about the name would also be addressed.\n\n>\n> ~\n>\n> 7b.\n> Would it be neater to just call OutputPluginUpdateProgress here instead?\n>\n> e.g.\n> BEFORE\n> ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\n> AFTER\n> OutputPluginUpdateProgress(ctx, false);\n>\n\nWe already check whether ctx->update_progress is defined or not which\nis the only extra job done by OutputPluginUpdateProgress but probably\nwe can consolidate the checks and directly invoke\nOutputPluginUpdateProgress.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 Jan 2023 10:05:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 3:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 20, 2023 at 7:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are some review comments for patch v3-0001.\n> >\n> > ======\n> > src/backend/replication/logical/logical.c\n> >\n> > 3. forward declaration\n> >\n> > +/* update progress callback */\n> > +static void update_progress_cb_wrapper(ReorderBuffer *cache,\n> > + ReorderBufferTXN *txn,\n> > + ReorderBufferChange *change);\n> >\n> > I felt this function wrapper name was a bit misleading... AFAIK every\n> > other wrapper really does just wrap their respective functions. But\n> > this one seems a bit different because it calls the wrapped function\n> > ONLY if some threshold is exceeded. IMO maybe this function could have\n> > some name that conveys this better:\n> >\n> > e.g. update_progress_cb_wrapper_with_threshold\n> >\n>\n> I am wondering whether it would be better to move the threshold logic\n> to the caller. Previously this logic was inside the function because\n> it was being invoked from multiple places but now that won't be the\n> case. Also, then your concern about the name would also be addressed.\n>\n> >\n> > ~\n> >\n> > 7b.\n> > Would it be neater to just call OutputPluginUpdateProgress here instead?\n> >\n> > e.g.\n> > BEFORE\n> > ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\n> > AFTER\n> > OutputPluginUpdateProgress(ctx, false);\n> >\n>\n> We already check whether ctx->update_progress is defined or not which\n> is the only extra job done by OutputPluginUpdateProgress but probably\n> we can consolidate the checks and directly invoke\n> OutputPluginUpdateProgress.\n>\n\nYes, I saw that, but I thought it was better to keep the early exit\nfrom update_progress_cb_wrapper, so incurring just one additional\nboolean check for every 100 changes was not anything to worry about.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 20 Jan 2023 16:28:09 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 19:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Jan 19, 2023 at 4:13 PM Ashutosh Bapat\r\n> <ashutosh.bapat.oss@gmail.com> wrote:\r\n> >\r\n> > On Wed, Jan 18, 2023 at 6:00 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > > + */\r\n> > > + ReorderBufferUpdateProgressCB update_progress;\r\n> > >\r\n> > > Are you suggesting changing the name of the above variable? If so, how\r\n> > > about apply_progress, progress, or updateprogress? If you don't like\r\n> > > any of these then feel free to suggest something else. If we change\r\n> > > the variable name then accordingly, we need to update\r\n> > > ReorderBufferUpdateProgressCB as well.\r\n> > >\r\n> >\r\n> > I would liked to have all the callback names renamed with prefix\r\n> > \"rbcb_xxx\" so that they have very less chances of conflicting with\r\n> > similar names in the code base. But it's probably late to do that :).\r\n> >\r\n> > How are update_txn_progress since the CB is supposed to be used only\r\n> > within a transaction? or update_progress_txn?\r\n> >\r\n> \r\n> Personally, I would prefer 'apply_progress' as it would be similar to\r\n> a few other callbacks like apply_change, apply_truncate, or as is\r\n> proposed by patch update_progress again because it is similar to\r\n> existing callbacks like commit_prepared. If you and others don't like\r\n> any of those then we can go for 'update_progress_txn' as well. Anybody\r\n> else has an opinion on this?\r\n\r\nI think 'update_progress_txn' might be better. Because I think this name seems\r\nto make it easier to know that this callback is used to update process when\r\nprocessing txn. So, I rename it to 'update_progress_txn'.\r\n\r\nI have addressed all the comments and here is the new version patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Fri, 20 Jan 2023 07:17:25 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 12:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Jan 20, 2023 at 7:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> >\r\n> > Here are some review comments for patch v3-0001.\r\n> >\r\n> > ======\r\n> > src/backend/replication/logical/logical.c\r\n> >\r\n> > 3. forward declaration\r\n> >\r\n> > +/* update progress callback */\r\n> > +static void update_progress_cb_wrapper(ReorderBuffer *cache,\r\n> > + ReorderBufferTXN *txn,\r\n> > + ReorderBufferChange *change);\r\n> >\r\n> > I felt this function wrapper name was a bit misleading... AFAIK every\r\n> > other wrapper really does just wrap their respective functions. But\r\n> > this one seems a bit different because it calls the wrapped function\r\n> > ONLY if some threshold is exceeded. IMO maybe this function could have\r\n> > some name that conveys this better:\r\n> >\r\n> > e.g. update_progress_cb_wrapper_with_threshold\r\n> >\r\n> \r\n> I am wondering whether it would be better to move the threshold logic\r\n> to the caller. Previously this logic was inside the function because\r\n> it was being invoked from multiple places but now that won't be the\r\n> case. Also, then your concern about the name would also be addressed.\r\n\r\nAgree. Moved the threshold logic to the function ReorderBufferProcessTXN.\r\n\r\n> >\r\n> > ~\r\n> >\r\n> > 7b.\r\n> > Would it be neater to just call OutputPluginUpdateProgress here instead?\r\n> >\r\n> > e.g.\r\n> > BEFORE\r\n> > ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\r\n> > AFTER\r\n> > OutputPluginUpdateProgress(ctx, false);\r\n> >\r\n> \r\n> We already check whether ctx->update_progress is defined or not which\r\n> is the only extra job done by OutputPluginUpdateProgress but probably\r\n> we can consolidate the checks and directly invoke\r\n> OutputPluginUpdateProgress.\r\n\r\nChanged. 
Invoke the function OutputPluginUpdateProgress directly in the new\r\ncallback.\r\n\r\nRegards,\r\nWang Wei\r\n",
"msg_date": "Fri, 20 Jan 2023 07:18:21 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 10:10 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch v3-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> The problem is when there is a DDL in a transaction that generates lots of\r\n> temporary data due to rewrite rules, these temporary data will not be\r\n> processed\r\n> by the pgoutput - plugin. Therefore, the previous fix (f95d53e) for DML had no\r\n> impact on this case.\r\n> \r\n> ~\r\n> \r\n> 1a.\r\n> IMO this comment needs to give a bit of background about the original\r\n> problem here, rather than just starting with \"The problem is\" which is\r\n> describing the flaws of the previous fix.\r\n\r\nAdded some related message.\r\n\r\n> ~\r\n> \r\n> 1b.\r\n> \"pgoutput - plugin\" -> \"pgoutput plugin\" ??\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 2.\r\n> \r\n> To fix this, we introduced a new ReorderBuffer callback -\r\n> 'ReorderBufferUpdateProgressCB'. This callback is called to try to update the\r\n> process after each change has been processed during sending data of a\r\n> transaction (and its subtransactions) to the output plugin.\r\n> \r\n> IIUC it's not really \"after each change\" - shouldn't this comment\r\n> mention something about the CHANGES_THRESHOLD 100?\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 4. 
update_progress_cb_wrapper\r\n> \r\n> +/*\r\n> + * Update progress callback\r\n> + *\r\n> + * Try to update progress and send a keepalive message if too many changes\r\n> were\r\n> + * processed when processing txn.\r\n> + *\r\n> + * For a large transaction, if we don't send any change to the downstream for a\r\n> + * long time (exceeds the wal_receiver_timeout of standby) then it can\r\n> timeout.\r\n> + * This can happen when all or most of the changes are either not published or\r\n> + * got filtered out.\r\n> + */\r\n> \r\n> SUGGESTION (instead of the \"Try to update\" sentence)\r\n> Send a keepalive message whenever more than <CHANGES_THRESHOLD>\r\n> changes are encountered while processing a transaction.\r\n\r\nSince it's possible that keep-alive messages won't be sent even if the\r\nthreshold is reached (see function WalSndKeepaliveIfNecessary), I thought it\r\nmight be better to use \"try to\".\r\nAnd rewrote the comments here because the threshold logic is moved to the\r\nfunction ReorderBufferProcessTXN.\r\n\r\n> ~~~\r\n> \r\n> 5.\r\n> \r\n> +static void\r\n> +update_progress_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,\r\n> + ReorderBufferChange *change)\r\n> +{\r\n> + LogicalDecodingContext *ctx = cache->private_data;\r\n> + LogicalErrorCallbackState state;\r\n> + ErrorContextCallback errcallback;\r\n> + static int changes_count = 0; /* Static variable used to accumulate\r\n> + * the number of changes while\r\n> + * processing txn. */\r\n> +\r\n> \r\n> IMO this may be more readable if the static 'changes_count' local var\r\n> was declared first and separated from the other vars by a blank line.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> \r\n> + /*\r\n> + * We don't want to try sending a keepalive message after processing each\r\n> + * change as that can have overhead. 
Tests revealed that there is no\r\n> + * noticeable overhead in doing it after continuously processing 100 or so\r\n> + * changes.\r\n> + */\r\n> +#define CHANGES_THRESHOLD 100\r\n> \r\n> 6a.\r\n> I think it might be better to define this right at the top of the\r\n> function adjacent to the 'changes_count' variable (e.g. a bit like the\r\n> original HEAD code looked)\r\n\r\nChanged.\r\n\r\n> ~\r\n> \r\n> 6b.\r\n> SUGGESTION (for the comment)\r\n> Sending keepalive messages after every change has some overhead, but\r\n> testing showed there is no noticeable overhead if keepalive is only\r\n> sent after every ~100 changes.\r\n\r\nChanged.\r\n\r\n> ~~~\r\n> \r\n> 7.\r\n> \r\n> +\r\n> + /*\r\n> + * After continuously processing CHANGES_THRESHOLD changes, we\r\n> + * try to send a keepalive message if required.\r\n> + */\r\n> + if (++changes_count >= CHANGES_THRESHOLD)\r\n> + {\r\n> + ctx->update_progress(ctx, ctx->write_location, ctx->write_xid, false);\r\n> + changes_count = 0;\r\n> + }\r\n> +\r\n> \r\n> 7a.\r\n> SUGGESTION (for comment)\r\n> Send a keepalive message after every CHANGES_THRESHOLD changes.\r\n\r\nChanged.\r\n\r\nRegards,\r\nWang Wei\r\n",
"msg_date": "Fri, 20 Jan 2023 07:19:28 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Here are my review comments for patch v4-0001\n\n======\nGeneral\n\n1.\n\nIt makes no real difference, but I was wondering about:\n\"update txn progress\" versus \"update progress txn\"\n\nI thought that the first way sounds more natural. YMMV.\n\nIf you change this then there is impact for the typedef, function\nnames, comments, member names:\n\nReorderBufferUpdateTxnProgressCB --> ReorderBufferUpdateProgressTxnCB\n\n“/* update progress txn callback */” --> “/* update txn progress callback */”\n\nupdate_progress_txn_cb_wrapper --> update_txn_progress_cb_wrapper\n\nupdated_progress_txn --> update_txn_progress\n\n======\nCommit message\n\n2.\n\nThe problem is when there is a DDL in a transaction that generates lots of\ntemporary data due to rewrite rules, these temporary data will not be processed\nby the pgoutput plugin. The previous commit (f95d53e) only fixed timeouts\ncaused by filtering out changes in pgoutput. Therefore, the previous fix for\nDML had no impact on this case.\n\n~\n\nIMO this still some rewording to say up-front what the the actual\nproblem -- i.e. an avoidable timeout occuring.\n\nSUGGESTION (or something like this...)\n\nWhen there is a DDL in a transaction that generates lots of temporary\ndata due to rewrite rules, this temporary data will not be processed\nby the pgoutput plugin. This means it is possible for a timeout to\noccur if a sufficiently long time elapses since the last pgoutput\nmessage. A previous commit (f95d53e) fixed a similar scenario in this\narea, but that only fixed timeouts for DML going through pgoutput, so\nit did not address this DDL timeout case.\n\n======\nsrc/backend/replication/logical/logical.c\n\n3. 
update_progress_txn_cb_wrapper\n\n+/*\n+ * Update progress callback while processing a transaction.\n+ *\n+ * Try to update progress and send a keepalive message during sending data of a\n+ * transaction (and its subtransactions) to the output plugin.\n+ *\n+ * For a large transaction, if we don't send any change to the downstream for a\n+ * long time (exceeds the wal_receiver_timeout of standby) then it can timeout.\n+ * This can happen when all or most of the changes are either not published or\n+ * got filtered out.\n+ */\n+static void\n+update_progress_txn_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN *txn,\n+ ReorderBufferChange *change)\n\nSimplify the \"Try to...\" paragraph. And the other part should also mention\nDDL.\n\nSUGGESTION\n\nTry to send a keepalive message during transaction processing.\n\nThis is done because if we don't send any change to the downstream for\na long time (exceeds the wal_receiver_timeout of standby), then it can\ntimeout. This can happen for large DDL, or for large transactions when\nall or most of the changes are either not published or got filtered\nout.\n\n======\n.../replication/logical/reorderbuffer.c\n\n4. ReorderBufferProcessTXN\n\n@@ -2105,6 +2105,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn,\n\n PG_TRY();\n {\n+ /*\n+ * Static variable used to accumulate the number of changes while\n+ * processing txn.\n+ */\n+ static int changes_count = 0;\n+\n+ /*\n+ * Sending keepalive messages after every change has some overhead, but\n+ * testing showed there is no noticeable overhead if keepalive is only\n+ * sent after every ~100 changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n+\n\nIMO these can be relocated to be declared/defined inside the \"while\"\nloop -- i.e. 
closer to where they are being used.\n\n~~~\n\n5.\n\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ rb->update_progress_txn(rb, txn, change);\n+ changes_count = 0;\n+ }\n\nWhen there is no update_progress function this code is still incurring\nsome small additional overhead for incrementing and testing the\nTHRESHOLD every time, and also needlessly calling the wrapper every\n100x. This overhead could be avoided with a simpler up-front check\nas shown below. OTOH, maybe the overhead is insignificant enough\nthat just leaving the current code is neater?\n\nLogicalDecodingContext *ctx = rb->private_data;\n...\nif (ctx->update_progress_txn && (++changes_count >= CHANGES_THRESHOLD))\n{\nrb->update_progress_txn(rb, txn, change);\nchanges_count = 0;\n}\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 23 Jan 2023 11:50:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 6:21 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 1.\n>\n> It makes no real difference, but I was wondering about:\n> \"update txn progress\" versus \"update progress txn\"\n>\n\nYeah, I think we can go either way but I still prefer \"update progress\ntxn\" as that is more closer to LogicalOutputPluginWriterUpdateProgress\ncallback name.\n\n>\n> 5.\n>\n> + if (++changes_count >= CHANGES_THRESHOLD)\n> + {\n> + rb->update_progress_txn(rb, txn, change);\n> + changes_count = 0;\n> + }\n>\n> When there is no update_progress function this code is still incurring\n> some small additional overhead for incrementing and testing the\n> THRESHOLD every time, and also needlessly calling to the wrapper every\n> 100x. This overhead could be avoided with a simpler up-front check\n> like shown below. OTOH, maybe the overhead is insignificant enough\n> that just leaving the curent code is neater?\n>\n\nAs far as built-in logical replication is concerned, it will be\ndefined and I don't know if the overhead will be significant enough in\nthis case. Also, one can say that for the cases it is defined, we are\nadding this check multiple times (it is already checked inside\nOutputPluginUpdateProgress). So, I would prefer a neat code here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Jan 2023 09:03:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Monday, January 23, 2023 8:51 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> Here are my review comments for patch v4-0001\r\n> ======\r\n> Commit message\r\n> \r\n> 2.\r\n> \r\n> The problem is when there is a DDL in a transaction that generates lots of\r\n> temporary data due to rewrite rules, these temporary data will not be processed\r\n> by the pgoutput plugin. The previous commit (f95d53e) only fixed timeouts\r\n> caused by filtering out changes in pgoutput. Therefore, the previous fix for DML\r\n> had no impact on this case.\r\n> \r\n> ~\r\n> \r\n> IMO this still some rewording to say up-front what the the actual problem -- i.e.\r\n> an avoidable timeout occuring.\r\n> \r\n> SUGGESTION (or something like this...)\r\n> \r\n> When there is a DDL in a transaction that generates lots of temporary data due\r\n> to rewrite rules, this temporary data will not be processed by the pgoutput\r\n> plugin. This means it is possible for a timeout to occur if a sufficiently long time\r\n> elapses since the last pgoutput message. A previous commit (f95d53e) fixed a\r\n> similar scenario in this area, but that only fixed timeouts for DML going through\r\n> pgoutput, so it did not address this DDL timeout case.\r\n\r\nThanks, I changed the commit message as suggested.\r\n\r\n> ======\r\n> src/backend/replication/logical/logical.c\r\n> \r\n> 3. 
update_progress_txn_cb_wrapper\r\n> \r\n> +/*\r\n> + * Update progress callback while processing a transaction.\r\n> + *\r\n> + * Try to update progress and send a keepalive message during sending\r\n> +data of a\r\n> + * transaction (and its subtransactions) to the output plugin.\r\n> + *\r\n> + * For a large transaction, if we don't send any change to the\r\n> +downstream for a\r\n> + * long time (exceeds the wal_receiver_timeout of standby) then it can timeout.\r\n> + * This can happen when all or most of the changes are either not\r\n> +published or\r\n> + * got filtered out.\r\n> + */\r\n> +static void\r\n> +update_progress_txn_cb_wrapper(ReorderBuffer *cache, ReorderBufferTXN\r\n> *txn,\r\n> + ReorderBufferChange *change)\r\n> \r\n> Simplify the \"Try to...\" paragraph. And other part should also mention about DDL.\r\n> \r\n> SUGGESTION\r\n> \r\n> Try send a keepalive message during transaction processing.\r\n> \r\n> This is done because if we don't send any change to the downstream for a long\r\n> time (exceeds the wal_receiver_timeout of standby), then it can timeout. This can\r\n> happen for large DDL, or for large transactions when all or most of the changes\r\n> are either not published or got filtered out.\r\n\r\nChanged.\r\n\r\n> ======\r\n> .../replication/logical/reorderbuffer.c\r\n> \r\n> 4. 
ReorderBufferProcessTXN\r\n> \r\n> @@ -2105,6 +2105,19 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\r\n> ReorderBufferTXN *txn,\r\n> \r\n> PG_TRY();\r\n> {\r\n> + /*\r\n> + * Static variable used to accumulate the number of changes while\r\n> + * processing txn.\r\n> + */\r\n> + static int changes_count = 0;\r\n> +\r\n> + /*\r\n> + * Sending keepalive messages after every change has some overhead, but\r\n> + * testing showed there is no noticeable overhead if keepalive is only\r\n> + * sent after every ~100 changes.\r\n> + */\r\n> +#define CHANGES_THRESHOLD 100\r\n> +\r\n> \r\n> IMO these can be relocated to be declared/defined inside the \"while\"\r\n> loop -- i.e. closer to where they are being used.\r\n\r\nMoved into the while loop.\r\n\r\nAttach the new version patch which addressed the above comments.\r\nAlso attach a simple script which uses \"refresh matview\" to reproduce\r\nthis timeout problem, just in case someone wants to try to reproduce this.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Mon, 23 Jan 2023 10:03:30 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "Hi Hou-san, Here are my review comments for v5-0001.\n\n======\nsrc/backend/replication/logical/reorderbuffer.c\n\n1.\n@@ -2446,6 +2452,23 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn,\n elog(ERROR, \"tuplecid value in changequeue\");\n break;\n }\n+\n+ /*\n+ * Sending keepalive messages after every change has some overhead, but\n+ * testing showed there is no noticeable overhead if keepalive is only\n+ * sent after every ~100 changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n+\n+ /*\n+ * Try to send a keepalive message after every CHANGES_THRESHOLD\n+ * changes.\n+ */\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ rb->update_progress_txn(rb, txn, change);\n+ changes_count = 0;\n+ }\n\nI noticed you put the #define adjacent to the only usage of it,\ninstead of with the other variable declaration like it was before.\nProbably it is better how you have done it, but:\n\n1a.\nThe comment indentation is incorrect.\n\n~\n\n1b.\nSince the #define is adjacent to its only usage IMO now the 2nd\ncomment is redundant. So the code can just say\n\n /*\n * Sending keepalive messages after every change has some\noverhead, but\n * testing showed there is no noticeable overhead if\nkeepalive is only\n * sent after every ~100 changes.\n */\n#define CHANGES_THRESHOLD 100\n if (++changes_count >= CHANGES_THRESHOLD)\n {\n rb->update_progress_txn(rb, txn, change);\n changes_count = 0;\n }\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:28:28 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tues, Jan 24, 2023 at 8:28 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi Hou-san, Here are my review comments for v5-0001.\r\n\r\nThanks for your comments.\r\n\r\n> ======\r\n> src/backend/replication/logical/reorderbuffer.c\r\n> \r\n> 1.\r\n> @@ -2446,6 +2452,23 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\r\n> ReorderBufferTXN *txn,\r\n> elog(ERROR, \"tuplecid value in changequeue\");\r\n> break;\r\n> }\r\n> +\r\n> + /*\r\n> + * Sending keepalive messages after every change has some overhead, but\r\n> + * testing showed there is no noticeable overhead if keepalive is only\r\n> + * sent after every ~100 changes.\r\n> + */\r\n> +#define CHANGES_THRESHOLD 100\r\n> +\r\n> + /*\r\n> + * Try to send a keepalive message after every CHANGES_THRESHOLD\r\n> + * changes.\r\n> + */\r\n> + if (++changes_count >= CHANGES_THRESHOLD)\r\n> + {\r\n> + rb->update_progress_txn(rb, txn, change);\r\n> + changes_count = 0;\r\n> + }\r\n> \r\n> I noticed you put the #define adjacent to the only usage of it,\r\n> instead of with the other variable declaration like it was before.\r\n> Probably it is better how you have done it, but:\r\n> \r\n> 1a.\r\n> The comment indentation is incorrect.\r\n> \r\n> ~\r\n> \r\n> 1b.\r\n> Since the #define is adjacent to its only usage IMO now the 2nd\r\n> comment is redundant. So the code can just say\r\n> \r\n> /*\r\n> * Sending keepalive messages after every change has some\r\n> overhead, but\r\n> * testing showed there is no noticeable overhead if\r\n> keepalive is only\r\n> * sent after every ~100 changes.\r\n> */\r\n> #define CHANGES_THRESHOLD 100\r\n> if (++changes_count >= CHANGES_THRESHOLD)\r\n> {\r\n> rb->update_progress_txn(rb, txn, change);\r\n> changes_count = 0;\r\n> }\r\n\r\nChanged as suggested.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Tue, 24 Jan 2023 02:45:09 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 1:45 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Tues, Jan 24, 2023 at 8:28 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Hi Hou-san, Here are my review comments for v5-0001.\n>\n> Thanks for your comments.\n...\n>\n> Changed as suggested.\n>\n> Attach the new patch.\n\nThanks! Patch v6 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 24 Jan 2023 15:15:49 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 8:15 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Attach the new patch.\n>\n\nI think the patch missed to handle the case of non-transactional\nmessages which was previously getting handled. I have tried to address\nthat in the attached. Is there a reason that shouldn't be handled?\nApart from that changed a few comments. If my understanding is\ncorrect, then we need to change the callback update_progress_txn name\nas well because now it needs to handle both transactional and\nnon-transactional changes. How about update_progress_write? We\naccordingly need to change the comments for the callback.\n\nAdditionally, I think we should have a test case to show we don't time\nout because of not processing non-transactional messages. See\npgoutput_message for cases where it doesn't process the message.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 25 Jan 2023 16:55:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wednesday, January 25, 2023 7:26 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> \r\n> On Tue, Jan 24, 2023 at 8:15 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Attach the new patch.\r\n> >\r\n> \r\n> I think the patch missed to handle the case of non-transactional messages which\r\n> was previously getting handled. I have tried to address that in the attached. Is\r\n> there a reason that shouldn't be handled?\r\n\r\nThanks for updating the patch!\r\n\r\nI thought about the non-transactional message. I think it seems fine if we\r\ndon’t handle it for timeout because such message is decoded via:\r\n\r\nWalSndLoop\r\n-XLogSendLogical\r\n--LogicalDecodingProcessRecord\r\n---logicalmsg_decode\r\n----ReorderBufferQueueMessage\r\n-----rb->message() -- //maybe send the message or do nothing here.\r\n\r\nAfter invoking rb->message(), we will directly return to the main\r\nloop(WalSndLoop) where we will get a chance to call\r\nWalSndKeepaliveIfNecessary() to avoid the timeout.\r\n\r\nThis is a bit different from transactional changes, because for transactional changes, we\r\nwill buffer them and then send every buffered change one by one(via\r\nReorderBufferProcessTXN) without going back to the WalSndLoop, so we don't get\r\na chance to send keepalive message if necessary, which is more likely to cause the\r\ntimeout problem.\r\n\r\nI will also test the non-transactional message for timeout in case I missed something.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Fri, 27 Jan 2023 11:48:02 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 5:18 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, January 25, 2023 7:26 PM Amit Kapila <amit.kapila16@gmail.com>\n> >\n> > On Tue, Jan 24, 2023 at 8:15 AM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > Attach the new patch.\n> > >\n> >\n> > I think the patch missed to handle the case of non-transactional messages which\n> > was previously getting handled. I have tried to address that in the attached. Is\n> > there a reason that shouldn't be handled?\n>\n> Thanks for updating the patch!\n>\n> I thought about the non-transactional message. I think it seems fine if we\n> don’t handle it for timeout because such message is decoded via:\n>\n> WalSndLoop\n> -XLogSendLogical\n> --LogicalDecodingProcessRecord\n> ---logicalmsg_decode\n> ----ReorderBufferQueueMessage\n> -----rb->message() -- //maybe send the message or do nothing here.\n>\n> After invoking rb->message(), we will directly return to the main\n> loop(WalSndLoop) where we will get a chance to call\n> WalSndKeepaliveIfNecessary() to avoid the timeout.\n>\n\nValid point. But this means the previous handling of non-transactional\nmessages was also redundant.\n\n> This is a bit different from transactional changes, because for transactional changes, we\n> will buffer them and then send every buffered change one by one(via\n> ReorderBufferProcessTXN) without going back to the WalSndLoop, so we don't get\n> a chance to send keepalive message if necessary, which is more likely to cause the\n> timeout problem.\n>\n> I will also test the non-transactional message for timeout in case I missed something.\n>\n\nOkay, thanks. Please see if we can test a mix of transactional and\nnon-transactional messages as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Jan 2023 17:24:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 19:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Jan 27, 2023 at 5:18 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, January 25, 2023 7:26 PM Amit Kapila\r\n> <amit.kapila16@gmail.com>\r\n> > >\r\n> > > On Tue, Jan 24, 2023 at 8:15 AM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > > Attach the new patch.\r\n> > > >\r\n> > >\r\n> > > I think the patch missed to handle the case of non-transactional messages\r\n> which\r\n> > > was previously getting handled. I have tried to address that in the attached.\r\n> Is\r\n> > > there a reason that shouldn't be handled?\r\n> >\r\n> > Thanks for updating the patch!\r\n> >\r\n> > I thought about the non-transactional message. I think it seems fine if we\r\n> > don’t handle it for timeout because such message is decoded via:\r\n> >\r\n> > WalSndLoop\r\n> > -XLogSendLogical\r\n> > --LogicalDecodingProcessRecord\r\n> > ---logicalmsg_decode\r\n> > ----ReorderBufferQueueMessage\r\n> > -----rb->message() -- //maybe send the message or do nothing here.\r\n> >\r\n> > After invoking rb->message(), we will directly return to the main\r\n> > loop(WalSndLoop) where we will get a chance to call\r\n> > WalSndKeepaliveIfNecessary() to avoid the timeout.\r\n> >\r\n> \r\n> Valid point. But this means the previous handling of non-transactional\r\n> messages was also redundant.\r\n\r\nThanks for the analysis, I think it makes sense. 
So I removed the handling of\r\nnon-transactional messages.\r\n\r\n> > This is a bit different from transactional changes, because for transactional\r\n> changes, we\r\n> > will buffer them and then send every buffered change one by one(via\r\n> > ReorderBufferProcessTXN) without going back to the WalSndLoop, so we\r\n> don't get\r\n> > a chance to send keepalive message if necessary, which is more likely to cause\r\n> the\r\n> > timeout problem.\r\n> >\r\n> > I will also test the non-transactional message for timeout in case I missed\r\n> something.\r\n> >\r\n> \r\n> Okay, thanks. Please see if we can test a mix of transactional and\r\n> non-transactional messages as well.\r\n\r\nI tested a mixed transaction of transactional and non-transactional messages on\r\nthe current HEAD and reproduced the timeout problem. I think this result is OK.\r\nBecause when decoding a transaction, non-transactional changes are processed\r\ndirectly and the function WalSndKeepaliveIfNecessary is called, while\r\ntransactional changes are cached and processed after decoding. After decoding,\r\nonly transactional changes will be processed (in the function\r\nReorderBufferProcessTXN), so the timeout problem will still be reproduced.\r\n\r\nAfter applying the v8 patch, the test mentioned above didn't reproduce the\r\ntimeout problem (Attach this test script 'test_with_nontransactional.sh').\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Sun, 29 Jan 2023 07:41:07 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Sun, Jan 29, 2023 3:41 PM wangw.fnst@fujitsu.com <wangw.fnst@fujitsu.com> wrote:\r\n> \r\n> I tested a mix transaction of transactional and non-transactional messages on\r\n> the current HEAD and reproduced the timeout problem. I think this result is OK.\r\n> Because when decoding a transaction, non-transactional changes are processed\r\n> directly and the function WalSndKeepaliveIfNecessary is called, while\r\n> transactional changes are cached and processed after decoding. After decoding,\r\n> only transactional changes will be processed (in the function\r\n> ReorderBufferProcessTXN), so the timeout problem will still be reproduced.\r\n> \r\n> After applying the v8 patch, the test mentioned above didn't reproduce the\r\n> timeout problem (Attach this test script 'test_with_nontransactional.sh').\r\n> \r\n> Attach the new patch.\r\n> \r\n\r\nThanks for updating the patch. Here is a comment.\r\n\r\nIn update_progress_txn_cb_wrapper(), it looks like we need to reset\r\nchanges_count to 0 after calling OutputPluginUpdateProgress(), otherwise\r\nOutputPluginUpdateProgress() will always be called after 100 changes.\r\n\r\nRegards,\r\nShi yu\r\n\r\n",
"msg_date": "Mon, 30 Jan 2023 03:36:59 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 30, 2023 11:37 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\r\n> On Sun, Jan 29, 2023 3:41 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > I tested a mix transaction of transactional and non-transactional messages on\r\n> > the current HEAD and reproduced the timeout problem. I think this result is\r\n> OK.\r\n> > Because when decoding a transaction, non-transactional changes are\r\n> processed\r\n> > directly and the function WalSndKeepaliveIfNecessary is called, while\r\n> > transactional changes are cached and processed after decoding. After\r\n> decoding,\r\n> > only transactional changes will be processed (in the function\r\n> > ReorderBufferProcessTXN), so the timeout problem will still be reproduced.\r\n> >\r\n> > After applying the v8 patch, the test mentioned above didn't reproduce the\r\n> > timeout problem (Attach this test script 'test_with_nontransactional.sh').\r\n> >\r\n> > Attach the new patch.\r\n> >\r\n> \r\n> Thanks for updating the patch. Here is a comment.\r\n\r\nThanks for your comment.\r\n\r\n> In update_progress_txn_cb_wrapper(), it looks like we need to reset\r\n> changes_count to 0 after calling OutputPluginUpdateProgress(), otherwise\r\n> OutputPluginUpdateProgress() will always be called after 100 changes.\r\n\r\nYes, I think you are right.\r\nFixed this problem.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Mon, 30 Jan 2023 05:06:48 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 10:36 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 30, 2023 11:37 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com> wrote:\n> > On Sun, Jan 29, 2023 3:41 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n>\n> Yes, I think you are right.\n> Fixed this problem.\n>\n\n+ /*\n+ * Trying to send keepalive message after every change has some\n+ * overhead, but testing showed there is no noticeable overhead if\n+ * we do it after every ~100 changes.\n+ */\n+#define CHANGES_THRESHOLD 100\n+\n+ if (++changes_count < CHANGES_THRESHOLD)\n+ return;\n...\n+ changes_count = 0;\n\nI think it is better to have this threshold-related code in that\ncaller as we have in the previous version. Also, let's modify the\ncomment as follows:\"\nIt is possible that the data is not sent to downstream for a long time\neither because the output plugin filtered it or there is a DDL that\ngenerates a lot of data that is not processed by the plugin. So, in\nsuch cases, the downstream can timeout. To avoid that we try to send a\nkeepalive message if required. Trying to send a keepalive message\nafter every change has some overhead, but testing showed there is no\nnoticeable overhead if we do it after every ~100 changes.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 12:24:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 14:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Jan 30, 2023 at 10:36 AM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Mon, Jan 30, 2023 11:37 AM Shi, Yu/侍 雨 <shiy.fnst@cn.fujitsu.com>\r\n> wrote:\r\n> > > On Sun, Jan 29, 2023 3:41 PM wangw.fnst@fujitsu.com\r\n> > > <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Yes, I think you are right.\r\n> > Fixed this problem.\r\n> >\r\n> \r\n> + /*\r\n> + * Trying to send keepalive message after every change has some\r\n> + * overhead, but testing showed there is no noticeable overhead if\r\n> + * we do it after every ~100 changes.\r\n> + */\r\n> +#define CHANGES_THRESHOLD 100\r\n> +\r\n> + if (++changes_count < CHANGES_THRESHOLD)\r\n> + return;\r\n> ...\r\n> + changes_count = 0;\r\n> \r\n> I think it is better to have this threshold-related code in that\r\n> caller as we have in the previous version. Also, let's modify the\r\n> comment as follows:\"\r\n> It is possible that the data is not sent to downstream for a long time\r\n> either because the output plugin filtered it or there is a DDL that\r\n> generates a lot of data that is not processed by the plugin. So, in\r\n> such cases, the downstream can timeout. To avoid that we try to send a\r\n> keepalive message if required. Trying to send a keepalive message\r\n> after every change has some overhead, but testing showed there is no\r\n> noticeable overhead if we do it after every ~100 changes.\"\r\n\r\nChanged as suggested.\r\n\r\nI also removed the comment atop the function update_progress_txn_cb_wrapper to\r\nbe consistent with the nearby *_cb_wrapper functions.\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Mon, 30 Jan 2023 09:50:08 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 17:50 PM I wrote:\r\n> Attach the new patch.\r\n\r\nWhen invoking the function ReorderBufferProcessTXN, the threshold-related\r\ncounter \"changes_count\" may have some random value from the previous\r\ntransaction's processing. To fix this, I moved the definition of the counter\r\n\"changes_count\" outside the while-loop and did not use the keyword \"static\".\r\n\r\nAttach the new patch.\r\n\r\nRegards,\r\nWang Wei",
"msg_date": "Tue, 31 Jan 2023 09:23:22 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 2:53 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 30, 2023 at 17:50 PM I wrote:\n> > Attach the new patch.\n>\n> When invoking the function ReorderBufferProcessTXN, the threshold-related\n> counter \"changes_count\" may have some random value from the previous\n> transaction's processing. To fix this, I moved the definition of the counter\n> \"changes_count\" outside the while-loop and did not use the keyword \"static\".\n>\n> Attach the new patch.\n>\n\nThanks, the patch looks good to me. I have slightly adjusted one of\nthe comments and ran pgindent. See attached. As mentioned in the\ncommit message, we shouldn't backpatch this as this requires a new\ncallback and moreover, users can increase the wal_sender_timeout and\nwal_receiver_timeout to avoid this problem. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 31 Jan 2023 16:57:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Thanks, the patch looks good to me. I have slightly adjusted one of\n> the comments and ran pgindent. See attached. As mentioned in the\n> commit message, we shouldn't backpatch this as this requires a new\n> callback and moreover, users can increase the wal_sender_timeout and\n> wal_receiver_timeout to avoid this problem. What do you think?\n\nThe callback and the implementation are all in core. What's the risk\nyou see in backpatching it?\n\nCustomers can adjust the timeouts, but only after the receiver has\ntimed out a few times. Replication remains broken till they notice it\nand adjust timeouts. By that time WAL has piled up. It also takes a\nfew attempts to increase timeouts since the time taken by a\ntransaction to decode can not be estimated beforehand. All that makes\nit worth back-patching if it's possible. We had a customer who piled\nup GBs of WAL before realising that this is the problem. Their system\nalmost came to a halt due to that.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 31 Jan 2023 17:03:42 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 5:03 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Thanks, the patch looks good to me. I have slightly adjusted one of\n> > the comments and ran pgindent. See attached. As mentioned in the\n> > commit message, we shouldn't backpatch this as this requires a new\n> > callback and moreover, users can increase the wal_sender_timeout and\n> > wal_receiver_timeout to avoid this problem. What do you think?\n>\n> The callback and the implementation is all in core. What's the risk\n> you see in backpatching it?\n>\n\nBecause we are changing the exposed structure and which can break\nexisting extensions using it.\n\n> Customers can adjust the timeouts, but only after the receiver has\n> timed out a few times. Replication remains broekn till they notice it\n> and adjust timeouts. By that time WAL has piled up. It also takes a\n> few attempts to increase timeouts since the time taken by a\n> transaction to decode can not be estimated beforehand. All that makes\n> it worth back-patching if it's possible. We had a customer who piled\n> up GBs of WAL before realising that this is the problem. Their system\n> almost came to a halt due to that.\n>\n\nWhich version are they using? If they are at >=14, using \"streaming =\non\" for a subscription should also avoid this problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 31 Jan 2023 17:12:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 5:03 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Jan 31, 2023 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > Thanks, the patch looks good to me. I have slightly adjusted one of\n> > > the comments and ran pgindent. See attached. As mentioned in the\n> > > commit message, we shouldn't backpatch this as this requires a new\n> > > callback and moreover, users can increase the wal_sender_timeout and\n> > > wal_receiver_timeout to avoid this problem. What do you think?\n> >\n> > The callback and the implementation is all in core. What's the risk\n> > you see in backpatching it?\n> >\n>\n> Because we are changing the exposed structure and which can break\n> existing extensions using it.\n\nIs that because we are adding the new member in the middle of the\nstructure? Shouldn't extensions provide new libraries with each\nmaintenance release of PG?\n\n>\n> > Customers can adjust the timeouts, but only after the receiver has\n> > timed out a few times. Replication remains broekn till they notice it\n> > and adjust timeouts. By that time WAL has piled up. It also takes a\n> > few attempts to increase timeouts since the time taken by a\n> > transaction to decode can not be estimated beforehand. All that makes\n> > it worth back-patching if it's possible. We had a customer who piled\n> > up GBs of WAL before realising that this is the problem. Their system\n> > almost came to a halt due to that.\n> >\n>\n> Which version are they using? If they are at >=14, using \"streaming =\n> on\" for a subscription should also avoid this problem.\n\n13.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 31 Jan 2023 20:24:36 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Here are my review comments for v13-00001.\n\n======\nCommit message\n\n1.\nThe DDLs like Refresh Materialized views that generate lots of temporary\ndata due to rewrite rules may not be processed by output plugins (for\nexample pgoutput). So, we won't send keep-alive messages for a long time\nwhile processing such commands and that can lead the subscriber side to\ntimeout.\n\n~\n\nSUGGESTION (minor rearranged way to say the same thing)\n\nFor DDLs that generate lots of temporary data due to rewrite rules\n(e.g. REFRESH MATERIALIZED VIEW) the output plugins (e.g. pgoutput)\nmay not be processed for a long time. Since we don't send keep-alive\nmessages while processing such commands that can lead the subscriber\nside to timeout.\n\n~~~\n\n2.\nThe commit message says what the problem is, but it doesn’t seem to\ndescribe what this patch does to fix the problem.\n\n======\nsrc/backend/replication/logical/reorderbuffer.c\n\n3.\n+ /*\n+ * It is possible that the data is not sent to downstream for a\n+ * long time either because the output plugin filtered it or there\n+ * is a DDL that generates a lot of data that is not processed by\n+ * the plugin. So, in such cases, the downstream can timeout. 
To\n+ * avoid that we try to send a keepalive message if required.\n+ * Trying to send a keepalive message after every change has some\n+ * overhead, but testing showed there is no noticeable overhead if\n+ * we do it after every ~100 changes.\n+ */\n\n\n3a.\n\"data is not sent to downstream\" --> \"data is not sent downstream\" (?)\n\n~\n\n3b.\n\"So, in such cases,\" --> \"In such cases,\"\n\n~~~\n\n4.\n+#define CHANGES_THRESHOLD 100\n+\n+ if (++changes_count >= CHANGES_THRESHOLD)\n+ {\n+ rb->update_progress_txn(rb, txn, change->lsn);\n+ changes_count = 0;\n+ }\n\nI was wondering if it would have been simpler to write this code like below.\n\nAlso, by doing it this way the 'changes_count' variable name makes\nmore sense IMO, otherwise (for current code) maybe it should be called\nsomething like 'changes_since_last_keepalive'\n\nSUGGESTION\nif (++changes_count % CHANGES_THRESHOLD == 0)\n rb->update_progress_txn(rb, txn, change->lsn);\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 1 Feb 2023 10:13:19 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 4:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v13-00001.\n>\n> ======\n> Commit message\n>\n> 1.\n> The DDLs like Refresh Materialized views that generate lots of temporary\n> data due to rewrite rules may not be processed by output plugins (for\n> example pgoutput). So, we won't send keep-alive messages for a long time\n> while processing such commands and that can lead the subscriber side to\n> timeout.\n>\n> ~\n>\n> SUGGESTION (minor rearranged way to say the same thing)\n>\n> For DDLs that generate lots of temporary data due to rewrite rules\n> (e.g. REFRESH MATERIALIZED VIEW) the output plugins (e.g. pgoutput)\n> may not be processed for a long time. Since we don't send keep-alive\n> messages while processing such commands that can lead the subscriber\n> side to timeout.\n>\n\nHmm, this makes it less clear and in fact changed the meaning.\n\n> ~~~\n>\n> 2.\n> The commit message says what the problem is, but it doesn’t seem to\n> describe what this patch does to fix the problem.\n>\n\nI thought it was apparent and the code comments made it clear.\n\n>\n> 4.\n> +#define CHANGES_THRESHOLD 100\n> +\n> + if (++changes_count >= CHANGES_THRESHOLD)\n> + {\n> + rb->update_progress_txn(rb, txn, change->lsn);\n> + changes_count = 0;\n> + }\n>\n> I was wondering if it would have been simpler to write this code like below.\n>\n> Also, by doing it this way the 'changes_count' variable name makes\n> more sense IMO, otherwise (for current code) maybe it should be called\n> something like 'changes_since_last_keepalive'\n>\n> SUGGESTION\n> if (++changes_count % CHANGES_THRESHOLD == 0)\n> rb->update_progress_txn(rb, txn, change->lsn);\n>\n\nI find the current code in the patch clear and easy to understand.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 09:05:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 8:24 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 31, 2023 at 5:03 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 31, 2023 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > Thanks, the patch looks good to me. I have slightly adjusted one of\n> > > > the comments and ran pgindent. See attached. As mentioned in the\n> > > > commit message, we shouldn't backpatch this as this requires a new\n> > > > callback and moreover, users can increase the wal_sender_timeout and\n> > > > wal_receiver_timeout to avoid this problem. What do you think?\n> > >\n> > > The callback and the implementation is all in core. What's the risk\n> > > you see in backpatching it?\n> > >\n> >\n> > Because we are changing the exposed structure and which can break\n> > existing extensions using it.\n>\n> Is that because we are adding the new member in the middle of the\n> structure?\n>\n\nNot only that but this changes the size of the structure and we want\nto avoid that as well in stable branches. See email [1] (you can't\nchange the struct size either ...). As per my understanding, our usual\npractice is to not change the exposed structure's size/definition in\nstable branches.\n\n\n[1] - https://www.postgresql.org/message-id/2358496.1649168259%40sss.pgh.pa.us\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 10:04:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 10:04 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 31, 2023 at 8:24 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Tue, Jan 31, 2023 at 5:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 31, 2023 at 5:03 PM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 31, 2023 at 4:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > Thanks, the patch looks good to me. I have slightly adjusted one of\n> > > > > the comments and ran pgindent. See attached. As mentioned in the\n> > > > > commit message, we shouldn't backpatch this as this requires a new\n> > > > > callback and moreover, users can increase the wal_sender_timeout and\n> > > > > wal_receiver_timeout to avoid this problem. What do you think?\n> > > >\n> > > > The callback and the implementation is all in core. What's the risk\n> > > > you see in backpatching it?\n> > > >\n> > >\n> > > Because we are changing the exposed structure and which can break\n> > > existing extensions using it.\n> >\n> > Is that because we are adding the new member in the middle of the\n> > structure?\n> >\n>\n> Not only that but this changes the size of the structure and we want\n> to avoid that as well in stable branches. See email [1] (you can't\n> change the struct size either ...). As per my understanding, our usual\n> practice is to not change the exposed structure's size/definition in\n> stable branches.\n>\n>\n\nI am planning to push this to HEAD sometime next week (by Wednesday).\nTo backpatch this, we need to fix it in some non-standard way, like\nwithout introducing a callback which I am not sure is a good idea. If\nsome other committers vote to get this in back branches with that or\nsome different idea that can be backpatched then we can do that\nseparately as well. 
I don't see this as a must-fix in back branches\nbecause we have a workaround (increase timeout) or users can use the\nstreaming option (for >=14).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 10:13:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-03 10:13:54 +0530, Amit Kapila wrote:\n> I am planning to push this to HEAD sometime next week (by Wednesday).\n> To backpatch this, we need to fix it in some non-standard way, like\n> without introducing a callback which I am not sure is a good idea. If\n> some other committers vote to get this in back branches with that or\n> some different idea that can be backpatched then we can do that\n> separately as well. I don't see this as a must-fix in back branches\n> because we have a workaround (increase timeout) or users can use the\n> streaming option (for >=14).\n\nI just saw the commit go in, and a quick scan over it makes me think neither\nthis commit, nor f95d53eded, which unfortunately was already backpatched, is\nthe right direction. The wrong direction likely started quite a bit earlier,\nwith 024711bb544.\n\nIt feels quite fundamentally wrong that basically every output plugin needs to\ncall a special function in nearly every callback.\n\nIn 024711bb544 there was just one call to OutputPluginUpdateProgress() in\npgoutput.c. Quite tellingly, it just updated pgoutput, without touching\ntest_decoding.\n\nThen a8fd13cab0b added two more calls. 63cf61cdeb7 yet another.\n\n\nThis makes no sense. There's lots of output plugins out there. There's an\nincreasing number of callbacks. This isn't a maintainable path forward.\n\n\nIf we want to call something to maintain state, it has to be happening from\ncentral infrastructure.\n\n\nIt feels quite odd architecturally that WalSndUpdateProgress() ends up\nflushing out writes - that's far far from obvious.\n\nI don't think:\n/*\n * Wait until there is no pending write. Also process replies from the other\n * side and check timeouts during that.\n */\nstatic void\nProcessPendingWrites(void)\n\nIs really a good name. What are we processing? 
What are we actually waiting\nfor - because we don't actually wait for the data to sent out or anything,\njust that they're in a network buffer.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 7 Feb 2023 21:27:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 10:57 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-03 10:13:54 +0530, Amit Kapila wrote:\n> > I am planning to push this to HEAD sometime next week (by Wednesday).\n> > To backpatch this, we need to fix it in some non-standard way, like\n> > without introducing a callback which I am not sure is a good idea. If\n> > some other committers vote to get this in back branches with that or\n> > some different idea that can be backpatched then we can do that\n> > separately as well. I don't see this as a must-fix in back branches\n> > because we have a workaround (increase timeout) or users can use the\n> > streaming option (for >=14).\n>\n> I just saw the commit go in, and a quick scan over it makes me think neither\n> this commit, nor f95d53eded, which unfortunately was already backpatched, is\n> the right direction. The wrong direction likely started quite a bit earlier,\n> with 024711bb544.\n>\n> It feels quite fundamentally wrong that bascially every output plugin needs to\n> call a special function in nearly every callback.\n>\n> In 024711bb544 there was just one call to OutputPluginUpdateProgress() in\n> pgoutput.c. Quite tellingly, it just updated pgoutput, without touching\n> test_decoding.\n>\n> Then a8fd13cab0b added to more calls. 63cf61cdeb7 yet another.\n>\n\nI think the original commit 024711bb544 forgets to call it in\ntest_decoding and the other commits followed the same and missed to\nupdate test_decoding.\n\n>\n> This makes no sense. There's lots of output plugins out there. There's an\n> increasing number of callbacks. This isn't a maintainable path forward.\n>\n>\n> If we want to call something to maintain state, it has to be happening from\n> central infrastructure.\n>\n>\n> It feels quite odd architecturally that WalSndUpdateProgress() ends up\n> flushing out writes - that's far far from obvious.\n>\n> I don't think:\n> /*\n> * Wait until there is no pending write. 
Also process replies from the other\n> * side and check timeouts during that.\n> */\n> static void\n> ProcessPendingWrites(void)\n>\n> Is really a good name. What are we processing?\n>\n\nIt is for sending the keep_alive message (if required). That is\nnormally done when we skipped processing a transaction to ensure sync\nreplication is not delayed. It has been discussed previously [1][2] to\nextend the WalSndUpdateProgress() interface. Basically, as explained\nby Craig [2], this has to be done from plugin as it can do filtering\nor there could be other reasons why the output plugin skips all\nchanges. We used the same interface for sending keep-alive message\nwhen we processed a lot of (DDL) changes without sending anything to\nplugin.\n\n[1] - https://www.postgresql.org/message-id/20200309183018.tzkzwu635sd366ej%40alap3.anarazel.de\n[2] - https://www.postgresql.org/message-id/CAMsr%2BYE3o8Dt890Q8wTooY2MpN0JvdHqUAHYL-LNhBryXOPaKg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 13:36:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 13:36:02 +0530, Amit Kapila wrote:\n> On Wed, Feb 8, 2023 at 10:57 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2023-02-03 10:13:54 +0530, Amit Kapila wrote:\n> > > I am planning to push this to HEAD sometime next week (by Wednesday).\n> > > To backpatch this, we need to fix it in some non-standard way, like\n> > > without introducing a callback which I am not sure is a good idea. If\n> > > some other committers vote to get this in back branches with that or\n> > > some different idea that can be backpatched then we can do that\n> > > separately as well. I don't see this as a must-fix in back branches\n> > > because we have a workaround (increase timeout) or users can use the\n> > > streaming option (for >=14).\n> >\n> > I just saw the commit go in, and a quick scan over it makes me think neither\n> > this commit, nor f95d53eded, which unfortunately was already backpatched, is\n> > the right direction. The wrong direction likely started quite a bit earlier,\n> > with 024711bb544.\n> >\n> > It feels quite fundamentally wrong that bascially every output plugin needs to\n> > call a special function in nearly every callback.\n> >\n> > In 024711bb544 there was just one call to OutputPluginUpdateProgress() in\n> > pgoutput.c. Quite tellingly, it just updated pgoutput, without touching\n> > test_decoding.\n> >\n> > Then a8fd13cab0b added to more calls. 63cf61cdeb7 yet another.\n> >\n> \n> I think the original commit 024711bb544 forgets to call it in\n> test_decoding and the other commits followed the same and missed to\n> update test_decoding.\n\nI think that's a symptom of the wrong architecture having been chosen. This\nshould *never* have been the task of output plugins.\n\n\n> > I don't think:\n> > /*\n> > * Wait until there is no pending write. Also process replies from the other\n> > * side and check timeouts during that.\n> > */\n> > static void\n> > ProcessPendingWrites(void)\n> >\n> > Is really a good name. 
What are we processing?\n> >\n> \n> It is for sending the keep_alive message (if required). That is\n> normally done when we skipped processing a transaction to ensure sync\n> replication is not delayed.\n\nBut how is that \"processing pending writes\"? For me \"processing\" implies we're\ndoing some analysis on them or such.\n\n\nIf we want to write data in WalSndUpdateProgress(), shouldn't we move the\ncommon code of WalSndWriteData() and WalSndUpdateProgress() into\nProcessPendingWrites()?\n\n\n> It has been discussed previously [1][2] to\n> extend the WalSndUpdateProgress() interface. Basically, as explained\n> by Craig [2], this has to be done from plugin as it can do filtering\n> or there could be other reasons why the output plugin skips all\n> changes. We used the same interface for sending keep-alive message\n> when we processed a lot of (DDL) changes without sending anything to\n> plugin.\n>\n> [1] - https://www.postgresql.org/message-id/20200309183018.tzkzwu635sd366ej%40alap3.anarazel.de\n> [2] - https://www.postgresql.org/message-id/CAMsr%2BYE3o8Dt890Q8wTooY2MpN0JvdHqUAHYL-LNhBryXOPaKg%40mail.gmail.com\n\nI don't buy that this has to be done by the output plugin. The actual sending\nout of data happens via the LogicalDecodingContext callbacks, so we very well\ncan know whether we recently did send out data or not.\n\nThis really is a concern of the LogicalDecodingContext, it has pretty much\nnothing to do with output plugins. We should remove all calls of\nOutputPluginUpdateProgress() from pgoutput, and add the necessary calls to\nLogicalDecodingContext->update_progress() to generic code. And\n\nAdditionally we should either rename WalSndUpdateProgress(), because it's now\ndoing *far* more than \"updating progress\", or alternatively, split it into two\nfunctions.\n\n\nI don't think the syncrep logic in WalSndUpdateProgress really works as-is -\nconsider what happens if e.g. the origin filter filters out entire\ntransactions. 
We'll afaics never get to WalSndUpdateProgress(). In some cases\nwe'll be lucky because we'll return quickly to XLogSendLogical(), but not\nreliably.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 10:18:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 10:18:41 -0800, Andres Freund wrote:\n> I don't think the syncrep logic in WalSndUpdateProgress really works as-is -\n> consider what happens if e.g. the origin filter filters out entire\n> transactions. We'll afaics never get to WalSndUpdateProgress(). In some cases\n> we'll be lucky because we'll return quickly to XLogSendLogical(), but not\n> reliably.\n\nIs it actually the right thing to check SyncRepRequested() in that logic? It's\nquite common to set up syncrep so that individual users or transactions opt\ninto syncrep, but to leave the default disabled.\n\nI don't really see an alternative to making this depend solely on\nsync_standbys_defined.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 10:30:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 10:30:37 -0800, Andres Freund wrote:\n> On 2023-02-08 10:18:41 -0800, Andres Freund wrote:\n> > I don't think the syncrep logic in WalSndUpdateProgress really works as-is -\n> > consider what happens if e.g. the origin filter filters out entire\n> > transactions. We'll afaics never get to WalSndUpdateProgress(). In some cases\n> > we'll be lucky because we'll return quickly to XLogSendLogical(), but not\n> > reliably.\n>\n> Is it actually the right thing to check SyncRepRequested() in that logic? It's\n> quite common to set up syncrep so that individual users or transactions opt\n> into syncrep, but to leave the default disabled.\n>\n> I don't really see an alternative to making this depend solely on\n> sync_standbys_defined.\n\nHacking on a rough prototype how I think this should rather look, I had a few\nquestions / remarks:\n\n- We probably need to call UpdateProgress from a bunch of places in decode.c\n as well? Indicating that we're lagging by a lot, just because all\n transactions were in another database seems decidedly suboptimal.\n\n- Why should lag tracking only be updated at commit like points? That seems\n like it adds odd discontinuinities?\n\n- The mix of skipped_xact and ctx->end_xact in WalSndUpdateProgress() seems\n somewhat odd. They have very overlapping meanings IMO.\n\n- there's no UpdateProgress calls in pgoutput_stream_abort(), but ISTM there\n should be? It's legit progress.\n\n- That's from 6912acc04f0: I find LagTrackerRead(), LagTrackerWrite() quite\n confusing, naming-wise. IIUC \"reading\" is about receiving confirmation\n messages, \"writing\" about the time the record was generated. ISTM that the\n current time is a quite poor approximation in XLogSendPhysical(), but pretty\n much meaningless in WalSndUpdateProgress()? 
Am I missing something?\n\n- Aren't the wal_sender_timeout / 2 checks in WalSndUpdateProgress(),\n WalSndWriteData() missing wal_sender_timeout <= 0 checks?\n\n- I don't really understand why f95d53edged55 added !end_xact to the if\n condition for ProcessPendingWrites(). Is the theory that we'll end up in an\n outer loop soon?\n\n\nAttached is a current, quite rough, prototype. It addresses some of the points\nraised, but far from all. There's also several XXXs/FIXMEs in it. I changed\nthe file-ending to .txt to avoid hijacking the CF entry.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 8 Feb 2023 12:02:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 1:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hacking on a rough prototype how I think this should rather look, I had a few\n> questions / remarks:\n>\n> - We probably need to call UpdateProgress from a bunch of places in decode.c\n> as well? Indicating that we're lagging by a lot, just because all\n> transactions were in another database seems decidedly suboptimal.\n>\n\nWe can do that but I think in all those cases we will reach quickly\nenough back to walsender logic (WalSndLoop - that will send keepalive\nif required) that we don't need to worry. After processing each\nrecord, the logic will return back to the main loop that will send\nkeepalive if required. Also, while reading WAL if we need to block, it\nwill call WalSndWaitForWal() which will send keepalive if required.\nThe real problem we have seen in the field reports or tests is that\nwhen we process a large transaction where changes are queued in the\nreorderbuffer and while processing those we discard all or most of the\nchanges.\n\nThe patch calls update_progress in change_cb_wrapper and other\nwrappers which will miss the case of DDLs that generates a lot of data\nthat is not processed by the plugin. I think for that we either need\nto call update_progress from reorderbuffer.c similar to what the patch\nhas removed or we need some other way to address it. Do you have any\nbetter idea?\n\n> - Why should lag tracking only be updated at commit like points? That seems\n> like it adds odd discontinuinities?\n>\n\nWe have previously experimented to call it from non-commit locations\nbut that turned out to give inaccurate information about Lag. See\nemail [1].\n\n> - The mix of skipped_xact and ctx->end_xact in WalSndUpdateProgress() seems\n> somewhat odd. They have very overlapping meanings IMO.\n>\n> - there's no UpdateProgress calls in pgoutput_stream_abort(), but ISTM there\n> should be? 
It's legit progress.\n>\n\nAgreed with both of the above points.\n\n> - That's from 6912acc04f0: I find LagTrackerRead(), LagTrackerWrite() quite\n> confusing, naming-wise. IIUC \"reading\" is about receiving confirmation\n> messages, \"writing\" about the time the record was generated. ISTM that the\n> current time is a quite poor approximation in XLogSendPhysical(), but pretty\n> much meaningless in WalSndUpdateProgress()? Am I missing something?\n>\n\nLeaving it for Thomas to answer.\n\n> - Aren't the wal_sender_timeout / 2 checks in WalSndUpdateProgress(),\n> WalSndWriteData() missing wal_sender_timeout <= 0 checks?\n>\n\nIt seems we are checking that via\nProcessPendingWrites()->WalSndKeepaliveIfNecessary(). Do you think we\nneed to check it before as well?\n\n> - I don't really understand why f95d53edged55 added !end_xact to the if\n> condition for ProcessPendingWrites(). Is the theory that we'll end up in an\n> outer loop soon?\n>\n\nYes. For non-empty xacts, we will anyway send a commit message. For\nempty (skipped) xacts, we will send for synchronous replication case\nto avoid any delay.\n\n>\n> Attached is a current, quite rough, prototype. It addresses some of the points\n> raised, but far from all. There's also several XXXs/FIXMEs in it. I changed\n> the file-ending to .txt to avoid hijacking the CF entry.\n>\n\nI have started a separate thread to avoid such confusion. I hope that\nis fine with you.\n\n> > > I don't think the syncrep logic in WalSndUpdateProgress really works as-is -\n> > > consider what happens if e.g. the origin filter filters out entire\n> > > transactions. We'll afaics never get to WalSndUpdateProgress(). In some cases\n> > > we'll be lucky because we'll return quickly to XLogSendLogical(), but not\n> > > reliably.\n> >\n\nWhich case are you worried about? 
As mentioned in one of the previous\npoints I thought the timeout/keepalive handling in the callers should\nbe enough.\n\n> > Is it actually the right thing to check SyncRepRequested() in that logic? It's\n> > quite common to set up syncrep so that individual users or transactions opt\n> > into syncrep, but to leave the default disabled.\n> >\n> > I don't really see an alternative to making this depend solely on\n> > sync_standbys_defined.\n\nFair point.\n\nHow about renaming ProcessPendingWrites to WaitToSendPendingWrites or\nWalSndWaitToSendPendingWrites?\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62755D216245199554DDC8DB9EEA9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 11:21:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Rework LogicalOutputPluginWriterUpdateProgress (WAS Re: Logical\n replication timeout ...)"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 11:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> How about renaming ProcessPendingWrites to WaitToSendPendingWrites or\n> WalSndWaitToSendPendingWrites?\n>\n\nHow about renaming WalSndUpdateProgress() to\nWalSndUpdateProgressAndSendKeepAlive() or\nWalSndUpdateProgressAndKeepAlive()?\n\nOne thing to note about the changes we are discussing here is that\nsome of the plugins like wal2json already call\nOutputPluginUpdateProgress in their commit callback. They may need to\nupdate it accordingly.\n\nOne difference I see with the patch is that I think we will end up\nsending keepalive for empty prepared transactions even though we don't\nskip sending begin/prepare messages for those. The reason why we don't\nskip sending prepare for empty 2PC xacts is that if the WALSender\nrestarts after the PREPARE of a transaction and before the COMMIT\nPREPARED of the same transaction then we won't be able to figure out\nif we have skipped sending BEGIN/PREPARE of a transaction. To skip\nsending prepare for empty xacts, we previously thought of some ideas\nlike (a) At commit-prepare time have a check on the subscriber-side to\nknow whether there is a corresponding prepare for it before actually\ndoing commit-prepare but that sounded costly. (b) somehow persist the\ninformation whether the PREPARE for a xact is already sent and then\nuse that information for commit prepared but again that also didn't\nsound like a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 15:54:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework LogicalOutputPluginWriterUpdateProgress (WAS Re: Logical\n replication timeout ...)"
},
{
"msg_contents": "On Wed, 8 Feb 2023 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n>\n> Attached is a current, quite rough, prototype. It addresses some of the points\n> raised, but far from all. There's also several XXXs/FIXMEs in it. I changed\n> the file-ending to .txt to avoid hijacking the CF entry.\n\nIt looks like this patch has received quite a generous helping of\nfeedback from Andres. I'm setting it to Waiting on Author.\n\nOn the one hand it looks like there's a lot of work to do on this but\non the other hand it sounds like this is a live problem in the field\nso if it can get done in time for release that would be great but if\nnot then feel free to move it to the next commitfest (which means next\nrelease).\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Wed, 1 Mar 2023 15:18:44 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication timeout problem"
},
{
    "msg_contents": "On Thu, Mar 2, 2023 at 4:19 AM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:\n> On Wed, 8 Feb 2023 at 15:04, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Attached is a current, quite rough, prototype. It addresses some of the points\n> > raised, but far from all. There's also several XXXs/FIXMEs in it. I changed\n> > the file-ending to .txt to avoid hijacking the CF entry.\n> \n> It looks like this patch has received quite a generous helping of\n> feedback from Andres. I'm setting it to Waiting on Author.\n> \n> On the one hand it looks like there's a lot of work to do on this but\n> on the other hand it sounds like this is a live problem in the field\n> so if it can get done in time for release that would be great but if\n> not then feel free to move it to the next commitfest (which means next\n> release).\n\nHi,\n\nSince this patch is an improvement to the architecture in HEAD, we started\nanother new thread [1] on this topic to develop related patch.\n\nIt seems that we could modify the details of this CF entry to point to the new\nthread and change the status to 'Needs Review'.\n\n[1] - https://www.postgresql.org/message-id/20230210210423.r26ndnfmuifie4f6%40awork3.anarazel.de\n\nRegards,\nWang Wei\n",
"msg_date": "Thu, 2 Mar 2023 02:32:21 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Logical replication timeout problem"
}
] |
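The `wal_sender_timeout / 2` keepalive logic debated in the thread above (Andres asks whether the checks in WalSndUpdateProgress()/WalSndWriteData() are missing a `wal_sender_timeout <= 0` guard, which Amit says is handled via WalSndKeepaliveIfNecessary()) can be sketched in Python. This is a minimal illustration of the throttling rule only, with hypothetical names, not PostgreSQL's actual walsender code:

```python
def maybe_send_keepalive(now, last_send_time, wal_sender_timeout, send):
    """Send a keepalive once half the timeout has elapsed since the last
    outgoing message, so the receiver does not conclude the sender died.

    A non-positive timeout means the timeout mechanism is disabled, so no
    keepalive is ever forced -- the guard under discussion above.
    """
    if wal_sender_timeout <= 0:
        return False
    if now - last_send_time >= wal_sender_timeout / 2:
        send()
        return True
    return False
```

The point of the half-timeout threshold is that a keepalive sent just before the deadline could still arrive too late; sending at the halfway mark leaves slack for network delay.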
[
{
"msg_contents": "Hi,\n\nA colleague tried PG 14 internally and it failed during cluster creation, when\nusing the PGDG rpm packages. A bit of debugging shows that the problem is\nthat the packaging script specifies the password using --pwfile /dev/zero.\n\nIn 14+ this turns out to lead to an endless loop in pg_get_line_append().\n\nThe --pwfile /dev/zero was added in\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=commitdiff;h=8ca418709ef49a1781f0ea8e6166b139106135ff\n\nDevrim, what was the goal? Even in 13 this didn't achieve anything?\n\n\nWhile I don't think passing /dev/zero is a good idea (it mostly seems to\ncircumvent \"\"password file \\\"%s\\\" is empty\", without achieving anything, given\nthe password will be empty). I think we still ought to make pg_get_line() a\nbit more resilient against '\\0'?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Sep 2021 10:46:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "initdb --pwfile /dev/zero"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> A colleague tried PG 14 internally and it failed during cluster creation, when\n> using the PGDG rpm packages. A bit of debugging shows that the problem is\n> that the packaging script specifies the password using --pwfile /dev/zero.\n\n> In 14+ this turns out to lead to an endless loop in pg_get_line_append().\n\nWell, that's because that file will source an infinite amount of stuff.\n\n> I think we still ought to make pg_get_line() a\n> bit more resilient against '\\0'?\n\nI don't think '\\0' is the problem. The only fix for this would be to\nre-introduce some fixed limit on how long a line we'll read, which\nI'm not too thrilled about. I think this is better classified as\nuser error.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Sep 2021 14:48:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb --pwfile /dev/zero"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-17 14:48:42 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > A colleague tried PG 14 internally and it failed during cluster creation, when\n> > using the PGDG rpm packages. A bit of debugging shows that the problem is\n> > that the packaging script specifies the password using --pwfile /dev/zero.\n> \n> > In 14+ this turns out to lead to an endless loop in pg_get_line_append().\n> \n> Well, that's because that file will source an infinite amount of stuff.\n> \n> > I think we still ought to make pg_get_line() a\n> > bit more resilient against '\\0'?\n> \n> I don't think '\\0' is the problem. The only fix for this would be to\n> re-introduce some fixed limit on how long a line we'll read, which\n> I'm not too thrilled about.\n\nWell, '\\0' can be classified as the end of a line imo. So I don't think it'd\nrequire a line lenght limit.\n\n\n> I think this is better classified as user error.\n\nI also can live with that.\n\n\nI don't really understand how the current PGDG rpms work given this? Does\nnobody use the provided /usr/pgsql-14/bin/postgresql-14-setup?\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/non-common/postgresql-14/main/postgresql-14-setup;h=d111033fc3f3bc03c243f424fd60c3e8ddf2e466;hb=HEAD#l139\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Sep 2021 12:53:48 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: initdb --pwfile /dev/zero"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-09-17 14:48:42 -0400, Tom Lane wrote:\n>> I don't think '\\0' is the problem. The only fix for this would be to\n>> re-introduce some fixed limit on how long a line we'll read, which\n>> I'm not too thrilled about.\n\n> Well, '\\0' can be classified as the end of a line imo. So I don't think it'd\n> require a line lenght limit.\n\nMeh. Those functions are specified to act like fgets(), which does not\nthink that \\0 terminates a line AFAIK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Sep 2021 16:09:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb --pwfile /dev/zero"
}
] |
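The failure mode in the thread above -- pg_get_line_append() looping forever on `--pwfile /dev/zero` -- follows from its fgets()-like contract: a line ends only at `'\n'` or EOF, and `'\0'` is just another byte. A minimal Python sketch of that reading loop (with a probe limit added purely so the demonstration terminates; the real code has no such limit, hence the hang):

```python
import io

def get_line(stream, max_probe=1024):
    # pg_get_line_append-style loop: append bytes until a newline or EOF.
    # A file like /dev/zero yields endless b'\x00' bytes and never a
    # newline or EOF, so without the max_probe guard this loop would
    # never exit.
    buf = bytearray()
    for _ in range(max_probe):
        ch = stream.read(1)
        if not ch:            # EOF ends the "line", like fgets()
            break
        buf += ch
        if ch == b"\n":       # '\0' is NOT treated as a terminator
            break
    return bytes(buf)
```

A normal file terminates the loop at the first newline; a /dev/zero-like stream only stops when the artificial guard trips.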
[
{
"msg_contents": "Hi,\n\nI am the author of a PostgreSQL C++ client library, taoPQ (https://github.com/taocpp/taopq), wrapping the C-API of libpq.\n\nIn case of an error when I received a PGresult*, I can access the SQLSTATE by calling\n\n PGresult* pgresult = ...;\n const char* sqlstate = PQresultErrorField( pgresult, PG_DIAG_SQLSTATE );\n\nHowever, this is not possible in a couple of other cases where I don't have a PGresult*, only the PGconn* is available:\n\n* PQconnectdb (and variants)\n\n* PQputCopyData\n* PQputCopyEnd\n* PQgetCopyData\n\n* lo_* (large object functions)\n\nObviously, I can take the error message from PQerrorMessage and throw a generic runtime error - but it would be so much nicer if I could use the SQLSTATE to throw the correct exception class and give users more information just like I do for PGresult*.\n\nAfter some research, it appears that PGconn* does have a field called last_sqlstate - it just can't be accessed.\nAre there any problems adding a simple accessor to libpq? Or is there some way to access it that I'm missing?\n\nRegards,\nDaniel\n\n\n\n",
"msg_date": "Sat, 18 Sep 2021 01:36:35 +0200",
"msg_from": "Daniel Frey <d.frey@gmx.de>",
"msg_from_op": true,
"msg_subject": "Access last_sqlstate from libpq"
},
{
    "msg_contents": "On Friday, September 17, 2021, Daniel Frey <d.frey@gmx.de> wrote:\n>\n>\n> However, this is not possible in a couple of other cases where I don't\n> have a PGresult*, only the PGconn* is available:\n>\n> * PQconnectdb (and variants)\n>\n> * PQputCopyData\n> * PQputCopyEnd\n> * PQgetCopyData\n>\n> * lo_* (large object functions)\n>\n> After some research, it appears that PGconn* does have a field called\n> last_sqlstate - it just can't be accessed.\n> Are there any problems adding a simple accessor to libpq? Or is there some\n> way to access it that I'm missing?\n>\n\nI suspect the reason for the omission is that there isn’t any usable data\nto be gotten. Those interfaces are not SQL interfaces and thus do not have\na relevant last_sqlstate to report.\n\nDavid J.\n",
"msg_date": "Fri, 17 Sep 2021 16:45:25 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Access last_sqlstate from libpq"
},
{
"msg_contents": "> On 18. Sep 2021, at 01:45, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> \n> \n> On Friday, September 17, 2021, Daniel Frey <d.frey@gmx.de> wrote:\n> \n> However, this is not possible in a couple of other cases where I don't have a PGresult*, only the PGconn* is available:\n> \n> * PQconnectdb (and variants)\n> \n> * PQputCopyData\n> * PQputCopyEnd\n> * PQgetCopyData\n> \n> * lo_* (large object functions)\n> \n> After some research, it appears that PGconn* does have a field called last_sqlstate - it just can't be accessed.\n> Are there any problems adding a simple accessor to libpq? Or is there some way to access it that I'm missing?\n> \n> I suspect the reason for the omission is that there isn’t any usable data to be gotten. Those interfaces are not SQL interfaces and thus do not have a relevant last_sqlstate to report.\n> \n> David J.\n\nAre you sure or are you guessing? It appears that for PQconnectdb there are a couple of SQLSTATES defined which could help users. The 08 Class \"Connection Exception\" contains at least 08001, 08004, 08P01 which could be helpful for users. For PGputCopyData, etc. Class 22 contains a lot of states that could explain what went wrong (in case it's the data), other states potentially also apply (like when the connection is lost, etc.). Even for large data it might me helpful to see states that indicate if the server ran out of disk space, etc.\n\nMaybe not all of this is currently implemented (i.e. a reasonable SQLSTATE is stored in last_sqlstate), but I would hope that it is in some cases.\n\nDaniel\n\n\n\n",
"msg_date": "Sat, 18 Sep 2021 02:00:14 +0200",
"msg_from": "Daniel Frey <d.frey@gmx.de>",
"msg_from_op": true,
"msg_subject": "Re: Access last_sqlstate from libpq"
},
{
    "msg_contents": "On Friday, September 17, 2021, Daniel Frey <d.frey@gmx.de> wrote:\n>\n>\n> Are you sure or are you guessing?\n\n\n>\nGuessing regarding the implementations of these interfaces.\n\nDavid J.\n",
"msg_date": "Fri, 17 Sep 2021 17:09:35 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Access last_sqlstate from libpq"
},
{
"msg_contents": "Daniel Frey <d.frey@gmx.de> writes:\n> In case of an error when I received a PGresult*, I can access the SQLSTATE by calling\n\n> PGresult* pgresult = ...;\n> const char* sqlstate = PQresultErrorField( pgresult, PG_DIAG_SQLSTATE );\n\nRight ...\n\n> However, this is not possible in a couple of other cases where I don't have a PGresult*, only the PGconn* is available:\n> * PQconnectdb (and variants)\n> * PQputCopyData\n> * PQputCopyEnd\n> * PQgetCopyData\n\nIn these cases, any error you might get is probably from libpq itself,\nnot from the server. libpq does not generate SQLSTATEs for its errors,\nso it's likely that last_sqlstate is not relevant at all.\n\n(Getting libpq to assign SQLSTATEs to its errors has been on the to-do\nlist for a couple of decades. I'm not holding my breath for somebody\nto undertake that.)\n\n> Are there any problems adding a simple accessor to libpq?\n\nI would be strongly against that unless somebody first did the\nlegwork to ensure it was meaningful.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Sep 2021 22:29:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access last_sqlstate from libpq"
}
] |
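The exception-dispatch Daniel describes wanting for taoPQ -- mapping a SQLSTATE to a specific exception class -- hinges on the SQL standard's convention that the first two characters of the five-character SQLSTATE are the error class (e.g. class 08 is "connection exception", class 22 is "data exception"). A hedged Python sketch of such a dispatch, with hypothetical class names and only a couple of illustrative entries; as Tom notes, libpq-generated errors carry no SQLSTATE at all, so a client must fall back to a generic error:

```python
# First two SQLSTATE characters identify the error class (SQL standard).
SQLSTATE_CLASSES = {
    "08": "ConnectionException",  # class 08: connection exception
    "22": "DataException",        # class 22: data exception
}

def exception_class_for(sqlstate):
    if not sqlstate or len(sqlstate) < 5:
        # No SQLSTATE available (e.g. the error came from libpq itself,
        # which does not assign SQLSTATEs) -- only a generic error works.
        return "RuntimeError"
    return SQLSTATE_CLASSES.get(sqlstate[:2], "DatabaseError")
```

This is exactly why an accessor on PGconn* would only help where the server actually supplied a state: for client-side failures the lookup has nothing to dispatch on.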
[
{
"msg_contents": "We had left it as an open issue whether or not to risk back-patching\n5c056b0c2 into stable branches [1]. While updating the v14 release notes,\nI realized that we can't put off that decision any longer, because we\nhave to decide now whether to document that as a new behavior in v14.\n\nI'm inclined to back-patch, since nobody has complained about this\nin 14beta3. Thoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ%3DuWWWfQ%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 18 Sep 2021 13:06:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "So, about that cast-to-typmod-minus-one business"
},
{
    "msg_contents": "On Sat, Sep 18, 2021 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> We had left it as an open issue whether or not to risk back-patching\n> 5c056b0c2 into stable branches [1]. While updating the v14 release notes,\n> I realized that we can't put off that decision any longer, because we\n> have to decide now whether to document that as a new behavior in v14.\n>\n> I'm inclined to back-patch, since nobody has complained about this\n> in 14beta3. Thoughts?\n>\n> regards, tom lane\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ%3DuWWWfQ%40mail.gmail.com\n>\n>\n> Hi,\n+1 to backporting.\n\nThanks\n",
"msg_date": "Sat, 18 Sep 2021 10:59:38 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: So, about that cast-to-typmod-minus-one business"
},
{
    "msg_contents": "On Sat, 18 Sep 2021, 18:06 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n>\n> I'm inclined to back-patch\n>\n\n+1\n\nRegards,\nDean\n",
"msg_date": "Sat, 18 Sep 2021 22:44:42 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: So, about that cast-to-typmod-minus-one business"
},
{
"msg_contents": "+1 backporting\n\nTony\n\nOn 2021/9/19 01:06, Tom Lane wrote:\n> We had left it as an open issue whether or not to risk back-patching\n> 5c056b0c2 into stable branches [1]. While updating the v14 release notes,\n> I realized that we can't put off that decision any longer, because we\n> have to decide now whether to document that as a new behavior in v14.\n>\n> I'm inclined to back-patch, since nobody has complained about this\n> in 14beta3. Thoughts?\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/flat/CABNQVagu3bZGqiTjb31a8D5Od3fUMs7Oh3gmZMQZVHZ%3DuWWWfQ%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 19 Sep 2021 09:43:22 +0800",
"msg_from": "DEVOPS_WwIT <devops@ww-it.cn>",
"msg_from_op": false,
"msg_subject": "Re: So, about that cast-to-typmod-minus-one business"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> On Sat, 18 Sep 2021, 18:06 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>> I'm inclined to back-patch\n\n> +1\n\nDone.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Sep 2021 11:49:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: So, about that cast-to-typmod-minus-one business"
}
] |
[
{
    "msg_contents": "In reviewing Paul's application period patch, I noticed some very curious\nsyntax in the test cases. I learned that Paul is equally confused by it,\nand has asked about it in his PgCon 2020 presentation\n\n> SELECT '2018-03-04' AT TIME ZONE INTERVAL '2' HOUR TO MINUTE;\n timezone\n---------------------\n 2018-03-04 05:02:00\n(1 row)\n\nSearching around, I found several instances of this syntax being used\n[1][2][3], but with one important clarifying difference: the expected\nsyntax was\n\n> SELECT '2018-03-04' AT TIME ZONE INTERVAL '2:00' HOUR TO MINUTE;\n timezone\n---------------------\n 2018-03-04 07:00:00\n(1 row)\n\nNow I understand that the user probably meant to do this:\n\n# SELECT '2018-03-04' AT TIME ZONE INTERVAL '2' HOUR;\n timezone\n---------------------\n 2018-03-04 07:00:00\n(1 row)\n\nBut none of this is in our own documentation.\n\nBefore I write a patch to add this to the documentation, I'm curious what\nlevel of sloppiness we should tolerate in the interval calculation. Should\nwe enforce the time string to actually conform to the format laid out in\nthe X TO Y spec? If we don't require that, is it correct to say that the\nvalues will be filled from order of least significance to greatest?\n\n[1]\nhttps://www.vertica.com/docs/9.2.x/HTML/Content/Authoring/SQLReferenceManual/DataTypes/Date-Time/TIMESTAMPATTIMEZONE.htm\n[2]\nhttps://docs.teradata.com/r/kmuOwjp1zEYg98JsB8fu_A/aWY6mGNJ5CYJlSDrvgDQag\n[3]\nhttps://community.snowflake.com/s/question/0D50Z00009AqIaSSAV/is-it-possible-to-add-an-interval-of-5-hours-to-the-session-timezone-\n",
"msg_date": "Sat, 18 Sep 2021 21:28:49 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Undocumented AT TIME ZONE INTERVAL syntax"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n>> SELECT '2018-03-04' AT TIME ZONE INTERVAL '2' HOUR TO MINUTE;\n\n> ... But none of this is in our own documentation.\n\nThat's not entirely true. [1] says\n\n When writing an interval constant with a fields specification, or when\n assigning a string to an interval column that was defined with a\n fields specification, the interpretation of unmarked quantities\n depends on the fields. For example INTERVAL '1' YEAR is read as 1\n year, whereas INTERVAL '1' means 1 second. Also, field values “to the\n right” of the least significant field allowed by the fields\n specification are silently discarded. For example, writing INTERVAL '1\n day 2:03:04' HOUR TO MINUTE results in dropping the seconds field, but\n not the day field.\n\nBut I'd certainly agree that a couple of examples are not a specification.\nLooking at DecodeInterval, it looks like the rule is that unmarked or\nambiguous fields are matched to the lowest field mentioned by the typmod\nrestriction. Thus\n\nregression=# SELECT INTERVAL '4:2' HOUR TO MINUTE;\n interval \n----------\n 04:02:00\n(1 row)\n\nregression=# SELECT INTERVAL '4:2' MINUTE TO SECOND;\n interval \n----------\n 00:04:02\n(1 row)\n\nIf you wanted to improve this para it'd be cool with me.\n\n> Before I write a patch to add this to the documentation, I'm curious what\n> level of sloppiness we should tolerate in the interval calculation. 
Should\n> we enforce the time string to actually conform to the format laid out in\n> the X TO Y spec?\n\nWe have never thrown away high-order fields:\n\nregression=# SELECT INTERVAL '1 day 4:2' MINUTE TO SECOND;\n interval \n----------------\n 1 day 00:04:02\n(1 row)\n\nAFAICS we consider that the typmod provides a rounding rule, not a\nlicense to transform the value to something entirely different.\n\nI'm not sure what the SQL spec says here, but I'd be real hesitant to\nchange the behavior of cases that we've accepted for twenty-plus\nyears, unless they're just obviously insane. Which these aren't IMO.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-INTERVAL-INPUT\n\n\n",
"msg_date": "Sun, 19 Sep 2021 10:56:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented AT TIME ZONE INTERVAL syntax"
},
{
"msg_contents": "On Sun, Sep 19, 2021 at 10:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> >> SELECT '2018-03-04' AT TIME ZONE INTERVAL '2' HOUR TO MINUTE;\n>\n> > ... But none of this is in our own documentation.\n>\n> That's not entirely true. [1] says\n>\n> When writing an interval constant with a fields specification, or when\n> assigning a string to an interval column that was defined with a\n> fields specification, the interpretation of unmarked quantities\n> depends on the fields. For example INTERVAL '1' YEAR is read as 1\n> year, whereas INTERVAL '1' means 1 second. Also, field values “to the\n> right” of the least significant field allowed by the fields\n> specification are silently discarded. For example, writing INTERVAL '1\n> day 2:03:04' HOUR TO MINUTE results in dropping the seconds field, but\n> not the day field.\n>\n\nThat text addresses the case of the unadorned string (seconds) and the\noverflow case (more string values than places to put them), but doesn't\nreally address the underflow.\n\n>\n> But I'd certainly agree that a couple of examples are not a specification.\n> Looking at DecodeInterval, it looks like the rule is that unmarked or\n> ambiguous fields are matched to the lowest field mentioned by the typmod\n> restriction. Thus\n>\n> regression=# SELECT INTERVAL '4:2' HOUR TO MINUTE;\n> interval\n> ----------\n> 04:02:00\n> (1 row)\n>\n> regression=# SELECT INTERVAL '4:2' MINUTE TO SECOND;\n> interval\n> ----------\n> 00:04:02\n> (1 row)\n\n# SELECT INTERVAL '04:02' HOUR TO SECOND;\n interval\n----------\n 04:02:00\n\nThis result was a bit unexpected, and the existing documentation doesn't\naddress underflow cases like this.\n\nSo, restating all this to get ready to document it, the rule seems to be:\n\n1. Integer strings with no spaces or colons will always apply to the\nrightmost end of the restriction given, lack of a restriction means seconds.\n\nExample:\n\n# SELECT INTERVAL '2' HOUR TO SECOND, INTERVAL '2' HOUR TO MINUTE, INTERVAL\n'2';\n interval | interval | interval\n----------+----------+----------\n 00:00:02 | 00:02:00 | 00:00:02\n(1 row)\n\n2. Strings with time context (space separator for days, : for everything\nelse) will apply starting with the leftmost part of the spec that fits,\ncontinuing to the right until string values are exhausted.\n\nExamples:\n\n# SELECT INTERVAL '4:2' HOUR TO SECOND, INTERVAL '4:2' DAY TO SECOND;\n interval | interval\n----------+----------\n 04:02:00 | 04:02:00\n(1 row)\n\n> If you wanted to improve this para it'd be cool with me.\n>\n\nI think people's eyes are naturally drawn to the example tables, and\nbecause the rules for handling string underflow are subtle, I think a few\nconcrete examples are the way to go.\n\n>\n> > Before I write a patch to add this to the documentation, I'm curious what\n> > level of sloppiness we should tolerate in the interval calculation.\n> Should\n> > we enforce the time string to actually conform to the format laid out in\n> > the X TO Y spec?\n>\n> We have never thrown away high-order fields:\n>\n\nAnd with the above I'm now clear that we're fine with the existing behavior\nfor underflow.\n\n>\n> I'm not sure what the SQL spec says here, but I'd be real hesitant to\n> change the behavior of cases that we've accepted for twenty-plus\n> years, unless they're just obviously insane. Which these aren't IMO.\n>\n\nYeah, I really didn't expect to change the behavior, but wanted to make\nsure that the existing behavior was understood. I'll whip up a patch.",
"msg_date": "Sun, 19 Sep 2021 17:01:49 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Undocumented AT TIME ZONE INTERVAL syntax"
},
{
"msg_contents": ">\n>\n>> Yeah, I really didn't expect to change the behavior, but wanted to make\n> sure that the existing behavior was understood. I'll whip up a patch.\n>\n\nAttached is an attempt at an explanation of the edge cases I was\nencountering, as well as some examples. If nothing else, the examples will\ndraw eyes and searches to the explanations that were already there.",
"msg_date": "Sun, 19 Sep 2021 23:35:57 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Undocumented AT TIME ZONE INTERVAL syntax"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> Attached is an attempt at an explanation of the edge cases I was\n> encountering, as well as some examples. If nothing else, the examples will\n> draw eyes and searches to the explanations that were already there.\n\nI looked this over and have a few thoughts:\n\n* I don't think your explanation of the behavior of colon-separated\ntimes is quite correct; for example, it doesn't correctly describe\nthis:\n\nregression=# select INTERVAL '2:03:04' minute to second;\n interval \n----------\n 02:03:04\n(1 row)\n\nI think the actual rule is that hh:mm:ss is always interpreted that\nway regardless of the typmod (though we may then drop low-order\nfields if the typmod says to). Two colon-separated numbers are\ninterpreted as hh:mm by default, but as mm:ss if the typmod is\nexactly \"minute to second\". (This might work better in a separate\npara; the one you've modified here is mostly about what we do with\nunmarked quantities, but the use of colons makes these numbers\nnot unmarked.)\n\n* I'm not sure I would bother with examples for half-broken formats\nlike \"2:\". People who really want to know about that can experiment,\nwhile for the rest of us it seems like adding confusion.\n\n* In the same vein, I think your 0002 example adds way more confusion\nthan illumination. Maybe better would be a less contrived offset,\nsay\n\n# SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40+00' AT TIME ZONE INTERVAL '3:00:00';\n timezone \n---------------------\n 2001-02-16 23:38:40\n(1 row)\n\nwhich could be put after the second example and glossed as \"The third\nexample rotates a timestamp specified in UTC to the zone three hours\neast of Greenwich, using a constant interval as the time zone\nspecification\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 12 Nov 2021 17:15:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Undocumented AT TIME ZONE INTERVAL syntax"
}
] |
[
{
"msg_contents": "I have been trying to get a reply or interest in either updating\nPostgreSQL to support the following, or for there to be a public,\nfree for any use Extension put out there, that will support the following:\n\n\n############################################################\n# High Precision Numeric and Elementary Functions Support. #\n############################################################\n\n-Integer (HPZ) Z, or Rational Decimal Q (HPQ) numbers support.\n\n-Recurring Rational Numbers and recurring Irrational Numbers can be appropriately\ntruncated, by a precision value, to obtain an approximating value. The latter\nphenomenon is a finite Rational value, possibly with integer and/or decimal parts at the\nsame time. These may be positive or negative, standard number line, values.\n\n-Forward and Inverse operations accuracy, withstanding truncation,\ncan be maintained by storing and normalising the expression behind a value,\n(or just include pointers to the value) and displaying the evaluation.\nThis system will uphold any precision.\n\n-A defaulting number of significant figures (precision), in one copy of one field\nin memory that is held in there, as a filter, for all HPZ and HPQ numbers.\nFor example, 20 significant figures, as a default, to start by.\n\n-A function that varies the precision filter for every HPZ and HPQ number at once.\n\n-Value assignment to a typed variable by =.\n\n-Base 10 Arithmetic and comparisons support on Base 10 Integer and Rational Decimal numbers.\n+,-,*,/,%,^,=,!=,<>,>,<,>=,<=, ::\nThese include full finite division and integer only division, with no remainder.\nThe defaulting ability of numbers data in lower types to automatically be cast\nup to HPZ or HPQ, where specified and occurring in PostgreSQL code.\n\n-Reified support with broader syntax and operations within PostgreSQL, in all the obvious\nand less than obvious places. Tables and related phenomena, Indexing, the Window type,\nRecord type, direct compatibility with Aggregate and Window Functions, the Recursive keyword,\nare all parts of a larger subset that may re-interact with HPZ or HPQ.\n\n#############################################################################################\n\n-Mathematical and Operational functions support:\n\nprecision(BIGINT input)\n\ncast(TEXT as HPZ) returns HPZ;\ncast(TEXT as HPQ) returns HPQ;\ncast(HPQ as TEXT) returns TEXT;\ncast(HPZ as TEXT) returns TEXT;\ncast(HPZ as HPQ) returns HPQ;\ncast(HPQ as HPZ) returns HPZ;\ncast(HPZ as SMALLINT) returns SMALLINT;\ncast(SMALLINT as HPQ) returns HPZ;\ncast(HPZ as INTEGER) returns INTEGER;\ncast(INTEGER as HPZ) returns HPZ;\ncast(HPZ as BIGINT) returns BIGINT;\ncast(BIGINT as HPZ) returns HPZ;\ncast(HPQ as REAL) returns REAL;\ncast(REAL as HPQ) returns HPQ\ncast(DOUBLE PRECISION as HPQ) returns HPQ;\ncast(HPQ as DOUBLE PRECISION) returns DOUBLE PRECISION;\ncast(HPQ as DECIMAL) returns DECIMAL;\ncast(DECIMAL as HPQ) returns HPQ;\n\nsign(HPQ input) returns HPQ;\nabs(HPQ input) returns HPQ;\nceil(HPQ input) returns HPQ;\nfloor(HPQ input) returns HPQ;\nround(HPQ input) returns HPZ;\npi() returns HPQ;\ne() returns HPQ;\npower(HPQ base, HPQ exponent) returns HPQ;\nsqrt(HPQ input) returns HPQ\nnroot(HPZ theroot, HPQ input) returns HPQ;\nlog10(HPQ input) returns HPQ;\nloge(HPQ input) returns HPQ;\nlog2(HPQ input) returns HPQ;\nfactorial(HPZ input) returns HPZ;\nnCr(HPZ objects, HPZ selectionSize) returns HPZ\nnPr(HPZ objects, HPZ selectionSize) returns HPZ\n\ndegrees(HPQ input) returns HPQ;\nradians(HPQ input) returns HPQ;\nsind(HPQ input) returns HPQ;\ncosd(HPQ input) returns HPQ;\ntand(HPQ input) returns HPQ;\nasind(HPQ input) returns HPQ;\nacosd(HPQ input) returns HPQ;\natand(HPQ input) returns HPQ;\nsinr(HPQ input) returns HPQ;\ncosr(HPQ input) returns HPQ;\ntanr(HPQ input) returns HPQ;\nasinr(HPQ input) returns HPQ;\nacosr(HPQ input) returns HPQ;\natanr(HPQ input) returns HPQ;\n\n##########################################################################################\n\n-Informative articles on all these things exist at:\nComparison Operators: https://en.wikipedia.org/wiki/Relational_operator\nFloor and Ceiling Functions: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions\nArithmetic Operations: https://en.wikipedia.org/wiki/Arithmetic\nInteger Division: https://en.wikipedia.org/wiki/Division_(mathematics)#Of_integers\nModulus Operation: https://en.wikipedia.org/wiki/Modulo_operation\nRounding (Commercial Rounding): https://en.wikipedia.org/wiki/Rounding\nFactorial Operation: https://en.wikipedia.org/wiki/Factorial\nDegrees: https://en.wikipedia.org/wiki/Degree_(angle)\nRadians: https://en.wikipedia.org/wiki/Radian\nElementary Functions: https://en.wikipedia.org/wiki/Elementary_function\n\n-Ease of installation support. Particularly for Windows and Linux. *.exe, *.msi or *.rpm, *.deb, *.bin installers.\nWith a PostgreSQL standard installation. Installation and Activation instructions included.\n\nThe following chart could be used to help test trigonometry outputs, under\nFurther Consideration of the Unit Circle:\nhttps://courses.lumenlearning.com/boundless-algebra/chapter/trigonometric-functions-and-the-unit-circle/\n#############################################################################################",
"msg_date": "Sun, 19 Sep 2021 05:42:32 +0000",
"msg_from": "A Z <poweruserm@live.com.au>",
"msg_from_op": true,
"msg_subject": "Improved PostgreSQL Mathematics Support."
},
{
"msg_contents": "Dear PostgreSQL Hackers,\n\nI have been trying to get a reply or interest in either updating\nPostgreSQL to support High Precision mathematical types,\nwith arithmetic and elementary functions support, or release\nof an Extension which has accomplished the same thing.\n\nIs there someone on this email list which could please have a look\nat the specifications that I have posted, and reply and get back to\nme? I would be more than thrilled if something could be done\nto improve PostgreSQL in this area.\n\nYours Sincerely,\n\nZ.M.",
"msg_date": "Mon, 20 Sep 2021 01:40:18 +0000",
"msg_from": "A Z <poweruserm@live.com.au>",
"msg_from_op": true,
"msg_subject": "Improved PostgreSQL Mathematics Support."
},
{
"msg_contents": "On Sunday, September 19, 2021, A Z <poweruserm@live.com.au> wrote:\n\n>\n> Is there someone on this email list which could please have a look\n> at the specifications that I have posted, and reply and get back to\n> me?\n>\n\nGiven the number of posts you’ve made I would have to conclude that the\nanswer to that question is no. There is presently no interest in this from\nthe people who read these mailing lists.\n\nDavid J.",
"msg_date": "Sun, 19 Sep 2021 18:48:06 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improved PostgreSQL Mathematics Support."
},
{
"msg_contents": "Your request is essentially to wrap the GMP library into native types in Postgres. This can be done as custom types and adding postgres extensions as you suggested originally. The work to be done is straightforward, but there is a lot of work so it would take a while to implement. The big integer part is rather simple; while there is some work to be done there, the fractional part will take significantly longer (think testing edge cases), but it is doable if there is interest in implementing this.\n\n-The MuchPIR Team\n\nSent with [ProtonMail](https://protonmail.com/) Secure Email.\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Sunday, September 19th, 2021 at 9:40 PM, A Z <poweruserm@live.com.au> wrote:\n\n> Dear PostgreSQL Hackers,\n>\n> I have been trying to get a reply or interest in either updating\n> PostgreSQL to support High Precision mathematical types,\n> with arithmetic and elementary functions support, or release\n> of an Extension which has accomplished the same thing.\n>\n> Is there someone on this email list which could please have a look\n> at the specifications that I have posted, and reply and get back to\n> me? I would be more than thrilled if something could be done\n> to improve PostgreSQL in this area.\n>\n> Yours Sincerely,\n>\n> Z.M.",
"msg_date": "Mon, 20 Sep 2021 02:40:37 +0000",
"msg_from": "\"Private Information Retrieval(PIR)\" <postgresql-pir@pm.me>",
"msg_from_op": false,
"msg_subject": "Re: Improved PostgreSQL Mathematics Support."
}
] |
[
{
"msg_contents": "The return value of _bt_bottomupdel_pass() is advisory; it reports\n\"failure\" for a deletion pass that was just suboptimal (it rarely\nreports failure because it couldn't delete anything at all). Bottom-up\ndeletion preemptively falls back on a version-orientated deduplication\npass when it didn't quite delete as many items as it hoped to delete.\nThis policy avoids thrashing, particularly with low cardinality\nindexes, where the number of distinct TIDs per tableam/heapam block\ntends to drive when and how TIDs get deleted.\n\nI have noticed an unintended and undesirable interaction between this\nnew behavior and an older deduplication behavior: it's possible for\n_bt_bottomupdel_pass() to return false to trigger a preemptive\nversion-orientated deduplication pass that ends up using\ndeduplication's \"single value\" strategy. This is just contradictory on\nits face: a version-orientated deduplication pass tries to prevent a\nversion-driven page split altogether, whereas a single value strategy\ndeduplication pass is specifically supposed to set things up for an\nimminent page split (a split that uses nbtsplitloc.c's single value\nstrategy). Clearly we shouldn't prepare for a page split and try to\navoid a page split at the same time!\n\nThe practical consequence of this oversight is that leaf pages full of\nduplicates (all duplicates of the same single value) are currently\nmuch more likely to have a version-driven page split (from non-HOT\nupdates) than similar pages that have two or three distinct key\nvalues. Another undesirable consequence is that we'll waste cycles in\naffected cases; any future bottom-up index deletion passes will waste\ntime on the tuples that the intervening deduplication pass\ndeliberately declined to merge together (as any single value dedup\npass will). 
In other words, the heuristics described in comments above\n_bt_bottomupdel_finish_pending() can become confused by this\nmisbehavior (after an initial round of deletion + deduplication, in a\nlater round of deletion). This interaction is clearly a bug. It's easy\nto avoid.\n\nAttached patch fixes the bug by slightly narrowing the conditions\nunder which we'll consider if we should apply deduplication's single\nvalue strategy. We were already not even considering it with a unique\nindex, where it was always clear that this is only a\nversion-orientated deduplication pass. It seems natural to also check\nwhether or not we just had a \"failed\" call to _bt_bottomupdel_pass()\n-- this is a logical extension of what we do already. Barring\nobjections, I will apply this patch (and backpatch to Postgres 14) in\na few days.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 19 Sep 2021 19:47:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Unintended interaction between bottom-up deletion and deduplication's\n single value strategy"
},
{
"msg_contents": "On Sun, Sep 19, 2021 at 7:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch fixes the bug by slightly narrowing the conditions\n> under which we'll consider if we should apply deduplication's single\n> value strategy.\n\nPushed this fix a moment ago.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 21 Sep 2021 19:02:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Unintended interaction between bottom-up deletion and\n deduplication's single value strategy"
}
] |
[
{
"msg_contents": "Hi,\nI have been working on a patch for Postgres. I'm curious about the\nsuggested style for braces around if statements - some places don't include\nbraces around an if statement body, if the if statement body is a single\nline.\n\nThe \"Coding Conventions\" don't contain any advice here (although maybe they\nshould link to the \"Developer FAQ\"?)\nhttps://www.postgresql.org/docs/devel/source.html\n\nThe Postgres Wiki has a bit that says to \"See also the Formatting section\n<http://developer.postgresql.org/pgdocs/postgres/source-format.html> in the\ndocumentation,\" but that link 404's, so I'm not sure where it is supposed\nto go.\nhttps://wiki.postgresql.org/wiki/Developer_FAQ#What.27s_the_formatting_style_used_in_PostgreSQL_source_code.3F\n\nThanks,\nKevin\n\n--\nKevin Burke\nphone: 925-271-7005 | kevin.burke.dev",
"msg_date": "Sun, 19 Sep 2021 20:37:18 -0700",
"msg_from": "Kevin Burke <kevin@burke.dev>",
"msg_from_op": true,
"msg_subject": "Coding guidelines for braces + spaces - link 404's"
},
{
"msg_contents": "On 20.09.21 05:37, Kevin Burke wrote:\n> I have been working on a patch for Postgres. I'm curious about the \n> suggested style for braces around if statements - some places don't \n> include braces around an if statement body, if the if statement body is \n> a single line.\n\nGenerally, the braces should be omitted if the body is only a single \nline. An exception is sometimes made for symmetry if another branch \nuses more than one line. So\n\n if (foo)\n bar();\n\nbut\n\n if (foo)\n {\n bar();\n }\n else\n {\n baz();\n qux();\n }\n\n\n",
"msg_date": "Mon, 20 Sep 2021 13:48:29 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Coding guidelines for braces + spaces - link 404's"
},
{
"msg_contents": "Kevin Burke <kevin@burke.dev> writes:\n> The Postgres Wiki has a bit that says to \"See also the Formatting section\n> <http://developer.postgresql.org/pgdocs/postgres/source-format.html> in the\n> documentation,\" but that link 404's, so I'm not sure where it is supposed\n> to go.\n\nObsolete link, evidently. It should point to\n\nhttps://www.postgresql.org/docs/devel/source-format.html\n\nWill fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Sep 2021 10:15:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Coding guidelines for braces + spaces - link 404's"
}
] |
[
{
"msg_contents": "While testing a patch I fat-fingered a CREATE DATABASE statement by tab\ncompleting *after* the semicolon, with no space between the objname and\nsemicolon. The below options were presented, which at this point aren't really\napplicable:\n\ndb=# create database foo;\nALLOW_CONNECTIONS ENCODING LC_COLLATE LOCALE TABLESPACE\nCONNECTION LIMIT IS_TEMPLATE LC_CTYPE OWNER TEMPLATE\n\nDROP DATABASE has a similar tab completion which makes about as much sense:\n\ndb=# drop database foo;WITH (\n\nChecking prev_wd for not ending with ';' as per the attached makes \"objname;\"\nbehave like \"objname ;\". Is there a reason for not doing that which I'm\nmissing? I didn't check for others, but if this seems reasonable I'll go\nthrough to find any other similar cases.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 20 Sep 2021 15:06:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "psql: tab completion differs on semicolon placement"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> While testing a patch I fat-fingered a CREATE DATABASE statement by tab\n> completing *after* the semicolon, with no space between the objname and\n> semicolon. The below options were presented, which at this point aren't really\n> applicable:\n>\n> db=# create database foo;\n> ALLOW_CONNECTIONS ENCODING LC_COLLATE LOCALE TABLESPACE\n> CONNECTION LIMIT IS_TEMPLATE LC_CTYPE OWNER TEMPLATE\n>\n> DROP DATABASE has a similar tab completion which makes about as much sense:\n>\n> db=# drop database foo;WITH (\n>\n> Checking prev_wd for not ending with ';' as per the attached makes \"objname;\"\n> behave like \"objname ;\". Is there a reason for not doing that which I'm\n> missing? I didn't check for others, but if this seems reasonable I'll go\n> through to find any other similar cases.\n\nThe same applies to any completion after a MatchAny that ends in a any\nof the WORD_BREAKS characters (except whitespace and () which are\nhandled specially).\n\n#define WORD_BREAKS \"\\t\\n@$><=;|&{() \"\n\nIMO a fix should be more principled than just special-casing semicolon\nand CREATE TABLE. Maybe get_previous_words() should stop when it sees\nan unquoted semicolon?\n\n- ilmari\n\n\n",
"msg_date": "Mon, 20 Sep 2021 20:26:51 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: tab completion differs on semicolon placement"
},
{
"msg_contents": "> On 20 Sep 2021, at 21:26, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> IMO a fix should be more principled than just special-casing semicolon\n> and CREATE TABLE. Maybe get_previous_words() should stop when it sees\n> an unquoted semicolon?\n\nAgreed, something along those lines makes sense. I will familiarize myself\nwith this file (which until today has been a blank spot) and will see what I\ncan come up with.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 20 Sep 2021 22:58:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: psql: tab completion differs on semicolon placement"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 08:26:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n> > While testing a patch I fat-fingered a CREATE DATABASE statement by tab\n> > completing *after* the semicolon, with no space between the objname and\n> > semicolon. The below options were presented, which at this point aren't really\n> > applicable:\n> >\n> > db=# create database foo;\n> > ALLOW_CONNECTIONS ENCODING LC_COLLATE LOCALE TABLESPACE\n> > CONNECTION LIMIT IS_TEMPLATE LC_CTYPE OWNER TEMPLATE\n> >\n> > DROP DATABASE has a similar tab completion which makes about as much sense:\n> >\n> > db=# drop database foo;WITH (\n> >\n> > Checking prev_wd for not ending with ';' as per the attached makes \"objname;\"\n> > behave like \"objname ;\". Is there a reason for not doing that which I'm\n> > missing? I didn't check for others, but if this seems reasonable I'll go\n> > through to find any other similar cases.\n> \n> The same applies to any completion after a MatchAny that ends in a any\n> of the WORD_BREAKS characters (except whitespace and () which are\n> handled specially).\n> \n> #define WORD_BREAKS \"\\t\\n@$><=;|&{() \"\n> \n> IMO a fix should be more principled than just special-casing semicolon\n> and CREATE TABLE. Maybe get_previous_words() should stop when it sees\n> an unquoted semicolon?\n\nIs there some reason get_previous_words() shouldn't stop for\neverything that's WORD_BREAKS? If not, making that the test might make the\ngeneral rule a little simpler to write, and if WORD_BREAKS ever\nchanged, for example to include all space, or all breaking space, or\nsimilar, the consequences would at least not propagate through\nseemingly unrelated code.\n\nAt the moment, get_previous_words() does look for everything in\nWORD_BREAKS, and then accounts for double quotes (\") and then does\nsomething clever to account for double quotes and the quoting behavior\nthat doubling them (\"\") accomplishes. 
Anyhow, that looks like it\nshould work in this case, but clearly it's not.\n\nWould it be less error prone to do these checks and maybe push or pop\none or more stacks holding state as each character came in? I suspect\nthe overhead would be unnoticeable even on the slowest* client.\n\nBest,\nDavid.\n\n* One possible exception would be a gigantic paste, a case where psql\n can be prevented from attempting tab completion, although the\n prevention measures involve a pretty obscure terminal setting:\n https://cirw.in/blog/bracketed-paste\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 20 Sep 2021 23:04:01 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: tab completion differs on semicolon placement"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n\n> On Mon, Sep 20, 2021 at 08:26:51PM +0100, Dagfinn Ilmari Mannsåker wrote:\n>\n>> The same applies to any completion after a MatchAny that ends in a any\n>> of the WORD_BREAKS characters (except whitespace and () which are\n>> handled specially).\n>> \n>> #define WORD_BREAKS \"\\t\\n@$><=;|&{() \"\n>> \n>> IMO a fix should be more principled than just special-casing semicolon\n>> and CREATE TABLE. Maybe get_previous_words() should stop when it sees\n>> an unquoted semicolon?\n>\n> Is there some reason get_previous_words() shouldn't stop for\n> everything that's WORD_BREAKS? If not, making that the test might make the\n> general rule a little simpler to write, and if WORD_BREAKS ever\n> changed, for example to include all space, or all breaking space, or\n> similar, the consequences would at least not propagate through\n> seemingly unrelated code.\n\nBy \"stopping\" I meant ignoring everything before the last semicolon when\nsplitting the buffer into words, i.e. not putting them into the\nprevious_words array, so they're not considered by the\n(Tail|Head)?Matches(CS)? macros. WORD_BREAKS is the list of characters\nused for splitting the input buffer into the previous_words array, so it\nwould need to keep going past those, or you'd only be able to match the\nlast word when tab completing, rendering the entire exercise pointless.\n\n> At the moment, get_previous_words() does look for everything in\n> WORD_BREAKS, and then accounts for double quotes (\") and then does\n> something clever to account for double quotes and the quoting behavior\n> that doubling them (\"\") accomplishes. Anyhow, that looks like it\n> should work in this case, but clearly it's not.\n\nWORD_BREAK characters inside double-quoted identifiers are handled\ncorrectly, but only after you've typed the closing quote. 
If you have\nan ambiguous prefix that contains a WORD_BREAK character, you can't\ntab-complete the rest:\n\nilmari@[local]:5432 ~=# drop table \"foo<tab><tab>\n\"foo$bar\" \"foo$zot\" \"foo-bar\" \"foo-zot\"\nilmari@[local]:5432 ~=# drop table \"foo-<tab><tab>\n\"foo-bar\" \"foo-zot\"\nilmari@[local]:5432 ~=# drop table \"foo$<tab><tab>\n\nilmari@[local]:5432 ~=# drop table \"foo$bar\" <tab><tab>\ncascade restrict\n\nTangentially, I would argue that $ shouldn't be a WORD_BREAK character,\nsince it's valid in unquoted identifiers (except at the start, just like\nnumbers). But you do need to quote such identifiers when\ntab-completing, since quote_ident() quotes anything that's not all\nlowercase letters, underscores and numbers.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 21 Sep 2021 11:25:08 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: psql: tab completion differs on semicolon placement"
}
] |
[
{
"msg_contents": "One thing is needed and is not solved yet is delayed replication on logical\nreplication. Would be interesting to document it on Restrictions page,\nright ?\n\nregards,\nMarcos",
"msg_date": "Mon, 20 Sep 2021 13:16:39 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "logical replication restrictions"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 9:47 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> One thing is needed and is not solved yet is delayed replication on logical replication. Would be interesting to document it on Restrictions page, right ?\n>\n\nWhat do you mean by delayed replication? Is it that by default we send\nthe transactions at commit?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Sep 2021 08:14:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "No, I´m talking about that configuration you can have on standby servers\nrecovery_min_apply_delay = '8h'\n\nRegards,\n\n\n\n\nOn Mon, Sep 20, 2021 at 23:44, Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Mon, Sep 20, 2021 at 9:47 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n> >\n> > One thing is needed and is not solved yet is delayed replication on\n> logical replication. Would be interesting to document it on Restrictions\n> page, right ?\n> >\n>\n> What do you mean by delayed replication? Is it that by default we send\n> the transactions at commit?\n>\n> --\n> With Regards,\n> Amit Kapila.\n>",
"msg_date": "Tue, 21 Sep 2021 07:51:14 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> No, I´m talking about that configuration you can have on standby servers\n> recovery_min_apply_delay = '8h'\n>\n>\noh okay, I think this can be useful in some cases where we want to avoid\ndata loss similar to its use for physical standby. For example, if the user\nhas by mistake truncated the table (or deleted some required data) on the\npublisher, we can always it from the subscriber if we have such a feature.\n\nHaving said that, I am not sure if we can call it a restriction. It is more\nof a TODO kind of thing. It doesn't sound advisable to me to keep growing\nthe current Restrictions page [1].\n\n[1] - https://wiki.postgresql.org/wiki/Todo\n[2] -\nhttps://www.postgresql.org/docs/devel/logical-replication-restrictions.html\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 22 Sep 2021 09:48:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": ">\n> oh okay, I think this can be useful in some cases where we want to avoid\n> data loss similar to its use for physical standby. For example, if the user\n> has by mistake truncated the table (or deleted some required data) on the\n> publisher, we can always it from the subscriber if we have such a feature.\n>\n> Having said that, I am not sure if we can call it a restriction. It is\n> more of a TODO kind of thing. It doesn't sound advisable to me to keep\n> growing the current Restrictions page\n>\n\nOK, so, could you guide me where to start on this feature ?\n\nregards,\nMarcos",
"msg_date": "Wed, 22 Sep 2021 08:56:09 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Sep 22, 2021, at 1:18 AM, Amit Kapila wrote:\n> On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>> No, I´m talking about that configuration you can have on standby servers\n>> recovery_min_apply_delay = '8h'\n>> \n> \n> oh okay, I think this can be useful in some cases where we want to avoid data loss similar to its use for physical standby. For example, if the user has by mistake truncated the table (or deleted some required data) on the publisher, we can always it from the subscriber if we have such a feature.\n> \n> Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].\nIt is a new feature. pglogical supports it and it is useful for delayed\nsecondary server and if, for some business reason, you have to delay when data\nis available. There might be other use cases but these are the ones I regularly\nheard from customers.\n\nBTW, I have a WIP patch for this feature. I didn't have enough time to post it\nbecause it lacks documentation and tests. I'm planning to do it as soon as this\nCF ends.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 22 Sep 2021 13:57:29 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": ">\n> No, I´m talking about that configuration you can have on standby servers\n> recovery_min_apply_delay = '8h'\n>\n>\n> oh okay, I think this can be useful in some cases where we want to avoid\n> data loss similar to its use for physical standby. For example, if the user\n> has by mistake truncated the table (or deleted some required data) on the\n> publisher, we can always it from the subscriber if we have such a feature.\n>\n> Having said that, I am not sure if we can call it a restriction. It is\n> more of a TODO kind of thing. It doesn't sound advisable to me to keep\n> growing the current Restrictions page [1].\n>\n> It is a new feature. pglogical supports it and it is useful for delayed\n> secondary server and if, for some business reason, you have to delay when\n> data\n> is available. There might be other use cases but these are the ones I\n> regularly\n> heard from customers.\n>\n> BTW, I have a WIP patch for this feature. I didn't have enough time to\n> post it\n> because it lacks documentation and tests. I'm planning to do it as soon as\n> this\n> CF ends.\n>\n> Fine, let me know if you need any help, testing, for example.",
"msg_date": "Wed, 22 Sep 2021 14:22:20 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 10:27 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Sep 22, 2021, at 1:18 AM, Amit Kapila wrote:\n>\n> On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>\n> No, I´m talking about that configuration you can have on standby servers\n> recovery_min_apply_delay = '8h'\n>\n>\n> oh okay, I think this can be useful in some cases where we want to avoid data loss similar to its use for physical standby. For example, if the user has by mistake truncated the table (or deleted some required data) on the publisher, we can always it from the subscriber if we have such a feature.\n>\n> Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].\n>\n> It is a new feature. pglogical supports it and it is useful for delayed\n> secondary server and if, for some business reason, you have to delay when data\n> is available.\n>\n\nWhat kind of reasons do you see where users prefer to delay except to\navoid data loss in the case where users unintentionally removed some\ndata from the primary?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 23 Sep 2021 11:23:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": ">\n> What kind of reasons do you see where users prefer to delay except to\n> avoid data loss in the case where users unintentionally removed some\n> data from the primary?\n>\n>\n> Debugging. Suppose I have a problem, but that problem occurs once a week\nor a month. When this problem occurs again a monitoring system sends me a\nmessage ... Hey, that problem occurred again. Then, as I configured my\nreplica to Delay = '30 min', I have time to connect to it and wait, record\nby record coming and see exactly what made that mistake.",
"msg_date": "Thu, 23 Sep 2021 07:22:59 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 6:18 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>\n>> No, I´m talking about that configuration you can have on standby servers\n>> recovery_min_apply_delay = '8h'\n>>\n>\n> oh okay, I think this can be useful in some cases where we want to avoid data loss similar to its use for physical standby. For example, if the user has by mistake truncated the table (or deleted some required data) on the publisher, we can always it from the subscriber if we have such a feature.\n>\n> Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].\n\nOne could argue that not having delayed apply *is* a restriction\ncompared to both physical replication and \"the original upstream\"\npg_logical.\n\nI think therefore it should be mentioned in \"Restrictions\" so people\nconsidering moving from physical streaming to pg_logical or just\ntrying to decide whether to use pg_logical are warned.\n\nAlso, the Restrictions page starts with \" These might be addressed in\nfuture releases.\" so there is no exclusivity of being either a\nrestriction or TODO.\n\n> [1] - https://wiki.postgresql.org/wiki/Todo\n> [2] - https://www.postgresql.org/docs/devel/logical-replication-restrictions.html\n\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.\n\n\n",
"msg_date": "Sat, 25 Sep 2021 21:31:55 +0200",
"msg_from": "Hannu Krosing <hannuk@google.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Sep 22, 2021, at 1:57 PM, Euler Taveira wrote:\n> On Wed, Sep 22, 2021, at 1:18 AM, Amit Kapila wrote:\n>> On Tue, Sep 21, 2021 at 4:21 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n>>> No, I´m talking about that configuration you can have on standby servers\n>>> recovery_min_apply_delay = '8h'\n>>> \n>> \n>> oh okay, I think this can be useful in some cases where we want to avoid data loss similar to its use for physical standby. For example, if the user has by mistake truncated the table (or deleted some required data) on the publisher, we can always it from the subscriber if we have such a feature.\n>> \n>> Having said that, I am not sure if we can call it a restriction. It is more of a TODO kind of thing. It doesn't sound advisable to me to keep growing the current Restrictions page [1].\n> It is a new feature. pglogical supports it and it is useful for delayed\n> secondary server and if, for some business reason, you have to delay when data\n> is available. There might be other use cases but these are the ones I regularly\n> heard from customers.\n> \n> BTW, I have a WIP patch for this feature. I didn't have enough time to post it\n> because it lacks documentation and tests. I'm planning to do it as soon as this\n> CF ends.\nLong time, no patch. Here it is. I will provide documentation in the next\nversion. I would appreciate some feedback.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 28 Feb 2022 21:18:31 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tuesday, March 1, 2022 9:19 AM Euler Taveira <euler@eulerto.com> wrote:\r\n> Long time, no patch. Here it is. I will provide documentation in the next\r\n> \r\n> version. I would appreciate some feedback.\r\nHi, thank you for posting the patch !\r\n\r\n\r\n$ git am v1-0001-Time-delayed-logical-replication-subscriber.patch\r\n\r\nApplying: Time-delayed logical replication subscriber\r\nerror: patch failed: src/backend/catalog/system_views.sql:1261\r\nerror: src/backend/catalog/system_views.sql: patch does not apply\r\n\r\n\r\nFYI, by one recent commit(7a85073), the HEAD redesigned pg_stat_subscription_workers.\r\nThus, the blow change can't be applied. Could you please rebase v1 ?\r\n\r\n\r\ndiff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql\r\nindex 3cb69b1f87..1cc0d86f2e 100644\r\n--- a/src/backend/catalog/system_views.sql\r\n+++ b/src/backend/catalog/system_views.sql\r\n@@ -1261,7 +1261,8 @@ REVOKE ALL ON pg_replication_origin_status FROM public;\r\n -- All columns of pg_subscription except subconninfo are publicly readable.\r\n REVOKE ALL ON pg_subscription FROM public;\r\n GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\r\n- substream, subtwophasestate, subslotname, subsynccommit, subpublications)\r\n+ substream, subtwophasestate, subslotname, subsynccommit,\r\n+ subapplydelay, subpublications)\r\n ON pg_subscription TO public;\r\n\r\n CREATE VIEW pg_stat_subscription_workers AS\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 1 Mar 2022 06:27:44 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "On Tue, Mar 1, 2022, at 3:27 AM, osumi.takamichi@fujitsu.com wrote:\n> $ git am v1-0001-Time-delayed-logical-replication-subscriber.patch\nI generally use -3 to fall back on 3-way merge. Doesn't it work for you?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 01 Mar 2022 20:54:16 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wednesday, March 2, 2022 8:54 AM Euler Taveira <euler@eulerto.com> wrote:\n> On Tue, Mar 1, 2022, at 3:27 AM, osumi.takamichi@fujitsu.com\n> <mailto:osumi.takamichi@fujitsu.com> wrote:\n> \n> \n> \t$ git am v1-0001-Time-delayed-logical-replication-subscriber.patch\n> \n> \n> I generally use -3 to fall back on 3-way merge. Doesn't it work for you?\nIt did. Excuse me for making noises.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 2 Mar 2022 01:49:01 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:\n> Long time, no patch. Here it is. I will provide documentation in the next\n> version. I would appreciate some feedback.\nThis patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I\nrebased it.\n\nI added documentation that explains how this parameter works. I decided to\nrename the parameter from apply_delay to min_apply_delay to use the same\nterminology from the physical replication. IMO the new name seems clear that\nthere isn't a guarantee that we are always x ms behind the publisher. Indeed,\ndue to processing/transferring the delay might be higher than the specified\ninterval.\n\nI refactored the way the delay is applied. The previous patch is only covering\na regular transaction. This new one also covers prepared transaction. The\ncurrent design intercepts the transaction during the first change (at the time\nit will start the transaction to apply the changes) and applies the delay\nbefore effectively starting the transaction. The previous patch uses\nbegin_replication_step() as this point. However, to support prepared\ntransactions I changed the apply_delay signature to accepts a timestamp\nparameter (because we use another variable to calculate the delay for prepared\ntransactions -- prepare_time). Hence, the apply_delay() moved to another places\n-- apply_handle_begin and apply_handle_begin_prepare().\n\nThe new code does not apply the delay in 2 situations:\n\n* STREAM START: streamed transactions might not have commit_time or\n prepare_time set. I'm afraid it is not possible to use the referred variables\n because at STREAM START time we don't have a transaction commit time. The\n protocol could provide a timestamp that indicates when it starts streaming\n the transaction then we could use it to apply the delay. Unfortunately, we\n don't have it. 
Having said that this new patch does not apply delay for\n streamed transactions.\n* non-transaction messages: the delay could be applied to non-transaction\n messages too. It is sent independently of the transaction that contains it.\n Since the logical replication does not send messages to the subscriber, this\n is not an issue. However, consumers that use pgoutput and wants to implement\n a delay will require it.\n\nI'm still looking for a way to support streamed transactions without much\nsurgery into the logical replication protocol.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Sun, 20 Mar 2022 21:40:40 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:\n> On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:\n> > Long time, no patch. Here it is. I will provide documentation in the next\n> > version. I would appreciate some feedback.\n> This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I\n> rebased it.\n\nThis fails tests, specifically it seems psql crashes:\nhttps://cirrus-ci.com/task/6592281292570624?logs=cores#L46\n\nMarked as waiting-on-author.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Mar 2022 18:04:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Mon, Mar 21, 2022, at 10:04 PM, Andres Freund wrote:\n> On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:\n> > On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:\n> > > Long time, no patch. Here it is. I will provide documentation in the next\n> > > version. I would appreciate some feedback.\n> > This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I\n> > rebased it.\n> \n> This fails tests, specifically it seems psql crashes:\n> https://cirrus-ci.com/task/6592281292570624?logs=cores#L46\nYeah. I forgot to test this patch with cassert before sending it. :( I didn't\nsend a new patch because there is another issue (with int128) that I'm\ncurrently reworking. I'll send another patch soon.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Mon, Mar 21, 2022, at 10:04 PM, Andres Freund wrote:On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:> On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:> > Long time, no patch. Here it is. I will provide documentation in the next> > version. I would appreciate some feedback.> This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I> rebased it.This fails tests, specifically it seems psql crashes:https://cirrus-ci.com/task/6592281292570624?logs=cores#L46Yeah. I forgot to test this patch with cassert before sending it. :( I didn'tsend a new patch because there is another issue (with int128) that I'mcurrently reworking. I'll send another patch soon.--Euler TaveiraEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 21 Mar 2022 22:09:58 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Mon, Mar 21, 2022, at 10:09 PM, Euler Taveira wrote:\n> On Mon, Mar 21, 2022, at 10:04 PM, Andres Freund wrote:\n>> On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:\n>> > On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:\n>> > > Long time, no patch. Here it is. I will provide documentation in the next\n>> > > version. I would appreciate some feedback.\n>> > This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I\n>> > rebased it.\n>> \n>> This fails tests, specifically it seems psql crashes:\n>> https://cirrus-ci.com/task/6592281292570624?logs=cores#L46\n> Yeah. I forgot to test this patch with cassert before sending it. :( I didn't\n> send a new patch because there is another issue (with int128) that I'm\n> currently reworking. I'll send another patch soon.\nHere is another version after rebasing it. In this version I fixed the psql\nissue and rewrote interval_to_ms function.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 23 Mar 2022 18:19:34 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Mar 23, 2022, at 6:19 PM, Euler Taveira wrote:\n> On Mon, Mar 21, 2022, at 10:09 PM, Euler Taveira wrote:\n>> On Mon, Mar 21, 2022, at 10:04 PM, Andres Freund wrote:\n>>> On 2022-03-20 21:40:40 -0300, Euler Taveira wrote:\n>>> > On Mon, Feb 28, 2022, at 9:18 PM, Euler Taveira wrote:\n>>> > > Long time, no patch. Here it is. I will provide documentation in the next\n>>> > > version. I would appreciate some feedback.\n>>> > This patch is broken since commit 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33. I\n>>> > rebased it.\n>>> \n>>> This fails tests, specifically it seems psql crashes:\n>>> https://cirrus-ci.com/task/6592281292570624?logs=cores#L46\n>> Yeah. I forgot to test this patch with cassert before sending it. :( I didn't\n>> send a new patch because there is another issue (with int128) that I'm\n>> currently reworking. I'll send another patch soon.\n> Here is another version after rebasing it. In this version I fixed the psql\n> issue and rewrote interval_to_ms function.\n From the previous version, I added support for streamed transactions. For\nstreamed transactions, the delay is applied during STREAM COMMIT message.\nThat's ok if we add the delay before applying the spooled messages. Hence, we\nguarantee that the delay is applied *before* each transaction. The same logic\nis applied to prepared transactions. The delay is introduced before applying\nthe spooled messages in STREAM PREPARE message.\n\nTests were refactored a bit. A test for streamed transaction was included too.\n\nVersion 4 is attached.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 04 Jul 2022 14:41:26 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "Here are some review comments for your v4-0001 patch. I hope they are\nuseful for you.\n\n======\n\n1. General\n\nThis thread name \"logical replication restrictions\" seems quite\nunrelated to the patch here. Maybe it's better to start a new thread\notherwise nobody is going to recognise what this thread is really\nabout.\n\n======\n\n2. Commit message\n\nSimilar to physical replication, a time-delayed copy of the data for\nlogical replication is useful for some scenarios (specially to fix\nerrors that might cause data loss).\n\n\"specially\" -> \"particularly\" ?\n\n~~~\n\n3. Commit message\n\nMaybe take some examples from the regression tests to show usage of\nthe new parameter\n\n======\n\n4. doc/src/sgml/catalogs.sgml\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subapplydelay</structfield> <type>int8</type>\n+ </para>\n+ <para>\n+ Delay the application of changes by a specified amount of time.\n+ </para></entry>\n+ </row>\n\nI think this should say that the units are ms.\n\n======\n\n5. doc/src/sgml/ref/create_subscription.sgml\n\n+ <varlistentry>\n+ <term><literal>min_apply_delay</literal> (<type>integer</type>)</term>\n+ <listitem>\n\nIs the \"integer\" type here correct? It might eventually be stored as\nan integer, but IIUC (going by the tests) from the user point-of-view\nthis parameter is really \"text\" type for representing ms or interval,\nright?\n\n~~~\n\n6. doc/src/sgml/ref/create_subscription.sgml\n\n Similar\n+ to the physical replication feature\n+ (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it may be useful to\n+ have a time-delayed copy of data for logical replication.\n\nSUGGESTION\nAs with the physical replication feature (recovery_min_apply_delay),\nit can be useful for logical replication to delay the data\nreplication.\n\n~~~\n\n7. 
doc/src/sgml/ref/create_subscription.sgml\n\nDelays in logical\n+ decoding and in transfer the transaction may reduce the actual wait\n+ time.\n\nSUGGESTION\nTime spent in logical decoding and in transferring the transaction may\nreduce the actual wait time.\n\n~~~\n\n8. doc/src/sgml/ref/create_subscription.sgml\n\nIf the system clocks on publisher and subscriber are not\n+ synchronized, this may lead to apply changes earlier than expected.\n\nWhy just say \"earlier than expected\"? If the publisher's time is ahead\nof the subscriber then the changes might also be *later* than\nexpected, right? So, perhaps it is better to just say \"other than\nexpected\".\n\n~~~\n\n9. doc/src/sgml/ref/create_subscription.sgml\n\nShould there also be a big warning box about the impact if using\nsynchronous_commit (like the other streaming replication page has this\nwarning)?\n\n~~~\n\n10. doc/src/sgml/ref/create_subscription.sgml\n\nI think there should be some examples somewhere showing how to specify\nthis parameter. Maybe they are better added somewhere in \"31.2\nSubscription\" and xrefed from here.\n\n======\n\n11. src/backend/commands/subscriptioncmds.c - parse_subscription_options\n\nI think there should be a default assignment to 0 (done where all the\nother supported option defaults are set)\n\n~~~\n\n12. src/backend/commands/subscriptioncmds.c - parse_subscription_options\n\n+ if (opts->min_apply_delay < 0)\n+ ereport(ERROR,\n+ errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"option \\\"%s\\\" must not be negative\", \"min_apply_delay\"));\n+\n\nI thought this check only needs to be done within the scope of the preceding\nif - (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\nstrcmp(defel->defname, \"min_apply_delay\") == 0)\n\n======\n\n13. 
src/backend/commands/subscriptioncmds.c - AlterSubscription\n\n@@ -1093,6 +1126,17 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n if (opts.enabled)\n ApplyLauncherWakeupAtCommit();\n\n+ /*\n+ * If this subscription has been disabled and it has an apply\n+ * delay set, wake up the logical replication worker to finish\n+ * it as soon as possible.\n+ */\n+ if (!opts.enabled && sub->applydelay > 0)\n\nI did not really understand the logic why should the min_apply_delay\noverride the enabled=false? It is a called *minimum* delay so if it\nends up being way over the parameter value (because the subscription\nis disabled) then why does that matter?\n\n======\n\n14. src/backend/replication/logical/worker.c\n\n@@ -252,6 +252,7 @@ WalReceiverConn *LogRepWorkerWalRcvConn = NULL;\n\n Subscription *MySubscription = NULL;\n static bool MySubscriptionValid = false;\n+TimestampTz MySubscriptionMinApplyDelayUntil = 0;\n\nLooking at the only usage of this variable (in apply_delay) and how it\nis used I did see why this cannot just be a local member of the\napply_delay function?\n\n~~~\n\n15. src/backend/replication/logical/worker.c - apply_delay\n\n+/*\n+ * Apply the informed delay for the transaction.\n+ *\n+ * A regular transaction uses the commit time to calculate the delay. A\n+ * prepared transaction uses the prepare time to calculate the delay.\n+ */\n+static void\n+apply_delay(TimestampTz ts)\n\nI didn't think it needs to mention here about the different kinds of\ntransactions because where it comes from has nothing really to do with\nthis function's logic.\n\n~~~\n\n16. src/backend/replication/logical/worker.c - apply_delay\n\nRefer to comment #14 about MySubscriptionMinApplyDelayUntil.\n\n~~~\n\n17. 
src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n\n@@ -1090,6 +1146,19 @@ apply_handle_stream_prepare(StringInfo s)\n\n elog(DEBUG1, \"received prepare for streamed transaction %u\",\nprepare_data.xid);\n\n+ /*\n+ * Should we delay the current prepared transaction?\n+ *\n+ * Although the delay is applied in BEGIN PREPARE messages, streamed\n+ * prepared transactions apply the delay in a STREAM PREPARE message.\n+ * That's ok because no changes have been applied yet\n+ * (apply_spooled_messages() will do it).\n+ * The STREAM START message does not contain a prepare time (it will be\n+ * available when the in-progress prepared transaction finishes), hence, it\n+ * was not possible to apply a delay at that time.\n+ */\n+ apply_delay(prepare_data.prepare_time);\n+\n\nIt seems to rely on the spooling happening at the end. But won't this\ncause a problem later when/if the \"parallel apply\" patch [1] is pushed\nand the stream bgworkers are doing stuff on the fly instead of\nspooling at the end?\n\nOr are you expecting that the \"parallel apply\" feature should be\ndisabled if there is any min_apply_delay parameter specified?\n\n~~~\n\n18. src/backend/replication/logical/worker.c - apply_handle_stream_commit\n\nDitto comment #17.\n\n======\n\n19. src/bin/psql/tab-complete.c\n\nLet's keep the alphabetical order of the parameters in COMPLETE_WITH, as per [2]\n\n======\n\n20. src/include/catalog/pg_subscription.h\n\n@@ -58,6 +58,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n * skipped */\n\n+ int64 subapplydelay; /* Replication apply delay */\n+\n\nIMO the comment should mention the units \"(ms)\"\n\n======\n\n21. src/test/regress/sql/subscription.sql\n\nThere are some test cases for CREATE SUBSCRIPTION but there are no\ntest cases for ALTER SUBSCRIPTION changing this new parameter.\n\n====\n\n22. 
src/test/subscription/t/032_apply_delay.pl\n\nI received the following error when trying to run these 'subscription' tests:\n\nt/032_apply_delay.pl ............... No such class log_location at\nt/032_apply_delay.pl line 49, near \"my log_location\"\nsyntax error at t/032_apply_delay.pl line 49, near \"my log_location =\"\nGlobal symbol \"$log_location\" requires explicit package name at\nt/032_apply_delay.pl line 103.\nGlobal symbol \"$log_location\" requires explicit package name at\nt/032_apply_delay.pl line 105.\nGlobal symbol \"$log_location\" requires explicit package name at\nt/032_apply_delay.pl line 105.\nGlobal symbol \"$log_location\" requires explicit package name at\nt/032_apply_delay.pl line 107.\nGlobal symbol \"$sect\" requires explicit package name at\nt/032_apply_delay.pl line 108.\nExecution of t/032_apply_delay.pl aborted due to compilation errors.\nt/032_apply_delay.pl ............... Dubious, test returned 255 (wstat\n65280, 0xff00)\nNo subtests run\nt/100_bugs.pl ...................... ok\n\nTest Summary Report\n-------------------\nt/032_apply_delay.pl (Wstat: 65280 Tests: 0 Failed: 0)\n Non-zero exit status: 255\n Parse errors: No plan found in TAP output\n\n------\n[1] https://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAHut%2BPucvKZgg_eJzUW--iL6DXHg1Jwj6F09tQziE3kUF67uLg%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 5 Jul 2022 18:41:51 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tue, Jul 5, 2022 at 2:12 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for your v4-0001 patch. I hope they are\n> useful for you.\n>\n> ======\n>\n> 1. General\n>\n> This thread name \"logical replication restrictions\" seems quite\n> unrelated to the patch here. Maybe it's better to start a new thread\n> otherwise nobody is going to recognise what this thread is really\n> about.\n>\n\n+1.\n\n>\n> 17. src/backend/replication/logical/worker.c - apply_handle_stream_prepare\n>\n> @@ -1090,6 +1146,19 @@ apply_handle_stream_prepare(StringInfo s)\n>\n> elog(DEBUG1, \"received prepare for streamed transaction %u\",\n> prepare_data.xid);\n>\n> + /*\n> + * Should we delay the current prepared transaction?\n> + *\n> + * Although the delay is applied in BEGIN PREPARE messages, streamed\n> + * prepared transactions apply the delay in a STREAM PREPARE message.\n> + * That's ok because no changes have been applied yet\n> + * (apply_spooled_messages() will do it).\n> + * The STREAM START message does not contain a prepare time (it will be\n> + * available when the in-progress prepared transaction finishes), hence, it\n> + * was not possible to apply a delay at that time.\n> + */\n> + apply_delay(prepare_data.prepare_time);\n> +\n>\n> It seems to rely on the spooling happening at the end. But won't this\n> cause a problem later when/if the \"parallel apply\" patch [1] is pushed\n> and the stream bgworkers are doing stuff on the fly instead of\n> spooling at the end?\n>\n\nI wonder why we don't apply the delay on commit/commit_prepared\nrecords only similar to physical replication. See recoveryApplyDelay.\nOne more advantage would be then we don't need to worry about\ntransactions that we are going to skip due SKIP feature for\nsubscribers.\n\nOne more thing that might be worth discussing is whether introducing a\nnew subscription parameter for this feature is a better idea or can we\nuse guc (either an existing or a new one). 
Users may want to set this\nonly for a particular subscription or set of subscriptions in which\ncase it is better to have this as a subscription level parameter.\nOTOH, I was slightly worried that if this will be used for all\nsubscriptions on a subscriber then it will be burdensome for users.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Jul 2022 17:59:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "Hi Euler,\n\nI've some comments/questions about the latest version (v4) of your patch.\n\nFirstly, I think the patch needs a rebase. CI currently cannot apply it [1].\n\n22. src/test/subscription/t/032_apply_delay.pl\n>\n> I received the following error when trying to run these 'subscription'\n> tests:\n>\n> t/032_apply_delay.pl ............... No such class log_location at\n> t/032_apply_delay.pl line 49, near \"my log_location\"\n> syntax error at t/032_apply_delay.pl line 49, near \"my log_location =\"\n>\n\nI'm having these errors too. Seems like some declarations are missing.\n\n\n+ specified amount of time. If this value is specified without\n>> units,\n>\n> + it is taken as milliseconds. The default is zero, adding no\n>> delay.\n>\n> + </para>\n>\n> I'm also having an issue when I give min_apply_delay parameter without\nunits.\nI expect that if I set min_apply_delay to 5000 (without any unit), it will\nbe interpreted as 5000 ms.\n\nI tried:\npostgres=# CREATE SUBSCRIPTION testsub CONNECTION 'dbname=postgres\nport=5432' PUBLICATION testpub WITH (min_apply_delay=5000);\n\nAnd logs showed:\n2022-07-13 20:26:52.231 +03 [5422] LOG: logical replication apply delay:\n4999999 ms\n2022-07-13 20:26:52.231 +03 [5422] CONTEXT: processing remote data for\nreplication origin \"pg_18126\" during \"BEGIN\" in transaction 3152 finished\nat 0/465D7A0\n\nLooks like it starts from 5000000 ms instead of 5000 ms for me. If I state\nthe unit as ms, then it works correctly.\n\n\nLastly, I have a question about this delay during tablesync.\nIt's stated here that apply delays are not for initial tablesync.\n\n <para>\n>\n> + The delay occurs only on WAL records for transaction begins and\n>> after\n>\n> + the initial table synchronization. It is possible that the\n>\n> + replication delay between publisher and subscriber exceeds the\n>> value\n>\n> + of this parameter, in which case no delay is added. 
Note that\n>> the\n>\n> + delay is calculated between the WAL time stamp as written on\n>\n> + publisher and the current time on the subscriber. Delays in\n>> logical\n>\n> + decoding and in transfer the transaction may reduce the actual\n>> wait\n>\n> + time. If the system clocks on publisher and subscriber are not\n>\n> + synchronized, this may lead to apply changes earlier than\n>> expected.\n>\n> + This is not a major issue because a typical setting of this\n>> parameter\n>\n> + are much larger than typical time deviations between servers.\n>\n> + </para>\n>\n>\nThere might be a case where tablesync workers are in SYNCWAIT state and\nwaiting for apply worker to tell them to CATCHUP.\nAnd if apply worker is waiting in apply_delay function, tablesync workers\nwill be stuck at SYNCWAIT state and this might delay tablesync at least\n\"min_apply_delay\" amount of time or more.\nIs it something we would want? What do you think?\n\n\n[1] http://cfbot.cputube.org/patch_38_3581.log\n\n\nBest,\nMelih\n\nHi Euler,I've some comments/questions about the latest version (v4) of your patch.Firstly, I think the patch needs a rebase. CI currently cannot apply it [1].22. src/test/subscription/t/032_apply_delay.plI received the following error when trying to run these 'subscription' tests:t/032_apply_delay.pl ............... No such class log_location att/032_apply_delay.pl line 49, near \"my log_location\"syntax error at t/032_apply_delay.pl line 49, near \"my log_location =\"I'm having these errors too. Seems like some declarations are missing.+ specified amount of time. If this value is specified without units,+ it is taken as milliseconds. 
The default is zero, adding no delay.+ </para>I'm also having an issue when I give min_apply_delay parameter without units.I expect that if I set min_apply_delay to 5000 (without any unit), it will be interpreted as 5000 ms.I tried:postgres=# CREATE SUBSCRIPTION testsub CONNECTION 'dbname=postgres port=5432' PUBLICATION testpub WITH (min_apply_delay=5000);And logs showed:2022-07-13 20:26:52.231 +03 [5422] LOG: logical replication apply delay: 4999999 ms2022-07-13 20:26:52.231 +03 [5422] CONTEXT: processing remote data for replication origin \"pg_18126\" during \"BEGIN\" in transaction 3152 finished at 0/465D7A0Looks like it starts from 5000000 ms instead of 5000 ms for me. If I state the unit as ms, then it works correctly. Lastly, I have a question about this delay during tablesync. It's stated here that apply delays are not for initial tablesync. <para>+ The delay occurs only on WAL records for transaction begins and after+ the initial table synchronization. It is possible that the+ replication delay between publisher and subscriber exceeds the value+ of this parameter, in which case no delay is added. Note that the+ delay is calculated between the WAL time stamp as written on+ publisher and the current time on the subscriber. Delays in logical+ decoding and in transfer the transaction may reduce the actual wait+ time. If the system clocks on publisher and subscriber are not+ synchronized, this may lead to apply changes earlier than expected.+ This is not a major issue because a typical setting of this parameter+ are much larger than typical time deviations between servers.+ </para>There might be a case where tablesync workers are in SYNCWAIT state and waiting for apply worker to tell them to CATCHUP. And if apply worker is waiting in apply_delay function, tablesync workers will be stuck at SYNCWAIT state and this might delay tablesync at least \"min_apply_delay\" amount of time or more.Is it something we would want? 
What do you think?[1] http://cfbot.cputube.org/patch_38_3581.logBest,Melih",
"msg_date": "Wed, 13 Jul 2022 20:34:36 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tue, Jul 5, 2022, at 5:41 AM, Peter Smith wrote:\n> Here are some review comments for your v4-0001 patch. I hope they are\n> useful for you.\nThanks for your review.\n\n> This thread name \"logical replication restrictions\" seems quite\n> unrelated to the patch here. Maybe it's better to start a new thread\n> otherwise nobody is going to recognise what this thread is really\n> about.\nI agree that the $SUBJECT does not describe the proposal. I decided that it is\nnot worth creating a thread because (i) there are some interaction and they\ncould be monitoring this thread and (ii) the CF entry has the correct\ndescription.\n\n> Similar to physical replication, a time-delayed copy of the data for\n> logical replication is useful for some scenarios (specially to fix\n> errors that might cause data loss).\nI changed the commit message a bit. \n\n> Maybe take some examples from the regression tests to show usage of\n> the new parameter\nI don't think an example is really useful in a commit message. If you are\nchecking this commit, it is a matter of reading the regression tests or\ndocumentation to obtain an example of how to use it.\n\n> I think this should say that the units are ms.\nUnit included.\n\n> Is the \"integer\" type here correct? It might eventually be stored as\n> an integer, but IIUC (going by the tests) from the user point-of-view\n> this parameter is really \"text\" type for representing ms or interval,\n> right?\nThe internal representation is integer. The unit is correct. If you use units,\nthe format is text that what the section [1] calls \"Numeric with Unit\". Even\nif the user is unsure about its usage, an example might help here.\n\n> SUGGESTION\n> As with the physical replication feature (recovery_min_apply_delay),\n> it can be useful for logical replication to delay the data\n> replication.\nIt is not \"data replication\", it is applying changes. 
I reworded that sentence.\n\n> SUGGESTION\n> Time spent in logical decoding and in transferring the transaction may\n> reduce the actual wait time.\nChanged.\n\n> If the system clocks on publisher and subscriber are not\n> + synchronized, this may lead to apply changes earlier than expected.\n> \n> Why just say \"earlier than expected\"? If the publisher's time is ahead\n> of the subscriber then the changes might also be *later* than\n> expected, right? So, perhaps it is better to just say \"other than\n> expected\".\nThis sentence is similar to another one in the recovery_min_apply_delay. I want\nto emphasize the fact that even if you use a 30-minute delay, it might apply a\nchange that happened 29 minutes 55 seconds ago. The main reason for this\nfeature is to avoid modifying changes *earlier*. If it applies the change 30\nminutes 5 seconds later, it is fine.\n\n> Should there also be a big warning box about the impact if using\n> synchronous_commit (like the other streaming replication page has this\n> warning)?\nImpact? Could you elaborate?\n\n> I think there should be some examples somewhere showing how to specify\n> this parameter. Maybe they are better added somewhere in \"31.2\n> Subscription\" and xrefed from here.\nI added one example in the CREATE SUBSCRIPTION. We can add an example in the\nsection 31.2, however, since it is a new chapter I think it lacks examples for\nthe other options too (streaming, two_phase, copy_data, ...). It could be\nsubmitted as a separate patch IMO.\n\n> I think there should be a default assignment to 0 (done where all the\n> other supported option defaults are set)\nIt could, for completeness; the memset() takes care of it. 
Anyway, I added it to\nthe beginning of the parse_subscription_options().\n\n> + if (opts->min_apply_delay < 0)\n> + ereport(ERROR,\n> + errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> + errmsg(\"option \\\"%s\\\" must not be negative\", \"min_apply_delay\"));\n> +\n> \n> I thought this check only needs to be do within scope of the preceding\n> if - (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> strcmp(defel->defname, \"min_apply_delay\") == 0)\nFixed.\n\n> + /*\n> + * If this subscription has been disabled and it has an apply\n> + * delay set, wake up the logical replication worker to finish\n> + * it as soon as possible.\n> + */\n> + if (!opts.enabled && sub->applydelay > 0)\n> \n> I did not really understand the logic why should the min_apply_delay\n> override the enabled=false? It is a called *minimum* delay so if it\n> ends up being way over the parameter value (because the subscription\n> is disabled) then why does that matter?\nIt doesn't. The main point of this code (as I tried to explain in the comment)\nis to kill the worker as soon as possible if you disable the subscription.\nIsn't the comment clear?\n\n> Subscription *MySubscription = NULL;\n> static bool MySubscriptionValid = false;\n> +TimestampTz MySubscriptionMinApplyDelayUntil = 0;\n> \n> Looking at the only usage of this variable (in apply_delay) and how it\n> is used I did see why this cannot just be a local member of the\n> apply_delay function?\nGood catch. A previous patch used that variable outside that function scope.\n\n> +/*\n> + * Apply the informed delay for the transaction.\n> + *\n> + * A regular transaction uses the commit time to calculate the delay. 
A\n> + * prepared transaction uses the prepare time to calculate the delay.\n> + */\n> +static void\n> +apply_delay(TimestampTz ts)\n> \n> I didn't think it needs to mention here about the different kinds of\n> transactions because where it comes from has nothing really to do with\n> this function's logic.\nFixed.\n\n> Refer to comment #14 about MySubscriptionMinApplyDelayUntil.\nFixed.\n\n> It seems to rely on the spooling happening at the end. But won't this\n> cause a problem later when/if the \"parallel apply\" patch [1] is pushed\n> and the stream bgworkers are doing stuff on the fly instead of\n> spooling at the end?\n> \n> Or are you expecting that the \"parallel apply\" feature should be\n> disabled if there is any min_apply_delay parameter specified?\nI didn't read the \"parallel apply\" patch yet.\n\n> Let's keep the alphabetical order of the parameters in COMPLETE_WITH, as per [2]\nFixed.\n\n> + int64 subapplydelay; /* Replication apply delay */\n> +\n> \n> IMO the comment should mention the units \"(ms)\"\nI'm not sure. It should be documented in the catalogs. It is an important\ninformation for user-visible interface. There are a few places in the\ndocumentation that the unit is mentioned.\n\n> There are some test cases for CREATE SUBSCRIPTION but there are no\n> test cases for ALTER SUBSCRIPTION changing this new parameter.\nI added a test to cover ALTER SUBSCRIPTION and also for the disabling a\nsubscription that contains a delay set.\n\n> I received the following error when trying to run these 'subscription' tests:\nFixed.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 01 Aug 2022 09:07:47 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tue, Jul 5, 2022, at 9:29 AM, Amit Kapila wrote:\n> I wonder why we don't apply the delay on commit/commit_prepared\n> records only similar to physical replication. See recoveryApplyDelay.\n> One more advantage would be then we don't need to worry about\n> transactions that we are going to skip due SKIP feature for\n> subscribers.\nI added an explanation at the top of apply_delay(). I didn't read the \"parallel\napply\" patch yet. I'll do soon to understand how the current design for\nstreamed transactions conflicts with the parallel apply patch.\n\n+ * It applies the delay for the next transaction but before starting the\n+ * transaction. The main reason for this design is to avoid a long-running\n+ * transaction (which can cause some operational challenges) if the user sets a\n+ * high value for the delay. This design is different from the physical\n+ * replication (that applies the delay at commit time) mainly because write\n+ * operations may allow some issues (such as bloat and locks) that can be\n+ * minimized if it does not keep the transaction open for such a long time.\n+ */\n+static void\n+apply_delay(TimestampTz ts)\n\nRegarding the skip transaction feature, we could certainly skip the\ntransactions combined with the apply delay. However, it introduces complexity\nfor a rare use case IMO. Besides that, the skip transaction code path is fast,\nhence, it is very unlikely that the current patch will impose some issues to\nthe skip transaction feature. (Remember that the main goal for this feature is\nto provide an old state of the database.)\n\n> One more thing that might be worth discussing is whether introducing a\n> new subscription parameter for this feature is a better idea or can we\n> use guc (either an existing or a new one). 
Users may want to set this\n> only for a particular subscription or set of subscriptions in which\n> case it is better to have this as a subscription level parameter.\n> OTOH, I was slightly worried that if this will be used for all\n> subscriptions on a subscriber then it will be burdensome for users.\nThat's a good point. Logical replication is per database and it is slightly\ndifferent from physical replication that is per cluster. In physical\nreplication, you have no choice but to have a GUC. It is very unlikely that\nsomeone wants to delay all logical replicas. Therefore, the benefit of having a\nGUC is quite small.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 01 Aug 2022 10:15:25 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Mon, Aug 1, 2022 at 6:46 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Tue, Jul 5, 2022, at 9:29 AM, Amit Kapila wrote:\n>\n> I wonder why we don't apply the delay on commit/commit_prepared\n> records only similar to physical replication. See recoveryApplyDelay.\n> One more advantage would be then we don't need to worry about\n> transactions that we are going to skip due SKIP feature for\n> subscribers.\n>\n> I added an explanation at the top of apply_delay(). I didn't read the \"parallel\n> apply\" patch yet. I'll do soon to understand how the current design for\n> streamed transactions conflicts with the parallel apply patch.\n>\n> + * It applies the delay for the next transaction but before starting the\n> + * transaction. The main reason for this design is to avoid a long-running\n> + * transaction (which can cause some operational challenges) if the user sets a\n> + * high value for the delay. This design is different from the physical\n> + * replication (that applies the delay at commit time) mainly because write\n> + * operations may allow some issues (such as bloat and locks) that can be\n> + * minimized if it does not keep the transaction open for such a long time.\n> + */\n\nYour explanation makes sense to me. The other point to consider is\nthat there can be cases where we may not apply operation for the\ntransaction because of empty transactions (we don't yet skip empty\nxacts for prepared transactions). So, won't it be better to apply the\ndelay just before we apply the first change for a transaction? Do we\nwant to apply the delay during table sync as we sometimes do need to\nenter apply phase while doing table sync?\n\n>\n> One more thing that might be worth discussing is whether introducing a\n> new subscription parameter for this feature is a better idea or can we\n> use guc (either an existing or a new one). 
Users may want to set this\n> only for a particular subscription or set of subscriptions in which\n> case it is better to have this as a subscription level parameter.\n> OTOH, I was slightly worried that if this will be used for all\n> subscriptions on a subscriber then it will be burdensome for users.\n>\n> That's a good point. Logical replication is per database and it is slightly\n> different from physical replication that is per cluster. In physical\n> replication, you have no choice but to have a GUC. It is very unlikely that\n> someone wants to delay all logical replicas. Therefore, the benefit of having a\n> GUC is quite small.\n>\n\nFair enough.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Aug 2022 18:57:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Jul 13, 2022, at 2:34 PM, Melih Mutlu wrote:\n\n[Sorry for the delay...]\n\n> 22. src/test/subscription/t/032_apply_delay.pl\n>> \n>> I received the following error when trying to run these 'subscription' tests:\n>> \n>> t/032_apply_delay.pl ............... No such class log_location at\n>> t/032_apply_delay.pl line 49, near \"my log_location\"\n>> syntax error at t/032_apply_delay.pl line 49, near \"my log_location =\"\n> \n> I'm having these errors too. Seems like some declarations are missing.\nFixed in v5.\n\n> \n>>> + specified amount of time. If this value is specified without units,\n>>> + it is taken as milliseconds. The default is zero, adding no delay.\n>>> + </para>\n> I'm also having an issue when I give min_apply_delay parameter without units.\n> I expect that if I set min_apply_delay to 5000 (without any unit), it will be interpreted as 5000 ms.\nGood catch. I fixed it in v5.\n\n> \n> Lastly, I have a question about this delay during tablesync. \n> It's stated here that apply delays are not for initial tablesync.\n> \n>>> <para>\n>>> + The delay occurs only on WAL records for transaction begins and after\n>>> + the initial table synchronization. It is possible that the\n>>> + replication delay between publisher and subscriber exceeds the value\n>>> + of this parameter, in which case no delay is added. Note that the\n>>> + delay is calculated between the WAL time stamp as written on\n>>> + publisher and the current time on the subscriber. Delays in logical\n>>> + decoding and in transfer the transaction may reduce the actual wait\n>>> + time. 
If the system clocks on publisher and subscriber are not\n>>> + synchronized, this may lead to apply changes earlier than expected.\n>>> + This is not a major issue because a typical setting of this parameter\n>>> + are much larger than typical time deviations between servers.\n>>> + </para>\n> \n> There might be a case where tablesync workers are in SYNCWAIT state and waiting for apply worker to tell them to CATCHUP. \n> And if apply worker is waiting in apply_delay function, tablesync workers will be stuck at SYNCWAIT state and this might delay tablesync at least \"min_apply_delay\" amount of time or more.\n> Is it something we would want? What do you think?\nGood catch. That's an oversight. It should wait for the initial table\nsynchronization before starting to apply the delay. The main reason is the\ncurrent logical replication worker design. It only closes the tablesync workers\nafter the catchup phase. As you noticed we cannot impose the delay as soon as\nthe COPY finishes because it will take a long time to finish due to possibly\nlack of workers. Instead, let's wait for the READY state for all tables then\napply the delay. I added an explanation for it.\n\nI also modified the test a bit to use the new function\nwait_for_subscription_sync introduced in the commit\n0c20dd33db1607d6a85ffce24238c1e55e384b49.\n\nI attached a v6.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 08 Aug 2022 18:46:56 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Wed, Aug 3, 2022, at 10:27 AM, Amit Kapila wrote:\n> Your explanation makes sense to me. The other point to consider is\n> that there can be cases where we may not apply operation for the\n> transaction because of empty transactions (we don't yet skip empty\n> xacts for prepared transactions). So, won't it be better to apply the\n> delay just before we apply the first change for a transaction? Do we\n> want to apply the delay during table sync as we sometimes do need to\n> enter apply phase while doing table sync?\nI thought about the empty transactions but decided to not complicate the code\nmainly because skipping transactions is not a code path that will slow down\nthis feature. As explained in the documentation, there is no harm in delaying a\ntransaction for more than min_apply_delay; it cannot apply earlier. Having said\nthat I decided to do nothing. I'm also not sure if it deserves a comment or if\nthis email is a possible explanation for this decision.\n\nRegarding the table sync that was mentioned by Melih, I sent a new version (v6)\nthat fixed this oversight. The current logical replication worker design makes\nit difficult to apply the delay in the catchup phase; tablesync workers are not\nclosed as soon as the COPY finishes (which means possibly running out of\nworkers sooner). After all tablesync workers have reached READY state, the\napply delay is activated. The documentation was correct; the code wasn't.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 08 Aug 2022 19:22:22 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tuesday, August 9, 2022 6:47 AM Euler Taveira <euler@eulerto.com> wrote:\n> I attached a v6.\nHi, thank you for posting the updated patch.\n\n\nMinor review comments for v6.\n\n(1) commit message\n\n\"If the subscriber sets min_apply_delay parameter, ...\"\n\nI suggest we use subscription rather than subscriber, because\nthis parameter refers to and is used for one subscription.\nMy suggestion is\n\"If one subscription sets min_apply_delay parameter, ...\"\nIn case if you agree, there are other places to apply this change.\n\n\n(2) commit message\n\nIt might be better to write a note for committer\nlike \"Bump catalog version\" at the bottom of the commit message.\n\n\n(3) unit alignment between recovery_min_apply_delay and min_apply_delay\n\nThe former interprets input number as milliseconds in case of no units,\nwhile the latter takes it as seconds without units.\nI feel it would be better to make them aligned.\n\n\n(4) catalogs.sgml\n\n+ Delay the application of changes by a specified amount of time. The\n+ unit is in milliseconds.\n\nAs a column explanation, it'd be better to use a noun\nin the first sentence to make this description aligned with\nother places. My suggestion is\n\"Application delay of changes by ....\".\n\n\n(5) pg_subscription.c\n\nThere is one missing blank line before writing if statement.\nIt's written in the AlterSubscription for other cases.\n\n@@ -1100,6 +1130,12 @@ AlterSubscription(ParseState *pstate, AlterSubscriptionStmt *stmt,\n replaces[Anum_pg_subscription_subdisableonerr - 1]\n = true;\n }\n+ if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY))\n\n\n(6) tab-complete.c\n\nThe order of tab-complete parameters listed in the COMPLETE_WITH\nshould follow alphabetical order. 
\"min_apply_delay\" can come before \"origin\".\nWe can refer to d547f7c commit.\n\n\n(7) 032_apply_delay.pl\n\nThere are missing whitespaces after comma in the mod functions.\n\nUPDATE test_tab SET b = md5(b) WHERE mod(a,2) = 0;\nDELETE FROM test_tab WHERE mod(a,3) = 0;\n\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 10 Aug 2022 12:39:29 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "On Wed, Aug 10, 2022, at 9:39 AM, osumi.takamichi@fujitsu.com wrote:\n> Minor review comments for v6.\nThanks for your review. I'm attaching v7.\n\n> \"If the subscriber sets min_apply_delay parameter, ...\"\n> \n> I suggest we use subscription rather than subscriber, because\n> this parameter refers to and is used for one subscription.\n> My suggestion is\n> \"If one subscription sets min_apply_delay parameter, ...\"\n> In case if you agree, there are other places to apply this change.\nI changed the terminology to subscription. I also checked other \"subscriber\"\noccurrences but I don't think it should be changed. Some of them are used as\npublisher/subscriber pair. If you think there is another sentence to consider,\npoint it out.\n\n> It might be better to write a note for committer\n> like \"Bump catalog version\" at the bottom of the commit message.\nIt is a committer task to bump the catalog number. IMO it is easy to notice\n(using a git hook?) that it must bump it when we are modifying the catalog.\nAFAICS there is no recommendation to add such a warning.\n\n> The former interprets input number as milliseconds in case of no units,\n> while the latter takes it as seconds without units.\n> I feel it would be better to make them aligned.\nIn a previous version I decided not to add a code to attach a unit when there\nisn't one. Instead, I changed the documentation to reflect what interval_in\nuses (seconds as unit). Under reflection, let's use ms as default unit if the\nuser doesn't specify one.\n\nI fixed all the other suggestions too.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Wed, 10 Aug 2022 17:33:00 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tue, Aug 9, 2022 at 3:52 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Aug 3, 2022, at 10:27 AM, Amit Kapila wrote:\n>\n> Your explanation makes sense to me. The other point to consider is\n> that there can be cases where we may not apply operation for the\n> transaction because of empty transactions (we don't yet skip empty\n> xacts for prepared transactions). So, won't it be better to apply the\n> delay just before we apply the first change for a transaction? Do we\n> want to apply the delay during table sync as we sometimes do need to\n> enter apply phase while doing table sync?\n>\n> I thought about the empty transactions but decided to not complicate the code\n> mainly because skipping transactions is not a code path that will slow down\n> this feature. As explained in the documentation, there is no harm in delaying a\n> transaction for more than min_apply_delay; it cannot apply earlier. Having said\n> that I decided to do nothing. I'm also not sure if it deserves a comment or if\n> this email is a possible explanation for this decision.\n>\n\nI don't know what makes you think it will complicate the code. But\nanyway thinking further about the way apply_delay is used at various\nplaces in the patch, as pointed out by Peter Smith it seems it won't\nwork for the parallel apply feature where we start applying the\ntransaction immediately after start stream. I was wondering why don't\nwe apply delay after each commit of the transaction rather than at the\nbegin command. We can remember if the transaction has made any change\nand if so then after commit, apply the delay. If we can do that then\nit will alleviate the concern of empty and skipped xacts as well.\n\nAnother thing I was wondering how to determine what is a good delay\ntime for tests and found that current tests in replay_delay.pl uses\n3s, so should we use the same for apply delay tests in this patch as\nwell?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Aug 2022 16:03:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "Dear Euler,\n\nThank you for making the patch! I'm also interested in the patch so I want to join the thread.\n\nWhile testing your patch, I noticed that the 032_apply_delay.pl failed.\nPSA logs that generated on my machine. This failure is same as reported by cfbot[1].\n\nIt seemed that the apply worker could not exit and starts WaitLatch() again even if the subscription had been disabled.\nFollowings are cited from attached log.\n\n```\n...\n2022-09-14 09:44:30.489 UTC [14880] 032_apply_delay.pl LOG: statement: ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\n2022-09-14 09:44:30.525 UTC [14777] DEBUG: sending feedback (force 0) to recv 0/1690220, write 0/1690220, flush 0/1690220\n2022-09-14 09:44:30.526 UTC [14759] DEBUG: server process (PID 14878) exited with exit code 0\n2022-09-14 09:44:30.535 UTC [14777] DEBUG: logical replication apply delay: 86460000 ms\n2022-09-14 09:44:30.535 UTC [14777] CONTEXT: processing remote data for replication origin \"pg_16393\" during \"BEGIN\" in transaction 734 finished at 0/16902A8\n2022-09-14 09:44:30.576 UTC [14759] DEBUG: forked new backend, pid=14884 socket=6\n2022-09-14 09:44:30.578 UTC [14759] DEBUG: server process (PID 14880) exited with exit code 0\n2022-09-14 09:44:30.583 UTC [14884] 032_apply_delay.pl LOG: statement: ALTER SUBSCRIPTION tap_sub DISABLE\n2022-09-14 09:44:30.589 UTC [14777] DEBUG: logical replication apply delay: 86459945 ms\n2022-09-14 09:44:30.589 UTC [14777] CONTEXT: processing remote data for replication origin \"pg_16393\" during \"BEGIN\" in transaction 734 finished at 0/16902A8\n2022-09-14 09:44:30.608 UTC [14759] DEBUG: forked new backend, pid=14886 socket=6\n2022-09-14 09:44:30.632 UTC [14886] 032_apply_delay.pl LOG: statement: SELECT count(1) = 0 FROM pg_stat_subscription WHERE subname = 'tap_sub' AND pid IS NOT NULL;\n2022-09-14 09:44:30.665 UTC [14759] DEBUG: server process (PID 14884) exited with exit code 0\n...\n```\n\nI think this may be caused because the 
delayed worker will not read the modified catalog even if ALTER SUBSCRIPTION ... DISABLED is called.\nI also attached the fix patch that can be applied after yours. It seems OK on my env.\n\n[1]: https://cirrus-ci.com/task/4888001967816704\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 14 Sep 2022 11:10:45 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "Hi,\n\nSorry for noise but I found another bug.\nWhen the 032_apply_delay.pl is modified like following,\nthe test will be always failed even if my patch is applied.\n\n```\n# Disable subscription. worker should die immediately.\n-$node_subscriber->safe_psql('postgres',\n- \"ALTER SUBSCRIPTION tap_sub DISABLE\"\n+$node_subscriber->safe_psql('postgres', q{\n+BEGIN;\n+ALTER SUBSCRIPTION tap_sub DISABLE;\n+SELECT pg_sleep(1);\n+COMMIT;\n+}\n );\n```\n\nThe point of failure is same as I reported previously.\n\n```\n...\n2022-09-14 12:00:48.891 UTC [11330] 032_apply_delay.pl LOG: statement: ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\n2022-09-14 12:00:48.910 UTC [11226] DEBUG: sending feedback (force 0) to recv 0/1690220, write 0/1690220, flush 0/1690220\n2022-09-14 12:00:48.937 UTC [11208] DEBUG: server process (PID 11328) exited with exit code 0\n2022-09-14 12:00:48.950 UTC [11226] DEBUG: logical replication apply delay: 86459996 ms\n2022-09-14 12:00:48.950 UTC [11226] CONTEXT: processing remote data for replication origin \"pg_16393\" during \"BEGIN\" in transaction 734 finished at 0/16902A8\n2022-09-14 12:00:48.979 UTC [11208] DEBUG: forked new backend, pid=11334 socket=6\n2022-09-14 12:00:49.007 UTC [11334] 032_apply_delay.pl LOG: statement: BEGIN;\n2022-09-14 12:00:49.008 UTC [11334] 032_apply_delay.pl LOG: statement: ALTER SUBSCRIPTION tap_sub DISABLE;\n2022-09-14 12:00:49.009 UTC [11334] 032_apply_delay.pl LOG: statement: SELECT pg_sleep(1);\n2022-09-14 12:00:49.009 UTC [11226] DEBUG: check status of MySubscription\n2022-09-14 12:00:49.009 UTC [11226] CONTEXT: processing remote data for replication origin \"pg_16393\" during \"BEGIN\" in transaction 734 finished at 0/16902A8\n2022-09-14 12:00:49.009 UTC [11226] DEBUG: logical replication apply delay: 86459937 ms\n2022-09-14 12:00:49.009 UTC [11226] CONTEXT: processing remote data for replication origin \"pg_16393\" during \"BEGIN\" in transaction 734 finished at 
0/16902A8\n...\n```\n\nI think it may be caused that waken worker read catalogs that have not modified yet.\nIn AlterSubscription(), the backend kicks the apply worker ASAP, but it should be at \nend of the transaction, like ApplyLauncherWakeupAtCommit() and AtEOXact_ApplyLauncher().\n\n```\n+ /*\n+ * If this subscription has been disabled and it has an apply\n+ * delay set, wake up the logical replication worker to finish\n+ * it as soon as possible.\n+ */\n+ if (!opts.enabled && sub->applydelay > 0)\n+ logicalrep_worker_wakeup(sub->oid, InvalidOid);\n+\n```\n\nHow do you think?\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 12:26:52 +0000",
"msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "Hi Euler, a long time ago you ask me a few questions about my previous\nreview [1].\n\nHere are my replies, plus a few other review comments for patch v7-0001.\n\n======\n\n1. doc/src/sgml/catalogs.sgml\n\n+ <para>\n+ Application delay of changes by a specified amount of time. The\n+ unit is in milliseconds.\n+ </para></entry>\n\nThe wording sounds a bit strange still. How about below\n\nSUGGESTION\nThe length of time (ms) to delay the application of changes.\n\n=======\n\n2. Other documentation?\n\nMaybe should say something on the Logical Replication Subscription\npage about this? (31.2 Subscription)\n\n=======\n\n3. doc/src/sgml/ref/create_subscription.sgml\n\n+ synchronized, this may lead to apply changes earlier than expected.\n+ This is not a major issue because a typical setting of this parameter\n+ are much larger than typical time deviations between servers.\n\nWording?\n\nSUGGESTION\n... than expected, but this is not a major issue because this\nparameter is typically much larger than the time deviations between\nservers.\n\n~~~\n\n4. Q/A\n\n From [2] you asked:\n\n> Should there also be a big warning box about the impact if using\n> synchronous_commit (like the other streaming replication page has this\n> warning)?\nImpact? Could you elaborate?\n\n~\n\nI noticed the streaming replication docs for recovery_min_apply_delay\nhas a big red warning box saying that setting this GUC may block the\nsynchronous commits. So I was saying won’t a similar big red warning\nbe needed also for this min_apply_delay parameter if the delay is used\nin conjunction with a publisher wanting synchronous commit because it\nmight block everything?\n\n~~~\n\n4. 
Example\n\n+<programlisting>\n+CREATE SUBSCRIPTION foo\n+ CONNECTION 'host=192.168.1.50 port=5432 user=foo dbname=foodb'\n+ PUBLICATION baz\n+ WITH (copy_data = false, min_apply_delay = '4h');\n+</programlisting></para>\n\nIf the example named the subscription/publication as ‘mysub’ and\n‘mypub’ I think it would be more consistent with the existing\nexamples.\n\n======\n\n5. src/backend/commands/subscriptioncmds.c - SubOpts\n\n@@ -89,6 +91,7 @@ typedef struct SubOpts\n bool disableonerr;\n char *origin;\n XLogRecPtr lsn;\n+ int64 min_apply_delay;\n } SubOpts;\n\nI feel it would be better to be explicit about the storage units. So\ncall this member ‘min_apply_delay_ms’. E.g. then other code in\nparse_subscription_options will be more natural when you are\nconverting using and assigning them to this member.\n\n~~~\n\n6. - parse_subscription_options\n\n+ /*\n+ * If there is no unit, interval_in takes second as unit. This\n+ * parameter expects millisecond as unit so add a unit (ms) if\n+ * there isn't one.\n+ */\n\nThe comment feels awkward. How about below\n\nSUGGESTION\nIf no unit was specified, then explicitly add 'ms' otherwise the\ninterval_in function would assume 'seconds'\n\n~~~\n\n7. - parse_subscription_options\n\n(This is a repeat of [1] review comment #12)\n\n+ if (opts->min_apply_delay < 0 && IsSet(supported_opts,\nSUBOPT_MIN_APPLY_DELAY))\n+ ereport(ERROR,\n+ errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"option \\\"%s\\\" must not be negative\", \"min_apply_delay\"));\n\nWhy is this code here instead of inside the previous code block where\nthe min_apply_delay was assigned in the first place?\n\n======\n\n8. 
src/backend/replication/logical/worker.c - apply_delay\n\n+ * When min_apply_delay parameter is set on subscriber, we wait long enough to\n+ * make sure a transaction is applied at least that interval behind the\n+ * publisher.\n\n\"on subscriber\" -> \"on the subscription\"\n\n~~~\n\n9.\n\n+ * Apply delay only after all tablesync workers have reached READY state. A\n+ * tablesync worker are kept until it reaches READY state. If we allow the\n\n\nWording ??\n\n\"A tablesync worker are kept until it reaches READY state.\" ??\n\n~~~\n\n10.\n\n10a.\n+ /* nothing to do if no delay set */\n\nUppercase comment\n/* Nothing to do if no delay set */\n\n~\n\n10b.\n+ /* set apply delay */\n\nUppercase comment\n/* Set apply delay */\n\n\n~~~\n\n11. - apply_handle_stream_prepare / apply_handle_stream_commit\n\nThe previous concern about incompatibility with the \"Parallel Apply\"\nwork (see [1] review comments #17, #18) is still a pending issue,\nisn't it?\n\n======\n\n12. src/backend/utils/adt/timestamp.c interval_to_ms\n\n+/*\n+ * Given an Interval returns the number of milliseconds.\n+ */\n+int64\n+interval_to_ms(const Interval *interval)\n\nSUGGESTION\nReturns the number of milliseconds in the specified Interval.\n\n~~~\n\n13.\n\n\n+ /* adds portion time (in ms) to the previous result. */\n\nUppercase comment\n/* Adds portion time (in ms) to the previous result. *\n\n======\n\n14. src/bin/pg_dump/pg_dump.c - getSubscriptions\n\n+ {\n+ appendPQExpBufferStr(query, \" s.suborigin,\\n\");\n+ appendPQExpBufferStr(query, \" s.subapplydelay\\n\");\n+ }\n\nThis could be done using just a single appendPQExpBufferStr if you\nwant to have 1 call instead of 2.\n\n======\n\n15. src/bin/psql/describe.c - describeSubscriptions\n\n+ /* origin and min_apply_delay are only supported in v16 and higher */\n\nUppercase comment\n/* Origin and min_apply_delay are only supported in v16 and higher */\n\n======\n\n16. 
src/include/catalog/pg_subscription.h\n\n+ int64 subapplydelay; /* Replication apply delay */\n+\n\nConsider renaming this as 'subapplydelayms' to make the units perfectly clear.\n\n======\n\n17. src/test/regress/sql/subscription.sql\n\nIs [1] review comment 21 (There are some test cases for CREATE\nSUBSCRIPTION but there are no\ntest cases for ALTER SUBSCRIPTION changing this new parameter.) still\na pending item?\n\n\n------\n[1] My v4 review -\nhttps://www.postgresql.org/message-id/CAHut%2BPvugkna7avUQLydg602hymc8qMp%3DCRT2ZCTGbi8Bkfv%2BA%40mail.gmail.com\n[2] Euler's reply to my v4 review -\nhttps://www.postgresql.org/message-id/acfc1946-a73e-4e9d-86b3-b19cba225a41%40www.fastmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 5 Oct 2022 20:41:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "Dear Euler,\n\nDo you have enough time to handle the issue? Our discussion has been suspended for two months...\n\nIf you could not allocate a time to discuss this problem because of other important tasks or events,\nwe would like to take over the thread and modify your patch.\n\nWe've planned that we will start to address comments and reported bugs if you would not respond by the end of this week.\nI look forward to hearing from you.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 8 Nov 2022 05:27:06 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "At Wed, 10 Aug 2022 17:33:00 -0300, \"Euler Taveira\" <euler@eulerto.com> wrote in \n> On Wed, Aug 10, 2022, at 9:39 AM, osumi.takamichi@fujitsu.com wrote:\n> > Minor review comments for v6.\n> Thanks for your review. I'm attaching v7.\n\nUsing interval is not standard as this kind of parameters but it seems\nconvenient. On the other hand, it's not great that the unit month\nintroduces some subtle ambiguity. This patch translates a month to 30\ndays but I'm not sure it's the right thing to do. Perhaps we shouldn't\nallow the units upper than days.\n\napply_delay() chokes the message-receiving path so that a not-so-long\ndelay can cause a replication timeout to fire. I think we should\nprocess walsender pings even while delaying. Needing to make\nreplication timeout longer than apply delay is not great, I think.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 09 Nov 2022 15:41:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Thu, 11 Aug 2022 at 02:03, Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Wed, Aug 10, 2022, at 9:39 AM, osumi.takamichi@fujitsu.com wrote:\n>\n> Minor review comments for v6.\n>\n> Thanks for your review. I'm attaching v7.\n>\n> \"If the subscriber sets min_apply_delay parameter, ...\"\n>\n> I suggest we use subscription rather than subscriber, because\n> this parameter refers to and is used for one subscription.\n> My suggestion is\n> \"If one subscription sets min_apply_delay parameter, ...\"\n> In case if you agree, there are other places to apply this change.\n>\n> I changed the terminology to subscription. I also checked other \"subscriber\"\n> occurrences but I don't think it should be changed. Some of them are used as\n> publisher/subscriber pair. If you think there is another sentence to consider,\n> point it out.\n>\n> It might be better to write a note for committer\n> like \"Bump catalog version\" at the bottom of the commit message.\n>\n> It is a committer task to bump the catalog number. IMO it is easy to notice\n> (using a git hook?) that it must bump it when we are modifying the catalog.\n> AFAICS there is no recommendation to add such a warning.\n>\n> The former interprets input number as milliseconds in case of no units,\n> while the latter takes it as seconds without units.\n> I feel it would be better to make them aligned.\n>\n> In a previous version I decided not to add a code to attach a unit when there\n> isn't one. Instead, I changed the documentation to reflect what interval_in\n> uses (seconds as unit). Under reflection, let's use ms as default unit if the\n> user doesn't specify one.\n>\n> I fixed all the other suggestions too.\n\nFew comments:\n1) I feel if the user has specified a long delay there is a chance\nthat replication may not continue if the replication slot falls behind\nthe current LSN by more than max_slot_wal_keep_size. 
I feel we should\nadd this reference in the documentation of min_apply_delay as the\nreplication will not continue in this case.\n\n2) I also noticed that if we have to shut down the publisher server\nwith a long min_apply_delay configuration, the publisher server cannot\nbe stopped as the walsender waits for the data to be replicated. Is\nthis behavior ok for the server to wait in this case? If this behavior\nis ok, we could add a log message as it is not very evident from the\nlog files why the server could not be shut down.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 12 Nov 2022 19:21:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Tuesday, November 8, 2022 2:27 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> If you could not allocate a time to discuss this problem because of other\n> important tasks or events, we would like to take over the thread and modify\n> your patch.\n> \n> We've planned that we will start to address comments and reported bugs if\n> you would not respond by the end of this week.\nHi,\n\n\nI've simply rebased the patch to make it applicable on top of HEAD\nand make the tests pass. Note there are still open pending comments\nand I'm going to start to address those.\n\nI've written Euler as the original author in the commit message\nto note his credit.\n\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Mon, 14 Nov 2022 01:08:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "Hi,\n\nThe thread title doesn't really convey the topic under discussion, so\nchanged it. IIRC, this has been mentioned by others as well in the\nthread.\n\nOn Sat, Nov 12, 2022 at 7:21 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Few comments:\n> 1) I feel if the user has specified a long delay there is a chance\n> that replication may not continue if the replication slot falls behind\n> the current LSN by more than max_slot_wal_keep_size. I feel we should\n> add this reference in the documentation of min_apply_delay as the\n> replication will not continue in this case.\n>\n\nThis makes sense to me.\n\n> 2) I also noticed that if we have to shut down the publisher server\n> with a long min_apply_delay configuration, the publisher server cannot\n> be stopped as the walsender waits for the data to be replicated. Is\n> this behavior ok for the server to wait in this case? If this behavior\n> is ok, we could add a log message as it is not very evident from the\n> log files why the server could not be shut down.\n>\n\nI think for this case, the behavior should be the same as for physical\nreplication. Can you please check what is behavior for the case you\nare worried about in physical replication? Note, we already have a\nsimilar parameter for recovery_min_apply_delay for physical\nreplication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 12:14:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 12:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Nov 12, 2022 at 7:21 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Few comments:\n> > 1) I feel if the user has specified a long delay there is a chance\n> > that replication may not continue if the replication slot falls behind\n> > the current LSN by more than max_slot_wal_keep_size. I feel we should\n> > add this reference in the documentation of min_apply_delay as the\n> > replication will not continue in this case.\n> >\n>\n> This makes sense to me.\n>\n> > 2) I also noticed that if we have to shut down the publisher server\n> > with a long min_apply_delay configuration, the publisher server cannot\n> > be stopped as the walsender waits for the data to be replicated. Is\n> > this behavior ok for the server to wait in this case? If this behavior\n> > is ok, we could add a log message as it is not very evident from the\n> > log files why the server could not be shut down.\n> >\n>\n> I think for this case, the behavior should be the same as for physical\n> replication. Can you please check what is behavior for the case you\n> are worried about in physical replication? Note, we already have a\n> similar parameter for recovery_min_apply_delay for physical\n> replication.\n>\n\nI don't understand the reason for the below change in the patch:\n\n+ /*\n+ * If this subscription has been disabled and it has an apply\n+ * delay set, wake up the logical replication worker to finish\n+ * it as soon as possible.\n+ */\n+ if (!opts.enabled && sub->applydelay > 0)\n+ logicalrep_worker_wakeup(sub->oid, InvalidOid);\n+\n\nIt seems to me Kuroda-San has proposed this change [1] to fix the test\nbut it is not clear to me why such a change is required. 
Why can't\nCHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\ncode [2] in LogicalRepApplyLoop() sufficient to handle parameter\nupdates?\n\n[2]\nif (!in_remote_transaction && !in_streamed_transaction)\n{\n/*\n* If we didn't get any transactions for a while there might be\n* unconsumed invalidation messages in the queue, consume them\n* now.\n*/\nAcceptInvalidationMessages();\nmaybe_reread_subscription();\n...\n\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB5866F9716A18DA0C68A2CDB3F5469%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 12:33:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> I don't understand the reason for the below change in the patch:\r\n> \r\n> + /*\r\n> + * If this subscription has been disabled and it has an apply\r\n> + * delay set, wake up the logical replication worker to finish\r\n> + * it as soon as possible.\r\n> + */\r\n> + if (!opts.enabled && sub->applydelay > 0)\r\n> + logicalrep_worker_wakeup(sub->oid, InvalidOid);\r\n> +\r\n> \r\n> It seems to me Kuroda-San has proposed this change [1] to fix the test\r\n> but it is not clear to me why such a change is required. Why can't\r\n> CHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\r\n> code [2] in LogicalRepApplyLoop() sufficient to handle parameter\r\n> updates?\r\n> \r\n> [2]\r\n> if (!in_remote_transaction && !in_streamed_transaction)\r\n> {\r\n> /*\r\n> * If we didn't get any transactions for a while there might be\r\n> * unconsumed invalidation messages in the queue, consume them\r\n> * now.\r\n> */\r\n> AcceptInvalidationMessages();\r\n> maybe_reread_subscription();\r\n> ...\r\n\r\nI mentioned the case with a long min_apply_delay configuration. \r\n\r\nThe worker will exit normally if apply_delay() has been ended and then it can reach\r\nLogicalRepApplyLoop(). It works well if the delay is short and workers can wake up\r\nimmediately. But if workers have long min_apply_delay, they cannot go out the\r\nwhile-loop, so worker processes remain for a long time. According to test code,\r\nit is determined that worker should die immediately and we have a\r\ntest-case that we try to kill the worker with min_apply_delay = 1 day.\r\n\r\nAlso note that the launcher process will not set a latch or send a SIGTERM even\r\nif the subscription is altered to enabled=f. In the launcher main loop, the\r\nlauncher reads pg_subscription periodically but they do not consider about changes\r\nof parameters. 
They just skip doing anything if they find disabled subscriptions.\r\n\r\nIf this situation can be ignored, we may be able to remove these lines.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 14 Nov 2022 08:58:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 12:11 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 10 Aug 2022 17:33:00 -0300, \"Euler Taveira\" <euler@eulerto.com> wrote in\n> > On Wed, Aug 10, 2022, at 9:39 AM, osumi.takamichi@fujitsu.com wrote:\n> > > Minor review comments for v6.\n> > Thanks for your review. I'm attaching v7.\n>\n> Using interval is not standard as this kind of parameters but it seems\n> convenient. On the other hand, it's not great that the unit month\n> introduces some subtle ambiguity. This patch translates a month to 30\n> days but I'm not sure it's the right thing to do. Perhaps we shouldn't\n> allow the units upper than days.\n>\n\nAgreed. Isn't the same thing already apply to recovery_min_apply_delay\nfor which the maximum unit seems to be in days? If so, there is no\nreason to do something different here?\n\n> apply_delay() chokes the message-receiving path so that a not-so-long\n> delay can cause a replication timeout to fire. I think we should\n> process walsender pings even while delaying. Needing to make\n> replication timeout longer than apply delay is not great, I think.\n>\n\nAgain, I think for this case also the behavior should be similar to\nhow we handle recovery_min_apply_delay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 15:45:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 2:28 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > I don't understand the reason for the below change in the patch:\n> >\n> > + /*\n> > + * If this subscription has been disabled and it has an apply\n> > + * delay set, wake up the logical replication worker to finish\n> > + * it as soon as possible.\n> > + */\n> > + if (!opts.enabled && sub->applydelay > 0)\n> > + logicalrep_worker_wakeup(sub->oid, InvalidOid);\n> > +\n> >\n> > It seems to me Kuroda-San has proposed this change [1] to fix the test\n> > but it is not clear to me why such a change is required. Why can't\n> > CHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\n> > code [2] in LogicalRepApplyLoop() sufficient to handle parameter\n> > updates?\n> >\n> > [2]\n> > if (!in_remote_transaction && !in_streamed_transaction)\n> > {\n> > /*\n> > * If we didn't get any transactions for a while there might be\n> > * unconsumed invalidation messages in the queue, consume them\n> > * now.\n> > */\n> > AcceptInvalidationMessages();\n> > maybe_reread_subscription();\n> > ...\n>\n> I mentioned the case with a long min_apply_delay configuration.\n>\n> The worker will exit normally if apply_delay() has been ended and then it can reach\n> LogicalRepApplyLoop(). It works well if the delay is short and workers can wake up\n> immediately. But if workers have long min_apply_delay, they cannot go out the\n> while-loop, so worker processes remain for a long time. According to test code,\n> it is determined that worker should die immediately and we have a\n> test-case that we try to kill the worker with min_apply_delay = 1 day.\n>\n\nSo, why only honor the 'disable' option of the subscription? For\nexample, one can change 'min_apply_delay' and it seems\nrecoveryApplyDelay() honors a similar change in the recovery\nparameter. 
Is there a way to set the latch of the worker process, so\nthat it can recheck if anything is changed?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Nov 2022 16:03:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > > It seems to me Kuroda-San has proposed this change [1] to fix the test\r\n> > > but it is not clear to me why such a change is required. Why can't\r\n> > > CHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\r\n> > > code [2] in LogicalRepApplyLoop() sufficient to handle parameter\r\n> > > updates?\r\n\r\n(I forgot to say, this change was not proposed by me. I said that there should be\r\nmodified. I thought workers should wake up after the transaction was committed.)\r\n\r\n> So, why only honor the 'disable' option of the subscription? For\r\n> example, one can change 'min_apply_delay' and it seems\r\n> recoveryApplyDelay() honors a similar change in the recovery\r\n> parameter. Is there a way to set the latch of the worker process, so\r\n> that it can recheck if anything is changed?\r\n\r\nI have not considered about it, but seems reasonable. We may be able to\r\ndo maybe_reread_subscription() if subscription parameters are changed\r\nand latch is set.\r\n\r\nCurrently, IIUC we try to disable subscription regardless of the state, but\r\nshould we avoid to reread catalog if workers are handling the transactions,\r\nlike LogicalRepApplyLoop()?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 14 Nov 2022 13:22:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 6:52 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > > > It seems to me Kuroda-San has proposed this change [1] to fix the test\n> > > > but it is not clear to me why such a change is required. Why can't\n> > > > CHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\n> > > > code [2] in LogicalRepApplyLoop() sufficient to handle parameter\n> > > > updates?\n>\n> (I forgot to say, this change was not proposed by me. I said that there should be\n> modified. I thought workers should wake up after the transaction was committed.)\n>\n> > So, why only honor the 'disable' option of the subscription? For\n> > example, one can change 'min_apply_delay' and it seems\n> > recoveryApplyDelay() honors a similar change in the recovery\n> > parameter. Is there a way to set the latch of the worker process, so\n> > that it can recheck if anything is changed?\n>\n> I have not considered about it, but seems reasonable. We may be able to\n> do maybe_reread_subscription() if subscription parameters are changed\n> and latch is set.\n>\n\nOne more thing I would like you to consider is the point raised by me\nrelated to this patch's interaction with the parallel apply feature as\nmentioned in the email [1]. I am not sure the idea proposed in that\nemail [1] is a good one because delaying after applying commit may not\nbe good as we want to delay the apply of the transaction(s) on\nsubscribers by this feature. I feel this needs more thought.\n\n> Currently, IIUC we try to disable subscription regardless of the state, but\n> should we avoid to reread catalog if workers are handling the transactions,\n> like LogicalRepApplyLoop()?\n>\n\nIIUC, here you are referring to reading catalogs again via the\nfunction maybe_reread_subscription(), right? If so, I think the idea\nis to not invoke it frequently to avoid increasing transaction apply\ntime. 
However, when you are going to wait for a delay anyway, it may\nnot matter. I feel it would be better to add some comments saying that\nwe don't want workers to wait for a long time if users have disabled\nthe subscription or reduced the apply_delay time.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JRs0v9Z65HWKEZg3quWx4LiQ%3DpddTJZ_P1koXsbR3TMA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 15 Nov 2022 12:33:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "2022年11月14日(月) 10:09 Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com>:\n>\n> On Tuesday, November 8, 2022 2:27 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> > If you could not allocate a time to discuss this problem because of other\n> > important tasks or events, we would like to take over the thread and modify\n> > your patch.\n> >\n> > We've planned that we will start to address comments and reported bugs if\n> > you would not respond by the end of this week.\n> Hi,\n>\n>\n> I've simply rebased the patch to make it applicable on top of HEAD\n> and make the tests pass. Note there are still open pending comments\n> and I'm going to start to address those.\n>\n> I've written Euler as the original author in the commit message\n> to note his credit.\n\nHi\n\nThanks for the updated patch.\n\nWhile reviewing the patch backlog, we have determined that this patch adds\none or more TAP tests but has not added the test to the \"meson.build\" file.\n\nTo do this, locate the relevant \"meson.build\" file for each test and add it\nin the 'tests' dictionary, which will look something like this:\n\n 'tap': {\n 'tests': [\n 't/001_basic.pl',\n ],\n },\n\nFor some additional details please see this Wiki article:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nFor more information on the meson build system for PostgreSQL see:\n\n https://wiki.postgresql.org/wiki/Meson\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 12:57:44 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical replication restrictions"
},
{
"msg_contents": "On Mon, 14 Nov 2022 at 12:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Hi,\n>\n> The thread title doesn't really convey the topic under discussion, so\n> changed it. IIRC, this has been mentioned by others as well in the\n> thread.\n>\n> On Sat, Nov 12, 2022 at 7:21 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Few comments:\n> > 1) I feel if the user has specified a long delay there is a chance\n> > that replication may not continue if the replication slot falls behind\n> > the current LSN by more than max_slot_wal_keep_size. I feel we should\n> > add this reference in the documentation of min_apply_delay as the\n> > replication will not continue in this case.\n> >\n>\n> This makes sense to me.\n>\n> > 2) I also noticed that if we have to shut down the publisher server\n> > with a long min_apply_delay configuration, the publisher server cannot\n> > be stopped as the walsender waits for the data to be replicated. Is\n> > this behavior ok for the server to wait in this case? If this behavior\n> > is ok, we could add a log message as it is not very evident from the\n> > log files why the server could not be shut down.\n> >\n>\n> I think for this case, the behavior should be the same as for physical\n> replication. Can you please check what is behavior for the case you\n> are worried about in physical replication? Note, we already have a\n> similar parameter for recovery_min_apply_delay for physical\n> replication.\n\nIn the case of physical replication by setting\nrecovery_min_apply_delay, I noticed that both primary and standby\nnodes were getting stopped successfully immediately after the stop\nserver command. 
In case of logical replication, stop server fails:\npg_ctl -D publisher -l publisher.log stop -c\nwaiting for server to shut\ndown...............................................................\nfailed\npg_ctl: server does not shut down\n\nIn case of logical replication, the server does not get stopped\nbecause the walsender process is not able to exit:\nps ux | grep walsender\nvignesh 1950789 75.3 0.0 8695216 22284 ? Rs 11:51 1:08\npostgres: walsender vignesh [local] START_REPLICATION\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 22 Nov 2022 14:45:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, October 5, 2022 6:42 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi Euler, a long time ago you ask me a few questions about my previous review\r\n> [1].\r\n> \r\n> Here are my replies, plus a few other review comments for patch v7-0001.\r\nHi, thank you for your comments.\r\n\r\n\r\n> ======\r\n> \r\n> 1. doc/src/sgml/catalogs.sgml\r\n> \r\n> + <para>\r\n> + Application delay of changes by a specified amount of time. The\r\n> + unit is in milliseconds.\r\n> + </para></entry>\r\n> \r\n> The wording sounds a bit strange still. How about below\r\n> \r\n> SUGGESTION\r\n> The length of time (ms) to delay the application of changes.\r\nFixed.\r\n\r\n\r\n> =======\r\n> \r\n> 2. Other documentation?\r\n> \r\n> Maybe should say something on the Logical Replication Subscription page\r\n> about this? (31.2 Subscription)\r\nAdded.\r\n\r\n \r\n> =======\r\n> \r\n> 3. doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> + synchronized, this may lead to apply changes earlier than expected.\r\n> + This is not a major issue because a typical setting of this parameter\r\n> + are much larger than typical time deviations between servers.\r\n> \r\n> Wording?\r\n> \r\n> SUGGESTION\r\n> ... than expected, but this is not a major issue because this parameter is\r\n> typically much larger than the time deviations between servers.\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. Q/A\r\n> \r\n> From [2] you asked:\r\n> \r\n> > Should there also be a big warning box about the impact if using\r\n> > synchronous_commit (like the other streaming replication page has this\r\n> > warning)?\r\n> Impact? Could you elaborate?\r\n> \r\n> ~\r\n> \r\n> I noticed the streaming replication docs for recovery_min_apply_delay has a big\r\n> red warning box saying that setting this GUC may block the synchronous\r\n> commits. 
So I was saying won’t a similar big red warning be needed also for\r\n> this min_apply_delay parameter if the delay is used in conjunction with a\r\n> publisher wanting synchronous commit because it might block everything?\r\nI agree with you. Fixed.\r\n\r\n\r\n \r\n> ~~~\r\n> \r\n> 4. Example\r\n> \r\n> +<programlisting>\r\n> +CREATE SUBSCRIPTION foo\r\n> + CONNECTION 'host=192.168.1.50 port=5432 user=foo\r\n> dbname=foodb'\r\n> + PUBLICATION baz\r\n> + WITH (copy_data = false, min_apply_delay = '4h');\r\n> +</programlisting></para>\r\n> \r\n> If the example named the subscription/publication as ‘mysub’ and ‘mypub’ I\r\n> think it would be more consistent with the existing examples.\r\nFixed.\r\n\r\n\r\n \r\n> ======\r\n> \r\n> 5. src/backend/commands/subscriptioncmds.c - SubOpts\r\n> \r\n> @@ -89,6 +91,7 @@ typedef struct SubOpts\r\n> bool disableonerr;\r\n> char *origin;\r\n> XLogRecPtr lsn;\r\n> + int64 min_apply_delay;\r\n> } SubOpts;\r\n> \r\n> I feel it would be better to be explicit about the storage units. So call this\r\n> member ‘min_apply_delay_ms’. E.g. then other code in\r\n> parse_subscription_options will be more natural when you are converting using\r\n> and assigning them to this member.\r\nI don't think we use such names including units explicitly.\r\nCould you please tell me a similar example for this ?\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 6. - parse_subscription_options\r\n> \r\n> + /*\r\n> + * If there is no unit, interval_in takes second as unit. This\r\n> + * parameter expects millisecond as unit so add a unit (ms) if\r\n> + * there isn't one.\r\n> + */\r\n> \r\n> The comment feels awkward. How about below\r\n> \r\n> SUGGESTION\r\n> If no unit was specified, then explicitly add 'ms' otherwise the interval_in\r\n> function would assume 'seconds'\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 7. 
- parse_subscription_options\r\n> \r\n> (This is a repeat of [1] review comment #12)\r\n> \r\n> + if (opts->min_apply_delay < 0 && IsSet(supported_opts,\r\n> SUBOPT_MIN_APPLY_DELAY))\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\r\n> + errmsg(\"option \\\"%s\\\" must not be negative\", \"min_apply_delay\"));\r\n> \r\n> Why is this code here instead of inside the previous code block where the\r\n> min_apply_delay was assigned in the first place?\r\nChanged.\r\n\r\n\r\n> ======\r\n> \r\n> 8. src/backend/replication/logical/worker.c - apply_delay\r\n> \r\n> + * When min_apply_delay parameter is set on subscriber, we wait long\r\n> + enough to\r\n> + * make sure a transaction is applied at least that interval behind the\r\n> + * publisher.\r\n> \r\n> \"on subscriber\" -> \"on the subscription\"\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 9.\r\n> \r\n> + * Apply delay only after all tablesync workers have reached READY\r\n> + state. A\r\n> + * tablesync worker are kept until it reaches READY state. If we allow\r\n> + the\r\n> \r\n> \r\n> Wording ??\r\n> \r\n> \"A tablesync worker are kept until it reaches READY state.\" ??\r\nI removed the sentence.\r\n\r\n\r\n> ~~~\r\n> \r\n> 10.\r\n> \r\n> 10a.\r\n> + /* nothing to do if no delay set */\r\n> \r\n> Uppercase comment\r\n> /* Nothing to do if no delay set */\r\n> \r\n> ~\r\n> \r\n> 10b.\r\n> + /* set apply delay */\r\n> \r\n> Uppercase comment\r\n> /* Set apply delay */\r\nBoth are fixed.\r\n\r\n\r\n \r\n> ~~~\r\n> \r\n> 11. - apply_handle_stream_prepare / apply_handle_stream_commit\r\n> \r\n> The previous concern about incompatibility with the \"Parallel Apply\"\r\n> work (see [1] review comments #17, #18) is still a pending issue, isn't it?\r\nYes, I think so.\r\nKindly have a look at [1].\r\n\r\n\r\n> ======\r\n> \r\n> 12. 
src/backend/utils/adt/timestamp.c interval_to_ms\r\n> \r\n> +/*\r\n> + * Given an Interval returns the number of milliseconds.\r\n> + */\r\n> +int64\r\n> +interval_to_ms(const Interval *interval)\r\n> \r\n> SUGGESTION\r\n> Returns the number of milliseconds in the specified Interval.\r\nFixed.\r\n\r\n\r\n\r\n> ~~~\r\n> \r\n> 13.\r\n> \r\n> \r\n> + /* adds portion time (in ms) to the previous result. */\r\n> \r\n> Uppercase comment\r\n> /* Adds portion time (in ms) to the previous result. *\r\nFixed.\r\n\r\n\r\n> ======\r\n> \r\n> 14. src/bin/pg_dump/pg_dump.c - getSubscriptions\r\n> \r\n> + {\r\n> + appendPQExpBufferStr(query, \" s.suborigin,\\n\");\r\n> + appendPQExpBufferStr(query, \" s.subapplydelay\\n\"); }\r\n> \r\n> This could be done using just a single appendPQExpBufferStr if you want to\r\n> have 1 call instead of 2.\r\nMade them together.\r\n\r\n\r\n> ======\r\n> \r\n> 15. src/bin/psql/describe.c - describeSubscriptions\r\n> \r\n> + /* origin and min_apply_delay are only supported in v16 and higher */\r\n> \r\n> Uppercase comment\r\n> /* Origin and min_apply_delay are only supported in v16 and higher */\r\nFixed.\r\n\r\n\r\n> ======\r\n> \r\n> 16. src/include/catalog/pg_subscription.h\r\n> \r\n> + int64 subapplydelay; /* Replication apply delay */\r\n> +\r\n> \r\n> Consider renaming this as 'subapplydelayms' to make the units perfectly clear.\r\nSimilar to the 5th comments, I can't find any examples for this.\r\nI'd like to keep it general, which makes me feel it is more aligned with\r\nexisting codes.\r\n\r\n\r\n> ======\r\n> \r\n> 17. src/test/regress/sql/subscription.sql\r\n> \r\n> Is [1] review comment 21 (There are some test cases for CREATE\r\n> SUBSCRIPTION but there are no test cases for ALTER SUBSCRIPTION\r\n> changing this new parameter.) 
still a pending item?\r\nAdded one test case for alter subscription.\r\n\r\n\r\nAlso, I removed the call to logicalrep_worker_wakeup()\r\nthat was triggered by AlterSubscription only when disabling the subscription.\r\nThis is now achieved in a more general manner by another patch proposed in [2].\r\n\r\nThere are still some pending comments for this patch,\r\nbut I'll share the current patch for now.\r\n\r\nLastly, thank you so much, Kuroda-san, for giving me so much advice and\r\nso many suggestions for improving this patch.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1JJFpgqE0ehAb7C9YFkJ-Xe-W1ZUPZptEfYjNJM4G-sLA%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/20221122004119.GA132961%40nathanxps13\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 24 Nov 2022 15:15:25 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, November 16, 2022 12:58 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\r\n> 2022年11月14日(月) 10:09 Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com>:\r\n> > I've simply rebased the patch to make it applicable on top of HEAD and\r\n> > make the tests pass. Note there are still open pending comments and\r\n> > I'm going to start to address those.\r\n> Thanks for the updated patch.\r\n> \r\n> While reviewing the patch backlog, we have determined that this patch adds\r\n> one or more TAP tests but has not added the test to the \"meson.build\" file.\r\n> \r\n> To do this, locate the relevant \"meson.build\" file for each test and add it in the\r\n> 'tests' dictionary, which will look something like this:\r\n> \r\n> 'tap': {\r\n> 'tests': [\r\n> 't/001_basic.pl',\r\n> ],\r\n> },\r\n> \r\n> For some additional details please see this Wiki article:\r\n> \r\n> https://wiki.postgresql.org/wiki/Meson_for_patch_authors\r\n> \r\n> For more information on the meson build system for PostgreSQL see:\r\n> \r\n> https://wiki.postgresql.org/wiki/Meson\r\nHi, thanks for your notification.\r\n\r\n\r\nYou are right. Modified.\r\nThe updated patch can be seen in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373775ECC6972289AF8CB30ED0F9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 24 Nov 2022 15:18:34 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Tuesday, November 22, 2022 6:15 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Mon, 14 Nov 2022 at 12:14, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > The thread title doesn't really convey the topic under discussion, so\r\n> > changed it. IIRC, this has been mentioned by others as well in the\r\n> > thread.\r\n> >\r\n> > On Sat, Nov 12, 2022 at 7:21 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> > >\r\n> > > Few comments:\r\n> > > 1) I feel if the user has specified a long delay there is a chance\r\n> > > that replication may not continue if the replication slot falls\r\n> > > behind the current LSN by more than max_slot_wal_keep_size. I feel\r\n> > > we should add this reference in the documentation of min_apply_delay\r\n> > > as the replication will not continue in this case.\r\n> > >\r\n> >\r\n> > This makes sense to me.\r\nModified accordingly. The updated patch is in [1].\r\n\r\n\r\n> >\r\n> > > 2) I also noticed that if we have to shut down the publisher server\r\n> > > with a long min_apply_delay configuration, the publisher server\r\n> > > cannot be stopped as the walsender waits for the data to be\r\n> > > replicated. Is this behavior ok for the server to wait in this case?\r\n> > > If this behavior is ok, we could add a log message as it is not very\r\n> > > evident from the log files why the server could not be shut down.\r\n> > >\r\n> >\r\n> > I think for this case, the behavior should be the same as for physical\r\n> > replication. Can you please check what is behavior for the case you\r\n> > are worried about in physical replication? Note, we already have a\r\n> > similar parameter for recovery_min_apply_delay for physical\r\n> > replication.\r\n> \r\n> In the case of physical replication by setting recovery_min_apply_delay, I\r\n> noticed that both primary and standby nodes were getting stopped successfully\r\n> immediately after the stop server command. 
In case of logical replication, stop\r\n> server fails:\r\n> pg_ctl -D publisher -l publisher.log stop -c waiting for server to shut\r\n> down...............................................................\r\n> failed\r\n> pg_ctl: server does not shut down\r\n> \r\n> In case of logical replication, the server does not get stopped because the\r\n> walsender process is not able to exit:\r\n> ps ux | grep walsender\r\n> vignesh 1950789 75.3 0.0 8695216 22284 ? Rs 11:51 1:08\r\n> postgres: walsender vignesh [local] START_REPLICATION\r\nThanks, I could reproduce this and I'll update this point in a subsequent version.\r\n\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373775ECC6972289AF8CB30ED0F9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 24 Nov 2022 15:21:58 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Monday, November 14, 2022 7:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Nov 9, 2022 at 12:11 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Wed, 10 Aug 2022 17:33:00 -0300, \"Euler Taveira\"\r\n> > <euler@eulerto.com> wrote in\r\n> > > On Wed, Aug 10, 2022, at 9:39 AM, osumi.takamichi@fujitsu.com wrote:\r\n> > > > Minor review comments for v6.\r\n> > > Thanks for your review. I'm attaching v7.\r\n> >\r\n> > Using interval is not standard as this kind of parameters but it seems\r\n> > convenient. On the other hand, it's not great that the unit month\r\n> > introduces some subtle ambiguity. This patch translates a month to 30\r\n> > days but I'm not sure it's the right thing to do. Perhaps we shouldn't\r\n> > allow the units upper than days.\r\n> >\r\n> \r\n> Agreed. Isn't the same thing already apply to recovery_min_apply_delay for\r\n> which the maximum unit seems to be in days? If so, there is no reason to do\r\n> something different here?\r\nThe corresponding parameter for physical replication has an upper limit of\r\nINT_MAX milliseconds (meaning a delay of 24 days is OK, but 25 days isn't).\r\nI added a test for this in the patch posted in [1].\r\n\r\n\r\n> \r\n> > apply_delay() chokes the message-receiving path so that a not-so-long\r\n> > delay can cause a replication timeout to fire. I think we should\r\n> > process walsender pings even while delaying. 
Needing to make\r\n> > replication timeout longer than apply delay is not great, I think.\r\n> >\r\n> \r\n> Again, I think for this case also the behavior should be similar to how we handle\r\n> recovery_min_apply_delay.\r\nYes, I agree with you.\r\nThis feature makes it easier to trigger the publisher's timeout,\r\nwhich can't be observed in the physical replication.\r\nI'll do the investigation and modify this point in a subsequent version.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373775ECC6972289AF8CB30ED0F9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 24 Nov 2022 15:31:48 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
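For illustration, the 24-vs-25-day boundary mentioned above follows directly from storing the delay as a signed 32-bit millisecond count. This is a minimal sketch of that bound; `days_to_ms` and `delay_in_range` are hypothetical helper names, not functions from the patch or from PostgreSQL:

```c
#include <limits.h>
#include <stdint.h>

/* Hypothetical helper: the number of milliseconds in a given number of
 * days (an interval-to-ms conversion boils down to this for day input). */
int64_t
days_to_ms(int64_t days)
{
    return days * 24 * 60 * 60 * INT64_C(1000);
}

/* The range check under discussion: the delay, expressed in milliseconds,
 * must fit in a signed 32-bit int, so anything above INT_MAX ms (about
 * 24.8 days) is rejected. */
int
delay_in_range(int64_t ms)
{
    return ms >= 0 && ms <= INT_MAX;
}
```

With these, a 24-day delay (2,073,600,000 ms) passes the check while a 25-day delay (2,160,000,000 ms) exceeds INT_MAX and is rejected, matching the boundary described in the message above.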
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Thursday, August 11, 2022 7:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Aug 9, 2022 at 3:52 AM Euler Taveira <euler@eulerto.com> wrote:\r\n> >\r\n> > On Wed, Aug 3, 2022, at 10:27 AM, Amit Kapila wrote:\r\n> >\r\n> > Your explanation makes sense to me. The other point to consider is\r\n> > that there can be cases where we may not apply operation for the\r\n> > transaction because of empty transactions (we don't yet skip empty\r\n> > xacts for prepared transactions). So, won't it be better to apply the\r\n> > delay just before we apply the first change for a transaction? Do we\r\n> > want to apply the delay during table sync as we sometimes do need to\r\n> > enter apply phase while doing table sync?\r\n> >\r\n> > I thought about the empty transactions but decided to not complicate\r\n> > the code mainly because skipping transactions is not a code path that\r\n> > will slow down this feature. As explained in the documentation, there\r\n> > is no harm in delaying a transaction for more than min_apply_delay; it\r\n> > cannot apply earlier. Having said that I decided to do nothing. I'm\r\n> > also not sure if it deserves a comment or if this email is a possible explanation\r\n> for this decision.\r\n> >\r\n> \r\n> I don't know what makes you think it will complicate the code. But anyway\r\n> thinking further about the way apply_delay is used at various places in the patch,\r\n> as pointed out by Peter Smith it seems it won't work for the parallel apply\r\n> feature where we start applying the transaction immediately after start stream.\r\n> I was wondering why don't we apply delay after each commit of the transaction\r\n> rather than at the begin command. We can remember if the transaction has\r\n> made any change and if so then after commit, apply the delay. If we can do that\r\n> then it will alleviate the concern of empty and skipped xacts as well.\r\nI agree with this direction. 
I'll update this point in a subsequent patch.\r\n\r\n\r\n> Another thing I was wondering how to determine what is a good delay time for\r\n> tests and found that current tests in replay_delay.pl uses 3s, so should we use\r\n> the same for apply delay tests in this patch as well?\r\nFixed in the patch posted in [1].\r\n\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373775ECC6972289AF8CB30ED0F9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 24 Nov 2022 15:39:57 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical replication restrictions"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 2:15 AM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, October 5, 2022 6:42 PM Peter Smith <smithpb2250@gmail.com> wrote:\n...\n>\n> > ======\n> >\n> > 5. src/backend/commands/subscriptioncmds.c - SubOpts\n> >\n> > @@ -89,6 +91,7 @@ typedef struct SubOpts\n> > bool disableonerr;\n> > char *origin;\n> > XLogRecPtr lsn;\n> > + int64 min_apply_delay;\n> > } SubOpts;\n> >\n> > I feel it would be better to be explicit about the storage units. So call this\n> > member ‘min_apply_delay_ms’. E.g. then other code in\n> > parse_subscription_options will be more natural when you are converting using\n> > and assigning them to this member.\n> I don't think we use such names including units explicitly.\n> Could you please tell me a similar example for this ?\n>\n\nRegex search \"\\..*_ms[e\\s]\" finds some members where the unit is in\nthe member name.\n\ne.g. delay_ms (see EnableTimeoutParams in timeout.h)\ne.g. interval_in_ms (see timeout_params in timeout.c)\n\nRegex search \".*_ms[e\\s]\" finds many local variables where the unit is\nin the variable name.\n\n> > ======\n> >\n> > 16. src/include/catalog/pg_subscription.h\n> >\n> > + int64 subapplydelay; /* Replication apply delay */\n> > +\n> >\n> > Consider renaming this as 'subapplydelayms' to make the units perfectly clear.\n> Similar to the 5th comments, I can't find any examples for this.\n> I'd like to keep it general, which makes me feel it is more aligned with\n> existing codes.\n>\n\nAs above.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 25 Nov 2022 07:42:50 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 12:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 14, 2022 at 6:52 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear Amit,\n> >\n> > > > > It seems to me Kuroda-San has proposed this change [1] to fix the test\n> > > > > but it is not clear to me why such a change is required. Why can't\n> > > > > CHECK_FOR_INTERRUPTS() after waiting, followed by the existing below\n> > > > > code [2] in LogicalRepApplyLoop() sufficient to handle parameter\n> > > > > updates?\n> >\n> > (I forgot to say, this change was not proposed by me. I said that there should be\n> > modified. I thought workers should wake up after the transaction was committed.)\n> >\n> > > So, why only honor the 'disable' option of the subscription? For\n> > > example, one can change 'min_apply_delay' and it seems\n> > > recoveryApplyDelay() honors a similar change in the recovery\n> > > parameter. Is there a way to set the latch of the worker process, so\n> > > that it can recheck if anything is changed?\n> >\n> > I have not considered about it, but seems reasonable. We may be able to\n> > do maybe_reread_subscription() if subscription parameters are changed\n> > and latch is set.\n> >\n>\n> One more thing I would like you to consider is the point raised by me\n> related to this patch's interaction with the parallel apply feature as\n> mentioned in the email [1]. I am not sure the idea proposed in that\n> email [1] is a good one because delaying after applying commit may not\n> be good as we want to delay the apply of the transaction(s) on\n> subscribers by this feature. I feel this needs more thought.\n>\n\nI have thought a bit more about this and we have the following options\nto choose the delay point from. (a) apply delay just before committing\na transaction. As mentioned in comments in the patch this can lead to\nbloat and locks held for a long time. 
(b) apply delay before starting\nto apply changes for a transaction but here the problem is which time\nto consider. In some cases, like for streaming transactions, we don't\nreceive the commit/prepare xact time in the start message. (c) use (b)\nbut use the previous transaction's commit time. (d) apply delay after\ncommitting a transaction by using the xact's commit time.\n\nAt this stage, among the above, I feel any one of (c) or (d) is worth\nconsidering. Now, the difference between (c) and (d) is that if after\ncommit the next xact's data is already delayed by more than\nmin_apply_delay time then we don't need to kick in the new apply delay\nlogic.\n\nThe other thing to consider is whether we need to process any keepalive\nmessages during the delay because otherwise, the walsender may think that\nthe subscriber is not available and time out. This may not be a\nproblem for synchronous replication but otherwise, it could be a\nproblem.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 2 Dec 2022 12:35:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
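As a rough sketch of options (c)/(d) above, the wait the apply worker still owes can be computed from the sender-side commit time: if the transaction's data is already delayed by at least min_apply_delay, no extra delay is applied. Plain int64 millisecond timestamps stand in for TimestampTz here, and `remaining_delay_ms` is a hypothetical helper, not code from the patch:

```c
#include <stdint.h>

/* Given the commit time of the transaction (previous or current,
 * depending on whether option (c) or (d) is chosen) and the configured
 * min_apply_delay, return how much longer the apply worker must wait.
 * A result of 0 means the data is already delayed by at least
 * min_apply_delay, so no additional delay is needed. */
int64_t
remaining_delay_ms(int64_t commit_time_ms,
                   int64_t now_ms,
                   int64_t min_apply_delay_ms)
{
    int64_t wake_at_ms = commit_time_ms + min_apply_delay_ms;

    return (wake_at_ms > now_ms) ? (wake_at_ms - now_ms) : 0;
}
```

For example, with a 2000 ms configured delay, a transaction committed 500 ms ago still waits 1500 ms, while one committed 3000 ms ago waits nothing at all, which is exactly the distinction drawn between (c) and (d) above.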
{
"msg_contents": "Here are some review comments for patch v9-0001:\n\n======\n\nGENERAL\n\n1. min_ prefix?\n\nWhat's the significance of the \"min_\" prefix for this parameter? I'm\nguessing the background is that at one time it was considered to be a\nGUC so took a name similar to GUC recovery_min_apply_delay (??)\n\nBut in practice, I think it is meaningless and/or misleading. For\nexample, suppose the user wants to defer replication by 1hr. IMO it is\nmore natural to just say \"defer replication by 1 hr\" (aka\napply_delay='1hr') Clearly it means replication will take place about\n1 hr into the future. OTHO saying \"defer replication by a MINIMUM of 1\nhr\" (aka min_apply_delay='1hr') is quite vague because then it is\nequally valid if the replication gets delayed by 1 hr or 2 hrs or 5\ndays or 3 weeks since all of those satisfy the minimum delay. The\nimplementation could hardwire a delay of INT_MAX ms but clearly,\nthat's not really what the user would expect.\n\n~\n\nSo, I think this parameter should be renamed just as 'apply_delay'.\n\nBut, if you still decide to keep it as 'min_apply_delay' then there is\na lot of other code that ought to be changed to be consistent with\nthat name.\ne.g.\n- subapplydelay in catalogs.sgml --> subminapplydelay\n- subapplydelay in system_views.sql --> subminapplydelay\n- subapplydelay in pg_subscription.h --> subminapplydelay\n- subapplydelay in dump.h --> subminapplydelay\n- i_subapplydelay in pg_dump.c --> i_subminapplydelay\n- applydelay member name of Form_pg_subscription --> minapplydelay\n- \"Apply Delay\" for the column name displayed by describe.c --> \"Min\napply delay\"\n- more...\n\n(IMO the fact that so much code does not currently say 'min' at all is\njust evidence that the 'min' prefix really didn't really mean much in\nthe first place)\n\n\n======\n\ndoc/src/sgml/catalogs.sgml\n\n2. 
Section 31.2 Subscription\n\n+ <para>\n+ Time delayed replica of subscription is available by indicating\n+ <literal>min_apply_delay</literal>. See\n+ <xref linkend=\"sql-createsubscription\"/> for details.\n+ </para>\n\nHow about saying like:\n\nSUGGESTION\nThe subscriber replication can be instructed to lag behind the\npublisher side changes by specifying the\n<literal>min_apply_delay</literal> subscription parameter. See XXX for\ndetails.\n\n======\n\ndoc/src/sgml/ref/create_subscription.sgml\n\n3. min_apply_delay\n\n+ <para>\n+ By default, subscriber applies changes as soon as possible. As with\n+ the physical replication feature\n+ (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it can be useful to\n+ have a time-delayed logical replica. This parameter allows you to\n+ delay the application of changes by a specified amount of time. If\n+ this value is specified without units, it is taken as milliseconds.\n+ The default is zero, adding no delay.\n+ </para>\n\n\"subscriber applies\" -> \"the subscriber applies\"\n\n\"allows you\" -> \"lets the user\"\n\n\"The default is zero, adding no delay.\" -> \"The default is zero (no delay).\"\n\n~\n\n4.\n\n+ larger than the time deviations between servers. Note that\n+ in the case when this parameter is set to a long value, the\n+ replication may not continue if the replication slot falls behind the\n+ current LSN by more than <literal>max_slot_wal_keep_size</literal>.\n+ See more details in <xref linkend=\"guc-max-slot-wal-keep-size\"/>.\n+ </para>\n\n4a.\nSUGGESTION\nNote that if this parameter is set to a long delay, the replication\nwill stop if the replication slot falls behind the current LSN by more\nthan <literal>max_slot_wal_keep_size</literal>.\n\n~\n\n4b.\nWhen it is rendered (like below) it looks a bit repetitive:\n... if the replication slot falls behind the current LSN by more than\nmax_slot_wal_keep_size. 
See more details in max_slot_wal_keep_size.\n\n~\n\nIMO the previous sentence should include the link.\n\nSUGGESTION\nif the replication slot falls behind the current LSN by more than\n<link linkend =\n\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</literal></link>.\n\n~\n\n5.\n\n+ <para>\n+ Synchronous replication is affected by this setting when\n+ <varname>synchronous_commit</varname> is set to\n+ <literal>remote_write</literal>; every <literal>COMMIT</literal>\n+ will need to wait to be applied.\n+ </para>\n\nYes, this deserves a big warning -- but I am just not quite sure of\nthe details. I think this impacts more than just \"remote_rewrite\" --\ne.g. the same problem would happen if \"synchronous_standby_names\" is\nnon-empty.\n\nI think this warning needs to be more generic to cover everything.\nMaybe something like below\n\nSUGGESTION:\nDelaying the replication can mean there is a much longer time between\nmaking a change on the publisher, and that change being committed on\nthe subscriber. This can have a big impact on synchronous replication.\nSee https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT\n\n\n======\n\nsrc/backend/commands/subscriptioncmds.c\n\n6. parse_subscription_options\n\n+ ms = interval_to_ms(interval);\n+ if (ms < 0 || ms > INT_MAX)\n+ ereport(ERROR,\n+ errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"%lld ms is outside the valid range for option \\\"%s\\\"\",\n+ (long long) ms, \"min_apply_delay\"));\n\n\"for option\" -> \"for parameter\"\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n7. apply_delay\n\n+static void\n+apply_delay(TimestampTz ts)\n\nIMO having a delay is not the usual case. So, would a better name for\nthis function be 'maybe_delay'?\n\n~\n\n8.\n\n+ * high value for the delay. 
This design is different from the physical\n+ * replication (that applies the delay at commit time) mainly because write\n+ * operations may allow some issues (such as bloat and locks) that can be\n+ * minimized if it does not keep the transaction open for such a long time.\n\nSomething seems not quite right with this wording -- is there a better\nway of describing this?\n\n~\n\n9.\n\n+ /*\n+ * Delay apply until all tablesync workers have reached READY state. If we\n+ * allow the delay during the catchup phase, once we reach the limit of\n+ * tablesync workers, it will impose a delay for each subsequent worker.\n+ * It means it will take a long time to finish the initial table\n+ * synchronization.\n+ */\n+ if (!AllTablesyncsReady())\n+ return;\n\n\"Delay apply until...\" -> \"The min_apply_delay parameter is ignored until...\"\n\n~\n\n10.\n\n+ /*\n+ * The worker may be waken because of the ALTER SUBSCRIPTION ...\n+ * DISABLE, so the catalog pg_subscription should be read again.\n+ */\n+ if (!in_remote_transaction && !in_streamed_transaction)\n+ {\n+ AcceptInvalidationMessages();\n+ maybe_reread_subscription();\n+ }\n+ }\n\n\"waken\" -> \"woken\"\n\n======\n\nsrc/bin/psql/describe.c\n\n11. 
describeSubscriptions\n\n+ /* Origin and min_apply_delay are only supported in v16 and higher */\n if (pset.sversion >= 160000)\n appendPQExpBuffer(&buf,\n- \", suborigin AS \\\"%s\\\"\\n\",\n- gettext_noop(\"Origin\"));\n+ \", suborigin AS \\\"%s\\\"\\n\"\n+ \", subapplydelay AS \\\"%s\\\"\\n\",\n+ gettext_noop(\"Origin\"),\n+ gettext_noop(\"Apply delay\"));\n\nIIUC the psql command is supposed to display useful information to the\nuser, so I wondered if it is worthwhile to put the units in this\ncolumn header -- \"Apply delay (ms)\" instead of just \"Apply delay\"\nbecause that would make it far easier to understand the meaning\nwithout having to check the documentation to discover the units.\n\n======\n\nsrc/include/utils/timestamp.h\n\n12.\n\n+extern int64 interval_to_ms(const Interval *interval);\n+\n\nFor consistency with the other interval conversion functions exposed\nhere maybe this one should have been called 'interval2ms'\n\n======\n\nsrc/test/subscription/t/032_apply_delay.pl\n\n13.\n\nIIUC this test is checking if a delay has occurred by inspecting the\ndebug logs to see if a certain code path including \"logical\nreplication apply delay\" is logged. I guess that is OK, but another\nway might be to compare the actual timing values of the published and\nreplicated rows.\n\nThe publisher table can have a column with default now() and the\nsubscriber side table can have an *additional* column also with\ndefault now(). After replication, those two timestamp values can be\ncompared to check if the difference exceeds the min_time_delay\nparameter specified.\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 6 Dec 2022 19:00:07 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 1:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are some review comments for patch v9-0001:\n>\n> ======\n>\n> GENERAL\n>\n> 1. min_ prefix?\n>\n> What's the significance of the \"min_\" prefix for this parameter? I'm\n> guessing the background is that at one time it was considered to be a\n> GUC so took a name similar to GUC recovery_min_apply_delay (??)\n>\n> But in practice, I think it is meaningless and/or misleading. For\n> example, suppose the user wants to defer replication by 1hr. IMO it is\n> more natural to just say \"defer replication by 1 hr\" (aka\n> apply_delay='1hr') Clearly it means replication will take place about\n> 1 hr into the future. OTHO saying \"defer replication by a MINIMUM of 1\n> hr\" (aka min_apply_delay='1hr') is quite vague because then it is\n> equally valid if the replication gets delayed by 1 hr or 2 hrs or 5\n> days or 3 weeks since all of those satisfy the minimum delay. The\n> implementation could hardwire a delay of INT_MAX ms but clearly,\n> that's not really what the user would expect.\n>\n\nThere is another way to look at this naming. It is quite possible user\nhas set its value as '1 second' and the transaction is delayed by more\nthan that say because the publisher delayed sending it. There could be\nvarious reasons why the publisher could delay like it was busy\nprocessing another workload, the replication connection between\npublisher and subscriber was not working, etc. Moreover, it will be\nsimilar to the same parameter for physical replication. So, I think\nkeeping min in the name is a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 16:40:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Friday, December 2, 2022 4:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Nov 15, 2022 at 12:33 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > One more thing I would like you to consider is the point raised by me\r\n> > related to this patch's interaction with the parallel apply feature as\r\n> > mentioned in the email [1]. I am not sure the idea proposed in that\r\n> > email [1] is a good one because delaying after applying commit may not\r\n> > be good as we want to delay the apply of the transaction(s) on\r\n> > subscribers by this feature. I feel this needs more thought.\r\n> >\r\n> \r\n> I have thought a bit more about this and we have the following options to\r\n> choose the delay point from. (a) apply delay just before committing a\r\n> transaction. As mentioned in comments in the patch this can lead to bloat and\r\n> locks held for a long time. (b) apply delay before starting to apply changes for a\r\n> transaction but here the problem is which time to consider. In some cases, like\r\n> for streaming transactions, we don't receive the commit/prepare xact time in\r\n> the start message. (c) use (b) but use the previous transaction's commit time.\r\n> (d) apply delay after committing a transaction by using the xact's commit time.\r\n> \r\n> At this stage, among above, I feel any one of (c) or (d) is worth considering. Now,\r\n> the difference between (c) and (d) is that if after commit the next xact's data is\r\n> already delayed by more than min_apply_delay time then we don't need to kick\r\n> the new logic of apply delay.\r\n> \r\n> The other thing to consider whether we need to process any keepalive\r\n> messages during the delay because otherwise, walsender may think that the\r\n> subscriber is not available and time out. 
This may not be a problem for\r\n> synchronous replication but otherwise, it could be a problem.\r\n> \r\n> Thoughts?\r\nHi,\r\n\r\n\r\nThank you for your comments !\r\nBelow are some analysis for the major points above.\r\n\r\n(1) About the timing to apply the delay\r\n\r\nOne approach of (b) would be best. The idea is to delay all types of transaction's application\r\nbased on the time when one transaction arrives at the subscriber node.\r\n\r\nOne advantage of this approach over (c) and (d) is that this can avoid the case\r\nwhere we might apply a transaction immediately without waiting,\r\nif there are two transactions sequentially and the time in between exceeds the min_apply_delay time.\r\n\r\nWhen we receive stream-in-progress transactions, we'll check whether the time for delay\r\nhas passed or not at first in this approach.\r\n\r\n\r\n(2) About the timeout issue\r\n\r\nWhen having a look at the physical replication internals,\r\nit conducts sending feedback and application of delay separately on different processes.\r\nOTOH, the logical replication needs to achieve those within one process.\r\n\r\nWhen we want to apply delay and avoid the timeout,\r\nwe should not store all the transactions data into memory.\r\nSo, one approach for this is to serialize the transaction data and after the delay,\r\nwe apply the transactions data. 
However, this means that if users adopt this feature,\r\nall transaction data that should be delayed would be serialized.\r\nWe are not sure whether this is a valid approach.\r\n\r\nAnother approach is to divide the delay time in apply_delay(),\r\nuse the divided time for WaitLatch, and send keepalive messages from there.\r\nBut this approach requires some changes at the libpq layer\r\n(such as implementing a new function for the wal receiver to monitor whether\r\ndata from the publisher is readable).\r\n\r\nProbably, the first idea of serializing the delayed transactions is better on this point.\r\n\r\nAny feedback is welcome.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 6 Dec 2022 12:13:57 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
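The "divide the delay" idea discussed above can be sketched as a loop that waits in bounded chunks and sends a feedback message between chunks, so the walsender never goes a full wal_sender_timeout without hearing from the worker. `DelayState`, `delay_with_feedback`, and the 1-second chunk size are illustrative stand-ins for WaitLatch and the worker's reply path, not the patch's actual code:

```c
#include <stdint.h>

#define FEEDBACK_INTERVAL_MS 1000   /* illustrative chunk size */

/* Counters recording what the loop did; in a real apply worker these
 * would instead be calls to WaitLatch() and to the feedback/reply path. */
typedef struct DelayState
{
    int64_t waited_ms;      /* total time spent waiting */
    int     feedback_count; /* keepalive/feedback messages sent */
} DelayState;

/* Wait out total_ms in chunks of at most FEEDBACK_INTERVAL_MS, sending
 * feedback between chunks so the publisher-side walsender does not time
 * out during a long apply delay. */
void
delay_with_feedback(DelayState *state, int64_t total_ms)
{
    while (total_ms > 0)
    {
        int64_t chunk = (total_ms < FEEDBACK_INTERVAL_MS)
            ? total_ms : FEEDBACK_INTERVAL_MS;

        state->waited_ms += chunk;  /* stands in for WaitLatch(chunk) */
        total_ms -= chunk;

        if (total_ms > 0)
            state->feedback_count++;    /* stands in for sending a reply */
    }
}
```

A 3.5 s delay would thus be split into four waits (1000, 1000, 1000, 500 ms) with feedback sent between them; a delay shorter than one chunk sends no feedback at all. Whether this belongs in apply_delay() or lower in the receive path is exactly what is under discussion above.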
{
"msg_contents": "Hi,\n\nThe tests fail on cfbot:\nhttps://cirrus-ci.com/task/4533866329800704\n\nThey only seem to fail on 32bit linux.\n\nhttps://api.cirrus-ci.com/v1/artifact/task/4533866329800704/testrun/build-32/testrun/subscription/032_apply_delay/log/regress_log_032_apply_delay\n[06:27:10.628](0.138s) ok 2 - check if the new rows were applied to subscriber\ntimed out waiting for match: (?^:logical replication apply delay) at /tmp/cirrus-ci-build/src/test/subscription/t/032_apply_delay.pl line 124.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 11:08:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 6 Dec 2022 11:08:43 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> The tests fail on cfbot:\n> https://cirrus-ci.com/task/4533866329800704\n> \n> They only seem to fail on 32bit linux.\n> \n> https://api.cirrus-ci.com/v1/artifact/task/4533866329800704/testrun/build-32/testrun/subscription/032_apply_delay/log/regress_log_032_apply_delay\n> [06:27:10.628](0.138s) ok 2 - check if the new rows were applied to subscriber\n> timed out waiting for match: (?^:logical replication apply delay) at /tmp/cirrus-ci-build/src/test/subscription/t/032_apply_delay.pl line 124.\n\nIt fails for me on 64bit Linux.. (Rocky 8.7)\n\n> t/032_apply_delay.pl ............... Dubious, test returned 29 (wstat 7424, 0x1d00)\n> No subtests run\n..\n> t/032_apply_delay.pl (Wstat: 7424 Tests: 0 Failed: 0)\n> Non-zero exit status: 29\n> Parse errors: No plan found in TAP output\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Dec 2022 11:59:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 5:44 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, December 2, 2022 4:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Nov 15, 2022 at 12:33 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > One more thing I would like you to consider is the point raised by me\n> > > related to this patch's interaction with the parallel apply feature as\n> > > mentioned in the email [1]. I am not sure the idea proposed in that\n> > > email [1] is a good one because delaying after applying commit may not\n> > > be good as we want to delay the apply of the transaction(s) on\n> > > subscribers by this feature. I feel this needs more thought.\n> > >\n> >\n> > I have thought a bit more about this and we have the following options to\n> > choose the delay point from. (a) apply delay just before committing a\n> > transaction. As mentioned in comments in the patch this can lead to bloat and\n> > locks held for a long time. (b) apply delay before starting to apply changes for a\n> > transaction but here the problem is which time to consider. In some cases, like\n> > for streaming transactions, we don't receive the commit/prepare xact time in\n> > the start message. (c) use (b) but use the previous transaction's commit time.\n> > (d) apply delay after committing a transaction by using the xact's commit time.\n> >\n> > At this stage, among above, I feel any one of (c) or (d) is worth considering. Now,\n> > the difference between (c) and (d) is that if after commit the next xact's data is\n> > already delayed by more than min_apply_delay time then we don't need to kick\n> > the new logic of apply delay.\n> >\n> > The other thing to consider whether we need to process any keepalive\n> > messages during the delay because otherwise, walsender may think that the\n> > subscriber is not available and time out. 
This may not be a problem for\n> > synchronous replication but otherwise, it could be a problem.\n> >\n> > Thoughts?\n> Hi,\n>\n>\n> Thank you for your comments !\n> Below are some analysis for the major points above.\n>\n> (1) About the timing to apply the delay\n>\n> One approach of (b) would be best. The idea is to delay all types of transaction's application\n> based on the time when one transaction arrives at the subscriber node.\n>\n\nBut I think it will unnecessarily add the delay when there is a delay\nin sending the transaction by the publisher (say due to the reason\nthat publisher was busy handling other workloads or there was a\ntemporary network communication break between publisher and\nsubscriber). This could probably be the reason why physical\nreplication (via recovery_min_apply_delay) uses the commit time of the\nsending side.\n\n> One advantage of this approach over (c) and (d) is that this can avoid the case\n> where we might apply a transaction immediately without waiting,\n> if there are two transactions sequentially and the time in between exceeds the min_apply_delay time.\n>\n\nI am not sure if I understand your point. 
However, I think even if the\ntransactions are sequential but if the time between them exceeds (say\nbecause the publisher was down) min_apply_delay, there is no need to\napply additional delay.\n\n> When we receive stream-in-progress transactions, we'll check whether the time for delay\n> has passed or not at first in this approach.\n>\n>\n> (2) About the timeout issue\n>\n> When having a look at the physical replication internals,\n> it conducts sending feedback and application of delay separately on different processes.\n> OTOH, the logical replication needs to achieve those within one process.\n>\n> When we want to apply delay and avoid the timeout,\n> we should not store all the transactions data into memory.\n> So, one approach for this is to serialize the transaction data and after the delay,\n> we apply the transactions data.\n>\n\nIt is not clear to me how this will avoid a timeout.\n\n> However, this means if users adopt this feature,\n> then all transaction data that should be delayed would be serialized.\n> We are not sure if this sounds a valid approach or not.\n>\n> One another approach was to divide the time of delay in apply_delay() and\n> utilize the divided time for WaitLatch and sends the keepalive messages from there.\n>\n\nDo we anytime send keepalive messages from the apply side? I think we\nonly send feedback reply messages as a response to the publisher's\nkeep_alive message. So, we need to do something similar for this if\nyou want to follow this approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Dec 2022 10:36:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, December 7, 2022 12:00 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Tue, 6 Dec 2022 11:08:43 -0800, Andres Freund <andres@anarazel.de> wrote\n> in\n> > Hi,\n> >\n> > The tests fail on cfbot:\n> > https://cirrus-ci.com/task/4533866329800704\n> >\n> > They only seem to fail on 32bit linux.\n> >\n> > https://api.cirrus-ci.com/v1/artifact/task/4533866329800704/testrun/bu\n> > ild-32/testrun/subscription/032_apply_delay/log/regress_log_032_apply_\n> > delay\n> > [06:27:10.628](0.138s) ok 2 - check if the new rows were applied to\n> > subscriber timed out waiting for match: (?^:logical replication apply delay) at\n> /tmp/cirrus-ci-build/src/test/subscription/t/032_apply_delay.pl line 124.\n> \n> It fails for me on 64bit Linux.. (Rocky 8.7)\n> \n> > t/032_apply_delay.pl ............... Dubious, test returned 29 (wstat\n> > 7424, 0x1d00) No subtests run\n> ..\n> > t/032_apply_delay.pl (Wstat: 7424 Tests: 0 Failed: 0)\n> > Non-zero exit status: 29\n> > Parse errors: No plan found in TAP output\n> \n> regards.\nHi, thank you so much for your notifications !\n\nI'll look into the failures.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 7 Dec 2022 05:23:17 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi Vignesh,\r\n\r\n> In the case of physical replication by setting\r\n> recovery_min_apply_delay, I noticed that both primary and standby\r\n> nodes were getting stopped successfully immediately after the stop\r\n> server command. In case of logical replication, stop server fails:\r\n> pg_ctl -D publisher -l publisher.log stop -c\r\n> waiting for server to shut\r\n> down...............................................................\r\n> failed\r\n> pg_ctl: server does not shut down\r\n> \r\n> In case of logical replication, the server does not get stopped\r\n> because the walsender process is not able to exit:\r\n> ps ux | grep walsender\r\n> vignesh 1950789 75.3 0.0 8695216 22284 ? Rs 11:51 1:08\r\n> postgres: walsender vignesh [local] START_REPLICATION\r\n\r\nThanks for reporting the issue. I analyzed about it.\r\n\r\n\r\nThis issue has occurred because the apply worker cannot reply during the delay.\r\nI think we may have to modify the mechanism that delays applying transactions.\r\n\r\nWhen walsender processes are requested to shut down, it can shut down only after\r\nthat all the sent WALs are replicated on the subscriber. This check is done in\r\nWalSndDone(), and the replicated position will be updated when processes handle\r\nthe reply messages from a subscriber, in ProcessStandbyReplyMessage().\r\n\r\nIn the case of physical replication, the walreciever can receive WALs and reply\r\neven if the application is delayed. It means that the replicated position will\r\nbe transported to the publisher side immediately. So the walsender can exit.\r\n\r\nIn terms of logical replication, however, the worker cannot reply to the\r\nwalsender while delaying the transaction with this patch at present. 
It causes\r\nthe replicated position never to be transported upstream, so the walsender cannot\r\nexit.\r\n\r\n\r\nBased on the above analysis, we can conclude that the worker must update the\r\nflushpos and reply to the walsender while delaying the transaction if we want\r\nto solve the issue. This cannot be done in the current approach, and a newer\r\nproposed one[1] may be able to solve this, although it's currently under discussion.\r\n\r\n\r\nNote that a similar issue can be reproduced with physical replication.\r\nWhen wal_sender_timeout is set to 0 and the network between primary and\r\nsecondary is broken after the primary sends WALs to the secondary, we cannot stop\r\nthe primary node.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB8373FA10EB2DB2BF8E458604ED1B9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 9 Dec 2022 05:19:37 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Andres,\n\nThanks for reporting! I have analyzed the problem and found the root cause.\n\nThis feature seemed not to work on 32-bit OSes. This was because the calculation\nof delay_time was wrong. The first argument of this should be TimestampTz datatype, not Datum:\n\n```\n+ /* Set apply delay */\n+ delay_until = TimestampTzPlusMilliseconds(TimestampTzGetDatum(ts),\n+ MySubscription->applydelay);\n```\n\nIn more detail, the datum representation of int64 contains the value itself\non 64-bit OSes, but it contains the pointer to the value on 32-bit.\n\nAfter modifying the issue, this will work on 32-bit environments.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 06:38:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Friday, December 9, 2022 3:38 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\n> Thanks for reporting! I have analyzed the problem and found the root cause.\n> \n> This feature seemed not to work on 32-bit OSes. This was because the\n> calculation of delay_time was wrong. The first argument of this should be\n> TimestampTz datatype, not Datum:\n> \n> ```\n> + /* Set apply delay */\n> + delay_until =\n> TimestampTzPlusMilliseconds(TimestampTzGetDatum(ts),\n> +\n> + MySubscription->applydelay);\n> ```\n> \n> In more detail, the datum representation of int64 contains the value itself on\n> 64-bit OSes, but it contains the pointer to the value on 32-bit.\n> \n> After modifying the issue, this will work on 32-bit environments.\nThank you for your analysis.\n\nYeah, it seems we conduct addition of values to the pointer value,\nwhich is returned from the call of TimestampTzGetDatum(), on 32-bit machine\nby mistake.\n\nI'll remove the call in my next version.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Fri, 9 Dec 2022 15:08:27 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hello.\n\nI asked about unexpected walsender termination caused by this feature\nbut I think I didn't received an answer for it and the behavior is\nstill exists.\n\nSpecifically, if servers have the following settings, walsender\nterminates for replication timeout. After that, connection is restored\nafter the LR delay elapses. Although it can be said to be working in\nthat sense, the error happens repeatedly every about min_apply_delay\ninternvals but is hard to distinguish from network troubles. I'm not\nsure you're deliberately okay with it but, I don't think the behavior\ncausing replication timeouts is acceptable.\n\n> wal_sender_timeout = 10s;\n> wal_receiver_temeout = 10s;\n> \n> create subscription ... with (min_apply_delay='60s');\n\nThis is a kind of artificial but timeout=60s and delay=5m is not an\nuncommon setup and that also causes this \"issue\".\n\nsubscriber:\n> 2022-12-12 14:17:18.139 JST LOG: terminating walsender process due to replication timeout\n> 2022-12-12 14:18:11.076 JST LOG: starting logical decoding for slot \"s1\"\n...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 12 Dec 2022 14:54:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, December 7, 2022 2:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Dec 6, 2022 at 5:44 PM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, December 2, 2022 4:05 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > > On Tue, Nov 15, 2022 at 12:33 PM Amit Kapila\r\n> > > <amit.kapila16@gmail.com>\r\n> > > wrote:\r\n> > > > One more thing I would like you to consider is the point raised by\r\n> > > > me related to this patch's interaction with the parallel apply\r\n> > > > feature as mentioned in the email [1]. I am not sure the idea\r\n> > > > proposed in that email [1] is a good one because delaying after\r\n> > > > applying commit may not be good as we want to delay the apply of\r\n> > > > the transaction(s) on subscribers by this feature. I feel this needs more\r\n> thought.\r\n> > > >\r\n> > >\r\n> > > I have thought a bit more about this and we have the following\r\n> > > options to choose the delay point from. (a) apply delay just before\r\n> > > committing a transaction. As mentioned in comments in the patch this\r\n> > > can lead to bloat and locks held for a long time. (b) apply delay\r\n> > > before starting to apply changes for a transaction but here the\r\n> > > problem is which time to consider. In some cases, like for streaming\r\n> > > transactions, we don't receive the commit/prepare xact time in the start\r\n> message. (c) use (b) but use the previous transaction's commit time.\r\n> > > (d) apply delay after committing a transaction by using the xact's commit\r\n> time.\r\n> > >\r\n> > > At this stage, among above, I feel any one of (c) or (d) is worth\r\n> > > considering. 
Now, the difference between (c) and (d) is that if\r\n> > > after commit the next xact's data is already delayed by more than\r\n> > > min_apply_delay time then we don't need to kick the new logic of apply\r\n> delay.\r\n> > >\r\n> > > The other thing to consider whether we need to process any keepalive\r\n> > > messages during the delay because otherwise, walsender may think\r\n> > > that the subscriber is not available and time out. This may not be a\r\n> > > problem for synchronous replication but otherwise, it could be a problem.\r\n> > >\r\n> > > Thoughts?\r\n> > (1) About the timing to apply the delay\r\n> >\r\n> > One approach of (b) would be best. The idea is to delay all types of\r\n> > transaction's application based on the time when one transaction arrives at\r\n> the subscriber node.\r\n> >\r\n> \r\n> But I think it will unnecessarily add the delay when there is a delay in sending\r\n> the transaction by the publisher (say due to the reason that publisher was busy\r\n> handling other workloads or there was a temporary network communication\r\n> break between publisher and subscriber). This could probably be the reason\r\n> why physical replication (via recovery_min_apply_delay) uses the commit time of\r\n> the sending side.\r\nYou are right. The approach (b) adds additional (or unnecessary) delay\r\ndue to network communication or machine troubles in streaming-in-progress cases.\r\nWe agreed this approach (b) has the disadvantage.\r\n\r\n\r\n> > One advantage of this approach over (c) and (d) is that this can avoid\r\n> > the case where we might apply a transaction immediately without\r\n> > waiting, if there are two transactions sequentially and the time in between\r\n> exceeds the min_apply_delay time.\r\n> >\r\n> \r\n> I am not sure if I understand your point. 
However, I think even if the\r\n> transactions are sequential but if the time between them exceeds (say because\r\n> the publisher was down) min_apply_delay, there is no need to apply additional\r\n> delay.\r\nI'm sorry, my description was not accurate. \r\n\r\nAs for approach (c), kindly imagine two transactions (txn1, txn2) are executed\r\non the publisher side and the publisher tries to send both of them to the subscriber.\r\nHere, there is no network trouble and the publisher isn't busy with other workloads.\r\nHowever, the time difference between txn1 and txn2 exceeds \"min_apply_delay\"\r\n(which is set on the subscriber).\r\n\r\nIn this case, when txn2 is a stream-in-progress transaction,\r\nwe don't apply any delay for txn2 when it arrives on the subscriber.\r\nThis is because, before txn2 comes to the subscriber, \"min_apply_delay\"\r\nhas already passed on the publisher side.\r\nThis means there's a case where we don't apply any delay when we choose approach (c).\r\n\r\nApproach (d) also has a similar disadvantage.\r\nIIUC, in this approach the subscriber applies the delay after committing a transaction,\r\nbased on the commit/prepare time of the publisher side. Kindly imagine two transactions\r\nare executed on the publisher and the 2nd transaction completes after the subscriber's delay\r\nfor the 1st transaction. 
Again, there are no network troubles and no heavy workloads on the publisher.\r\nIf so, the delay for txn1 has already finished when the 2nd transaction\r\narrives on the subscriber, so the 2nd transaction will be applied immediately without delay.\r\n\r\nAnother new discussion point is to utilize (b) and the stream commit/stream prepare time\r\nand apply the delay immediately before applying the spool files of the transactions\r\nin the stream-in-progress transaction cases.\r\n\r\nDoes anyone have any opinions on these approaches?\r\n\r\n\r\nLastly, thanks Amit-san and Kuroda-san for giving me\r\nso much offlist feedback about these significant points.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 12 Dec 2022 07:23:20 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThis is a reply for later part of your e-mail.\r\n\r\n> > (2) About the timeout issue\r\n> >\r\n> > When having a look at the physical replication internals,\r\n> > it conducts sending feedback and application of delay separately on different\r\n> processes.\r\n> > OTOH, the logical replication needs to achieve those within one process.\r\n> >\r\n> > When we want to apply delay and avoid the timeout,\r\n> > we should not store all the transactions data into memory.\r\n> > So, one approach for this is to serialize the transaction data and after the delay,\r\n> > we apply the transactions data.\r\n> >\r\n> \r\n> It is not clear to me how this will avoid a timeout.\r\n\r\nAt first, the reason why the timeout occurs is that while delaying the apply\r\nworker neither reads messages from the walsender nor replies to it.\r\nThe worker's last_recv_timeout will be not updated because it does not receive\r\nmessages. This leads to wal_receiver_timeout. Similarly, the walsender's\r\nlast_processing will be not updated and exit due to the timeout because the\r\nworker does not reply to upstream.\r\n\r\nBased on the above, we thought that workers must receive and handle messages\r\nevenif they are delaying applying transactions. In more detail, workers must\r\niterate the outer loop in LogicalRepApplyLoop().\r\n\r\nIf workers receive transactions but they need to delay applying, they must keep\r\nmessages somewhere. So we came up with the idea that workers serialize changes\r\nonce and apply later. 
Our basic design is as follows:\r\n\r\n* All transactions are serialized to files if min_apply_delay is set to non-zero.\r\n* After receiving the commit message and waiting out the delay, workers read and\r\n apply the spooled messages\r\n\r\n> > However, this means if users adopt this feature,\r\n> > then all transaction data that should be delayed would be serialized.\r\n> > We are not sure if this sounds a valid approach or not.\r\n> >\r\n> > One another approach was to divide the time of delay in apply_delay() and\r\n> > utilize the divided time for WaitLatch and sends the keepalive messages from\r\n> there.\r\n> >\r\n> \r\n> Do we anytime send keepalive messages from the apply side? I think we\r\n> only send feedback reply messages as a response to the publisher's\r\n> keep_alive message. So, we need to do something similar for this if\r\n> you want to follow this approach.\r\n\r\nRight, and the above mechanism is needed for workers to understand messages\r\nand send feedback replies as a response to the publisher's keepalive message.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 12 Dec 2022 07:34:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, December 12, 2022 2:54 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> I asked about unexpected walsender termination caused by this feature but I\n> think I didn't received an answer for it and the behavior is still exists.\n> \n> Specifically, if servers have the following settings, walsender terminates for\n> replication timeout. After that, connection is restored after the LR delay elapses.\n> Although it can be said to be working in that sense, the error happens\n> repeatedly every about min_apply_delay internvals but is hard to distinguish\n> from network troubles. I'm not sure you're deliberately okay with it but, I don't\n> think the behavior causing replication timeouts is acceptable.\n> \n> > wal_sender_timeout = 10s;\n> > wal_receiver_temeout = 10s;\n> >\n> > create subscription ... with (min_apply_delay='60s');\n> \n> This is a kind of artificial but timeout=60s and delay=5m is not an uncommon\n> setup and that also causes this \"issue\".\n> \n> subscriber:\n> > 2022-12-12 14:17:18.139 JST LOG: terminating walsender process due to\n> > replication timeout\n> > 2022-12-12 14:18:11.076 JST LOG: starting logical decoding for slot \"s1\"\n> ...\nHi, Horiguchi-san\n\n\nThank you so much for your report!\nYes. Currently, how to deal with the timeout issue is under discussion.\nSome analysis about the root cause are also there.\n\nKindly have a look at [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58669394A67F2340B82E42D1F5E29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 07:42:30 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, December 6, 2022 5:00 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are some review comments for patch v9-0001:\r\nHi, thank you for your reviews !\r\n\r\n> \r\n> ======\r\n> \r\n> GENERAL\r\n> \r\n> 1. min_ prefix?\r\n> \r\n> What's the significance of the \"min_\" prefix for this parameter? I'm guessing the\r\n> background is that at one time it was considered to be a GUC so took a name\r\n> similar to GUC recovery_min_apply_delay (??)\r\n> \r\n> But in practice, I think it is meaningless and/or misleading. For example,\r\n> suppose the user wants to defer replication by 1hr. IMO it is more natural to\r\n> just say \"defer replication by 1 hr\" (aka\r\n> apply_delay='1hr') Clearly it means replication will take place about\r\n> 1 hr into the future. OTHO saying \"defer replication by a MINIMUM of 1 hr\" (aka\r\n> min_apply_delay='1hr') is quite vague because then it is equally valid if the\r\n> replication gets delayed by 1 hr or 2 hrs or 5 days or 3 weeks since all of those\r\n> satisfy the minimum delay. The implementation could hardwire a delay of\r\n> INT_MAX ms but clearly, that's not really what the user would expect.\r\n> \r\n> ~\r\n> \r\n> So, I think this parameter should be renamed just as 'apply_delay'.\r\n> \r\n> But, if you still decide to keep it as 'min_apply_delay' then there is a lot of other\r\n> code that ought to be changed to be consistent with that name.\r\n> e.g.\r\n> - subapplydelay in catalogs.sgml --> subminapplydelay\r\n> - subapplydelay in system_views.sql --> subminapplydelay\r\n> - subapplydelay in pg_subscription.h --> subminapplydelay\r\n> - subapplydelay in dump.h --> subminapplydelay\r\n> - i_subapplydelay in pg_dump.c --> i_subminapplydelay\r\n> - applydelay member name of Form_pg_subscription --> minapplydelay\r\n> - \"Apply Delay\" for the column name displayed by describe.c --> \"Min apply\r\n> delay\"\r\nI followed the suggestion to keep the \"min_\" prefix in [1].\r\nFixed. 
\r\n\r\n\r\n> - more...\r\n> \r\n> (IMO the fact that so much code does not currently say 'min' at all is just\r\n> evidence that the 'min' prefix really didn't really mean much in the first place)\r\n> \r\n> \r\n> ======\r\n> \r\n> doc/src/sgml/catalogs.sgml\r\n> \r\n> 2. Section 31.2 Subscription\r\n> \r\n> + <para>\r\n> + Time delayed replica of subscription is available by indicating\r\n> + <literal>min_apply_delay</literal>. See\r\n> + <xref linkend=\"sql-createsubscription\"/> for details.\r\n> + </para>\r\n> \r\n> How about saying like:\r\n> \r\n> SUGGESTION\r\n> The subscriber replication can be instructed to lag behind the publisher side\r\n> changes by specifying the <literal>min_apply_delay</literal> subscription\r\n> parameter. See XXX for details.\r\nFixed.\r\n\r\n\r\n> ======\r\n> \r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 3. min_apply_delay\r\n> \r\n> + <para>\r\n> + By default, subscriber applies changes as soon as possible. As with\r\n> + the physical replication feature\r\n> + (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it can be useful\r\n> to\r\n> + have a time-delayed logical replica. This parameter allows you to\r\n> + delay the application of changes by a specified amount of time. If\r\n> + this value is specified without units, it is taken as milliseconds.\r\n> + The default is zero, adding no delay.\r\n> + </para>\r\n> \r\n> \"subscriber applies\" -> \"the subscriber applies\"\r\n> \r\n> \"allows you\" -> \"lets the user\"\r\n> \r\n> \"The default is zero, adding no delay.\" -> \"The default is zero (no delay).\"\r\nFixed.\r\n\r\n\r\n> ~\r\n> \r\n> 4.\r\n> \r\n> + larger than the time deviations between servers. 
Note that\r\n> + in the case when this parameter is set to a long value, the\r\n> + replication may not continue if the replication slot falls behind the\r\n> + current LSN by more than\r\n> <literal>max_slot_wal_keep_size</literal>.\r\n> + See more details in <xref linkend=\"guc-max-slot-wal-keep-size\"/>.\r\n> + </para>\r\n> \r\n> 4a.\r\n> SUGGESTION\r\n> Note that if this parameter is set to a long delay, the replication will stop if the\r\n> replication slot falls behind the current LSN by more than\r\n> <literal>max_slot_wal_keep_size</literal>.\r\nFixed.\r\n\r\n> ~\r\n> \r\n> 4b.\r\n> When it is rendered (like below) it looks a bit repetitive:\r\n> ... if the replication slot falls behind the current LSN by more than\r\n> max_slot_wal_keep_size. See more details in max_slot_wal_keep_size.\r\nThanks! Fixed the redundancy.\r\n\r\n\r\n> ~\r\n> \r\n> IMO the previous sentence should include the link.\r\n> \r\n> SUGGESTION\r\n> if the replication slot falls behind the current LSN by more than <link linkend =\r\n> \"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</literal></lin\r\n> k>.\r\nFixed.\r\n\r\n> ~\r\n> \r\n> 5.\r\n> \r\n> + <para>\r\n> + Synchronous replication is affected by this setting when\r\n> + <varname>synchronous_commit</varname> is set to\r\n> + <literal>remote_write</literal>; every <literal>COMMIT</literal>\r\n> + will need to wait to be applied.\r\n> + </para>\r\n> \r\n> Yes, this deserves a big warning -- but I am just not quite sure of the details. I\r\n> think this impacts more than just \"remote_rewrite\" -- e.g. 
the same problem\r\n> would happen if \"synchronous_standby_names\" is non-empty.\r\n> \r\n> I think this warning needs to be more generic to cover everything.\r\n> Maybe something like below\r\n> \r\n> SUGGESTION:\r\n> Delaying the replication can mean there is a much longer time between making\r\n> a change on the publisher, and that change being committed on the subscriber.\r\n> This can have a big impact on synchronous replication.\r\n> See\r\n> https://www.postgresql.org/docs/current/runtime-config-wal.html#GUC-SYN\r\n> CHRONOUS-COMMIT\r\nFixed.\r\n\r\n\r\n> \r\n> ======\r\n> \r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 6. parse_subscription_options\r\n> \r\n> + ms = interval_to_ms(interval);\r\n> + if (ms < 0 || ms > INT_MAX)\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\r\n> + errmsg(\"%lld ms is outside the valid range for option \\\"%s\\\"\",\r\n> + (long long) ms, \"min_apply_delay\"));\r\n> \r\n> \"for option\" -> \"for parameter\"\r\nFixed.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 7. apply_delay\r\n> \r\n> +static void\r\n> +apply_delay(TimestampTz ts)\r\n> \r\n> IMO having a delay is not the usual case. So, would a better name for this\r\n> function be 'maybe_delay'?\r\nMakes sense. I follow some other functions such as \r\nmaybe_reread_subscription and maybe_start_skipping_changes.\r\n\r\n\r\n> ~\r\n> \r\n> 8.\r\n> \r\n> + * high value for the delay. This design is different from the physical\r\n> + * replication (that applies the delay at commit time) mainly because\r\n> + write\r\n> + * operations may allow some issues (such as bloat and locks) that can\r\n> + be\r\n> + * minimized if it does not keep the transaction open for such a long time.\r\n> \r\n> Something seems not quite right with this wording -- is there a better way of\r\n> describing this?\r\nI reworded the entire paragraph. 
Could you please check ?\r\n\r\n\r\n> ~\r\n> \r\n> 9.\r\n> \r\n> + /*\r\n> + * Delay apply until all tablesync workers have reached READY state. If\r\n> + we\r\n> + * allow the delay during the catchup phase, once we reach the limit of\r\n> + * tablesync workers, it will impose a delay for each subsequent worker.\r\n> + * It means it will take a long time to finish the initial table\r\n> + * synchronization.\r\n> + */\r\n> + if (!AllTablesyncsReady())\r\n> + return;\r\n> \r\n> \"Delay apply until...\" -> \"The min_apply_delay parameter is ignored until...\"\r\nFixed.\r\n\r\n\r\n> ~\r\n> \r\n> 10.\r\n> \r\n> + /*\r\n> + * The worker may be waken because of the ALTER SUBSCRIPTION ...\r\n> + * DISABLE, so the catalog pg_subscription should be read again.\r\n> + */\r\n> + if (!in_remote_transaction && !in_streamed_transaction) {\r\n> + AcceptInvalidationMessages(); maybe_reread_subscription(); } }\r\n> \r\n> \"waken\" -> \"woken\"\r\nI have removed this sentence for a new change\r\nto recalculate the diffms for any updates of the \"min_apply_delay\" parameter.\r\n\r\nPlease have a look at maybe_delay_apply().\r\n\r\n> ======\r\n> \r\n> src/bin/psql/describe.c\r\n> \r\n> 11. 
describeSubscriptions\r\n> \r\n> + /* Origin and min_apply_delay are only supported in v16 and higher */\r\n> if (pset.sversion >= 160000)\r\n> appendPQExpBuffer(&buf,\r\n> - \", suborigin AS \\\"%s\\\"\\n\",\r\n> - gettext_noop(\"Origin\"));\r\n> + \", suborigin AS \\\"%s\\\"\\n\"\r\n> + \", subapplydelay AS \\\"%s\\\"\\n\",\r\n> + gettext_noop(\"Origin\"),\r\n> + gettext_noop(\"Apply delay\"));\r\n> \r\n> IIUC the psql command is supposed to display useful information to the user, so\r\n> I wondered if it is worthwhile to put the units in this column header -- \"Apply\r\n> delay (ms)\" instead of just \"Apply delay\"\r\n> because that would make it far easier to understand the meaning without\r\n> having to check the documentation to discover the units.\r\nFixed.\r\n\r\n\r\n> ======\r\n> \r\n> src/include/utils/timestamp.h\r\n> \r\n> 12.\r\n> \r\n> +extern int64 interval_to_ms(const Interval *interval);\r\n> +\r\n> \r\n> For consistency with the other interval conversion functions exposed here\r\n> maybe this one should have been called 'interval2ms'\r\nFixed.\r\n\r\n\r\n> ======\r\n> \r\n> src/test/subscription/t/032_apply_delay.pl\r\n> \r\n> 13.\r\n> \r\n> IIUC this test is checking if a delay has occurred by inspecting the debug logs to\r\n> see if a certain code path including \"logical replication apply delay\" is logged. I\r\n> guess that is OK, but another way might be to compare the actual timing values\r\n> of the published and replicated rows.\r\n> \r\n> The publisher table can have a column with default now() and the subscriber\r\n> side table can have an *additional* column also with default now(). After\r\n> replication, those two timestamp values can be compared to check if the\r\n> difference exceeds the min_time_delay parameter specified.\r\nAdded this check.\r\n\r\n\r\nThis patch now depends on a patch posted in another thread in [2]\r\nfor TAP test of \"min_apply_delay\" feature. 
Without this patch,\r\nif one backend process executes ALTER SUBSCRIPTION SET min_apply_delay\r\nwhile the apply worker gets another message for apply_dispatch,\r\nthe apply worker doesn't notice the reset and uses the old value for\r\nthat incoming transaction. To fix this, I posted the patch together.\r\n(During the patch creation, I didn't change any code of the\r\nwakeup patch, but I adjusted the line feeds for my environment.) \r\n\r\n\r\nKindly have a look at the updated patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1J9HEL-U32FwkHXLOGXPV_Fu%2Bnb%2B1KpV7hTbnqbBNnDUQ%40mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/20221122004119.GA132961@nathanxps13\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Mon, 12 Dec 2022 10:40:45 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Friday, November 25, 2022 5:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> On Fri, Nov 25, 2022 at 2:15 AM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Wednesday, October 5, 2022 6:42 PM Peter Smith\r\n> <smithpb2250@gmail.com> wrote:\r\n> ...\r\n> >\r\n> > > ======\r\n> > >\r\n> > > 5. src/backend/commands/subscriptioncmds.c - SubOpts\r\n> > >\r\n> > > @@ -89,6 +91,7 @@ typedef struct SubOpts\r\n> > > bool disableonerr;\r\n> > > char *origin;\r\n> > > XLogRecPtr lsn;\r\n> > > + int64 min_apply_delay;\r\n> > > } SubOpts;\r\n> > >\r\n> > > I feel it would be better to be explicit about the storage units. So\r\n> > > call this member ‘min_apply_delay_ms’. E.g. then other code in\r\n> > > parse_subscription_options will be more natural when you are\r\n> > > converting using and assigning them to this member.\r\n> > I don't think we use such names including units explicitly.\r\n> > Could you please tell me a similar example for this ?\r\n> >\r\n> \r\n> Regex search \"\\..*_ms[e\\s]\" finds some members where the unit is in the\r\n> member name.\r\n> \r\n> e.g. delay_ms (see EnableTimeoutParams in timeout.h) e.g. interval_in_ms (see\r\n> timeout_paramsin timeout.c)\r\n> \r\n> Regex search \".*_ms[e\\s]\" finds many local variables where the unit is in the\r\n> variable name\r\n> \r\n> > > ======\r\n> > >\r\n> > > 16. 
src/include/catalog/pg_subscription.h\r\n> > >\r\n> > > + int64 subapplydelay; /* Replication apply delay */\r\n> > > +\r\n> > >\r\n> > > Consider renaming this as 'subapplydelayms' to make the units perfectly\r\n> clear.\r\n> > Similar to the 5th comments, I can't find any examples for this.\r\n> > I'd like to keep it general, which makes me feel it is more aligned\r\n> > with existing codes.\r\nHi, thank you for sharing this info.\r\n\r\nI searched the code for places where adding \"ms\"\r\nat the end of the variable names would have merit.\r\nAdding the units would help when calculating or converting some time-related values.\r\nIn this patch there are only a couple of such functions, like maybe_delay_apply()\r\nor, for conversion of time, parse_subscription_options.\r\n\r\nI feel changing just a couple of structures might be awkward,\r\nwhile changing all internal structures is too much. So, I kept the names\r\nas they were, after the modifications shared in [1].\r\nIf you have a better idea, please let me know.\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83730C23CB7D29E57368BECDEDE29%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 12 Dec 2022 11:09:18 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Saturday, December 10, 2022 12:08 AM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\n> On Friday, December 9, 2022 3:38 PM Kuroda, Hayato/黒田 隼人\n> <kuroda.hayato@fujitsu.com> wrote:\n> > Thanks for reporting! I have analyzed the problem and found the root cause.\n> >\n> > This feature seemed not to work on 32-bit OSes. This was because the\n> > calculation of delay_time was wrong. The first argument of this should\n> > be TimestampTz datatype, not Datum:\n> >\n> > ```\n> > + /* Set apply delay */\n> > + delay_until =\n> > TimestampTzPlusMilliseconds(TimestampTzGetDatum(ts),\n> > +\n> > + MySubscription->applydelay);\n> > ```\n> >\n> > In more detail, the datum representation of int64 contains the value\n> > itself on 64-bit OSes, but it contains the pointer to the value on 32-bit.\n> >\n> > After modifying the issue, this will work on 32-bit environments.\n> Thank you for your analysis.\n> \n> Yeah, it seems we conduct addition of values to the pointer value, which is\n> returned from the call of TimestampTzGetDatum(), on 32-bit machine by\n> mistake.\n> \n> I'll remove the call in my next version.\nApplied this fix in the last version, shared in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83730C23CB7D29E57368BECDEDE29%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 11:20:28 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 1:04 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> This is a reply for later part of your e-mail.\n>\n> > > (2) About the timeout issue\n> > >\n> > > When having a look at the physical replication internals,\n> > > it conducts sending feedback and application of delay separately on different\n> > processes.\n> > > OTOH, the logical replication needs to achieve those within one process.\n> > >\n> > > When we want to apply delay and avoid the timeout,\n> > > we should not store all the transactions data into memory.\n> > > So, one approach for this is to serialize the transaction data and after the delay,\n> > > we apply the transactions data.\n> > >\n> >\n> > It is not clear to me how this will avoid a timeout.\n>\n> At first, the reason why the timeout occurs is that while delaying the apply\n> worker neither reads messages from the walsender nor replies to it.\n> The worker's last_recv_timeout will be not updated because it does not receive\n> messages. This leads to wal_receiver_timeout. Similarly, the walsender's\n> last_processing will be not updated and exit due to the timeout because the\n> worker does not reply to upstream.\n>\n> Based on the above, we thought that workers must receive and handle messages\n> evenif they are delaying applying transactions. In more detail, workers must\n> iterate the outer loop in LogicalRepApplyLoop().\n>\n> If workers receive transactions but they need to delay applying, they must keep\n> messages somewhere. So we came up with the idea that workers serialize changes\n> once and apply later. Our basic design is as follows:\n>\n> * All transactions areserialized to files if min_apply_delay is set to non-zero.\n> * After receiving the commit message and spending time, workers reads and\n> applies spooled messages\n>\n\nI think this may be more work than required because in some cases\ndoing I/O just to delay xacts will later lead to more work. 
Can't we\nsend some ping to walsender to communicate that walreceiver is alive?\nWe already seem to be sending a ping in LogicalRepApplyLoop if we\nhaven't heard anything from the server for more than\nwal_receiver_timeout / 2. Now, it is possible that the walsender is\nterminated due to some other reason and we need to see if we can\ndetect that or if it will only be detected once the walreceiver's\ndelay time is over.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Dec 2022 18:10:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hello.\n\nAt Mon, 12 Dec 2022 07:42:30 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in \n> On Monday, December 12, 2022 2:54 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > I asked about unexpected walsender termination caused by this feature but I\n> > think I didn't received an answer for it and the behavior is still exists.\n..\n> Thank you so much for your report!\n> Yes. Currently, how to deal with the timeout issue is under discussion.\n> Some analysis about the root cause are also there.\n> \n> Kindly have a look at [1].\n> \n> \n> [1] - https://www.postgresql.org/message-id/TYAPR01MB58669394A67F2340B82E42D1F5E29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nOops. Thank you for the pointer. Will visit there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Dec 2022 10:23:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 12 Dec 2022 18:10:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Dec 12, 2022 at 1:04 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> > once and apply later. Our basic design is as follows:\n> >\n> > * All transactions areserialized to files if min_apply_delay is set to non-zero.\n> > * After receiving the commit message and spending time, workers reads and\n> > applies spooled messages\n> >\n> \n> I think this may be more work than required because in some cases\n> doing I/O just to delay xacts will later lead to more work. Can't we\n> send some ping to walsender to communicate that walreceiver is alive?\n> We already seem to be sending a ping in LogicalRepApplyLoop if we\n> haven't heard anything from the server for more than\n> wal_receiver_timeout / 2. Now, it is possible that the walsender is\n> terminated due to some other reason and we need to see if we can\n> detect that or if it will only be detected once the walreceiver's\n> delay time is over.\n\nFWIW, I thought the same thing as Amit.\n\nWhat we should do here is logrep workers notifying to walsender that\nit's living and the communication in-between is fine, and maybe the\nworker's status. Spontaneous send_feedback() calls while delaying will\nbe sufficient for this purpose. We might need to suppress extra forced\nfeedbacks instead. In contrast the worker doesn't need to bother to\nknow whether the peer is living until it receives the next data. But\nwe might need to adjust the wait_time in LogicalRepApplyLoop().\n\nBut, I'm not sure what will happen when walsender is blocked by\nbuffer-full for a long time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Dec 2022 11:05:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, December 7, 2022 12:00 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Tue, 6 Dec 2022 11:08:43 -0800, Andres Freund <andres@anarazel.de> wrote\n> in\n> > Hi,\n> >\n> > The tests fail on cfbot:\n> > https://cirrus-ci.com/task/4533866329800704\n> >\n> > They only seem to fail on 32bit linux.\n> >\n> > https://api.cirrus-ci.com/v1/artifact/task/4533866329800704/testrun/bu\n> > ild-32/testrun/subscription/032_apply_delay/log/regress_log_032_apply_\n> > delay\n> > [06:27:10.628](0.138s) ok 2 - check if the new rows were applied to\n> > subscriber timed out waiting for match: (?^:logical replication apply delay) at\n> /tmp/cirrus-ci-build/src/test/subscription/t/032_apply_delay.pl line 124.\n> \n> It fails for me on 64bit Linux.. (Rocky 8.7)\n> \n> > t/032_apply_delay.pl ............... Dubious, test returned 29 (wstat\n> > 7424, 0x1d00) No subtests run\n> ..\n> > t/032_apply_delay.pl (Wstat: 7424 Tests: 0 Failed: 0)\n> > Non-zero exit status: 29\n> > Parse errors: No plan found in TAP output\nHi, Horiguchi-san\n\n\nSorry for being late.\n\nWe couldn't reproduce this failure and\nfind the same type of failure on the cfbot from the past failures.\nIt seems no subtests run in your environment.\n\nCould you please share the log files, if you have\nor when you can reproduce this ?\n\nFYI, the latest patch is attached in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83730C23CB7D29E57368BECDEDE29%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 02:28:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 13 Dec 2022 02:28:49 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in \n> On Wednesday, December 7, 2022 12:00 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> We couldn't reproduce this failure and\n> find the same type of failure on the cfbot from the past failures.\n> It seems no subtests run in your environment.\n\nVery sorry for that. The test script is found to be a left-over file\nin a git-reset'ed working tree. Please forget about it.\n\nFWIW, the latest patch passed make-world for me on Rocky8/x86_64.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 13 Dec 2022 13:27:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, December 13, 2022 1:27 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Tue, 13 Dec 2022 02:28:49 +0000, \"Takamichi Osumi (Fujitsu)\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > On Wednesday, December 7, 2022 12:00 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > We couldn't reproduce this failure and find the same type of failure\n> > on the cfbot from the past failures.\n> > It seems no subtests run in your environment.\n> \n> Very sorry for that. The test script is found to be a left-over file in a git-reset'ed\n> working tree. Please forget about it.\n> \n> FWIW, the latest patch passed make-world for me on Rocky8/x86_64.\nHi,\n\n\nNo problem at all.\nAlso, thank you for your testing and confirming the latest one!\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 13 Dec 2022 04:43:51 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 7:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 12 Dec 2022 18:10:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Mon, Dec 12, 2022 at 1:04 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > > once and apply later. Our basic design is as follows:\n> > >\n> > > * All transactions areserialized to files if min_apply_delay is set to non-zero.\n> > > * After receiving the commit message and spending time, workers reads and\n> > > applies spooled messages\n> > >\n> >\n> > I think this may be more work than required because in some cases\n> > doing I/O just to delay xacts will later lead to more work. Can't we\n> > send some ping to walsender to communicate that walreceiver is alive?\n> > We already seem to be sending a ping in LogicalRepApplyLoop if we\n> > haven't heard anything from the server for more than\n> > wal_receiver_timeout / 2. Now, it is possible that the walsender is\n> > terminated due to some other reason and we need to see if we can\n> > detect that or if it will only be detected once the walreceiver's\n> > delay time is over.\n>\n> FWIW, I thought the same thing with Amit.\n>\n> What we should do here is logrep workers notifying to walsender that\n> it's living and the communication in-between is fine, and maybe the\n> worker's status. Spontaneous send_feedback() calls while delaying will\n> be sufficient for this purpose. We might need to supress extra forced\n> feedbacks instead. In contrast the worker doesn't need to bother to\n> know whether the peer is living until it receives the next data. But\n> we might need to adjust the wait_time in LogicalRepApplyLoop().\n>\n> But, I'm not sure what will happen when walsender is blocked by\n> buffer-full for a long time.\n>\n\nYeah, I think ideally it will timeout but if we have a solution like\nduring delay, we keep sending ping messages time-to-time, it should\nwork fine. 
However, that needs to be verified. Do you see any reasons\nwhy that won't work?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 13 Dec 2022 17:05:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 13 Dec 2022 17:05:35 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Dec 13, 2022 at 7:35 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 12 Dec 2022 18:10:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> Yeah, I think ideally it will timeout but if we have a solution like\n> during delay, we keep sending ping messages time-to-time, it should\n> work fine. However, that needs to be verified. Do you see any reasons\n> why that won't work?\n\nAh. I meant that \"I have no clear idea of whether\" by \"I'm not sure\".\n\nI looked there a bit further. Finally ProcessPendingWrites() waits for the\nstreaming socket to be writable, so no critical problem is found here.\nThat being said, it might be better if ProcessPendingWrites() refrained\nfrom sending consecutive keepalives while waiting; a 30s ping timeout\nand a 1h delay may result in 120 successive pings. It might not be a big\ndeal but..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Dec 2022 10:35:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Horiguchi-san, Amit,\n\n> > On Tue, Dec 13, 2022 at 7:35 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 12 Dec 2022 18:10:00 +0530, Amit Kapila\n> <amit.kapila16@gmail.com> wrote in\n> > Yeah, I think ideally it will timeout but if we have a solution like\n> > during delay, we keep sending ping messages time-to-time, it should\n> > work fine. However, that needs to be verified. Do you see any reasons\n> > why that won't work?\n\nI have implemented and tested that workers wake up per wal_receiver_timeout/2\nand send keepalive. Basically it works well, but I found two problems.\nDo you have any good suggestions about them?\n\n1)\n\nWith this PoC at present, workers calculate sending intervals based on their\nwal_receiver_timeout, and it is suppressed when the parameter is set to zero.\n\nThis means that there is a possibility that the walsender times out when wal_sender_timeout\nin publisher and wal_receiver_timeout in subscriber is different.\nSupposing that wal_sender_timeout is 2min, wal_receiver_timeout is 5min,\nand min_apply_delay is 10min. The worker on subscriber will wake up per 2.5min and\nsend keepalives, but walsender exits before the message arrives to publisher.\n\nOne idea to avoid that is to send the min_apply_delay subscriber option to publisher\nand compare them, but it may not be sufficient, because XXX_timeout GUC parameters\ncould be modified later.\n\n2)\n\nThe issue reported by Vignesh-san[1] still remains. I have already analyzed that [2];\nthe root cause is that flushed WAL is not updated and sent to the publisher. Even\nif workers send keepalive messages to pub during the delay, the flushed position\ncannot be modified.\n\n[1]: https://www.postgresql.org/message-id/CALDaNm1vT8qNBqHivtAgYur-5-YkwF026VHtw9srd4fsdeaufA%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866F6BE7399E6343A96E016F51C9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 10:46:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Dec 9, 2022 at 10:49 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Hi Vignesh,\n>\n> > In the case of physical replication by setting\n> > recovery_min_apply_delay, I noticed that both primary and standby\n> > nodes were getting stopped successfully immediately after the stop\n> > server command. In case of logical replication, stop server fails:\n> > pg_ctl -D publisher -l publisher.log stop -c\n> > waiting for server to shut\n> > down...............................................................\n> > failed\n> > pg_ctl: server does not shut down\n> >\n> > In case of logical replication, the server does not get stopped\n> > because the walsender process is not able to exit:\n> > ps ux | grep walsender\n> > vignesh 1950789 75.3 0.0 8695216 22284 ? Rs 11:51 1:08\n> > postgres: walsender vignesh [local] START_REPLICATION\n>\n> Thanks for reporting the issue. I analyzed about it.\n>\n>\n> This issue has occurred because the apply worker cannot reply during the delay.\n> I think we may have to modify the mechanism that delays applying transactions.\n>\n> When walsender processes are requested to shut down, it can shut down only after\n> that all the sent WALs are replicated on the subscriber. This check is done in\n> WalSndDone(), and the replicated position will be updated when processes handle\n> the reply messages from a subscriber, in ProcessStandbyReplyMessage().\n>\n> In the case of physical replication, the walreciever can receive WALs and reply\n> even if the application is delayed. It means that the replicated position will\n> be transported to the publisher side immediately. So the walsender can exit.\n>\n\nI think it is not only the replicated positions but it also checks if\nthere is any pending send in WalSndDone(). Why is it a must to send\nall pending WAL and confirm that it is flushed on standby before the\nshutdown for physical standby? Is it because otherwise, we may lose\nthe required WAL? 
I am asking because it is better to see if those\nconditions apply to logical replication as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:29:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 4:16 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Horiguchi-san, Amit,\n>\n> > > On Tue, Dec 13, 2022 at 7:35 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Mon, 12 Dec 2022 18:10:00 +0530, Amit Kapila\n> > <amit.kapila16@gmail.com> wrote in\n> > > Yeah, I think ideally it will timeout but if we have a solution like\n> > > during delay, we keep sending ping messages time-to-time, it should\n> > > work fine. However, that needs to be verified. Do you see any reasons\n> > > why that won't work?\n>\n> I have implemented and tested that workers wake up per wal_receiver_timeout/2\n> and send keepalive. Basically it works well, but I found two problems.\n> Do you have any good suggestions about them?\n>\n> 1)\n>\n> With this PoC at present, workers calculate sending intervals based on its\n> wal_receiver_timeout, and it is suppressed when the parameter is set to zero.\n>\n> This means that there is a possibility that walsender is timeout when wal_sender_timeout\n> in publisher and wal_receiver_timeout in subscriber is different.\n> Supposing that wal_sender_timeout is 2min, wal_receiver_tiemout is 5min,\n> and min_apply_delay is 10min. The worker on subscriber will wake up per 2.5min and\n> send keepalives, but walsender exits before the message arrives to publisher.\n>\n> One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> could be modified later.\n>\n\nHow about restarting the apply worker if min_apply_delay changes? Will\nthat be sufficient?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:30:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 14 Dec 2022 10:46:17 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> I have implemented and tested that workers wake up per wal_receiver_timeout/2\n> and send keepalive. Basically it works well, but I found two problems.\n> Do you have any good suggestions about them?\n> \n> 1)\n> \n> With this PoC at present, workers calculate sending intervals based on its\n> wal_receiver_timeout, and it is suppressed when the parameter is set to zero.\n> \n> This means that there is a possibility that walsender is timeout when wal_sender_timeout\n> in publisher and wal_receiver_timeout in subscriber is different.\n> Supposing that wal_sender_timeout is 2min, wal_receiver_tiemout is 5min,\n\nIt seems to me wal_receiver_status_interval is better for this use.\nIt's enough for us to document that \"wal_r_s_interval should be\nshorter than wal_sender_timeout/2 especially when logical replication\nconnection is using min_apply_delay. Otherwise you will suffer\nrepeated termination of walsender\".\n\n> and min_apply_delay is 10min. The worker on subscriber will wake up per 2.5min and\n> send keepalives, but walsender exits before the message arrives to publisher.\n> \n> One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> could be modified later.\n\n# Anyway, I don't think such an asymmetric setup is preferable.\n\n> 2)\n> \n> The issue reported by Vignesh-san[1] has still remained. I have already analyzed that [2],\n> the root cause is that flushed WAL is not updated and sent to the publisher. Even\n> if workers send keepalive messages to pub during the delay, the flushed position\n> cannot be modified.\n\nI didn't look closer but the cause, I guess, is that walsender doesn't die\nuntil all WAL has been sent, while logical delay chokes the replication\nstream. Allowing walsender to finish ignoring replication status\nwouldn't be great. One idea is to let logical workers send delaying\nstatus.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:46:11 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 14 Dec 2022 16:30:28 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Dec 14, 2022 at 4:16 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> > One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> > and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> > could be modified later.\n> >\n> \n> How about restarting the apply worker if min_apply_delay changes? Will\n> that be sufficient?\n\nMmm. If publisher knows that value, isn't it able to delay *sending*\ndata in the first place? This will resolve many known issues including\nwalsender's un-terminatability, possible buffer-full and status packet\nexchanging.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:52:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 7:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 14 Dec 2022 16:30:28 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Dec 14, 2022 at 4:16 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > > One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> > > and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> > > could be modified later.\n> > >\n> >\n> > How about restarting the apply worker if min_apply_delay changes? Will\n> > that be sufficient?\n>\n> Mmm. If publisher knows that value, isn't it albe to delay *sending*\n> data in the first place? This will resolve many known issues including\n> walsender's un-terminatability, possible buffer-full and status packet\n> exchanging.\n>\n\nYeah, but won't it change the meaning of this parameter? Say the\nsubscriber was busy enough that it doesn't need to add an additional\ndelay before applying a particular transaction(s) but adding a delay\nto such a transaction on the publisher will actually make it take much\nlonger to reflect than expected. We probably need to name this\nparameter as min_send_delay if we want to do what you are saying and\nthen I don't know if it serves the actual need and also it will be\ndifferent from what we do in physical standby.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Dec 2022 09:18:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 7:16 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 14 Dec 2022 10:46:17 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > I have implemented and tested that workers wake up per wal_receiver_timeout/2\n> > and send keepalive. Basically it works well, but I found two problems.\n> > Do you have any good suggestions about them?\n> >\n> > 1)\n> >\n> > With this PoC at present, workers calculate sending intervals based on its\n> > wal_receiver_timeout, and it is suppressed when the parameter is set to zero.\n> >\n> > This means that there is a possibility that walsender is timeout when wal_sender_timeout\n> > in publisher and wal_receiver_timeout in subscriber is different.\n> > Supposing that wal_sender_timeout is 2min, wal_receiver_tiemout is 5min,\n>\n> It seems to me wal_receiver_status_interval is better for this use.\n> It's enough for us to docuemnt that \"wal_r_s_interval should be\n> shorter than wal_sener_timeout/2 especially when logical replication\n> connection is using min_apply_delay. Otherwise you will suffer\n> repeated termination of walsender\".\n>\n\nThis sounds reasonable to me.\n\n> > and min_apply_delay is 10min. The worker on subscriber will wake up per 2.5min and\n> > send keepalives, but walsender exits before the message arrives to publisher.\n> >\n> > One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> > and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> > could be modified later.\n>\n> # Anyway, I don't think such asymmetric setup is preferable.\n>\n> > 2)\n> >\n> > The issue reported by Vignesh-san[1] has still remained. I have already analyzed that [2],\n> > the root cause is that flushed WAL is not updated and sent to the publisher. 
Even\n> > if workers send keepalive messages to pub during the delay, the flushed position\n> > cannot be modified.\n>\n> I didn't look closer but the cause I guess is walsender doesn't die\n> until all WAL has been sent, while logical delay chokes replication\n> stream.\n>\n\nRight, I also think so.\n\n> Allowing walsender to finish ignoring replication status\n> wouldn't be great.\n>\n\nYes, that would be ideal. But do you know why that is a must?\n\n> One idea is to let logical workers send delaying\n> status.\n>\n\nHow can that help?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Dec 2022 09:23:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 15 Dec 2022 09:23:12 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Dec 15, 2022 at 7:16 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Allowing walsender to finish ignoring replication status\n> > wouldn't be great.\n> >\n> \n> Yes, that would be ideal. But do you know why that is a must?\n\nI believe a graceful shutdown (fast and smart) of a replication set is expected to be in sync. Of course we can change the policy to allow walsender to stop before confirming all WAL have been applied. However walsender doesn't have an idea of whether the peer is intentionally delaying or not.\n\n> > One idea is to let logical workers send delaying\n> > status.\n> >\n> \n> How can that help?\n\nIf the logical worker notifies \"I'm intentionally pausing replication for\nnow, so if you want to shut down, please go ahead ignoring me\",\nthe publisher can legally run a (kind of) dirty shutdown.\n\n# It looks a bit too much, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 13:14:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 15 Dec 2022 09:18:55 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Dec 15, 2022 at 7:22 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 14 Dec 2022 16:30:28 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Wed, Dec 14, 2022 at 4:16 PM Hayato Kuroda (Fujitsu)\n> > > <kuroda.hayato@fujitsu.com> wrote:\n> > > > One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> > > > and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> > > > could be modified later.\n> > > >\n> > >\n> > > How about restarting the apply worker if min_apply_delay changes? Will\n> > > that be sufficient?\n> >\n> > Mmm. If publisher knows that value, isn't it albe to delay *sending*\n> > data in the first place? This will resolve many known issues including\n> > walsender's un-terminatability, possible buffer-full and status packet\n> > exchanging.\n> >\n> \n> Yeah, but won't it change the meaning of this parameter? Say the\n\nInternally it changes, but it does not change on its face. The difference is\nonly in where the choking point exists. If \".._apply_delay\" should\nwork literally, we should go the way Kuroda-san proposed. Namely,\n\"apply worker has received the data, but will delay applying it\". If\nwe technically name it correctly for the current behavior, it would be\n\"min_receive_delay\" or \"min_choking_interval\".\n\n> subscriber was busy enough that it doesn't need to add an additional\n> delay before applying a particular transaction(s) but adding a delay\n> to such a transaction on the publisher will actually make it take much\n> longer to reflect than expected. We probably need to name this\n\nIsn't the name min_apply_delay implying the same behavior? Even though\nthe delay time will be a bit prolonged.\n\n> parameter as min_send_delay if we want to do what you are saying and\n> then I don't know if it serves the actual need and also it will be\n> different from what we do in physical standby.\n\nIn the first place physical and logical replication work differently\nand the mechanism for delaying \"apply\" differs even in the current\nstate in terms of logrep delay choking the stream.\n\nI guess they cannot be different in terms of normal operation. But I'm\nnot sure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 13:41:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:11 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Dec 2022 09:18:55 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Dec 15, 2022 at 7:22 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 14 Dec 2022 16:30:28 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > On Wed, Dec 14, 2022 at 4:16 PM Hayato Kuroda (Fujitsu)\n> > > > <kuroda.hayato@fujitsu.com> wrote:\n> > > > > One idea to avoid that is to send the min_apply_delay subscriber option to publisher\n> > > > > and compare them, but it may be not sufficient. Because XXX_timout GUC parameters\n> > > > > could be modified later.\n> > > > >\n> > > >\n> > > > How about restarting the apply worker if min_apply_delay changes? Will\n> > > > that be sufficient?\n> > >\n> > > Mmm. If publisher knows that value, isn't it albe to delay *sending*\n> > > data in the first place? This will resolve many known issues including\n> > > walsender's un-terminatability, possible buffer-full and status packet\n> > > exchanging.\n> > >\n> >\n> > Yeah, but won't it change the meaning of this parameter? Say the\n>\n> Internally changes, but does not change on its face. The difference is\n> only in where the choking point exists. If \".._apply_delay\" should\n> work literally, we should go the way Kuroda-san proposed. Namely,\n> \"apply worker has received the data, but will deilay applying it\". If\n> we technically name it correctly for the current behavior, it would be\n> \"min_receive_delay\" or \"min_choking_interval\".\n>\n> > subscriber was busy enough that it doesn't need to add an additional\n> > delay before applying a particular transaction(s) but adding a delay\n> > to such a transaction on the publisher will actually make it take much\n> > longer to reflect than expected. We probably need to name this\n>\n> Isn't the name min_apply_delay implying the same behavior? 
Even though\n> the delay time will be a bit prolonged.\n>\n\nSorry, I don't understand what you intend to say on this point. Above,\nI mean that the currently proposed patch won't have such a\nproblem but if we apply the delay on the publisher the problem can happen.\n\n> > parameter as min_send_delay if we want to do what you are saying and\n> > then I don't know if it serves the actual need and also it will be\n> > different from what we do in physical standby.\n>\n> In the first place phisical and logical replication works differently\n> and the mechanism to delaying \"apply\" differs even in the current\n> state in terms of logrep delay choking stream.\n>\n\nI think the first preference is to make it work in a similar way (as\nmuch as possible) to how this parameter works in physical standby and\nif that is not at all possible then we may consider other approaches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:29:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 15 Dec 2022 10:29:17 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Dec 15, 2022 at 10:11 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 15 Dec 2022 09:18:55 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Thu, Dec 15, 2022 at 7:22 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > subscriber was busy enough that it doesn't need to add an additional\n> > > delay before applying a particular transaction(s) but adding a delay\n> > > to such a transaction on the publisher will actually make it take much\n> > > longer to reflect than expected. We probably need to name this\n> >\n> > Isn't the name min_apply_delay implying the same behavior? Even though\n> > the delay time will be a bit prolonged.\n> >\n> \n> Sorry, I don't understand what you intend to say in this point. In\n> above, I mean that the currently proposed patch won't have such a\n> problem but if we apply delay on publisher the problem can happen.\n\nAre you saying that the sender-side delay lets the whole transaction\n(if it has not been streamed out) stay on the sender side? If so... yeah,\nI agree that it is undesirable.\n\n> > > parameter as min_send_delay if we want to do what you are saying and\n> > > then I don't know if it serves the actual need and also it will be\n> > > different from what we do in physical standby.\n> >\n> > In the first place phisical and logical replication works differently\n> > and the mechanism to delaying \"apply\" differs even in the current\n> > state in terms of logrep delay choking stream.\n> >\n> \n> I think the first preference is to make it work in a similar way (as\n> much as possible) to how this parameter works in physical standby and\n> if that is not at all possible then we may consider other approaches.\n\nI understood that. 
However, still I think choking the stream on the\nreceiver-side alone is kind of ugly since it is breaking the protocol\nassumption, that is, the in-band maintenance packets are processed in\nan on-time manner on the peer under normal operation (even though\ninvolving some delays for some natural reasons). In this regard, I am\ninclined to be in favor of Kuroda-san's proposal.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 14:52:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Horiguchi-san, Amit,\n\n> > Yes, that would be ideal. But do you know why that is a must?\n> \n> I believe a graceful shutdown (fast and smart) of a replication set is expected to\n> be in sync. Of course we can change the policy to allow walsnder to stop before\n> confirming all WAL have been applied. However walsender doesn't have an idea\n> of wheter the peer is intentionally delaying or not.\n\nThis mechanism was introduced by 985bd7[1], which was needed to support a\n\"clean\" switchover. I think it is needed for physical replication, but it is not\nclear for the logical case.\n\nWhen the postmaster is stopped in fast or smart mode, we expected that all\nmodifications were received by secondary. This requirement seems to be not changed\nfrom the initial commit.\n\nBefore 985bd7, the walsender exited just after sending the final WAL, which meant\nthat sometimes the last packet could not reach to secondary. So there was a possibility\nof failing to reboot the primary as a new secondary because the new primary does\nnot have the last WAL record. To avoid the above walsender started waiting for\nflush before exiting.\n\nBut in the case of logical replication, I'm not sure whether this limitation is\nreally needed or not. I think it may be OK that walsender exits without waiting,\nin case of delaying applies. Because we don't have to consider the above issue\nfor logical replication.\n\n[1]: https://github.com/postgres/postgres/commit/985bd7d49726c9f178558491d31a570d47340459\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 15 Dec 2022 08:12:52 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 1:42 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Horiguchi-san, Amit,\n>\n> > > Yes, that would be ideal. But do you know why that is a must?\n> >\n> > I believe a graceful shutdown (fast and smart) of a replication set is expected to\n> > be in sync. Of course we can change the policy to allow walsnder to stop before\n> > confirming all WAL have been applied. However walsender doesn't have an idea\n> > of wheter the peer is intentionally delaying or not.\n>\n> This mechanism was introduced by 985bd7[1], which was needed to support a\n> \"clean\" switchover. I think it is needed for physical replication, but it is not\n> clear for the logical case.\n>\n> When the postmaster is stopped in fast or smart mode, we expected that all\n> modifications were received by secondary. This requirement seems to be not changed\n> from the initial commit.\n>\n> Before 985bd7, the walsender exited just after sending the final WAL, which meant\n> that sometimes the last packet could not reach to secondary. So there was a possibility\n> of failing to reboot the primary as a new secondary because the new primary does\n> not have the last WAL record. To avoid the above walsender started waiting for\n> flush before exiting.\n>\n> But in the case of logical replication, I'm not sure whether this limitation is\n> really needed or not. I think it may be OK that walsender exits without waiting,\n> in case of delaying applies. 
Because we don't have to consider the above issue\n> for logical replication.\n>\n\nI also don't see the need for this mechanism for logical replication,\nand in fact, why do we need to even wait for sending the existing WAL?\n\nI think the reason why we don't need to wait for logical replication\nis that after the restart, we always start sending WAL from the\nlocation requested by the subscriber, or till the point where the\npublisher knows the confirmed flush location of the subscriber.\nConsider another case where after restart publisher (node-1) wants to\nact as a subscriber for the previous subscriber (node-2). Now, the new\nsubscriber (node-1) won't have a way to tell the new publisher\n(node-2) that starts from the location where the node-1 went down as\nWAL locations between publisher and subscriber need not be same.\n\nThis brings us to the question of whether users can use logical\nreplication for the scenario where they want the old master to follow\nthe new master after the restart which we typically do in physical\nreplication, if so how?\n\nAnother related point to consider is what is the behavior of\nsynchronous replication when shutdown has been performed both in the\ncase of physical and logical replication especially when the\ntime-delayed replication feature is enabled?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Dec 2022 09:21:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 11:22 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Dec 2022 10:29:17 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Dec 15, 2022 at 10:11 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 15 Dec 2022 09:18:55 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > On Thu, Dec 15, 2022 at 7:22 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > subscriber was busy enough that it doesn't need to add an additional\n> > > > delay before applying a particular transaction(s) but adding a delay\n> > > > to such a transaction on the publisher will actually make it take much\n> > > > longer to reflect than expected. We probably need to name this\n> > >\n> > > Isn't the name min_apply_delay implying the same behavior? Even though\n> > > the delay time will be a bit prolonged.\n> > >\n> >\n> > Sorry, I don't understand what you intend to say in this point. In\n> > above, I mean that the currently proposed patch won't have such a\n> > problem but if we apply delay on publisher the problem can happen.\n>\n> Are you saing about the sender-side delay lets the whole transaction\n> (if it have not streamed out) stay on the sender side?\n>\n\nIt will not stay on the sender side forever but rather will be sent\nafter the min_apply_delay. The point I wanted to raise is that maybe\nthe delay won't need to be applied where we will end up delaying it.\nBecause when we apply the delay on apply side, it will take into\naccount the other load of apply side. I don't know how much it matters\nbut it appears logical to add the delay on applying side.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Dec 2022 09:49:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\n\n> I also don't see the need for this mechanism for logical replication,\n> and in fact, why do we need to even wait for sending the existing WAL?\n\nIs it meant that logicalrep walsenders do not have to track WalSndCaughtUp and\nany pending data in the output buffer?\n\n> I think the reason why we don't need to wait for logical replication\n> is that after the restart, we always start sending WAL from the\n> location requested by the subscriber, or till the point where the\n> publisher knows the confirmed flush location of the subscriber.\n> Consider another case where after restart publisher (node-1) wants to\n> act as a subscriber for the previous subscriber (node-2). Now, the new\n> subscriber (node-1) won't have a way to tell the new publisher\n> (node-2) that starts from the location where the node-1 went down as\n> WAL locations between publisher and subscriber need not be same.\n\nYou mean to say that such mechanism was made for supporting switchover, but logical\nreplication cannot do because new subscriber cannot request definitively unknown\nchanges for it, right? It seems reasonable to me.\n\n> This brings us to the question of whether users can use logical\n> replication for the scenario where they want the old master to follow\n> the new master after the restart which we typically do in physical\n> replication, if so how?\n\nMaybe to support such use-case, 2-way replication is needed\n(but this is out-of-scope of this thread).\n\n> Another related point to consider is what is the behavior of\n> synchronous replication when shutdown has been performed both in the\n> case of physical and logical replication especially when the\n> time-delayed replication feature is enabled?\n\nIn physical replication without any failures, it seems that users can stop primary\nserver even if the applications are delaying on secondary. This is because sent WALs\nare immediately flushed on secondary and walreceiver replies its position. 
The\ntransaction has already been committed at that time, and the transported changes\nwill be applied on the secondary after some time.\n\nIIUC we can achieve that when logical walsenders do not consider the remote status\nwhile shutting down, but I want to hear other opinions and we must confirm by testing...\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n",
"msg_date": "Fri, 16 Dec 2022 06:41:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 12:11 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > I also don't see the need for this mechanism for logical replication,\n> > and in fact, why do we need to even wait for sending the existing WAL?\n>\n> Is it meant that logicalrep walsenders do not have to track WalSndCaughtUp and\n> any pending data in the output buffer?\n>\n\nI haven't checked the details but I think what you are saying is correct.\n\n>\n> > Another related point to consider is what is the behavior of\n> > synchronous replication when shutdown has been performed both in the\n> > case of physical and logical replication especially when the\n> > time-delayed replication feature is enabled?\n>\n> In physical replication without any failures, it seems that users can stop primary\n> server even if the applications are delaying on secondary. This is because sent WALs\n> are immediately flushed on secondary and walreceiver replies its position.\n>\n\nWhat happens when synchronous_commit's value is remote_apply and the\nuser has also set synchronous_standby_names to corresponding standby?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:35:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > > Another related point to consider is what is the behavior of\r\n> > > synchronous replication when shutdown has been performed both in the\r\n> > > case of physical and logical replication especially when the\r\n> > > time-delayed replication feature is enabled?\r\n> >\r\n> > In physical replication without any failures, it seems that users can stop primary\r\n> > server even if the applications are delaying on secondary. This is because sent\r\n> WALs\r\n> > are immediately flushed on secondary and walreceiver replies its position.\r\n> >\r\n> \r\n> What happens when synchronous_commit's value is remote_apply and the\r\n> user has also set synchronous_standby_names to corresponding standby?\r\n\r\nEven if synchronous_commit is set to remote_apply, the primary server can be\r\nshut down. The reason why walsender can exit is that it does not care whether\r\nWALs are \"applied\" or not. It just checks the \"flushed\" WAL\r\nposition, not the applied one.\r\n\r\nI think we should start another thread about changing the shut-down condition,\r\nso I forked one [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586668E50FC2447AD7F92491F5E89%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 22 Dec 2022 05:50:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Thursday, December 15, 2022 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Dec 15, 2022 at 7:16 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\n> wrote:\r\n> >\r\n> > At Wed, 14 Dec 2022 10:46:17 +0000, \"Hayato Kuroda (Fujitsu)\"\r\n> > <kuroda.hayato@fujitsu.com> wrote in\r\n> > > I have implemented and tested that workers wake up per\r\n> > > wal_receiver_timeout/2 and send keepalive. Basically it works well, but I\r\n> found two problems.\r\n> > > Do you have any good suggestions about them?\r\n> > >\r\n> > > 1)\r\n> > >\r\n> > > With this PoC at present, workers calculate sending intervals based\r\n> > > on its wal_receiver_timeout, and it is suppressed when the parameter is set\r\n> to zero.\r\n> > >\r\n> > > This means that there is a possibility that walsender is timeout\r\n> > > when wal_sender_timeout in publisher and wal_receiver_timeout in\r\n> subscriber is different.\r\n> > > Supposing that wal_sender_timeout is 2min, wal_receiver_tiemout is\r\n> > > 5min,\r\n> >\r\n> > It seems to me wal_receiver_status_interval is better for this use.\r\n> > It's enough for us to docuemnt that \"wal_r_s_interval should be\r\n> > shorter than wal_sener_timeout/2 especially when logical replication\r\n> > connection is using min_apply_delay. Otherwise you will suffer\r\n> > repeated termination of walsender\".\r\n> >\r\n> \r\n> This sounds reasonable to me.\r\nOkay, I changed the time interval to wal_receiver_status_interval\r\nand added this description about timeout.\r\n\r\nFYI, wal_receiver_status_interval by definition specifies\r\nthe minimum frequency for the WAL receiver process to send information\r\nto the upstream. So I utilized the value for WaitLatch directly.\r\nMy descriptions of the documentation change follow it.\r\n\r\n> > > and min_apply_delay is 10min. 
The worker on subscriber will wake up\r\n> > > per 2.5min and send keepalives, but walsender exits before the message\r\n> arrives to publisher.\r\n> > >\r\n> > > One idea to avoid that is to send the min_apply_delay subscriber\r\n> > > option to publisher and compare them, but it may be not sufficient.\r\n> > > Because XXX_timout GUC parameters could be modified later.\r\n> >\r\n> > # Anyway, I don't think such asymmetric setup is preferable.\r\n> >\r\n> > > 2)\r\n> > >\r\n> > > The issue reported by Vignesh-san[1] has still remained. I have\r\n> > > already analyzed that [2], the root cause is that flushed WAL is not\r\n> > > updated and sent to the publisher. Even if workers send keepalive\r\n> > > messages to pub during the delay, the flushed position cannot be modified.\r\n> >\r\n> > I didn't look closer but the cause I guess is walsender doesn't die\r\n> > until all WAL has been sent, while logical delay chokes replication\r\n> > stream.\r\nFor the (2) issue, a new thread has been created independently from this thread in [1].\r\nI'll leave any new changes to the thread on this point.\r\n\r\nAttached the updated patch.\r\nAgain, I used one basic patch in another thread to wake up logical replication worker\r\nshared in [2] for TAP tests.\r\n\r\n[1] - https://www.postgresql.org/message-id/TYAPR01MB586668E50FC2447AD7F92491F5E89@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[2] - https://www.postgresql.org/message-id/flat/20221122004119.GA132961%40nathanxps13\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Thu, 22 Dec 2022 06:01:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thursday, December 22, 2022 3:02 PM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\r\n> Attached the updated patch.\r\n> Again, I used one basic patch in another thread to wake up logical replication\r\n> worker shared in [2] for TAP tests.\r\nThe v11 caused a cfbot failure in [1]. But, failed tests looked irrelevant\r\nto the feature to me at present.\r\n\r\nWhile waiting for another test execution of cfbot, I'd like to check the detailed reason\r\nand update the patch if necessary.\r\n\r\n[1] - https://cirrus-ci.com/task/4580705867399168\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 23 Dec 2022 15:46:34 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 9:16 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, December 22, 2022 3:02 PM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\n> > Attached the updated patch.\n> > Again, I used one basic patch in another thread to wake up logical replication\n> > worker shared in [2] for TAP tests.\n> The v11 caused a cfbot failure in [1]. But, failed tests looked irrelevant\n> to the feature to me at present.\n>\n\nI have done some review for the patch and I have a few comments.\n\n1.\nA.\n+ <literal>wal_sender_timeout</literal> on the publisher. Otherwise, the\n+ walsender repeatedly terminates due to timeout during the delay of\n+ the subscriber.\n\n\nB.\n+/*\n+ * In order to avoid walsender's timeout during time delayed replication,\n+ * it's necessaary to keep sending feedbacks during the delay from the worker\n+ * process. Meanwhile, the feature delays the apply before starting the\n+ * transaction and thus we don't write WALs for the suspended changes during\n+ * the wait. 
Hence, in the case the worker process sends a feedback during the\n+ * delay, avoid having positions of the flushed and apply LSN overwritten by\n+ * the latest LSN.\n+ */\n\n- Seems like these two statements are conflicting, I mean if we are\nsending feedback then why the walsender will timeout?\n\n- Typo /necessaary/necessary\n\n\n2.\n+ *\n+ * During the time delayed replication, avoid reporting the suspeended\n+ * latest LSN are already flushed and written, to the publisher.\n */\nTypo /suspeended/suspended\n\n3.\n+ if (wal_receiver_status_interval > 0\n+ && diffms > wal_receiver_status_interval)\n+ {\n+ WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ (long) wal_receiver_status_interval,\n+ WAIT_EVENT_RECOVERY_APPLY_DELAY);\n+ send_feedback(last_received, true, false);\n+ }\n+ else\n+ WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ diffms,\n+ WAIT_EVENT_RECOVERY_APPLY_DELAY);\n\nI think here we should add some comments to explain about sending\nfeedback, something like what we have explained at the time of\ndefining the \"in_delaying_apply\" variable.\n\n4.\n\n+ * Although the delay is applied in BEGIN messages, streamed transactions\n+ * apply the delay in a STREAM COMMIT message. That's ok because no\n+ * changes have been applied yet (apply_spooled_messages() will do it).\n+ * The STREAM START message would be a natural choice for this delay but\n+ * there is no commit time yet (it will be available when the in-progress\n+ * transaction finishes), hence, it was not possible to apply a delay at\n+ * that time.\n+ */\n+ maybe_delay_apply(commit_data.committime);\n\nI am wondering how this will interact with the parallel apply worker\nwhere we do not spool the data in file? How are we going to get the\ncommit time of the transaction without applying the changes?\n\n5.\n+ /*\n+ * The following operations use these special functions to detect\n+ * overflow. 
Number of ms per informed days.\n+ */\n\nThis comment doesn't make much sense, I think this needs to be rephrased.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Dec 2022 14:12:41 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 2:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Dec 23, 2022 at 9:16 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n>\n> 4.\n>\n> + * Although the delay is applied in BEGIN messages, streamed transactions\n> + * apply the delay in a STREAM COMMIT message. That's ok because no\n> + * changes have been applied yet (apply_spooled_messages() will do it).\n> + * The STREAM START message would be a natural choice for this delay but\n> + * there is no commit time yet (it will be available when the in-progress\n> + * transaction finishes), hence, it was not possible to apply a delay at\n> + * that time.\n> + */\n> + maybe_delay_apply(commit_data.committime);\n>\n> I am wondering how this will interact with the parallel apply worker\n> where we do not spool the data in file? How are we going to get the\n> commit time of the transaction without applying the changes?\n>\n\nThere is no sane way to do this. So, I think these features won't work\ntogether, we can disable parallelism when this is active. Considering\nthat parallel apply is to speed up the transactions apply and this\nfeature is to slow down the apply, so even if they don't work together\nthat should be okay. Does that make sense?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Dec 2022 14:44:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 2:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 2:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Dec 23, 2022 at 9:16 PM Takamichi Osumi (Fujitsu)\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> >\n> > 4.\n> >\n> > + * Although the delay is applied in BEGIN messages, streamed transactions\n> > + * apply the delay in a STREAM COMMIT message. That's ok because no\n> > + * changes have been applied yet (apply_spooled_messages() will do it).\n> > + * The STREAM START message would be a natural choice for this delay but\n> > + * there is no commit time yet (it will be available when the in-progress\n> > + * transaction finishes), hence, it was not possible to apply a delay at\n> > + * that time.\n> > + */\n> > + maybe_delay_apply(commit_data.committime);\n> >\n> > I am wondering how this will interact with the parallel apply worker\n> > where we do not spool the data in file? How are we going to get the\n> > commit time of the transaction without applying the changes?\n> >\n>\n> There is no sane way to do this.\n\nYeah, there is no sane way to do it.\n\n So, I think these features won't work\n> together, we can disable parallelism when this is active. Considering\n> that parallel apply is to speed up the transactions apply and this\n> feature is to slow down the apply, so even if they don't work together\n> that should be okay. Does that make sense?\n\nYes, this makes sense.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Dec 2022 19:37:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 7:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Dec 26, 2022 at 2:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 26, 2022 at 2:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Fri, Dec 23, 2022 at 9:16 PM Takamichi Osumi (Fujitsu)\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > >\n> > > 4.\n> > >\n> > > + * Although the delay is applied in BEGIN messages, streamed transactions\n> > > + * apply the delay in a STREAM COMMIT message. That's ok because no\n> > > + * changes have been applied yet (apply_spooled_messages() will do it).\n> > > + * The STREAM START message would be a natural choice for this delay but\n> > > + * there is no commit time yet (it will be available when the in-progress\n> > > + * transaction finishes), hence, it was not possible to apply a delay at\n> > > + * that time.\n> > > + */\n> > > + maybe_delay_apply(commit_data.committime);\n> > >\n> > > I am wondering how this will interact with the parallel apply worker\n> > > where we do not spool the data in file? How are we going to get the\n> > > commit time of the transaction without applying the changes?\n> > >\n> >\n> > There is no sane way to do this.\n>\n> Yeah, there is no sane way to do it.\n>\n> So, I think these features won't work\n> > together, we can disable parallelism when this is active. Considering\n> > that parallel apply is to speed up the transactions apply and this\n> > feature is to slow down the apply, so even if they don't work together\n> > that should be okay. Does that make sense?\n>\n> Yes, this makes sense.\n>\n\nBTW, the blocking problem with this patch is to deal with shutdown as\ndiscussed in the thread [1]. In short, the problem is that at\nshutdown, we wait for walsender to send all pending data and ensure\nall data is flushed in the remote node. 
But, if the other node is\nwaiting due to a time-delayed apply then shutdown won't be successful.\nIt would be really great if you can let us know your thoughts in the\nthread [1] as that can help to move this work forward.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB586668E50FC2447AD7F92491F5E89%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 09:32:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 9:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n>\n> BTW, the blocking problem with this patch is to deal with shutdown as\n> discussed in the thread [1].\n\nI will have a look.\n\n In short, the problem is that at\n> shutdown, we wait for walsender to send all pending data and ensure\n> all data is flushed in the remote node. But, if the other node is\n> waiting due to a time-delayed apply then shutdown won't be successful.\n> It would be really great if you can let us know your thoughts in the\n> thread [1] as that can help to move this work forward.\n\nOkay, so you mean to say that with logical the shutdown will be\ndelayed until all the changes are applied on the subscriber but the\nsame is not true for physical standby? Is it because on physical\nstandby we flush the WAL before applying?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Dec 2022 11:42:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 11:42 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 27, 2022 at 9:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> >\n> > BTW, the blocking problem with this patch is to deal with shutdown as\n> > discussed in the thread [1].\n>\n> I will have a look.\n>\n\nThanks!\n\n> In short, the problem is that at\n> > shutdown, we wait for walsender to send all pending data and ensure\n> > all data is flushed in the remote node. But, if the other node is\n> > waiting due to a time-delayed apply then shutdown won't be successful.\n> > It would be really great if you can let us know your thoughts in the\n> > thread [1] as that can help to move this work forward.\n>\n> Okay, so you mean to say that with logical the shutdown will be\n> delayed until all the changes are applied on the subscriber but the\n> same is not true for physical standby?\n\nRight.\n\n> Is it because on physical\n> standby we flush the WAL before applying?\n>\n\nYes, the walreceiver first flushes the WAL before applying.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 12:14:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi hackers,\r\n\r\n> On Thursday, December 22, 2022 3:02 PM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> > Attached the updated patch.\r\n> > Again, I used one basic patch in another thread to wake up logical replication\r\n> > worker shared in [2] for TAP tests.\r\n> The v11 caused a cfbot failure in [1]. But, failed tests looked irrelevant\r\n> to the feature to me at present.\r\n> \r\n> While waiting for another test execution of cfbot, I'd like to check the detailed\r\n> reason\r\n> and update the patch if necessary.\r\n\r\nI have investigated the failure, and it seems that it was caused by VACUUM FREEZE.\r\nThe following lines were copied from the server log.\r\n\r\n```\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] LOG: server process (PID 37171) was terminated by signal 6: Abort trap\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] DETAIL: Failed process was running: VACUUM FREEZE tab_freeze;\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] LOG: terminating any other active server processes\r\n```\r\n\r\nThe same error has been raised in other threads [1], so we have concluded that this is not related to the patch.\r\nThe report was raised in another thread [2].\r\n\r\n[1]: https://cirrus-ci.com/task/5630405437554688\r\n[2]: https://www.postgresql.org/message-id/TYAPR01MB5866B24104FD80B5D7E65C3EF5ED9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 27 Dec 2022 07:09:31 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThanks for reviewing our patch! PSA new version patch set.\r\nAgain, 0001 was not made by us; it was brought from [1].\r\n\r\n> I have done some review for the patch and I have a few comments.\r\n>\r\n> 1.\r\n> A.\r\n> + <literal>wal_sender_timeout</literal> on the publisher. Otherwise, the\r\n> + walsender repeatedly terminates due to timeout during the delay of\r\n> + the subscriber.\r\n> \r\n> \r\n> B.\r\n> +/*\r\n> + * In order to avoid walsender's timeout during time delayed replication,\r\n> + * it's necessaary to keep sending feedbacks during the delay from the worker\r\n> + * process. Meanwhile, the feature delays the apply before starting the\r\n> + * transaction and thus we don't write WALs for the suspended changes during\r\n> + * the wait. Hence, in the case the worker process sends a feedback during the\r\n> + * delay, avoid having positions of the flushed and apply LSN overwritten by\r\n> + * the latest LSN.\r\n> + */\r\n> \r\n> - Seems like these two statements are conflicting, I mean if we are\r\n> sending feedback then why the walsender will timeout?\r\n\r\nIt is possible that a timeout occurs because the interval between feedback\r\nmessages may become longer than wal_sender_timeout. 
Reworded and added descriptions.\r\n\r\n> - Typo /necessaary/necessary\r\n\r\nFixed.\r\n\r\n> 2.\r\n> + *\r\n> + * During the time delayed replication, avoid reporting the suspeended\r\n> + * latest LSN are already flushed and written, to the publisher.\r\n> */\r\n> Typo /suspeended/suspended\r\n\r\nFixed.\r\n\r\n> 3.\r\n> + if (wal_receiver_status_interval > 0\r\n> + && diffms > wal_receiver_status_interval)\r\n> + {\r\n> + WaitLatch(MyLatch,\r\n> + WL_LATCH_SET | WL_TIMEOUT |\r\n> WL_EXIT_ON_PM_DEATH,\r\n> + (long) wal_receiver_status_interval,\r\n> + WAIT_EVENT_RECOVERY_APPLY_DELAY);\r\n> + send_feedback(last_received, true, false);\r\n> + }\r\n> + else\r\n> + WaitLatch(MyLatch,\r\n> + WL_LATCH_SET | WL_TIMEOUT |\r\n> WL_EXIT_ON_PM_DEATH,\r\n> + diffms,\r\n> + WAIT_EVENT_RECOVERY_APPLY_DELAY);\r\n> \r\n> I think here we should add some comments to explain about sending\r\n> feedback, something like what we have explained at the time of\r\n> defining the \"in_delaying_apply\" variable.\r\n\r\nAdded.\r\n\r\n> 4.\r\n> \r\n> + * Although the delay is applied in BEGIN messages, streamed transactions\r\n> + * apply the delay in a STREAM COMMIT message. That's ok because no\r\n> + * changes have been applied yet (apply_spooled_messages() will do it).\r\n> + * The STREAM START message would be a natural choice for this delay\r\n> but\r\n> + * there is no commit time yet (it will be available when the in-progress\r\n> + * transaction finishes), hence, it was not possible to apply a delay at\r\n> + * that time.\r\n> + */\r\n> + maybe_delay_apply(commit_data.committime);\r\n> \r\n> I am wondering how this will interact with the parallel apply worker\r\n> where we do not spool the data in file? 
How are we going to get the\r\n> commit time of the transaction without applying the changes?\r\n\r\nWe think that parallel apply workers should not delay the application of changes, because if\r\nthey delay transactions before committing, they may hold locks for a very long time.\r\n\r\n> 5.\r\n> + /*\r\n> + * The following operations use these special functions to detect\r\n> + * overflow. Number of ms per informed days.\r\n> + */\r\n> \r\n> This comment doesn't make much sense, I think this needs to be rephrased.\r\n\r\nChanged it to a simpler expression.\r\n\r\nWe have also fixed the wrong usage of wal_receiver_status_interval. We must convert\r\nthe unit from [s] to [ms] when it is passed to WaitLatch().\r\n\r\n\r\nNote that more than half of the modifications were done by Osumi-san.\r\n\r\n[1]: https://www.postgresql.org/message-id/20221215224721.GA694065%40nathanxps13\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 27 Dec 2022 09:29:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, 27 Dec 2022 at 14:59, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n> Note that more than half of the modifications are done by Osumi-san.\n\n1) This global variable can be removed as it is used only in\nsend_feedback which is called from maybe_delay_apply so we could pass\nit as a function argument:\n+ * delay, avoid having positions of the flushed and apply LSN overwritten by\n+ * the latest LSN.\n+ */\n+static bool in_delaying_apply = false;\n+static XLogRecPtr last_received = InvalidXLogRecPtr;\n+\n\n2) -1 gets converted to -1000\n\n+int64\n+interval2ms(const Interval *interval)\n+{\n+ int64 days;\n+ int64 ms;\n+ int64 result;\n+\n+ days = interval->month * INT64CONST(30);\n+ days += interval->day;\n+\n+ /* Detect whether the value of interval can cause an overflow. */\n+ if (pg_mul_s64_overflow(days, MSECS_PER_DAY, &result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"bigint out of range\")));\n+\n+ /* Adds portion time (in ms) to the previous result. 
*/\n+ ms = interval->time / INT64CONST(1000);\n+ if (pg_add_s64_overflow(result, ms, &result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"bigint out of range\")));\n\ncreate subscription sub7 connection 'dbname=regression host=localhost\nport=5432' publication pub1 with (min_apply_delay = '-1');\nERROR: -1000 ms is outside the valid range for parameter \"min_apply_delay\"\n\n\n3) This can be slightly reworded:\n+ <para>\n+ The length of time (ms) to delay the application of changes.\n+ </para></entry>\nto:\nDelay applying the changes by a specified amount of time(ms).\n\n4) maybe_delay_apply can be moved from apply_handle_stream_prepare to\napply_spooled_messages so that it is consistent with\nmaybe_start_skipping_changes:\n@@ -1120,6 +1240,19 @@ apply_handle_stream_prepare(StringInfo s)\n\n elog(DEBUG1, \"received prepare for streamed transaction %u\",\nprepare_data.xid);\n\n+ /*\n+ * Should we delay the current prepared transaction?\n+ *\n+ * Although the delay is applied in BEGIN PREPARE messages, streamed\n+ * prepared transactions apply the delay in a STREAM PREPARE message.\n+ * That's ok because no changes have been applied yet\n+ * (apply_spooled_messages() will do it). The STREAM START message does\n+ * not contain a prepare time (it will be available when the in-progress\n+ * prepared transaction finishes), hence, it was not possible to apply a\n+ * delay at that time.\n+ */\n+ maybe_delay_apply(prepare_data.prepare_time);\n\n\nThat way the call from apply_handle_stream_commit can also be removed.\n\n\n5) typo transfering should be transferring\n+ publisher and the current time on the subscriber. Time\nspent in logical\n+ decoding and in transfering the transaction may reduce the\nactual wait\n+ time. If the system clocks on publisher and subscriber are not\n\n6) feedbacks can be changed to feedback messages\n+ * it's necessary to keep sending feedbacks during the delay from the worker\n+ * process. 
Meanwhile, the feature delays the apply before starting the\n\n7)\n+ /*\n+ * Suppress overwrites of flushed and writtten positions by the lastest\n+ * LSN in send_feedback().\n+ */\n\n7a) typo writtten should be written\n\n7b) lastest should latest\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:31:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": ">\n> On Tue, 27 Dec 2022 at 14:59, Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> > Note that more than half of the modifications are done by Osumi-san.\n>\n\nPlease find a few minor comments.\n1.\n+ diffms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),\n+\n\n TimestampTzPlusMilliseconds(ts, MySubscription->minapplydelay));\n on unix, above code looks unaligned (copied from unix)\n\n2. same with:\n+ interval = DatumGetIntervalP(DirectFunctionCall3(interval_in,\n+\n\n CStringGetDatum(val),\n+\n\n ObjectIdGetDatum(InvalidOid),\n+\n\n Int32GetDatum(-1)));\nperhaps due to tabs?\n\n2. comment not clear:\n+ * During the time delayed replication, avoid reporting the suspended\n+ * latest LSN are already flushed and written, to the publisher.\n\n3.\n+ * Call send_feedback() to prevent the publisher from exiting by\n+ * timeout during the delay, when wal_receiver_status_interval is\n+ * available. The WALs for this delayed transaction is neither\n+ * written nor flushed yet, Thus, we don't make the latest LSN\n+ * overwrite those positions of the update message for this delay.\n\n yet, Thus, we --> yet, thus, we/ yet. Thus, we\n\n\n4.\n+ /* Adds portion time (in ms) to the previous result. */\n+ ms = interval->time / INT64CONST(1000);\nIs interval->time always in micro-seconds here?\n\n\n\nThanks\nShveta\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:52:08 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, December 27, 2022 6:29 PM, Hayato Kuroda (Fujitsu) wrote:\r\n> Thanks for reviewing our patch! PSA new version patch set.\r\nNow, the patches fail to apply to HEAD,\r\nbecause of recent commits (c6e1f62e2c and 216a784829c) as reported in [1].\r\n\r\nI'll rebase the patch with other changes when I post a new version.\r\n\r\n\r\n[1] - http://cfbot.cputube.org/patch_41_3581.log\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 10 Jan 2023 02:27:42 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, January 3, 2023 4:01 PM vignesh C <vignesh21@gmail.com> wrote:\r\nHi, thanks for your review !\r\n\r\n\r\n> 1) This global variable can be removed as it is used only in send_feedback which\r\n> is called from maybe_delay_apply so we could pass it as a function argument:\r\n> + * delay, avoid having positions of the flushed and apply LSN\r\n> +overwritten by\r\n> + * the latest LSN.\r\n> + */\r\n> +static bool in_delaying_apply = false;\r\n> +static XLogRecPtr last_received = InvalidXLogRecPtr;\r\n> +\r\nI have removed the first variable and make it one of the arguments for send_feedback().\r\n\r\n> 2) -1 gets converted to -1000\r\n> \r\n> +int64\r\n> +interval2ms(const Interval *interval)\r\n> +{\r\n> + int64 days;\r\n> + int64 ms;\r\n> + int64 result;\r\n> +\r\n> + days = interval->month * INT64CONST(30);\r\n> + days += interval->day;\r\n> +\r\n> + /* Detect whether the value of interval can cause an overflow. */\r\n> + if (pg_mul_s64_overflow(days, MSECS_PER_DAY, &result))\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\r\n> + errmsg(\"bigint out of range\")));\r\n> +\r\n> + /* Adds portion time (in ms) to the previous result. */\r\n> + ms = interval->time / INT64CONST(1000);\r\n> + if (pg_add_s64_overflow(result, ms, &result))\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\r\n> + errmsg(\"bigint out of range\")));\r\n> \r\n> create subscription sub7 connection 'dbname=regression host=localhost\r\n> port=5432' publication pub1 with (min_apply_delay = '-1');\r\n> ERROR: -1000 ms is outside the valid range for parameter \"min_apply_delay\"\r\nGood catch! 
Fixed in order to make input '-1' interpretted as -1 ms.\r\n\r\n> 3) This can be slightly reworded:\r\n> + <para>\r\n> + The length of time (ms) to delay the application of changes.\r\n> + </para></entry>\r\n> to:\r\n> Delay applying the changes by a specified amount of time(ms).\r\nThis has been suggested in [1] by Peter Smith. So, I'd like to keep the current patch's description.\r\nThen, I didn't change this.\r\n\r\n> 4) maybe_delay_apply can be moved from apply_handle_stream_prepare to\r\n> apply_spooled_messages so that it is consistent with\r\n> maybe_start_skipping_changes:\r\n> @@ -1120,6 +1240,19 @@ apply_handle_stream_prepare(StringInfo s)\r\n> \r\n> elog(DEBUG1, \"received prepare for streamed transaction %u\",\r\n> prepare_data.xid);\r\n> \r\n> + /*\r\n> + * Should we delay the current prepared transaction?\r\n> + *\r\n> + * Although the delay is applied in BEGIN PREPARE messages,\r\n> streamed\r\n> + * prepared transactions apply the delay in a STREAM PREPARE\r\n> message.\r\n> + * That's ok because no changes have been applied yet\r\n> + * (apply_spooled_messages() will do it). The STREAM START message\r\n> does\r\n> + * not contain a prepare time (it will be available when the in-progress\r\n> + * prepared transaction finishes), hence, it was not possible to apply a\r\n> + * delay at that time.\r\n> + */\r\n> + maybe_delay_apply(prepare_data.prepare_time);\r\n> \r\n> \r\n> That way the call from apply_handle_stream_commit can also be removed.\r\nSounds good. I moved the call of maybe_delay_apply() to the apply_spooled_messages().\r\nNow it's aligned with maybe_start_skipping_changes().\r\n\r\n> 5) typo transfering should be transferring\r\n> + publisher and the current time on the subscriber. Time\r\n> spent in logical\r\n> + decoding and in transfering the transaction may reduce the\r\n> actual wait\r\n> + time. 
If the system clocks on publisher and subscriber are\r\n> + not\r\nFixed.\r\n\r\n> 6) feedbacks can be changed to feedback messages\r\n> + * it's necessary to keep sending feedbacks during the delay from the\r\n> + worker\r\n> + * process. Meanwhile, the feature delays the apply before starting the\r\nFixed.\r\n\r\n> 7)\r\n> + /*\r\n> + * Suppress overwrites of flushed and writtten positions by the lastest\r\n> + * LSN in send_feedback().\r\n> + */\r\n> \r\n> 7a) typo writtten should be written\r\n> \r\n> 7b) lastest should latest\r\nI have removed this sentence. So, those typos are removed.\r\n\r\nPlease have a look at the updated patch.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAHut%2BPttQdFMNM2c6WqKt2c9G6r3ZKYRGHm04RR-4p4fyA4WRg%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 10 Jan 2023 14:11:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, January 3, 2023 8:22 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> Please find a few minor comments.\r\nThanks for your review !\r\n\r\n> 1.\r\n> + diffms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),\r\n> +\r\n> \r\n> TimestampTzPlusMilliseconds(ts, MySubscription->minapplydelay)); on\r\n> unix, above code looks unaligned (copied from unix)\r\n> \r\n> 2. same with:\r\n> + interval = DatumGetIntervalP(DirectFunctionCall3(interval_in,\r\n> +\r\n> \r\n> CStringGetDatum(val),\r\n> +\r\n> \r\n> ObjectIdGetDatum(InvalidOid),\r\n> +\r\n> \r\n> Int32GetDatum(-1)));\r\n> perhaps due to tabs?\r\nThose patches indentation look OK. I checked them\r\nby pgindent and less command described in [1]. So, I didn't change those.\r\n\r\n\r\n> 2. comment not clear:\r\n> + * During the time delayed replication, avoid reporting the suspended\r\n> + * latest LSN are already flushed and written, to the publisher.\r\nYou are right. I fixed this part to make it clearer.\r\nCould you please check ?\r\n\r\n> 3.\r\n> + * Call send_feedback() to prevent the publisher from exiting by\r\n> + * timeout during the delay, when wal_receiver_status_interval is\r\n> + * available. The WALs for this delayed transaction is neither\r\n> + * written nor flushed yet, Thus, we don't make the latest LSN\r\n> + * overwrite those positions of the update message for this delay.\r\n> \r\n> yet, Thus, we --> yet, thus, we/ yet. Thus, we\r\nYeah, you are right. But, I have removed the last sentence, because the last one\r\nexplains some internals of send_feedback(). I judged that it would be awkward\r\nto describe it in maybe_delay_apply(). Now this part has become concise.\r\n\r\n> 4.\r\n> + /* Adds portion time (in ms) to the previous result. */\r\n> + ms = interval->time / INT64CONST(1000);\r\n> Is interval->time always in micro-seconds here?\r\nYeah, it seems so. Some internal codes indicate it. 
Kindly have a look at functions\r\nsuch as make_interval() and interval2itm().\r\n\r\nPlease have a look at the latest patch v12 in [2].\r\n\r\n[1] - https://www.postgresql.org/docs/current/source-format.html\r\n[2] - https://www.postgresql.org/message-id/TYCPR01MB837340F78F4A16F542589195EDFF9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 10 Jan 2023 14:33:48 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, January 10, 2023 11:28 AM I wrote:\r\n> On Tuesday, December 27, 2022 6:29 PM, Hayato Kuroda (Fujitsu) wrote:\r\n> > Thanks for reviewing our patch! PSA new version patch set.\r\n> Now, the patches fail to apply to HEAD, because of recent commits\r\n> (c6e1f62e2c and 216a784829c) as reported in [1].\r\n> \r\n> I'll rebase the patch with other changes when I post a new version.\r\nThis is done in the patch in [1].\r\nPlease note that because of the commit c6e1f62e2c,\r\nwe don't need the 1st patch we borrowed from another thread in [2] any more.\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB837340F78F4A16F542589195EDFF9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n[2] - https://www.postgresql.org/message-id/flat/20221122004119.GA132961%40nathanxps13\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 10 Jan 2023 14:41:44 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 7:42 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 3, 2023 4:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> Hi, thanks for your review !\n>\n>\n> > 1) This global variable can be removed as it is used only in send_feedback which\n> > is called from maybe_delay_apply so we could pass it as a function argument:\n> > + * delay, avoid having positions of the flushed and apply LSN\n> > +overwritten by\n> > + * the latest LSN.\n> > + */\n> > +static bool in_delaying_apply = false;\n> > +static XLogRecPtr last_received = InvalidXLogRecPtr;\n> > +\n> I have removed the first variable and make it one of the arguments for send_feedback().\n>\n> > 2) -1 gets converted to -1000\n> >\n> > +int64\n> > +interval2ms(const Interval *interval)\n> > +{\n> > + int64 days;\n> > + int64 ms;\n> > + int64 result;\n> > +\n> > + days = interval->month * INT64CONST(30);\n> > + days += interval->day;\n> > +\n> > + /* Detect whether the value of interval can cause an overflow. */\n> > + if (pg_mul_s64_overflow(days, MSECS_PER_DAY, &result))\n> > + ereport(ERROR,\n> > +\n> > (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"bigint out of range\")));\n> > +\n> > + /* Adds portion time (in ms) to the previous result. */\n> > + ms = interval->time / INT64CONST(1000);\n> > + if (pg_add_s64_overflow(result, ms, &result))\n> > + ereport(ERROR,\n> > +\n> > (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"bigint out of range\")));\n> >\n> > create subscription sub7 connection 'dbname=regression host=localhost\n> > port=5432' publication pub1 with (min_apply_delay = '-1');\n> > ERROR: -1000 ms is outside the valid range for parameter \"min_apply_delay\"\n> Good catch! 
Fixed in order to make input '-1' interpretted as -1 ms.\n>\n> > 3) This can be slightly reworded:\n> > + <para>\n> > + The length of time (ms) to delay the application of changes.\n> > + </para></entry>\n> > to:\n> > Delay applying the changes by a specified amount of time(ms).\n> This has been suggested in [1] by Peter Smith. So, I'd like to keep the current patch's description.\n> Then, I didn't change this.\n>\n> > 4) maybe_delay_apply can be moved from apply_handle_stream_prepare to\n> > apply_spooled_messages so that it is consistent with\n> > maybe_start_skipping_changes:\n> > @@ -1120,6 +1240,19 @@ apply_handle_stream_prepare(StringInfo s)\n> >\n> > elog(DEBUG1, \"received prepare for streamed transaction %u\",\n> > prepare_data.xid);\n> >\n> > + /*\n> > + * Should we delay the current prepared transaction?\n> > + *\n> > + * Although the delay is applied in BEGIN PREPARE messages,\n> > streamed\n> > + * prepared transactions apply the delay in a STREAM PREPARE\n> > message.\n> > + * That's ok because no changes have been applied yet\n> > + * (apply_spooled_messages() will do it). The STREAM START message\n> > does\n> > + * not contain a prepare time (it will be available when the in-progress\n> > + * prepared transaction finishes), hence, it was not possible to apply a\n> > + * delay at that time.\n> > + */\n> > + maybe_delay_apply(prepare_data.prepare_time);\n> >\n> >\n> > That way the call from apply_handle_stream_commit can also be removed.\n> Sounds good. I moved the call of maybe_delay_apply() to the apply_spooled_messages().\n> Now it's aligned with maybe_start_skipping_changes().\n>\n> > 5) typo transfering should be transferring\n> > + publisher and the current time on the subscriber. Time\n> > spent in logical\n> > + decoding and in transfering the transaction may reduce the\n> > actual wait\n> > + time. 
If the system clocks on publisher and subscriber are\n> > + not\n> Fixed.\n>\n> > 6) feedbacks can be changed to feedback messages\n> > + * it's necessary to keep sending feedbacks during the delay from the\n> > + worker\n> > + * process. Meanwhile, the feature delays the apply before starting the\n> Fixed.\n>\n> > 7)\n> > + /*\n> > + * Suppress overwrites of flushed and writtten positions by the lastest\n> > + * LSN in send_feedback().\n> > + */\n> >\n> > 7a) typo writtten should be written\n> >\n> > 7b) lastest should latest\n> I have removed this sentence. So, those typos are removed.\n>\n> Please have a look at the updated patch.\n>\n> [1] - https://www.postgresql.org/message-id/CAHut%2BPttQdFMNM2c6WqKt2c9G6r3ZKYRGHm04RR-4p4fyA4WRg%40mail.gmail.com\n>\n>\n\nHi,\n\n1.\n+ errmsg(\"min_apply_delay must not be set when streaming = parallel\")));\nwe give the same error msg for both the cases:\na. when subscription is created with streaming=parallel but we are\ntrying to alter subscription to set min_apply_delay >0\nb. when subscription is created with some min_apply_delay and we are\ntrying to alter subscription to make it streaming=parallel.\nFor case a, error msg looks fine but for case b, I think error msg\nshould be changed slightly.\nALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\nERROR: min_apply_delay must not be set when streaming = parallel\nThis gives the feeling that we are trying to modify min_apply_delay\nbut we are not. Maybe we can change it to:\n\"subscription with min_apply_delay must not be allowed to stream\nparallel\" (or something better)\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:27:13 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 3:27 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Tue, Jan 10, 2023 at 7:42 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Tuesday, January 3, 2023 4:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Hi, thanks for your review !\n> >\n> >\n> > > 1) This global variable can be removed as it is used only in send_feedback which\n> > > is called from maybe_delay_apply so we could pass it as a function argument:\n> > > + * delay, avoid having positions of the flushed and apply LSN\n> > > +overwritten by\n> > > + * the latest LSN.\n> > > + */\n> > > +static bool in_delaying_apply = false;\n> > > +static XLogRecPtr last_received = InvalidXLogRecPtr;\n> > > +\n> > I have removed the first variable and make it one of the arguments for send_feedback().\n> >\n> > > 2) -1 gets converted to -1000\n> > >\n> > > +int64\n> > > +interval2ms(const Interval *interval)\n> > > +{\n> > > + int64 days;\n> > > + int64 ms;\n> > > + int64 result;\n> > > +\n> > > + days = interval->month * INT64CONST(30);\n> > > + days += interval->day;\n> > > +\n> > > + /* Detect whether the value of interval can cause an overflow. */\n> > > + if (pg_mul_s64_overflow(days, MSECS_PER_DAY, &result))\n> > > + ereport(ERROR,\n> > > +\n> > > (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> > > + errmsg(\"bigint out of range\")));\n> > > +\n> > > + /* Adds portion time (in ms) to the previous result. */\n> > > + ms = interval->time / INT64CONST(1000);\n> > > + if (pg_add_s64_overflow(result, ms, &result))\n> > > + ereport(ERROR,\n> > > +\n> > > (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n> > > + errmsg(\"bigint out of range\")));\n> > >\n> > > create subscription sub7 connection 'dbname=regression host=localhost\n> > > port=5432' publication pub1 with (min_apply_delay = '-1');\n> > > ERROR: -1000 ms is outside the valid range for parameter \"min_apply_delay\"\n> > Good catch! 
Fixed in order to make input '-1' interpretted as -1 ms.\n> >\n> > > 3) This can be slightly reworded:\n> > > + <para>\n> > > + The length of time (ms) to delay the application of changes.\n> > > + </para></entry>\n> > > to:\n> > > Delay applying the changes by a specified amount of time(ms).\n> > This has been suggested in [1] by Peter Smith. So, I'd like to keep the current patch's description.\n> > Then, I didn't change this.\n> >\n> > > 4) maybe_delay_apply can be moved from apply_handle_stream_prepare to\n> > > apply_spooled_messages so that it is consistent with\n> > > maybe_start_skipping_changes:\n> > > @@ -1120,6 +1240,19 @@ apply_handle_stream_prepare(StringInfo s)\n> > >\n> > > elog(DEBUG1, \"received prepare for streamed transaction %u\",\n> > > prepare_data.xid);\n> > >\n> > > + /*\n> > > + * Should we delay the current prepared transaction?\n> > > + *\n> > > + * Although the delay is applied in BEGIN PREPARE messages,\n> > > streamed\n> > > + * prepared transactions apply the delay in a STREAM PREPARE\n> > > message.\n> > > + * That's ok because no changes have been applied yet\n> > > + * (apply_spooled_messages() will do it). The STREAM START message\n> > > does\n> > > + * not contain a prepare time (it will be available when the in-progress\n> > > + * prepared transaction finishes), hence, it was not possible to apply a\n> > > + * delay at that time.\n> > > + */\n> > > + maybe_delay_apply(prepare_data.prepare_time);\n> > >\n> > >\n> > > That way the call from apply_handle_stream_commit can also be removed.\n> > Sounds good. I moved the call of maybe_delay_apply() to the apply_spooled_messages().\n> > Now it's aligned with maybe_start_skipping_changes().\n> >\n> > > 5) typo transfering should be transferring\n> > > + publisher and the current time on the subscriber. Time\n> > > spent in logical\n> > > + decoding and in transfering the transaction may reduce the\n> > > actual wait\n> > > + time. 
If the system clocks on publisher and subscriber are\n> > > + not\n> > Fixed.\n> >\n> > > 6) feedbacks can be changed to feedback messages\n> > > + * it's necessary to keep sending feedbacks during the delay from the\n> > > + worker\n> > > + * process. Meanwhile, the feature delays the apply before starting the\n> > Fixed.\n> >\n> > > 7)\n> > > + /*\n> > > + * Suppress overwrites of flushed and writtten positions by the lastest\n> > > + * LSN in send_feedback().\n> > > + */\n> > >\n> > > 7a) typo writtten should be written\n> > >\n> > > 7b) lastest should latest\n> > I have removed this sentence. So, those typos are removed.\n> >\n> > Please have a look at the updated patch.\n> >\n> > [1] - https://www.postgresql.org/message-id/CAHut%2BPttQdFMNM2c6WqKt2c9G6r3ZKYRGHm04RR-4p4fyA4WRg%40mail.gmail.com\n> >\n> >\n>\n> Hi,\n>\n> 1.\n> + errmsg(\"min_apply_delay must not be set when streaming = parallel\")));\n> we give the same error msg for both the cases:\n> a. when subscription is created with streaming=parallel but we are\n> trying to alter subscription to set min_apply_delay >0\n> b. when subscription is created with some min_apply_delay and we are\n> trying to alter subscription to make it streaming=parallel.\n> For case a, error msg looks fine but for case b, I think error msg\n> should be changed slightly.\n> ALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\n> ERROR: min_apply_delay must not be set when streaming = parallel\n> This gives the feeling that we are trying to modify min_apply_delay\n> but we are not. Maybe we can change it to:\n> \"subscription with min_apply_delay must not be allowed to stream\n> parallel\" (or something better)\n>\n> thanks\n> Shveta\n\nSorry for multiple emails. One suggestion:\n\n2.\nI think users can set ' wal_receiver_status_interval ' to 0 or more\nthan 'wal_sender_timeout'. But is this a frequent use-case scenario or\ndo we see DBAs setting these in such a way by mistake? 
If so, then I\nthink it is better to give a warning message in such a case when a user\ntries to create or alter a subscription with a large 'min_apply_delay'\n(>= 'wal_sender_timeout'), rather than leaving it to the user's\nunderstanding that WalSender may repeatedly time out in such a case.\nParse_subscription_options and AlterSubscription can be modified to\nlog a warning. Any thoughts?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 11 Jan 2023 16:05:31 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 19:41, Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 3, 2023 4:01 PM vignesh C <vignesh21@gmail.com> wrote:\n> Hi, thanks for your review !\n>\n> Please have a look at the updated patch.\n\nThanks for the updated patch, few comments:\n1) Comment inconsistency across create and alter subscription, better\nto keep it same:\n+ /*\n+ * Do additional checking for disallowed combination when\nmin_apply_delay\n+ * was not zero.\n+ */\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0)\n+ {\n+ if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR)),\n+ errmsg(\"min_apply_delay must\nnot be set when streaming = parallel\"));\n+ }\n\n+ /*\n+ * Test the combination of\nstreaming mode and\n+ * min_apply_delay\n+ */\n+ if (opts.streaming ==\nLOGICALREP_STREAM_PARALLEL &&\n+ sub->minapplydelay > 0)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+\nerrmsg(\"min_apply_delay must not be set when streaming = parallel\")));\n\n2) ereport inconsistency, braces around errcode is present in few\nplaces and not present in few places, it is better to keep it\nconsistent by removing it:\n2.a)\n+ if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR)),\n+ errmsg(\"min_apply_delay must\nnot be set when streaming = parallel\"));\n\n2.b)\n+ if (opts.streaming ==\nLOGICALREP_STREAM_PARALLEL &&\n+ sub->minapplydelay > 0)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+\nerrmsg(\"min_apply_delay must not be set when streaming = parallel\")));\n\n2.c)\n+ if (opts.min_apply_delay > 0 &&\n+ sub->stream ==\nLOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+\nerrmsg(\"min_apply_delay must not be set when streaming = parallel\")));\n\n2.d)\n+ if (pg_mul_s64_overflow(days, 
MSECS_PER_DAY, &result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"bigint out of range\")));\n\n2.e)\n+ if (pg_add_s64_overflow(result, ms, &result))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n+ errmsg(\"bigint out of range\")));\n\n3) this include is not required, I could compile without it\n--- a/src/backend/commands/subscriptioncmds.c\n+++ b/src/backend/commands/subscriptioncmds.c\n@@ -48,6 +48,7 @@\n #include \"utils/memutils.h\"\n #include \"utils/pg_lsn.h\"\n #include \"utils/syscache.h\"\n+#include \"utils/timestamp.h\"\n\n4)\n4.a)\nShould this be changed:\n/* Adds portion time (in ms) to the previous result. */\nto\n/* Adds portion time (in ms) to the previous result */\n\n4.b)\nShould this be changed:\n/* Detect whether the value of interval can cause an overflow. */\nto\n/* Detect whether the value of interval can cause an overflow */\n\n5) Can this \"ALTER SUBSCRIPTION regress_testsub SET (min_apply_delay =\n'1d')\" be combined along with \"-- success -- 123 ms\", that way few\nstatements could be reduced\n+-- success -- 86400000 ms\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, min_apply_delay = 123);\n+ALTER SUBSCRIPTION regress_testsub SET (min_apply_delay = '1d');\n+\n+\\dRs+\n+\n+ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);\n+DROP SUBSCRIPTION regress_testsub;\n\n6) Can we do the interval testing along with alter subscription and\ncombined with \"-- success -- 123 ms\" test, that way few statements\ncould be reduced\n+-- success -- interval is converted into ms and stored as integer\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, min_apply_delay = '4h 27min 35s');\n+\n+\\dRs+\n+\n+ALTER SUBSCRIPTION regress_testsub SET (slot_name = NONE);\n+DROP SUBSCRIPTION regress_testsub;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 17:00:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Shveta,\r\n\r\nThanks for reviewing! PSA new version.\r\n\r\n> 1.\r\n> + errmsg(\"min_apply_delay must not be set when streaming = parallel\")));\r\n> we give the same error msg for both the cases:\r\n> a. when subscription is created with streaming=parallel but we are\r\n> trying to alter subscription to set min_apply_delay >0\r\n> b. when subscription is created with some min_apply_delay and we are\r\n> trying to alter subscription to make it streaming=parallel.\r\n> For case a, error msg looks fine but for case b, I think error msg\r\n> should be changed slightly.\r\n> ALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\r\n> ERROR: min_apply_delay must not be set when streaming = parallel\r\n> This gives the feeling that we are trying to modify min_apply_delay\r\n> but we are not. Maybe we can change it to:\r\n> \"subscription with min_apply_delay must not be allowed to stream\r\n> parallel\" (or something better)\r\n\r\nYour point that error messages are strange is right. And while\r\nchecking other ones, I found they have very similar styles. Therefore I reworded\r\nERROR messages in AlterSubscription() and parse_subscription_options() to follow\r\nthem. Which version is better?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 11 Jan 2023 12:46:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "> 2.\r\n> I think users can set ' wal_receiver_status_interval ' to 0 or more\r\n> than 'wal_sender_timeout'. But is this a frequent use-case scenario or\r\n> do we see DBAs setting these in such a way by mistake? If so, then I\r\n> think, it is better to give Warning message in such a case when a user\r\n> tries to create or alter a subscription with a large 'min_apply_delay'\r\n> (>= 'wal_sender_timeout') , rather than leaving it to the user's\r\n> understanding that WalSender may repeatedly timeout in such a case.\r\n> Parse_subscription_options and AlterSubscription can be modified to\r\n> log a warning. Any thoughts?\r\n\r\nYes, DBAs may set wal_receiver_status_interval to more than wal_sender_timeout by\r\nmistake.\r\n\r\nBut to handle the scenario we must compare between min_apply_delay *on subscriber*\r\nand wal_sender_timeout *on publisher*. Both values are not transferred to opposite\r\nsides, so the WARNING cannot be raised. I considered that such a mechanism seemed\r\nto be complex. The discussion around [1] may be useful.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1Lq%2Bh8qo%2BrqGU-E%2BhwJKAHYocV54y4pvou4rLysCgYD-g%40mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 11 Jan 2023 12:46:33 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThanks for reviewing!\r\n\r\n> 1) Comment inconsistency across create and alter subscription, better\r\n> to keep it same:\r\n\r\nA comment for CREATE SUBSCRIPTION became same as ALTER's one.\r\n\r\n> 2) ereport inconsistency, braces around errcode is present in few\r\n> places and not present in few places, it is better to keep it\r\n> consistent by removing it:\r\n\r\nRemoved.\r\n\r\n> 3) this include is not required, I could compile without it\r\n\r\nRemoved. Timestamp datatype is not used in subscriptioncmds.c.\r\n\r\n> 4)\r\n> 4.a)\r\n> Should this be changed:\r\n> /* Adds portion time (in ms) to the previous result. */\r\n> to\r\n> /* Adds portion time (in ms) to the previous result */\r\n\r\nChanged.\r\n\r\n> 4.b)\r\n> Should this be changed:\r\n> /* Detect whether the value of interval can cause an overflow. */\r\n> to\r\n> /* Detect whether the value of interval can cause an overflow */\r\n\r\nChanged.\r\n\r\n> 5) Can this \"ALTER SUBSCRIPTION regress_testsub SET (min_apply_delay =\r\n> '1d')\" be combined along with \"-- success -- 123 ms\", that way few\r\n> statements could be reduced\r\n\r\n> 6) Can we do the interval testing along with alter subscription and\r\n> combined with \"-- success -- 123 ms\" test, that way few statements\r\n> could be reduced\r\n\r\nTo keep the code coverage, either of them must remain. 5) was cleanly removed and\r\n6) was combined to you suggested. In addition, comments were updated to clarify\r\nthe testcase.\r\n\r\nPlease have a look at the latest patch v14 in [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D0527B1B8D589F1C2551F5FC9%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 11 Jan 2023 12:48:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 11 Jan 2023 12:46:24 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> them. Which version is better?\n\n\nSome comments by a quick loock, different from the above.\n\n\n+ CONNECTION 'host=192.168.1.50 port=5432 user=foo dbname=foodb'\n\nI understand that we (not PG people, but IT people) are supposed to\nuse in documents a certain set of special addresses that is guaranteed\nnot to be routed in the field.\n\n> TEST-NET-1 : 192.0.2.0/24\n> TEST-NET-2 : 198.51.100.0/24\n> TEST-NET-3 : 203.0.113.0/24\n\n(I found 192.83.123.89 in the postgres_fdw doc, but it'd be another issue..)\n\n\n+\t\t\tif (strspn(tmp, \"-0123456789 \") == strlen(tmp))\n\nDo we need to bother spending another memory block for apparent\nnon-digits here?\n\n\n+\t\t\t\t\t\terrmsg(INT64_FORMAT \" ms is outside the valid range for parameter \\\"%s\\\"\",\n\nWe don't use INT64_FORMAT in translatable message strings. Cast then\nuse %lld instead.\n\nThis message looks unfriendly as it doesn't suggest the valid range,\nand it shows the input value in a different unit from what was in the\ninput. A I think we can spell it as \"\\\"%s\\\" is outside the valid range\nfor subsciription parameter \\\"%s\\\" (0 .. <INT_MAX> in millisecond)\"\n\n+\tint64\t\tmin_apply_delay;\n..\n+\t\t\tif (ms < 0 || ms > INT_MAX)\n\nWhy is the variable wider than required?\n\n\n+\t\t\t\t\terrmsg(\"%s and %s are mutually exclusive options\",\n+\t\t\t\t\t\t \"min_apply_delay > 0\", \"streaming = parallel\"));\n\nMmm. Couldn't we refuse 0 as min_apply_delay?\n\n\n+\t\t\t\t\t\tsub->minapplydelay > 0)\n...\n+\t\t\t\t\tif (opts.min_apply_delay > 0 &&\n\nIs there any reason for the differenciation?\n\n\n+\t\t\t\t\t\t\t\terrmsg(\"cannot set %s for subscription with %s\",\n+\t\t\t\t\t\t\t\t\t \"streaming = parallel\", \"min_apply_delay > 0\"));\n\nI think that this shoud be more like human-speking. 
Say, \"cannot\nenable min_apply_delay for subscription in parallel streaming mode\" or\nsomething.. The same is applicable to the nearby message.\n\n\n\n+static void maybe_delay_apply(TimestampTz ts);\n\n apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n-\t\t\t\t\t XLogRecPtr lsn)\n+\t\t\t\t\t XLogRecPtr lsn, TimestampTz ts)\n\n\"ts\" looks too generic. Couldn't it be more specific?\nWe need a explanation for the parameter in the function comment.\n\n\n+\tif (!am_parallel_apply_worker())\n+\t{\n+\t\tAssert(ts > 0);\n+\t\tmaybe_delay_apply(ts);\n\nIt seems to me better that the if condition and assertion are checked\ninside maybe_delay_apply().\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 12 Jan 2023 12:03:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 6:16 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Shveta,\n>\n> Thanks for reviewing! PSA new version.\n>\n> > 1.\n> > + errmsg(\"min_apply_delay must not be set when streaming = parallel\")));\n> > we give the same error msg for both the cases:\n> > a. when subscription is created with streaming=parallel but we are\n> > trying to alter subscription to set min_apply_delay >0\n> > b. when subscription is created with some min_apply_delay and we are\n> > trying to alter subscription to make it streaming=parallel.\n> > For case a, error msg looks fine but for case b, I think error msg\n> > should be changed slightly.\n> > ALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\n> > ERROR: min_apply_delay must not be set when streaming = parallel\n> > This gives the feeling that we are trying to modify min_apply_delay\n> > but we are not. Maybe we can change it to:\n> > \"subscription with min_apply_delay must not be allowed to stream\n> > parallel\" (or something better)\n>\n> Your point that error messages are strange is right. And while\n> checking other ones, I found they have very similar styles. Therefore I reworded\n> ERROR messages in AlterSubscription() and parse_subscription_options() to follow\n> them. Which version is better?\n>\n\nv14 one looks much better. Thanks!\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 12 Jan 2023 08:48:20 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 6:16 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > 2.\n> > I think users can set ' wal_receiver_status_interval ' to 0 or more\n> > than 'wal_sender_timeout'. But is this a frequent use-case scenario or\n> > do we see DBAs setting these in such a way by mistake? If so, then I\n> > think, it is better to give Warning message in such a case when a user\n> > tries to create or alter a subscription with a large 'min_apply_delay'\n> > (>= 'wal_sender_timeout') , rather than leaving it to the user's\n> > understanding that WalSender may repeatedly timeout in such a case.\n> > Parse_subscription_options and AlterSubscription can be modified to\n> > log a warning. Any thoughts?\n>\n> Yes, DBAs may set wal_receiver_status_interval to more than wal_sender_timeout by\n> mistake.\n>\n> But to handle the scenario we must compare between min_apply_delay *on subscriber*\n> and wal_sender_timeout *on publisher*. Both values are not transferred to opposite\n> sides, so the WARNING cannot be raised. I considered that such a mechanism seemed\n> to be complex. The discussion around [1] may be useful.\n>\n> [1]: https://www.postgresql.org/message-id/CAA4eK1Lq%2Bh8qo%2BrqGU-E%2BhwJKAHYocV54y4pvou4rLysCgYD-g%40mail.gmail.com\n>\n\nokay, I see. So even when 'wal_receiver_status_interval' is set to 0,\nno log/warning is needed when the user tries to set min_apply_delay>0?\nAre we good with doc alone?\n\nOne trivial correction in config.sgml:\n+ terminates due to the timeout errors. Hence, make sure this parameter\n+ shorter than the <literal>wal_sender_timeout</literal> of the publisher.\nHence, make sure this parameter is shorter... <is missing>\n\nthanks\nShveta\n\n\n",
"msg_date": "Thu, 12 Jan 2023 15:21:29 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nI've a question about 032_apply_delay.pl.\n\n+# Test ALTER SUBSCRIPTION. Delay 86460 seconds (1 day 1 minute).\n> +$node_subscriber->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\"\n> +);\n> +\n> +# New row to trigger apply delay.\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO test_tab VALUES (0, 'foobar')\");\n> +\n\n\nI couldn't quite see how these lines test whether ALTER SUBSCRIPTION\nsuccessfully worked.\nDon't we need to check that min_apply_delay really changed as a result?\n\nBut also I see that subscription.sql already tests this ALTER SUBSCRIPTION\nbehaviour.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft\n\nHi,I've a question about 032_apply_delay.pl.+# Test ALTER SUBSCRIPTION. Delay 86460 seconds (1 day 1 minute).+$node_subscriber->safe_psql('postgres',+ \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\"+);++# New row to trigger apply delay.+$node_publisher->safe_psql('postgres',+ \"INSERT INTO test_tab VALUES (0, 'foobar')\");+I couldn't quite see how these lines test whether ALTER SUBSCRIPTION successfully worked.Don't we need to check that min_apply_delay really changed as a result?But also I see that subscription.sql already tests this ALTER SUBSCRIPTION behaviour.Best,-- Melih MutluMicrosoft",
"msg_date": "Thu, 12 Jan 2023 16:11:46 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thursday, January 12, 2023 12:04 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Wed, 11 Jan 2023 12:46:24 +0000, \"Hayato Kuroda (Fujitsu)\"\n> <kuroda.hayato@fujitsu.com> wrote in\n> > them. Which version is better?\n> \n> \n> Some comments by a quick loock, different from the above.\nHoriguchi-san, thanks for your review !\n\n\n> + CONNECTION 'host=192.168.1.50 port=5432 user=foo\n> dbname=foodb'\n> \n> I understand that we (not PG people, but IT people) are supposed to use in\n> documents a certain set of special addresses that is guaranteed not to be\n> routed in the field.\n> \n> > TEST-NET-1 : 192.0.2.0/24\n> > TEST-NET-2 : 198.51.100.0/24\n> > TEST-NET-3 : 203.0.113.0/24\n> \n> (I found 192.83.123.89 in the postgres_fdw doc, but it'd be another issue..)\nFixed. If necessary we can create another thread for this.\n\n> +\t\t\tif (strspn(tmp, \"-0123456789 \") == strlen(tmp))\n> \n> Do we need to bother spending another memory block for apparent non-digits\n> here?\nYes. The characters are necessary to handle an issue reported in [1].\nThe issue happened if the user inputs a negative value,\nthen the length comparison became different between strspn and strlen\nand the input value was recognized as seconds, when\nthe unit wasn't described. This led to a wrong error message for the user.\n\nThose addition of such characters solve the issue.\n\n> +\t\t\t\t\t\terrmsg(INT64_FORMAT \" ms\n> is outside the valid range for parameter\n> +\\\"%s\\\"\",\n> \n> We don't use INT64_FORMAT in translatable message strings. Cast then\n> use %lld instead.\nThanks for teaching us. Fixed.\n\n> This message looks unfriendly as it doesn't suggest the valid range, and it\n> shows the input value in a different unit from what was in the input. A I think we\n> can spell it as \"\\\"%s\\\" is outside the valid range for subsciription parameter\n> \\\"%s\\\" (0 .. <INT_MAX> in millisecond)\"\nMakes sense. 
I incorporated the valid range with the aligned format of recovery_min_apply_delay.\nFYI, the physical replication's GUC doesn't write the unites for the range like below.\nI followed and applied this style.\n\n---\nLOG: -1 ms is outside the valid range for parameter \"recovery_min_apply_delay\" (0 .. 2147483647)\nFATAL: configuration file \"/home/k5user/new/pg/l/make_v15/slave/postgresql.conf\" contains errors\n---\n\n> +\tint64\t\tmin_apply_delay;\n> ..\n> +\t\t\tif (ms < 0 || ms > INT_MAX)\n> \n> Why is the variable wider than required?\nYou are right. Fixed.\n\n> +\t\t\t\t\terrmsg(\"%s and %s are mutually\n> exclusive options\",\n> +\t\t\t\t\t\t \"min_apply_delay > 0\",\n> \"streaming = parallel\"));\n> \n> Mmm. Couldn't we refuse 0 as min_apply_delay?\nSorry, the previous patch's behavior wasn't consistent with this error message.\n\nIn the previous patch, if we conducted alter subscription\nwith stream = parallel and min_apply_delay = 0 (from a positive value) at the same time,\nthe alter command failed, although this should succeed by this time-delayed feature specification.\nWe fixed this part accordingly by some more tests in AlterSubscription().\n\nBy the way, we should allow users to change min_apply_dealy to 0\nwhenever they want from different value. Then, we didn't restrict\nthis kind of operation.\n\n> +\t\t\t\t\t\tsub->minapplydelay > 0)\n> ...\n> +\t\t\t\t\tif (opts.min_apply_delay > 0 &&\n> \n> Is there any reason for the differenciation?\nYes. The former is the object for an existing subscription configuration.\nFor example, if we alter subscription with setting streaming = 'parallel'\nfor a subscription created with min_apply_delay = '1 day', we\nneed to reject the alter command. The latter is new settings.\n\n\n> +\n> \terrmsg(\"cannot set %s for subscription with %s\",\n> +\n> \"streaming = parallel\", \"min_apply_delay > 0\"));\n> \n> I think that this shoud be more like human-speking. 
Say, \"cannot enable\n> min_apply_delay for subscription in parallel streaming mode\" or something..\n> The same is applicable to the nearby message.\nReworded the error messages. Please check.\n\n> +static void maybe_delay_apply(TimestampTz ts);\n> \n> apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n> -\t\t\t\t\t XLogRecPtr lsn)\n> +\t\t\t\t\t XLogRecPtr lsn, TimestampTz ts)\n> \n> \"ts\" looks too generic. Couldn't it be more specific?\n> We need a explanation for the parameter in the function comment.\nChanged it to finish_ts, since it indicates commit/prepare time.\nThis terminology should be aligned with finish lsn.\n\n> +\tif (!am_parallel_apply_worker())\n> +\t{\n> +\t\tAssert(ts > 0);\n> +\t\tmaybe_delay_apply(ts);\n> \n> It seems to me better that the if condition and assertion are checked inside\n> maybe_delay_apply().\nFixed.\n\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3Bpzhh60nU-keuGxMPb-OhcqsfpCN3ysfCfCJ-2ShYPA%40mail.gmail.com\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Thu, 12 Jan 2023 15:39:23 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Shveta\n\n\nThanks for your comments!\nOn Thursday, January 12, 2023 6:51 PM shveta malik <shveta.malik@gmail.com> wrote:\n> > Yes, DBAs may set wal_receiver_status_interval to more than\n> > wal_sender_timeout by mistake.\n> >\n> > But to handle the scenario we must compare between min_apply_delay *on\n> > subscriber* and wal_sender_timeout *on publisher*. Both values are not\n> > transferred to opposite sides, so the WARNING cannot be raised. I\n> > considered that such a mechanism seemed to be complex. The discussion\n> around [1] may be useful.\n> >\n> > [1]:\n> >\n> https://www.postgresql.org/message-id/CAA4eK1Lq%2Bh8qo%2BrqGU-E%2B\n> hwJK\n> > AHYocV54y4pvou4rLysCgYD-g%40mail.gmail.com\n> >\n> \n> okay, I see. So even when 'wal_receiver_status_interval' is set to 0, no\n> log/warning is needed when the user tries to set min_apply_delay>0?\n> Are we good with doc alone?\nYes. As far as I can remember, we don't emit log or warning\nfor some kind of combination of those parameters (in the context\nof timeout too). So, it should be fine.\n\n\n> One trivial correction in config.sgml:\n> + terminates due to the timeout errors. Hence, make sure this parameter\n> + shorter than the <literal>wal_sender_timeout</literal> of the\n> publisher.\n> Hence, make sure this parameter is shorter... <is missing>\nFixed.\n\nKindly have a look at the latest patch shared in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83739C6133B50DDA8BAD1601EDFD9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 15:54:10 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Melih\n\n\nOn Thursday, January 12, 2023 10:12 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n> I've a question about 032_apply_delay.pl.\n> ...\n> I couldn't quite see how these lines test whether ALTER SUBSCRIPTION successfully worked.\n> Don't we need to check that min_apply_delay really changed as a result?\nYeah, we should check it from the POV of apply worker's debug logs.\n\nThe latest patch posted in [1] addressed your concern,\nby checking the logged delay time in the server log.\n\nI'd say what we could do is to check the logged time is long enough\nafter the ALTER SUBSCRIPTION command.\n\nPlease have a look at the patch.\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83739C6133B50DDA8BAD1601EDFD9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 16:13:47 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, 12 Jan 2023 at 21:09, Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, January 12, 2023 12:04 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > At Wed, 11 Jan 2023 12:46:24 +0000, \"Hayato Kuroda (Fujitsu)\"\n> > <kuroda.hayato@fujitsu.com> wrote in\n> > > them. Which version is better?\n> >\n> >\n> > Some comments by a quick loock, different from the above.\n> Horiguchi-san, thanks for your review !\n>\n>\n> > + CONNECTION 'host=192.168.1.50 port=5432 user=foo\n> > dbname=foodb'\n> >\n> > I understand that we (not PG people, but IT people) are supposed to use in\n> > documents a certain set of special addresses that is guaranteed not to be\n> > routed in the field.\n> >\n> > > TEST-NET-1 : 192.0.2.0/24\n> > > TEST-NET-2 : 198.51.100.0/24\n> > > TEST-NET-3 : 203.0.113.0/24\n> >\n> > (I found 192.83.123.89 in the postgres_fdw doc, but it'd be another issue..)\n> Fixed. If necessary we can create another thread for this.\n>\n> > + if (strspn(tmp, \"-0123456789 \") == strlen(tmp))\n> >\n> > Do we need to bother spending another memory block for apparent non-digits\n> > here?\n> Yes. The characters are necessary to handle an issue reported in [1].\n> The issue happened if the user inputs a negative value,\n> then the length comparison became different between strspn and strlen\n> and the input value was recognized as seconds, when\n> the unit wasn't described. This led to a wrong error message for the user.\n>\n> Those addition of such characters solve the issue.\n>\n> > + errmsg(INT64_FORMAT \" ms\n> > is outside the valid range for parameter\n> > +\\\"%s\\\"\",\n> >\n> > We don't use INT64_FORMAT in translatable message strings. Cast then\n> > use %lld instead.\n> Thanks for teaching us. Fixed.\n>\n> > This message looks unfriendly as it doesn't suggest the valid range, and it\n> > shows the input value in a different unit from what was in the input. 
A I think we\n> > can spell it as \"\\\"%s\\\" is outside the valid range for subsciription parameter\n> > \\\"%s\\\" (0 .. <INT_MAX> in millisecond)\"\n> Makes sense. I incorporated the valid range with the aligned format of recovery_min_apply_delay.\n> FYI, the physical replication's GUC doesn't write the unites for the range like below.\n> I followed and applied this style.\n>\n> ---\n> LOG: -1 ms is outside the valid range for parameter \"recovery_min_apply_delay\" (0 .. 2147483647)\n> FATAL: configuration file \"/home/k5user/new/pg/l/make_v15/slave/postgresql.conf\" contains errors\n> ---\n>\n> > + int64 min_apply_delay;\n> > ..\n> > + if (ms < 0 || ms > INT_MAX)\n> >\n> > Why is the variable wider than required?\n> You are right. Fixed.\n>\n> > + errmsg(\"%s and %s are mutually\n> > exclusive options\",\n> > + \"min_apply_delay > 0\",\n> > \"streaming = parallel\"));\n> >\n> > Mmm. Couldn't we refuse 0 as min_apply_delay?\n> Sorry, the previous patch's behavior wasn't consistent with this error message.\n>\n> In the previous patch, if we conducted alter subscription\n> with stream = parallel and min_apply_delay = 0 (from a positive value) at the same time,\n> the alter command failed, although this should succeed by this time-delayed feature specification.\n> We fixed this part accordingly by some more tests in AlterSubscription().\n>\n> By the way, we should allow users to change min_apply_dealy to 0\n> whenever they want from different value. Then, we didn't restrict\n> this kind of operation.\n>\n> > + sub->minapplydelay > 0)\n> > ...\n> > + if (opts.min_apply_delay > 0 &&\n> >\n> > Is there any reason for the differenciation?\n> Yes. The former is the object for an existing subscription configuration.\n> For example, if we alter subscription with setting streaming = 'parallel'\n> for a subscription created with min_apply_delay = '1 day', we\n> need to reject the alter command. 
The latter is new settings.\n>\n>\n> > +\n> > errmsg(\"cannot set %s for subscription with %s\",\n> > +\n> > \"streaming = parallel\", \"min_apply_delay > 0\"));\n> >\n> > I think that this shoud be more like human-speking. Say, \"cannot enable\n> > min_apply_delay for subscription in parallel streaming mode\" or something..\n> > The same is applicable to the nearby message.\n> Reworded the error messages. Please check.\n>\n> > +static void maybe_delay_apply(TimestampTz ts);\n> >\n> > apply_spooled_messages(FileSet *stream_fileset, TransactionId xid,\n> > - XLogRecPtr lsn)\n> > + XLogRecPtr lsn, TimestampTz ts)\n> >\n> > \"ts\" looks too generic. Couldn't it be more specific?\n> > We need a explanation for the parameter in the function comment.\n> Changed it to finish_ts, since it indicates commit/prepare time.\n> This terminology should be aligned with finish lsn.\n>\n> > + if (!am_parallel_apply_worker())\n> > + {\n> > + Assert(ts > 0);\n> > + maybe_delay_apply(ts);\n> >\n> > It seems to me better that the if condition and assertion are checked inside\n> > maybe_delay_apply().\n> Fixed.\n>\n\nThanks for the updated patch, Few comments:\n1) Since the min_apply_delay = 3, but you have specified 2s, there\nmight be a possibility that it can log delay as 1000ms due to\npub/sub/network delay and the test can fail randomly, If we cannot\nensure this log file value, check_apply_delay_time verification alone\nshould be sufficient.\n+is($result, qq(5|1|5), 'check if the new rows were applied to subscriber');\n+\n+check_apply_delay_log(\"logical replication apply delay\", \"2000\");\n\n2) I'm not sure if this will add any extra coverage as the altering\nvalue of min_apply_delay is already tested in the regression, if so\nthis test can be removed:\n+# Test ALTER SUBSCRIPTION. 
Delay 86460 seconds (1 day 1 minute).\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\"\n+);\n+\n+# New row to trigger apply delay.\n+$node_publisher->safe_psql('postgres',\n+ \"INSERT INTO test_tab VALUES (0, 'foobar')\");\n+\n+check_apply_delay_log(\"logical replication apply delay\", \"80000000\");\n\n3) We generally keep the subroutines before the tests, it can be kept\naccordingly:\n3.a)\n+sub check_apply_delay_log\n+{\n+ my ($message, $expected) = @_;\n+ $expected = 0 unless defined $expected;\n+\n+ my $old_log_location = $log_location;\n\n3.b)\n+sub check_apply_delay_time\n+{\n+ my ($primary_key, $expected_diffs) = @_;\n+\n+ my $inserted_time_on_pub = $node_publisher->safe_psql('postgres', qq[\n+ SELECT extract(epoch from c) FROM test_tab WHERE a =\n$primary_key;\n+ ]);\n+\n\n4) typo \"more then once\" should be \"more than once\"\n+ regress_testsub | regress_subscription_user | f |\n{testpub,testpub1,testpub2} | f | off | d |\nf | any | 0 | off\n| dbname=regress_doesnotexist | 0/0\n (1 row)\n\n -- fail - publication used more then once\n@@ -316,10 +316,10 @@ ERROR: publication \"testpub3\" is not in\nsubscription \"regress_testsub\"\n -- ok - delete publications\n ALTER SUBSCRIPTION regress_testsub DROP PUBLICATION testpub1,\ntestpub2 WITH (refresh = false);\n \\dRs+\n\n5) This can be changed to \"Is it larger than expected?\"\n+ # Is it larger than expected ?\n+ cmp_ok($logged_delay, '>', $expected,\n+ \"The wait time of the apply worker is long\nenough expectedly\"\n+ );\n\n6) 2022 should be changed to 2023\n+++ b/src/test/subscription/t/032_apply_delay.pl\n@@ -0,0 +1,170 @@\n+\n+# Copyright (c) 2022, PostgreSQL Global Development Group\n+\n+# Test replication apply delay\n\n7) Termination full stop is not required for single line comments:\n7.a)\n+use Test::More;\n+\n+# Create publisher node.\n+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n\n7.b) +\n+# Create subscriber 
node.\n+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n\n7.c) +\n+# Create some preexisting content on publisher.\n+$node_publisher->safe_psql('postgres',\n\n7.d) similarly in rest of the files\n\n8) Is it possible to add one test for spooling also?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 14 Jan 2023 11:57:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Saturday, January 14, 2023 3:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> 1) Since the min_apply_delay = 3, but you have specified 2s, there might be a\n> possibility that it can log delay as 1000ms due to pub/sub/network delay and\n> the test can fail randomly, If we cannot ensure this log file value,\n> check_apply_delay_time verification alone should be sufficient.\n> +is($result, qq(5|1|5), 'check if the new rows were applied to\n> +subscriber');\n> +\n> +check_apply_delay_log(\"logical replication apply delay\", \"2000\");\nYou are right. Removed the left-time check of the 1st call of check_apply_delay_log(). \n\n\n> 2) I'm not sure if this will add any extra coverage as the altering value of\n> min_apply_delay is already tested in the regression, if so this test can be\n> removed:\n> +# Test ALTER SUBSCRIPTION. Delay 86460 seconds (1 day 1 minute).\n> +$node_subscriber->safe_psql('postgres',\n> + \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay =\n> 86460000)\"\n> +);\n> +\n> +# New row to trigger apply delay.\n> +$node_publisher->safe_psql('postgres',\n> + \"INSERT INTO test_tab VALUES (0, 'foobar')\");\n> +\n> +check_apply_delay_log(\"logical replication apply delay\", \"80000000\");\nWhile addressing this point, I've noticed that there is a\nbehavior difference between physical replication's recovery_min_apply_delay\nand this feature when stopping the replication during delays.\n\nAt present, in the latter case,\nthe apply worker exits without applying the suspended transaction\nafter ALTER SUBSCRIPTION DISABLE command for the subscription.\nMeanwhile, there is no \"disabling\" command for physical replication,\nbut I checked the behavior about what happens for promoting a secondary\nduring the delay of recovery_min_apply_delay for physical replication as one example.\nThe transaction has become visible even in the promoting in the middle of delay.\n\nI'm not sure if I should make the time-delayed LR aligned with this 
behavior.\nDoes someone have an opinion on this?\n\nBy the way, the above test code can be used for the test case\nwhen the apply worker is in a delay but the transaction has been canceled by\nthe ALTER SUBSCRIPTION DISABLE command. So, I didn't remove it at this stage.\n> 3) We generally keep the subroutines before the tests, it can be kept\n> accordingly:\n> 3.a)\n> +sub check_apply_delay_log\n> +{\n> + my ($message, $expected) = @_;\n> + $expected = 0 unless defined $expected;\n> +\n> + my $old_log_location = $log_location;\n> \n> 3.b)\n> +sub check_apply_delay_time\n> +{\n> + my ($primary_key, $expected_diffs) = @_;\n> +\n> + my $inserted_time_on_pub = $node_publisher->safe_psql('postgres',\n> qq[\n> + SELECT extract(epoch from c) FROM test_tab WHERE a =\n> $primary_key;\n> + ]);\n> +\nFixed.\n\n \n> 4) typo \"more then once\" should be \"more than once\"\n> + regress_testsub | regress_subscription_user | f |\n> {testpub,testpub1,testpub2} | f | off | d |\n> f | any | 0 | off\n> | dbname=regress_doesnotexist | 0/0\n> (1 row)\n> \n> -- fail - publication used more then once @@ -316,10 +316,10 @@ ERROR:\n> publication \"testpub3\" is not in subscription \"regress_testsub\"\n> -- ok - delete publications\n> ALTER SUBSCRIPTION regress_testsub DROP PUBLICATION testpub1,\n> testpub2 WITH (refresh = false);\n> \\dRs+\nThis was an existing typo on HEAD. Addressed in another thread [1].\n\n \n> 5) This can be changed to \"Is it larger than expected?\"\n> + # Is it larger than expected ?\n> + cmp_ok($logged_delay, '>', $expected,\n> + \"The wait time of the apply worker is long\n> enough expectedly\"\n> + );\nFixed.\n \n> 6) 2022 should be changed to 2023\n> +++ b/src/test/subscription/t/032_apply_delay.pl\n> @@ -0,0 +1,170 @@\n> +\n> +# Copyright (c) 2022, PostgreSQL Global Development Group\n> +\n> +# Test replication apply delay\nFixed.\n\n\n> 7) Termination full stop is not required for single line comments:\n> 7.a)\n> +use Test::More;\n> +\n> +# Create publisher node.\n> +my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n> \n> 7.b) +\n> +# Create subscriber node.\n> +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n> \n> 7.c) +\n> +# Create some preexisting content on publisher.\n> +$node_publisher->safe_psql('postgres',\n> \n> 7.d) similarly in rest of the files\nRemoved the periods for single line comments.\n\n\n> 8) Is it possible to add one test for spooling also?\nThere is a streaming transaction case in the TAP test already.\n\n\nI conducted some minor comment modifications along with the above changes.\nKindly have a look at the v16.\n\n[1] - https://www.postgresql.org/message-id/flat/TYCPR01MB83737EA140C79B7D099F65E8EDC69%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Tue, 17 Jan 2023 11:00:11 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear hackers,\n\n> At present, in the latter case,\n> the apply worker exits without applying the suspended transaction\n> after ALTER SUBSCRIPTION DISABLE command for the subscription.\n> Meanwhile, there is no \"disabling\" command for physical replication,\n> but I checked the behavior about what happens for promoting a secondary\n> during the delay of recovery_min_apply_delay for physical replication as one\n> example.\n> The transaction has become visible even in the promoting in the middle of delay.\n> \n> I'm not sure if I should make the time-delayed LR aligned with this behavior.\n> Does someone have an opinion on this?\n\nI put my opinion here. The current specification is correct; we should not follow\nthe physical replication behavior.\nOne motivation for this feature is to offer opportunities to correct data loss\nerrors. When accidental delete events occur, the DBA can stop propagation on subscribers\nby disabling the subscription, as the patch stands.\nIIUC, when the subscription is disabled before transactions are started,\nworkers exit and stop applying changes. This feature delays starting transactions, so we\nshould regard such an alteration as being executed before the transaction.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 17 Jan 2023 12:45:14 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 4:30 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Saturday, January 14, 2023 3:27 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> > 2) I'm not sure if this will add any extra coverage as the altering value of\n> > min_apply_delay is already tested in the regression, if so this test can be\n> > removed:\n> > +# Test ALTER SUBSCRIPTION. Delay 86460 seconds (1 day 1 minute).\n> > +$node_subscriber->safe_psql('postgres',\n> > + \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay =\n> > 86460000)\"\n> > +);\n> > +\n> > +# New row to trigger apply delay.\n> > +$node_publisher->safe_psql('postgres',\n> > + \"INSERT INTO test_tab VALUES (0, 'foobar')\");\n> > +\n> > +check_apply_delay_log(\"logical replication apply delay\", \"80000000\");\n> While addressing this point, I've noticed that there is a\n> behavior difference between physical replication's recovery_min_apply_delay\n> and this feature when stopping the replication during delays.\n>\n> At present, in the latter case,\n> the apply worker exits without applying the suspended transaction\n> after ALTER SUBSCRIPTION DISABLE command for the subscription.\n>\n\nIn the previous paragraph, you said the behavior difference while\nstopping the replication but it is not clear from where this DISABLE\ncommand comes in that scenario.\n\n> Meanwhile, there is no \"disabling\" command for physical replication,\n> but I checked the behavior about what happens for promoting a secondary\n> during the delay of recovery_min_apply_delay for physical replication as one example.\n> The transaction has become visible even in the promoting in the middle of delay.\n>\n\nWhat causes such a transaction to be visible after promotion? 
Ideally,\nif the commit doesn't succeed, the transaction shouldn't be visible.\nDo we allow the transaction waiting due to delay to get committed on\npromotion?\n\n> I'm not sure if I should make the time-delayed LR aligned with this behavior.\n> Does someone have an opinion on this?\n>\n\nCan you please explain a bit more as asked above to understand the difference?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 18:24:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Tuesday, January 17, 2023 9:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jan 17, 2023 at 4:30 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Saturday, January 14, 2023 3:27 PM vignesh C <vignesh21@gmail.com>\n> wrote:\n> >\n> > > 2) I'm not sure if this will add any extra coverage as the altering\n> > > value of min_apply_delay is already tested in the regression, if so\n> > > this test can be\n> > > removed:\n> > > +# Test ALTER SUBSCRIPTION. Delay 86460 seconds (1 day 1 minute).\n> > > +$node_subscriber->safe_psql('postgres',\n> > > + \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay =\n> > > 86460000)\"\n> > > +);\n> > > +\n> > > +# New row to trigger apply delay.\n> > > +$node_publisher->safe_psql('postgres',\n> > > + \"INSERT INTO test_tab VALUES (0, 'foobar')\");\n> > > +\n> > > +check_apply_delay_log(\"logical replication apply delay\",\n> > > +\"80000000\");\n> > While addressing this point, I've noticed that there is a behavior\n> > difference between physical replication's recovery_min_apply_delay and\n> > this feature when stopping the replication during delays.\n> >\n> > At present, in the latter case,\n> > the apply worker exits without applying the suspended transaction\n> > after ALTER SUBSCRIPTION DISABLE command for the subscription.\n> >\n> \n> In the previous paragraph, you said the behavior difference while stopping the\n> replication but it is not clear from where this DISABLE command comes in that\n> scenario.\nSorry for my unclear description. I mean \"stopping the replication\" is\nto disable the subscription during the \"min_apply_delay\" wait time on logical\nreplication setup.\n\nI proposed and mentioned this discussion point to define\nhow the time-delayed apply worker should behave when there is a transaction\ndelayed by \"min_apply_delay\" parameter and additionally the user issues\nALTER SUBSCRIPTION ... DISABLE during the delay. 
When it comes to physical\nreplication, it's hard to find a perfect correspondent for LR's ALTER SUBSCRIPTION\nDISABLE command, but I chose a scenario to promote a secondary during\n\"recovery_min_apply_delay\" for comparison this time. After the promotion of\nthe secondary in the physical replication, the transaction\ncommitted on the publisher but delayed on the secondary can be seen.\nThis would be because CheckForStandbyTrigger in recoveryApplyDelay returns true\nand we apply the record by breaking the wait.\nI checked and got the LOG message \"received promote request\" in the secondary log\nwhen I tested this case.\n\n> > Meanwhile, there is no \"disabling\" command for physical replication,\n> > but I checked the behavior about what happens for promoting a\n> > secondary during the delay of recovery_min_apply_delay for physical\n> replication as one example.\n> > The transaction has become visible even in the promoting in the middle of\n> delay.\n> >\n> \n> What causes such a transaction to be visible after promotion? Ideally, if the\n> commit doesn't succeed, the transaction shouldn't be visible.\n> Do, we allow the transaction waiting due to delay to get committed on\n> promotion?\nThe commit succeeded on the primary and then I promoted the secondary\nduring the \"recovery_min_apply_delay\" wait of the transaction. 
Then, the result\nis that the transaction turned out to be available on the promoted secondary.\n\n\n> > I'm not sure if I should make the time-delayed LR aligned with this behavior.\n> > Does someone have an opinion on this?\n> >\n> \n> Can you please explain a bit more as asked above to understand the\n> difference?\nSo, the current difference is that the time-delayed apply\nworker of logical replication doesn't apply the delayed transaction on the subscriber\nwhen the subscription has been disabled during the delay, while (in one example\nof a promotion) the physical replication does apply the delayed transaction.\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 01:06:55 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:37 AM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> >\n> > Can you please explain a bit more as asked above to understand the\n> > difference?\n> So, the current difference is that the time-delayed apply\n> worker of logical replication doesn't apply the delayed transaction on the subscriber\n> when the subscription has been disabled during the delay, while (in one example\n> of a promotion) the physical replication does the apply of the delayed transaction.\n>\n\nI don't see any particular reason here to allow the transaction apply\nto complete if the subscription is disabled. Note, that here we are\nwaiting at the beginning of the transaction and for large\ntransactions, it might cause a significant delay if we allow applying\nthe xact. OTOH, if someone comes up with a valid use case to allow the\ntransaction apply to get completed after the subscription is disabled\nthen we can anyway do it later as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Jan 2023 10:48:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Wednesday, January 18, 2023 2:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jan 18, 2023 at 6:37 AM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > >\n> > > Can you please explain a bit more as asked above to understand the\n> > > difference?\n> > So, the current difference is that the time-delayed apply worker of\n> > logical replication doesn't apply the delayed transaction on the\n> > subscriber when the subscription has been disabled during the delay,\n> > while (in one example of a promotion) the physical replication does the apply\n> of the delayed transaction.\n> >\n> \n> I don't see any particular reason here to allow the transaction apply to complete\n> if the subscription is disabled. Note, that here we are waiting at the beginning\n> of the transaction and for large transactions, it might cause a significant delay if\n> we allow applying the xact. OTOH, if someone comes up with a valid use case\n> to allow the transaction apply to get completed after the subscription is\n> disabled then we can anyway do it later as well.\nThis makes sense. I agree with you. So, I'll keep the current behavior of\nthe patch.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 06:11:02 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": " Here are my review comments for the latest patch v16-0001. (excluding\nthe test code)\n\n======\n\nGeneral\n\n1.\n\nSince the value of min_apply_delay cannot be < 0, I was thinking\nprobably it should have been declared everywhere in this patch as a\nuint64 instead of an int64, right?\n\n======\n\nCommit message\n\n2.\n\nIf the subscription sets min_apply_delay parameter, the logical\nreplication worker will delay the transaction commit for min_apply_delay\nmilliseconds.\n\n~\n\nIMO there should be another sentence before this just to say that a\nnew parameter is being added:\n\ne.g.\nThis patch implements a new subscription parameter called 'min_apply_delay'.\n\n======\n\ndoc/src/sgml/config.sgml\n\n3.\n\n+ <para>\n+ For time-delayed logical replication, the apply worker sends a Standby\n+ Status Update message to the corresponding publisher per the indicated\n+ time of this parameter. Therefore, if this parameter is longer than\n+ <literal>wal_sender_timeout</literal> on the publisher, then the\n+ walsender doesn't get any update message during the delay and repeatedly\n+ terminates due to the timeout errors. Hence, make sure this parameter is\n+ shorter than the <literal>wal_sender_timeout</literal> of the publisher.\n+ If this parameter is set to zero with time-delayed replication, the\n+ apply worker doesn't send any feedback messages during the\n+ <literal>min_apply_delay</literal>.\n+ </para>\n\n\nThis paragraph seemed confusing. I think it needs to be reworded to\nchange all of the \"this parameter\" references because there are at\nleast 3 different parameters mentioned in this paragraph. e.g. 
maybe\njust change them to explicitly name the parameter you are talking\nabout.\n\nI also think it needs to mention the ‘min_apply_delay’ subscription\nparameter up-front and then refer to it appropriately.\n\nThe end result might be something like I wrote below (this is just my\nguess – probably you can word it better).\n\nSUGGESTION\nFor time-delayed logical replication (i.e. when the subscription is\ncreated with parameter min_apply_delay > 0), the apply worker sends a\nStandby Status Update message to the publisher with a period of\nwal_receiver_status_interval . Make sure to set\nwal_receiver_status_interval less than the wal_sender_timeout on the\npublisher, otherwise, the walsender will repeatedly terminate due to\nthe timeout errors. If wal_receiver_status_interval is set to zero,\nthe apply worker doesn't send any feedback messages during the\nsubscriber’s min_apply_delay period.\n\n======\n\ndoc/src/sgml/ref/create_subscription.sgml\n\n4.\n\n+ <para>\n+ By default, the subscriber applies changes as soon as possible. As\n+ with the physical replication feature\n+ (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it can be useful to\n+ have a time-delayed logical replica. This parameter lets the user to\n+ delay the application of changes by a specified amount of\ntime. If this\n+ value is specified without units, it is taken as milliseconds. The\n+ default is zero(no delay).\n+ </para>\n\n4a.\nAs with the physical replication feature (recovery_min_apply_delay),\nit can be useful to have a time-delayed logical replica.\n\nIMO not sure that the above sentence is necessary. It seems only to be\nsaying that this parameter can be useful. 
Why do we need to say that?\n\n~\n\n4b.\n\"This parameter lets the user to delay\" -> \"This parameter lets the user delay\"\nOR\n\"This parameter lets the user to delay\" -> \"This parameter allows the\nuser to delay\"\n\n~\n\n4c.\n\"If this value is specified without units\" -> \"If the value is\nspecified without units\"\n\n~\n\n4d.\n\"zero(no delay).\" -> \"zero (no delay).\"\n\n----\n\n5.\n\n+ <para>\n+ The delay occurs only on WAL records for transaction begins and after\n+ the initial table synchronization. It is possible that the\n+ replication delay between publisher and subscriber exceeds the value\n+ of this parameter, in which case no delay is added. Note that the\n+ delay is calculated between the WAL time stamp as written on\n+ publisher and the current time on the subscriber. Time\nspent in logical\n+ decoding and in transferring the transaction may reduce the\nactual wait\n+ time. If the system clocks on publisher and subscriber are not\n+ synchronized, this may lead to apply changes earlier than expected,\n+ but this is not a major issue because this parameter is\ntypically much\n+ larger than the time deviations between servers. Note that if this\n+ parameter is set to a long delay, the replication will stop if the\n+ replication slot falls behind the current LSN by more than\n+ <link\nlinkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</literal></link>.\n+ </para>\n\nI think the first part can be reworded slightly. See what you think\nabout the suggestion below.\n\nSUGGESTION\nAny delay occurs only on WAL records for transaction begins after all\ninitial table synchronization has finished. The delay is calculated\nbetween the WAL timestamp as written on the publisher and the current\ntime on the subscriber. 
Any overhead of time spent in logical decoding\nand in transferring the transaction may reduce the actual wait time.\nIt is also possible that the overhead already exceeds the requested\n'min_apply_delay' value, in which case no additional wait is\nnecessary. If the system clocks...\n\n----\n\n6.\n\n+ <para>\n+ Setting streaming to <literal>parallel</literal> mode and\n<literal>min_apply_delay</literal>\n+ simultaneously is not supported.\n+ </para>\n\nSUGGESTION\nA non-zero min_apply_delay parameter is not allowed when streaming in\nparallel mode.\n\n======\n\nsrc/backend/commands/subscriptioncmds.c\n\n7. parse_subscription_options\n\n@@ -404,6 +445,17 @@ parse_subscription_options(ParseState *pstate,\nList *stmt_options,\n \"slot_name = NONE\", \"create_slot = false\")));\n }\n }\n+\n+ /* Test the combination of streaming mode and min_apply_delay */\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0)\n+ {\n+ if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"min_apply_delay > 0\", \"streaming = parallel\"));\n+ }\n\nSUGGESTION (comment)\nThe combination of parallel streaming mode and min_apply_delay is not allowed.\n\n~~~\n\n8. AlterSubscription (general)\n\nI observed during testing there are 3 different errors….\n\nAt subscription CREATE time you can get this error:\nERROR: min_apply_delay > 0 and streaming = parallel are mutually\nexclusive options\n\nIf you try to ALTER the min_apply_delay when already streaming =\nparallel you can get this error:\nERROR: cannot enable min_apply_delay for subscription in streaming =\nparallel mode\n\nIf you try to ALTER the streaming to be parallel if there is already a\nmin_apply_delay > 0 then you can get this error:\nERROR: cannot enable streaming = parallel mode for subscription with\nmin_apply_delay\n\n~\n\nIMO there is no need to have 3 different error message texts. 
I think\nall these cases are explained by just the first text (ERROR:\nmin_apply_delay > 0 and streaming = parallel are mutually exclusive\noptions)\n\n\n~~~\n\n9. AlterSubscription\n\n@@ -1098,6 +1152,18 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n\n if (IsSet(opts.specified_opts, SUBOPT_STREAMING))\n {\n+ /*\n+ * Test the combination of streaming mode and\n+ * min_apply_delay\n+ */\n+ if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n+ if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nopts.min_apply_delay > 0) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nsub->minapplydelay > 0))\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot enable %s mode for subscription with %s\",\n+ \"streaming = parallel\", \"min_apply_delay\"));\n+\n\n9a.\nSUGGESTION (comment)\nThe combination of parallel streaming mode and min_apply_delay is not allowed.\n\n~\n\n9b.\n(see AlterSubscription general review comment #8 above)\nHere you can use the same comment error message that says\nmin_apply_delay > 0 and streaming = parallel are mutually exclusive\noptions.\n\n~~~\n\n10. 
AlterSubscription\n\n@@ -1111,6 +1177,25 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n = true;\n }\n\n+ if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY))\n+ {\n+ /*\n+ * Test the combination of streaming mode and\n+ * min_apply_delay\n+ */\n+ if (opts.min_apply_delay > 0)\n+ if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming\n== LOGICALREP_STREAM_PARALLEL) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\nLOGICALREP_STREAM_PARALLEL))\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot enable %s for subscription in %s mode\",\n+ \"min_apply_delay\", \"streaming = parallel\"));\n+\n+ values[Anum_pg_subscription_subminapplydelay - 1] =\n+ Int64GetDatum(opts.min_apply_delay);\n+ replaces[Anum_pg_subscription_subminapplydelay - 1] = true;\n+ }\n\n10a.\nSUGGESTION (comment)\nThe combination of parallel streaming mode and min_apply_delay is not allowed.\n\n~\n\n10b.\n(see AlterSubscription general review comment #8 above)\nHere you can use the same comment error message that says\nmin_apply_delay > 0 and streaming = parallel are mutually exclusive\noptions.\n\n======\n\n.../replication/logical/applyparallelworker.c\n\n11.\n\n@@ -704,7 +704,8 @@ pa_process_spooled_messages_if_required(void)\n {\n apply_spooled_messages(&MyParallelShared->fileset,\n MyParallelShared->xid,\n- InvalidXLogRecPtr);\n+ InvalidXLogRecPtr,\n+ 0);\n\nIMO this passing of 0 is a bit strange because it is currently acting\nlike a dummy value since the apply_spooled_messages will never make\nuse of the 'finish_ts' anyway (since this call is from a parallel\napply worker).\n\nI think a better way to code this might be to pass the 0 (same as you\nare doing here) but inside the apply_spooled_messages change the code:\n\nFROM\nif (!am_parallel_apply_worker())\nmaybe_delay_apply(finish_ts);\n\nTO\nif (finish_ts)\nmaybe_delay_apply(finish_ts);\n\nThat does 2 things.\n- It makes the passed-in 0 have 
some meaning\n- It simplifies the apply_spooled_messages code\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n12.\n\n@@ -318,6 +318,17 @@ static List *on_commit_wakeup_workers_subids = NIL;\n bool in_remote_transaction = false;\n static XLogRecPtr remote_final_lsn = InvalidXLogRecPtr;\n\n+/*\n+ * In order to avoid walsender's timeout during time-delayed replication,\n+ * it's necessary to keep sending feedback messages during the delay from the\n+ * worker process. Meanwhile, the feature delays the apply before starting the\n+ * transaction and thus we don't write WALs for the suspended changes during\n+ * the wait. Hence, in the case the worker process sends a feedback message\n+ * during the delay, we should not make positions of the flushed and apply LSN\n+ * overwritten by the last received latest LSN. See send_feedback()\nfor details.\n+ */\n+static XLogRecPtr last_received = InvalidXLogRecPtr;\n\n12a.\nSuggest a small change to the first sentence of the comment.\n\nBEFORE\nIn order to avoid walsender's timeout during time-delayed replication,\nit's necessary to keep sending feedback messages during the delay from\nthe worker process.\n\nAFTER\nIn order to avoid walsender timeout for time-delayed replication the\nworker process keeps sending feedback messages during the delay\nperiod.\n\n~\n\n12b.\n\"Hence, in the case\" -> \"When\"\n\n~~~\n\n13. forward declare\n\n-static void send_feedback(XLogRecPtr recvpos, bool force, bool requestReply);\n+static void send_feedback(XLogRecPtr recvpos, bool force, bool requestReply,\n+ bool in_delaying_apply);\n\nChange the param name:\n\n\"in_delaying_apply\" -> \"in_delayed_apply” (??)\n\n~~~\n\n14. maybe_delay_apply\n\n+ /* Nothing to do if no delay set */\n+ if (MySubscription->minapplydelay <= 0)\n+ return;\n\nIIUC min_apply_delay cannot be < 0 so this condition could simply be:\n\nif (!MySubscription->minapplydelay)\nreturn;\n\n~~~\n\n15. 
maybe_delay_apply\n\n+ /*\n+ * The min_apply_delay parameter is ignored until all tablesync workers\n+ * have reached READY state. If we allow the delay during the catchup\n+ * phase, once we reach the limit of tablesync workers, it will impose a\n+ * delay for each subsequent worker. It means it will take a long time to\n+ * finish the initial table synchronization.\n+ */\n+ if (!AllTablesyncsReady())\n+ return;\n\nSUGGESTION (slight rewording)\nThe min_apply_delay parameter is ignored until all tablesync workers\nhave reached READY state. This is because if we allowed the delay\nduring the catchup phase, then once we reached the limit of tablesync\nworkers it would impose a delay for each subsequent worker. That would\ncause initial table synchronization completion to take a long time.\n\n~~~\n\n16. maybe_delay_apply\n\n+ while (true)\n+ {\n+ long diffms;\n+\n+ ResetLatch(MyLatch);\n+\n+ CHECK_FOR_INTERRUPTS();\n\nIMO there should be some small explanatory comment here at the top of\nthe while loop.\n\n~~~\n\n17. apply_spooled_messages\n\n@@ -2024,6 +2141,21 @@ apply_spooled_messages(FileSet *stream_fileset,\nTransactionId xid,\n int fileno;\n off_t offset;\n\n+ /*\n+ * Should we delay the current transaction?\n+ *\n+ * Unlike the regular (non-streamed) cases, the delay is applied in a\n+ * STREAM COMMIT/STREAM PREPARE message for streamed transactions. The\n+ * STREAM START message does not contain a commit/prepare time (it will be\n+ * available when the in-progress transaction finishes). Hence, it's not\n+ * appropriate to apply a delay at that time.\n+ *\n+ * It's not allowed to execute time-delayed replication with parallel\n+ * apply feature.\n+ */\n+ if (!am_parallel_apply_worker())\n+ maybe_delay_apply(finish_ts);\n\nThat whole comment part \"Unlike the regular (non-streamed) cases\"\nseems misplaced here. Perhaps this part of the comment is better put\ninto the function header where the meaning of 'finish_ts' is\nexplained?\n\n~~~\n\n18. 
apply_spooled_messages\n\n+ * It's not allowed to execute time-delayed replication with parallel\n+ * apply feature.\n+ */\n+ if (!am_parallel_apply_worker())\n+ maybe_delay_apply(finish_ts);\n\nAs was mentioned in comment #11 above this code could be changed like\n\nif (finish_ts)\nmaybe_delay_apply(finish_ts);\nthen you don't even need to make mention of \"parallel apply\" at all here.\n\nOTOH if you want to still have the parallel apply comment then maybe\nreword it like this:\n\"It is not allowed to combine time-delayed replication with the\nparallel apply feature.\"\n\n~~~\n\n19. apply_spooled_messages\n\nIf you chose not to do my suggestion from comment #11, then there are\n2 identical conditions (!am_parallel_apply_worker()); In this case, I\nwas wondering if it would be better to refactor to use a single\ncondition instead.\n\n~~~\n\n20. send_feedback\n(same as comment #13)\n\nMaybe change the new param name to “in_delayed_apply”?\n\n~~~\n\n21.\n\n@@ -3737,8 +3869,15 @@ send_feedback(XLogRecPtr recvpos, bool force,\nbool requestReply)\n /*\n * No outstanding transactions to flush, we can report the latest received\n * position. This is important for synchronous replication.\n+ *\n+ * During the delay of time-delayed replication, do not tell the publisher\n+ * that the received latest LSN is already applied and flushed at this\n+ * stage, since we don't apply the transaction yet. If we do so, it leads\n+ * to a wrong assumption of logical replication progress on the publisher\n+ * side. Here, we just send a feedback message to avoid publisher's\n+ * timeout during the delay.\n */\n\nMinor rewording of the comment\n\nSUGGESTION\nIf the subscriber side apply is delayed (because of time-delayed\nreplication) then do not tell the publisher that the received latest\nLSN is already applied and flushed, otherwise, it leads to the\npublisher side making a wrong assumption of logical replication\nprogress. 
Instead, we just send a feedback message to avoid a\npublisher timeout during the delay.\n\n\n======\n\n\nsrc/bin/pg_dump/pg_dump.c\n\n22.\n\n@@ -4546,9 +4547,14 @@ getSubscriptions(Archive *fout)\n LOGICALREP_TWOPHASE_STATE_DISABLED);\n\n if (fout->remoteVersion >= 160000)\n- appendPQExpBufferStr(query, \" s.suborigin\\n\");\n+ appendPQExpBufferStr(query,\n+ \" s.suborigin,\\n\"\n+ \" s.subminapplydelay\\n\");\n else\n- appendPQExpBuffer(query, \" '%s' AS suborigin\\n\", LOGICALREP_ORIGIN_ANY);\n+ {\n+ appendPQExpBuffer(query, \" '%s' AS suborigin,\\n\", LOGICALREP_ORIGIN_ANY);\n+ appendPQExpBufferStr(query, \" 0 AS subminapplydelay\\n\");\n+ }\n\nCan’t those appends in the else part can be combined to a single\nappendPQExpBuffer\n\nappendPQExpBuffer(query,\n\" '%s' AS suborigin,\\n\"\n\" 0 AS subminapplydelay\\n\"\nLOGICALREP_ORIGIN_ANY);\n\n\n======\n\nsrc/include/catalog/pg_subscription.h\n\n23.\n\n@@ -70,6 +70,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n * skipped */\n\n+ int64 subminapplydelay; /* Replication apply delay */\n+\n NameData subname; /* Name of the subscription */\n\n Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n\nSUGGESTION (for comment)\nReplication apply delay (ms)\n\n~~\n\n24.\n\n@@ -120,6 +122,7 @@ typedef struct Subscription\n * in */\n XLogRecPtr skiplsn; /* All changes finished at this LSN are\n * skipped */\n+ int64 minapplydelay; /* Replication apply delay */\n\nSUGGESTION (for comment)\nReplication apply delay (ms)\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:06:17 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for the latest patch v16-0001. (excluding\n> the test code)\n>\n...\n>\n> 8. AlterSubscription (general)\n>\n> I observed during testing there are 3 different errors….\n>\n> At subscription CREATE time you can get this error:\n> ERROR: min_apply_delay > 0 and streaming = parallel are mutually\n> exclusive options\n>\n> If you try to ALTER the min_apply_delay when already streaming =\n> parallel you can get this error:\n> ERROR: cannot enable min_apply_delay for subscription in streaming =\n> parallel mode\n>\n> If you try to ALTER the streaming to be parallel if there is already a\n> min_apply_delay > 0 then you can get this error:\n> ERROR: cannot enable streaming = parallel mode for subscription with\n> min_apply_delay\n>\n> ~\n>\n> IMO there is no need to have 3 different error message texts. I think\n> all these cases are explained by just the first text (ERROR:\n> min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> options)\n>\n>\n\nAfter checking the regression test output I can see the merit of your\nseparate error messages like this, even if they are maybe not strictly\nnecessary. So feel free to ignore my previous review comment.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:41:58 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for the latest patch v16-0001. (excluding\n> the test code)\n>\n\nAnd here are some review comments for the v16-0001 test code.\n\n======\n\nsrc/test/regress/sql/subscription.sql\n\n1. General\nFor all comments\n\n\"time delayed replication\" -> \"time-delayed replication\" maybe is better?\n\n~~~\n\n2.\n-- fail - utilizing streaming = parallel with time delayed replication\nis not supported.\n\nFor readability please put a blank line before this test.\n\n~~~\n\n3.\n-- success -- value without unit is taken as milliseconds\n\n\"value\" -> \"min_apply_delay value\"\n\n~~~\n\n4.\n-- success -- interval is converted into ms and stored as integer\n\n\"interval\" -> \"min_apply_delay interval\"\n\n\"integer\" -> \"an integer\"\n\n~~~\n\n5.\nYou could also add another test where min_apply_delay is 0\n\nThen the following combination can be confirmed OK -- success create\nsubscription with (streaming=parallel, min_apply_delay=0)\n\n~~\n\n6.\n-- fail - alter subscription with min_apply_delay should fail when\nstreaming = parallel is set.\nCREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, streaming = parallel);\n\nThere is another way to do this test without creating a brand-new\nsubscription. You could just alter the existing subscription like:\nALTER ... SET (min_apply_delay = 0)\nthen ALTER ... SET (parallel = streaming)\nthen ALTER ... SET (min_apply_delay = 123)\n\n======\n\nsrc/test/subscription/t/032_apply_delay.pl\n\n7. sub check_apply_delay_log\n\n my ($node_subscriber, $message, $expected) = @_;\n\nWhy pass in the message text? 
It is always the same so can be hardwired\nin this function, right?\n\n~~~\n\n8.\n# Get the delay time in the server log\n\n\"int the server log\" -> \"from the server log\" (?)\n\n~~~\n\n9.\n qr/$message: (\\d+) ms/\n or die \"could not get delayed time\";\n my $logged_delay = $1;\n\n # Is it larger than expected?\n cmp_ok($logged_delay, '>', $expected,\n \"The wait time of the apply worker is long enough expectedly\"\n );\n\n9a.\n\"could not get delayed time\" -> \"could not get the apply worker wait time\"\n\n9b.\n\"The wait time of the apply worker is long enough expectedly\" -> \"The\napply worker wait time has expected duration\"\n\n~~~\n\n10.\nsub check_apply_delay_time\n\n\nMaybe a brief explanatory comment for this function is needed to\nexplain the unreplicated column c.\n\n~~~\n\n11.\n$node_subscriber->safe_psql('postgres',\n \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub WITH (streaming = on,\nmin_apply_delay = '3s')\"\n\n\nI think there should be a comment here highlighting that you are\nsetting up a subscriber time delay of 3 seconds, and then later you\ncan better describe the parameters for the checking functions...\n\ne.g. 
(add this comment)\n# verifies that the subscriber lags the publisher by at least 3 seconds\ncheck_apply_delay_time($node_publisher, $node_subscriber, '5', '3');\n\ne.g.\n# verifies that the subscriber lags the publisher by at least 3 seconds\ncheck_apply_delay_time($node_publisher, $node_subscriber, '8', '3');\n\n~~~\n\n12.\n# Test whether ALTER SUBSCRIPTION changes the delayed time of the apply worker\n# (1 day 1 minute).\n$node_subscriber->safe_psql('postgres',\n \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\"\n);\n\nUpdate the comment with another note.\n# Note - The extra 1 min is to account for any decoding/network overhead.\n\n~~~\n\n13.\n# Make sure we have long enough min_apply_delay after the ALTER command\ncheck_apply_delay_log($node_subscriber, \"logical replication apply\ndelay\", \"80000000\");\n\nIMO the expectation of 1 day (86460000 ms) wait time might be a better\nnumber for your \"expected\" value.\n\nSo update the comment/call like this:\n\n# Make sure the apply worker knows to wait for more than 1 day (86400000 ms)\ncheck_apply_delay_log($node_subscriber, \"logical replication apply\ndelay\", \"86400000\");\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:49:14 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
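The check reviewed in items 7-9 above (scrape the apply worker's wait time from the server log and compare it with `cmp_ok`) lives in Perl TAP code. As a rough illustration of the same check, here is a minimal Python sketch; the log line format "logical replication apply delay: N ms" follows the message text quoted in the review, and the function name simply mirrors the TAP helper.

```python
import re

def check_apply_delay_log(log_text: str, expected_ms: int) -> int:
    """Sketch of the TAP helper discussed above: find the apply worker's
    logged wait time and check it is larger than the expected minimum
    (both in milliseconds)."""
    m = re.search(r"logical replication apply delay: (\d+) ms", log_text)
    if m is None:
        raise AssertionError("could not get the apply worker wait time")
    logged_delay = int(m.group(1))
    assert logged_delay > expected_ms, (
        f"apply worker waited {logged_delay} ms, expected > {expected_ms} ms")
    return logged_delay

# Hypothetical log excerpt for illustration:
log = "DEBUG:  logical replication apply delay: 86455000 ms\n"
print(check_apply_delay_log(log, 86400000))  # prints 86455000
```

As the review notes, comparing against the full configured delay (86400000 ms for 1 day) rather than a smaller round number makes the expectation explicit.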
{
"msg_contents": "On Thursday, January 19, 2023 10:49 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> On Wed, Jan 18, 2023 at 6:06 PM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> >\n> > Here are my review comments for the latest patch v16-0001. (excluding\n> > the test code)\n> >\n> \n> And here are some review comments for the v16-0001 test code.\nHi, thanks for your review !\n\n\n> ======\n> \n> src/test/regress/sql/subscription.sql\n> \n> 1. General\n> For all comments\n> \n> \"time delayed replication\" -> \"time-delayed replication\" maybe is better?\nFixed.\n\n> ~~~\n> \n> 2.\n> -- fail - utilizing streaming = parallel with time delayed replication is not\n> supported.\n> \n> For readability please put a blank line before this test.\nFixed.\n\n> ~~~\n> \n> 3.\n> -- success -- value without unit is taken as milliseconds\n> \n> \"value\" -> \"min_apply_delay value\"\nFixed.\n\n\n> ~~~\n> \n> 4.\n> -- success -- interval is converted into ms and stored as integer\n> \n> \"interval\" -> \"min_apply_delay interval\"\n> \n> \"integer\" -> \"an integer\"\nBoth are fixed.\n\n\n> ~~~\n> \n> 5.\n> You could also add another test where min_apply_delay is 0\n> \n> Then the following combination can be confirmed OK -- success create\n> subscription with (streaming=parallel, min_apply_delay=0)\nThis combination is added with the modification for #6.\n\n> ~~\n> \n> 6.\n> -- fail - alter subscription with min_apply_delay should fail when streaming =\n> parallel is set.\n> CREATE SUBSCRIPTION regress_testsub CONNECTION\n> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect = false,\n> streaming = parallel);\n> \n> There is another way to do this test without creating a brand-new subscription.\n> You could just alter the existing subscription like:\n> ALTER ... SET (min_apply_delay = 0)\n> then ALTER ... SET (parallel = streaming) then ALTER ... SET (min_apply_delay\n> = 123)\nFixed.\n\n> ======\n> \n> src/test/subscription/t/032_apply_delay.pl\n> \n> 7. 
sub check_apply_delay_log\n> \n> my ($node_subscriber, $message, $expected) = @_;\n> \n> Why pass in the message text? I is always the same so can be hardwired in this\n> function, right?\nFixed.\n\n> ~~~\n> \n> 8.\n> # Get the delay time in the server log\n> \n> \"int the server log\" -> \"from the server log\" (?)\nFixed.\n\n> ~~~\n> \n> 9.\n> qr/$message: (\\d+) ms/\n> or die \"could not get delayed time\";\n> my $logged_delay = $1;\n> \n> # Is it larger than expected?\n> cmp_ok($logged_delay, '>', $expected,\n> \"The wait time of the apply worker is long enough expectedly\"\n> );\n> \n> 9a.\n> \"could not get delayed time\" -> \"could not get the apply worker wait time\"\n> \n> 9b.\n> \"The wait time of the apply worker is long enough expectedly\" -> \"The apply\n> worker wait time has expected duration\"\nBoth are fixed.\n\n\n> ~~~\n> \n> 10.\n> sub check_apply_delay_time\n> \n> \n> Maybe a brief explanatory comment for this function is needed to explain the\n> unreplicated column c.\nAdded.\n\n> ~~~\n> \n> 11.\n> $node_subscriber->safe_psql('postgres',\n> \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> application_name=$appname' PUBLICATION tap_pub WITH (streaming = on,\n> min_apply_delay = '3s')\"\n> \n> \n> I think there should be a comment here highlighting that you are setting up a\n> subscriber time delay of 3 seconds, and then later you can better describe the\n> parameters for the checking functions...\nAdded a comment for CREATE SUBSCRIPTION command.\n\n> e.g. 
(add this comment)\n> # verifies that the subscriber lags the publisher by at least 3 seconds\n> check_apply_delay_time($node_publisher, $node_subscriber, '5', '3');\n> \n> e.g.\n> # verifies that the subscriber lags the publisher by at least 3 seconds\n> check_apply_delay_time($node_publisher, $node_subscriber, '8', '3');\nAdded.\n\n\n> ~~~\n> \n> 12.\n> # Test whether ALTER SUBSCRIPTION changes the delayed time of the apply\n> worker # (1 day 1 minute).\n> $node_subscriber->safe_psql('postgres',\n> \"ALTER SUBSCRIPTION tap_sub SET (min_apply_delay = 86460000)\"\n> );\n> \n> Update the comment with another note.\n> # Note - The extra 1 min is to account for any decoding/network overhead.\nOkay, added the comment. In general, TAP tests\nfail if we wait for more than 3 minutes. Then,\nwe should think setting the maximum consumed time\nmore than 3 minutes is safe. For example, if\n(which should not happen usually, but)\nwe consumed more than 1 minutes between this ALTER SUBSCRIPTION SET\nand below check_apply_delay_log() then, the test will fail.\n\nSo made the extra time bigger.\n> ~~~\n> \n> 13.\n> # Make sure we have long enough min_apply_delay after the ALTER command\n> check_apply_delay_log($node_subscriber, \"logical replication apply delay\",\n> \"80000000\");\n> \n> IMO the expectation of 1 day (86460000 ms) wait time might be a better number\n> for your \"expected\" value.\n> \n> So update the comment/call like this:\n> \n> # Make sure the apply worker knows to wait for more than 1 day (86400000 ms)\n> check_apply_delay_log($node_subscriber, \"logical replication apply delay\",\n> \"86400000\");\nUpdated the comment and the function call.\n\nKindly have a look at the updated patch v17.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Thu, 19 Jan 2023 06:35:58 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, January 18, 2023 4:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> Here are my review comments for the latest patch v16-0001. (excluding the\n> test code)\nHi, thank you for your review !\n\n> ======\n> \n> General\n> \n> 1.\n> \n> Since the value of min_apply_delay cannot be < 0, I was thinking probably it\n> should have been declared everywhere in this patch as a\n> uint64 instead of an int64, right?\nNo, we won't be able to adopt this idea.\n\nIt seems that we are not able to use uint for catalog type.\nSo, can't applying it to the pg_subscription.h definitions\nand then similarly Int64GetDatum to store catalog variables\nand the argument variable of Int64GetDatum.\n\nPlus, there is a possibility that type Interval becomes negative value,\nthen we are not able to change the int64 variable to get\nthe return value of interval2ms().\n\n> ======\n> \n> Commit message\n> \n> 2.\n> \n> If the subscription sets min_apply_delay parameter, the logical replication\n> worker will delay the transaction commit for min_apply_delay milliseconds.\n> \n> ~\n> \n> IMO there should be another sentence before this just to say that a new\n> parameter is being added:\n> \n> e.g.\n> This patch implements a new subscription parameter called\n> 'min_apply_delay'.\nAdded.\n\n\n> ======\n> \n> doc/src/sgml/config.sgml\n> \n> 3.\n> \n> + <para>\n> + For time-delayed logical replication, the apply worker sends a Standby\n> + Status Update message to the corresponding publisher per the\n> indicated\n> + time of this parameter. Therefore, if this parameter is longer than\n> + <literal>wal_sender_timeout</literal> on the publisher, then the\n> + walsender doesn't get any update message during the delay and\n> repeatedly\n> + terminates due to the timeout errors. 
Hence, make sure this parameter\n> is\n> + shorter than the <literal>wal_sender_timeout</literal> of the\n> publisher.\n> + If this parameter is set to zero with time-delayed replication, the\n> + apply worker doesn't send any feedback messages during the\n> + <literal>min_apply_delay</literal>.\n> + </para>\n> \n> \n> This paragraph seemed confusing. I think it needs to be reworded to change all\n> of the \"this parameter\" references because there are at least 3 different\n> parameters mentioned in this paragraph. e.g. maybe just change them to\n> explicitly name the parameter you are talking about.\n> \n> I also think it needs to mention the ‘min_apply_delay’ subscription parameter\n> up-front and then refer to it appropriately.\n> \n> The end result might be something like I wrote below (this is just my guess ?\n> probably you can word it better).\n> \n> SUGGESTION\n> For time-delayed logical replication (i.e. when the subscription is created with\n> parameter min_apply_delay > 0), the apply worker sends a Standby Status\n> Update message to the publisher with a period of wal_receiver_status_interval .\n> Make sure to set wal_receiver_status_interval less than the\n> wal_sender_timeout on the publisher, otherwise, the walsender will repeatedly\n> terminate due to the timeout errors. If wal_receiver_status_interval is set to zero,\n> the apply worker doesn't send any feedback messages during the subscriber’s\n> min_apply_delay period.\nApplied. Also, I added one reference for min_apply_delay parameter\nat the end of this description.\n\n\n> ======\n> \n> doc/src/sgml/ref/create_subscription.sgml\n> \n> 4.\n> \n> + <para>\n> + By default, the subscriber applies changes as soon as possible. As\n> + with the physical replication feature\n> + (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it can be\n> useful to\n> + have a time-delayed logical replica. This parameter lets the user to\n> + delay the application of changes by a specified amount of\n> time. 
If this\n> + value is specified without units, it is taken as milliseconds. The\n> + default is zero(no delay).\n> + </para>\n> \n> 4a.\n> As with the physical replication feature (recovery_min_apply_delay), it can be\n> useful to have a time-delayed logical replica.\n> \n> IMO not sure that the above sentence is necessary. It seems only to be saying\n> that this parameter can be useful. Why do we need to say that?\nRemoved the sentence.\n\n\n> ~\n> \n> 4b.\n> \"This parameter lets the user to delay\" -> \"This parameter lets the user delay\"\n> OR\n> \"This parameter lets the user to delay\" -> \"This parameter allows the user to\n> delay\"\nFixed.\n\n \n> ~\n> \n> 4c.\n> \"If this value is specified without units\" -> \"If the value is specified without\n> units\"\nFixed.\n \n> ~\n> \n> 4d.\n> \"zero(no delay).\" -> \"zero (no delay).\"\nFixed.\n\n> ----\n> \n> 5.\n> \n> + <para>\n> + The delay occurs only on WAL records for transaction begins and\n> after\n> + the initial table synchronization. It is possible that the\n> + replication delay between publisher and subscriber exceeds the\n> value\n> + of this parameter, in which case no delay is added. Note that the\n> + delay is calculated between the WAL time stamp as written on\n> + publisher and the current time on the subscriber. Time\n> spent in logical\n> + decoding and in transferring the transaction may reduce the\n> actual wait\n> + time. If the system clocks on publisher and subscriber are not\n> + synchronized, this may lead to apply changes earlier than\n> expected,\n> + but this is not a major issue because this parameter is\n> typically much\n> + larger than the time deviations between servers. 
Note that if this\n> + parameter is set to a long delay, the replication will stop if the\n> + replication slot falls behind the current LSN by more than\n> + <link\n> linkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</\n> literal></link>.\n> + </para>\n> \n> I think the first part can be reworded slightly. See what you think about the\n> suggestion below.\n> \n> SUGGESTION\n> Any delay occurs only on WAL records for transaction begins after all initial\n> table synchronization has finished. The delay is calculated between the WAL\n> timestamp as written on the publisher and the current time on the subscriber.\n> Any overhead of time spent in logical decoding and in transferring the\n> transaction may reduce the actual wait time.\n> It is also possible that the overhead already exceeds the requested\n> 'min_apply_delay' value, in which case no additional wait is necessary. If the\n> system clocks...\nAddressed.\n\n\n> ----\n> \n> 6.\n> \n> + <para>\n> + Setting streaming to <literal>parallel</literal> mode and\n> <literal>min_apply_delay</literal>\n> + simultaneously is not supported.\n> + </para>\n> \n> SUGGESTION\n> A non-zero min_apply_delay parameter is not allowed when streaming in\n> parallel mode.\nApplied.\n\n\n> ======\n> \n> src/backend/commands/subscriptioncmds.c\n> \n> 7. 
parse_subscription_options\n> \n> @@ -404,6 +445,17 @@ parse_subscription_options(ParseState *pstate, List\n> *stmt_options,\n> \"slot_name = NONE\", \"create_slot = false\")));\n> }\n> }\n> +\n> + /* Test the combination of streaming mode and min_apply_delay */ if\n> + (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> + opts->min_apply_delay > 0)\n> + {\n> + if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n> ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR), errmsg(\"%s and %s are mutually\n> + exclusive options\",\n> + \"min_apply_delay > 0\", \"streaming = parallel\")); }\n> \n> SUGGESTION (comment)\n> The combination of parallel streaming mode and min_apply_delay is not\n> allowed.\nFixed.\n\n\n> ~~~\n> \n> 8. AlterSubscription (general)\n> \n> I observed during testing there are 3 different errors….\n> \n> At subscription CREATE time you can get this error:\n> ERROR: min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> options\n> \n> If you try to ALTER the min_apply_delay when already streaming = parallel you\n> can get this error:\n> ERROR: cannot enable min_apply_delay for subscription in streaming =\n> parallel mode\n> \n> If you try to ALTER the streaming to be parallel if there is already a\n> min_apply_delay > 0 then you can get this error:\n> ERROR: cannot enable streaming = parallel mode for subscription with\n> min_apply_delay\nYes. This is because the existing error message styles\nin AlterSubscription and parse_subscription_options.\n\nThe former uses \"mutually exclusive\" messages consistently,\nwhile the latter does \"cannot enable ...\" ones.\n> ~\n> \n> IMO there is no need to have 3 different error message texts. I think all these\n> cases are explained by just the first text (ERROR:\n> min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> options)\nThen, we followed this kind of formats.\n\n\n> ~~~\n> \n> 9. 
AlterSubscription\n> \n> @@ -1098,6 +1152,18 @@ AlterSubscription(ParseState *pstate,\n> AlterSubscriptionStmt *stmt,\n> \n> if (IsSet(opts.specified_opts, SUBOPT_STREAMING))\n> {\n> + /*\n> + * Test the combination of streaming mode and\n> + * min_apply_delay\n> + */\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL) if\n> + ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> opts.min_apply_delay > 0) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> sub->minapplydelay > 0))\n> + ereport(ERROR,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot enable %s mode for subscription with %s\",\n> + \"streaming = parallel\", \"min_apply_delay\"));\n> +\n> \n> 9a.\n> SUGGESTION (comment)\n> The combination of parallel streaming mode and min_apply_delay is not\n> allowed.\nFixed.\n\n\n> ~\n> \n> 9b.\n> (see AlterSubscription general review comment #8 above) Here you can use the\n> same comment error message that says min_apply_delay > 0 and streaming =\n> parallel are mutually exclusive options.\nAs described above, we followed the current style in the existing functions.\n\n\n> ~~~\n> \n> 10. 
AlterSubscription\n> \n> @@ -1111,6 +1177,25 @@ AlterSubscription(ParseState *pstate,\n> AlterSubscriptionStmt *stmt,\n> = true;\n> }\n> \n> + if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)) {\n> + /*\n> + * Test the combination of streaming mode and\n> + * min_apply_delay\n> + */\n> + if (opts.min_apply_delay > 0)\n> + if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming\n> == LOGICALREP_STREAM_PARALLEL) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\n> LOGICALREP_STREAM_PARALLEL))\n> + ereport(ERROR,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot enable %s for subscription in %s mode\",\n> + \"min_apply_delay\", \"streaming = parallel\"));\n> +\n> + values[Anum_pg_subscription_subminapplydelay - 1] =\n> + Int64GetDatum(opts.min_apply_delay);\n> + replaces[Anum_pg_subscription_subminapplydelay - 1] = true; }\n> \n> 10a.\n> SUGGESTION (comment)\n> The combination of parallel streaming mode and min_apply_delay is not\n> allowed.\nFixed.\n\n\n> ~\n> \n> 10b.\n> (see AlterSubscription general review comment #8 above) Here you can use the\n> same comment error message that says min_apply_delay > 0 and streaming =\n> parallel are mutually exclusive options.\nSame as 9b.\n\n> ======\n> \n> .../replication/logical/applyparallelworker.c\n> \n> 11.\n> \n> @@ -704,7 +704,8 @@ pa_process_spooled_messages_if_required(void)\n> {\n> apply_spooled_messages(&MyParallelShared->fileset,\n> MyParallelShared->xid,\n> - InvalidXLogRecPtr);\n> + InvalidXLogRecPtr,\n> + 0);\n> \n> IMO this passing of 0 is a bit strange because it is currently acting like a dummy\n> value since the apply_spooled_messages will never make use of the 'finish_ts'\n> anyway (since this call is from a parallel apply worker).\n> \n> I think a better way to code this might be to pass the 0 (same as you are doing\n> here) but inside the apply_spooled_messages change the code:\n> \n> FROM\n> if (!am_parallel_apply_worker())\n> 
maybe_delay_apply(finish_ts);\n> \n> TO\n> if (finish_ts)\n> maybe_delay_apply(finish_ts);\n> \n> That does 2 things.\n> - It makes the passed-in 0 have some meaning\n> - It simplifies the apply_spooled_messages code\nAdopted.\n\n\n> ======\n> \n> src/backend/replication/logical/worker.c\n> \n> 12.\n> \n> @@ -318,6 +318,17 @@ static List *on_commit_wakeup_workers_subids =\n> NIL; bool in_remote_transaction = false; static XLogRecPtr\n> remote_final_lsn = InvalidXLogRecPtr;\n> \n> +/*\n> + * In order to avoid walsender's timeout during time-delayed\n> +replication,\n> + * it's necessary to keep sending feedback messages during the delay\n> +from the\n> + * worker process. Meanwhile, the feature delays the apply before\n> +starting the\n> + * transaction and thus we don't write WALs for the suspended changes\n> +during\n> + * the wait. Hence, in the case the worker process sends a feedback\n> +message\n> + * during the delay, we should not make positions of the flushed and\n> +apply LSN\n> + * overwritten by the last received latest LSN. See send_feedback()\n> for details.\n> + */\n> +static XLogRecPtr last_received = InvalidXLogRecPtr;\n> \n> 12a.\n> Suggest a small change to the first sentence of the comment.\n> \n> BEFORE\n> In order to avoid walsender's timeout during time-delayed replication, it's\n> necessary to keep sending feedback messages during the delay from the\n> worker process.\n> \n> AFTER\n> In order to avoid walsender timeout for time-delayed replication the worker\n> process keeps sending feedback messages during the delay period.\nFixed.\n\n\n> ~\n> \n> 12b.\n> \"Hence, in the case\" -> \"When\"\nFixed.\n\n \n> ~~~\n> \n> 13. forward declare\n> \n> -static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> requestReply);\n> +static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> requestReply,\n> + bool in_delaying_apply);\n> \n> Change the param name:\n> \n> \"in_delaying_apply\" -> \"in_delayed_apply” (??)\nChanged. 
The initial intention to append the \"in_\"\nprefix is to make the variable name aligned with\nsome other variables such as \"in_remote_transaction\" and\n\"in_streamed_transaction\" that mean the current status\nfor the transaction. So, until there is a better name proposed,\nwe can keep it.\n\n\n> ~~~\n> \n> 14. maybe_delay_apply\n> \n> + /* Nothing to do if no delay set */\n> + if (MySubscription->minapplydelay <= 0) return;\n> \n> IIUC min_apply_delay cannot be < 0 so this condition could simply be:\n> \n> if (!MySubscription->minapplydelay)\n> return;\nFixed.\n\n\n> ~~~\n> \n> 15. maybe_delay_apply\n> \n> + /*\n> + * The min_apply_delay parameter is ignored until all tablesync workers\n> + * have reached READY state. If we allow the delay during the catchup\n> + * phase, once we reach the limit of tablesync workers, it will impose\n> + a\n> + * delay for each subsequent worker. It means it will take a long time\n> + to\n> + * finish the initial table synchronization.\n> + */\n> + if (!AllTablesyncsReady())\n> + return;\n> \n> SUGGESTION (slight rewording)\n> The min_apply_delay parameter is ignored until all tablesync workers have\n> reached READY state. This is because if we allowed the delay during the\n> catchup phase, then once we reached the limit of tablesync workers it would\n> impose a delay for each subsequent worker. That would cause initial table\n> synchronization completion to take a long time.\nFixed.\n\n\n> ~~~\n> \n> 16. maybe_delay_apply\n> \n> + while (true)\n> + {\n> + long diffms;\n> +\n> + ResetLatch(MyLatch);\n> +\n> + CHECK_FOR_INTERRUPTS();\n> \n> IMO there should be some small explanatory comment here at the top of the\n> while loop.\nAdded.\n\n\n> ~~~\n> \n> 17. 
apply_spooled_messages\n> \n> @@ -2024,6 +2141,21 @@ apply_spooled_messages(FileSet *stream_fileset,\n> TransactionId xid,\n> int fileno;\n> off_t offset;\n> \n> + /*\n> + * Should we delay the current transaction?\n> + *\n> + * Unlike the regular (non-streamed) cases, the delay is applied in a\n> + * STREAM COMMIT/STREAM PREPARE message for streamed transactions.\n> The\n> + * STREAM START message does not contain a commit/prepare time (it will\n> + be\n> + * available when the in-progress transaction finishes). Hence, it's\n> + not\n> + * appropriate to apply a delay at that time.\n> + *\n> + * It's not allowed to execute time-delayed replication with parallel\n> + * apply feature.\n> + */\n> + if (!am_parallel_apply_worker())\n> + maybe_delay_apply(finish_ts);\n> \n> That whole comment part \"Unlike the regular (non-streamed) cases\"\n> seems misplaced here. Perhaps this part of the comment is better put into\n> the function header where the meaning of 'finish_ts' is explained?\nMoved it to the header comment for maybe_delay_apply.\n\n\n> ~~~\n> \n> 18. apply_spooled_messages\n> \n> + * It's not allowed to execute time-delayed replication with parallel\n> + * apply feature.\n> + */\n> + if (!am_parallel_apply_worker())\n> + maybe_delay_apply(finish_ts);\n> \n> As was mentioned in comment #11 above this code could be changed like\n> \n> if (finish_ts)\n> maybe_delay_apply(finish_ts);\n> then you don't even need to make mention of \"parallel apply\" at all here.\n> \n> OTOH if you want to still have the parallel apply comment then maybe reword it\n> like this:\n> \"It is not allowed to combine time-delayed replication with the parallel apply\n> feature.\"\nChanged and now I don't mention the parallel apply feature.\n\n> ~~~\n> \n> 19. 
apply_spooled_messages\n> \n> If you chose not to do my suggestion from comment #11, then there are\n> 2 identical conditions (!am_parallel_apply_worker()); In this case, I was\n> wondering if it would be better to refactor to use a single condition instead.\nI applied #11 comment. Now, the conditions are not identical.\n\n> ~~~\n> \n> 20. send_feedback\n> (same as comment #13)\n> \n> Maybe change the new param name to “in_delayed_apply”?\nChanged.\n\n\n> ~~~\n> \n> 21.\n> \n> @@ -3737,8 +3869,15 @@ send_feedback(XLogRecPtr recvpos, bool force,\n> bool requestReply)\n> /*\n> * No outstanding transactions to flush, we can report the latest received\n> * position. This is important for synchronous replication.\n> + *\n> + * During the delay of time-delayed replication, do not tell the\n> + publisher\n> + * that the received latest LSN is already applied and flushed at this\n> + * stage, since we don't apply the transaction yet. If we do so, it\n> + leads\n> + * to a wrong assumption of logical replication progress on the\n> + publisher\n> + * side. Here, we just send a feedback message to avoid publisher's\n> + * timeout during the delay.\n> */\n> \n> Minor rewording of the comment\n> \n> SUGGESTION\n> If the subscriber side apply is delayed (because of time-delayed\n> replication) then do not tell the publisher that the received latest LSN is already\n> applied and flushed, otherwise, it leads to the publisher side making a wrong\n> assumption of logical replication progress. 
Instead, we just send a feedback\n> message to avoid a publisher timeout during the delay.\nAdopted.\n\n\n> ======\n> \n> \n> src/bin/pg_dump/pg_dump.c\n> \n> 22.\n> \n> @@ -4546,9 +4547,14 @@ getSubscriptions(Archive *fout)\n> LOGICALREP_TWOPHASE_STATE_DISABLED);\n> \n> if (fout->remoteVersion >= 160000)\n> - appendPQExpBufferStr(query, \" s.suborigin\\n\");\n> + appendPQExpBufferStr(query,\n> + \" s.suborigin,\\n\"\n> + \" s.subminapplydelay\\n\");\n> else\n> - appendPQExpBuffer(query, \" '%s' AS suborigin\\n\",\n> LOGICALREP_ORIGIN_ANY);\n> + {\n> + appendPQExpBuffer(query, \" '%s' AS suborigin,\\n\",\n> + LOGICALREP_ORIGIN_ANY); appendPQExpBufferStr(query, \" 0 AS\n> + subminapplydelay\\n\"); }\n> \n> Can’t those appends in the else part can be combined to a single\n> appendPQExpBuffer\n> \n> appendPQExpBuffer(query,\n> \" '%s' AS suborigin,\\n\"\n> \" 0 AS subminapplydelay\\n\"\n> LOGICALREP_ORIGIN_ANY);\nAdopted.\n\n\n> ======\n> \n> src/include/catalog/pg_subscription.h\n> \n> 23.\n> \n> @@ -70,6 +70,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> BKI_SHARED_RELATION BKI_ROW\n> XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n> * skipped */\n> \n> + int64 subminapplydelay; /* Replication apply delay */\n> +\n> NameData subname; /* Name of the subscription */\n> \n> Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> \n> SUGGESTION (for comment)\n> Replication apply delay (ms)\nFixed.\n\n> ~~\n> \n> 24.\n> \n> @@ -120,6 +122,7 @@ typedef struct Subscription\n> * in */\n> XLogRecPtr skiplsn; /* All changes finished at this LSN are\n> * skipped */\n> + int64 minapplydelay; /* Replication apply delay */\n> \n> SUGGESTION (for comment)\n> Replication apply delay (ms)\nFixed.\n\n\nKindly have a look at the latest v17 patch in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373F5162C7A0E6224670CF0EDC49%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 07:12:14 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
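Review items 5 and 14-17 above pin down when the apply worker actually waits: the delay is measured from the publisher-side commit timestamp, so any decoding/transfer overhead that has already elapsed shortens the wait, and if that overhead exceeds min_apply_delay no wait remains. A minimal Python sketch of that arithmetic, with hypothetical names (the real maybe_delay_apply() is C working on TimestampTz values):

```python
from datetime import datetime, timedelta, timezone

def remaining_apply_delay(finish_ts, min_apply_delay_ms, now):
    """Sketch of the wait computation discussed above: the wait ends at
    (publisher commit timestamp + min_apply_delay); time already spent
    in decoding/transfer reduces the wait, possibly to nothing."""
    if min_apply_delay_ms <= 0 or finish_ts is None:
        return timedelta(0)  # no delay configured, or no commit time yet
    remaining = finish_ts + timedelta(milliseconds=min_apply_delay_ms) - now
    return max(remaining, timedelta(0))

commit = datetime(2023, 1, 19, 12, 0, 0, tzinfo=timezone.utc)
# Suppose 1 s of decoding/network overhead has already elapsed:
now = commit + timedelta(seconds=1)
print(remaining_apply_delay(commit, 3000, now))  # 2 s still to wait
print(remaining_apply_delay(commit, 500, now))   # overhead exceeded the delay
```

This also shows why clock skew between publisher and subscriber shifts the wait, as the documentation paragraph reviewed above explains.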
{
"msg_contents": "On Thursday, January 19, 2023 10:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> On Wed, Jan 18, 2023 at 6:06 PM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> >\n> > Here are my review comments for the latest patch v16-0001. (excluding\n> > the test code)\n> >\n> ...\n> >\n> > 8. AlterSubscription (general)\n> >\n> > I observed during testing there are 3 different errors….\n> >\n> > At subscription CREATE time you can get this error:\n> > ERROR: min_apply_delay > 0 and streaming = parallel are mutually\n> > exclusive options\n> >\n> > If you try to ALTER the min_apply_delay when already streaming =\n> > parallel you can get this error:\n> > ERROR: cannot enable min_apply_delay for subscription in streaming =\n> > parallel mode\n> >\n> > If you try to ALTER the streaming to be parallel if there is already a\n> > min_apply_delay > 0 then you can get this error:\n> > ERROR: cannot enable streaming = parallel mode for subscription with\n> > min_apply_delay\n> >\n> > ~\n> >\n> > IMO there is no need to have 3 different error message texts. I think\n> > all these cases are explained by just the first text (ERROR:\n> > min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> > options)\n> >\n> >\n> \n> After checking the regression test output I can see the merit of your separate\n> error messages like this, even if they are maybe not strictly necessary. So feel\n> free to ignore my previous review comment.\nThank you for your notification.\n\nI wrote another reason why we wrote those messages in [1].\nSo, please have a look at it.\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373447440202B248BB63805EDC49%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 08:16:11 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 12:06, Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Updated the comment and the function call.\n>\n> Kindly have a look at the updated patch v17.\n\nThanks for the updated patch, few comments:\n1) min_apply_delay was accepting values like '600 m s h', I was not\nsure if we should allow this:\nalter subscription sub1 set (min_apply_delay = ' 600 m s h');\n\n+ /*\n+ * If no unit was specified, then explicitly\nadd 'ms' otherwise\n+ * the interval_in function would assume 'seconds'.\n+ */\n+ if (strspn(tmp, \"-0123456789 \") == strlen(tmp))\n+ val = psprintf(\"%sms\", tmp);\n+ else\n+ val = tmp;\n+\n+ interval =\nDatumGetIntervalP(DirectFunctionCall3(interval_in,\n+\n\nCStringGetDatum(val),\n+\n\nObjectIdGetDatum(InvalidOid),\n+\n Int32GetDatum(-1)));\n\n2) How about adding current_txn_wait_time in\npg_stat_subscription_stats, we can update the current_txn_wait_time\nperiodically, this will help the user to check approximately how much\ntime is left(min_apply_delay - stat value) before this transaction\nwill be applied in the subscription. 
If you agree this can be a 0002\npatch.\n\n3) There is one check at parse_subscription_options and another check\nin AlterSubscription; this looks like a redundant check in the case of\nalter subscription. Can we try to merge them and keep the check in one place:\n\n/*\n * The combination of parallel streaming mode and min_apply_delay is not\n * allowed.\n */\nif (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n    opts->min_apply_delay > 0)\n{\n    if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n        ereport(ERROR,\n                errcode(ERRCODE_SYNTAX_ERROR),\n                errmsg(\"%s and %s are mutually exclusive options\",\n                       \"min_apply_delay > 0\", \"streaming = parallel\"));\n}\n\nif (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY))\n{\n    /*\n     * The combination of parallel streaming mode and\n     * min_apply_delay is not allowed.\n     */\n    if (opts.min_apply_delay > 0)\n        if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming == LOGICALREP_STREAM_PARALLEL) ||\n            (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream == LOGICALREP_STREAM_PARALLEL))\n            ereport(ERROR,\n                    errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n                    errmsg(\"cannot enable %s for subscription in %s mode\",\n                           \"min_apply_delay\", \"streaming = parallel\"));\n\n    values[Anum_pg_subscription_subminapplydelay - 1] = Int64GetDatum(opts.min_apply_delay);\n    replaces[Anum_pg_subscription_subminapplydelay - 1] = true;\n}\n\n4) typo \"execeeds\" should be \"exceeds\"\n\n+ time on the subscriber. Any overhead of time spent in logical decoding\n+ and in transferring the transaction may reduce the actual wait time.\n+ It is also possible that the overhead already execeeds the requested\n+ <literal>min_apply_delay</literal> value, in which case no additional\n+ wait is necessary. If the system clocks on publisher and subscriber\n+ are not synchronized, this may lead to apply changes earlier than\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:24:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 4:25 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, 19 Jan 2023 at 12:06, Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Updated the comment and the function call.\n> >\n> > Kindly have a look at the updated patch v17.\n>\n> Thanks for the updated patch, few comments:\n> 1) min_apply_delay was accepting values like '600 m s h', I was not\n> sure if we should allow this:\n> alter subscription sub1 set (min_apply_delay = ' 600 m s h');\n>\n\nI think here we should have specs similar to recovery_min_apply_delay.\n\n>\n> 2) How about adding current_txn_wait_time in\n> pg_stat_subscription_stats, we can update the current_txn_wait_time\n> periodically, this will help the user to check approximately how much\n> time is left(min_apply_delay - stat value) before this transaction\n> will be applied in the subscription. If you agree this can be 0002\n> patch.\n>\n\nDo we have any similar stats for recovery_min_apply_delay? If not, I\nsuggest let's postpone this to see if users really need such a\nparameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Jan 2023 18:29:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 12:06 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Kindly have a look at the updated patch v17.\n>\n\nCan we try to optimize the test time for this test? On my machine, it\nis the second highest time-consuming test in src/test/subscription. It\nseems you are waiting twice for apply_delay and both are for streaming\ncases by varying the number of changes. I think it should be just once\nand that too for the non-streaming case. I think it would be good to\ntest streaming code path interaction but not sure if it is important\nenough to have two test cases for apply_delay.\n\nOne minor comment that I observed while going through the patch.\n+ /*\n+ * The combination of parallel streaming mode and min_apply_delay is not\n+ * allowed.\n+ */\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0)\n\nI think it would be good if you can specify the reason for not\nallowing this combination in the comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Jan 2023 18:47:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 18:29, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 4:25 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, 19 Jan 2023 at 12:06, Takamichi Osumi (Fujitsu)\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > Updated the comment and the function call.\n> > >\n> > > Kindly have a look at the updated patch v17.\n> >\n> > Thanks for the updated patch, few comments:\n> > 1) min_apply_delay was accepting values like '600 m s h', I was not\n> > sure if we should allow this:\n> > alter subscription sub1 set (min_apply_delay = ' 600 m s h');\n> >\n>\n> I think here we should have specs similar to recovery_min_apply_delay.\n>\n> >\n> > 2) How about adding current_txn_wait_time in\n> > pg_stat_subscription_stats, we can update the current_txn_wait_time\n> > periodically, this will help the user to check approximately how much\n> > time is left(min_apply_delay - stat value) before this transaction\n> > will be applied in the subscription. If you agree this can be 0002\n> > patch.\n> >\n>\n> Do we have any similar stats for recovery_min_apply_delay? If not, I\n> suggest let's postpone this to see if users really need such a\n> parameter.\n\nI did not find any statistics for recovery_min_apply_delay, ok it can\nbe delayed to a later time.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 18:53:37 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 12:42 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, January 18, 2023 4:06 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Here are my review comments for the latest patch v16-0001. (excluding the\n> > test code)\n> Hi, thank you for your review !\n>\n> > ======\n> >\n> > General\n> >\n> > 1.\n> >\n> > Since the value of min_apply_delay cannot be < 0, I was thinking probably it\n> > should have been declared everywhere in this patch as a\n> > uint64 instead of an int64, right?\n> No, we won't be able to adopt this idea.\n>\n> It seems that we are not able to use uint for catalog type.\n> So, can't applying it to the pg_subscription.h definitions\n> and then similarly Int64GetDatum to store catalog variables\n> and the argument variable of Int64GetDatum.\n>\n> Plus, there is a possibility that type Interval becomes negative value,\n> then we are not able to change the int64 variable to get\n> the return value of interval2ms().\n>\n> > ======\n> >\n> > Commit message\n> >\n> > 2.\n> >\n> > If the subscription sets min_apply_delay parameter, the logical replication\n> > worker will delay the transaction commit for min_apply_delay milliseconds.\n> >\n> > ~\n> >\n> > IMO there should be another sentence before this just to say that a new\n> > parameter is being added:\n> >\n> > e.g.\n> > This patch implements a new subscription parameter called\n> > 'min_apply_delay'.\n> Added.\n>\n>\n> > ======\n> >\n> > doc/src/sgml/config.sgml\n> >\n> > 3.\n> >\n> > + <para>\n> > + For time-delayed logical replication, the apply worker sends a Standby\n> > + Status Update message to the corresponding publisher per the\n> > indicated\n> > + time of this parameter. 
Therefore, if this parameter is longer than\n> > + <literal>wal_sender_timeout</literal> on the publisher, then the\n> > + walsender doesn't get any update message during the delay and\n> > repeatedly\n> > + terminates due to the timeout errors. Hence, make sure this parameter\n> > is\n> > + shorter than the <literal>wal_sender_timeout</literal> of the\n> > publisher.\n> > + If this parameter is set to zero with time-delayed replication, the\n> > + apply worker doesn't send any feedback messages during the\n> > + <literal>min_apply_delay</literal>.\n> > + </para>\n> >\n> >\n> > This paragraph seemed confusing. I think it needs to be reworded to change all\n> > of the \"this parameter\" references because there are at least 3 different\n> > parameters mentioned in this paragraph. e.g. maybe just change them to\n> > explicitly name the parameter you are talking about.\n> >\n> > I also think it needs to mention the ‘min_apply_delay’ subscription parameter\n> > up-front and then refer to it appropriately.\n> >\n> > The end result might be something like I wrote below (this is just my guess ?\n> > probably you can word it better).\n> >\n> > SUGGESTION\n> > For time-delayed logical replication (i.e. when the subscription is created with\n> > parameter min_apply_delay > 0), the apply worker sends a Standby Status\n> > Update message to the publisher with a period of wal_receiver_status_interval .\n> > Make sure to set wal_receiver_status_interval less than the\n> > wal_sender_timeout on the publisher, otherwise, the walsender will repeatedly\n> > terminate due to the timeout errors. If wal_receiver_status_interval is set to zero,\n> > the apply worker doesn't send any feedback messages during the subscriber’s\n> > min_apply_delay period.\n> Applied. 
Also, I added one reference for min_apply_delay parameter\n> at the end of this description.\n>\n>\n> > ======\n> >\n> > doc/src/sgml/ref/create_subscription.sgml\n> >\n> > 4.\n> >\n> > + <para>\n> > + By default, the subscriber applies changes as soon as possible. As\n> > + with the physical replication feature\n> > + (<xref linkend=\"guc-recovery-min-apply-delay\"/>), it can be\n> > useful to\n> > + have a time-delayed logical replica. This parameter lets the user to\n> > + delay the application of changes by a specified amount of\n> > time. If this\n> > + value is specified without units, it is taken as milliseconds. The\n> > + default is zero(no delay).\n> > + </para>\n> >\n> > 4a.\n> > As with the physical replication feature (recovery_min_apply_delay), it can be\n> > useful to have a time-delayed logical replica.\n> >\n> > IMO not sure that the above sentence is necessary. It seems only to be saying\n> > that this parameter can be useful. Why do we need to say that?\n> Removed the sentence.\n>\n>\n> > ~\n> >\n> > 4b.\n> > \"This parameter lets the user to delay\" -> \"This parameter lets the user delay\"\n> > OR\n> > \"This parameter lets the user to delay\" -> \"This parameter allows the user to\n> > delay\"\n> Fixed.\n>\n>\n> > ~\n> >\n> > 4c.\n> > \"If this value is specified without units\" -> \"If the value is specified without\n> > units\"\n> Fixed.\n>\n> > ~\n> >\n> > 4d.\n> > \"zero(no delay).\" -> \"zero (no delay).\"\n> Fixed.\n>\n> > ----\n> >\n> > 5.\n> >\n> > + <para>\n> > + The delay occurs only on WAL records for transaction begins and\n> > after\n> > + the initial table synchronization. It is possible that the\n> > + replication delay between publisher and subscriber exceeds the\n> > value\n> > + of this parameter, in which case no delay is added. Note that the\n> > + delay is calculated between the WAL time stamp as written on\n> > + publisher and the current time on the subscriber. 
Time\n> > spent in logical\n> > + decoding and in transferring the transaction may reduce the\n> > actual wait\n> > + time. If the system clocks on publisher and subscriber are not\n> > + synchronized, this may lead to apply changes earlier than\n> > expected,\n> > + but this is not a major issue because this parameter is\n> > typically much\n> > + larger than the time deviations between servers. Note that if this\n> > + parameter is set to a long delay, the replication will stop if the\n> > + replication slot falls behind the current LSN by more than\n> > + <link\n> > linkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</\n> > literal></link>.\n> > + </para>\n> >\n> > I think the first part can be reworded slightly. See what you think about the\n> > suggestion below.\n> >\n> > SUGGESTION\n> > Any delay occurs only on WAL records for transaction begins after all initial\n> > table synchronization has finished. The delay is calculated between the WAL\n> > timestamp as written on the publisher and the current time on the subscriber.\n> > Any overhead of time spent in logical decoding and in transferring the\n> > transaction may reduce the actual wait time.\n> > It is also possible that the overhead already exceeds the requested\n> > 'min_apply_delay' value, in which case no additional wait is necessary. If the\n> > system clocks...\n> Addressed.\n>\n>\n> > ----\n> >\n> > 6.\n> >\n> > + <para>\n> > + Setting streaming to <literal>parallel</literal> mode and\n> > <literal>min_apply_delay</literal>\n> > + simultaneously is not supported.\n> > + </para>\n> >\n> > SUGGESTION\n> > A non-zero min_apply_delay parameter is not allowed when streaming in\n> > parallel mode.\n> Applied.\n>\n>\n> > ======\n> >\n> > src/backend/commands/subscriptioncmds.c\n> >\n> > 7. 
parse_subscription_options\n> >\n> > @@ -404,6 +445,17 @@ parse_subscription_options(ParseState *pstate, List\n> > *stmt_options,\n> > \"slot_name = NONE\", \"create_slot = false\")));\n> > }\n> > }\n> > +\n> > + /* Test the combination of streaming mode and min_apply_delay */ if\n> > + (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > + opts->min_apply_delay > 0)\n> > + {\n> > + if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n> > ereport(ERROR,\n> > + errcode(ERRCODE_SYNTAX_ERROR), errmsg(\"%s and %s are mutually\n> > + exclusive options\",\n> > + \"min_apply_delay > 0\", \"streaming = parallel\")); }\n> >\n> > SUGGESTION (comment)\n> > The combination of parallel streaming mode and min_apply_delay is not\n> > allowed.\n> Fixed.\n>\n>\n> > ~~~\n> >\n> > 8. AlterSubscription (general)\n> >\n> > I observed during testing there are 3 different errors….\n> >\n> > At subscription CREATE time you can get this error:\n> > ERROR: min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> > options\n> >\n> > If you try to ALTER the min_apply_delay when already streaming = parallel you\n> > can get this error:\n> > ERROR: cannot enable min_apply_delay for subscription in streaming =\n> > parallel mode\n> >\n> > If you try to ALTER the streaming to be parallel if there is already a\n> > min_apply_delay > 0 then you can get this error:\n> > ERROR: cannot enable streaming = parallel mode for subscription with\n> > min_apply_delay\n> Yes. This is because the existing error message styles\n> in AlterSubscription and parse_subscription_options.\n>\n> The former uses \"mutually exclusive\" messages consistently,\n> while the latter does \"cannot enable ...\" ones.\n> > ~\n> >\n> > IMO there is no need to have 3 different error message texts. 
I think all these\n> > cases are explained by just the first text (ERROR:\n> > min_apply_delay > 0 and streaming = parallel are mutually exclusive\n> > options)\n> Then, we followed this kind of formats.\n>\n>\n> > ~~~\n> >\n> > 9. AlterSubscription\n> >\n> > @@ -1098,6 +1152,18 @@ AlterSubscription(ParseState *pstate,\n> > AlterSubscriptionStmt *stmt,\n> >\n> > if (IsSet(opts.specified_opts, SUBOPT_STREAMING))\n> > {\n> > + /*\n> > + * Test the combination of streaming mode and\n> > + * min_apply_delay\n> > + */\n> > + if (opts.streaming == LOGICALREP_STREAM_PARALLEL) if\n> > + ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > opts.min_apply_delay > 0) ||\n> > + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > sub->minapplydelay > 0))\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"cannot enable %s mode for subscription with %s\",\n> > + \"streaming = parallel\", \"min_apply_delay\"));\n> > +\n> >\n> > 9a.\n> > SUGGESTION (comment)\n> > The combination of parallel streaming mode and min_apply_delay is not\n> > allowed.\n> Fixed.\n>\n>\n> > ~\n> >\n> > 9b.\n> > (see AlterSubscription general review comment #8 above) Here you can use the\n> > same comment error message that says min_apply_delay > 0 and streaming =\n> > parallel are mutually exclusive options.\n> As described above, we followed the current style in the existing functions.\n>\n>\n> > ~~~\n> >\n> > 10. 
AlterSubscription\n> >\n> > @@ -1111,6 +1177,25 @@ AlterSubscription(ParseState *pstate,\n> > AlterSubscriptionStmt *stmt,\n> > = true;\n> > }\n> >\n> > + if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)) {\n> > + /*\n> > + * Test the combination of streaming mode and\n> > + * min_apply_delay\n> > + */\n> > + if (opts.min_apply_delay > 0)\n> > + if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming\n> > == LOGICALREP_STREAM_PARALLEL) ||\n> > + (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\n> > LOGICALREP_STREAM_PARALLEL))\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> > + errmsg(\"cannot enable %s for subscription in %s mode\",\n> > + \"min_apply_delay\", \"streaming = parallel\"));\n> > +\n> > + values[Anum_pg_subscription_subminapplydelay - 1] =\n> > + Int64GetDatum(opts.min_apply_delay);\n> > + replaces[Anum_pg_subscription_subminapplydelay - 1] = true; }\n> >\n> > 10a.\n> > SUGGESTION (comment)\n> > The combination of parallel streaming mode and min_apply_delay is not\n> > allowed.\n> Fixed.\n>\n>\n> > ~\n> >\n> > 10b.\n> > (see AlterSubscription general review comment #8 above) Here you can use the\n> > same comment error message that says min_apply_delay > 0 and streaming =\n> > parallel are mutually exclusive options.\n> Same as 9b.\n>\n> > ======\n> >\n> > .../replication/logical/applyparallelworker.c\n> >\n> > 11.\n> >\n> > @@ -704,7 +704,8 @@ pa_process_spooled_messages_if_required(void)\n> > {\n> > apply_spooled_messages(&MyParallelShared->fileset,\n> > MyParallelShared->xid,\n> > - InvalidXLogRecPtr);\n> > + InvalidXLogRecPtr,\n> > + 0);\n> >\n> > IMO this passing of 0 is a bit strange because it is currently acting like a dummy\n> > value since the apply_spooled_messages will never make use of the 'finish_ts'\n> > anyway (since this call is from a parallel apply worker).\n> >\n> > I think a better way to code this might be to pass the 0 (same as you are doing\n> > here) but 
inside the apply_spooled_messages change the code:\n> >\n> > FROM\n> > if (!am_parallel_apply_worker())\n> > maybe_delay_apply(finish_ts);\n> >\n> > TO\n> > if (finish_ts)\n> > maybe_delay_apply(finish_ts);\n> >\n> > That does 2 things.\n> > - It makes the passed-in 0 have some meaning\n> > - It simplifies the apply_spooled_messages code\n> Adopted.\n>\n>\n> > ======\n> >\n> > src/backend/replication/logical/worker.c\n> >\n> > 12.\n> >\n> > @@ -318,6 +318,17 @@ static List *on_commit_wakeup_workers_subids =\n> > NIL; bool in_remote_transaction = false; static XLogRecPtr\n> > remote_final_lsn = InvalidXLogRecPtr;\n> >\n> > +/*\n> > + * In order to avoid walsender's timeout during time-delayed\n> > +replication,\n> > + * it's necessary to keep sending feedback messages during the delay\n> > +from the\n> > + * worker process. Meanwhile, the feature delays the apply before\n> > +starting the\n> > + * transaction and thus we don't write WALs for the suspended changes\n> > +during\n> > + * the wait. Hence, in the case the worker process sends a feedback\n> > +message\n> > + * during the delay, we should not make positions of the flushed and\n> > +apply LSN\n> > + * overwritten by the last received latest LSN. See send_feedback()\n> > for details.\n> > + */\n> > +static XLogRecPtr last_received = InvalidXLogRecPtr;\n> >\n> > 12a.\n> > Suggest a small change to the first sentence of the comment.\n> >\n> > BEFORE\n> > In order to avoid walsender's timeout during time-delayed replication, it's\n> > necessary to keep sending feedback messages during the delay from the\n> > worker process.\n> >\n> > AFTER\n> > In order to avoid walsender timeout for time-delayed replication the worker\n> > process keeps sending feedback messages during the delay period.\n> Fixed.\n>\n>\n> > ~\n> >\n> > 12b.\n> > \"Hence, in the case\" -> \"When\"\n> Fixed.\n>\n>\n> > ~~~\n> >\n> > 13. 
forward declare\n> >\n> > -static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> > requestReply);\n> > +static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> > requestReply,\n> > + bool in_delaying_apply);\n> >\n> > Change the param name:\n> >\n> > \"in_delaying_apply\" -> \"in_delayed_apply” (??)\n> Changed. The initial intention to append the \"in_\"\n> prefix is to make the variable name aligned with\n> some other variables such as \"in_remote_transaction\" and\n> \"in_streamed_transaction\" that mean the current status\n> for the transaction. So, until there is a better name proposed,\n> we can keep it.\n>\n>\n> > ~~~\n> >\n> > 14. maybe_delay_apply\n> >\n> > + /* Nothing to do if no delay set */\n> > + if (MySubscription->minapplydelay <= 0) return;\n> >\n> > IIUC min_apply_delay cannot be < 0 so this condition could simply be:\n> >\n> > if (!MySubscription->minapplydelay)\n> > return;\n> Fixed.\n>\n>\n> > ~~~\n> >\n> > 15. maybe_delay_apply\n> >\n> > + /*\n> > + * The min_apply_delay parameter is ignored until all tablesync workers\n> > + * have reached READY state. If we allow the delay during the catchup\n> > + * phase, once we reach the limit of tablesync workers, it will impose\n> > + a\n> > + * delay for each subsequent worker. It means it will take a long time\n> > + to\n> > + * finish the initial table synchronization.\n> > + */\n> > + if (!AllTablesyncsReady())\n> > + return;\n> >\n> > SUGGESTION (slight rewording)\n> > The min_apply_delay parameter is ignored until all tablesync workers have\n> > reached READY state. This is because if we allowed the delay during the\n> > catchup phase, then once we reached the limit of tablesync workers it would\n> > impose a delay for each subsequent worker. That would cause initial table\n> > synchronization completion to take a long time.\n> Fixed.\n>\n>\n> > ~~~\n> >\n> > 16. 
maybe_delay_apply\n> >\n> > + while (true)\n> > + {\n> > + long diffms;\n> > +\n> > + ResetLatch(MyLatch);\n> > +\n> > + CHECK_FOR_INTERRUPTS();\n> >\n> > IMO there should be some small explanatory comment here at the top of the\n> > while loop.\n> Added.\n>\n>\n> > ~~~\n> >\n> > 17. apply_spooled_messages\n> >\n> > @@ -2024,6 +2141,21 @@ apply_spooled_messages(FileSet *stream_fileset,\n> > TransactionId xid,\n> > int fileno;\n> > off_t offset;\n> >\n> > + /*\n> > + * Should we delay the current transaction?\n> > + *\n> > + * Unlike the regular (non-streamed) cases, the delay is applied in a\n> > + * STREAM COMMIT/STREAM PREPARE message for streamed transactions.\n> > The\n> > + * STREAM START message does not contain a commit/prepare time (it will\n> > + be\n> > + * available when the in-progress transaction finishes). Hence, it's\n> > + not\n> > + * appropriate to apply a delay at that time.\n> > + *\n> > + * It's not allowed to execute time-delayed replication with parallel\n> > + * apply feature.\n> > + */\n> > + if (!am_parallel_apply_worker())\n> > + maybe_delay_apply(finish_ts);\n> >\n> > That whole comment part \"Unlike the regular (non-streamed) cases\"\n> > seems misplaced here. Perhaps this part of the comment is better put into\n> > the function header where the meaning of 'finish_ts' is explained?\n> Moved it to the header comment for maybe_delay_apply.\n>\n>\n> > ~~~\n> >\n> > 18. 
apply_spooled_messages\n> >\n> > + * It's not allowed to execute time-delayed replication with parallel\n> > + * apply feature.\n> > + */\n> > + if (!am_parallel_apply_worker())\n> > + maybe_delay_apply(finish_ts);\n> >\n> > As was mentioned in comment #11 above this code could be changed like\n> >\n> > if (finish_ts)\n> > maybe_delay_apply(finish_ts);\n> > then you don't even need to make mention of \"parallel apply\" at all here.\n> >\n> > OTOH if you want to still have the parallel apply comment then maybe reword it\n> > like this:\n> > \"It is not allowed to combine time-delayed replication with the parallel apply\n> > feature.\"\n> Changed and now I don't mention the parallel apply feature.\n>\n> > ~~~\n> >\n> > 19. apply_spooled_messages\n> >\n> > If you chose not to do my suggestion from comment #11, then there are\n> > 2 identical conditions (!am_parallel_apply_worker()); In this case, I was\n> > wondering if it would be better to refactor to use a single condition instead.\n> I applied #11 comment. Now, the conditions are not identical.\n>\n> > ~~~\n> >\n> > 20. send_feedback\n> > (same as comment #13)\n> >\n> > Maybe change the new param name to “in_delayed_apply”?\n> Changed.\n>\n>\n> > ~~~\n> >\n> > 21.\n> >\n> > @@ -3737,8 +3869,15 @@ send_feedback(XLogRecPtr recvpos, bool force,\n> > bool requestReply)\n> > /*\n> > * No outstanding transactions to flush, we can report the latest received\n> > * position. This is important for synchronous replication.\n> > + *\n> > + * During the delay of time-delayed replication, do not tell the\n> > + publisher\n> > + * that the received latest LSN is already applied and flushed at this\n> > + * stage, since we don't apply the transaction yet. If we do so, it\n> > + leads\n> > + * to a wrong assumption of logical replication progress on the\n> > + publisher\n> > + * side. 
Here, we just send a feedback message to avoid publisher's\n> > + * timeout during the delay.\n> > */\n> >\n> > Minor rewording of the comment\n> >\n> > SUGGESTION\n> > If the subscriber side apply is delayed (because of time-delayed\n> > replication) then do not tell the publisher that the received latest LSN is already\n> > applied and flushed, otherwise, it leads to the publisher side making a wrong\n> > assumption of logical replication progress. Instead, we just send a feedback\n> > message to avoid a publisher timeout during the delay.\n> Adopted.\n>\n>\n> > ======\n> >\n> >\n> > src/bin/pg_dump/pg_dump.c\n> >\n> > 22.\n> >\n> > @@ -4546,9 +4547,14 @@ getSubscriptions(Archive *fout)\n> > LOGICALREP_TWOPHASE_STATE_DISABLED);\n> >\n> > if (fout->remoteVersion >= 160000)\n> > - appendPQExpBufferStr(query, \" s.suborigin\\n\");\n> > + appendPQExpBufferStr(query,\n> > + \" s.suborigin,\\n\"\n> > + \" s.subminapplydelay\\n\");\n> > else\n> > - appendPQExpBuffer(query, \" '%s' AS suborigin\\n\",\n> > LOGICALREP_ORIGIN_ANY);\n> > + {\n> > + appendPQExpBuffer(query, \" '%s' AS suborigin,\\n\",\n> > + LOGICALREP_ORIGIN_ANY); appendPQExpBufferStr(query, \" 0 AS\n> > + subminapplydelay\\n\"); }\n> >\n> > Can’t those appends in the else part can be combined to a single\n> > appendPQExpBuffer\n> >\n> > appendPQExpBuffer(query,\n> > \" '%s' AS suborigin,\\n\"\n> > \" 0 AS subminapplydelay\\n\"\n> > LOGICALREP_ORIGIN_ANY);\n> Adopted.\n>\n>\n> > ======\n> >\n> > src/include/catalog/pg_subscription.h\n> >\n> > 23.\n> >\n> > @@ -70,6 +70,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> > BKI_SHARED_RELATION BKI_ROW\n> > XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n> > * skipped */\n> >\n> > + int64 subminapplydelay; /* Replication apply delay */\n> > +\n> > NameData subname; /* Name of the subscription */\n> >\n> > Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> >\n> > SUGGESTION (for comment)\n> > Replication apply delay 
(ms)\n> Fixed.\n>\n> ~~\n>\n> 24.\n>\n> @@ -120,6 +122,7 @@ typedef struct Subscription\n> > * in */\n> > XLogRecPtr skiplsn; /* All changes finished at this LSN are\n> > * skipped */\n> > + int64 minapplydelay; /* Replication apply delay */\n> >\n> > SUGGESTION (for comment)\n> > Replication apply delay (ms)\n> Fixed.\n>\n>\n> Kindly have a look at the latest v17 patch in [1].\n>\n>\n> [1] - https://www.postgresql.org/message-id/TYCPR01MB8373F5162C7A0E6224670CF0EDC49%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n>\n> Best Regards,\n> Takamichi Osumi\n>\n\n1)\nTried different variations of altering 'min_apply_delay'. All passed\nexcept the one below:\n\npostgres=# alter subscription mysubnew set (min_apply_delay = '10.9min 1ms');\nALTER SUBSCRIPTION\npostgres=# alter subscription mysubnew set (min_apply_delay = '10.9min 2s 1ms');\nALTER SUBSCRIPTION\n-- very similar to the above, but fails:\npostgres=# alter subscription mysubnew set (min_apply_delay = '10.9s 1ms');\nERROR: invalid input syntax for type interval: \"10.9s 1ms\"\n\n\n2)\nLogging:\n2023-01-19 17:33:16.202 IST [404797] DEBUG: logical replication apply\ndelay: 19979 ms\n2023-01-19 17:33:26.212 IST [404797] DEBUG: logical replication apply\ndelay: 9969 ms\n2023-01-19 17:34:25.730 IST [404962] DEBUG: logical replication apply\ndelay: 179988 ms  --> previous wait over, started for next txn\n2023-01-19 17:34:35.737 IST [404962] DEBUG: logical replication apply\ndelay: 169981 ms\n2023-01-19 17:34:45.746 IST [404962] DEBUG: logical replication apply\ndelay: 159972 ms\n\nIs there a way to distinguish between these logs? Maybe by dumping the\nxids along with them?\n\nthanks\nShveta\n\n\n",
"msg_date": "Fri, 20 Jan 2023 09:16:47 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Horiguchi-san and Amit-san\n\n\nOn Wednesday, November 9, 2022 3:41 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> Using interval is not standard as this kind of parameters but it seems\n> convenient. On the other hand, it's not great that the unit month introduces\n> some subtle ambiguity. This patch translates a month to 30 days but I'm not\n> sure it's the right thing to do. Perhaps we shouldn't allow the units upper than\n> days.\nIn the past discussion, we talked about the merits to utilize the interval type.\nOn the other hand, now we are facing some incompatibility issues of parsing\nbetween this time-delayed feature and physical replication's recovery_min_apply_delay.\n\nFor instance, the interval type can accept '600 m s h', '1d 10min' and '1m',\nbut the recovery_min_apply_delay makes the server failed to start by all of those.\n\nTherefore, this would confuse users and I'm going to make the feature's input\ncompatible with recovery_min_apply_delay in the next version.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Fri, 20 Jan 2023 06:13:08 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi Osumi-san, here are my review comments for the latest patch v17-0001.\n\n======\nCommit Message\n\n1.\nProhibit the combination of this feature and parallel streaming mode.\n\nSUGGESTION (using the same wording as in the code comments)\nThe combination of parallel streaming mode and min_apply_delay is not allowed.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n2.\n+ <para>\n+ By default, the subscriber applies changes as soon as possible. This\n+ parameter allows the user to delay the application of changes by a\n+ specified amount of time. If the value is specified without units, it\n+ is taken as milliseconds. The default is zero (no delay).\n+ </para>\n\nLooking at this again, it seemed a bit strange to repeat \"specified\"\ntwice in 2 sentences. Maybe change one of them.\n\nI’ve also suggested using the word \"interval\" because I don’t think\ndocs yet mentioned anywhere (except in the example) that using\nintervals is possible.\n\nSUGGESTION (for the 2nd sentence)\nThis parameter allows the user to delay the application of changes by\na given time interval.\n\n~~~\n\n3.\n+ <para>\n+ Any delay occurs only on WAL records for transaction begins after all\n+ initial table synchronization has finished. The delay is calculated\n+ between the WAL timestamp as written on the publisher and the current\n+ time on the subscriber. Any overhead of time spent in\nlogical decoding\n+ and in transferring the transaction may reduce the actual wait time.\n+ It is also possible that the overhead already execeeds the requested\n+ <literal>min_apply_delay</literal> value, in which case no additional\n+ wait is necessary. If the system clocks on publisher and subscriber\n+ are not synchronized, this may lead to apply changes earlier than\n+ expected, but this is not a major issue because this parameter is\n+ typically much larger than the time deviations between servers. 
Note\n+ that if this parameter is set to a long delay, the replication will\n+ stop if the replication slot falls behind the current LSN\nby more than\n+ <link\nlinkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</literal></link>.\n+ </para>\n\n3a.\nTypo \"execeeds\" (I think Vignesh reported this already)\n\n~\n\n3b.\nSUGGESTION (for the 2nd sentence)\nBEFORE\nThe delay is calculated between the WAL timestamp...\nAFTER\nThe delay is calculated as the difference between the WAL timestamp...\n\n~~~\n\n4.\n+ <warning>\n+ <para>\n+ Delaying the replication can mean there is a much longer\ntime between making\n+ a change on the publisher, and that change being\ncommitted on the subscriber.\n+ v\n+ See <xref linkend=\"guc-synchronous-commit\"/>.\n+ </para>\n+ </warning>\n\nIMO maybe there is a better way to express the 2nd sentence:\n\nBEFORE\nThis can have a big impact on synchronous replication.\nAFTER\nThis can impact the performance of synchronous replication.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n5. 
parse_subscription_options\n\n@@ -324,6 +328,43 @@ parse_subscription_options(ParseState *pstate,\nList *stmt_options,\n opts->specified_opts |= SUBOPT_LSN;\n opts->lsn = lsn;\n }\n+ else if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ strcmp(defel->defname, \"min_apply_delay\") == 0)\n+ {\n+ char *val,\n+ *tmp;\n+ Interval *interval;\n+ int64 ms;\n\nIMO 'delay_ms' (or similar) would be a friendlier variable name than just 'ms'\n\n~~~\n\n6.\n@@ -404,6 +445,20 @@ parse_subscription_options(ParseState *pstate,\nList *stmt_options,\n \"slot_name = NONE\", \"create_slot = false\")));\n }\n }\n+\n+ /*\n+ * The combination of parallel streaming mode and min_apply_delay is not\n+ * allowed.\n+ */\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0)\n+ {\n+ if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"min_apply_delay > 0\", \"streaming = parallel\"));\n+ }\n\nThis could be expressed as a single condition using &&, maybe also\nwith the brackets eliminated. (Unless you feel the current code is\nmore readable)\n\n~~~\n\n7.\n\n+ if (opts.min_apply_delay > 0)\n+ if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming\n== LOGICALREP_STREAM_PARALLEL) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\nLOGICALREP_STREAM_PARALLEL))\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot enable %s for subscription in %s mode\",\n+ \"min_apply_delay\", \"streaming = parallel\"));\n\nThese nested ifs could instead be a single \"if\" with && condition.\n(Unless you feel the current code is more readable)\n\n\n======\nsrc/backend/replication/logical/worker.c\n\n8. maybe_delay_apply\n\n+ * Hence, it's not appropriate to apply a delay at the time.\n+ */\n+static void\n+maybe_delay_apply(TimestampTz finish_ts)\n\nThat last sentence \"Hence,... 
delay at the time\" does not sound\ncorrect. Is there a typo or missing words here?\n\nMaybe it meant to say \"... at the STREAM START time.\"?\n\n~~~\n\n9.\n+ /* This might change wal_receiver_status_interval */\n+ if (ConfigReloadPending)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ }\n\nI was unsure why did you make a special mention of\n'wal_receiver_status_interval' here. I mean, Aren't there also other\nGUCs that might change and affect something here so was there some\nspecial reason only this one was mentioned?\n\n======\nsrc/test/subscription/t/032_apply_delay.pl\n\n10.\n+\n+# Compare inserted time on the publisher with applied time on the subscriber to\n+# confirm the latter is applied after expected time.\n+sub check_apply_delay_time\n\nMaybe the comment could also mention that the time is automatically\nstored in the table column 'c'.\n\n~~~\n\n11.\n+# Confirm the suspended record doesn't get applied expectedly by the ALTER\n+# DISABLE command.\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(a) FROM test_tab WHERE a = 0;\");\n+is($result, qq(0), \"check if the delayed transaction doesn't get\napplied expectedly\");\n\nThe use of \"doesn't get applied expectedly\" (in 2 places here) seemed\nstrange. Maybe it's better to say like\n\nSUGGESTION\n# Confirm disabling the subscription by ALTER DISABLE did not cause\nthe delayed transaction to be applied.\n$result = $node_subscriber->safe_psql('postgres',\n\"SELECT count(a) FROM test_tab WHERE a = 0;\");\nis($result, qq(0), \"check the delayed transaction was not applied\");\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 Jan 2023 17:55:39 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 2:47 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n...\n> 2)\n> Logging:\n> 2023-01-19 17:33:16.202 IST [404797] DEBUG: logical replication apply\n> delay: 19979 ms\n> 2023-01-19 17:33:26.212 IST [404797] DEBUG: logical replication apply\n> delay: 9969 ms\n> 2023-01-19 17:34:25.730 IST [404962] DEBUG: logical replication apply\n> delay: 179988 ms-->previous wait over, started for next txn\n> 2023-01-19 17:34:35.737 IST [404962] DEBUG: logical replication apply\n> delay: 169981 ms\n> 2023-01-19 17:34:45.746 IST [404962] DEBUG: logical replication apply\n> delay: 159972 ms\n>\n> Is there a way to distinguish between these logs? Maybe dumping xids along-with?\n>\n\n+1\n\nAlso, I was thinking of some other logging enhancements\n\na) the message should say that this is the *remaining* time to left to wait.\n\nb) it might be convenient to know from the log what was the original\nmin_apply_delay value in the 1st place.\n\nFor example, the logs might look something like this:\n\nDEBUG: time-delayed replication for txid 1234, min_apply_delay =\n160000 ms. Remaining wait time: 159972 ms\nDEBUG: time-delayed replication for txid 1234, min_apply_delay =\n160000 ms. Remaining wait time: 142828 ms\nDEBUG: time-delayed replication for txid 1234, min_apply_delay =\n160000 ms. Remaining wait time: 129994 ms\nDEBUG: time-delayed replication for txid 1234, min_apply_delay =\n160000 ms. Remaining wait time: 110001 ms\n...\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 20 Jan 2023 18:38:08 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 1:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> a) the message should say that this is the *remaining* time to left to wait.\n>\n> b) it might be convenient to know from the log what was the original\n> min_apply_delay value in the 1st place.\n>\n> For example, the logs might look something like this:\n>\n> DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> 160000 ms. Remaining wait time: 159972 ms\n> DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> 160000 ms. Remaining wait time: 142828 ms\n> DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> 160000 ms. Remaining wait time: 129994 ms\n> DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> 160000 ms. Remaining wait time: 110001 ms\n> ...\n>\n\n+1\nThis will also help when min_apply_delay is set to a new value in\nbetween the current wait. Lets say, I started with min_apply_delay=5\nmin, when the worker was half way through this, I changed\nmin_apply_delay to 3 min or say 10min, I see the impact of that change\ni.e. new wait-time is adjusted, but log becomes confusing. So, please\nkeep this scenario as well in mind while improving logging.\n\nthanks\nShveta\n\n\n",
"msg_date": "Fri, 20 Jan 2023 14:23:45 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 2:23 PM shveta malik <shveta.malik@gmail.com> wrote:\n>\n> On Fri, Jan 20, 2023 at 1:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> > a) the message should say that this is the *remaining* time to left to wait.\n> >\n> > b) it might be convenient to know from the log what was the original\n> > min_apply_delay value in the 1st place.\n> >\n> > For example, the logs might look something like this:\n> >\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> > 160000 ms. Remaining wait time: 159972 ms\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> > 160000 ms. Remaining wait time: 142828 ms\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> > 160000 ms. Remaining wait time: 129994 ms\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\n> > 160000 ms. Remaining wait time: 110001 ms\n> > ...\n> >\n>\n> +1\n> This will also help when min_apply_delay is set to a new value in\n> between the current wait. Lets say, I started with min_apply_delay=5\n> min, when the worker was half way through this, I changed\n> min_apply_delay to 3 min or say 10min, I see the impact of that change\n> i.e. new wait-time is adjusted, but log becomes confusing. So, please\n> keep this scenario as well in mind while improving logging.\n>\n\n\nwhen we send-feedback during apply-delay after every\nwal_receiver_status_interval , the log comes as:\n023-01-19 17:12:56.000 IST [404795] DEBUG: sending feedback (force 1)\nto recv 0/1570840, write 0/1570840, flush 0/1570840\n\nShall we have some info here to indicate that it is sent while waiting\nfor apply_delay to distinguish it from other such send-feedback logs?\nIt will\nmake apply_delay flow clear in logs.\n\nthanks\nShveta\n\n\n",
"msg_date": "Fri, 20 Jan 2023 14:43:08 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Friday, January 20, 2023 3:56 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Hi Osumi-san, here are my review comments for the latest patch v17-0001.\r\nThanks for your review !\r\n\r\n\r\n> ======\r\n> Commit Message\r\n> \r\n> 1.\r\n> Prohibit the combination of this feature and parallel streaming mode.\r\n> \r\n> SUGGESTION (using the same wording as in the code comments) The\r\n> combination of parallel streaming mode and min_apply_delay is not allowed.\r\nOkay. Fixed.\r\n\r\n\r\n> ======\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 2.\r\n> + <para>\r\n> + By default, the subscriber applies changes as soon as possible.\r\n> This\r\n> + parameter allows the user to delay the application of changes by a\r\n> + specified amount of time. If the value is specified without units, it\r\n> + is taken as milliseconds. The default is zero (no delay).\r\n> + </para>\r\n> \r\n> Looking at this again, it seemed a bit strange to repeat \"specified\"\r\n> twice in 2 sentences. Maybe change one of them.\r\n> \r\n> I’ve also suggested using the word \"interval\" because I don’t think docs yet\r\n> mentioned anywhere (except in the example) that using intervals is possible.\r\n> \r\n> SUGGESTION (for the 2nd sentence)\r\n> This parameter allows the user to delay the application of changes by a given\r\n> time interval.\r\nAdopted.\r\n\r\n\r\n> ~~~\r\n> \r\n> 3.\r\n> + <para>\r\n> + Any delay occurs only on WAL records for transaction begins after\r\n> all\r\n> + initial table synchronization has finished. The delay is calculated\r\n> + between the WAL timestamp as written on the publisher and the\r\n> current\r\n> + time on the subscriber. 
Any overhead of time spent in\r\n> logical decoding\r\n> + and in transferring the transaction may reduce the actual wait time.\r\n> + It is also possible that the overhead already execeeds the\r\n> requested\r\n> + <literal>min_apply_delay</literal> value, in which case no\r\n> additional\r\n> + wait is necessary. If the system clocks on publisher and subscriber\r\n> + are not synchronized, this may lead to apply changes earlier than\r\n> + expected, but this is not a major issue because this parameter is\r\n> + typically much larger than the time deviations between servers.\r\n> Note\r\n> + that if this parameter is set to a long delay, the replication will\r\n> + stop if the replication slot falls behind the current LSN\r\n> by more than\r\n> + <link\r\n> linkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</\r\n> literal></link>.\r\n> + </para>\r\n> \r\n> 3a.\r\n> Typo \"execeeds\" (I think Vignesh reported this already)\r\nFixed.\r\n\r\n\r\n> ~\r\n> \r\n> 3b.\r\n> SUGGESTION (for the 2nd sentence)\r\n> BEFORE\r\n> The delay is calculated between the WAL timestamp...\r\n> AFTER\r\n> The delay is calculated as the difference between the WAL timestamp...\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 4.\r\n> + <warning>\r\n> + <para>\r\n> + Delaying the replication can mean there is a much longer\r\n> time between making\r\n> + a change on the publisher, and that change being\r\n> committed on the subscriber.\r\n> + v\r\n> + See <xref linkend=\"guc-synchronous-commit\"/>.\r\n> + </para>\r\n> + </warning>\r\n> \r\n> IMO maybe there is a better way to express the 2nd sentence:\r\n> \r\n> BEFORE\r\n> This can have a big impact on synchronous replication.\r\n> AFTER\r\n> This can impact the performance of synchronous replication.\r\nFixed.\r\n\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 5. 
parse_subscription_options\r\n> \r\n> @@ -324,6 +328,43 @@ parse_subscription_options(ParseState *pstate, List\r\n> *stmt_options,\r\n> opts->specified_opts |= SUBOPT_LSN;\r\n> opts->lsn = lsn;\r\n> }\r\n> + else if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> + strcmp(defel->defname, \"min_apply_delay\") == 0) {\r\n> + char *val,\r\n> + *tmp;\r\n> + Interval *interval;\r\n> + int64 ms;\r\n> \r\n> IMO 'delay_ms' (or similar) would be a friendlier variable name than just 'ms'\r\nThe variable name has been changed which is more clear to the feature.\r\n\r\n\r\n> ~~~\r\n> \r\n> 6.\r\n> @@ -404,6 +445,20 @@ parse_subscription_options(ParseState *pstate, List\r\n> *stmt_options,\r\n> \"slot_name = NONE\", \"create_slot = false\")));\r\n> }\r\n> }\r\n> +\r\n> + /*\r\n> + * The combination of parallel streaming mode and min_apply_delay is\r\n> + not\r\n> + * allowed.\r\n> + */\r\n> + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> + opts->min_apply_delay > 0)\r\n> + {\r\n> + if (opts->streaming == LOGICALREP_STREAM_PARALLEL)\r\n> ereport(ERROR,\r\n> + errcode(ERRCODE_SYNTAX_ERROR), errmsg(\"%s and %s are mutually\r\n> + exclusive options\",\r\n> + \"min_apply_delay > 0\", \"streaming = parallel\")); }\r\n> \r\n> This could be expressed as a single condition using &&, maybe also with the\r\n> brackets eliminated. (Unless you feel the current code is more readable)\r\nThe current style is intentional. 
We feel the code is more readable.\r\n\r\n\r\n> ~~~\r\n> \r\n> 7.\r\n> \r\n> + if (opts.min_apply_delay > 0)\r\n> + if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming\r\n> == LOGICALREP_STREAM_PARALLEL) ||\r\n> + (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\r\n> LOGICALREP_STREAM_PARALLEL))\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot enable %s for subscription in %s mode\",\r\n> + \"min_apply_delay\", \"streaming = parallel\"));\r\n> \r\n> These nested ifs could instead be a single \"if\" with && condition.\r\n> (Unless you feel the current code is more readable)\r\nSame as #6.\r\n\r\n\r\n> ======\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 8. maybe_delay_apply\r\n> \r\n> + * Hence, it's not appropriate to apply a delay at the time.\r\n> + */\r\n> +static void\r\n> +maybe_delay_apply(TimestampTz finish_ts)\r\n> \r\n> That last sentence \"Hence,... delay at the time\" does not sound correct. Is there\r\n> a typo or missing words here?\r\n> \r\n> Maybe it meant to say \"... at the STREAM START time.\"?\r\nYes. Fixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 9.\r\n> + /* This might change wal_receiver_status_interval */ if\r\n> + (ConfigReloadPending) { ConfigReloadPending = false;\r\n> + ProcessConfigFile(PGC_SIGHUP); }\r\n> \r\n> I was unsure why did you make a special mention of\r\n> 'wal_receiver_status_interval' here. 
I mean, Aren't there also other GUCs that\r\n> might change and affect something here so was there some special reason only\r\n> this one was mentioned?\r\nThis should be similar to the recoveryApplyDelay for physical replication.\r\nIt mentions the GUC used in the same function.\r\n\r\n\r\n> ======\r\n> src/test/subscription/t/032_apply_delay.pl\r\n> \r\n> 10.\r\n> +\r\n> +# Compare inserted time on the publisher with applied time on the\r\n> +subscriber to # confirm the latter is applied after expected time.\r\n> +sub check_apply_delay_time\r\n> \r\n> Maybe the comment could also mention that the time is automatically stored in\r\n> the table column 'c'.\r\nAdded.\r\n\r\n\r\n> ~~~\r\n> \r\n> 11.\r\n> +# Confirm the suspended record doesn't get applied expectedly by the\r\n> +ALTER # DISABLE command.\r\n> +$result = $node_subscriber->safe_psql('postgres',\r\n> + \"SELECT count(a) FROM test_tab WHERE a = 0;\"); is($result, qq(0),\r\n> +\"check if the delayed transaction doesn't get\r\n> applied expectedly\");\r\n> \r\n> The use of \"doesn't get applied expectedly\" (in 2 places here) seemed strange.\r\n> Maybe it's better to say like\r\n> \r\n> SUGGESTION\r\n> # Confirm disabling the subscription by ALTER DISABLE did not cause the\r\n> delayed transaction to be applied.\r\n> $result = $node_subscriber->safe_psql('postgres',\r\n> \"SELECT count(a) FROM test_tab WHERE a = 0;\"); is($result, qq(0), \"check\r\n> the delayed transaction was not applied\");\r\nFixed.\r\n\r\n\r\nKindly have a look at the patch v18.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Fri, 20 Jan 2023 18:36:29 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Friday, January 20, 2023 6:13 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> On Fri, Jan 20, 2023 at 2:23 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jan 20, 2023 at 1:08 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > > a) the message should say that this is the *remaining* time to left to wait.\r\n> > >\r\n> > > b) it might be convenient to know from the log what was the original\r\n> > > min_apply_delay value in the 1st place.\r\n> > >\r\n> > > For example, the logs might look something like this:\r\n> > >\r\n> > > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > > 160000 ms. Remaining wait time: 159972 ms\r\n> > > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > > 160000 ms. Remaining wait time: 142828 ms\r\n> > > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > > 160000 ms. Remaining wait time: 129994 ms\r\n> > > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > > 160000 ms. Remaining wait time: 110001 ms ...\r\n> > >\r\n> >\r\n> > +1\r\n> > This will also help when min_apply_delay is set to a new value in\r\n> > between the current wait. Lets say, I started with min_apply_delay=5\r\n> > min, when the worker was half way through this, I changed\r\n> > min_apply_delay to 3 min or say 10min, I see the impact of that change\r\n> > i.e. new wait-time is adjusted, but log becomes confusing. 
So, please\r\n> > keep this scenario as well in mind while improving logging.\r\n> >\r\n> \r\n> \r\n> when we send-feedback during apply-delay after every\r\n> wal_receiver_status_interval , the log comes as:\r\n> 023-01-19 17:12:56.000 IST [404795] DEBUG: sending feedback (force 1) to\r\n> recv 0/1570840, write 0/1570840, flush 0/1570840\r\n> \r\n> Shall we have some info here to indicate that it is sent while waiting for\r\n> apply_delay to distinguish it from other such send-feedback logs?\r\n> It will\r\n> make apply_delay flow clear in logs.\r\nThis additional tip of log information has been added in the latest v18.\r\nKindly have a look at it in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373BED9E390C4839AF56685EDC59%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 20 Jan 2023 18:41:38 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Friday, January 20, 2023 5:54 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> On Fri, Jan 20, 2023 at 1:08 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> \r\n> > a) the message should say that this is the *remaining* time to left to wait.\r\n> >\r\n> > b) it might be convenient to know from the log what was the original\r\n> > min_apply_delay value in the 1st place.\r\n> >\r\n> > For example, the logs might look something like this:\r\n> >\r\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > 160000 ms. Remaining wait time: 159972 ms\r\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > 160000 ms. Remaining wait time: 142828 ms\r\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > 160000 ms. Remaining wait time: 129994 ms\r\n> > DEBUG: time-delayed replication for txid 1234, min_apply_delay =\r\n> > 160000 ms. Remaining wait time: 110001 ms ...\r\n> >\r\n> \r\n> +1\r\n> This will also help when min_apply_delay is set to a new value in between the\r\n> current wait. Lets say, I started with min_apply_delay=5 min, when the worker\r\n> was half way through this, I changed min_apply_delay to 3 min or say 10min, I\r\n> see the impact of that change i.e. new wait-time is adjusted, but log becomes\r\n> confusing. So, please keep this scenario as well in mind while improving\r\n> logging.\r\nYes, now the change of min_apply_delay value can be detected\r\nsince I followed the format provided above. So, this scenario is also covered.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 20 Jan 2023 18:46:53 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Friday, January 20, 2023 12:47 PM shveta malik <shveta.malik@gmail.com> wrote:\r\n> 1)\r\n> Tried different variations of altering 'min_apply_delay'. All passed except one\r\n> below:\r\n> \r\n> postgres=# alter subscription mysubnew set (min_apply_delay = '10.9min\r\n> 1ms'); ALTER SUBSCRIPTION postgres=# alter subscription mysubnew set\r\n> (min_apply_delay = '10.9min 2s 1ms'); ALTER SUBSCRIPTION --very similar to\r\n> above but fails, postgres=# alter subscription mysubnew set\r\n> (min_apply_delay = '10.9s 1ms');\r\n> ERROR: invalid input syntax for type interval: \"10.9s 1ms\"\r\nFYI, this was because the interval type couldn't accept this format.\r\nBut now we changed the input format from interval to integer aligned\r\nwith recovery_min_apply_delay. Thus, we don't face this issue now.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 20 Jan 2023 18:50:39 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, \r\n\r\n\r\nOn Thursday, January 19, 2023 10:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Thu, Jan 19, 2023 at 12:06 PM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Kindly have a look at the updated patch v17.\r\n> >\r\n> \r\n> Can we try to optimize the test time for this test? On my machine, it is the\r\n> second highest time-consuming test in src/test/subscription. It seems you are\r\n> waiting twice for apply_delay and both are for streaming cases by varying the\r\n> number of changes. I think it should be just once and that too for the\r\n> non-streaming case. I think it would be good to test streaming code path\r\n> interaction but not sure if it is important enough to have two test cases for\r\n> apply_delay.\r\nThe first insert test is for non-streaming case and we need both cases\r\nfor coverage. Regarding the time of test, conducted some optimization\r\nsuch as turning off the initial table sync, shortening the time of wait, and so on.\r\n\r\n\r\n> \r\n> One minor comment that I observed while going through the patch.\r\n> + /*\r\n> + * The combination of parallel streaming mode and min_apply_delay is\r\n> + not\r\n> + * allowed.\r\n> + */\r\n> + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> + opts->min_apply_delay > 0)\r\n> \r\n> I think it would be good if you can specify the reason for not allowing this\r\n> combination in the comments.\r\nAdded.\r\n\r\n\r\nPlease have a look at the latest v18 patch in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373BED9E390C4839AF56685EDC59%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 20 Jan 2023 18:57:37 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Thursday, January 19, 2023 7:55 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Thu, 19 Jan 2023 at 12:06, Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > Updated the comment and the function call.\r\n> >\r\n> > Kindly have a look at the updated patch v17.\r\n> \r\n> Thanks for the updated patch, few comments:\r\n> 1) min_apply_delay was accepting values like '600 m s h', I was not sure if we\r\n> should allow this:\r\n> alter subscription sub1 set (min_apply_delay = ' 600 m s h');\r\n> \r\n> + /*\r\n> + * If no unit was specified, then explicitly\r\n> add 'ms' otherwise\r\n> + * the interval_in function would assume 'seconds'.\r\n> + */\r\n> + if (strspn(tmp, \"-0123456789 \") == strlen(tmp))\r\n> + val = psprintf(\"%sms\", tmp);\r\n> + else\r\n> + val = tmp;\r\n> +\r\n> + interval =\r\n> DatumGetIntervalP(DirectFunctionCall3(interval_in,\r\n> +\r\n> \r\n> CStringGetDatum(val),\r\n> +\r\n> \r\n> ObjectIdGetDatum(InvalidOid),\r\n> +\r\n> Int32GetDatum(-1)));\r\n> \r\nFYI, the input can be accepted by the interval type.\r\nNow we changed the direction of the type from interval to integer\r\nbut plus some unit can be added like recovery_min_apply_delay.\r\nPlease check.\r\n\r\n\r\n> 3) There is one check at parse_subscription_options and another check in\r\n> AlterSubscription, this looks like a redundant check in case of alter\r\n> subscription, can we try to merge and keep in one place:\r\n> /*\r\n> * The combination of parallel streaming mode and min_apply_delay is not\r\n> * allowed.\r\n> */\r\n> if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> opts->min_apply_delay > 0)\r\n> {\r\n> if (opts->streaming == LOGICALREP_STREAM_PARALLEL) ereport(ERROR,\r\n> errcode(ERRCODE_SYNTAX_ERROR), errmsg(\"%s and %s are mutually\r\n> exclusive options\",\r\n> \"min_apply_delay > 0\", \"streaming = parallel\")); }\r\n> \r\n> if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)) {\r\n> /*\r\n> * The 
combination of parallel streaming mode and\r\n> * min_apply_delay is not allowed.\r\n> */\r\n> if (opts.min_apply_delay > 0)\r\n> if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming ==\r\n> LOGICALREP_STREAM_PARALLEL) ||\r\n> (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\r\n> LOGICALREP_STREAM_PARALLEL))\r\n> ereport(ERROR,\r\n> errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> errmsg(\"cannot enable %s for subscription in %s mode\",\r\n> \"min_apply_delay\", \"streaming = parallel\"));\r\n> \r\n> values[Anum_pg_subscription_subminapplydelay - 1] =\r\n> Int64GetDatum(opts.min_apply_delay);\r\n> replaces[Anum_pg_subscription_subminapplydelay - 1] = true; }\r\nWe can't. For create subscription, we need to check the patch\r\nfrom parse_subscription_options, while for alter subscription,\r\nwe need to refer the current MySubscription value for those tests\r\nin AlterSubscription.\r\n\r\n \r\n> 4) typo \"execeeds\" should be \"exceeds\"\r\n> \r\n> + time on the subscriber. Any overhead of time spent in\r\n> logical decoding\r\n> + and in transferring the transaction may reduce the actual wait time.\r\n> + It is also possible that the overhead already execeeds the\r\n> requested\r\n> + <literal>min_apply_delay</literal> value, in which case no\r\n> additional\r\n> + wait is necessary. If the system clocks on publisher and subscriber\r\n> + are not synchronized, this may lead to apply changes earlier\r\n> + than\r\nFixed.\r\n\r\nKindly have a look at the v18 patch in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373BED9E390C4839AF56685EDC59%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 20 Jan 2023 19:07:30 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Saturday, January 21, 2023 3:36 AM I wrote:\r\n> Kindly have a look at the patch v18.\r\nI've conducted some refactoring for v18.\r\nNow the latest patch should be tidier and\r\nthe comments would be clearer and more aligned as a whole.\r\n\r\nAttached the updated patch v19.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Sun, 22 Jan 2023 12:42:00 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for v19-0001.\n\n======\nCommit message\n\n1.\nThe combination of parallel streaming mode and min_apply_delay is not\nallowed. The subscriber in the parallel streaming mode applies each\nstream on arrival without the time of commit/prepare. So, the\nsubscriber needs to depend on the arrival time of the stream in this\ncase, if we apply the time-delayed feature for such transactions. Then\nthere is a possibility where some unnecessary delay will be added on\nthe subscriber by network communication break between nodes or other\nheavy work load on the publisher. On the other hand, applying the delay\nat the end of transaction with parallel apply also can cause issues of\nused resource bloat and locks kept in open for a long time. Thus, those\nfeatures can't work together.\n\n~\n\nI think the above is just cut/paste from a code comment within\nsubscriptioncmds.c. See review comments #5 below -- so if the code is\nchanged then this commit message should also change to match it.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n2.\n+ <varlistentry>\n+ <term><literal>min_apply_delay</literal> (<type>integer</type>)</term>\n+ <listitem>\n+ <para>\n+ By default, the subscriber applies changes as soon as possible. This\n+ parameter allows the user to delay the application of changes by a\n+ given time interval. If the value is specified without units, it is\n+ taken as milliseconds. The default is zero (no delay).\n+ </para>\n\n2a.\nThe pgdocs says this is an integer default to “ms” unit. Also, the\nexample on this same page shows it is set to '4h'. But I did not see\nany mention of what other units are available to the user. Maybe other\ntime units should be mentioned here, or maybe a link should be given\nto the section “20.1.1. Parameter Names and Values\".\n\n~\n\n2b.\nPreviously the word \"interval\" was deliberately used because this\nparameter had interval support. 
But maybe now it should be changed so\nit is not misleading.\n\n\"a given time interval\" --> \"a given time period\" ??\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3. Forward declare\n\n+static int defGetMinApplyDelay(DefElem *def);\n\nIf the new function is implemented as static near the top of this\nsource file then this forward declare would not even be necessary,\nright?\n\n~~~\n\n4. parse_subscription_options\n\n@@ -324,6 +328,12 @@ parse_subscription_options(ParseState *pstate,\nList *stmt_options,\n opts->specified_opts |= SUBOPT_LSN;\n opts->lsn = lsn;\n }\n+ else if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ strcmp(defel->defname, \"min_apply_delay\") == 0)\n+ {\n+ opts->specified_opts |= SUBOPT_MIN_APPLY_DELAY;\n+ opts->min_apply_delay = defGetMinApplyDelay(defel);\n+ }\n\nShould this code fragment be calling errorConflictingDefElem so it\nwill report an error if the same min_apply_delay parameter is\nredundantly repeated? (IIUC, this appears to be the code pattern for\nother parameters nearby).\n\n~~~\n\n5. parse_subscription_options\n\n+ /*\n+ * The combination of parallel streaming mode and min_apply_delay is not\n+ * allowed. The subscriber in the parallel streaming mode applies each\n+ * stream on arrival without the time of commit/prepare. So, the\n+ * subscriber needs to depend on the arrival time of the stream in this\n+ * case, if we apply the time-delayed feature for such transactions. Then\n+ * there is a possibility where some unnecessary delay will be added on\n+ * the subscriber by network communication break between nodes or other\n+ * heavy work load on the publisher. On the other hand, applying the delay\n+ * at the end of transaction with parallel apply also can cause issues of\n+ * used resource bloat and locks kept in open for a long time. Thus, those\n+ * features can't work together.\n+ */\n\nIMO some re-wording might be warranted here. I am not sure quite how\nto do it. 
Perhaps like below?\n\nSUGGESTION\n\nThe combination of parallel streaming mode and min_apply_delay is not allowed.\n\nHere are some reasons why these features are incompatible:\na. In the parallel streaming mode the subscriber applies each stream\non arrival without knowledge of the commit/prepare time. This means we\ncannot calculate the underlying network/decoding lag between publisher\nand subscriber, and so always waiting for the full 'min_apply_delay'\nperiod might include unnecessary delay.\nb. If we apply the delay at the end of the transaction of the parallel\napply then that would cause issues related to resource bloat and locks\nbeing held for a long time.\n\n~~~\n\n6. defGetMinApplyDelay\n\n+\n+\n+/*\n+ * Extract the min_apply_delay mode value from a DefElem. This is very similar\n+ * to PGC_INT case of parse_and_validate_value(), because min_apply_delay\n+ * accepts the same string as recovery_min_apply_delay.\n+ */\n+int\n+defGetMinApplyDelay(DefElem *def)\n\n6a.\n\"same string\" -> \"same parameter format\" ??\n\n~\n\n6b.\nI thought this function should be implemented as static and located at\nthe top of the subscriptioncmds.c source file.\n\n======\nsrc/backend/replication/logical/worker.c\n\n7. maybe_delay_apply\n\n+static void maybe_delay_apply(TransactionId xid, TimestampTz finish_ts);\n\nIs there a reason why this is here? AFAIK the static implementation\nprecedes any usage so I doubt this forward declaration is required.\n\n~~~\n\n8. 
send_feedback\n\n@@ -3775,11 +3912,12 @@ send_feedback(XLogRecPtr recvpos, bool force,\nbool requestReply)\n pq_sendint64(reply_message, now); /* sendTime */\n pq_sendbyte(reply_message, requestReply); /* replyRequested */\n\n- elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write\n%X/%X, flush %X/%X\",\n+ elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write\n%X/%X, flush %X/%X in-delayed: %d\",\n force,\n LSN_FORMAT_ARGS(recvpos),\n LSN_FORMAT_ARGS(writepos),\n- LSN_FORMAT_ARGS(flushpos));\n+ LSN_FORMAT_ARGS(flushpos),\n+ in_delayed_apply);\n\nWondering if it is better to write this as:\n\"sending feedback (force %d, in_delayed_apply %d) to recv %X/%X, write\n%X/%X, flush %X/%X\"\n\n======\nsrc/test/regress/sql/subscription.sql\n\n9. Add new test?\n\nShould there be an additional test to check redundant parameter\nsetting -- eg. \"... WITH (min_apply_delay=123, min_apply_delay=456)\"\n\n(this is related to the review comment #4)\n\n~\n\n10. Add new tests?\n\nShould there be other tests just to verify different units (like 'd',\n'h', 'min') are working OK?\n\n======\nsrc/test/subscription/t/032_apply_delay.pl\n\n11.\n+# Confirm the time-delayed replication has been effective from the server log\n+# message where the apply worker emits for applying delay. 
Moreover, verifies\n+# that the current worker's delayed time is sufficiently bigger than the\n+# expected value, in order to check any update of the min_apply_delay.\n+sub check_apply_delay_log\n\n\"the current worker's delayed time...\" --> \"the current worker's\nremaining wait time...\" ??\n\n~~~\n\n12.\n+ # Get the delay time from the server log\n+ my $contents = slurp_file($node_subscriber->logfile, $offset);\n\n\"Get the delay time....\" --> \"Get the remaining wait time...\"\n\n~~~\n\n13.\n+# Create a subscription that applies the trasaction after 50 milliseconds delay\n+$node_subscriber->safe_psql('postgres',\n+ \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub WITH (copy_data = off,\nmin_apply_delay = '50ms', streaming = 'on')\"\n+);\n\n13a.\ntypo: \"trasaction\"\n\n~\n\n13b\n50ms seems an extremely short time – How do you even know if this is\ntesting anything related to the time delay? You may just be detecting\nthe normal lag between publisher and subscriber without time delay\nhaving much to do with anything.\n\n~\n\n14.\n\n+# Note that we cannot call check_apply_delay_log() here because there is a\n+# possibility that the delay is skipped. The event happens when the WAL\n+# replication between publisher and subscriber is delayed due to a mechanical\n+# problem. The log output will be checked later - substantial delay-time case.\n+\n+# Verify that the subscriber lags the publisher by at least 50 milliseconds\n+check_apply_delay_time($node_publisher, $node_subscriber, '2', '0.05');\n\n14a.\n\"The event happens...\" ??\n\nDid you mean \"This might happen if the WAL...\"\n\n~\n\n14b.\nThe log output will be checked later - substantial delay-time case.\n\nI think that needs re-wording to clarify.\ne.g1. you have nothing called a \"substantial delay-time\" case.\ne.g2. the word \"later\" confused me. 
Originally, I thought you meant it\nis not tested yet but that you will check it \"later\", but now IIUC you\nare just referring to the \"1 day 5 minutes\" test that comes below in\nthis location TAP file (??)\n\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 23 Jan 2023 19:06:41 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 1:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Here are my review comments for v19-0001.\n>\n...\n>\n> 5. parse_subscription_options\n>\n> + /*\n> + * The combination of parallel streaming mode and min_apply_delay is not\n> + * allowed. The subscriber in the parallel streaming mode applies each\n> + * stream on arrival without the time of commit/prepare. So, the\n> + * subscriber needs to depend on the arrival time of the stream in this\n> + * case, if we apply the time-delayed feature for such transactions. Then\n> + * there is a possibility where some unnecessary delay will be added on\n> + * the subscriber by network communication break between nodes or other\n> + * heavy work load on the publisher. On the other hand, applying the delay\n> + * at the end of transaction with parallel apply also can cause issues of\n> + * used resource bloat and locks kept in open for a long time. Thus, those\n> + * features can't work together.\n> + */\n>\n> IMO some re-wording might be warranted here. I am not sure quite how\n> to do it. Perhaps like below?\n>\n> SUGGESTION\n>\n> The combination of parallel streaming mode and min_apply_delay is not allowed.\n>\n> Here are some reasons why these features are incompatible:\n> a. In the parallel streaming mode the subscriber applies each stream\n> on arrival without knowledge of the commit/prepare time. This means we\n> cannot calculate the underlying network/decoding lag between publisher\n> and subscriber, and so always waiting for the full 'min_apply_delay'\n> period might include unnecessary delay.\n> b. If we apply the delay at the end of the transaction of the parallel\n> apply then that would cause issues related to resource bloat and locks\n> being held for a long time.\n>\n> ~~~\n>\n\nHow about something like:\nThe combination of parallel streaming mode and min_apply_delay is not\nallowed. 
This is because we start applying the transaction stream as\nsoon as the first change arrives without knowing the transaction's\nprepare/commit time. This means we cannot calculate the underlying\nnetwork/decoding lag between publisher and subscriber, and so always\nwaiting for the full 'min_apply_delay' period might include\nunnecessary delay.\n\nThe other possibility is to apply the delay at the end of the parallel\napply transaction but that would cause issues related to resource\nbloat and locks being held for a long time.\n\n\n> 6. defGetMinApplyDelay\n>\n> +\n> +\n> +/*\n> + * Extract the min_apply_delay mode value from a DefElem. This is very similar\n> + * to PGC_INT case of parse_and_validate_value(), because min_apply_delay\n> + * accepts the same string as recovery_min_apply_delay.\n> + */\n> +int\n> +defGetMinApplyDelay(DefElem *def)\n>\n> 6a.\n> \"same string\" -> \"same parameter format\" ??\n>\n> ~\n>\n> 6b.\n> I thought this function should be implemented as static and located at\n> the top of the subscriptioncmds.c source file.\n>\n\nI agree that this should be a static function but I think its current\nlocation is a better place as other similar function is just above it.\n\n>\n> ======\n> src/test/regress/sql/subscription.sql\n>\n> 9. Add new test?\n>\n> Should there be an additional test to check redundant parameter\n> setting -- eg. \"... WITH (min_apply_delay=123, min_apply_delay=456)\"\n>\n\nI don't think that will be of much help. We don't seem to have other\ntests for subscription parameters.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Jan 2023 16:14:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Sun, Jan 22, 2023 at 6:12 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> Attached the updated patch v19.\n>\n\nFew comments:\n=============\n1.\n}\n+\n+\n+/*\n\nOnly one empty line is sufficient between different functions.\n\n2.\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0 && opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"min_apply_delay > 0\", \"streaming = parallel\"));\n }\n\nI think here we should add a comment for the translator as we are\ndoing in some other nearby cases.\n\n3.\n+ /*\n+ * The combination of parallel streaming mode and\n+ * min_apply_delay is not allowed.\n+ */\n+ if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n+ if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nopts.min_apply_delay > 0) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nsub->minapplydelay > 0))\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot enable %s mode for subscription with %s\",\n+ \"streaming = parallel\", \"min_apply_delay\"));\n+\n\nA. When can second condition ((!IsSet(opts.specified_opts,\nSUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0)) in above check be\ntrue?\nB. In comments, you can say \"See parse_subscription_options.\"\n\n4.\n+/*\n+ * When min_apply_delay parameter is set on the subscriber, we wait long enough\n+ * to make sure a transaction is applied at least that interval behind the\n+ * publisher.\n\nShouldn't this part of the comment needs to be updated after the patch\nhas stopped using interval?\n\n5. How does this feature interacts with the SKIP feature? Currently,\nit doesn't care whether the changes of a particular xact are skipped\nor not. I think that might be okay because anyway the purpose of this\nfeature is to make subscriber lag from publishers. 
What do you think?\nI feel we can add some comments to indicate the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 23 Jan 2023 17:36:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 9:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 23, 2023 at 1:36 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Here are my review comments for v19-0001.\n> >\n> ...\n> >\n> > 5. parse_subscription_options\n> >\n> > + /*\n> > + * The combination of parallel streaming mode and min_apply_delay is not\n> > + * allowed. The subscriber in the parallel streaming mode applies each\n> > + * stream on arrival without the time of commit/prepare. So, the\n> > + * subscriber needs to depend on the arrival time of the stream in this\n> > + * case, if we apply the time-delayed feature for such transactions. Then\n> > + * there is a possibility where some unnecessary delay will be added on\n> > + * the subscriber by network communication break between nodes or other\n> > + * heavy work load on the publisher. On the other hand, applying the delay\n> > + * at the end of transaction with parallel apply also can cause issues of\n> > + * used resource bloat and locks kept in open for a long time. Thus, those\n> > + * features can't work together.\n> > + */\n> >\n> > IMO some re-wording might be warranted here. I am not sure quite how\n> > to do it. Perhaps like below?\n> >\n> > SUGGESTION\n> >\n> > The combination of parallel streaming mode and min_apply_delay is not allowed.\n> >\n> > Here are some reasons why these features are incompatible:\n> > a. In the parallel streaming mode the subscriber applies each stream\n> > on arrival without knowledge of the commit/prepare time. This means we\n> > cannot calculate the underlying network/decoding lag between publisher\n> > and subscriber, and so always waiting for the full 'min_apply_delay'\n> > period might include unnecessary delay.\n> > b. 
If we apply the delay at the end of the transaction of the parallel\n> > apply then that would cause issues related to resource bloat and locks\n> > being held for a long time.\n> >\n> > ~~~\n> >\n>\n> How about something like:\n> The combination of parallel streaming mode and min_apply_delay is not\n> allowed. This is because we start applying the transaction stream as\n> soon as the first change arrives without knowing the transaction's\n> prepare/commit time. This means we cannot calculate the underlying\n> network/decoding lag between publisher and subscriber, and so always\n> waiting for the full 'min_apply_delay' period might include\n> unnecessary delay.\n>\n> The other possibility is to apply the delay at the end of the parallel\n> apply transaction but that would cause issues related to resource\n> bloat and locks being held for a long time.\n>\n\n+1. That's better.\n\n>\n> > 6. defGetMinApplyDelay\n> >\n...\n> >\n> > 6b.\n> > I thought this function should be implemented as static and located at\n> > the top of the subscriptioncmds.c source file.\n> >\n>\n> I agree that this should be a static function but I think its current\n> location is a better place as other similar function is just above it.\n>\n\nBut, why not do everything, instead of settling on a half-fix?\n\ne.g.\n1. Change the new function (defGetMinApplyDelay) to be static as it should be\n2. And move defGetMinApplyDelay to the top of the file where IMO it\nreally belongs\n3. And then remove the (now) redundant forward declaration of\ndefGetMinApplyDelay\n4. And also move the existing function (defGetStreamingMode) to the\ntop of the file so that those similar functions (defGetMinApplyDelay\nand defGetStreamingMode) can remain together\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 24 Jan 2023 09:15:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Sun, Jan 22, 2023, at 9:42 AM, Takamichi Osumi (Fujitsu) wrote:\n> On Saturday, January 21, 2023 3:36 AM I wrote:\n> > Kindly have a look at the patch v18.\n> I've conducted some refactoring for v18.\n> Now the latest patch should be tidier and\n> the comments would be clearer and more aligned as a whole.\n> \n> Attached the updated patch v19.\n[I haven't been following this thread for a long time...]\n\nGood to know that you keep improving this patch. I have a few suggestions that\nwere easier to provide a patch on top of your latest patch than to provide an\ninline suggestions.\n\nThere are a few documentation polishing. Let me comment some of them above.\n\n- The length of time (ms) to delay the application of changes.\n+ Total time spent delaying the application of changes, in milliseconds\n\nI don't remember if I suggested this description for catalog but IMO the\nsuggestion reads better for me.\n\n- For time-delayed logical replication (i.e. when the subscription is\n- created with parameter min_apply_delay > 0), the apply worker sends a\n- Standby Status Update message to the publisher with a period of\n- <literal>wal_receiver_status_interval</literal>. Make sure to set\n- <literal>wal_receiver_status_interval</literal> less than the\n- <literal>wal_sender_timeout</literal> on the publisher, otherwise, the\n- walsender will repeatedly terminate due to the timeout errors. If\n- <literal>wal_receiver_status_interval</literal> is set to zero, the apply\n- worker doesn't send any feedback messages during the subscriber's\n- <literal>min_apply_delay</literal> period. See\n- <xref linkend=\"sql-createsubscription\"/> for details.\n+ For time-delayed logical replication, the apply worker sends a feedback\n+ message to the publisher every\n+ <varname>wal_receiver_status_interval</varname> milliseconds. 
Make sure\n+ to set <varname>wal_receiver_status_interval</varname> less than the\n+ <varname>wal_sender_timeout</varname> on the publisher, otherwise, the\n+ <literal>walsender</literal> will repeatedly terminate due to timeout\n+ error. If <varname>wal_receiver_status_interval</varname> is set to\n+ zero, the apply worker doesn't send any feedback messages during the\n+ <literal>min_apply_delay</literal> interval.\n\nI removed the parenthesis explanation about time-delayed logical replication.\nIf you are reading the documentation and does not know what it means you should\n(a) read the logical replication chapter or (b) check the glossary (maybe a new\nentry should be added). I also removed the Standby status Update message but it\nis a low level detail; let's refer to it as feedback message as the other\nsentences do. I changed \"literal\" to \"varname\" that's the correct tag for\nparameters. I replace \"period\" with \"interval\" that was the previous\nterminology. IMO we should be uniform, use one or the other.\n\n- The subscriber replication can be instructed to lag behind the publisher\n- side changes by specifying the <literal>min_apply_delay</literal>\n- subscription parameter. 
See <xref linkend=\"sql-createsubscription\"/> for\n- details.\n+ A logical replication subscription can delay the application of changes by\n+ specifying the <literal>min_apply_delay</literal> subscription parameter.\n+ See <xref linkend=\"sql-createsubscription\"/> for details.\n\nThis feature refers to a specific subscription, hence, \"logical replication\nsubscription\" instead of \"subscriber replication\".\n\n+ if (IsSet(opts->specified_opts, SUBOPT_MIN_APPLY_DELAY))\n+ errorConflictingDefElem(defel, pstate);\n+\n\nPeter S referred to this missing piece of code too.\n\n-int\n+static int\ndefGetMinApplyDelay(DefElem *def)\n{\n\nIt seems you forgot static keyword.\n\n- elog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay = %lld ms, Remaining wait time: %ld ms\",\n- xid, (long long) MySubscription->minapplydelay, diffms);\n+ elog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay = \" INT64_FORMAT \" ms, remaining wait time: %ld ms\",\n+ xid, MySubscription->minapplydelay, diffms);\n\n\nint64 should use format modifier INT64_FORMAT.\n\n- (long) wal_receiver_status_interval * 1000,\n+ wal_receiver_status_interval * 1000L,\n\nCast is not required. I added a suffix to the constant.\n\n- elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X in-delayed: %d\",\n+ elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X, apply delay: %s\",\n force,\n LSN_FORMAT_ARGS(recvpos),\n LSN_FORMAT_ARGS(writepos),\n LSN_FORMAT_ARGS(flushpos),\n- in_delayed_apply);\n+ in_delayed_apply? \"yes\" : \"no\");\n\nIt is better to use a string to represent the yes/no option.\n\n- gettext_noop(\"Min apply delay (ms)\"));\n+ gettext_noop(\"Min apply delay\"));\n\nI don't know if it was discussed but we don't add units to headers. When I\nthink about this parameter representation (internal and external), I decided to\nuse the previous code because it provides a unit for external representation. 
I\nunderstand that using the same representation as recovery_min_apply_delay is\ngood but the current code does not handle the external representation\naccordingly. (recovery_min_apply_delay uses the GUC machinery to adds the unit\nbut for min_apply_delay, it doesn't).\n\n# Setup for streaming case\n-$node_publisher->append_conf('postgres.conf',\n+$node_publisher->append_conf('postgresql.conf',\n 'logical_decoding_mode = immediate');\n$node_publisher->reload;\n\nFix configuration file name.\n\nMaybe tests should do a better job. I think check_apply_delay_time is fragile\nbecause it does not guarantee that time is not shifted. Time-delayed\nreplication is a subscriber feature and to check its correctness it should\ncheck the logs.\n\n# Note that we cannot call check_apply_delay_log() here because there is a\n# possibility that the delay is skipped. The event happens when the WAL\n# replication between publisher and subscriber is delayed due to a mechanical\n# problem. The log output will be checked later - substantial delay-time case.\n\nIf you might not use the logs for it, it should adjust the min_apply_delay, no?\n\nIt does not exercise the min_apply_delay vs parallel streaming mode.\n\n+ /*\n+ * The combination of parallel streaming mode and\n+ * min_apply_delay is not allowed.\n+ */\n+ if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n+ if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && opts.min_apply_delay > 0) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0))\n+ ereport(ERROR,\n+ errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n+ errmsg(\"cannot enable %s mode for subscription with %s\",\n+ \"streaming = parallel\", \"min_apply_delay\"));\n+\n\nIs this code correct? I also didn't like this message. \"cannot enable streaming\n= parallel mode for subscription with min_apply_delay\" is far from a good error\nmessage. 
How about referring to the parallelism as \"parallel streaming mode\"?\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 23 Jan 2023 20:32:08 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 23 Jan 2023 17:36:13 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Sun, Jan 22, 2023 at 6:12 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> >\n> > Attached the updated patch v19.\n> Few comments:\n> 2.\n> + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> + opts->min_apply_delay > 0 && opts->streaming == LOGICALREP_STREAM_PARALLEL)\n> + ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> + errmsg(\"%s and %s are mutually exclusive options\",\n> + \"min_apply_delay > 0\", \"streaming = parallel\"));\n> }\n> \n> I think here we should add a comment for the translator as we are\n> doing in some other nearby cases.\n\nIMHO \"foo > bar\" is not an \"option\". I think we say \"foo and bar are\nmutually exclusive options\" but I don't think we say \"foo = x and bar = y\nare.. options\". I wrote a comment as \"this should be more like\nhuman-speaking\" and Euler seems to have the same feeling for another\nerror message.\n\nConcretely I would spell this as \"min_apply_delay cannot be enabled\nwhen parallel streaming mode is enabled\" or something. And the\nopposite-direction message nearby would be \"parallel streaming mode\ncannot be enabled when min_apply_delay is enabled.\"\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 09:45:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "> Attached the updated patch v19.\n\n+ maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\n\nThis spelling looks strange to me. How about maybe_apply_delay()?\n\n\nsend_feedback():\n+\t * If the subscriber side apply is delayed (because of time-delayed\n+\t * replication) then do not tell the publisher that the received latest\n+\t * LSN is already applied and flushed, otherwise, it leads to the\n+\t * publisher side making a wrong assumption of logical replication\n+\t * progress. Instead, we just send a feedback message to avoid a publisher\n+\t * timeout during the delay.\n \t */\n-\tif (!have_pending_txes)\n+\tif (!have_pending_txes && !in_delayed_apply)\n \t\tflushpos = writepos = recvpos;\n\nHonestly I don't like this wart. The reason for this is the function\nassumes recvpos = applypos but we actually call it while holding\nunapplied changes, that is, applypos < recvpos.\n\nCouldn't we maintain an additional static variable \"last_applied\"\nalong with last_received? In this case the condition cited above\nwould be as follows and in_delayed_apply will become unnecessary.\n\n+\tif (!have_pending_txes && last_received == last_applied)\n\nThe function is a static function and always called with a variable\nlast_received that has the same scope as the function, as the first\nparameter. Thus we can remove the first parameter and let the\nfunction directly look at both variables instead.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:45:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Sorry, I forgot to write one comment.\n\nAt Tue, 24 Jan 2023 11:45:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n\n+\t/* Should we delay the current transaction? */\n+\tif (finish_ts)\n+\t\tmaybe_delay_apply(xid, finish_ts);\n+\n \tif (!am_parallel_apply_worker())\n \t\tmaybe_start_skipping_changes(lsn);\n\nIt may not give actual advantages, but isn't it better if the delay\nhappens after skipping?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 12:05:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 3:46 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Jan 23, 2023 at 9:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > > 6. defGetMinApplyDelay\n> > >\n> ...\n> > >\n> > > 6b.\n> > > I thought this function should be implemented as static and located at\n> > > the top of the subscriptioncmds.c source file.\n> > >\n> >\n> > I agree that this should be a static function but I think its current\n> > location is a better place as other similar function is just above it.\n> >\n>\n> But, why not do everything, instead of settling on a half-fix?\n>\n> e.g.\n> 1. Change the new function (defGetMinApplyDelay) to be static as it should be\n> 2. And move defGetMinApplyDelay to the top of the file where IMO it\n> really belongs\n> 3. And then remove the (now) redundant forward declaration of\n> defGetMinApplyDelay\n> 4. And also move the existing function (defGetStreamingMode) to the\n> top of the file so that those similar functions (defGetMinApplyDelay\n> and defGetStreamingMode) can remain together\n>\n\nThere are various other static functions (merge_publications,\ncheck_duplicates_in_publist, etc.) which then also needs similar\nchange. BTW, I don't think we have a policy to always define static\nfunctions before their usage. So, I don't see the need to do anything\nin this matter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 10:37:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 5:02 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Sun, Jan 22, 2023, at 9:42 AM, Takamichi Osumi (Fujitsu) wrote:\n>\n>\n> Attached the updated patch v19.\n>\n> [I haven't been following this thread for a long time...]\n>\n> Good to know that you keep improving this patch. I have a few suggestions that\n> were easier to provide a patch on top of your latest patch than to provide an\n> inline suggestions.\n>\n\nEuler, thanks for your comments. We have an existing problem related\nto shutdown which impacts this patch. The problem is that during\nshutdown on the publisher, we wait for all the WAL to be sent and\nflushed on the subscriber. Now, if the user has configured a long value\nfor min_apply_delay on the subscriber then the shutdown won't be\nsuccessful. This can happen even today if the subscriber waits for\nsome lock during the apply. This is not so much a problem with\nphysical replication because there we have a separate process to first\nflush the WAL. This problem has been discussed in a separate thread as\nwell. See [1]. It is important to reach a conclusion even if we just\nwant to document it. So, your thoughts on that other thread can help\nus to make it move forward.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB586668E50FC2447AD7F92491F5E89%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 10:48:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 6:17 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 23 Jan 2023 17:36:13 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Sun, Jan 22, 2023 at 6:12 PM Takamichi Osumi (Fujitsu)\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > >\n> > > Attached the updated patch v19.\n> > Few comments:\n> > 2.\n> > + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > + opts->min_apply_delay > 0 && opts->streaming == LOGICALREP_STREAM_PARALLEL)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"%s and %s are mutually exclusive options\",\n> > + \"min_apply_delay > 0\", \"streaming = parallel\"));\n> > }\n> >\n> > I think here we should add a comment for the translator as we are\n> > doing in some other nearby cases.\n>\n> IMHO \"foo > bar\" is not an \"option\". I think we say \"foo and bar are\n> mutually exclusive options\" but I think don't say \"foo = x and bar = y\n> are.. options\". I wrote a comment as \"this should be more like\n> human-speaking\" and Euler seems having the same feeling for another\n> error message.\n>\n> Concretely I would spell this as \"min_apply_delay cannot be enabled\n> when parallel streaming mode is enabled\" or something.\n>\n\nWe can change it but the current message seems to be in line with some\nnearby messages like \"slot_name = NONE and enabled = true are mutually\nexclusive options\". So, isn't it better to keep this as one in sync\nwith existing messages?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:28:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 8:35 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Sorry, I forgot to write one comment.\n>\n> At Tue, 24 Jan 2023 11:45:35 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>\n> + /* Should we delay the current transaction? */\n> + if (finish_ts)\n> + maybe_delay_apply(xid, finish_ts);\n> +\n> if (!am_parallel_apply_worker())\n> maybe_start_skipping_changes(lsn);\n>\n> It may not give actual advantages, but isn't it better that delay\n> happens after skipping?\n>\n\nIf we go with the order you are suggesting then the LOGs will appear\nas follows when we are skipping the transaction:\n\n\"logical replication starts skipping transaction at LSN ...\"\n\"time-delayed replication for txid %u, min_apply_delay = %lld ms,\nRemaining wait time: ...\"\n\nPersonally, I would prefer the above LOGs to be in reverse order as it\ndoesn't make much sense to me to first say that we are skipping\nchanges and then say the transaction is delayed. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 11:45:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
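The "Remaining wait time" mentioned in the log message above boils down to a small computation: the subscriber sleeps until the transaction's commit/prepare timestamp plus min_apply_delay has elapsed. A minimal Python sketch of that idea (the function name and types here are illustrative, not the patch's C code):

```python
from datetime import datetime, timedelta, timezone

def remaining_delay_ms(finish_ts, min_apply_delay_ms, now=None):
    """Milliseconds left before a transaction may be applied.

    The subscriber waits until finish_ts (the transaction's commit or
    prepare time on the publisher) plus min_apply_delay has passed; a
    result <= 0 means the transaction can be applied immediately.
    """
    if now is None:
        now = datetime.now(timezone.utc)
    wakeup = finish_ts + timedelta(milliseconds=min_apply_delay_ms)
    return int((wakeup - now).total_seconds() * 1000)
```

A worker would log and sleep only when the result is positive, which matches the behavior discussed above where the delay log line is emitted only if a wait actually happens.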
{
"msg_contents": "On Tue, Jan 24, 2023 at 8:15 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > Attached the updated patch v19.\n>\n> + maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\n>\n> I look this spelling strange. How about maybe_apply_delay()?\n>\n\n+1.\n\n>\n> send_feedback():\n> + * If the subscriber side apply is delayed (because of time-delayed\n> + * replication) then do not tell the publisher that the received latest\n> + * LSN is already applied and flushed, otherwise, it leads to the\n> + * publisher side making a wrong assumption of logical replication\n> + * progress. Instead, we just send a feedback message to avoid a publisher\n> + * timeout during the delay.\n> */\n> - if (!have_pending_txes)\n> + if (!have_pending_txes && !in_delayed_apply)\n> flushpos = writepos = recvpos;\n>\n> Honestly I don't like this wart. The reason for this is the function\n> assumes recvpos = applypos but we actually call it while holding\n> unapplied changes, that is, applypos < recvpos.\n>\n> Couldn't we maintain an additional static variable \"last_applied\"\n> along with last_received?\n>\n\nIt won't be easy to maintain the meaning of last_applied because there\nare cases where we don't apply the change directly. For example, in\ncase of streaming xacts, we will just keep writing it to the file,\nnow, say, due to some reason, we have to send the feedback, then it\nwill not allow you to update the latest write locations. This would\nthen become different then what we are doing without the patch.\nAnother point to think about is that we also need to keep the variable\nupdated for keep-alive ('k') messages even though we don't apply\nanything in that case. 
Still, other cases to consider are where we\nhave mix of streaming and non-streaming transactions.\n\n> In this case the condition cited above\n> would be as follows and in_delayed_apply will become unnecessary.\n>\n> + if (!have_pending_txes && last_received == last_applied)\n>\n> The function is a static function and always called with a variable\n> last_received that has the same scope with the function, as the first\n> parameter. Thus we can remove the first parameter then let the\n> function directly look at the both two varaibles instead.\n>\n\nI think this is true without this patch, so why that has not been\nfollowed in the first place? One comment, I see in this regard is as\nbelow:\n\n/* It's legal to not pass a recvpos */\nif (recvpos < last_recvpos)\nrecvpos = last_recvpos;\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 12:27:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
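The send_feedback() condition quoted in this exchange amounts to: only report the latest received LSN as written and flushed when no transactions are pending and the worker is not inside a time-delayed apply; otherwise the publisher would wrongly assume the delayed changes were already applied. A rough Python sketch of that decision (variable names follow the quoted C fragment, but this is an illustration rather than the actual implementation):

```python
def feedback_positions(recvpos, writepos, flushpos,
                       have_pending_txes, in_delayed_apply):
    """Choose the write/flush positions to report to the publisher.

    Advancing both positions to the latest received LSN is only safe
    when nothing is pending and the apply is not intentionally delayed.
    """
    if not have_pending_txes and not in_delayed_apply:
        writepos = flushpos = recvpos
    return writepos, flushpos
```

When the apply is delayed, the stale write/flush positions are reported unchanged, so the feedback message still keeps the connection alive without advancing the publisher's view of progress.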
{
"msg_contents": "On Tue, Jan 24, 2023 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 24, 2023 at 8:15 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > > Attached the updated patch v19.\n> >\n> > + maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\n> >\n> > I look this spelling strange. How about maybe_apply_delay()?\n> >\n>\n> +1.\n\nIt depends on how you read it. I read it like this:\n\nmaybe_delay_apply === means \"maybe delay [the] apply\"\n(which is exactly what the function does)\n\nversus\n\nmaybe_apply_delay === means \"maybe [the] apply [needs a] delay\"\n(which is also correct, but it seemed a more awkward way to say it IMO)\n\n~\n\nPerhaps it's better to rename it more fully like\n*maybe_delay_the_apply* to remove any ambiguous interpretations.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 24 Jan 2023 18:13:50 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 11:28:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Jan 24, 2023 at 6:17 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > IMHO \"foo > bar\" is not an \"option\". I think we say \"foo and bar are\n> > mutually exclusive options\" but I think don't say \"foo = x and bar = y\n> > are.. options\". I wrote a comment as \"this should be more like\n> > human-speaking\" and Euler seems having the same feeling for another\n> > error message.\n> >\n> > Concretely I would spell this as \"min_apply_delay cannot be enabled\n> > when parallel streaming mode is enabled\" or something.\n> >\n> \n> We can change it but the current message seems to be in line with some\n> nearby messages like \"slot_name = NONE and enabled = true are mutually\n> exclusive options\". So, isn't it better to keep this as one in sync\n> with existing messages?\n\nOoo. subscriptioncmds.c is full of such messages. Okay I agree that it\nis better to leave it as is..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 24 Jan 2023 17:33:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 12:44 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Jan 24, 2023 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 24, 2023 at 8:15 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > > Attached the updated patch v19.\n> > >\n> > > + maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\n> > >\n> > > I look this spelling strange. How about maybe_apply_delay()?\n> > >\n> >\n> > +1.\n>\n> It depends on how you read it. I read it like this:\n>\n> maybe_delay_apply === means \"maybe delay [the] apply\"\n> (which is exactly what the function does)\n>\n> versus\n>\n> maybe_apply_delay === means \"maybe [the] apply [needs a] delay\"\n> (which is also correct, but it seemed a more awkward way to say it IMO)\n>\n\nThis matches more with GUC and all other usages of variables in the\npatch. So, I still prefer the second one.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:22:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit, Horiguchi-san,\r\n\r\n> >\r\n> > send_feedback():\r\n> > + * If the subscriber side apply is delayed (because of time-delayed\r\n> > + * replication) then do not tell the publisher that the received latest\r\n> > + * LSN is already applied and flushed, otherwise, it leads to the\r\n> > + * publisher side making a wrong assumption of logical replication\r\n> > + * progress. Instead, we just send a feedback message to avoid a\r\n> publisher\r\n> > + * timeout during the delay.\r\n> > */\r\n> > - if (!have_pending_txes)\r\n> > + if (!have_pending_txes && !in_delayed_apply)\r\n> > flushpos = writepos = recvpos;\r\n> >\r\n> > Honestly I don't like this wart. The reason for this is the function\r\n> > assumes recvpos = applypos but we actually call it while holding\r\n> > unapplied changes, that is, applypos < recvpos.\r\n> >\r\n> > Couldn't we maintain an additional static variable \"last_applied\"\r\n> > along with last_received?\r\n> >\r\n> \r\n> It won't be easy to maintain the meaning of last_applied because there\r\n> are cases where we don't apply the change directly. For example, in\r\n> case of streaming xacts, we will just keep writing it to the file,\r\n> now, say, due to some reason, we have to send the feedback, then it\r\n> will not allow you to update the latest write locations. This would\r\n> then become different then what we are doing without the patch.\r\n> Another point to think about is that we also need to keep the variable\r\n> updated for keep-alive ('k') messages even though we don't apply\r\n> anything in that case. 
Still, other cases to consider are where we\r\n> have mix of streaming and non-streaming transactions.\r\n\r\nI have tried to implement that, but it might be difficult because of a corner\r\ncase related to the initial data sync.\r\n\r\nFirst of all, I have made last_applied update when\r\n\r\n* transactions are committed, prepared, or aborted\r\n* apply worker receives keepalive message.\r\n\r\nI thought during the initial data sync, we must not update the last applied\r\ntriggered by keepalive messages, so the following lines were added just after\r\nupdating last_received.\r\n\r\n```\r\n+ if (last_applied < end_lsn && AllTablesyncsReady())\r\n+ last_applied = end_lsn;\r\n```\r\n\r\nHowever, if data is synchronizing and workers receive the non-committable WAL,\r\nthis condition cannot be satisfied. 009_matviews.pl tests such a case, and I\r\ngot a failure there. In this test a MATERIALIZED VIEW is created on the publisher and then\r\nthe WAL is replicated to the subscriber, but the transaction is not committed because\r\nlogical replication does not support the statement.\r\nIf we change the condition, the system may become inconsistent because the\r\nworker replies that all remote WALs are applied even if tablesync workers are\r\nsynchronizing data.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 24 Jan 2023 10:12:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\r\n\r\n\r\nOn Tuesday, January 24, 2023 5:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Jan 24, 2023 at 12:44 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jan 24, 2023 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Tue, Jan 24, 2023 at 8:15 AM Kyotaro Horiguchi\r\n> > > <horikyota.ntt@gmail.com> wrote:\r\n> > > >\r\n> > > > > Attached the updated patch v19.\r\n> > > >\r\n> > > > + maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\r\n> > > >\r\n> > > > I look this spelling strange. How about maybe_apply_delay()?\r\n> > > >\r\n> > >\r\n> > > +1.\r\n> >\r\n> > It depends on how you read it. I read it like this:\r\n> >\r\n> > maybe_delay_apply === means \"maybe delay [the] apply\"\r\n> > (which is exactly what the function does)\r\n> >\r\n> > versus\r\n> >\r\n> > maybe_apply_delay === means \"maybe [the] apply [needs a] delay\"\r\n> > (which is also correct, but it seemed a more awkward way to say it\r\n> > IMO)\r\n> >\r\n> \r\n> This matches more with GUC and all other usages of variables in the patch. So,\r\n> I still prefer the second one.\r\nOkay. Fixed.\r\n\r\n\r\nAttached the patch v20 that has incorporated all comments so far.\r\nKindly have a look at the attached patch.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi",
"msg_date": "Tue, 24 Jan 2023 12:19:04 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, January 24, 2023 3:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > send_feedback():\r\n> > + * If the subscriber side apply is delayed (because of time-delayed\r\n> > + * replication) then do not tell the publisher that the received latest\r\n> > + * LSN is already applied and flushed, otherwise, it leads to the\r\n> > + * publisher side making a wrong assumption of logical replication\r\n> > + * progress. Instead, we just send a feedback message to avoid a\r\n> publisher\r\n> > + * timeout during the delay.\r\n> > */\r\n> > - if (!have_pending_txes)\r\n> > + if (!have_pending_txes && !in_delayed_apply)\r\n> > flushpos = writepos = recvpos;\r\n> >\r\n> > Honestly I don't like this wart. The reason for this is the function\r\n> > assumes recvpos = applypos but we actually call it while holding\r\n> > unapplied changes, that is, applypos < recvpos.\r\n> >\r\n> > Couldn't we maintain an additional static variable \"last_applied\"\r\n> > along with last_received?\r\n> >\r\n> \r\n> It won't be easy to maintain the meaning of last_applied because there are\r\n> cases where we don't apply the change directly. For example, in case of\r\n> streaming xacts, we will just keep writing it to the file, now, say, due to some\r\n> reason, we have to send the feedback, then it will not allow you to update the\r\n> latest write locations. This would then become different then what we are\r\n> doing without the patch.\r\n> Another point to think about is that we also need to keep the variable updated\r\n> for keep-alive ('k') messages even though we don't apply anything in that case.\r\n> Still, other cases to consider are where we have mix of streaming and\r\n> non-streaming transactions.\r\nAgreed. This will change some existing behaviors. 
So, didn't conduct this change in the latest patch [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373DC1881F382B4703F26E0EDC99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 24 Jan 2023 12:32:03 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, January 23, 2023 9:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Sun, Jan 22, 2023 at 6:12 PM Takamichi Osumi (Fujitsu)\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> >\r\n> > Attached the updated patch v19.\r\n> >\r\n> \r\n> Few comments:\r\n> =============\r\n> 1.\r\n> }\r\n> +\r\n> +\r\n> +/*\r\n> \r\n> Only one empty line is sufficient between different functions.\r\nFixed.\r\n\r\n\r\n> 2.\r\n> + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> + opts->min_apply_delay > 0 && opts->streaming ==\r\n> + opts->LOGICALREP_STREAM_PARALLEL)\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_SYNTAX_ERROR),\r\n> + errmsg(\"%s and %s are mutually exclusive options\",\r\n> + \"min_apply_delay > 0\", \"streaming = parallel\"));\r\n> }\r\n> \r\n> I think here we should add a comment for the translator as we are doing in\r\n> some other nearby cases.\r\nFixed.\r\n\r\n\r\n> 3.\r\n> + /*\r\n> + * The combination of parallel streaming mode and\r\n> + * min_apply_delay is not allowed.\r\n> + */\r\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL) if\r\n> + ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> opts.min_apply_delay > 0) ||\r\n> + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> sub->minapplydelay > 0))\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\r\n> + errmsg(\"cannot enable %s mode for subscription with %s\",\r\n> + \"streaming = parallel\", \"min_apply_delay\"));\r\n> +\r\n> \r\n> A. When can second condition ((!IsSet(opts.specified_opts,\r\n> SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0)) in above check\r\n> be true?\r\n> B. 
In comments, you can say \"See parse_subscription_options.\"\r\n(1) In the alter statement, streaming = parallel is set.\r\nAlso, (2) in the alter statement, min_apply_delay isn't set.\r\nand (3) an existing subscription has non-zero min_apply_delay.\r\n\r\nAdded the comment.\r\n> 4.\r\n> +/*\r\n> + * When min_apply_delay parameter is set on the subscriber, we wait\r\n> +long enough\r\n> + * to make sure a transaction is applied at least that interval behind\r\n> +the\r\n> + * publisher.\r\n> \r\n> Shouldn't this part of the comment needs to be updated after the patch has\r\n> stopped using interval?\r\nYes. I removed \"interval\" in descriptions so that we don't get\r\nconfused with types.\r\n\r\n\r\n> 5. How does this feature interacts with the SKIP feature? Currently, it doesn't\r\n> care whether the changes of a particular xact are skipped or not. I think that\r\n> might be okay because anyway the purpose of this feature is to make\r\n> subscriber lag from publishers. What do you think?\r\n> I feel we can add some comments to indicate the same.\r\nAdded the comment in the commit message.\r\nI didn't add this kind of comment as code comments,\r\nsince both features are independent. If there is a need to write it anywhere,\r\nthen please let me know. The latest patch is posted in [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373DC1881F382B4703F26E0EDC99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 24 Jan 2023 13:59:22 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
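The two checks discussed for v20 — rejecting min_apply_delay > 0 together with streaming = parallel at CREATE SUBSCRIPTION time, and the ALTER SUBSCRIPTION case where streaming = parallel is enabled while the existing subscription already has a non-zero min_apply_delay — reduce to a single rule. A hedged Python sketch (the function and argument names are invented for illustration; the real checks live in parse_subscription_options() and the ALTER path):

```python
def check_delay_vs_parallel(streaming, new_delay_ms=None, existing_delay_ms=0):
    """Raise if parallel streaming would be combined with a non-zero delay.

    new_delay_ms is the value given in the current statement (None if the
    option was not specified); existing_delay_ms is the value already
    stored for the subscription (relevant for ALTER).
    """
    delay = new_delay_ms if new_delay_ms is not None else existing_delay_ms
    if streaming == "parallel" and delay > 0:
        raise ValueError("min_apply_delay > 0 and streaming = parallel "
                         "are mutually exclusive options")
```

Folding the "option given now" and "value already stored" cases into one effective delay is what makes the ALTER condition (2)-(3) described above fall out naturally.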
{
"msg_contents": "On Monday, January 23, 2023 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Jan 23, 2023 at 1:36 PM Peter Smith <smithpb2250@gmail.com>\r\n> wrote:\r\n> >\r\n> > Here are my review comments for v19-0001.\r\n> >\r\n> ...\r\n> >\r\n> > 5. parse_subscription_options\r\n> >\r\n> > + /*\r\n> > + * The combination of parallel streaming mode and min_apply_delay is\r\n> > + not\r\n> > + * allowed. The subscriber in the parallel streaming mode applies\r\n> > + each\r\n> > + * stream on arrival without the time of commit/prepare. So, the\r\n> > + * subscriber needs to depend on the arrival time of the stream in\r\n> > + this\r\n> > + * case, if we apply the time-delayed feature for such transactions.\r\n> > + Then\r\n> > + * there is a possibility where some unnecessary delay will be added\r\n> > + on\r\n> > + * the subscriber by network communication break between nodes or\r\n> > + other\r\n> > + * heavy work load on the publisher. On the other hand, applying the\r\n> > + delay\r\n> > + * at the end of transaction with parallel apply also can cause\r\n> > + issues of\r\n> > + * used resource bloat and locks kept in open for a long time. Thus,\r\n> > + those\r\n> > + * features can't work together.\r\n> > + */\r\n> >\r\n> > IMO some re-wording might be warranted here. I am not sure quite how\r\n> > to do it. Perhaps like below?\r\n> >\r\n> > SUGGESTION\r\n> >\r\n> > The combination of parallel streaming mode and min_apply_delay is not\r\n> allowed.\r\n> >\r\n> > Here are some reasons why these features are incompatible:\r\n> > a. In the parallel streaming mode the subscriber applies each stream\r\n> > on arrival without knowledge of the commit/prepare time. This means we\r\n> > cannot calculate the underlying network/decoding lag between publisher\r\n> > and subscriber, and so always waiting for the full 'min_apply_delay'\r\n> > period might include unnecessary delay.\r\n> > b. 
If we apply the delay at the end of the transaction of the parallel\r\n> > apply then that would cause issues related to resource bloat and locks\r\n> > being held for a long time.\r\n> >\r\n> > ~~~\r\n> >\r\n> \r\n> How about something like:\r\n> The combination of parallel streaming mode and min_apply_delay is not\r\n> allowed. This is because we start applying the transaction stream as soon as\r\n> the first change arrives without knowing the transaction's prepare/commit time.\r\n> This means we cannot calculate the underlying network/decoding lag between\r\n> publisher and subscriber, and so always waiting for the full 'min_apply_delay'\r\n> period might include unnecessary delay.\r\n> \r\n> The other possibility is to apply the delay at the end of the parallel apply\r\n> transaction but that would cause issues related to resource bloat and locks\r\n> being held for a long time.\r\nThank you for providing a good description ! Adopted.\r\nThe latest patch can be seen in [1].\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373DC1881F382B4703F26E0EDC99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 24 Jan 2023 14:03:09 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, January 23, 2023 5:07 PM Peter Smith <smithpb2250@gmail.com> wrote:\r\n> Here are my review comments for v19-0001.\r\nThanks for your review !\r\n\r\n> \r\n> ======\r\n> Commit message\r\n> \r\n> 1.\r\n> The combination of parallel streaming mode and min_apply_delay is not\r\n> allowed. The subscriber in the parallel streaming mode applies each stream on\r\n> arrival without the time of commit/prepare. So, the subscriber needs to depend\r\n> on the arrival time of the stream in this case, if we apply the time-delayed\r\n> feature for such transactions. Then there is a possibility where some\r\n> unnecessary delay will be added on the subscriber by network communication\r\n> break between nodes or other heavy work load on the publisher. On the other\r\n> hand, applying the delay at the end of transaction with parallel apply also can\r\n> cause issues of used resource bloat and locks kept in open for a long time.\r\n> Thus, those features can't work together.\r\n> ~\r\n> \r\n> I think the above is just cut/paste from a code comment within\r\n> subscriptioncmds.c. See review comments #5 below -- so if the code is\r\n> changed then this commit message should also change to match it.\r\nNow, updated this. Kindly have a look at the latest patch in [1].\r\n\r\n\r\n> \r\n> ======\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 2.\r\n> + <varlistentry>\r\n> + <term><literal>min_apply_delay</literal>\r\n> (<type>integer</type>)</term>\r\n> + <listitem>\r\n> + <para>\r\n> + By default, the subscriber applies changes as soon as possible.\r\n> This\r\n> + parameter allows the user to delay the application of changes by a\r\n> + given time interval. If the value is specified without units, it is\r\n> + taken as milliseconds. The default is zero (no delay).\r\n> + </para>\r\n> \r\n> 2a.\r\n> The pgdocs says this is an integer default to “ms” unit. Also, the example on\r\n> this same page shows it is set to '4h'. 
But I did not see any mention of what\r\n> other units are available to the user. Maybe other time units should be\r\n> mentioned here, or maybe a link should be given to the section “20.1.1.\r\n> Parameter Names and Values\".\r\nAdded.\r\n\r\n> ~\r\n> \r\n> 2b.\r\n> Previously the word \"interval\" was deliberately used because this parameter\r\n> had interval support. But maybe now it should be changed so it is not\r\n> misleading.\r\n> \r\n> \"a given time interval\" --> \"a given time period\" ??\r\nFixed.\r\n\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3. Forward declare\r\n> \r\n> +static int defGetMinApplyDelay(DefElem *def);\r\n> \r\n> If the new function is implemented as static near the top of this source file then\r\n> this forward declare would not even be necessary, right?\r\nThis declaration has been kept as discussed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 4. parse_subscription_options\r\n> \r\n> @@ -324,6 +328,12 @@ parse_subscription_options(ParseState *pstate, List\r\n> *stmt_options,\r\n> opts->specified_opts |= SUBOPT_LSN;\r\n> opts->lsn = lsn;\r\n> }\r\n> + else if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n> + strcmp(defel->defname, \"min_apply_delay\") == 0) {\r\n> + opts->specified_opts |= SUBOPT_MIN_APPLY_DELAY; min_apply_delay =\r\n> + opts->defGetMinApplyDelay(defel);\r\n> + }\r\n> \r\n> Should this code fragment be calling errorConflictingDefElem so it will report\r\n> an error if the same min_apply_delay parameter is redundantly repeated?\r\n> (IIUC, this appears to be the code pattern for other parameters nearby).\r\nAdded.\r\n\r\n\r\n> ~~~\r\n> \r\n> 5. parse_subscription_options\r\n> \r\n> + /*\r\n> + * The combination of parallel streaming mode and min_apply_delay is\r\n> + not\r\n> + * allowed. The subscriber in the parallel streaming mode applies each\r\n> + * stream on arrival without the time of commit/prepare. 
So, the\r\n> + * subscriber needs to depend on the arrival time of the stream in this\r\n> + * case, if we apply the time-delayed feature for such transactions.\r\n> + Then\r\n> + * there is a possibility where some unnecessary delay will be added on\r\n> + * the subscriber by network communication break between nodes or other\r\n> + * heavy work load on the publisher. On the other hand, applying the\r\n> + delay\r\n> + * at the end of transaction with parallel apply also can cause issues\r\n> + of\r\n> + * used resource bloat and locks kept in open for a long time. Thus,\r\n> + those\r\n> + * features can't work together.\r\n> + */\r\n> \r\n> IMO some re-wording might be warranted here. I am not sure quite how to do it.\r\n> Perhaps like below?\r\n> \r\n> SUGGESTION\r\n> \r\n> The combination of parallel streaming mode and min_apply_delay is not\r\n> allowed.\r\n> \r\n> Here are some reasons why these features are incompatible:\r\n> a. In the parallel streaming mode the subscriber applies each stream on arrival\r\n> without knowledge of the commit/prepare time. This means we cannot\r\n> calculate the underlying network/decoding lag between publisher and\r\n> subscriber, and so always waiting for the full 'min_apply_delay'\r\n> period might include unnecessary delay.\r\n> b. If we apply the delay at the end of the transaction of the parallel apply then\r\n> that would cause issues related to resource bloat and locks being held for a\r\n> long time.\r\nNow, this has been changed to the one suggested by Amit-san.\r\nThanks for your help.\r\n\r\n\r\n> ~~~\r\n> \r\n> 6. defGetMinApplyDelay\r\n> \r\n> +\r\n> +\r\n> +/*\r\n> + * Extract the min_apply_delay mode value from a DefElem. 
This is very\r\n> +similar\r\n> + * to PGC_INT case of parse_and_validate_value(), because\r\n> +min_apply_delay\r\n> + * accepts the same string as recovery_min_apply_delay.\r\n> + */\r\n> +int\r\n> +defGetMinApplyDelay(DefElem *def)\r\n> \r\n> 6a.\r\n> \"same string\" -> \"same parameter format\" ??\r\nFixed.\r\n\r\n> ~\r\n> \r\n> 6b.\r\n> I thought this function should be implemented as static and located at the top\r\n> of the subscriptioncmds.c source file.\r\nMade it static but didn't change the place, as Amit-san mentioned.\r\n\r\n> ======\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 7. maybe_delay_apply\r\n> \r\n> +static void maybe_delay_apply(TransactionId xid, TimestampTz\r\n> +finish_ts);\r\n> \r\n> Is there a reason why this is here? AFAIK the static implementation precedes\r\n> any usage so I doubt this forward declaration is required.\r\nRemoved.\r\n\r\n\r\n> ~~~\r\n> \r\n> 8. send_feedback\r\n> \r\n> @@ -3775,11 +3912,12 @@ send_feedback(XLogRecPtr recvpos, bool force,\r\n> bool requestReply)\r\n> pq_sendint64(reply_message, now); /* sendTime */\r\n> pq_sendbyte(reply_message, requestReply); /* replyRequested */\r\n> \r\n> - elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X,\r\n> flush %X/%X\",\r\n> + elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write\r\n> %X/%X, flush %X/%X in-delayed: %d\",\r\n> force,\r\n> LSN_FORMAT_ARGS(recvpos),\r\n> LSN_FORMAT_ARGS(writepos),\r\n> - LSN_FORMAT_ARGS(flushpos));\r\n> + LSN_FORMAT_ARGS(flushpos),\r\n> + in_delayed_apply);\r\n> \r\n> Wondering if it is better to write this as:\r\n> \"sending feedback (force %d, in_delayed_apply %d) to recv %X/%X,\r\n> write %X/%X, flush %X/%X\"\r\nAdopted and merged with the modification Euler-san provided.\r\n\r\n\r\n> ~\r\n> \r\n> 10. Add new tests?\r\n> \r\n> Should there be other tests just to verify different units (like 'd', 'h', 'min') are\r\n> working OK?\r\nNo need. 
The current subscription.sql checks\r\nfor the \"invalid value for parameter...\" error message, which ensures we call\r\ndefGetMinApplyDelay(). Additionally, we have a test of one unit 'd'\r\nfor the unit iteration loop in convert_to_base_unit().\r\nSo, the current test sets should suffice.\r\n\r\n\r\n> ======\r\n> src/test/subscription/t/032_apply_delay.pl\r\n> \r\n> 11.\r\n> +# Confirm the time-delayed replication has been effective from the\r\n> +server log # message where the apply worker emits for applying delay.\r\n> +Moreover, verifies # that the current worker's delayed time is\r\n> +sufficiently bigger than the # expected value, in order to check any update of\r\n> the min_apply_delay.\r\n> +sub check_apply_delay_log\r\n> \r\n> \"the current worker's delayed time...\" --> \"the current worker's remaining wait\r\n> time...\" ??\r\nFixed.\r\n\r\n\r\n> ~~~\r\n> \r\n> 12.\r\n> + # Get the delay time from the server log my $contents =\r\n> + slurp_file($node_subscriber->logfile, $offset);\r\n> \r\n> \"Get the delay time....\" --> \"Get the remaining wait time...\"\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 13.\r\n> +# Create a subscription that applies the trasaction after 50\r\n> +milliseconds delay $node_subscriber->safe_psql('postgres',\r\n> + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\r\n> application_name=$appname' PUBLICATION tap_pub WITH (copy_data = off,\r\n> min_apply_delay = '50ms', streaming = 'on')\"\r\n> +);\r\n> \r\n> 13a.\r\n> typo: \"trasaction\"\r\nFixed.\r\n\r\n\r\n> ~\r\n> \r\n> 13b\r\n> 50ms seems an extremely short time – How do you even know if this is testing\r\n> anything related to the time delay? 
You may just be detecting the normal lag\r\n> between publisher and subscriber without time delay having much to do with\r\n> anything.\r\nThe wait time has been updated to 1 second now.\r\nAlso, the TAP tests now search for the logs emitted by the apply worker.\r\nThe path to emit the log is in maybe_apply_delay and\r\nit writes the log only if \"diffms\" is bigger than zero,\r\nwhich invokes the wait. So, this will ensure we use the feature\r\nby this flow.\r\n\r\n\r\n> ~\r\n> \r\n> 14.\r\n> \r\n> +# Note that we cannot call check_apply_delay_log() here because there\r\n> +is a # possibility that the delay is skipped. The event happens when\r\n> +the WAL # replication between publisher and subscriber is delayed due\r\n> +to a mechanical # problem. The log output will be checked later - substantial\r\n> delay-time case.\r\n> +\r\n> +# Verify that the subscriber lags the publisher by at least 50\r\n> +milliseconds check_apply_delay_time($node_publisher, $node_subscriber,\r\n> +'2', '0.05');\r\n> \r\n> 14a.\r\n> \"The event happens...\" ??\r\n> \r\n> Did you mean \"This might happen if the WAL...\"\r\nThis part has been removed.\r\n\r\n\r\n> ~\r\n> \r\n> 14b.\r\n> The log output will be checked later - substantial delay-time case.\r\n> \r\n> I think that needs re-wording to clarify.\r\n> e.g1. you have nothing called a \"substantial delay-time\" case.\r\n> e.g2. the word \"later\" confused me. Originally, I thought you meant it is not\r\n> tested yet but that you will check it \"later\", but now IIUC you are just referring\r\n> to the \"1 day 5 minutes\" test that comes below in this location TAP file (??)\r\nAlso, removed.\r\n\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373DC1881F382B4703F26E0EDC99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 24 Jan 2023 14:22:58 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tuesday, January 24, 2023 8:32 AM Euler Taveira <euler@eulerto.com> wrote:\n> Good to know that you keep improving this patch. I have a few suggestions that\n> were easier to provide a patch on top of your latest patch than to provide an\n> inline suggestions.\nThanks for your review ! We basically adopted your suggestions.\n\n\n> There are a few documentation polishing. Let me comment some of them above.\n> \n> - The length of time (ms) to delay the application of changes.\n> + Total time spent delaying the application of changes, in milliseconds\n> \n> I don't remember if I suggested this description for catalog but IMO the\n> suggestion reads better for me.\nAdopted the above change.\n\n\n> - For time-delayed logical replication (i.e. when the subscription is\n> - created with parameter min_apply_delay > 0), the apply worker sends a\n> - Standby Status Update message to the publisher with a period of\n> - <literal>wal_receiver_status_interval</literal>. Make sure to set\n> - <literal>wal_receiver_status_interval</literal> less than the\n> - <literal>wal_sender_timeout</literal> on the publisher, otherwise, the\n> - walsender will repeatedly terminate due to the timeout errors. If\n> - <literal>wal_receiver_status_interval</literal> is set to zero, the apply\n> - worker doesn't send any feedback messages during the subscriber's\n> - <literal>min_apply_delay</literal> period. See\n> - <xref linkend=\"sql-createsubscription\"/> for details.\n> + For time-delayed logical replication, the apply worker sends a feedback\n> + message to the publisher every\n> + <varname>wal_receiver_status_interval</varname> milliseconds. Make sure\n> + to set <varname>wal_receiver_status_interval</varname> less than the\n> + <varname>wal_sender_timeout</varname> on the publisher, otherwise, the\n> + <literal>walsender</literal> will repeatedly terminate due to timeout\n> + error. 
If <varname>wal_receiver_status_interval</varname> is set to\n> + zero, the apply worker doesn't send any feedback messages during the\n> + <literal>min_apply_delay</literal> interval.\n> \n> I removed the parenthesis explanation about time-delayed logical replication.\n> If you are reading the documentation and does not know what it means you should\n> (a) read the logical replication chapter or (b) check the glossary (maybe a new\n> entry should be added). I also removed the Standby status Update message but it\n> is a low level detail; let's refer to it as feedback message as the other\n> sentences do. I changed \"literal\" to \"varname\" that's the correct tag for\n> parameters. I replace \"period\" with \"interval\" that was the previous\n> terminology. IMO we should be uniform, use one or the other.\nAdopted.\n\nAlso, I added a glossary entry for time-delayed replication (one\napplicable to both physical replication and logical replication).\nPlus, I unified the term \"interval\" into \"period\", because it clarifies the type for this feature.\nI think this is better.\n> - The subscriber replication can be instructed to lag behind the publisher\n> - side changes by specifying the <literal>min_apply_delay</literal>\n> - subscription parameter. 
See <xref linkend=\"sql-createsubscription\"/> for\n> - details.\n> + A logical replication subscription can delay the application of changes by\n> + specifying the <literal>min_apply_delay</literal> subscription parameter.\n> + See <xref linkend=\"sql-createsubscription\"/> for details.\n> \n> This feature refers to a specific subscription, hence, \"logical replication\n> subscription\" instead of \"subscriber replication\".\nAdopted.\n\n> + if (IsSet(opts->specified_opts, SUBOPT_MIN_APPLY_DELAY))\n> + errorConflictingDefElem(defel, pstate);\n> +\n> \n> Peter S referred to this missing piece of code too.\nAdded.\n\n\n> -int\n> +static int\n> defGetMinApplyDelay(DefElem *def)\n> {\n> \n> It seems you forgot static keyword.\nFixed.\n\n\n> - elog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay = %lld ms, Remaining wait time: %ld ms\",\n> - xid, (long long) MySubscription->minapplydelay, diffms);\n> + elog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay = \" INT64_FORMAT \" ms, remaining wait time: %ld ms\",\n> + xid, MySubscription->minapplydelay, diffms);\n> int64 should use format modifier INT64_FORMAT.\nFixed.\n\n\n> - (long) wal_receiver_status_interval * 1000,\n> + wal_receiver_status_interval * 1000L,\n> \n> Cast is not required. I added a suffix to the constant.\nFixed.\n\n\n> - elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X in-delayed: %d\",\n> + elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X, apply delay: %s\",\n> force,\n> LSN_FORMAT_ARGS(recvpos),\n> LSN_FORMAT_ARGS(writepos),\n> LSN_FORMAT_ARGS(flushpos),\n> - in_delayed_apply);\n> + in_delayed_apply? \"yes\" : \"no\");\n> \n> It is better to use a string to represent the yes/no option.\nFixed.\n\n\n> - gettext_noop(\"Min apply delay (ms)\"));\n> + gettext_noop(\"Min apply delay\"));\n> \n> I don't know if it was discussed but we don't add units to headers. 
When I\n> think about this parameter representation (internal and external), I decided to\n> use the previous code because it provides a unit for external representation. I\n> understand that using the same representation as recovery_min_apply_delay is\n> good but the current code does not handle the external representation\n> accordingly. (recovery_min_apply_delay uses the GUC machinery to adds the unit\n> but for min_apply_delay, it doesn't).\nAdopted.\n\n\n> # Setup for streaming case\n> -$node_publisher->append_conf('postgres.conf',\n> +$node_publisher->append_conf('postgresql.conf',\n> 'logical_decoding_mode = immediate');\n> $node_publisher->reload;\n> \n> Fix configuration file name.\nFixed.\n\n\n> Maybe tests should do a better job. I think check_apply_delay_time is fragile\n> because it does not guarantee that time is not shifted. Time-delayed\n> replication is a subscriber feature and to check its correctness it should\n> check the logs.\n> \n> # Note that we cannot call check_apply_delay_log() here because there is a\n> # possibility that the delay is skipped. The event happens when the WAL\n> # replication between publisher and subscriber is delayed due to a mechanical\n> # problem. The log output will be checked later - substantial delay-time case.\n> \n> If you might not use the logs for it, it should adjust the min_apply_delay, no?\nYes. 
Adjusted.\n\n\n> It does not exercise the min_apply_delay vs parallel streaming mode.\n> \n> + /*\n> + * The combination of parallel streaming mode and\n> + * min_apply_delay is not allowed.\n> + */\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n> + if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && opts.min_apply_delay > 0) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0))\n> + ereport(ERROR,\n> + errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),\n> + errmsg(\"cannot enable %s mode for subscription with %s\",\n> + \"streaming = parallel\", \"min_apply_delay\"));\n> +\n> \n> Is this code correct? I also didn't like this message. \"cannot enable streaming\n> = parallel mode for subscription with min_apply_delay\" is far from a good error\n> message. How about refer parallelism to \"parallel streaming mode\".\nYes. opts is the input for the ALTER command and the sub object is the existing definition.\nWe need to check such combinations; for example, when streaming is set to parallel\nand min_apply_delay is also specified, min_apply_delay must not be bigger than 0.\nBesides, adopted your suggestion to improve the comments.\n\n\nAttached the patch in [1]. Kindly have a look at it.\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373DC1881F382B4703F26E0EDC99%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:57:22 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 11:45:36 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Personally, I would prefer the above LOGs to be in reverse order as it\n> doesn't make much sense to me to first say that we are skipping\n> changes and then say the transaction is delayed. What do you think?\n\nIn the first place, I misunderstood maybe_start_skipping_changes(),\nwhich doesn't actually skip changes. So... sorry for the noise.\n\nFor the record, I agree that the current order is right.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 09:11:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "In short, I'd like to propose renaming the parameter in_delayed_apply\nof send_feedback to \"has_unprocessed_change\".\n\nAt Tue, 24 Jan 2023 12:27:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > send_feedback():\n> > + * If the subscriber side apply is delayed (because of time-delayed\n> > + * replication) then do not tell the publisher that the received latest\n> > + * LSN is already applied and flushed, otherwise, it leads to the\n> > + * publisher side making a wrong assumption of logical replication\n> > + * progress. Instead, we just send a feedback message to avoid a publisher\n> > + * timeout during the delay.\n> > */\n> > - if (!have_pending_txes)\n> > + if (!have_pending_txes && !in_delayed_apply)\n> > flushpos = writepos = recvpos;\n> >\n> > Honestly I don't like this wart. The reason for this is the function\n> > assumes recvpos = applypos but we actually call it while holding\n> > unapplied changes, that is, applypos < recvpos.\n> >\n> > Couldn't we maintain an additional static variable \"last_applied\"\n> > along with last_received?\n> >\n> \n> It won't be easy to maintain the meaning of last_applied because there\n> are cases where we don't apply the change directly. For example, in\n> case of streaming xacts, we will just keep writing it to the file,\n> now, say, due to some reason, we have to send the feedback, then it\n> will not allow you to update the latest write locations. This would\n> then become different then what we are doing without the patch.\n> Another point to think about is that we also need to keep the variable\n> updated for keep-alive ('k') messages even though we don't apply\n> anything in that case. Still, other cases to consider are where we\n> have mix of streaming and non-streaming transactions.\n\nYeah. 
Even though I named it as \"last_applied\", its objective is to\nhave get_flush_position returning the correct have_pending_txes\nwithout a hint from callers, that is, \"let g_f_position know if\nstore_flush_position has been called with the last received data\".\n\nAnyway I tried that but didn't find a clean and simple way. However,\nwhile on it, I realized what in the code had confused me.\n\n+static void send_feedback(XLogRecPtr recvpos, bool force, bool requestReply,\n+\t\t\t\t\t\t bool in_delayed_apply);\n\nThe name \"in_delayed_apply\" doesn't give me an idea of what\nthe function should do for it. If it is named \"has_unprocessed_change\",\nI think it makes sense that send_feedback should think there may be an\noutstanding transaction that is not known to the function.\n\n\nSo, my conclusion here is I'd like to propose changing the parameter\nname to \"has_unapplied_change\".\n\n\n> > In this case the condition cited above\n> > would be as follows and in_delayed_apply will become unnecessary.\n> >\n> > + if (!have_pending_txes && last_received == last_applied)\n> >\n> > The function is a static function and always called with a variable\n> > last_received that has the same scope with the function, as the first\n\nSorry for the noise, I misread it. Maybe I took the \"function-scoped\"\nvariable as file-scoped. Thus the discussion is false.\n\n> > parameter. Thus we can remove the first parameter then let the\n> > function directly look at the both two varaibles instead.\n> >\n> \n> I think this is true without this patch, so why that has not been\n> followed in the first place? One comment, I see in this regard is as\n> below:\n> \n> /* It's legal to not pass a recvpos */\n> if (recvpos < last_recvpos)\n> recvpos = last_recvpos;\n\nSorry. I don't understand this. It is just a part of the ratchet\nmechanism for the last received lsn to report.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 10:17:18 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 14:22:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Jan 24, 2023 at 12:44 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Tue, Jan 24, 2023 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 24, 2023 at 8:15 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > > Attached the updated patch v19.\n> > > >\n> > > > + maybe_delay_apply(TransactionId xid, TimestampTz finish_ts)\n> > > >\n> > > > I look this spelling strange. How about maybe_apply_delay()?\n> > > >\n> > >\n> > > +1.\n> >\n> > It depends on how you read it. I read it like this:\n> >\n> > maybe_delay_apply === means \"maybe delay [the] apply\"\n> > (which is exactly what the function does)\n> >\n> > versus\n> >\n> > maybe_apply_delay === means \"maybe [the] apply [needs a] delay\"\n> > (which is also correct, but it seemed a more awkward way to say it IMO)\n> >\n> \n> This matches more with GUC and all other usages of variables in the\n> patch. So, I still prefer the second one.\n\nI read it as \"maybe apply [the] delay [to something suggested by the\ncontext]\". If we go the first way, I will name it as\n\"maybe_delay_apply_change\" or something that has an extra word.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 10:30:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Sorry for bothering you with this.\n\nAt Tue, 24 Jan 2023 10:12:40 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> > > Couldn't we maintain an additional static variable \"last_applied\"\n> > > along with last_received?\n> > >\n> > \n> > It won't be easy to maintain the meaning of last_applied because there\n> > are cases where we don't apply the change directly. For example, in\n> > case of streaming xacts, we will just keep writing it to the file,\n> > now, say, due to some reason, we have to send the feedback, then it\n> > will not allow you to update the latest write locations. This would\n> > then become different then what we are doing without the patch.\n> > Another point to think about is that we also need to keep the variable\n> > updated for keep-alive ('k') messages even though we don't apply\n> > anything in that case. Still, other cases to consider are where we\n> > have mix of streaming and non-streaming transactions.\n> \n> I have tried to implement that, but it might be difficult because of a corner\n> case related with the initial data sync.\n> \n> First of all, I have made last_applied to update when\n> \n> * transactions are committed, prepared, or aborted\n> * apply worker receives keepalive message.\n\nYeah, I vaguely thought that it is enough that the update happens just\nbefore existing send_feedback() calls. But it turned out to introduce\nanother unprincipledness.\n\n> I thought during the initial data sync, we must not update the last applied\n> triggered by keepalive messages, so following lines were added just after\n> updating last_received.\n> \n> ```\n> + if (last_applied < end_lsn && AllTablesyncsReady())\n> + last_applied = end_lsn;\n> ```\n\nMaybe the name \"last_applied\" confused you. As I mentioned in\nanother message, the variable points to the remote LSN of the last\n\"processed\" 'w/k' message.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 10:45:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 5:49 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> Attached the patch v20 that has incorporated all comments so far.\n> Kindly have a look at the attached patch.\n>\n>\n> Best Regards,\n> Takamichi Osumi\n>\n\nThank you for the patch. My previous comments are addressed. Tested it and\nit looks good. Logging is also fine now.\n\nJust one comment: in the summary, we see:\nIf the subscription sets the min_apply_delay parameter, the logical\nreplication worker will delay the transaction commit for\nmin_apply_delay milliseconds.\n\nIs it better to write \"delay the transaction apply\" instead of \"delay\nthe transaction commit\", just to be consistent, as we do not actually\ndelay the commit for regular transactions?\n\nthanks\nShveta\n\n\n",
"msg_date": "Wed, 25 Jan 2023 10:32:09 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, \n\nOn Wednesday, January 25, 2023 2:02 PM shveta malik <shveta.malik@gmail.com> wrote:\n> On Tue, Jan 24, 2023 at 5:49 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> >\n> > Attached the patch v20 that has incorporated all comments so far.\n> > Kindly have a look at the attached patch.\n> Thank You for patch. My previous comments are addressed. Tested it and it\n> looks good. Logging is also fine now.\n> \n> Just one comment, in summary, we see :\n> If the subscription sets min_apply_delay parameter, the logical replication\n> worker will delay the transaction commit for min_apply_delay milliseconds.\n> \n> Is it better to write \"delay the transaction apply\" instead of \"delay the\n> transaction commit\" just to be consistent as we do not actually delay the\n> commit for regular transactions.\nThank you for your review !\n\nAgreed. Your description looks better.\nAttached the updated patch v21.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Wed, 25 Jan 2023 05:44:53 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Horiguchi-san\n\n\nThank you for checking the patch !\nOn Wednesday, January 25, 2023 10:17 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> In short, I'd like to propose renaming the parameter in_delayed_apply of\n> send_feedback to \"has_unprocessed_change\".\n> \n> At Tue, 24 Jan 2023 12:27:58 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > > send_feedback():\n> > > + * If the subscriber side apply is delayed (because of\n> time-delayed\n> > > + * replication) then do not tell the publisher that the received\n> latest\n> > > + * LSN is already applied and flushed, otherwise, it leads to the\n> > > + * publisher side making a wrong assumption of logical\n> replication\n> > > + * progress. Instead, we just send a feedback message to avoid a\n> publisher\n> > > + * timeout during the delay.\n> > > */\n> > > - if (!have_pending_txes)\n> > > + if (!have_pending_txes && !in_delayed_apply)\n> > > flushpos = writepos = recvpos;\n> > >\n> > > Honestly I don't like this wart. The reason for this is the function\n> > > assumes recvpos = applypos but we actually call it while holding\n> > > unapplied changes, that is, applypos < recvpos.\n> > >\n> > > Couldn't we maintain an additional static variable \"last_applied\"\n> > > along with last_received?\n> > >\n> >\n> > It won't be easy to maintain the meaning of last_applied because there\n> > are cases where we don't apply the change directly. For example, in\n> > case of streaming xacts, we will just keep writing it to the file,\n> > now, say, due to some reason, we have to send the feedback, then it\n> > will not allow you to update the latest write locations. This would\n> > then become different then what we are doing without the patch.\n> > Another point to think about is that we also need to keep the variable\n> > updated for keep-alive ('k') messages even though we don't apply\n> > anything in that case. 
Still, other cases to consider are where we\n> > have mix of streaming and non-streaming transactions.\n> \n> Yeah. Even though I named it as \"last_applied\", its objective is to have\n> get_flush_position returning the correct have_pending_txes without a hint\n> from callers, that is, \"let g_f_position know if store_flush_position has been\n> called with the last received data\".\n> \n> Anyway I tried that but didn't find a clean and simple way. However, while on it,\n> I realized what in the code had confused me.\n> \n> +static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> requestReply,\n> +\t\t\t\t\t\t bool in_delayed_apply);\n> \n> The name \"in_delayed_apply\" doesn't give me an idea of what the\n> function should do for it. If it is named \"has_unprocessed_change\", I think it\n> makes sense that send_feedback should think there may be an outstanding\n> transaction that is not known to the function.\n> \n> \n> So, my conclusion here is I'd like to propose changing the parameter name to\n> \"has_unapplied_change\".\nRenamed the variable to \"has_unprocessed_change\".\nAlso, removed the first argument of send_feedback(), which isn't necessary now.\nKindly have a look at the patch shared in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373193B4331B7EB6276F682EDCE9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 25 Jan 2023 05:53:23 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 24 Jan 2023 12:19:04 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in \n> Attached the patch v20 that has incorporated all comments so far.\n\nThanks! I looked through the documentation part.\n\n\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subminapplydelay</structfield> <type>int8</type>\n+ </para>\n+ <para>\n+ Total time spent delaying the application of changes, in milliseconds.\n+ </para></entry>\n\nI was confused because it reads as if this column shows the summarized\nactual waiting time caused by min_apply_delay. IIUC it actually shows\nthe min_apply_delay setting for the subscription. Thus shouldn't it be\nsomething like this?\n\n\"The minimum amount of time to delay applying changes, in milliseconds\"\n\nAnd it might be better to mention the corresponding subscription parameter.\n\n\n+ error. If <varname>wal_receiver_status_interval</varname> is set to\n+ zero, the apply worker doesn't send any feedback messages during the\n+ <literal>min_apply_delay</literal> period.\n\nIt took me a bit longer to understand what this sentence means. I'd\nlike to suggest something like the following.\n\n\"Since no status-update messages are sent while delaying, note that\nwal_receiver_status_interval is the only source of keepalive messages\nduring that period.\"\n\n+ <para>\n+ A logical replication subscription can delay the application of changes by\n+ specifying the <literal>min_apply_delay</literal> subscription parameter.\n+ See <xref linkend=\"sql-createsubscription\"/> for details.\n+ </para>\n\nI'm not sure \"logical replication subscription\" is a common term.\nDoesn't just \"subscription\" mean the same, especially in that context?\n(Note that 31.2 starts with \"A subscription is the downstream..\").\n\n\n+ Any delay occurs only on WAL records for transaction begins after all\n+ initial table synchronization has finished. 
The delay is calculated\n\nThere are no \"transaction begin\" WAL records. Maybe it is \"logical\nreplication transaction begin message\". The timestamp is of \"commit\ntime\". (I took \"transaction begins\" as a noun, but that might be\nwrong.)\n\n\n+ may reduce the actual wait time. It is also possible that the overhead\n+ already exceeds the requested <literal>min_apply_delay</literal> value,\n+ in which case no additional wait is necessary. If the system clocks\n\nI'm not sure it is right to say \"necessary\" here. IMHO it might be\nbetter to be \"in which case no delay is applied\".\n\n\n+ in which case no additional wait is necessary. If the system clocks\n+ on publisher and subscriber are not synchronized, this may lead to\n+ apply changes earlier than expected, but this is not a major issue\n+ because this parameter is typically much larger than the time\n+ deviations between servers. Note that if this parameter is set to a\n\nThis doesn't seem to fit our documentation. It is not our business\nwhether a certain amount of deviation is critical or not. How about\nsomething like the following?\n\n\"Note that the delay is measured between the timestamp assigned by\npublisher and the system clock on subscriber. You need to manage the\nsystem clocks to be in sync so that the delay works properly.\"\n\n+ Delaying the replication can mean there is a much longer time\n+ between making a change on the publisher, and that change being\n+ committed on the subscriber. This can impact the performance of\n+ synchronous replication. See <xref linkend=\"guc-synchronous-commit\"/>\n+ parameter.\n\nDo we need the \"can\" in \"Delaying the replication can mean\"? If we\nwant to say, it might be \"Delaying the replication means there can be\na much longer...\"?\n\n\n+ <para>\n+ Create a subscription to a remote server that replicates tables in\n+ the <literal>mypub</literal> publication and starts replicating immediately\n+ on commit. Pre-existing data is not copied. 
The application of changes is\n+ delayed by 4 hours.\n+<programlisting>\n+CREATE SUBSCRIPTION mysub\n+ CONNECTION 'host=192.0.2.4 port=5432 user=foo dbname=foodb'\n+ PUBLICATION mypub\n+ WITH (copy_data = false, min_apply_delay = '4h');\n+</programlisting></para>\n\nI'm not sure we need this additional example. We already have two\nexmaples one of which differs from the above only by actual values for\nPUBLICATION and WITH clauses.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 15:26:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 11:23 AM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n>\n> Thank you for checking the patch !\n> On Wednesday, January 25, 2023 10:17 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > In short, I'd like to propose renaming the parameter in_delayed_apply of\n> > send_feedback to \"has_unprocessed_change\".\n> >\n> > At Tue, 24 Jan 2023 12:27:58 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> > wrote in\n> > > > send_feedback():\n> > > > + * If the subscriber side apply is delayed (because of\n> > time-delayed\n> > > > + * replication) then do not tell the publisher that the received\n> > latest\n> > > > + * LSN is already applied and flushed, otherwise, it leads to the\n> > > > + * publisher side making a wrong assumption of logical\n> > replication\n> > > > + * progress. Instead, we just send a feedback message to avoid a\n> > publisher\n> > > > + * timeout during the delay.\n> > > > */\n> > > > - if (!have_pending_txes)\n> > > > + if (!have_pending_txes && !in_delayed_apply)\n> > > > flushpos = writepos = recvpos;\n> > > >\n> > > > Honestly I don't like this wart. The reason for this is the function\n> > > > assumes recvpos = applypos but we actually call it while holding\n> > > > unapplied changes, that is, applypos < recvpos.\n> > > >\n> > > > Couldn't we maintain an additional static variable \"last_applied\"\n> > > > along with last_received?\n> > > >\n> > >\n> > > It won't be easy to maintain the meaning of last_applied because there\n> > > are cases where we don't apply the change directly. For example, in\n> > > case of streaming xacts, we will just keep writing it to the file,\n> > > now, say, due to some reason, we have to send the feedback, then it\n> > > will not allow you to update the latest write locations. 
This would\n> > > then become different then what we are doing without the patch.\n> > > Another point to think about is that we also need to keep the variable\n> > > updated for keep-alive ('k') messages even though we don't apply\n> > > anything in that case. Still, other cases to consider are where we\n> > > have mix of streaming and non-streaming transactions.\n> >\n> > Yeah. Even though I named it as \"last_applied\", its objective is to have\n> > get_flush_position returning the correct have_pending_txes without a hint\n> > from callers, that is, \"let g_f_position know if store_flush_position has been\n> > called with the last received data\".\n> >\n> > Anyway I tried that but didn't find a clean and simple way. However, while on it,\n> > I realized what the code made me confused.\n> >\n> > +static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> > requestReply,\n> > + bool in_delayed_apply);\n> >\n> > The name \"in_delayed_apply\" doesn't donsn't give me an idea of what the\n> > function should do for it. If it is named \"has_unprocessed_change\", I think it\n> > makes sense that send_feedback should think there may be an outstanding\n> > transaction that is not known to the function.\n> >\n> >\n> > So, my conclusion here is I'd like to propose changing the parameter name to\n> > \"has_unapplied_change\".\n> Renamed the variable name to \"has_unprocessed_change\".\n> Also, removed the first argument of the send_feedback() which isn't necessary now.\n>\n\nWhy did you remove the first argument of the send_feedback() when that\nis not added by this patch? If you really think that is an\nimprovement, feel free to propose that as a separate patch.\nPersonally, I don't see a value in it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 Jan 2023 12:24:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 11:57 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 24 Jan 2023 12:19:04 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > Attached the patch v20 that has incorporated all comments so far.\n>\n...\n>\n>\n> + in which case no additional wait is necessary. If the system clocks\n> + on publisher and subscriber are not synchronized, this may lead to\n> + apply changes earlier than expected, but this is not a major issue\n> + because this parameter is typically much larger than the time\n> + deviations between servers. Note that if this parameter is set to a\n>\n> This doesn't seem to fit our documentation. It is not our business\n> whether a certain amount deviation is critical or not. How about\n> somethig like the following?\n>\n\nBut we have a similar description for 'recovery_min_apply_delay' [1].\nSee \"...If the system clocks on primary and standby are not\nsynchronized, this may lead to recovery applying records earlier than\nexpected; but that is not a major issue because useful settings of\nthis parameter are much larger than typical time deviations between\nservers.\"\n\n[1] - https://www.postgresql.org/docs/devel/runtime-config-replication.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 25 Jan 2023 12:30:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 25 Jan 2023 12:30:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Jan 25, 2023 at 11:57 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 24 Jan 2023 12:19:04 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > > Attached the patch v20 that has incorporated all comments so far.\n> >\n> ...\n> >\n> >\n> > + in which case no additional wait is necessary. If the system clocks\n> > + on publisher and subscriber are not synchronized, this may lead to\n> > + apply changes earlier than expected, but this is not a major issue\n> > + because this parameter is typically much larger than the time\n> > + deviations between servers. Note that if this parameter is set to a\n> >\n> > This doesn't seem to fit our documentation. It is not our business\n> > whether a certain amount deviation is critical or not. How about\n> > somethig like the following?\n> >\n> \n> But we have a similar description for 'recovery_min_apply_delay' [1].\n> See \"...If the system clocks on primary and standby are not\n> synchronized, this may lead to recovery applying records earlier than\n> expected; but that is not a major issue because useful settings of\n> this parameter are much larger than typical time deviations between\n> servers.\"\n\nMmmm. I thought that we might be able to gather the description\n(including other common descriptions, if any), but I didn't find an\nappropreate place..\n\nOkay. I agree to the current description. Thanks for the kind\nexplanation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:43:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, January 25, 2023 3:27 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Tue, 24 Jan 2023 12:19:04 +0000, \"Takamichi Osumi (Fujitsu)\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > Attached the patch v20 that has incorporated all comments so far.\n> \n> Thanks! I looked thourgh the documentation part.\nThank you for your review !\n\n\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subminapplydelay</structfield> <type>int8</type>\n> + </para>\n> + <para>\n> + Total time spent delaying the application of changes, in milliseconds.\n> + </para></entry>\n> \n> I was confused becase it reads as this column shows the summarized actual\n> waiting time caused by min_apply_delay. IIUC actually it shows the\n> min_apply_delay setting for the subscription. Thus shouldn't it be something\n> like this?\n> \n> \"The minimum amount of time to delay applying changes, in milliseconds\"\n> And it might be better to mention the corresponding subscription paramter.\nThis description looks much better to me than the past description. Fixed.\nOTOH, other parameters don't mention about its subscription parameters.\nSo, I didn't add the mention.\n\n\n> + error. If <varname>wal_receiver_status_interval</varname> is set to\n> + zero, the apply worker doesn't send any feedback messages during\n> the\n> + <literal>min_apply_delay</literal> period.\n> \n> I took a bit longer time to understand what this sentence means. I'd like to\n> suggest something like the follwoing.\n> \n> \"Since no status-update messages are sent while delaying, note that\n> wal_receiver_status_interval is the only source of keepalive messages during\n> that period.\"\nThe current patch's description is precise and I prefer that.\nI would say \"the only source\" would be confusing to readers.\nHowever, I slightly adjusted the description a bit. 
Could you please check ?\n\n\n> + <para>\n> + A logical replication subscription can delay the application of changes by\n> + specifying the <literal>min_apply_delay</literal> subscription\n> parameter.\n> + See <xref linkend=\"sql-createsubscription\"/> for details.\n> + </para>\n> \n> I'm not sure \"logical replication subscription\" is a common term.\n> Doesn't just \"subscription\" mean the same, especially in that context?\n> (Note that 31.2 starts with \"A subscription is the downstream..\").\nI think you are right. Fixed.\n\n\n> + Any delay occurs only on WAL records for transaction begins after\n> all\n> + initial table synchronization has finished. The delay is\n> + calculated\n> \n> There is no \"transaction begin\" WAL records. Maybe it is \"logical replication\n> transaction begin message\". The timestamp is of \"commit time\". (I took\n> \"transaction begins\" as a noun, but that might be\n> wrong..)\nYeah, we can improve here. But, we need to include not only\n\"commit\" but also \"prepare\" as nuance in this part.\n\nIn short, I think we should change here to mention\n(1) the delay happens after all initial table synchronization\n(2) how delay is applied for non-streaming and streaming transactions in general.\n\nBy the way, WAL timestamp is a word used in the recovery_min_apply_delay.\nSo, I'd like to keep it to make the description more aligned with it,\nuntil there is a better description.\n\nUpdated the doc. I adjusted the commit message according to this fix.\n> \n> + may reduce the actual wait time. It is also possible that the overhead\n> + already exceeds the requested <literal>min_apply_delay</literal>\n> value,\n> + in which case no additional wait is necessary. If the system\n> + clocks\n> \n> I'm not sure it is right to say \"necessary\" here. IMHO it might be better be \"in\n> which case no delay is applied\".\nAgreed. Fixed.\n\n\n> + in which case no additional wait is necessary. 
If the system clocks\n> + on publisher and subscriber are not synchronized, this may lead to\n> + apply changes earlier than expected, but this is not a major issue\n> + because this parameter is typically much larger than the time\n> + deviations between servers. Note that if this parameter is\n> + set to a\n> \n> This doesn't seem to fit our documentation. It is not our business whether a\n> certain amount deviation is critical or not. How about somethig like the\n> following?\n> \n> \"Note that the delay is measured between the timestamp assigned by\n> publisher and the system clock on subscriber. You need to manage the\n> system clocks to be in sync so that the delay works properly.\"\nAs discussed, this is aligned with recovery_min_apply_delay. So, I keep it.\n\n\n> + Delaying the replication can mean there is a much longer time\n> + between making a change on the publisher, and that change\n> being\n> + committed on the subscriber. This can impact the performance\n> of\n> + synchronous replication. See <xref\n> linkend=\"guc-synchronous-commit\"/>\n> + parameter.\n> \n> Do we need the \"can\" in \"Delaying the replication can mean\"? If we want to\n> say, it might be \"Delaying the replication means there can be a much longer...\"?\nThe \"can\" indicates the possibility as the nuance,\nwhile adopting \"means\" in this case indicates \"time delayed LR causes\nthe long time wait always\".\n\nI'm okay with either expression, but\nI think you are right in practice and from\nthe perspective of the purpose of this feature. So, fixed.\n> + <para>\n> + Create a subscription to a remote server that replicates tables in\n> + the <literal>mypub</literal> publication and starts replicating\n> immediately\n> + on commit. Pre-existing data is not copied. 
The application of changes is\n> + delayed by 4 hours.\n> +<programlisting>\n> +CREATE SUBSCRIPTION mysub\n> + CONNECTION 'host=192.0.2.4 port=5432 user=foo dbname=foodb'\n> + PUBLICATION mypub\n> + WITH (copy_data = false, min_apply_delay = '4h');\n> +</programlisting></para>\n> \n> I'm not sure we need this additional example. We already have two exmaples\n> one of which differs from the above only by actual values for PUBLICATION and\n> WITH clauses.\nI thought there was no harm in having this example, but\nwhat you say makes sense. Removed.\n\nAttached the updated v22.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Wed, 25 Jan 2023 14:23:51 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, January 25, 2023 3:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jan 25, 2023 at 11:23 AM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> >\n> > Thank you for checking the patch !\n> > On Wednesday, January 25, 2023 10:17 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > > In short, I'd like to propose renaming the parameter\n> > > in_delayed_apply of send_feedback to \"has_unprocessed_change\".\n> > >\n> > > At Tue, 24 Jan 2023 12:27:58 +0530, Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote in\n> > > > > send_feedback():\n> > > > > + * If the subscriber side apply is delayed (because of\n> > > time-delayed\n> > > > > + * replication) then do not tell the publisher that the\n> > > > > + received\n> > > latest\n> > > > > + * LSN is already applied and flushed, otherwise, it leads to\n> the\n> > > > > + * publisher side making a wrong assumption of logical\n> > > replication\n> > > > > + * progress. Instead, we just send a feedback message to\n> > > > > + avoid a\n> > > publisher\n> > > > > + * timeout during the delay.\n> > > > > */\n> > > > > - if (!have_pending_txes)\n> > > > > + if (!have_pending_txes && !in_delayed_apply)\n> > > > > flushpos = writepos = recvpos;\n> > > > >\n> > > > > Honestly I don't like this wart. The reason for this is the\n> > > > > function assumes recvpos = applypos but we actually call it\n> > > > > while holding unapplied changes, that is, applypos < recvpos.\n> > > > >\n> > > > > Couldn't we maintain an additional static variable \"last_applied\"\n> > > > > along with last_received?\n> > > > >\n> > > >\n> > > > It won't be easy to maintain the meaning of last_applied because\n> > > > there are cases where we don't apply the change directly. 
For\n> > > > example, in case of streaming xacts, we will just keep writing it\n> > > > to the file, now, say, due to some reason, we have to send the\n> > > > feedback, then it will not allow you to update the latest write\n> > > > locations. This would then become different then what we are doing\n> without the patch.\n> > > > Another point to think about is that we also need to keep the\n> > > > variable updated for keep-alive ('k') messages even though we\n> > > > don't apply anything in that case. Still, other cases to consider\n> > > > are where we have mix of streaming and non-streaming transactions.\n> > >\n> > > Yeah. Even though I named it as \"last_applied\", its objective is to\n> > > have get_flush_position returning the correct have_pending_txes\n> > > without a hint from callers, that is, \"let g_f_position know if\n> > > store_flush_position has been called with the last received data\".\n> > >\n> > > Anyway I tried that but didn't find a clean and simple way. However,\n> > > while on it, I realized what the code made me confused.\n> > >\n> > > +static void send_feedback(XLogRecPtr recvpos, bool force, bool\n> > > requestReply,\n> > > + bool\n> > > + in_delayed_apply);\n> > >\n> > > The name \"in_delayed_apply\" doesn't donsn't give me an idea of what\n> > > the function should do for it. If it is named\n> > > \"has_unprocessed_change\", I think it makes sense that send_feedback\n> > > should think there may be an outstanding transaction that is not known to\n> the function.\n> > >\n> > >\n> > > So, my conclusion here is I'd like to propose changing the parameter\n> > > name to \"has_unapplied_change\".\n> > Renamed the variable name to \"has_unprocessed_change\".\n> > Also, removed the first argument of the send_feedback() which isn't\n> necessary now.\n> >\n> \n> Why did you remove the first argument of the send_feedback() when that is not\n> added by this patch? 
If you really think that is an improvement, feel free to\n> propose that as a separate patch.\n> Personally, I don't see a value in it.\nOh, sorry for that. I have made the change back.\nKindly have a look at the v22 shared in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB837305BD31FA317256BC7B1FEDCE9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 25 Jan 2023 14:27:56 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wednesday, January 25, 2023 11:24 PM I wrote:\n> Attached the updated v22.\nHi, \n\nDuring self-review, I noticed some changes are\nrequired for some variable types related to 'min_apply_delay' value,\nso have conducted the adjustment changes for the same.\n\nAdditionally, I made some comments for translator and TAP test better.\nNote that I executed pgindent and pgperltidy for the patch.\n\nNow the updated patch should be more refined.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Fri, 27 Jan 2023 08:09:24 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 1:39 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, January 25, 2023 11:24 PM I wrote:\n> > Attached the updated v22.\n> Hi,\n>\n> During self-review, I noticed some changes are\n> required for some variable types related to 'min_apply_delay' value,\n> so have conducted the adjustment changes for the same.\n>\n\nSo, you have changed min_apply_delay from int64 to int32, but you\nhaven't mentioned the reason for the same? We use 'int' for the\nsimilar parameter recovery_min_apply_delay, so, ideally, it makes\nsense but still better to tell your reason explicitly.\n\nFew comments\n=============\n1.\n@@ -70,6 +70,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n * skipped */\n\n+ int32 subminapplydelay; /* Replication apply delay (ms) */\n+\n NameData subname; /* Name of the subscription */\n\n Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n\nWhy are you placing this after subskiplsn? Earlier it was okay because\nwe want the 64 bit value to be aligned but now, isn't it better to\nkeep it after subowner?\n\n2.\n+\n+ diffms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),\n+ TimestampTzPlusMilliseconds(finish_ts, MySubscription->minapplydelay));\n\nThe above code appears a bit unreadable. Can we store the result of\nTimestampTzPlusMilliseconds() in a separate variable say \"TimestampTz\ndelayUntil;\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Jan 2023 16:30:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Jan 27, 2023 at 1:39 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Wednesday, January 25, 2023 11:24 PM I wrote:\n> > > Attached the updated v22.\n> > Hi,\n> >\n> > During self-review, I noticed some changes are required for some\n> > variable types related to 'min_apply_delay' value, so have conducted\n> > the adjustment changes for the same.\n> >\n> \n> So, you have changed min_apply_delay from int64 to int32, but you haven't\n> mentioned the reason for the same? We use 'int' for the similar parameter\n> recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> reason explicitly.\nYes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\nThis feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\nso should be adjusted to match it.\n\n\n> Few comments\n> =============\n> 1.\n> @@ -70,6 +70,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> BKI_SHARED_RELATION BKI_ROW\n> XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n> * skipped */\n> \n> + int32 subminapplydelay; /* Replication apply delay (ms) */\n> +\n> NameData subname; /* Name of the subscription */\n> \n> Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> \n> Why are you placing this after subskiplsn? Earlier it was okay because we want\n> the 64 bit value to be aligned but now, isn't it better to keep it after subowner?\nMoved it after subowner.\n\n\n> 2.\n> +\n> + diffms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),\n> + TimestampTzPlusMilliseconds(finish_ts,\n> + MySubscription->minapplydelay));\n> \n> The above code appears a bit unreadable. Can we store the result of\n> TimestampTzPlusMilliseconds() in a separate variable say \"TimestampTz\n> delayUntil;\"?\nAgreed. 
Fixed.\n\nAttached the updated patch v24.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Sat, 28 Jan 2023 04:28:29 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in \n> On Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > So, you have changed min_apply_delay from int64 to int32, but you haven't\n> > mentioned the reason for the same? We use 'int' for the similar parameter\n> > recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> > reason explicitly.\n> Yes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\n> This feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\n> so should be adjusted to match it.\n\nINT_MAX can stick out of int32 on some platforms. (I'm not sure where\nthat actually happens, though.) We can use PG_INT32_MAX instead.\n\nIMHO, I think we don't use int as a catalog column and I agree that\nint32 is sufficient since I don't think more than 49 days delay is\npractical. On the other hand, maybe I wouldn't want to use int32 for\nintermediate calculations.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 30 Jan 2023 12:02:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 8:32 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > On Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > So, you have changed min_apply_delay from int64 to int32, but you haven't\n> > > mentioned the reason for the same? We use 'int' for the similar parameter\n> > > recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> > > reason explicitly.\n> > Yes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\n> > This feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\n> > so should be adjusted to match it.\n>\n> INT_MAX can stick out of int32 on some platforms. (I'm not sure where\n> that actually happens, though.) We can use PG_INT32_MAX instead.\n>\n\nBut in other integer GUCs including recovery_min_apply_delay, we use\nINT_MAX, so not sure if it is a good idea to do something different\nhere.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 08:51:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 30 Jan 2023 08:51:05 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Jan 30, 2023 at 8:32 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > > On Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > So, you have changed min_apply_delay from int64 to int32, but you haven't\n> > > > mentioned the reason for the same? We use 'int' for the similar parameter\n> > > > recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> > > > reason explicitly.\n> > > Yes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\n> > > This feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\n> > > so should be adjusted to match it.\n> >\n> > INT_MAX can stick out of int32 on some platforms. (I'm not sure where\n> > that actually happens, though.) We can use PG_INT32_MAX instead.\n> >\n> \n> But in other integer GUCs including recovery_min_apply_delay, we use\n> INT_MAX, so not sure if it is a good idea to do something different\n> here.\n\nThe GUC is not stored in a catalog, but.. oh... it is multiplied by\n1000. So if it is larger than (INT_MAX / 1000), it overflows... If we\nofficially accept that (I don't think great) behavior (even only for\nimpractical values), I don't object further.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 30 Jan 2023 13:13:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 9:43 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 30 Jan 2023 08:51:05 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Mon, Jan 30, 2023 at 8:32 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > > > On Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > So, you have changed min_apply_delay from int64 to int32, but you haven't\n> > > > > mentioned the reason for the same? We use 'int' for the similar parameter\n> > > > > recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> > > > > reason explicitly.\n> > > > Yes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\n> > > > This feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\n> > > > so should be adjusted to match it.\n> > >\n> > > INT_MAX can stick out of int32 on some platforms. (I'm not sure where\n> > > that actually happens, though.) We can use PG_INT32_MAX instead.\n> > >\n> >\n> > But in other integer GUCs including recovery_min_apply_delay, we use\n> > INT_MAX, so not sure if it is a good idea to do something different\n> > here.\n>\n> The GUC is not stored in a catalog, but.. oh... it is multiplied by\n> 1000.\n\nWhich part of the patch you are referring to here? Isn't the check in\nthe function defGetMinApplyDelay() sufficient to ensure that the\n'delay' value stored in the catalog will always be lesser than\nINT_MAX?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:56:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 30 Jan 2023 11:56:33 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Jan 30, 2023 at 9:43 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 30 Jan 2023 08:51:05 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Mon, Jan 30, 2023 at 8:32 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > > > > On Friday, January 27, 2023 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > So, you have changed min_apply_delay from int64 to int32, but you haven't\n> > > > > > mentioned the reason for the same? We use 'int' for the similar parameter\n> > > > > > recovery_min_apply_delay, so, ideally, it makes sense but still better to tell your\n> > > > > > reason explicitly.\n> > > > > Yes. It's because I thought I need to make this feature consistent with the recovery_min_apply_delay.\n> > > > > This feature handles the range same as the recovery_min_apply delay from 0 to INT_MAX now\n> > > > > so should be adjusted to match it.\n> > > >\n> > > > INT_MAX can stick out of int32 on some platforms. (I'm not sure where\n> > > > that actually happens, though.) We can use PG_INT32_MAX instead.\n> > > >\n> > >\n> > > But in other integer GUCs including recovery_min_apply_delay, we use\n> > > INT_MAX, so not sure if it is a good idea to do something different\n> > > here.\n> >\n> > The GUC is not stored in a catalog, but.. oh... it is multiplied by\n> > 1000.\n> \n> Which part of the patch you are referring to here? Isn't the check in\n\nWhere recovery_min_apply_delay is used. 
It is allowed to be set up to\nINT_MAX but it is used as:\n\n>\tdelayUntil = TimestampTzPlusMilliseconds(xtime, recovery_min_apply_delay);\n\nWhere the macro is defined as:\n\n> #define TimestampTzPlusMilliseconds(tz,ms) ((tz) + ((ms) * (int64) 1000))\n\nWhich can lead to overflow, which is practically harmless.\n\n> the function defGetMinApplyDelay() sufficient to ensure that the\n> 'delay' value stored in the catalog will always be lesser than\n> INT_MAX?\n\nI'm concerned about cases where INT_MAX is wider than int32. If we\ndon't assume such cases, I'm fine with INT_MAX there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 30 Jan 2023 16:08:02 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 12:38 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 30 Jan 2023 11:56:33 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > >\n> > > The GUC is not stored in a catalog, but.. oh... it is multiplied by\n> > > 1000.\n> >\n> > Which part of the patch you are referring to here? Isn't the check in\n>\n> Where recovery_min_apply_delay is used. It is allowed to be set up to\n> INT_MAX but it is used as:\n>\n> > delayUntil = TimestampTzPlusMilliseconds(xtime, recovery_min_apply_delay);\n>\n> Where the macro is defined as:\n>\n> > #define TimestampTzPlusMilliseconds(tz,ms) ((tz) + ((ms) * (int64) 1000))\n>\n> Which can lead to overflow, which is practically harmless.\n>\n\nBut here tz is always TimestampTz (which is int64), so do, we need to worry?\n\n> > the function defGetMinApplyDelay() sufficient to ensure that the\n> > 'delay' value stored in the catalog will always be lesser than\n> > INT_MAX?\n>\n> I'm concerned about cases where INT_MAX is wider than int32. If we\n> don't assume such cases, I'm fine with INT_MAX there.\n>\n\nI am not aware of such cases. Anyway, if any such case is discovered\nthen we need to change the checks in defGetMinApplyDelay(), right? If\nso, then I think it is better to keep it as it is unless we know that\nthis could be an issue on some platform.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 30 Jan 2023 14:24:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Saturday, January 28, 2023 1:28 PM I wrote:\n> Attached the updated patch v24.\nHi,\n\n\nI've conducted the rebase affected by the commit(1e8b61735c)\nby renaming the GUC to logical_replication_mode accordingly,\nbecause it's utilized in the TAP test of this time-delayed LR feature.\nThere is no other change for this version.\n\nKindly have a look at the attached v25.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Mon, 30 Jan 2023 10:04:59 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, January 30, 2023 12:02 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Sat, 28 Jan 2023 04:28:29 +0000, \"Takamichi Osumi (Fujitsu)\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> > On Friday, January 27, 2023 8:00 PM Amit Kapila\n> <amit.kapila16@gmail.com> wrote:\n> > > So, you have changed min_apply_delay from int64 to int32, but you\n> > > haven't mentioned the reason for the same? We use 'int' for the\n> > > similar parameter recovery_min_apply_delay, so, ideally, it makes\n> > > sense but still better to tell your reason explicitly.\n> > Yes. It's because I thought I need to make this feature consistent with the\n> recovery_min_apply_delay.\n> > This feature handles the range same as the recovery_min_apply delay\n> > from 0 to INT_MAX now so should be adjusted to match it.\n> \n> INT_MAX can stick out of int32 on some platforms. (I'm not sure where that\n> actually happens, though.) We can use PG_INT32_MAX instead.\n> \n> IMHO, I think we don't use int as a catalog column and I agree that\n> int32 is sufficient since I don't think more than 49 days delay is practical. On\n> the other hand, maybe I wouldn't want to use int32 for intermediate\n> calculations.\nHi, Horiguchi-san. Thanks for your comments !\n\n\nIIUC, in the last sentence, you proposed the type of\nSubOpts min_apply_delay should be change to \"int\". But\nI couldn't find actual harm of the current codes, because\nwe anyway insert the SubOpts value to the catalog after holding it in SubOpts.\nAlso, it seems there is no explicit rule where we should use \"int\" local variables\nfor \"int32\" system catalog values internally. 
I had a look at other\nvariables for int32 system catalog members and either looked fine.\n\nSo, I'd like to keep the current code as it is, until actual harm is found.\nThe latest patch can be seen in [1].\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373E26884C385EFFFB8965FEDD39%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 30 Jan 2023 10:33:43 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, January 30, 2023 7:05 PM I wrote:\n> On Saturday, January 28, 2023 1:28 PM I wrote:\n> > Attached the updated patch v24.\n> I've conducted the rebase affected by the commit(1e8b61735c) by renaming\n> the GUC to logical_replication_mode accordingly, because it's utilized in the\n> TAP test of this time-delayed LR feature.\n> There is no other change for this version.\n> \n> Kindly have a look at the attached v25.\nHi,\n\nThe v25 caused a failure on windows of cfbot in [1].\nBut, the failure happened in the tests of pg_upgrade\nand the failure message looks the same one reported in the ongoing discussion of [2].\nThen, it's an issue independent from the v25.\n\n[1] - https://cirrus-ci.com/task/5484559622471680\n[2] - https://www.postgresql.org/message-id/20220919213217.ptqfdlcc5idk5xup%40awork3.anarazel.de\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 30 Jan 2023 13:35:33 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 30 Jan 2023 14:24:31 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Jan 30, 2023 at 12:38 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 30 Jan 2023 11:56:33 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > #define TimestampTzPlusMilliseconds(tz,ms) ((tz) + ((ms) * (int64) 1000))\n> >\n> > Which can lead to overflow, which is practically harmless.\n> >\n> \n> But here tz is always TimestampTz (which is int64), so do, we need to worry?\n\nSorry, I was putting an assuption that int were int64 here.\n\n> > > the function defGetMinApplyDelay() sufficient to ensure that the\n> > > 'delay' value stored in the catalog will always be lesser than\n> > > INT_MAX?\n> >\n> > I'm concerned about cases where INT_MAX is wider than int32. If we\n> > don't assume such cases, I'm fine with INT_MAX there.\n> >\n> \n> I am not aware of such cases. Anyway, if any such case is discovered\n> then we need to change the checks in defGetMinApplyDelay(), right? If\n> so, then I think it is better to keep it as it is unless we know that\n> this could be an issue on some platform.\n\nI'm not sure. I think that int is generally thought that it is tied\nwith an integer type of any size. min_apply_delay is tightly bond\nwith a catalog column of int32 thus I thought that (PG_)INT32_MAX is\nthe right limit. So, as I expressed before, if we assume sizeof(int)\n<= sizeof(int32), I' fine with using INT_MAX there.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 31 Jan 2023 13:18:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> I'm not sure. I think that int is generally thought that it is tied\n> with an integer type of any size. min_apply_delay is tightly bond\n> with a catalog column of int32 thus I thought that (PG_)INT32_MAX is\n> the right limit. So, as I expressed before, if we assume sizeof(int)\n> <= sizeof(int32), I' fine with using INT_MAX there.\n\nI have checked some articles and I think platforms supported by postgres regard\nInt as 32-bit integer.\n\n\nAccording to the definition of C99, actual value of INT_MAX/INT_MIN depend on the\nimplementation - INT_MAX must bigger than or equal to 2^15 - 1 [1].\nSo theoretically there is a possibility that int is bigger than int, as you worried.\n\n\nNext, I checked some data models, and found ILP64 that regards int as 64-bit integer.\nIn this case INT_MAX may be 2^63-1, it exceeds PG_INT32_MAX.\nI cannot find the proper document about the type, but I can site a table from the doc[2].\n\n```\nDatatype\tLP64\tILP64\tLLP64\tILP32\tLP32\nchar\t8\t8\t8\t8\t8\nshort\t16\t16\t16\t16\t16\n_int32\t\t32\t\t\t\nint\t32\t64\t32\t32\t16\nlong\t64\t64\t32\t32\t32\nlong long\t\t\t64\t\t\npointer\t64\t64\t64\t32\t32\n```\n\nI'm not sure whether the system survives or not. According to [2], a few system\nreleased, but I have never heard. Modern systems have LP64 or LLP64.\n\n> There have been a few examples of ILP64 systems that have shipped\n> (Cray and ETA come to mind).\n\nIn another paper[3], Sun UltraSPARC, which is 32-bit OS and use SPARC64 processor,\nseems to use ILP64 model, but it may be ancient OS.\n\n> 1995 Sun UltraSPARC: 64/32-bit hardware, 32-bit-only operating system. HAL Computer’s SPARC64: uses ILP64 model for C.\n\nAlso, I checked buildfarm animals that have Sparc64 architecture,\nbut their alignment of int seems to be 4 byte [4].\n\n> checking alignment of int... 
4\n\nTherefore, I think we can say that modern platforms that are supported by PostgreSQL define int as 32-bit.\nIt satisfies the condition sizeof(int) <= sizeof(int32), so we can keep to use INT_MAX.\n\n[1] https://www.dii.uchile.cl/~daespino/files/Iso_C_1999_definition.pdf\n[2] https://unix.org/version2/whatsnew/lp64_wp.html\n[3] https://queue.acm.org/detail.cfm?id=1165766\n[4] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=castoroides&dt=2023-01-30%2012%3A00%3A07&stg=configure#:~:text=checking%20alignment%20of%20int...%204\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Tue, 31 Jan 2023 07:06:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Kuroda-san, Thanks for the detailed study.\n\nAt Tue, 31 Jan 2023 07:06:40 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Therefore, I think we can say that modern platforms that are supported by PostgreSQL define int as 32-bit.\n> It satisfies the condition sizeof(int) <= sizeof(int32), so we can keep to use INT_MAX.\n\nYeah, I know that that's practically correct. Just I wanted to make\nclear is whether we (always) assume int == int32. I don't want to do\nthat just because that works. Even though we cannot be perfect, in\nthis particular case the destination space is explicitly made as\nint32.\n\nIt's a similar discussion to the recent commit 3b4ac33254. We choosed\nto use the \"correct\" symbols refusing to employ an implicit assumption\nabout the actual values. (In that sense, it is a compromize to assume\nint32 being narrower than int is a premise, but the code will get\nuselessly complex without that assumption:p)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 31 Jan 2023 17:10:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 1:40 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Hi, Kuroda-san, Thanks for the detailed study.\n>\n> At Tue, 31 Jan 2023 07:06:40 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > Therefore, I think we can say that modern platforms that are supported by PostgreSQL define int as 32-bit.\n> > It satisfies the condition sizeof(int) <= sizeof(int32), so we can keep to use INT_MAX.\n>\n> Yeah, I know that that's practically correct. Just I wanted to make\n> clear is whether we (always) assume int == int32. I don't want to do\n> that just because that works. Even though we cannot be perfect, in\n> this particular case the destination space is explicitly made as\n> int32.\n>\n\nSo, shall we check if the result of parse_int is in the range 0 and\nPG_INT32_MAX to ameliorate this concern? If this works then we need to\nprobably change the return value of defGetMinApplyDelay() to int32.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 31 Jan 2023 15:12:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 31 Jan 2023 15:12:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Jan 31, 2023 at 1:40 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > Hi, Kuroda-san, Thanks for the detailed study.\n> >\n> > At Tue, 31 Jan 2023 07:06:40 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > Therefore, I think we can say that modern platforms that are supported by PostgreSQL define int as 32-bit.\n> > > It satisfies the condition sizeof(int) <= sizeof(int32), so we can keep to use INT_MAX.\n> >\n> > Yeah, I know that that's practically correct. Just I wanted to make\n> > clear is whether we (always) assume int == int32. I don't want to do\n> > that just because that works. Even though we cannot be perfect, in\n> > this particular case the destination space is explicitly made as\n> > int32.\n> >\n> \n> So, shall we check if the result of parse_int is in the range 0 and\n> PG_INT32_MAX to ameliorate this concern?\n\nYeah, it is exactly what I wanted to suggest.\n\n> If this works then we need to\n> probably change the return value of defGetMinApplyDelay() to int32.\n\nI didn't thought doing that, int can store all values in the valid\nrange (I'm assuming we implicitly assume int >= int32 in bit width)\nand it is the natural integer in C. Either will do for me but I\nslightly prefer to use int there.\n\nAs the result I'd like to propose the following change.\n\n\ndiff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c\nindex 489eae85ee..9de2745623 100644\n--- a/src/backend/commands/subscriptioncmds.c\n+++ b/src/backend/commands/subscriptioncmds.c\n@@ -2293,16 +2293,16 @@ defGetMinApplyDelay(DefElem *def)\n \t\t\t\t hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n \n \t/*\n-\t * Check lower bound. parse_int() has already been confirmed that result\n-\t * is less than or equal to INT_MAX.\n+\t * Check the both boundary. 
Although parse_int() checked the result against\n+\t * INT_MAX, this value is to be stored in a catalog column of int32.\n \t */\n-\tif (result < 0)\n+\tif (result < 0 || result > PG_INT32_MAX)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n \t\t\t\t errmsg(\"%d ms is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\",\n \t\t\t\t\t\tresult,\n \t\t\t\t\t\t\"min_apply_delay\",\n-\t\t\t\t\t\t0, INT_MAX)));\n+\t\t\t\t\t\t0, PG_INT32_MAX)));\n \n \treturn result;\n }\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 01 Feb 2023 11:43:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 8:13 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 31 Jan 2023 15:12:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Jan 31, 2023 at 1:40 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > Hi, Kuroda-san, Thanks for the detailed study.\n> > >\n> > > At Tue, 31 Jan 2023 07:06:40 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > > Therefore, I think we can say that modern platforms that are supported by PostgreSQL define int as 32-bit.\n> > > > It satisfies the condition sizeof(int) <= sizeof(int32), so we can keep to use INT_MAX.\n> > >\n> > > Yeah, I know that that's practically correct. Just I wanted to make\n> > > clear is whether we (always) assume int == int32. I don't want to do\n> > > that just because that works. Even though we cannot be perfect, in\n> > > this particular case the destination space is explicitly made as\n> > > int32.\n> > >\n> >\n> > So, shall we check if the result of parse_int is in the range 0 and\n> > PG_INT32_MAX to ameliorate this concern?\n>\n> Yeah, it is exactly what I wanted to suggest.\n>\n> > If this works then we need to\n> > probably change the return value of defGetMinApplyDelay() to int32.\n>\n> I didn't thought doing that, int can store all values in the valid\n> range (I'm assuming we implicitly assume int >= int32 in bit width)\n> and it is the natural integer in C. Either will do for me but I\n> slightly prefer to use int there.\n>\n\nI think it would be clear to use int32 because the parameter where we\nstore the return value is also int32.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 08:38:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for the patch v25-0001.\n\n======\nCommit Message\n\n1.\nThe other possibility is to apply the delay at the end of the parallel\napply transaction but that would cause issues related to resource\nbloat and locks being held for a long time.\n\n~\n\nSUGGESTION\nWe chose not to apply the delay at the end of the parallel apply\ntransaction because that would cause issues related to resource bloat\nand locks being held for a long time.\n\n======\ndoc/src/sgml/config.sgml\n\n2.\n+ <para>\n+ For time-delayed logical replication, the apply worker sends a feedback\n+ message to the publisher every\n+ <varname>wal_receiver_status_interval</varname> milliseconds. Make sure\n+ to set <varname>wal_receiver_status_interval</varname> less than the\n+ <varname>wal_sender_timeout</varname> on the publisher, otherwise, the\n+ <literal>walsender</literal> will repeatedly terminate due to timeout\n+ error. Note that if <varname>wal_receiver_status_interval</varname> is\n+ set to zero, the apply worker sends no feedback messages during the\n+ <literal>min_apply_delay</literal> period.\n+ </para>\n\n2a.\n\"due to timeout error.\" --> \"due to timeout errors.\"\n\n~\n\n2b.\nShouldn't this also cross-ref to CREATE SUBSCRIPTION docs? Because the\nabove mentions 'min_apply_delay' but that is not defined on this page.\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\n+ <para>\n+ By default, the subscriber applies changes as soon as possible. This\n+ parameter allows the user to delay the application of changes by a\n+ given time period. If the value is specified without units, it is\n+ taken as milliseconds. The default is zero (no delay). 
See\n+ <xref linkend=\"config-setting-names-values\"/> for details on the\n+ available valid time unites.\n+ </para>\n\nTypo: \"unites\"\n\n~~~\n\n4.\n+ <para>\n+ Any delay becomes effective after all initial table synchronization\n+ has finished and occurs before each transaction starts to get applied\n+ on the subscriber. The delay is calculated as the difference between\n+ the WAL timestamp as written on the publisher and the current time on\n+ the subscriber. Any overhead of time spent in logical decoding and in\n+ transferring the transaction may reduce the actual wait time. It is\n+ also possible that the overhead already exceeds the requested\n+ <literal>min_apply_delay</literal> value, in which case no delay is\n+ applied. If the system clocks on publisher and subscriber are not\n+ synchronized, this may lead to apply changes earlier than expected,\n+ but this is not a major issue because this parameter is typically\n+ much larger than the time deviations between servers. Note that if\n+ this parameter is set to a long delay, the replication will stop if\n+ the replication slot falls behind the current LSN by more than\n+ <link\nlinkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</literal></link>.\n+ </para>\n\n\"Any delay becomes effective after all initial table\nsynchronization...\" --> \"Any delay becomes effective only after all\ninitial table synchronization...\"\n\n~~~\n\n5.\n+ <warning>\n+ <para>\n+ Delaying the replication means there is a much longer time between\n+ making a change on the publisher, and that change being committed\n+ on the subscriber. This can impact the performance of synchronous\n+ replication. 
See <xref linkend=\"guc-synchronous-commit\"/>\n+ parameter.\n+ </para>\n+ </warning>\n\n\nI'm not sure why this was text changed to say \"means there is a much\nlonger time\" instead of \"can mean there is a much longer time\".\n\nIMO the previous wording was better because this current text makes an\nassumption about what the user has configured -- e.g. if they\nconfigured only 1ms delay then the warning text is not really\nrelevant.\n\n~~~\n\n6.\nWhy was the example (it existed when I last looked at patch v19)\nremoved? Personally, I found that example to be a useful reminder that\nthe min_apply_delay can specify units other than just 'ms'.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n7. parse_subscription_options\n\n+ /*\n+ * The combination of parallel streaming mode and min_apply_delay is not\n+ * allowed. This is because we start applying the transaction stream as\n+ * soon as the first change arrives without knowing the transaction's\n+ * prepare/commit time. This means we cannot calculate the underlying\n+ * network/decoding lag between publisher and subscriber, and so always\n+ * waiting for the full 'min_apply_delay' period might include unnecessary\n+ * delay.\n+ *\n+ * The other possibility is to apply the delay at the end of the parallel\n+ * apply transaction but that would cause issues related to resource bloat\n+ * and locks being held for a long time.\n+ */\n\nI think the 2nd paragraph should be changed slightly as follows (like\nreview comment #1)\n\nSUGGESTION\nNote - we chose not to apply the delay at the end of the parallel\napply transaction because that would cause issues related to resource\nbloat and locks being held for a long time.\n\n~~~\n\n8.\n+ if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n+ opts->min_apply_delay > 0 && opts->streaming == LOGICALREP_STREAM_PARALLEL)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n\nSaying \"> 0\" (in the condition) is not strictly necessary here, since\nit is never < 
0.\n\n~~~\n\n9. AlterSubscription\n\n+ /*\n+ * The combination of parallel streaming mode and\n+ * min_apply_delay is not allowed. See\n+ * parse_subscription_options for details of the reason.\n+ */\n+ if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n+ if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nopts.min_apply_delay > 0) ||\n+ (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nsub->minapplydelay > 0))\n\nSaying \"> 0\" (in the condition) is not strictly necessary here, since\nit is never < 0.\n\n~~~\n\n10.\n+ if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY))\n+ {\n+ /*\n+ * The combination of parallel streaming mode and\n+ * min_apply_delay is not allowed.\n+ */\n+ if (opts.min_apply_delay > 0)\n\nSaying \"> 0\" (in the condition) is not strictly necessary here, since\nit is never < 0.\n\n~~~\n\n11. defGetMinApplyDelay\n\n+ /*\n+ * Check lower bound. parse_int() has already been confirmed that result\n+ * is less than or equal to INT_MAX.\n+ */\n\nThe parse_int already checks < INT_MAX. But on return from that\nfunction, don’t you need to check again that it is < PG_INT32_MAX (in\ncase those are different)\n\n(I think Kuroda-san already suggested same as this)\n\n======\nsrc/backend/replication/logical/worker.c\n\n12.\n+/*\n+ * In order to avoid walsender timeout for time-delayed logical replication the\n+ * apply worker keeps sending feedback messages during the delay period.\n+ * Meanwhile, the feature delays the apply before the start of the\n+ * transaction and thus we don't write WAL records for the suspended changes\n+ * during the wait. When the apply worker sends a feedback message during the\n+ * delay, we should not make positions of the flushed and apply LSN overwritten\n+ * by the last received latest LSN. See send_feedback() for details.\n+ */\n\n\"we should not make positions of the flushed and apply LSN\noverwritten\" --> \"we should overwrite positions of the flushed and\napply LSN\"\n\n~~~\n\n14. 
send_feedback\n\n@@ -3738,8 +3867,15 @@ send_feedback(XLogRecPtr recvpos, bool force,\nbool requestReply)\n /*\n * No outstanding transactions to flush, we can report the latest received\n * position. This is important for synchronous replication.\n+ *\n+ * If the logical replication subscription has unprocessed changes then do\n+ * not inform the publisher that the received latest LSN is already\n+ * applied and flushed, otherwise, the publisher will make a wrong\n+ * assumption about the logical replication progress. Instead, it just\n+ * sends a feedback message to avoid a replication timeout during the\n+ * delay.\n */\n\n\"Instead, it just sends\" --> \"Instead, just send\"\n\n======\nsrc/bin/pg_dump/pg_dump.h\n\n15. SubscriptionInfo\n\n@@ -661,6 +661,7 @@ typedef struct _SubscriptionInfo\n char *subdisableonerr;\n char *suborigin;\n char *subsynccommit;\n+ int subminapplydelay;\n char *subpublications;\n } SubscriptionInfo;\n\nShould this also be \"int32\" to match the other member type changes?\n\n======\nsrc/test/subscription/t/032_apply_delay.pl\n\n16.\n+# Make sure the apply worker knows to wait for more than 500ms\n+check_apply_delay_log($node_subscriber, $offset, \"0.5\");\n\n\"knows to wait for more than\" --> \"waits for more than\"\n\n(this occurs in a couple of places)\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 1 Feb 2023 15:37:02 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 1 Feb 2023 08:38:11 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Feb 1, 2023 at 8:13 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 31 Jan 2023 15:12:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > So, shall we check if the result of parse_int is in the range 0 and\n> > > PG_INT32_MAX to ameliorate this concern?\n> >\n> > Yeah, it is exactly what I wanted to suggest.\n> >\n> > > If this works then we need to\n> > > probably change the return value of defGetMinApplyDelay() to int32.\n> >\n> > I didn't thought doing that, int can store all values in the valid\n> > range (I'm assuming we implicitly assume int >= int32 in bit width)\n> > and it is the natural integer in C. Either will do for me but I\n> > slightly prefer to use int there.\n> >\n> \n> I think it would be clear to use int32 because the parameter where we\n> store the return value is also int32.\n\nI'm fine with that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 01 Feb 2023 17:39:35 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Jan 30, 2023 6:05 PM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\n> \n> On Saturday, January 28, 2023 1:28 PM I wrote:\n> > Attached the updated patch v24.\n> Hi,\n> \n> \n> I've conducted the rebase affected by the commit(1e8b61735c)\n> by renaming the GUC to logical_replication_mode accordingly,\n> because it's utilized in the TAP test of this time-delayed LR feature.\n> There is no other change for this version.\n> \n> Kindly have a look at the attached v25.\n> \n\nThanks for your patch. Here are some comments.\n\n1.\n+\t/*\n+\t * The min_apply_delay parameter is ignored until all tablesync workers\n+\t * have reached READY state. This is because if we allowed the delay\n+\t * during the catchup phase, then once we reached the limit of tablesync\n+\t * workers it would impose a delay for each subsequent worker. That would\n+\t * cause initial table synchronization completion to take a long time.\n+\t */\n+\tif (!AllTablesyncsReady())\n+\t\treturn;\n\nI saw that the new parameter becomes effective after all tables are in ready\nstate, because the apply worker can't set the state to catchup during the delay.\nBut can we call process_syncing_tables() in the while-loop of\nmaybe_apply_delay()? Then the tablesync can finish without delay. If we can't do\nso, it might be better to add some comments for it.\n\n2.\n+# Make sure the apply worker knows to wait for more than 500ms\n+check_apply_delay_log($node_subscriber, $offset, \"0.5\");\n\nI think the last parameter should be 500.\nBesides, I am not sure it's a stable test to check the log. Is it possible that\nthere's no such log on a slow machine? I modified the code to sleep 1s at the\nbeginning of apply_dispatch(), then the new added test failed because the server\nlog cannot match.\n\nRegards,\nShi yu\n\n\n",
"msg_date": "Wed, 1 Feb 2023 09:40:41 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 3:10 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 30, 2023 6:05 PM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > Kindly have a look at the attached v25.\n> >\n>\n> Thanks for your patch. Here are some comments.\n>\n> 1.\n> + /*\n> + * The min_apply_delay parameter is ignored until all tablesync workers\n> + * have reached READY state. This is because if we allowed the delay\n> + * during the catchup phase, then once we reached the limit of tablesync\n> + * workers it would impose a delay for each subsequent worker. That would\n> + * cause initial table synchronization completion to take a long time.\n> + */\n> + if (!AllTablesyncsReady())\n> + return;\n>\n> I saw that the new parameter becomes effective after all tables are in ready\n> state, because the apply worker can't set the state to catchup during the delay.\n> But can we call process_syncing_tables() in the while-loop of\n> maybe_apply_delay()? Then the tablesync can finish without delay. If we can't do\n> so, it might be better to add some comments for it.\n>\n\nI think the point here is that if the apply worker is ahead of\ntablesync worker then to complete the catch-up, tablesync worker needs\nto apply additional transactions, and delaying during that time will\ncause initial table synchronization completion to take a long time. I\nam not sure how much more details can be added to the existing\ncomments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 08:48:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Wednesday, February 1, 2023 5:40 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Wed, 1 Feb 2023 08:38:11 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Wed, Feb 1, 2023 at 8:13 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Tue, 31 Jan 2023 15:12:14 +0530, Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote in\n> > > > So, shall we check if the result of parse_int is in the range 0\n> > > > and PG_INT32_MAX to ameliorate this concern?\n> > >\n> > > Yeah, it is exactly what I wanted to suggest.\n> > >\n> > > > If this works then we need to\n> > > > probably change the return value of defGetMinApplyDelay() to int32.\n> > >\n> > > I didn't thought doing that, int can store all values in the valid\n> > > range (I'm assuming we implicitly assume int >= int32 in bit width)\n> > > and it is the natural integer in C. Either will do for me but I\n> > > slightly prefer to use int there.\n> > >\n> >\n> > I think it would be clear to use int32 because the parameter where we\n> > store the return value is also int32.\n> \n> I'm fine with that.\nThank you for confirming.\n\nAttached the updated patch v26 accordingly.\nI slightly adjusted the comments in defGetMinApplyDelay\non this point as well.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Thu, 2 Feb 2023 08:03:55 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Wednesday, February 1, 2023 1:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> Here are my review comments for the patch v25-0001.\nThank you for your review !\n\n> ======\n> Commit Message\n> \n> 1.\n> The other possibility is to apply the delay at the end of the parallel apply\n> transaction but that would cause issues related to resource bloat and locks being\n> held for a long time.\n> \n> ~\n> \n> SUGGESTION\n> We chose not to apply the delay at the end of the parallel apply transaction\n> because that would cause issues related to resource bloat and locks being held\n> for a long time.\nI prefer the current description. So, I just changed one word\nfrom \"The other possibility is...\" to \"The other possibility was\"\nto indicate both two paragraphs (this paragraph and the previous paragraph)\nare related.\n\n\n> ======\n> doc/src/sgml/config.sgml\n> \n> 2.\n> + <para>\n> + For time-delayed logical replication, the apply worker sends a feedback\n> + message to the publisher every\n> + <varname>wal_receiver_status_interval</varname> milliseconds.\n> Make sure\n> + to set <varname>wal_receiver_status_interval</varname> less than\n> the\n> + <varname>wal_sender_timeout</varname> on the publisher,\n> otherwise, the\n> + <literal>walsender</literal> will repeatedly terminate due to timeout\n> + error. Note that if <varname>wal_receiver_status_interval</varname>\n> is\n> + set to zero, the apply worker sends no feedback messages during the\n> + <literal>min_apply_delay</literal> period.\n> + </para>\n> \n> 2a.\n> \"due to timeout error.\" --> \"due to timeout errors.\"\nFixed.\n\n\n> ~\n> \n> 2b.\n> Shouldn't this also cross-ref to CREATE SUBSCRIPTION docs? Because the\n> above mentions 'min_apply_delay' but that is not defined on this page.\nMakes sense. Added.\n\n\n> ======\n> doc/src/sgml/ref/create_subscription.sgml\n> \n> 3.\n> + <para>\n> + By default, the subscriber applies changes as soon as possible. 
This\n> + parameter allows the user to delay the application of changes by a\n> + given time period. If the value is specified without units, it is\n> + taken as milliseconds. The default is zero (no delay). See\n> + <xref linkend=\"config-setting-names-values\"/> for details on the\n> + available valid time unites.\n> + </para>\n> \n> Typo: \"unites\"\nFixed it to \"units\".\n\n\n> ~~~\n> \n> 4.\n> + <para>\n> + Any delay becomes effective after all initial table synchronization\n> + has finished and occurs before each transaction starts to get applied\n> + on the subscriber. The delay is calculated as the difference between\n> + the WAL timestamp as written on the publisher and the current time\n> on\n> + the subscriber. Any overhead of time spent in logical decoding and in\n> + transferring the transaction may reduce the actual wait time. It is\n> + also possible that the overhead already exceeds the requested\n> + <literal>min_apply_delay</literal> value, in which case no delay is\n> + applied. If the system clocks on publisher and subscriber are not\n> + synchronized, this may lead to apply changes earlier than expected,\n> + but this is not a major issue because this parameter is typically\n> + much larger than the time deviations between servers. Note that if\n> + this parameter is set to a long delay, the replication will stop if\n> + the replication slot falls behind the current LSN by more than\n> + <link\n> linkend=\"guc-max-slot-wal-keep-size\"><literal>max_slot_wal_keep_size</liter\n> al></link>.\n> + </para>\n> \n> \"Any delay becomes effective after all initial table synchronization...\" --> \"Any\n> delay becomes effective only after all initial table synchronization...\"\nAgreed. Fixed.\n\n\n> ~~~\n> \n> 5.\n> + <warning>\n> + <para>\n> + Delaying the replication means there is a much longer time\n> between\n> + making a change on the publisher, and that change being\n> committed\n> + on the subscriber. 
This can impact the performance of\n> synchronous\n> + replication. See <xref linkend=\"guc-synchronous-commit\"/>\n> + parameter.\n> + </para>\n> + </warning>\n> \n> \n> I'm not sure why this was text changed to say \"means there is a much longer\n> time\" instead of \"can mean there is a much longer time\".\n> \n> IMO the previous wording was better because this current text makes an\n> assumption about what the user has configured -- e.g. if they configured only\n> 1ms delay then the warning text is not really relevant.\nYes, I changed it here. The reason is that the purpose of this feature\nis to address unintentional wrong operations on the publisher, and for that purpose,\nI didn't expect a very short time like you mentioned to be set for this parameter,\nbased on some comments from hackers. Either was fine,\nbut I chose the current description based on that purpose.\n\n> ~~~\n> \n> 6.\n> Why was the example (it existed when I last looked at patch v19) removed?\n> Personally, I found that example to be a useful reminder that the\n> min_apply_delay can specify units other than just 'ms'.\nRemoved because the example was just one variation that used a different value in the\nWITH clause, after some comments from the hackers.\nThe reference for available units is documented,\nso the current description should be sufficient.\n\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n> \n> 7. parse_subscription_options\n> \n> + /*\n> + * The combination of parallel streaming mode and min_apply_delay is\n> + not\n> + * allowed. 
This means we cannot calculate the underlying\n> + * network/decoding lag between publisher and subscriber, and so always\n> + * waiting for the full 'min_apply_delay' period might include\n> + unnecessary\n> + * delay.\n> + *\n> + * The other possibility is to apply the delay at the end of the\n> + parallel\n> + * apply transaction but that would cause issues related to resource\n> + bloat\n> + * and locks being held for a long time.\n> + */\n> \n> I think the 2nd paragraph should be changed slightly as follows (like review\n> comment #1)\n> \n> SUGGESTION\n> Note - we chose not to apply the delay at the end of the parallel apply\n> transaction because that would cause issues related to resource bloat and locks\n> being held for a long time.\nSame as the first comment, changed only \"is\" to \"was\",\nto indicate the last paragraph is related to past discussion (option)\nfor the parallel streaming mode that was not adopted.\n\n\n> ~~~\n> \n> 8.\n> + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> + opts->min_apply_delay > 0 && opts->streaming ==\n> + opts->LOGICALREP_STREAM_PARALLEL)\n> + ereport(ERROR,\n> + errcode(ERRCODE_SYNTAX_ERROR),\n> \n> Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\nThis check is necessary.\n\nFor example, imagine a case when we CREATE a subscription with streaming = on\nand then try to ALTER the subscription with streaming = parallel\nwithout any settings for min_apply_delay. The ALTER command\nthen throws an error of \"min_apply_delay > 0 and streaming = parallel are\nmutually exclusive options.\"\n\nThis is because min_apply_delay is supported by the ALTER command\n(so the first condition becomes true) and we set\nstreaming = parallel (which makes the 2nd condition true).\n\nSo, we need to check the opts's actual min_apply_delay value\nto make the irrelevant case pass.\n> ~~~\n> \n> 9. 
AlterSubscription\n> \n> + /*\n> + * The combination of parallel streaming mode and\n> + * min_apply_delay is not allowed. See\n> + * parse_subscription_options for details of the reason.\n> + */\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL) if\n> + ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> opts.min_apply_delay > 0) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> sub->minapplydelay > 0))\n> \n> Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\nThis is also necessary.\n\nFor example, imagine a case where\nthere is a subscription whose min_apply_delay is 1 day.\nThen, you want to try to execute ALTER SUBSCRIPTION\nwith (min_apply_delay = 0, streaming = parallel).\nIf we remove the condition of opts.min_apply_delay > 0,\nthen we error out in this case too.\n\nFirst we pass the first condition\nof the opts.streaming == LOGICALREP_STREAM_PARALLEL,\nsince we use the streaming option.\nThen, we also set min_apply_delay in this example,\nthen without checking the value of min_apply_delay,\nthe second condition becomes true\n(IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)).\n\nSo, we need to make this case (min_apply_delay = 0) pass. \nMeanwhile, checking the \"sub\" value is necessary for checking the existing subscription value.\n> ~~~\n> \n> 10.\n> + if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)) {\n> + /*\n> + * The combination of parallel streaming mode and\n> + * min_apply_delay is not allowed.\n> + */\n> + if (opts.min_apply_delay > 0)\n> \n> Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\nThis is also required to check whether the value equals 0 or not.\nKindly imagine a case when we want to execute ALTER min_apply_delay from 1 day\nwith a pair of (min_apply_delay = 0 and\nstreaming = parallel). If we remove this check, then this ALTER command fails\nwith an error. 
Without the check, when we set min_apply_delay\nand parallel streaming mode, even when making the min_apply_delay 0,\nthe error is invoked.\n\nThe check for sub.stream is necessary for existing definition of target subscription.\n> ~~~\n> \n> 11. defGetMinApplyDelay\n> \n> + /*\n> + * Check lower bound. parse_int() has already been confirmed that\n> + result\n> + * is less than or equal to INT_MAX.\n> + */\n> \n> The parse_int already checks < INT_MAX. But on return from that function,\n> don’t you need to check again that it is < PG_INT32_MAX (in case those are\n> different)\n> \n> (I think Kuroda-san already suggested same as this)\nChanged according to the discussion.\n\n\n> ======\n> src/backend/replication/logical/worker.c\n> \n> 12.\n> +/*\n> + * In order to avoid walsender timeout for time-delayed logical\n> +replication the\n> + * apply worker keeps sending feedback messages during the delay period.\n> + * Meanwhile, the feature delays the apply before the start of the\n> + * transaction and thus we don't write WAL records for the suspended\n> +changes\n> + * during the wait. When the apply worker sends a feedback message\n> +during the\n> + * delay, we should not make positions of the flushed and apply LSN\n> +overwritten\n> + * by the last received latest LSN. See send_feedback() for details.\n> + */\n> \n> \"we should not make positions of the flushed and apply LSN overwritten\" -->\n> \"we should overwrite positions of the flushed and apply LSN\"\nFixed. I added \"not\" in your suggestion, too.\n\n\n> ~~~\n> \n> 14. send_feedback\n> \n> @@ -3738,8 +3867,15 @@ send_feedback(XLogRecPtr recvpos, bool force, bool\n> requestReply)\n> /*\n> * No outstanding transactions to flush, we can report the latest received\n> * position. 
This is important for synchronous replication.\n> + *\n> + * If the logical replication subscription has unprocessed changes then\n> + do\n> + * not inform the publisher that the received latest LSN is already\n> + * applied and flushed, otherwise, the publisher will make a wrong\n> + * assumption about the logical replication progress. Instead, it just\n> + * sends a feedback message to avoid a replication timeout during the\n> + * delay.\n> */\n> \n> \"Instead, it just sends\" --> \"Instead, just send\"\nFixed.\n\n\n> ======\n> src/bin/pg_dump/pg_dump.h\n> \n> 15. SubscriptionInfo\n> \n> @@ -661,6 +661,7 @@ typedef struct _SubscriptionInfo\n> char *subdisableonerr;\n> char *suborigin;\n> char *subsynccommit;\n> + int subminapplydelay;\n> char *subpublications;\n> } SubscriptionInfo;\n> \n> Should this also be \"int32\" to match the other member type changes?\nThis is intentional.\nIn the context of pg_dump, we are treating\nthis the same as other int32 catalog members.\nSo, I'd like to keep the current code.\n\n\n> ======\n> src/test/subscription/t/032_apply_delay.pl\n> \n> 16.\n> +# Make sure the apply worker knows to wait for more than 500ms\n> +check_apply_delay_log($node_subscriber, $offset, \"0.5\");\n> \n> \"knows to wait for more than\" --> \"waits for more than\"\n> \n> (this occurs in a couple of places)\nFixed.\n\nKindly have a look at v26 shared in [1].\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83730A45925B9680C40D92AFEDD69%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 2 Feb 2023 08:18:49 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Wednesday, February 1, 2023 6:41 PM Shi, Yu/侍 雨 <shiy.fnst@fujitsu.com> wrote:\n> On Mon, Jan 30, 2023 6:05 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Saturday, January 28, 2023 1:28 PM I wrote:\n> > > Attached the updated patch v24.\n> > Hi,\n> >\n> >\n> > I've conducted the rebase affected by the commit(1e8b61735c) by\n> > renaming the GUC to logical_replication_mode accordingly, because it's\n> > utilized in the TAP test of this time-delayed LR feature.\n> > There is no other change for this version.\n> >\n> > Kindly have a look at the attached v25.\n> >\n> \n> Thanks for your patch. Here are some comments.\nThank you for your review !\n\n> 2.\n> +# Make sure the apply worker knows to wait for more than 500ms\n> +check_apply_delay_log($node_subscriber, $offset, \"0.5\");\n> \n> I think the last parameter should be 500.\nGood catch ! Fixed.\n\n\n> Besides, I am not sure it's a stable test to check the log. Is it possible that there's\n> no such log on a slow machine? I modified the code to sleep 1s at the beginning\n> of apply_dispatch(), then the new added test failed because the server log\n> cannot match.\nTo get the log by itself is necessary to ensure\nthat the delay is conducted by the apply worker, because we emit the diffms\nonly if it's bigger than 0 in maybe_apply_delay(). If we omit the step,\nwe are not sure the delay is caused by other reasons or the time-delayed feature.\n\nAs you mentioned, it's possible that no log is emitted on slow machine. 
Then,\nthe idea to make the test safer for such machines would be to make the delay time longer.\nBut we shortened the delay time to 1 second to mitigate the long test execution time of this TAP test.\nSo, I'm not sure if it's a good idea to make it longer again.\n\nPlease have a look at the latest v26 in [1].\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83730A45925B9680C40D92AFEDD69%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 2 Feb 2023 08:21:18 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n...\n>\n>\n> > Besides, I am not sure it's a stable test to check the log. Is it possible that there's\n> > no such log on a slow machine? I modified the code to sleep 1s at the beginning\n> > of apply_dispatch(), then the new added test failed because the server log\n> > cannot match.\n> To get the log by itself is necessary to ensure\n> that the delay is conducted by the apply worker, because we emit the diffms\n> only if it's bigger than 0 in maybe_apply_delay(). If we omit the step,\n> we are not sure the delay is caused by other reasons or the time-delayed feature.\n>\n> As you mentioned, it's possible that no log is emitted on slow machine. Then,\n> the idea to make the test safer for such machines should be to make the delayed time longer.\n> But we shortened the delay time to 1 second to mitigate the long test execution time of this TAP test.\n> So, I'm not sure if it's a good idea to make it longer again.\n\nI think there are a couple of things that can be done about this problem:\n\n1. If you need the code/test to remain as-is then at least the test\nmessage could include some comforting text like \"(this can fail on\nslow machines when the delay time is already exceeded)\" so then a test\nfailure will not cause undue alarm.\n\n2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so that\nit will *always* log the remaining wait time even if that wait time\nbecomes negative. Then I think the test cases can be made\ndeterministic instead of relying on good luck. This seems like the\nbetter option.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Feb 2023 12:10:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for patch v26-0001.\n\nOn Thu, Feb 2, 2023 at 7:18 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi,\n>\n> On Wednesday, February 1, 2023 1:37 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Here are my review comments for the patch v25-0001.\n> Thank you for your review !\n>\n\n> > 8.\n> > + if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > + opts->min_apply_delay > 0 && opts->streaming ==\n> > + opts->LOGICALREP_STREAM_PARALLEL)\n> > + ereport(ERROR,\n> > + errcode(ERRCODE_SYNTAX_ERROR),\n> >\n> > Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\n> This check is necessary.\n>\n> For example, imagine a case when we CREATE a subscription with streaming = on\n> and then try to ALTER the subscription with streaming = parallel\n> without any settings for min_apply_delay. The ALTER command\n> throws an error of \"min_apply_delay > 0 and streaming = parallel are\n> mutually exclusive options.\" then.\n>\n> This is because min_apply_delay is supported by ALTER command\n> (so the first condition becomes true) and we set\n> streaming = parallel (which makes the 2nd condition true).\n>\n> So, we need to check the opts's actual min_apply_delay value\n> to make the irrelavent case pass.\n\nI think there is some misunderstanding. I was not suggesting removing\nthe condition -- only that I thought it could be written without the >\n0 as:\n\nif (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\nopts->min_apply_delay && opts->streaming == LOGICALREP_STREAM_PARALLEL)\nereport(ERROR,\n\n> > ~~~\n> >\n> > 9. AlterSubscription\n> >\n> > + /*\n> > + * The combination of parallel streaming mode and\n> > + * min_apply_delay is not allowed. 
See\n> > + * parse_subscription_options for details of the reason.\n> > + */\n> > + if (opts.streaming == LOGICALREP_STREAM_PARALLEL) if\n> > + ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > opts.min_apply_delay > 0) ||\n> > + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > sub->minapplydelay > 0))\n> >\n> > Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\n> This is also necessary.\n>\n> For example, imagine a case that\n> there is a subscription whose min_apply_delay is 1 day.\n> Then, you want to try to execute ALTER SUBSCRIPTION\n> with (min_apply_delay = 0, streaming = parallel).\n> If we remove the condition of otps.min_apply_delay > 0,\n> then we error out in this case too.\n>\n> First we pass the first condition\n> of the opts.streaming == LOGICALREP_STREAM_PARALLEL,\n> since we use streaming option.\n> Then, we also set min_apply_delay in this example,\n> then without checking the value of min_apply_delay,\n> the second condition becomes true\n> (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)).\n>\n> So, we need to make this case(min_apply_delay = 0) pass.\n> Meanwhile, checking the \"sub\" value is necessary for checking existing subscription value.\n\nI think there is some misunderstanding. 
I was not suggesting removing\nthe condition -- only that I thought it could be written without the >\n0 as:\n\nif (opts.streaming == LOGICALREP_STREAM_PARALLEL)\nif ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\nopts.min_apply_delay) ||\n(!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay))\nereport(ERROR,\n\n> > ~~~\n> >\n> > 10.\n> > + if (IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY)) {\n> > + /*\n> > + * The combination of parallel streaming mode and\n> > + * min_apply_delay is not allowed.\n> > + */\n> > + if (opts.min_apply_delay > 0)\n> >\n> > Saying \"> 0\" (in the condition) is not strictly necessary here, since it is never < 0.\n> This is also required to check the value equals to 0 or not.\n> Kindly imagine a case when we want to execute ALTER min_apply_delay from 1day\n> with a pair of (min_apply_delay = 0 and\n> streaming = parallel). If we remove this check, then this ALTER command fails\n> with error. Without the check, when we set min_apply_delay\n> and parallel streaming mode, even when making the min_apply_delay 0,\n> the error is invoked.\n>\n> The check for sub.stream is necessary for existing definition of target subscription.\n\nI think there is some misunderstanding. I was not suggesting removing\nthe condition -- only that I thought it could be written without the >\n0 as:\n\nif (opts.min_apply_delay)\nif ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming ==\nLOGICALREP_STREAM_PARALLEL) ||\n(!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\nLOGICALREP_STREAM_PARALLEL))\nereport(ERROR,\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Feb 2023 13:31:50 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 6:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> ...\n> >\n> >\n> > > Besides, I am not sure it's a stable test to check the log. Is it possible that there's\n> > > no such log on a slow machine? I modified the code to sleep 1s at the beginning\n> > > of apply_dispatch(), then the new added test failed because the server log\n> > > cannot match.\n> > To get the log by itself is necessary to ensure\n> > that the delay is conducted by the apply worker, because we emit the diffms\n> > only if it's bigger than 0 in maybe_apply_delay(). If we omit the step,\n> > we are not sure the delay is caused by other reasons or the time-delayed feature.\n> >\n> > As you mentioned, it's possible that no log is emitted on slow machine. Then,\n> > the idea to make the test safer for such machines should be to make the delayed time longer.\n> > But we shortened the delay time to 1 second to mitigate the long test execution time of this TAP test.\n> > So, I'm not sure if it's a good idea to make it longer again.\n>\n> I think there are a couple of things that can be done about this problem:\n>\n> 1. If you need the code/test to remain as-is then at least the test\n> message could include some comforting text like \"(this can fail on\n> slow machines when the delay time is already exceeded)\" so then a test\n> failure will not cause undue alarm.\n>\n> 2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so that\n> it will *always* log the remaining wait time even if that wait time\n> becomes negative. Then I think the test cases can be made\n> deterministic instead of relying on good luck. This seems like the\n> better option.\n>\n\nI don't understand why we have to do any of this instead of using 3s\nas min_apply_delay similar to what we are doing in\nsrc/test/recovery/t/005_replay_delay. 
Also, I think we should use\nexactly the same way to verify the test even though we want to keep\nthe log level as DEBUG2 to check logs in case of any failures.\n\nAlso, I don't see the need to add more tests like the ones below:\n+# Test whether ALTER SUBSCRIPTION changes the delayed time of the apply worker\n+# (1 day 5 minutes). Note that the extra 5 minute is to account for any\n+# decoding/network overhead.\n\nLet's try to add tests similar to what we have for\nrecovery_min_apply_delay unless there is some functionality in this\npatch that is not there in the recovery_min_apply_delay feature.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 10:50:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 8:02 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I think there is some misunderstanding. I was not suggesting removing\n> the condition -- only that I thought it could be written without the >\n> 0 as:\n>\n> if (IsSet(supported_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> opts->min_apply_delay && opts->streaming == LOGICALREP_STREAM_PARALLEL)\n> ereport(ERROR,\n>\n\nYeah, we can probably write that way but in the error message we are\nalready using > 0, so the current style used by patch seems good to\nme. Also, I think using the way you are suggesting is more apt for\nbooleans.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 11:07:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 6:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > ...\n> > >\n> > >\n> > > > Besides, I am not sure it's a stable test to check the log. Is it possible that there's\n> > > > no such log on a slow machine? I modified the code to sleep 1s at the beginning\n> > > > of apply_dispatch(), then the new added test failed because the server log\n> > > > cannot match.\n> > > To get the log by itself is necessary to ensure\n> > > that the delay is conducted by the apply worker, because we emit the diffms\n> > > only if it's bigger than 0 in maybe_apply_delay(). If we omit the step,\n> > > we are not sure the delay is caused by other reasons or the time-delayed feature.\n> > >\n> > > As you mentioned, it's possible that no log is emitted on slow machine. Then,\n> > > the idea to make the test safer for such machines should be to make the delayed time longer.\n> > > But we shortened the delay time to 1 second to mitigate the long test execution time of this TAP test.\n> > > So, I'm not sure if it's a good idea to make it longer again.\n> >\n> > I think there are a couple of things that can be done about this problem:\n> >\n> > 1. If you need the code/test to remain as-is then at least the test\n> > message could include some comforting text like \"(this can fail on\n> > slow machines when the delay time is already exceeded)\" so then a test\n> > failure will not cause undue alarm.\n> >\n> > 2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so that\n> > it will *always* log the remaining wait time even if that wait time\n> > becomes negative. Then I think the test cases can be made\n> > deterministic instead of relying on good luck. 
This seems like the\n> > better option.\n> >\n>\n> I don't understand why we have to do any of this instead of using 3s\n> as min_apply_delay similar to what we are doing in\n> src/test/recovery/t/005_replay_delay. Also, I think we should use\n> exactly the same way to verify the test even though we want to keep\n> the log level as DEBUG2 to check logs in case of any failures.\n>\n\nIIUC the reasons are due to conflicting requirements. e.g.\n- A longer delay like 3s might work better for testing this feature, but OTOH\n- A longer delay will also cause the whole BF execution to take longer\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 16:42:27 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 11:12 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 3, 2023 at 6:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > >\n> > > ...\n> > > >\n> > > >\n> > > > > Besides, I am not sure it's a stable test to check the log. Is it possible that there's\n> > > > > no such log on a slow machine? I modified the code to sleep 1s at the beginning\n> > > > > of apply_dispatch(), then the new added test failed because the server log\n> > > > > cannot match.\n> > > > To get the log by itself is necessary to ensure\n> > > > that the delay is conducted by the apply worker, because we emit the diffms\n> > > > only if it's bigger than 0 in maybe_apply_delay(). If we omit the step,\n> > > > we are not sure the delay is caused by other reasons or the time-delayed feature.\n> > > >\n> > > > As you mentioned, it's possible that no log is emitted on slow machine. Then,\n> > > > the idea to make the test safer for such machines should be to make the delayed time longer.\n> > > > But we shortened the delay time to 1 second to mitigate the long test execution time of this TAP test.\n> > > > So, I'm not sure if it's a good idea to make it longer again.\n> > >\n> > > I think there are a couple of things that can be done about this problem:\n> > >\n> > > 1. If you need the code/test to remain as-is then at least the test\n> > > message could include some comforting text like \"(this can fail on\n> > > slow machines when the delay time is already exceeded)\" so then a test\n> > > failure will not cause undue alarm.\n> > >\n> > > 2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so that\n> > > it will *always* log the remaining wait time even if that wait time\n> > > becomes negative. 
Then I think the test cases can be made\n> > > deterministic instead of relying on good luck. This seems like the\n> > > better option.\n> > >\n> >\n> > I don't understand why we have to do any of this instead of using 3s\n> > as min_apply_delay similar to what we are doing in\n> > src/test/recovery/t/005_replay_delay. Also, I think we should use\n> > exactly the same way to verify the test even though we want to keep\n> > the log level as DEBUG2 to check logs in case of any failures.\n> >\n>\n> IIUC the reasons are due to conflicting requirements. e.g.\n> - A longer delay like 3s might work better for testing this feature, but OTOH\n> - A longer delay will also cause the whole BF execution to take longer\n>\n\nSure, but we already have the same test for a similar feature and it\nseems to be a proven reliable way to test the feature. We do seem to\nhave seen buildfarm failures for tests related to\nrecovery_min_apply_delay and the current way is quite stable, so I\nwould prefer to go with that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 11:20:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Friday, February 3, 2023 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Feb 3, 2023 at 6:41 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > ...\n> > > > Besides, I am not sure it's a stable test to check the log. Is it\n> > > > possible that there's no such log on a slow machine? I modified\n> > > > the code to sleep 1s at the beginning of apply_dispatch(), then\n> > > > the new added test failed because the server log cannot match.\n> > > To get the log by itself is necessary to ensure that the delay is\n> > > conducted by the apply worker, because we emit the diffms only if\n> > > it's bigger than 0 in maybe_apply_delay(). If we omit the step, we\n> > > are not sure the delay is caused by other reasons or the time-delayed\n> feature.\n> > >\n> > > As you mentioned, it's possible that no log is emitted on slow\n> > > machine. Then, the idea to make the test safer for such machines should\n> be to make the delayed time longer.\n> > > But we shortened the delay time to 1 second to mitigate the long test\n> execution time of this TAP test.\n> > > So, I'm not sure if it's a good idea to make it longer again.\n> >\n> > I think there are a couple of things that can be done about this problem:\n> >\n> > 1. If you need the code/test to remain as-is then at least the test\n> > message could include some comforting text like \"(this can fail on\n> > slow machines when the delay time is already exceeded)\" so then a test\n> > failure will not cause undue alarm.\n> >\n> > 2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so that\n> > it will *always* log the remaining wait time even if that wait time\n> > becomes negative. Then I think the test cases can be made\n> > deterministic instead of relying on good luck. 
This seems like the\n> > better option.\n> >\n> \n> I don't understand why we have to do any of this instead of using 3s as\n> min_apply_delay similar to what we are doing in\n> src/test/recovery/t/005_replay_delay. Also, I think we should use exactly the\n> same way to verify the test even though we want to keep the log level as\n> DEBUG2 to check logs in case of any failures.\nOK, will try to make our tests similar to the tests in 005_replay_delay\nas much as possible.\n \n\n> Also, I don't see the need to add more tests like the ones below:\n> +# Test whether ALTER SUBSCRIPTION changes the delayed time of the apply\n> +worker # (1 day 5 minutes). Note that the extra 5 minute is to account\n> +for any # decoding/network overhead.\n> \n> Let's try to add tests similar to what we have for recovery_min_apply_delay\n> unless there is some functionality in this patch that is not there in the\n> recovery_min_apply_delay feature.\nThe above command is a preparation step to check a behavior unique to time-delayed\nlogical replication, which is that DISABLING a subscription causes the apply worker not to apply\nthe suspended (delayed) transaction. So, it should be OK to have this test.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Fri, 3 Feb 2023 06:35:29 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thurs, Feb 2, 2023 16:04 PM Takamichi Osumi (Fujitsu) <osumi.takamichi@fujitsu.com> wrote:\r\n> Attached the updated patch v26 accordingly.\r\n\r\nThanks for your patch.\r\n\r\nHere is a comment:\r\n\r\n1. The checks in function AlterSubscription\r\n+\t\t\t\t\t/*\r\n+\t\t\t\t\t * The combination of parallel streaming mode and\r\n+\t\t\t\t\t * min_apply_delay is not allowed. See\r\n+\t\t\t\t\t * parse_subscription_options for details of the reason.\r\n+\t\t\t\t\t */\r\n+\t\t\t\t\tif (opts.streaming == LOGICALREP_STREAM_PARALLEL)\r\n+\t\t\t\t\t\tif ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && opts.min_apply_delay > 0) ||\r\n+\t\t\t\t\t\t\t(!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0))\r\nand\r\n+\t\t\t\t\t/*\r\n+\t\t\t\t\t * The combination of parallel streaming mode and\r\n+\t\t\t\t\t * min_apply_delay is not allowed.\r\n+\t\t\t\t\t */\r\n+\t\t\t\t\tif (opts.min_apply_delay > 0)\r\n+\t\t\t\t\t\tif ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming == LOGICALREP_STREAM_PARALLEL) ||\r\n+\t\t\t\t\t\t\t(!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream == LOGICALREP_STREAM_PARALLEL))\r\n\r\nI think the case where the options \"min_apply_delay>0\" and \"streaming=parallel\"\r\nare set at the same time seems to have been checked in the function\r\nparse_subscription_options, how about simplifying these two if-statements here\r\nto the following:\r\n```\r\nif (opts.streaming == LOGICALREP_STREAM_PARALLEL &&\r\n\t!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\r\n\tsub->minapplydelay > 0)\r\n\r\nand\r\n\r\nif (opts.min_apply_delay > 0 &&\r\n\t!IsSet(opts.specified_opts, SUBOPT_STREAMING) &&\r\n\tsub->stream == LOGICALREP_STREAM_PARALLEL)\r\n```\r\n\r\nRegards,\r\nWang Wei\r\n",
"msg_date": "Fri, 3 Feb 2023 09:42:34 +0000",
"msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 3:12 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> Here is a comment:\n>\n> 1. The checks in function AlterSubscription\n> + /*\n> + * The combination of parallel streaming mode and\n> + * min_apply_delay is not allowed. See\n> + * parse_subscription_options for details of the reason.\n> + */\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL)\n> + if ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && opts.min_apply_delay > 0) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0))\n> and\n> + /*\n> + * The combination of parallel streaming mode and\n> + * min_apply_delay is not allowed.\n> + */\n> + if (opts.min_apply_delay > 0)\n> + if ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming == LOGICALREP_STREAM_PARALLEL) ||\n> + (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream == LOGICALREP_STREAM_PARALLEL))\n>\n> I think the case where the options \"min_apply_delay>0\" and \"streaming=parallel\"\n> are set at the same time seems to have been checked in the function\n> parse_subscription_options, how about simplifying these two if-statements here\n> to the following:\n> ```\n> if (opts.streaming == LOGICALREP_STREAM_PARALLEL &&\n> !IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> sub->minapplydelay > 0)\n>\n> and\n>\n> if (opts.min_apply_delay > 0 &&\n> !IsSet(opts.specified_opts, SUBOPT_STREAMING) &&\n> sub->stream == LOGICALREP_STREAM_PARALLEL)\n> ```\n>\n\nWon't just checking if ((opts.streaming == LOGICALREP_STREAM_PARALLEL\n&& sub->minapplydelay > 0) || (opts.min_apply_delay > 0 && sub->stream\n== LOGICALREP_STREAM_PARALLEL)) be sufficient in that case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Feb 2023 17:22:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Friday, February 3, 2023 3:35 PM I wrote:\n> On Friday, February 3, 2023 2:21 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > On Fri, Feb 3, 2023 at 6:41 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> > > On Thu, Feb 2, 2023 at 7:21 PM Takamichi Osumi (Fujitsu)\n> > > <osumi.takamichi@fujitsu.com> wrote:\n> > > ...\n> > > > > Besides, I am not sure it's a stable test to check the log. Is\n> > > > > it possible that there's no such log on a slow machine? I\n> > > > > modified the code to sleep 1s at the beginning of\n> > > > > apply_dispatch(), then the new added test failed because the server\n> log cannot match.\n> > > > To get the log by itself is necessary to ensure that the delay is\n> > > > conducted by the apply worker, because we emit the diffms only if\n> > > > it's bigger than 0 in maybe_apply_delay(). If we omit the step, we\n> > > > are not sure the delay is caused by other reasons or the\n> > > > time-delayed\n> > feature.\n> > > >\n> > > > As you mentioned, it's possible that no log is emitted on slow\n> > > > machine. Then, the idea to make the test safer for such machines\n> > > > should\n> > be to make the delayed time longer.\n> > > > But we shortened the delay time to 1 second to mitigate the long\n> > > > test\n> > execution time of this TAP test.\n> > > > So, I'm not sure if it's a good idea to make it longer again.\n> > >\n> > > I think there are a couple of things that can be done about this problem:\n> > >\n> > > 1. If you need the code/test to remain as-is then at least the test\n> > > message could include some comforting text like \"(this can fail on\n> > > slow machines when the delay time is already exceeded)\" so then a\n> > > test failure will not cause undue alarm.\n> > >\n> > > 2. Try moving the DEBUG2 elog (in function maybe_apply_delay) so\n> > > that it will *always* log the remaining wait time even if that wait\n> > > time becomes negative. 
Then I think the test cases can be made\n> > > deterministic instead of relying on good luck. This seems like the\n> > > better option.\n> > >\n> >\n> > I don't understand why we have to do any of this instead of using 3s\n> > as min_apply_delay similar to what we are doing in\n> > src/test/recovery/t/005_replay_delay. Also, I think we should use\n> > exactly the same way to verify the test even though we want to keep\n> > the log level as\n> > DEBUG2 to check logs in case of any failures.\n> OK, will try to make our tests similar to the tests in 005_replay_delay as much\n> as possible.\nI've updated the TAP test and aligned it with 005_replay_delay.pl.\n\nFor coverage, I have the stream of in-progress transaction test case\nand ALTER SUBSCRIPTION DISABLE behavior, which are unique to logical replication.\nAlso, I ran pgindent and pgperltidy. Note that the latter half of\n005_replay_delay.pl doesn't seem to apply to time-delayed logical replication\n(e.g. promotion). So, I did not include those parts.\n\nKindly have a look at the attached v27.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Sat, 4 Feb 2023 06:04:39 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn wangw.fnst@fujitsu.com Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Fri, Feb 3, 2023 at 3:12 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > Here is a comment:\n> >\n> > 1. The checks in function AlterSubscription\n> > + /*\n> > + * The combination of parallel\n> streaming mode and\n> > + * min_apply_delay is not\n> allowed. See\n> > + * parse_subscription_options\n> for details of the reason.\n> > + */\n> > + if (opts.streaming ==\n> LOGICALREP_STREAM_PARALLEL)\n> > + if\n> ((IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> opts.min_apply_delay > 0) ||\n> > +\n> > + (!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > + sub->minapplydelay > 0))\n> > and\n> > + /*\n> > + * The combination of parallel\n> streaming mode and\n> > + * min_apply_delay is not\n> allowed.\n> > + */\n> > + if (opts.min_apply_delay > 0)\n> > + if\n> ((IsSet(opts.specified_opts, SUBOPT_STREAMING) && opts.streaming ==\n> LOGICALREP_STREAM_PARALLEL) ||\n> > +\n> > + (!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\n> > + LOGICALREP_STREAM_PARALLEL))\n> >\n> > I think the case where the options \"min_apply_delay>0\" and\n> \"streaming=parallel\"\n> > are set at the same time seems to have been checked in the function\n> > parse_subscription_options, how about simplifying these two\n> > if-statements here to the following:\n> > ```\n> > if (opts.streaming == LOGICALREP_STREAM_PARALLEL &&\n> > !IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > sub->minapplydelay > 0)\n> >\n> > and\n> >\n> > if (opts.min_apply_delay > 0 &&\n> > !IsSet(opts.specified_opts, SUBOPT_STREAMING) &&\n> > sub->stream == LOGICALREP_STREAM_PARALLEL) ```\n> >\n> \n> Won't just checking if ((opts.streaming ==\n> LOGICALREP_STREAM_PARALLEL && sub->minapplydelay > 0) ||\n> (opts.min_apply_delay > 0 && sub->stream ==\n> LOGICALREP_STREAM_PARALLEL)) be sufficient in that case?\nWe need checks for !IsSet(). 
If we don't have those,\nwe error out when executing ALTER SUBSCRIPTION with min_apply_delay = 0\nand streaming = parallel at the same time, for a subscription whose min_apply_delay\nsetting is bigger than 0, for instance. In this case, we pass (don't error out in)\nparse_subscription_options()'s test for the combination of mutually exclusive options,\nand then error out on matching the first condition\nopts.streaming == parallel and sub->minapplydelay > 0 above.\n\nAlso, Wang-san's refactoring proposal makes sense. Adopted.\nRegarding the style of how to write min_apply_delay > 0\n(or just putting min_apply_delay in 'if' conditions) for checking parameters,\nI agreed with Amit-san, so I kept them as they are in the latest patch v27.\n\n\nKindly have a look at v27 posted in [1]\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83738F2BEF83DE525410E3ACEDD49%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Sat, 4 Feb 2023 06:24:18 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Sat, Feb 4, 2023 at 5:04 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n...\n>\n> Kindly have a look at the attached v27.\n>\n\nHere are some review comments for patch v27-0001.\n\n======\nsrc/test/subscription/t/032_apply_delay.pl\n\n1.\n+# Confirm the time-delayed replication has been effective from the server log\n+# message where the apply worker emits for applying delay. Moreover, verify\n+# that the current worker's remaining wait time is sufficiently bigger than the\n+# expected value, in order to check any update of the min_apply_delay.\n+sub check_apply_delay_log\n\n~\n\n\"has been effective from the server log\" --> \"worked, by inspecting\nthe server log\"\n\n~~~\n\n2.\n+my $delay = 3;\n\nMight be better to name this variable as 'min_apply_delay'.\n\n~~~\n\n3.\n+# Now wait for replay to complete on publisher. We're done waiting when the\n+# subscriber has applyed up to the publisher LSN.\n+$node_publisher->wait_for_catchup($appname);\n\n3a.\nSomething seemed wrong with the comment.\n\nWas it meant to say more like? 
\"The publisher waits for the\nreplication to complete\".\n\nTypo: \"applyed\"\n\n~\n\n3b.\nInstead of doing this wait_for_catchup stuff why don't you just use a\nsynchronous pub/sub and then the publication will just block\ninternally like you require but without you having to block using test\ncode?\n\n~~~\n\n4.\n+# Run a query to make sure that the reload has taken effect.\n+$node_publisher->safe_psql('postgres', q{SELECT 1});\n\nSUGGESTION (for the comment)\n# Running a dummy query causes the config to be reloaded.\n\n~~~\n\n5.\n+# Confirm the record is not applied expectedly\n+my $result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(a) FROM tab_int WHERE a = 0;\");\n+is($result, qq(0), \"check the delayed transaction was not applied\");\n\n\"expectedly\" ??\n\nSUGGESTION (for comment)\n# Confirm the record was not applied\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 6 Feb 2023 14:02:55 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, February 6, 2023 12:03 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> On Sat, Feb 4, 2023 at 5:04 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> ...\n> >\n> > Kindly have a look at the attached v27.\n> >\n> \n> Here are some review comments for patch v27-0001.\nThanks for checking!\n\n> ======\n> src/test/subscription/t/032_apply_delay.pl\n> \n> 1.\n> +# Confirm the time-delayed replication has been effective from the\n> +server log # message where the apply worker emits for applying delay.\n> +Moreover, verify # that the current worker's remaining wait time is\n> +sufficiently bigger than the # expected value, in order to check any update of\n> the min_apply_delay.\n> +sub check_apply_delay_log\n> \n> ~\n> \n> \"has been effective from the server log\" --> \"worked, by inspecting the server\n> log\"\nSounds good to me. Also,\nthis is a unique part for time-delayed logical replication.\nSo, we can update those as we want. Fixed. \n\n\n> ~~~\n> \n> 2.\n> +my $delay = 3;\n> \n> Might be better to name this variable as 'min_apply_delay'.\nI named this variable by following the test of recovery_min_apply_delay\n(src/test/recovery/005_replay_delay.pl). So, this is aligned\nwith the test and I'd like to keep it as it is.\n\n\n> ~~~\n> \n> 3.\n> +# Now wait for replay to complete on publisher. We're done waiting when\n> +the # subscriber has applyed up to the publisher LSN.\n> +$node_publisher->wait_for_catchup($appname);\n> \n> 3a.\n> Something seemed wrong with the comment.\n> \n> Was it meant to say more like? \"The publisher waits for the replication to\n> complete\".\n> \n> Typo: \"applyed\"\nYour wording looks better than mine. 
Fixed.\n\n\n> ~\n> \n> 3b.\n> Instead of doing this wait_for_catchup stuff why don't you just use a\n> synchronous pub/sub and then the publication will just block internally like\n> you require but without you having to block using test code?\nThis is the style of 005_replay_delay.pl, and this test is aligned with it.\nSo, I'd like to keep the current way of comparing times as it is.\n\nEven if we could omit wait_for_catchup(), there would be new code\nfor synchronous replication, and that would make the min_apply_delay tests\ndiverge more from the corresponding one. Note that if we use\nthe synchronous mode, we need to turn it off for the last\nALTER SUBSCRIPTION DISABLE test case whose min_apply_delay is set to 1 day 5 min\nand execute one record insert after that. This will make the tests confusing.\n> ~~~\n> \n> 4.\n> +# Run a query to make sure that the reload has taken effect.\n> +$node_publisher->safe_psql('postgres', q{SELECT 1});\n> \n> SUGGESTION (for the comment)\n> # Running a dummy query causes the config to be reloaded.\nFixed.\n\n\n> ~~~\n> \n> 5.\n> +# Confirm the record is not applied expectedly my $result =\n> +$node_subscriber->safe_psql('postgres',\n> + \"SELECT count(a) FROM tab_int WHERE a = 0;\"); is($result, qq(0),\n> +\"check the delayed transaction was not applied\");\n> \n> \"expectedly\" ??\n> \n> SUGGESTION (for comment)\n> # Confirm the record was not applied\nFixed.\n\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Mon, 6 Feb 2023 07:05:56 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 12:36 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\nI have made a couple of changes in the attached: (a) changed a few\nerror and LOG messages; (b) added/changed comments. See if these look\ngood to you, then please include them in the next version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 6 Feb 2023 17:21:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Jan 24, 2023 at 5:02 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n>\n> - elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X in-delayed: %d\",\n> + elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write %X/%X, flush %X/%X, apply delay: %s\",\n> force,\n> LSN_FORMAT_ARGS(recvpos),\n> LSN_FORMAT_ARGS(writepos),\n> LSN_FORMAT_ARGS(flushpos),\n> - in_delayed_apply);\n> + in_delayed_apply? \"yes\" : \"no\");\n>\n> It is better to use a string to represent the yes/no option.\n>\n\nI think it is better to be consistent with the existing force\nparameter which is also boolean, otherwise, it will look odd.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 17:26:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Monday, February 6, 2023 8:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Feb 6, 2023 at 12:36 PM Takamichi Osumi (Fujitsu)\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> \n> I have made a couple of changes in the attached: (a) changed a few error and\n> LOG messages; (a) added/changed comments. See, if these look good to you\n> then please include them in the next version.\nHi, thanks for sharing the patch!\n\nThe proposed changes make comments easier to understand\nand more aligned with other existing comments. So, LGTM.\n\nThe attached patch v29 has included your changes.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Mon, 6 Feb 2023 13:10:01 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Monday, February 6, 2023 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Jan 24, 2023 at 5:02 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> >\n> > - elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X,\n> write %X/%X, flush %X/%X in-delayed: %d\",\n> > + elog(DEBUG2, \"sending feedback (force %d) to recv %X/%X, write\n> > + %X/%X, flush %X/%X, apply delay: %s\",\n> > force,\n> > LSN_FORMAT_ARGS(recvpos),\n> > LSN_FORMAT_ARGS(writepos),\n> > LSN_FORMAT_ARGS(flushpos),\n> > - in_delayed_apply);\n> > + in_delayed_apply? \"yes\" : \"no\");\n> >\n> > It is better to use a string to represent the yes/no option.\n> >\n> \n> I think it is better to be consistent with the existing force parameter which is\n> also boolean, otherwise, it will look odd.\nAgreed. The latest patch v29 posted in [1] followed this suggestion.\n\nKindly have a look at it.\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373A59E7B74AA4F96B62BEAEDDA9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 6 Feb 2023 13:21:13 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for v29-0001.\n\n======\nCommit Message\n\n1.\nDiscussion: https://postgr.es/m/CAB-JLwYOYwL=XTyAXKiH5CtM_Vm8KjKh7aaitCKvmCh4rzr5pQ@mail.gmail.com\n\ntmp\n\n~\n\nWhat's that \"tmp\" doing there? A typo?\n\n======\ndoc/src/sgml/catalogs.sgml\n\n2.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subminapplydelay</structfield> <type>int4</type>\n+ </para>\n+ <para>\n+ The minimum delay (ms) for applying changes.\n+ </para></entry>\n+ </row>\n\nFor consistency remove the period (.) because the other\nsingle-sentence descriptions on this page do not have one.\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n3. AlterSubscription\n+ errmsg(\"cannot set parallel streaming mode for subscription with %s\",\n+ \"min_apply_delay\"));\n\nSince there are no translator considerations here why not write it like this:\n\nerrmsg(\"cannot set parallel streaming mode for subscription with\nmin_apply_delay\")\n\n~~~\n\n4. AlterSubscription\n+ errmsg(\"cannot set %s for subscription in parallel streaming mode\",\n+ \"min_apply_delay\"));\n\nSince there are no translator considerations here why not write it like this:\n\nerrmsg(\"cannot set min_apply_delay for subscription in parallel streaming mode\")\n\n~~~\n\n5.\n+defGetMinApplyDelay(DefElem *def)\n+{\n+ char *input_string;\n+ int result;\n+ const char *hintmsg;\n+\n+ input_string = defGetString(def);\n+\n+ /*\n+ * Parse given string as parameter which has millisecond unit\n+ */\n+ if (!parse_int(input_string, &result, GUC_UNIT_MS, &hintmsg))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n+ \"min_apply_delay\", input_string),\n+ hintmsg ? 
errhint(\"%s\", _(hintmsg)) : 0));\n+\n+ /*\n+ * Check both the lower boundary for the valid min_apply_delay range and\n+ * the upper boundary as the safeguard for some platforms where INT_MAX is\n+ * wider than int32 respectively. Although parse_int() has confirmed that\n+ * the result is less than or equal to INT_MAX, the value will be stored\n+ * in a catalog column of int32.\n+ */\n+ if (result < 0 || result > PG_INT32_MAX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"%d ms is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\",\n+ result,\n+ \"min_apply_delay\",\n+ 0, PG_INT32_MAX)));\n+\n+ return result;\n+}\n\n5a.\nSince there are no translator considerations here why not write the\nfirst error like:\n\nerrmsg(\"invalid value for parameter \\\"min_apply_delay\\\": \\\"%s\\\"\",\ninput_string)\n\n~\n\n5b.\nSince there are no translator considerations here why not write the\nsecond error like:\n\nerrmsg(\"%d ms is outside the valid range for parameter\n\\\"min_apply_delay\\\" (%d .. %d)\",\nresult, 0, PG_INT32_MAX))\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Feb 2023 11:32:43 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> ======\r\n> Commit Message\r\n> \r\n> 1.\r\n> Discussion:\r\n> https://postgr.es/m/CAB-JLwYOYwL=XTyAXKiH5CtM_Vm8KjKh7aaitCKvmCh4r\r\n> zr5pQ@mail.gmail.com\r\n> \r\n> tmp\r\n> \r\n> ~\r\n> \r\n> What's that \"tmp\" doing there? A typo?\r\n\r\nRemoved. It was a typo.\r\nI used the `git rebase` command to combine the local commits,\r\nbut the commit message seems to have remained.\r\n\r\n> ======\r\n> doc/src/sgml/catalogs.sgml\r\n> \r\n> 2.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>subminapplydelay</structfield> <type>int4</type>\r\n> + </para>\r\n> + <para>\r\n> + The minimum delay (ms) for applying changes.\r\n> + </para></entry>\r\n> + </row>\r\n> \r\n> For consistency remove the period (.) because the other\r\n> single-sentence descriptions on this page do not have one.\r\n\r\nI have also confirmed and agreed. Fixed.\r\n\r\n> ======\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 3. AlterSubscription\r\n> + errmsg(\"cannot set parallel streaming mode for subscription with %s\",\r\n> + \"min_apply_delay\"));\r\n> \r\n> Since there are no translator considerations here why not write it like this:\r\n> \r\n> errmsg(\"cannot set parallel streaming mode for subscription with\r\n> min_apply_delay\")\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 4. 
AlterSubscription\r\n> + errmsg(\"cannot set %s for subscription in parallel streaming mode\",\r\n> + \"min_apply_delay\"));\r\n> \r\n> Since there are no translator considerations here why not write it like this:\r\n> \r\n> errmsg(\"cannot set min_apply_delay for subscription in parallel streaming mode\")\r\n\r\nFixed.\r\n\r\n> ~~~\r\n> \r\n> 5.\r\n> +defGetMinApplyDelay(DefElem *def)\r\n> +{\r\n> + char *input_string;\r\n> + int result;\r\n> + const char *hintmsg;\r\n> +\r\n> + input_string = defGetString(def);\r\n> +\r\n> + /*\r\n> + * Parse given string as parameter which has millisecond unit\r\n> + */\r\n> + if (!parse_int(input_string, &result, GUC_UNIT_MS, &hintmsg))\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\r\n> + \"min_apply_delay\", input_string),\r\n> + hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\r\n> +\r\n> + /*\r\n> + * Check both the lower boundary for the valid min_apply_delay range and\r\n> + * the upper boundary as the safeguard for some platforms where INT_MAX is\r\n> + * wider than int32 respectively. Although parse_int() has confirmed that\r\n> + * the result is less than or equal to INT_MAX, the value will be stored\r\n> + * in a catalog column of int32.\r\n> + */\r\n> + if (result < 0 || result > PG_INT32_MAX)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"%d ms is outside the valid range for parameter \\\"%s\\\" (%d .. 
%d)\",\r\n> + result,\r\n> + \"min_apply_delay\",\r\n> + 0, PG_INT32_MAX)));\r\n> +\r\n> + return result;\r\n> +}\r\n> \r\n> 5a.\r\n> Since there are no translator considerations here why not write the\r\n> first error like:\r\n> \r\n> errmsg(\"invalid value for parameter \\\"min_apply_delay\\\": \\\"%s\\\"\",\r\n> input_string)\r\n> \r\n> ~\r\n> \r\n> 5b.\r\n> Since there are no translator considerations here why not write the\r\n> second error like:\r\n> \r\n> errmsg(\"%d ms is outside the valid range for parameter\r\n> \\\"min_apply_delay\\\" (%d .. %d)\",\r\n> result, 0, PG_INT32_MAX))\r\n\r\nBoth of you said were fixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Tue, 7 Feb 2023 02:52:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 6:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> 5.\n> +defGetMinApplyDelay(DefElem *def)\n> +{\n> + char *input_string;\n> + int result;\n> + const char *hintmsg;\n> +\n> + input_string = defGetString(def);\n> +\n> + /*\n> + * Parse given string as parameter which has millisecond unit\n> + */\n> + if (!parse_int(input_string, &result, GUC_UNIT_MS, &hintmsg))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid value for parameter \\\"%s\\\": \\\"%s\\\"\",\n> + \"min_apply_delay\", input_string),\n> + hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\n> +\n> + /*\n> + * Check both the lower boundary for the valid min_apply_delay range and\n> + * the upper boundary as the safeguard for some platforms where INT_MAX is\n> + * wider than int32 respectively. Although parse_int() has confirmed that\n> + * the result is less than or equal to INT_MAX, the value will be stored\n> + * in a catalog column of int32.\n> + */\n> + if (result < 0 || result > PG_INT32_MAX)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"%d ms is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\",\n> + result,\n> + \"min_apply_delay\",\n> + 0, PG_INT32_MAX)));\n> +\n> + return result;\n> +}\n>\n> 5a.\n> Since there are no translator considerations here why not write the\n> first error like:\n>\n> errmsg(\"invalid value for parameter \\\"min_apply_delay\\\": \\\"%s\\\"\",\n> input_string)\n>\n> ~\n>\n> 5b.\n> Since there are no translator considerations here why not write the\n> second error like:\n>\n> errmsg(\"%d ms is outside the valid range for parameter\n> \\\"min_apply_delay\\\" (%d .. %d)\",\n> result, 0, PG_INT32_MAX))\n>\n\nI see that existing usage in the code matches what the patch had\nbefore this comment. 
See below and similar usages in the code.\nif (start <= 0)\nereport(ERROR,\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\nerrmsg(\"invalid value for parameter \\\"%s\\\": %d\",\n\"start\", start)));\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 09:10:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 7 Feb 2023 09:10:01 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Feb 7, 2023 at 6:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > 5b.\n> > Since there are no translator considerations here why not write the\n> > second error like:\n> >\n> > errmsg(\"%d ms is outside the valid range for parameter\n> > \\\"min_apply_delay\\\" (%d .. %d)\",\n> > result, 0, PG_INT32_MAX))\n> >\n> \n> I see that existing usage in the code matches what the patch had\n> before this comment. See below and similar usages in the code.\n> if (start <= 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"invalid value for parameter \\\"%s\\\": %d\",\n> \"start\", start)));\n\nThe same errmsg text occurs many times in the tree. On the other hand\nthe pointed message is the only one. I suppose Peter considered this\naspect.\n\n# \"%d%s%s is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\"\n# also appears just once\n\nAs for me, it seems to me a good practice to do that regardless of the\nnumber of duplicates to (semi)mechanically avoid duplicates.\n\n(But I believe I would do as Peter suggests by myself for the first\ncut, though:p)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Feb 2023 13:37:06 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Thanks!\n\nAt Mon, 6 Feb 2023 13:10:01 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in \n> The attached patch v29 has included your changes.\n\ncatalogs.sgml\n\n+ <para>\n+ The minimum delay (ms) for applying changes.\n+ </para></entry>\n\nI think we don't use unit symbols that way. Namely I think we would\nwrite it as \"The minimum delay for applying changes in milliseconds\"\n\n\nalter_subscription.sgml\n\n are <literal>slot_name</literal>,\n <literal>synchronous_commit</literal>,\n <literal>binary</literal>, <literal>streaming</literal>,\n- <literal>disable_on_error</literal>, and\n- <literal>origin</literal>.\n+ <literal>disable_on_error</literal>,\n+ <literal>origin</literal>, and\n+ <literal>min_apply_delay</literal>.\n </para>\n\nBy the way, is there any rule for the order among the words? They\ndon't seem in alphabetical order nor in the same order as the\ncreate_subscription page. (It seems like in the order of SUBOPT_*\nsymbols, but I'm not sure it's a good idea..)\n\n\nsubscriptioncmds.c\n\n+\t\t\t\t\tif (opts.streaming == LOGICALREP_STREAM_PARALLEL &&\n+\t\t\t\t\t\t!IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0)\n..\n+\t\t\t\t\tif (opts.min_apply_delay > 0 &&\n+\t\t\t\t\t\t!IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream == LOGICALREP_STREAM_PARALLEL)\n\nDon't we wrap the lines?\n\n\nworker.c\n\n+\t\tif (wal_receiver_status_interval > 0 &&\n+\t\t\tdiffms > wal_receiver_status_interval * 1000L)\n+\t\t{\n+\t\t\tWaitLatch(MyLatch,\n+\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+\t\t\t\t\t wal_receiver_status_interval * 1000L,\n+\t\t\t\t\t WAIT_EVENT_RECOVERY_APPLY_DELAY);\n+\t\t\tsend_feedback(last_received, true, false, true);\n+\t\t}\n+\t\telse\n+\t\t\tWaitLatch(MyLatch,\n+\t\t\t\t\t WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+\t\t\t\t\t diffms,\n+\t\t\t\t\t WAIT_EVENT_RECOVERY_APPLY_DELAY);\n\nsend_feedback always handles the case 
where\nwal_receiver_status_interval == 0. Thus we can simply wait for\nmin(wal_receiver_status_interval, diffms) then call send_feedback()\nunconditionally.\n\n\n-start_apply(XLogRecPtr origin_startpos)\n+start_apply(void)\n\n-LogicalRepApplyLoop(XLogRecPtr last_received)\n+LogicalRepApplyLoop(void)\n\nDoes this patch require this change?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 Feb 2023 13:43:03 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 10:07 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 7 Feb 2023 09:10:01 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Feb 7, 2023 at 6:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > 5b.\n> > > Since there are no translator considerations here why not write the\n> > > second error like:\n> > >\n> > > errmsg(\"%d ms is outside the valid range for parameter\n> > > \\\"min_apply_delay\\\" (%d .. %d)\",\n> > > result, 0, PG_INT32_MAX))\n> > >\n> >\n> > I see that existing usage in the code matches what the patch had\n> > before this comment. See below and similar usages in the code.\n> > if (start <= 0)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > errmsg(\"invalid value for parameter \\\"%s\\\": %d\",\n> > \"start\", start)));\n>\n> The same errmsg text occurs mamy times in the tree. On the other hand\n> the pointed message is the only one. I suppose Peter considered this\n> aspect.\n>\n> # \"%d%s%s is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\"\n> # also appears just once\n>\n> As for me, it seems to me a good practice to do that regadless of the\n> number of duplicates to (semi)mechanically avoid duplicates.\n>\n> (But I believe I would do as Peter suggests by myself for the first\n> cut, though:p)\n>\n\nPersonally, I would prefer consistency. I think we can later start a\nnew thread to change the existing message and if there is a consensus\nand value in the same then we could use the same style here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:31:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Feb 7, 2023 at 10:07 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 7 Feb 2023 09:10:01 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Tue, Feb 7, 2023 at 6:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > 5b.\n> > > > Since there are no translator considerations here why not write the\n> > > > second error like:\n> > > >\n> > > > errmsg(\"%d ms is outside the valid range for parameter\n> > > > \\\"min_apply_delay\\\" (%d .. %d)\",\n> > > > result, 0, PG_INT32_MAX))\n> > > >\n> > >\n> > > I see that existing usage in the code matches what the patch had\n> > > before this comment. See below and similar usages in the code.\n> > > if (start <= 0)\n> > > ereport(ERROR,\n> > > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > errmsg(\"invalid value for parameter \\\"%s\\\": %d\",\n> > > \"start\", start)));\n> >\n> > The same errmsg text occurs mamy times in the tree. On the other hand\n> > the pointed message is the only one. I suppose Peter considered this\n> > aspect.\n> >\n> > # \"%d%s%s is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\"\n> > # also appears just once\n> >\n> > As for me, it seems to me a good practice to do that regadless of the\n> > number of duplicates to (semi)mechanically avoid duplicates.\n> >\n> > (But I believe I would do as Peter suggests by myself for the first\n> > cut, though:p)\n> >\n>\n> Personally, I would prefer consistency. I think we can later start a\n> new thread to change the existing message and if there is a consensus\n> and value in the same then we could use the same style here as well.\n>\n\nOf course, if there is a convention then we should stick to it.\n\nMy understanding was that (string literal) message parameters are\nspecified separately from the message format string primarily as an\naid to translators. 
That makes good sense for parameters with names\nthat are also English words (like \"start\" etc), but for non-word\nparameters like \"min_apply_delay\" there is no such ambiguity in the\nfirst place.\n\nAnyway, I am fine with it being written either way.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 7 Feb 2023 16:12:33 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 10:13 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 6 Feb 2023 13:10:01 +0000, \"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com> wrote in\n> > The attached patch v29 has included your changes.\n>\n> catalogs.sgml\n>\n> + <para>\n> + The minimum delay (ms) for applying changes.\n> + </para></entry>\n>\n> I think we don't use unit symbols that way. Namely I think we would\n> write it as \"The minimum delay for applying changes in milliseconds\"\n>\n\nOkay, if we prefer to use milliseconds, then how about: \"The minimum\ndelay, in milliseconds, for applying changes\"?\n\n>\n> alter_subscription.sgml\n>\n> are <literal>slot_name</literal>,\n> <literal>synchronous_commit</literal>,\n> <literal>binary</literal>, <literal>streaming</literal>,\n> - <literal>disable_on_error</literal>, and\n> - <literal>origin</literal>.\n> + <literal>disable_on_error</literal>,\n> + <literal>origin</literal>, and\n> + <literal>min_apply_delay</literal>.\n> </para>\n>\n> By the way, is there any rule for the order among the words?\n>\n\nCurrently, it is in the order in which the corresponding features are added.\n\n> They\n> don't seem in alphabetical order nor in the same order to the\n> create_sbuscription page.\n>\n\nIn create_subscription page also, it appears to be in the order in\nwhich those are added with a difference that they are divided into two\ncategories (parameters that control what happens during subscription\ncreation and parameters that control the subscription's replication\nbehavior after it has been created)\n\n> (I seems like in the order of SUBOPT_*\n> symbols, but I'm not sure it's a good idea..)\n>\n>\n> subscriptioncmds.c\n>\n> + if (opts.streaming == LOGICALREP_STREAM_PARALLEL &&\n> + !IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) && sub->minapplydelay > 0)\n> ..\n> + if (opts.min_apply_delay > 0 &&\n> + !IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream == 
LOGICALREP_STREAM_PARALLEL)\n>\n> Don't we wrap the lines?\n>\n>\n> worker.c\n>\n> + if (wal_receiver_status_interval > 0 &&\n> + diffms > wal_receiver_status_interval * 1000L)\n> + {\n> + WaitLatch(MyLatch,\n> + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> + wal_receiver_status_interval * 1000L,\n> + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> + send_feedback(last_received, true, false, true);\n> + }\n> + else\n> + WaitLatch(MyLatch,\n> + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> + diffms,\n> + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n>\n> send_feedback always handles the case where\n> wal_receiver_status_interval == 0.\n>\n\nIt only handles when force is false but here we are using that as\ntrue. So, not sure, if what you said would be an improvement.\n\n> thus we can simply wait for\n> min(wal_receiver_status_interval, diffms) then call send_feedback()\n> unconditionally.\n>\n>\n> -start_apply(XLogRecPtr origin_startpos)\n> +start_apply(void)\n>\n> -LogicalRepApplyLoop(XLogRecPtr last_received)\n> +LogicalRepApplyLoop(void)\n>\n> Does this patch requires this change?\n>\n\nI think this is because the scope of last_received has been changed so\nthat it can be used to pass in send_feedback() during the delay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:56:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 10:42 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Feb 7, 2023 at 4:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Feb 7, 2023 at 10:07 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Tue, 7 Feb 2023 09:10:01 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > On Tue, Feb 7, 2023 at 6:03 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > > > 5b.\n> > > > > Since there are no translator considerations here why not write the\n> > > > > second error like:\n> > > > >\n> > > > > errmsg(\"%d ms is outside the valid range for parameter\n> > > > > \\\"min_apply_delay\\\" (%d .. %d)\",\n> > > > > result, 0, PG_INT32_MAX))\n> > > > >\n> > > >\n> > > > I see that existing usage in the code matches what the patch had\n> > > > before this comment. See below and similar usages in the code.\n> > > > if (start <= 0)\n> > > > ereport(ERROR,\n> > > > (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > > > errmsg(\"invalid value for parameter \\\"%s\\\": %d\",\n> > > > \"start\", start)));\n> > >\n> > > The same errmsg text occurs mamy times in the tree. On the other hand\n> > > the pointed message is the only one. I suppose Peter considered this\n> > > aspect.\n> > >\n> > > # \"%d%s%s is outside the valid range for parameter \\\"%s\\\" (%d .. %d)\"\n> > > # also appears just once\n> > >\n> > > As for me, it seems to me a good practice to do that regadless of the\n> > > number of duplicates to (semi)mechanically avoid duplicates.\n> > >\n> > > (But I believe I would do as Peter suggests by myself for the first\n> > > cut, though:p)\n> > >\n> >\n> > Personally, I would prefer consistency. 
I think we can later start a\n> > new thread to change the existing message and if there is a consensus\n> > and value in the same then we could use the same style here as well.\n> >\n>\n> Of course, if there is a convention then we should stick to it.\n>\n> My understanding was that (string literal) message parameters are\n> specified separately from the message format string primarily as an\n> aid to translators. That makes good sense for parameters with names\n> that are also English words (like \"start\" etc), but for non-word\n> parameters like \"min_apply_delay\" there is no such ambiguity in the\n> first place.\n>\n\nTBH, I am not an expert in this matter. So, to avoid making any\nmistakes, I thought of keeping it close to the existing style.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 10:58:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 8:22 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thank you for reviewing! PSA new version.\n>\n\nFew comments:\n=============\n1.\n@@ -74,6 +74,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\nBKI_SHARED_RELATION BKI_ROW\n\n Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n\n+ int32 subminapplydelay; /* Replication apply delay (ms) */\n+\n bool subenabled; /* True if the subscription is enabled (the\n * worker should be running) */\n\n@@ -120,6 +122,7 @@ typedef struct Subscription\n * in */\n XLogRecPtr skiplsn; /* All changes finished at this LSN are\n * skipped */\n+ int32 minapplydelay; /* Replication apply delay (ms) */\n char *name; /* Name of the subscription */\n Oid owner; /* Oid of the subscription owner */\n\nWhy is the new parameter placed at different locations in the above two\nstructures? I think it should be after owner in both cases and\naccordingly its order should be changed in GetSubscription() or any\nother place it is used.\n\n2. A minor comment change suggestion:\n /*\n * Common spoolfile processing.\n *\n- * The commit/prepare time (finish_ts) for streamed transactions is required\n- * for time-delayed logical replication.\n+ * The commit/prepare time (finish_ts) is required for time-delayed logical\n+ * replication.\n */\n\n3. I find the newly added tests take about 8s on my machine which is\nclose to the highest in the subscription folder. I understand that it can't\nbe less than 3s because of the delay but checking multiple cases makes\nit take that long. I think we can avoid the tests for streaming and\ndisable the subscription. Also, after removing those, I think it would\nbe better to add the remaining test in 001_rep_changes to save set-up\nand tear-down costs as well.\n\n4.\n+$node_publisher->append_conf('postgresql.conf',\n+ 'logical_decoding_work_mem = 64kB');\n\nI think this setting is also not required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 15:26:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Tuesday, February 7, 2023 6:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Feb 7, 2023 at 8:22 AM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thank you for reviewing! PSA new version.\n> >\n> \n> Few comments:\n> =============\nThanks for your comments !\n\n> 1.\n> @@ -74,6 +74,8 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId)\n> BKI_SHARED_RELATION BKI_ROW\n> \n> Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> \n> + int32 subminapplydelay; /* Replication apply delay (ms) */\n> +\n> bool subenabled; /* True if the subscription is enabled (the\n> * worker should be running) */\n> \n> @@ -120,6 +122,7 @@ typedef struct Subscription\n> * in */\n> XLogRecPtr skiplsn; /* All changes finished at this LSN are\n> * skipped */\n> + int32 minapplydelay; /* Replication apply delay (ms) */\n> char *name; /* Name of the subscription */\n> Oid owner; /* Oid of the subscription owner */\n> \n> Why the new parameter is placed at different locations in above two\n> strcutures? I think it should be after owner in both cases and accordingly its\n> order should be changed in GetSubscription() or any other place it is used.\nFixed.\n\n\n> \n> 2. A minor comment change suggestion:\n> /*\n> * Common spoolfile processing.\n> *\n> - * The commit/prepare time (finish_ts) for streamed transactions is required\n> - * for time-delayed logical replication.\n> + * The commit/prepare time (finish_ts) is required for time-delayed\n> + logical\n> + * replication.\n> */\nFixed.\n\n \n> 3. I find the newly added tests take about 8s on my machine which is close\n> highest in the subscription folder. I understand that it can't be less than 3s\n> because of the delay but checking multiple cases makes it take that long. I\n> think we can avoid the tests for streaming and disable the subscription. 
Also,\n> after removing those, I think it would be better to add the remaining test in\n> 001_rep_changes to save set-up and tear-down costs as well.\nSounds good to me. Moved the test to 001_rep_changes.pl.\n\n\n> 4.\n> +$node_publisher->append_conf('postgresql.conf',\n> + 'logical_decoding_work_mem = 64kB');\n> \n> I think this setting is also not required.\nYes. It was removed in the process of moving the test.\n\nAttached is the v31 patch.\n\nNote that regarding the translator style,\nI chose to keep the parameters outside the errmsg format string\nat this stage. If there is a need to change it, then I'll follow it.\n\nOther changes are minor alignments that fold 'if' conditions\nexceeding 80 characters so they look nicer.\n\nAlso conducted pgindent and pgperltidy.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Tue, 7 Feb 2023 13:41:52 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Tuesday, February 7, 2023 2:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Tue, Feb 7, 2023 at 10:13 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> >\n> > At Mon, 6 Feb 2023 13:10:01 +0000, \"Takamichi Osumi (Fujitsu)\"\n> > <osumi.takamichi@fujitsu.com> wrote in\n> > > The attached patch v29 has included your changes.\n> >\n> > catalogs.sgml\n> >\n> > + <para>\n> > + The minimum delay (ms) for applying changes.\n> > + </para></entry>\n> >\n> > I think we don't use unit symbols that way. Namely I think we would\n> > write it as \"The minimum delay for applying changes in milliseconds\"\n> >\n> \n> Okay, if we prefer to use milliseconds, then how about: \"The minimum delay, in\n> milliseconds, for applying changes\"?\nThis looks good to me. Adopted.\n\n> \n> >\n> > alter_subscription.sgml\n> >\n> > are <literal>slot_name</literal>,\n> > <literal>synchronous_commit</literal>,\n> > <literal>binary</literal>, <literal>streaming</literal>,\n> > - <literal>disable_on_error</literal>, and\n> > - <literal>origin</literal>.\n> > + <literal>disable_on_error</literal>,\n> > + <literal>origin</literal>, and\n> > + <literal>min_apply_delay</literal>.\n> > </para>\n> >\n> > By the way, is there any rule for the order among the words?\n> >\n> \n> Currently, it is in the order in which the corresponding features are added.\nYes. So, I keep it as it is.\n\n> \n> > They\n> > don't seem in alphabetical order nor in the same order to the\n> > create_sbuscription page.\n> >\n> \n> In create_subscription page also, it appears to be in the order in which those\n> are added with a difference that they are divided into two categories\n> (parameters that control what happens during subscription creation and\n> parameters that control the subscription's replication behavior after it has been\n> created)\nSame as here. 
The current order should be fine.\n\n> \n> > (I seems like in the order of SUBOPT_* symbols, but I'm not sure it's\n> > a good idea..)\n> >\n> >\n> > subscriptioncmds.c\n> >\n> > + if (opts.streaming ==\n> LOGICALREP_STREAM_PARALLEL &&\n> > +\n> > + !IsSet(opts.specified_opts, SUBOPT_MIN_APPLY_DELAY) &&\n> > + sub->minapplydelay > 0)\n> > ..\n> > + if (opts.min_apply_delay > 0 &&\n> > +\n> > + !IsSet(opts.specified_opts, SUBOPT_STREAMING) && sub->stream ==\n> > + LOGICALREP_STREAM_PARALLEL)\n> >\n> > Don't we wrap the lines?\n> >\n> >\n> > worker.c\n> >\n> > + if (wal_receiver_status_interval > 0 &&\n> > + diffms > wal_receiver_status_interval * 1000L)\n> > + {\n> > + WaitLatch(MyLatch,\n> > + WL_LATCH_SET |\n> WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > + wal_receiver_status_interval *\n> 1000L,\n> > +\n> WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> > + send_feedback(last_received, true, false, true);\n> > + }\n> > + else\n> > + WaitLatch(MyLatch,\n> > + WL_LATCH_SET |\n> WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > + diffms,\n> > +\n> > + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> >\n> > send_feedback always handles the case where\n> > wal_receiver_status_interval == 0.\n> >\n> \n> It only handles when force is false but here we are using that as true. So, not\n> sure, if what you said would be an improvement.\nAgreed. 
So, I keep it as it is.\n\n> \n> > thus we can simply wait for\n> > min(wal_receiver_status_interval, diffms) then call send_feedback()\n> > unconditionally.\n> >\n> >\n> > -start_apply(XLogRecPtr origin_startpos)\n> > +start_apply(void)\n> >\n> > -LogicalRepApplyLoop(XLogRecPtr last_received)\n> > +LogicalRepApplyLoop(void)\n> >\n> > Does this patch requires this change?\n> >\n> \n> I think this is because the scope of last_received has been changed so that it\n> can be used to pass in send_feedback() during the delay.\nYes, that's our intention.\n\n\nKindly have a look at the latest patch v31 shared in [1].\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373BA483A6D2C924C600968EDDB9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 7 Feb 2023 13:50:20 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Horiguchi-san\n\n\nThanks for your review!\nOn Tuesday, February 7, 2023 1:43 PM From: Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Mon, 6 Feb 2023 13:10:01 +0000, \"Takamichi Osumi (Fujitsu)\"\n> <osumi.takamichi@fujitsu.com> wrote in\n> subscriptioncmds.c\n> \n> +\t\t\t\t\tif (opts.streaming ==\n> LOGICALREP_STREAM_PARALLEL &&\n> +\t\t\t\t\t\t!IsSet(opts.specified_opts,\n> SUBOPT_MIN_APPLY_DELAY) &&\n> +sub->minapplydelay > 0)\n> ..\n> +\t\t\t\t\tif (opts.min_apply_delay > 0 &&\n> +\t\t\t\t\t\t!IsSet(opts.specified_opts,\n> SUBOPT_STREAMING) && sub->stream ==\n> +LOGICALREP_STREAM_PARALLEL)\n> \n> Don't we wrap the lines?\nYes, those lines should be wrapped so they look nicer.\nUpdated. Kindly have a look at the latest patch v31 in [1].\nThere are also some other changes in the patch.\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB8373BA483A6D2C924C600968EDDB9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 7 Feb 2023 13:55:33 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for v31-0001\n\n======\ndoc/src/sgml/glossary.sgml\n\n1.\n+ <para>\n+ Replication setup that applies time-delayed copy of the data.\n+ </para>\n\nThat sentence seemed a bit strange to me.\n\nSUGGESTION\nReplication setup that delays the application of changes by a\nspecified minimum time-delay period.\n\n======\n\nsrc/backend/replication/logical/worker.c\n\n2. maybe_apply_delay\n\n+ if (wal_receiver_status_interval > 0 &&\n+ diffms > wal_receiver_status_interval * 1000L)\n+ {\n+ WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ wal_receiver_status_interval * 1000L,\n+ WAIT_EVENT_RECOVERY_APPLY_DELAY);\n+ send_feedback(last_received, true, false, true);\n+ }\n\nI felt that introducing another variable like:\n\nlong statusinterval_ms = wal_receiver_status_interval * 1000L;\n\nwould help here by doing 2 things:\n1) The condition would be easier to read because the ms units would be the same\n2) Won't need * 1000L repeated in two places.\n\nOnly, do take care to assign this variable in the right place in this\nloop in case the configuration is changed.\n\n======\nsrc/test/subscription/t/001_rep_changes.pl\n\n3.\n+# Test time-delayed logical replication\n+#\n+# If the subscription sets min_apply_delay parameter, the logical replication\n+# worker will delay the transaction apply for min_apply_delay milliseconds. We\n+# look the time duration between tuples are inserted on publisher and then\n+# changes are replicated on subscriber.\n\nThis comment and the other one appearing later in this test are both\nexplaining the same test strategy. I think both comments should be\ncombined into one big one up-front, like this:\n\nSUGGESTION\nIf the subscription sets min_apply_delay parameter, the logical\nreplication worker will delay the transaction apply for\nmin_apply_delay milliseconds. 
We verify this by looking at the time\ndifference between a) when tuples are inserted on the publisher, and\nb) when those changes are replicated on the subscriber. Even on slow\nmachines, this strategy will give predictable behavior.\n\n~~\n\n4.\n+my $delay = 3;\n+\n+# Set min_apply_delay parameter to 3 seconds\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay = '${delay}s')\");\n\nIMO that \"my $delay = 3;\" assignment should be *after* the comment:\n\ne.g.\n+\n+# Set min_apply_delay parameter to 3 seconds\n+my $delay = 3;\n+$node_subscriber->safe_psql('postgres',\n+ \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay = '${delay}s')\");\n\n~~~\n\n5.\n+# Make new content on publisher and check its presence in subscriber depending\n+# on the delay applied above. Before doing the insertion, get the\n+# current timestamp that will be used as a comparison base. Even on slow\n+# machines, this allows to have a predictable behavior when comparing the\n+# delay between data insertion moment on publisher and replay time on\nsubscriber.\n\nMost of this comment is now redundant because this was already\nexplained in the big comment up-front (see #3). Only one useful\nsentence is left.\n\nSUGGESTION\nBefore doing the insertion, get the current timestamp that will be\nused as a comparison base.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 18:21:11 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> ======\r\n> doc/src/sgml/glossary.sgml\r\n> \r\n> 1.\r\n> + <para>\r\n> + Replication setup that applies time-delayed copy of the data.\r\n> + </para>\r\n> \r\n> That sentence seemed a bit strange to me.\r\n> \r\n> SUGGESTION\r\n> Replication setup that delays the application of changes by a\r\n> specified minimum time-delay period.\r\n\r\nFixed.\r\n\r\n> ======\r\n> \r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 2. maybe_apply_delay\r\n> \r\n> + if (wal_receiver_status_interval > 0 &&\r\n> + diffms > wal_receiver_status_interval * 1000L)\r\n> + {\r\n> + WaitLatch(MyLatch,\r\n> + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\r\n> + wal_receiver_status_interval * 1000L,\r\n> + WAIT_EVENT_RECOVERY_APPLY_DELAY);\r\n> + send_feedback(last_received, true, false, true);\r\n> + }\r\n> \r\n> I felt that introducing another variable like:\r\n> \r\n> long statusinterval_ms = wal_receiver_status_interval * 1000L;\r\n> \r\n> would help here by doing 2 things:\r\n> 1) The condition would be easier to read because the ms units would be the same\r\n> 2) Won't need * 1000L repeated in two places.\r\n> \r\n> Only, do take care to assign this variable in the right place in this\r\n> loop in case the configuration is changed.\r\n\r\nFixed. Calculations are done on two lines - first one is the entrance of the loop,\r\nand second one is the after SIGHUP is detected.\r\n\r\n> ======\r\n> src/test/subscription/t/001_rep_changes.pl\r\n> \r\n> 3.\r\n> +# Test time-delayed logical replication\r\n> +#\r\n> +# If the subscription sets min_apply_delay parameter, the logical replication\r\n> +# worker will delay the transaction apply for min_apply_delay milliseconds. 
We\r\n> +# look the time duration between tuples are inserted on publisher and then\r\n> +# changes are replicated on subscriber.\r\n> \r\n> This comment and the other one appearing later in this test are both\r\n> explaining the same test strategy. I think both comments should be\r\n> combined into one big one up-front, like this:\r\n> \r\n> SUGGESTION\r\n> If the subscription sets min_apply_delay parameter, the logical\r\n> replication worker will delay the transaction apply for\r\n> min_apply_delay milliseconds. We verify this by looking at the time\r\n> difference between a) when tuples are inserted on the publisher, and\r\n> b) when those changes are replicated on the subscriber. Even on slow\r\n> machines, this strategy will give predictable behavior.\r\n\r\nChanged.\r\n\r\n> 4.\r\n> +my $delay = 3;\r\n> +\r\n> +# Set min_apply_delay parameter to 3 seconds\r\n> +$node_subscriber->safe_psql('postgres',\r\n> + \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay =\r\n> '${delay}s')\");\r\n> \r\n> IMO that \"my $delay = 3;\" assignment should be *after* the comment:\r\n> \r\n> e.g.\r\n> +\r\n> +# Set min_apply_delay parameter to 3 seconds\r\n> +my $delay = 3;\r\n> +$node_subscriber->safe_psql('postgres',\r\n> + \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay =\r\n> '${delay}s')\");\r\n\r\nRight, changed.\r\n\r\n> 5.\r\n> +# Make new content on publisher and check its presence in subscriber\r\n> depending\r\n> +# on the delay applied above. Before doing the insertion, get the\r\n> +# current timestamp that will be used as a comparison base. Even on slow\r\n> +# machines, this allows to have a predictable behavior when comparing the\r\n> +# delay between data insertion moment on publisher and replay time on\r\n> subscriber.\r\n> \r\n> Most of this comment is now redundant because this was already\r\n> explained in the big comment up-front (see #3). 
Only one useful\r\n> sentence is left.\r\n> \r\n> SUGGESTION\r\n> Before doing the insertion, get the current timestamp that will be\r\n> used as a comparison base.\r\n\r\nRemoved.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 8 Feb 2023 09:03:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 8:03 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n...\n> > ======\n> >\n> > src/backend/replication/logical/worker.c\n> >\n> > 2. maybe_apply_delay\n> >\n> > + if (wal_receiver_status_interval > 0 &&\n> > + diffms > wal_receiver_status_interval * 1000L)\n> > + {\n> > + WaitLatch(MyLatch,\n> > + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > + wal_receiver_status_interval * 1000L,\n> > + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> > + send_feedback(last_received, true, false, true);\n> > + }\n> >\n> > I felt that introducing another variable like:\n> >\n> > long statusinterval_ms = wal_receiver_status_interval * 1000L;\n> >\n> > would help here by doing 2 things:\n> > 1) The condition would be easier to read because the ms units would be the same\n> > 2) Won't need * 1000L repeated in two places.\n> >\n> > Only, do take care to assign this variable in the right place in this\n> > loop in case the configuration is changed.\n>\n> Fixed. 
Calculations are done on two lines - first one is the entrance of the loop,\n> and second one is the after SIGHUP is detected.\n>\n\nTBH, I expected you would write this as just a *single* variable\nassignment before the condition like below:\n\nSUGGESTION (tweaked comment and put single assignment before condition)\n/*\n * Call send_feedback() to prevent the publisher from exiting by\n * timeout during the delay, when the status interval is greater than\n * zero.\n */\nstatus_interval_ms = wal_receiver_status_interval * 1000L;\nif (status_interval_ms > 0 && diffms > status_interval_ms)\n{\n...\n\n~\n\nI understand in theory, your code is more efficient, but in practice,\nI think the overhead of a single variable assignment every loop\niteration (which is doing WaitLatch anyway) is of insignificant\nconcern, whereas having one assignment is simpler than having two IMO.\n\nBut, if you want to keep it the way you have then that is OK.\n\nOtherwise, this patch v32 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 05:47:40 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Thank you for reviewing! PSA new version.\n\n+\t\tif (statusinterval_ms > 0 && diffms > statusinterval_ms)\n\nThe next expected feedback time is measured from the last status\nreport. Thus, it seems to me this may suppress feedbacks from being\nsent for an unexpectedly long time especially when min_apply_delay is\nshorter than wal_r_s_interval.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 09 Feb 2023 14:14:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 12:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Feb 8, 2023 at 8:03 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> ...\n> > > ======\n> > >\n> > > src/backend/replication/logical/worker.c\n> > >\n> > > 2. maybe_apply_delay\n> > >\n> > > + if (wal_receiver_status_interval > 0 &&\n> > > + diffms > wal_receiver_status_interval * 1000L)\n> > > + {\n> > > + WaitLatch(MyLatch,\n> > > + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > > + wal_receiver_status_interval * 1000L,\n> > > + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> > > + send_feedback(last_received, true, false, true);\n> > > + }\n> > >\n> > > I felt that introducing another variable like:\n> > >\n> > > long statusinterval_ms = wal_receiver_status_interval * 1000L;\n> > >\n> > > would help here by doing 2 things:\n> > > 1) The condition would be easier to read because the ms units would be the same\n> > > 2) Won't need * 1000L repeated in two places.\n> > >\n> > > Only, do take care to assign this variable in the right place in this\n> > > loop in case the configuration is changed.\n> >\n> > Fixed. 
Calculations are done on two lines - first one is the entrance of the loop,\n> > and second one is the after SIGHUP is detected.\n> >\n>\n> TBH, I expected you would write this as just a *single* variable\n> assignment before the condition like below:\n>\n> SUGGESTION (tweaked comment and put single assignment before condition)\n> /*\n> * Call send_feedback() to prevent the publisher from exiting by\n> * timeout during the delay, when the status interval is greater than\n> * zero.\n> */\n> status_interval_ms = wal_receiver_status_interval * 1000L;\n> if (status_interval_ms > 0 && diffms > status_interval_ms)\n> {\n> ...\n>\n> ~\n>\n> I understand in theory, your code is more efficient, but in practice,\n> I think the overhead of a single variable assignment every loop\n> iteration (which is doing WaitLatch anyway) is of insignificant\n> concern, whereas having one assignment is simpler than having two IMO.\n>\n\nYeah, that sounds better to me as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 13:26:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > Thank you for reviewing! PSA new version.\n>\n> + if (statusinterval_ms > 0 && diffms > statusinterval_ms)\n>\n> The next expected feedback time is measured from the last status\n> report. Thus, it seems to me this may suppress feedbacks from being\n> sent for an unexpectedly long time especially when min_apply_delay is\n> shorter than wal_r_s_interval.\n>\n\nI think the minimum time before we send any feedback during the wait\nis wal_r_s_interval. Now, I think if there is no transaction for a\nlong time before we get a new transaction, there should be keep-alive\nmessages in between which would allow us to send feedback at regular\nintervals (wal_receiver_status_interval). So, I think we should be\nable to send feedback in less than 2 * wal_receiver_status_interval\nunless wal_sender/receiver timeout is very large and there is a very\nlow volume of transactions. Now, we can try to send the feedback\nbefore we start waiting or maybe after every\nwal_receiver_status_interval / 2 but I think that will lead to more\nspurious feedback messages than we get the benefit from them.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Feb 2023 13:48:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\n\nOn Thursday, February 9, 2023 4:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Feb 9, 2023 at 12:17 AM Peter Smith <smithpb2250@gmail.com>\n> wrote:\n> >\n> > On Wed, Feb 8, 2023 at 8:03 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > ...\n> > > > ======\n> > > >\n> > > > src/backend/replication/logical/worker.c\n> > > >\n> > > > 2. maybe_apply_delay\n> > > >\n> > > > + if (wal_receiver_status_interval > 0 && diffms >\n> > > > + wal_receiver_status_interval * 1000L) { WaitLatch(MyLatch,\n> > > > + WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n> > > > + wal_receiver_status_interval * 1000L,\n> > > > + WAIT_EVENT_RECOVERY_APPLY_DELAY);\n> send_feedback(last_received,\n> > > > + true, false, true); }\n> > > >\n> > > > I felt that introducing another variable like:\n> > > >\n> > > > long statusinterval_ms = wal_receiver_status_interval * 1000L;\n> > > >\n> > > > would help here by doing 2 things:\n> > > > 1) The condition would be easier to read because the ms units\n> > > > would be the same\n> > > > 2) Won't need * 1000L repeated in two places.\n> > > >\n> > > > Only, do take care to assign this variable in the right place in\n> > > > this loop in case the configuration is changed.\n> > >\n> > > Fixed. 
Calculations are done on two lines - first one is the\n> > > entrance of the loop, and second one is the after SIGHUP is detected.\n> > >\n> >\n> > TBH, I expected you would write this as just a *single* variable\n> > assignment before the condition like below:\n> >\n> > SUGGESTION (tweaked comment and put single assignment before\n> > condition)\n> > /*\n> > * Call send_feedback() to prevent the publisher from exiting by\n> > * timeout during the delay, when the status interval is greater than\n> > * zero.\n> > */\n> > status_interval_ms = wal_receiver_status_interval * 1000L; if\n> > (status_interval_ms > 0 && diffms > status_interval_ms) { ...\n> >\n> > ~\n> >\n> > I understand in theory, your code is more efficient, but in practice,\n> > I think the overhead of a single variable assignment every loop\n> > iteration (which is doing WaitLatch anyway) is of insignificant\n> > concern, whereas having one assignment is simpler than having two IMO.\n> >\n> \n> Yeah, that sounds better to me as well.\nOK, fixed.\n\nThe comment adjustment suggested by Peter-san above\nwas also included in this v33.\nPlease have a look at the attached patch.\n\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Thu, 9 Feb 2023 09:39:02 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "> The comment adjustment suggested by Peter-san above\n> was also included in this v33.\n> Please have a look at the attached patch.\n\nPatch v33 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:19:19 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 9 Feb 2023 13:26:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \namit.kapila16> On Thu, Feb 9, 2023 at 12:17 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > I understand in theory, your code is more efficient, but in practice,\n> > I think the overhead of a single variable assignment every loop\n> > iteration (which is doing WaitLatch anyway) is of insignificant\n> > concern, whereas having one assignment is simpler than having two IMO.\n> >\n> \n> Yeah, that sounds better to me as well.\n\nFWIW, I'm on board with this.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:49:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 9 Feb 2023 13:48:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > Thank you for reviewing! PSA new version.\n> >\n> > + if (statusinterval_ms > 0 && diffms > statusinterval_ms)\n> >\n> > The next expected feedback time is measured from the last status\n> > report. Thus, it seems to me this may suppress feedbacks from being\n> > sent for an unexpectedly long time especially when min_apply_delay is\n> > shorter than wal_r_s_interval.\n> >\n> \n> I think the minimum time before we send any feedback during the wait\n> is wal_r_s_interval. Now, I think if there is no transaction for a\n> long time before we get a new transaction, there should be keep-alive\n> messages in between which would allow us to send feedback at regular\n> intervals (wal_receiver_status_interval). So, I think we should be\n\nRight.\n\n> able to send feedback in less than 2 * wal_receiver_status_interval\n> unless wal_sender/receiver timeout is very large and there is a very\n> low volume of transactions. Now, we can try to send the feedback\n\nWe have suffered this kind of feedback silence many times. Thus I\ndon't want to rely on luck here. I had in mind of exposing last_send\nitself or providing interval-calclation function to the logic.\n\n> before we start waiting or maybe after every\n> wal_receiver_status_interval / 2 but I think that will lead to more\n> spurious feedback messages than we get the benefit from them.\n\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 09:57:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Mmm. A part of the previous mail have gone anywhere for a uncertain\nreason and placed by a mysterious blank lines...\n\nAt Fri, 10 Feb 2023 09:57:22 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 9 Feb 2023 13:48:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> > On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > > Thank you for reviewing! PSA new version.\n> > >\n> > > + if (statusinterval_ms > 0 && diffms > statusinterval_ms)\n> > >\n> > > The next expected feedback time is measured from the last status\n> > > report. Thus, it seems to me this may suppress feedbacks from being\n> > > sent for an unexpectedly long time especially when min_apply_delay is\n> > > shorter than wal_r_s_interval.\n> > >\n> > \n> > I think the minimum time before we send any feedback during the wait\n> > is wal_r_s_interval. Now, I think if there is no transaction for a\n> > long time before we get a new transaction, there should be keep-alive\n> > messages in between which would allow us to send feedback at regular\n> > intervals (wal_receiver_status_interval). So, I think we should be\n> \n> Right.\n> \n> > able to send feedback in less than 2 * wal_receiver_status_interval\n> > unless wal_sender/receiver timeout is very large and there is a very\n> > low volume of transactions. Now, we can try to send the feedback\n> \n> We have suffered this kind of feedback silence many times. Thus I\n> don't want to rely on luck here. I had in mind of exposing last_send\n> itself or providing interval-calclation function to the logic.\n> \n> > before we start waiting or maybe after every\n> > wal_receiver_status_interval / 2 but I think that will lead to more\n> > spurious feedback messages than we get the benefit from them.\n\nAgreed. 
I think we dont want to do that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:03:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 6:27 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 9 Feb 2023 13:48:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > > Thank you for reviewing! PSA new version.\n> > >\n> > > + if (statusinterval_ms > 0 && diffms > statusinterval_ms)\n> > >\n> > > The next expected feedback time is measured from the last status\n> > > report. Thus, it seems to me this may suppress feedbacks from being\n> > > sent for an unexpectedly long time especially when min_apply_delay is\n> > > shorter than wal_r_s_interval.\n> > >\n> >\n> > I think the minimum time before we send any feedback during the wait\n> > is wal_r_s_interval. Now, I think if there is no transaction for a\n> > long time before we get a new transaction, there should be keep-alive\n> > messages in between which would allow us to send feedback at regular\n> > intervals (wal_receiver_status_interval). So, I think we should be\n>\n> Right.\n>\n> > able to send feedback in less than 2 * wal_receiver_status_interval\n> > unless wal_sender/receiver timeout is very large and there is a very\n> > low volume of transactions. Now, we can try to send the feedback\n>\n> We have suffered this kind of feedback silence many times. Thus I\n> don't want to rely on luck here. I had in mind of exposing last_send\n> itself or providing interval-calclation function to the logic.\n>\n\nI think we have last_send time in send_feedback(), so we can expose it\nif we want but how would that solve the problem you are worried about?\nThe one simple idea as I shared in my last email was to send feedback\nevery wal_receiver_status_interval / 2. 
I think this should avoid any\ntimeout problem because we already recommend setting it to lesser than\nwal_sender_timeout.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:11:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 10, 2023 at 6:27 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 9 Feb 2023 13:48:52 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > > > Thank you for reviewing! PSA new version.\n> > > >\n> > > > + if (statusinterval_ms > 0 && diffms > statusinterval_ms)\n> > > >\n> > > > The next expected feedback time is measured from the last status\n> > > > report. Thus, it seems to me this may suppress feedbacks from being\n> > > > sent for an unexpectedly long time especially when min_apply_delay is\n> > > > shorter than wal_r_s_interval.\n> > > >\n> > >\n> > > I think the minimum time before we send any feedback during the wait\n> > > is wal_r_s_interval. Now, I think if there is no transaction for a\n> > > long time before we get a new transaction, there should be keep-alive\n> > > messages in between which would allow us to send feedback at regular\n> > > intervals (wal_receiver_status_interval). So, I think we should be\n> >\n> > Right.\n> >\n> > > able to send feedback in less than 2 * wal_receiver_status_interval\n> > > unless wal_sender/receiver timeout is very large and there is a very\n> > > low volume of transactions. Now, we can try to send the feedback\n> >\n> > We have suffered this kind of feedback silence many times. Thus I\n> > don't want to rely on luck here. I had in mind of exposing last_send\n> > itself or providing interval-calclation function to the logic.\n> >\n>\n> I think we have last_send time in send_feedback(), so we can expose it\n> if we want but how would that solve the problem you are worried about?\n>\n\nI have an idea to use last_send time to avoid walsenders being\ntimeout. 
Instead of directly using wal_receiver_status_interval as a\nminimum interval to send the feedback, we should check if it is\ngreater than last_send time then we should send the feedback after\n(wal_receiver_status_interval - last_send). I think they can probably\nbe different only on the very first time. Any better ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Feb 2023 10:34:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn Friday, February 10, 2023 2:05 PM Friday, February 10, 2023 2:05 PM wrote:\n> On Fri, Feb 10, 2023 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Fri, Feb 10, 2023 at 6:27 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Thu, 9 Feb 2023 13:48:52 +0530, Amit Kapila\n> > > <amit.kapila16@gmail.com> wrote in\n> > > > On Thu, Feb 9, 2023 at 10:45 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > At Wed, 8 Feb 2023 09:03:03 +0000, \"Hayato Kuroda (Fujitsu)\"\n> > > > > <kuroda.hayato@fujitsu.com> wrote in\n> > > > > > Thank you for reviewing! PSA new version.\n> > > > >\n> > > > > + if (statusinterval_ms > 0 && diffms >\n> > > > > + statusinterval_ms)\n> > > > >\n> > > > > The next expected feedback time is measured from the last status\n> > > > > report. Thus, it seems to me this may suppress feedbacks from\n> > > > > being sent for an unexpectedly long time especially when\n> > > > > min_apply_delay is shorter than wal_r_s_interval.\n> > > > >\n> > > >\n> > > > I think the minimum time before we send any feedback during the\n> > > > wait is wal_r_s_interval. Now, I think if there is no transaction\n> > > > for a long time before we get a new transaction, there should be\n> > > > keep-alive messages in between which would allow us to send\n> > > > feedback at regular intervals (wal_receiver_status_interval). So,\n> > > > I think we should be\n> > >\n> > > Right.\n> > >\n> > > > able to send feedback in less than 2 *\n> > > > wal_receiver_status_interval unless wal_sender/receiver timeout is\n> > > > very large and there is a very low volume of transactions. Now, we\n> > > > can try to send the feedback\n> > >\n> > > We have suffered this kind of feedback silence many times. Thus I\n> > > don't want to rely on luck here. 
I had in mind of exposing last_send\n> > > itself or providing interval-calclation function to the logic.\n> > >\n> >\n> > I think we have last_send time in send_feedback(), so we can expose it\n> > if we want but how would that solve the problem you are worried about?\n> >\n> \n> I have an idea to use last_send time to avoid walsenders being timeout.\n> Instead of directly using wal_receiver_status_interval as a minimum interval\n> to send the feedback, we should check if it is greater than last_send time\n> then we should send the feedback after (wal_receiver_status_interval -\n> last_send). I think they can probably be different only on the very first time.\n> Any better ideas?\nThis idea sounds good to me and\nimplemented this idea in an attached patch v34.\n\nIn the previous patch, we couldn't solve the\ntimeout of the publisher, when we conduct a scenario suggested by Horiguchi-san\nand reproduced in the scenario attached test file 'test.sh'.\nBut now we handle it by adjusting the timing of the first wait time.\n\nFYI, we thought to implement the new variable 'send_time'\nin the LogicalRepWorker structure at first. But, this structure\nis used when launcher controls workers or reports statistics\nand it stores TimestampTz recorded in the received WAL,\nso not sure if the struct is the right place to implement the variable.\nMoreover, there are other similar variables such as last_recv_time\nor reply_time. So, those will be confusing when we decide to have\nnew variable together. Then, it's declared separately.\n\nThe new patch also includes some changes for wait event.\nKindly have a look at the v34 patch.\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Fri, 10 Feb 2023 11:26:21 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-10 11:26:21 +0000, Takamichi Osumi (Fujitsu) wrote:\n> Subject: [PATCH v34] Time-delayed logical replication subscriber\n> \n> Similar to physical replication, a time-delayed copy of the data for\n> logical replication is useful for some scenarios (particularly to fix\n> errors that might cause data loss).\n> \n> This patch implements a new subscription parameter called 'min_apply_delay'.\n\nSorry for not reading through the thread, but it's very long.\n\n\nHas there been any discussion about whether this is actually best implemented\non the client side? You could alternatively implement it on the sender.\n\nThat'd have quite a few advantages, I think - you e.g. wouldn't remove the\nability to *receive* and send feedback messages. We'd not end up filling up\nthe network buffer with data that we'll not process anytime soon.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 10 Feb 2023 18:09:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi\n\n\nOn Saturday, February 11, 2023 11:10 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-10 11:26:21 +0000, Takamichi Osumi (Fujitsu) wrote:\n> > Subject: [PATCH v34] Time-delayed logical replication subscriber\n> >\n> > Similar to physical replication, a time-delayed copy of the data for\n> > logical replication is useful for some scenarios (particularly to fix\n> > errors that might cause data loss).\n> >\n> > This patch implements a new subscription parameter called\n> 'min_apply_delay'.\n> Has there been any discussion about whether this is actually best\n> implemented on the client side? You could alternatively implement it on the\n> sender.\n> \n> That'd have quite a few advantages, I think - you e.g. wouldn't remove the\n> ability to *receive* and send feedback messages. We'd not end up filling up\n> the network buffer with data that we'll not process anytime soon.\nThanks for your comments !\n\nWe have discussed about the publisher side idea around here [1]\nbut, we chose the current direction. Kindly have a look at the discussion.\n\nIf we apply the delay on the publisher, then\nit can lead to extra delay where we don't need to apply.\nThe current proposed approach can take other loads or factors\n(network, busyness of the publisher, etc) into account\nbecause it calculates the required delay on the subscriber.\n\n\n[1] - https://www.postgresql.org/message-id/20221215.105200.268327207020006785.horikyota.ntt%40gmail.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Sat, 11 Feb 2023 05:44:47 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Fri, 10 Feb 2023 10:34:49 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Fri, Feb 10, 2023 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 10, 2023 at 6:27 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > We have suffered this kind of feedback silence many times. Thus I\n> > > don't want to rely on luck here. I had in mind of exposing last_send\n> > > itself or providing interval-calclation function to the logic.\n> >\n> > I think we have last_send time in send_feedback(), so we can expose it\n> > if we want but how would that solve the problem you are worried about?\n\nWal receiver can avoid a too-long sleep by knowing when to wake up for\nthe next feedback.\n\n> I have an idea to use last_send time to avoid walsenders being\n> timeout. Instead of directly using wal_receiver_status_interval as a\n> minimum interval to send the feedback, we should check if it is\n> greater than last_send time then we should send the feedback after\n> (wal_receiver_status_interval - last_send). I think they can probably\n> be different only on the very first time. Any better ideas?\n\nIf it means MyLogicalRepWorker->last_send_time, it is not the last\ntime when walreceiver sent a feedback but the last time when\nwal*sender* sent a data. So I'm not sure that works.\n\nWe could use the variable that way, but AFAIU in turn when so many\nchanges have been spooled that the control doesn't return to\nLogicalRepApplyLoop longer than wal_r_s_interval, maybe_apply_delay()\nstarts calling send_feedback() at every call after the first feedback\ntiming. Even in that case, send_feedback() won't send one actually\nuntil the next feedback timing, I don't think that behavior is great.\n\nThe only packets walreceiver sends back is the feedback packets and\ncurrently only send_feedback knows the last feedback time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:26:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Horiguchi-san\n\n\nOn Monday, February 13, 2023 10:26 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Fri, 10 Feb 2023 10:34:49 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > On Fri, Feb 10, 2023 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > >\n> > > On Fri, Feb 10, 2023 at 6:27 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > > We have suffered this kind of feedback silence many times. Thus I\n> > > > don't want to rely on luck here. I had in mind of exposing\n> > > > last_send itself or providing interval-calclation function to the logic.\n> > >\n> > > I think we have last_send time in send_feedback(), so we can expose\n> > > it if we want but how would that solve the problem you are worried\n> about?\n> \n> Wal receiver can avoid a too-long sleep by knowing when to wake up for the\n> next feedback.\n> \n> > I have an idea to use last_send time to avoid walsenders being\n> > timeout. Instead of directly using wal_receiver_status_interval as a\n> > minimum interval to send the feedback, we should check if it is\n> > greater than last_send time then we should send the feedback after\n> > (wal_receiver_status_interval - last_send). I think they can probably\n> > be different only on the very first time. Any better ideas?\n> \n> If it means MyLogicalRepWorker->last_send_time, it is not the last time when\n> walreceiver sent a feedback but the last time when\n> wal*sender* sent a data. So I'm not sure that works.\n> \n> We could use the variable that way, but AFAIU in turn when so many changes\n> have been spooled that the control doesn't return to LogicalRepApplyLoop\n> longer than wal_r_s_interval, maybe_apply_delay() starts calling\n> send_feedback() at every call after the first feedback timing. 
Even in that\n> case, send_feedback() won't send one actually until the next feedback timing,\n> I don't think that behavior is great.\n> \n> The only packets walreceiver sends back is the feedback packets and\n> currently only send_feedback knows the last feedback time.\nThanks for your comments !\n\nAs described in your last sentence, in the latest patch v34 [1],\nwe use the last time set in send_feedback() and\nbased on it, we calculate and adjust the first timing of feedback message\nin maybe_apply_delay() so that we can send the feedback message following\nthe interval of wal_receiver_status_interval. I wasn't sure if\nthe above concern is still valid for this implementation.\n\nCould you please have a look at the latest patch and share your opinion ?\n\n\n[1] - https://www.postgresql.org/message-id/TYCPR01MB83736C50C98CB2153728A7A8EDDE9%40TYCPR01MB8373.jpnprd01.prod.outlook.com\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 13 Feb 2023 04:18:39 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are my review comments for the v34 patch.\n\n======\nsrc/backend/replication/logical/worker.c\n\n+/* The last time we send a feedback message */\n+static TimestampTz send_time = 0;\n+\n\nIMO this is a bad variable name. When this variable was changed to be\nglobal it ought to have been renamed.\n\nThe name \"send_time\" is almost meaningless without any contextual information.\n\nBut also it's bad because this global name is \"shadowed\" by several\nother parameters and other local variables using that same name (e.g.\nsee UpdateWorkerStats, LogicalRepApplyLoop, etc). It is too confusing.\n\nHow about using a unique/meaningful name with a comment to match to\nimprove readability and remove unwanted shadowing?\n\nSUGGESTION\n/* Timestamp of when the last feedback message was sent. */\nstatic TimestampTz last_sent_feedback_ts = 0;\n\n~~~\n\n2. maybe_apply_delay\n\n+ /* Apply the delay by the latch mechanism */\n+ do\n+ {\n+ TimestampTz delayUntil;\n+ long diffms;\n+\n+ ResetLatch(MyLatch);\n+\n+ CHECK_FOR_INTERRUPTS();\n+\n+ /* This might change wal_receiver_status_interval */\n+ if (ConfigReloadPending)\n+ {\n+ ConfigReloadPending = false;\n+ ProcessConfigFile(PGC_SIGHUP);\n+ }\n+\n+ /*\n+ * Before calculating the time duration, reload the catalog if needed.\n+ */\n+ if (!in_remote_transaction && !in_streamed_transaction)\n+ {\n+ AcceptInvalidationMessages();\n+ maybe_reread_subscription();\n+ }\n+\n+ delayUntil = TimestampTzPlusMilliseconds(finish_ts,\nMySubscription->minapplydelay);\n+ diffms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(), delayUntil);\n+\n+ /*\n+ * Exit without arming the latch if it's already past time to apply\n+ * this transaction.\n+ */\n+ if (diffms <= 0)\n+ break;\n+\n+ elog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay\n= %d ms, remaining wait time: %ld ms\",\n+ xid, MySubscription->minapplydelay, diffms);\n+\n+ /*\n+ * Call send_feedback() to prevent the publisher from exiting by\n+ * 
timeout during the delay, when the status interval is greater than\n+ * zero.\n+ */\n+ if (!status_interval_ms)\n+ {\n+ TimestampTz nextFeedback;\n+\n+ /*\n+ * Based on the last time when we send a feedback message, adjust\n+ * the first delay time for this transaction. This ensures that\n+ * the first feedback message follows wal_receiver_status_interval\n+ * interval.\n+ */\n+ nextFeedback = TimestampTzPlusMilliseconds(send_time,\n+ wal_receiver_status_interval * 1000L);\n+ status_interval_ms =\nTimestampDifferenceMilliseconds(GetCurrentTimestamp(), nextFeedback);\n+ }\n+ else\n+ status_interval_ms = wal_receiver_status_interval * 1000L;\n+\n+ if (status_interval_ms > 0 && diffms > status_interval_ms)\n+ {\n+ WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ status_interval_ms,\n+ WAIT_EVENT_LOGICAL_APPLY_DELAY);\n+ send_feedback(last_received, true, false, true);\n+ }\n+ else\n+ WaitLatch(MyLatch,\n+ WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n+ diffms,\n+ WAIT_EVENT_LOGICAL_APPLY_DELAY);\n+\n+ } while (true);\n\n~\n\nIMO this logic has been tweaked too many times without revisiting the\nvariable names and logic from scratch, so it has become over-complex\n- some variable names are assuming multiple meanings\n- multiple * 1000L have crept back in again\n- the 'diffms' is too generic now with so many vars so it has lost its meaning\n- GetCurrentTimestamp call in multiple places\n\nSUGGESTIONS\n- rename some variables and simplify the logic.\n- reduce all the if/else\n- don't be sneaky with the meaning of status_interval_ms\n- 'diffms' --> 'remaining_delay_ms'\n- 'DelayUntil' --> 'delay_until_ts'\n- introduce 'now' variable\n- simplify the check of (next_feedback_due_ms < remaining_delay_ms)\n\nSUGGESTION (WFM)\n\n/* Apply the delay by the latch mechanism */\nwhile (true)\n{\nTimestampTz now;\nTimestampTz delay_until_ts;\nlong remaining_delay_ms;\nlong status_interval_ms;\n\nResetLatch(MyLatch);\n\nCHECK_FOR_INTERRUPTS();\n\n/* This 
might change wal_receiver_status_interval */\nif (ConfigReloadPending)\n{\nConfigReloadPending = false;\nProcessConfigFile(PGC_SIGHUP);\n}\n\n/*\n* Before calculating the time duration, reload the catalog if needed.\n*/\nif (!in_remote_transaction && !in_streamed_transaction)\n{\nAcceptInvalidationMessages();\nmaybe_reread_subscription();\n}\n\nnow = GetCurrentTimestamp();\ndelay_until_ts = TimestampTzPlusMilliseconds(finish_ts,\nMySubscription->minapplydelay);\nremaining_delay_ms = TimestampDifferenceMilliseconds(now, delay_until_ts);\n\n/*\n* Exit without arming the latch if it's already past time to apply\n* this transaction.\n*/\nif (remaining_delay_ms <= 0)\nbreak;\n\nelog(DEBUG2, \"time-delayed replication for txid %u, min_apply_delay =\n%d ms, remaining wait time: %ld ms\",\nxid, MySubscription->minapplydelay, remaining_delay_ms);\n/*\n* If a status interval is defined then we may need to call send_feedback()\n* early to prevent the publisher from exiting during a long apply delay.\n*/\nstatus_interval_ms = wal_receiver_status_interval * 1000L;\nif (status_interval_ms > 0)\n{\nTimestampTz next_feedback_due_ts;\nlong next_feedback_due_ms;\n\n/*\n* Find if the next feedback is due earlier than the remaining delay ms.\n*/\nnext_feedback_due_ts = TimestampTzPlusMilliseconds(send_time,\nstatus_interval_ms);\nnext_feedback_due_ms = TimestampDifferenceMilliseconds(now,\nnext_feedback_due_ts);\nif (next_feedback_due_ms < remaining_delay_ms)\n{\n/* delay before feedback */\nWaitLatch(MyLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n next_feedback_due_ms,\n WAIT_EVENT_LOGICAL_APPLY_DELAY);\nsend_feedback(last_received, true, false, true);\ncontinue;\n}\n}\n\n/* delay before apply */\nWaitLatch(MyLatch,\n WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,\n remaining_delay_ms,\n WAIT_EVENT_LOGICAL_APPLY_DELAY);\n}\n\n======\nsrc/include/utils/wait_event.h\n\n3.\n@@ -149,7 +149,8 @@ typedef enum\n WAIT_EVENT_REGISTER_SYNC_REQUEST,\n WAIT_EVENT_SPIN_DELAY,\n 
WAIT_EVENT_VACUUM_DELAY,\n- WAIT_EVENT_VACUUM_TRUNCATE\n+ WAIT_EVENT_VACUUM_TRUNCATE,\n+ WAIT_EVENT_LOGICAL_APPLY_DELAY\n } WaitEventTimeout;\n\nFYI - The PGDOCS has a section with \"Table 28.13. Wait Events of Type\nTimeout\" so if you a going to add a new Timeout Event then you also\nneed to document it (alphabetically) in that table.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 Feb 2023 19:52:49 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 4:56 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, February 10, 2023 2:05 PM Friday, February 10, 2023 2:05 PM wrote:\n> > On Fri, Feb 10, 2023 at 10:11 AM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n>\n> In the previous patch, we couldn't solve the\n> timeout of the publisher, when we conduct a scenario suggested by Horiguchi-san\n> and reproduced in the scenario attached test file 'test.sh'.\n> But now we handle it by adjusting the timing of the first wait time.\n>\n> FYI, we thought to implement the new variable 'send_time'\n> in the LogicalRepWorker structure at first. But, this structure\n> is used when launcher controls workers or reports statistics\n> and it stores TimestampTz recorded in the received WAL,\n> so not sure if the struct is the right place to implement the variable.\n> Moreover, there are other similar variables such as last_recv_time\n> or reply_time. So, those will be confusing when we decide to have\n> new variable together. Then, it's declared separately.\n>\n\nI think we can introduce a new variable as last_feedback_time in the\nLogicalRepWorker structure and probably for the last_received, we can\nlast_lsn in MyLogicalRepWorker as that seems to be updated correctly.\nI think it would be good to avoid global variables.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:51:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-11 05:44:47 +0000, Takamichi Osumi (Fujitsu) wrote:\n> On Saturday, February 11, 2023 11:10 AM Andres Freund <andres@anarazel.de> wrote:\n> > Has there been any discussion about whether this is actually best\n> > implemented on the client side? You could alternatively implement it on the\n> > sender.\n> > \n> > That'd have quite a few advantages, I think - you e.g. wouldn't remove the\n> > ability to *receive* and send feedback messages. We'd not end up filling up\n> > the network buffer with data that we'll not process anytime soon.\n> Thanks for your comments !\n> \n> We have discussed about the publisher side idea around here [1]\n> but, we chose the current direction. Kindly have a look at the discussion.\n> \n> If we apply the delay on the publisher, then\n> it can lead to extra delay where we don't need to apply.\n> The current proposed approach can take other loads or factors\n> (network, busyness of the publisher, etc) into account\n> because it calculates the required delay on the subscriber.\n\nI don't think it's OK to just loose the ability to read / reply to keepalive\nmessages.\n\nI think as-is we seriously consider to just reject the feature, adding too\nmuch complexity, without corresponding gain.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 08:47:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 13 Feb 2023 15:51:25 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> I think we can introduce a new variable as last_feedback_time in the\n> LogicalRepWorker structure and probably for the last_received, we can\n> last_lsn in MyLogicalRepWorker as that seems to be updated correctly.\n> I think it would be good to avoid global variables.\n\nMyLogicalRepWorker is a global variable:p, but it is far better than a\nbear one.\n\nBy the way, we are trying to send the status messages regularly, but\nas Andres pointed out, worker does not read nor reply to keepalive\nmessages from publisher while delaying. It is not possible as far as\nwe choke the stream at the subscriber end. It doesn't seem to be a\npractical problem, but IMHO I think he's right in terms of adherence\nto the wire protocol, which was also one of my own initial concern.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 14 Feb 2023 11:27:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi, Andres-san\n\n\nOn Tuesday, February 14, 2023 1:47 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-11 05:44:47 +0000, Takamichi Osumi (Fujitsu) wrote:\n> > On Saturday, February 11, 2023 11:10 AM Andres Freund\n> <andres@anarazel.de> wrote:\n> > > Has there been any discussion about whether this is actually best\n> > > implemented on the client side? You could alternatively implement it\n> > > on the sender.\n> > >\n> > > That'd have quite a few advantages, I think - you e.g. wouldn't\n> > > remove the ability to *receive* and send feedback messages. We'd\n> > > not end up filling up the network buffer with data that we'll not process\n> anytime soon.\n> > Thanks for your comments !\n> >\n> > We have discussed about the publisher side idea around here [1] but,\n> > we chose the current direction. Kindly have a look at the discussion.\n> >\n> > If we apply the delay on the publisher, then it can lead to extra\n> > delay where we don't need to apply.\n> > The current proposed approach can take other loads or factors\n> > (network, busyness of the publisher, etc) into account because it\n> > calculates the required delay on the subscriber.\n> \n> I don't think it's OK to just loose the ability to read / reply to keepalive\n> messages.\n> \n> I think as-is we seriously consider to just reject the feature, adding too much\n> complexity, without corresponding gain.\nThanks for your comments !\n\nCould you please tell us about your concern a bit more?\n\nThe keepalive/reply messages are currently used for two purposes,\n(a) send the updated wrte/flush/apply locations; (b) avoid timeouts incase of idle times.\nBoth the cases shouldn't be impacted with this time-delayed LR patch because during the delay there won't\nbe any progress and to avoid timeouts, we allow to send the alive message during the delay.\nThis is just we would like to clarify the issue you have in mind.\n\nOTOH, if we want to implement the functionality on 
publisher-side,\nI think we need to first consider the interface.\nWe can think of two options (a) Have it as a subscription parameter as the patch has now and\nthen pass it as an option to the publisher which it will use to delay;\n(b) Have it defined on publisher-side, say via GUC or some other way.\nThe basic idea could be that while processing commit record (in DecodeCommit),\nwe can somehow check the value of delay and then use it there to delay sending the xact.\nAlso, during delay, we need to somehow send the keepalive and process replies,\nprobably via a new callback or by some existing callback.\nWe also need to handle in-progress and 2PC xacts in a similar way.\nFor the former, probably we would need to apply the delay before sending the first stream.\nCould you please share what you feel on this direction as well ?\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Tue, 14 Feb 2023 06:22:12 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Andres and other hackers,\n\n> OTOH, if we want to implement the functionality on publisher-side,\n> I think we need to first consider the interface.\n> We can think of two options (a) Have it as a subscription parameter as the patch\n> has now and\n> then pass it as an option to the publisher which it will use to delay;\n> (b) Have it defined on publisher-side, say via GUC or some other way.\n> The basic idea could be that while processing commit record (in DecodeCommit),\n> we can somehow check the value of delay and then use it there to delay sending\n> the xact.\n> Also, during delay, we need to somehow send the keepalive and process replies,\n> probably via a new callback or by some existing callback.\n> We also need to handle in-progress and 2PC xacts in a similar way.\n> For the former, probably we would need to apply the delay before sending the first\n> stream.\n> Could you please share what you feel on this direction as well ?\n\nI implemented a patch that the delaying is done on the publisher side. In this patch,\napproach (a) was chosen, in which min_apply_delay is specified as a subscription\nparameter, and then apply worker passes it to the publisher as an output plugin option.\nDuring the delay, the walsender periodically checks and processes replies from the\napply worker and sends keepalive messages if needed. Therefore, the ability to handle\nkeepalives is not loosed.\nTo delay the transaction in the output plugin layer, the new LogicalOutputPlugin\nAPI was added. For now, I choose the output plugin layer but can consider to do\nit from the core if there is a better way.\n\nCould you please share your opinion?\n\nNote: thanks for Osumi-san to help implementing.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 15 Feb 2023 11:29:18 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Wed, 15 Feb 2023 11:29:18 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear Andres and other hackers,\n> \n> > OTOH, if we want to implement the functionality on publisher-side,\n> > I think we need to first consider the interface.\n> > We can think of two options (a) Have it as a subscription parameter as the patch\n> > has now and\n> > then pass it as an option to the publisher which it will use to delay;\n> > (b) Have it defined on publisher-side, say via GUC or some other way.\n> > The basic idea could be that while processing commit record (in DecodeCommit),\n> > we can somehow check the value of delay and then use it there to delay sending\n> > the xact.\n> > Also, during delay, we need to somehow send the keepalive and process replies,\n> > probably via a new callback or by some existing callback.\n> > We also need to handle in-progress and 2PC xacts in a similar way.\n> > For the former, probably we would need to apply the delay before sending the first\n> > stream.\n> > Could you please share what you feel on this direction as well ?\n> \n> I implemented a patch that the delaying is done on the publisher side. In this patch,\n> approach (a) was chosen, in which min_apply_delay is specified as a subscription\n> parameter, and then apply worker passes it to the publisher as an output plugin option.\n\nAs Amit-K mentioned, we may need to change the name of the option in\nthis version, since the delay mechanism in this version causes a delay\nin sending from publisher than delaying apply on the subscriber side.\n\nI'm not sure why output plugin is involved in the delay mechanism. It\nappears to me that it would be simpler if the delay occurred in\nreorder buffer or logical decoder instead. Perhaps what I understand\ncorrectly is that we could delay right before only sending commit\nrecords in this case. 
If we delay at publisher end, all changes will\nbe sent at once if !streaming, and otherwise, all changes in a\ntransaction will be spooled at subscriber end. In any case, apply\nworker won't be holding an active transaction unnecessarily. Of\ncourse we need to add the mechanism to process keep-alive and status\nreport messages.\n\n> During the delay, the walsender periodically checks and processes replies from the\n> apply worker and sends keepalive messages if needed. Therefore, the ability to handle\n> keepalives is not loosed.\n\nMy understanding is that keep-alives are a different mechanism with\na different objective from status reports. Even if the subscriber doesn't\nsend spontaneous or extra status reports at all, the connection can be\nchecked and maintained by keep-alive packets. It is possible to set up\nan asymmetric configuration where only walsender sends keep-alives,\nbut none are sent from the peer. Those setups work fine when no\napply-delay is involved, but they won't work with the patches we're\ntalking about because the subscriber won't respond to the keep-alive\npackets from the peer. So when I wrote \"practically works\" in the\nlast mail, this is what I meant.\n\nThus if someone plans to enable apply_delay for logical replication,\nthat person should be aware of some additional subtle restrictions that\nare required compared to non-delayed setups.\n\n> To delay the transaction in the output plugin layer, the new LogicalOutputPlugin\n> API was added. For now, I choose the output plugin layer but can consider to do\n> it from the core if there is a better way.\n> \n> Could you please share your opinion?\n> \n> Note: thanks for Osumi-san to help implementing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Feb 2023 14:21:01 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nThank you for responding! Before modifying patches, I want to confirm something\nyou said.\n\n> As Amit-K mentioned, we may need to change the name of the option in\n> this version, since the delay mechanism in this version causes a delay\n> in sending from publisher than delaying apply on the subscriber side.\n\nRight, will be changed.\n\n> I'm not sure why output plugin is involved in the delay mechanism. It\n> appears to me that it would be simpler if the delay occurred in\n> reorder buffer or logical decoder instead.\n\nI'm planning to change, but..\n\n> Perhaps what I understand\n> correctly is that we could delay right before only sending commit\n> records in this case. If we delay at publisher end, all changes will\n> be sent at once if !streaming, and otherwise, all changes in a\n> transaction will be spooled at subscriber end. In any case, apply\n> worker won't be holding an active transaction unnecessarily.\n\nWhat about parallel case? Latest patch does not reject the combination of parallel\nstreaming mode and delay. If delay is done at commit and subscriber uses an parallel\napply worker, it may acquire lock for a long time.\n\n> Of\n> course we need add the mechanism to process keep-alive and status\n> report messages.\n\nCould you share the good way to handle keep-alive and status messages if you have?\nIf we changed to the decoding layer, it is strange to call walsender function\ndirectly.\n\n> Those setups work fine when no\n> apply-delay involved, but they won't work with the patches we're\n> talking about because the subscriber won't respond to the keep-alive\n> packets from the peer. So when I wrote \"practically works\" in the\n> last mail, this is what I meant.\n\nI'm not sure around the part. I think in the latest patch, subscriber can respond\nto the keepalive packets from the peer. 
Also, publisher can respond to the peer.\nCould you please tell me if you know a case that publisher or subscriber cannot\nrespond to the opposite side? Note that if we apply the publisher-side patch, we\ndon't have to apply subscriber-side patch.\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Thu, 16 Feb 2023 06:20:23 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 16 Feb 2023 06:20:23 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear Horiguchi-san,\n> \n> Thank you for responding! Before modifying patches, I want to confirm something\n> you said.\n> \n> > As Amit-K mentioned, we may need to change the name of the option in\n> > this version, since the delay mechanism in this version causes a delay\n> > in sending from publisher than delaying apply on the subscriber side.\n> \n> Right, will be changed.\n> \n> > I'm not sure why output plugin is involved in the delay mechanism. It\n> > appears to me that it would be simpler if the delay occurred in\n> > reorder buffer or logical decoder instead.\n> \n> I'm planning to change, but..\n\nYeah, I don't think we've made up our minds about which way to go yet,\nso it's a bit too early to work on that.\n\n> > Perhaps what I understand\n> > correctly is that we could delay right before only sending commit\n> > records in this case. If we delay at publisher end, all changes will\n> > be sent at once if !streaming, and otherwise, all changes in a\n> > transaction will be spooled at subscriber end. In any case, apply\n> > worker won't be holding an active transaction unnecessarily.\n> \n> What about parallel case? Latest patch does not reject the combination of parallel\n> streaming mode and delay. If delay is done at commit and subscriber uses an parallel\n> apply worker, it may acquire lock for a long time.\n\nI didn't looked too closely, but my guess is that transactions are\nconveyed in spool files in parallel mode, with each file storing a\ncomplete transaction.\n\n> > Of\n> > course we need add the mechanism to process keep-alive and status\n> > report messages.\n> \n> Could you share the good way to handle keep-alive and status messages if you have?\n> If we changed to the decoding layer, it is strange to call walsender function\n> directly.\n\nI'm sorry, but I don't have a concrete idea at the moment. 
When I read\nthrough the last patch, I missed that WalSndDelay is actually a subset\nof WalSndLoop. Although it can handle keep-alives correctly, I'm not\nsure we can accept that structure..\n\n> > Those setups work fine when no\n> > apply-delay involved, but they won't work with the patches we're\n> > talking about because the subscriber won't respond to the keep-alive\n> > packets from the peer. So when I wrote \"practically works\" in the\n> > last mail, this is what I meant.\n> \n> I'm not sure around the part. I think in the latest patch, subscriber can respond\n> to the keepalive packets from the peer. Also, publisher can respond to the peer.\n> Could you please tell me if you know a case that publisher or subscriber cannot\n> respond to the opposite side? Note that if we apply the publisher-side patch, we\n> don't have to apply subscriber-side patch.\n\nSorry about that again, I missed that part in the last patch as\nmentioned earlier..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Feb 2023 17:55:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 2:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 16 Feb 2023 06:20:23 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > Dear Horiguchi-san,\n> >\n> > Thank you for responding! Before modifying patches, I want to confirm something\n> > you said.\n> >\n> > > As Amit-K mentioned, we may need to change the name of the option in\n> > > this version, since the delay mechanism in this version causes a delay\n> > > in sending from publisher than delaying apply on the subscriber side.\n> >\n> > Right, will be changed.\n> >\n> > > I'm not sure why output plugin is involved in the delay mechanism. It\n> > > appears to me that it would be simpler if the delay occurred in\n> > > reorder buffer or logical decoder instead.\n> >\n> > I'm planning to change, but..\n>\n> Yeah, I don't think we've made up our minds about which way to go yet,\n> so it's a bit too early to work on that.\n>\n> > > Perhaps what I understand\n> > > correctly is that we could delay right before only sending commit\n> > > records in this case. If we delay at publisher end, all changes will\n> > > be sent at once if !streaming, and otherwise, all changes in a\n> > > transaction will be spooled at subscriber end. In any case, apply\n> > > worker won't be holding an active transaction unnecessarily.\n> >\n> > What about parallel case? Latest patch does not reject the combination of parallel\n> > streaming mode and delay. 
If delay is done at commit and subscriber uses an parallel\n> > apply worker, it may acquire lock for a long time.\n>\n> I didn't looked too closely, but my guess is that transactions are\n> conveyed in spool files in parallel mode, with each file storing a\n> complete transaction.\n>\n\nNo, we don't try to collect all the data in files for parallel mode.\nHaving said that, it doesn't matter because we won't know the time of\nthe commit (which is used to compute delay) before we encounter the\ncommit record in WAL. So, I feel for this approach, we can follow what\nyou said.\n\n> > > Of\n> > > course we need add the mechanism to process keep-alive and status\n> > > report messages.\n> >\n> > Could you share the good way to handle keep-alive and status messages if you have?\n> > If we changed to the decoding layer, it is strange to call walsender function\n> > directly.\n>\n> I'm sorry, but I don't have a concrete idea at the moment. When I read\n> through the last patch, I missed that WalSndDelay is actually a subset\n> of WalSndLoop. Although it can handle keep-alives correctly, I'm not\n> sure we can accept that structure..\n>\n\nI think we can use update_progress_txn() for this purpose but note\nthat we are discussing to change the same in thread [1].\n\n[1] - https://www.postgresql.org/message-id/20230210210423.r26ndnfmuifie4f6%40awork3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Feb 2023 14:46:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 14:21:01 +0900, Kyotaro Horiguchi wrote:\n> I'm not sure why output plugin is involved in the delay mechanism.\n\n+many\n\nThe output plugin absolutely never should be involved in something like\nthis. It was a grave mistake that OutputPluginUpdateProgress() calls were\nadded to the commit callback and then proliferated.\n\n\n> It appears to me that it would be simpler if the delay occurred in reorder\n> buffer or logical decoder instead.\n\nThis is a feature specific to walsender. So the riggering logic should either\ndirectly live in the walsender, or in a callback set in\nLogicalDecodingContext. That could be called from decode.c or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 11:18:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nThank you for replying! This direction seems OK, so I started to revise the patch.\nPSA new version.\n\n> > > As Amit-K mentioned, we may need to change the name of the option in\n> > > this version, since the delay mechanism in this version causes a delay\n> > > in sending from publisher than delaying apply on the subscriber side.\n> >\n> > Right, will be changed.\n> >\n> > > I'm not sure why output plugin is involved in the delay mechanism. It\n> > > appears to me that it would be simpler if the delay occurred in\n> > > reorder buffer or logical decoder instead.\n> >\n> > I'm planning to change, but..\n> \n> Yeah, I don't think we've made up our minds about which way to go yet,\n> so it's a bit too early to work on that.\n\nThe parameter name is changed to min_send_delay.\nAnd the delaying spot is changed to logical decoder.\n\n> > > Perhaps what I understand\n> > > correctly is that we could delay right before only sending commit\n> > > records in this case. If we delay at publisher end, all changes will\n> > > be sent at once if !streaming, and otherwise, all changes in a\n> > > transaction will be spooled at subscriber end. In any case, apply\n> > > worker won't be holding an active transaction unnecessarily.\n> >\n> > What about parallel case? Latest patch does not reject the combination of\n> parallel\n> > streaming mode and delay. 
If delay is done at commit and subscriber uses an\n> parallel\n> > apply worker, it may acquire lock for a long time.\n> \n> I didn't looked too closely, but my guess is that transactions are\n> conveyed in spool files in parallel mode, with each file storing a\n> complete transaction.\n\nBased on the advice, I moved the delaying to DecodeCommit().\nAnd the combination of parallel streaming mode and min_send_delay is\nrejected again.\n\n> > > Of\n> > > course we need add the mechanism to process keep-alive and status\n> > > report messages.\n> >\n> > Could you share the good way to handle keep-alive and status messages if you\n> have?\n> > If we changed to the decoding layer, it is strange to call walsender function\n> > directly.\n> \n> I'm sorry, but I don't have a concrete idea at the moment. When I read\n> through the last patch, I missed that WalSndDelay is actually a subset\n> of WalSndLoop. Although it can handle keep-alives correctly, I'm not\n> sure we can accept that structure..\n\nNo issues. I have kept the current implementation.\n\nSome bugs I found are also fixed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Fri, 17 Feb 2023 06:44:02 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > > > Perhaps what I understand\r\n> > > > correctly is that we could delay right before only sending commit\r\n> > > > records in this case. If we delay at publisher end, all changes will\r\n> > > > be sent at once if !streaming, and otherwise, all changes in a\r\n> > > > transaction will be spooled at subscriber end. In any case, apply\r\n> > > > worker won't be holding an active transaction unnecessarily.\r\n> > >\r\n> > > What about parallel case? Latest patch does not reject the combination of\r\n> parallel\r\n> > > streaming mode and delay. If delay is done at commit and subscriber uses an\r\n> parallel\r\n> > > apply worker, it may acquire lock for a long time.\r\n> >\r\n> > I didn't looked too closely, but my guess is that transactions are\r\n> > conveyed in spool files in parallel mode, with each file storing a\r\n> > complete transaction.\r\n> >\r\n> \r\n> No, we don't try to collect all the data in files for parallel mode.\r\n> Having said that, it doesn't matter because we won't know the time of\r\n> the commit (which is used to compute delay) before we encounter the\r\n> commit record in WAL. So, I feel for this approach, we can follow what\r\n> you said.\r\n\r\nRight. And new patch follows the opinion. \r\n\r\n> > > > Of\r\n> > > > course we need add the mechanism to process keep-alive and status\r\n> > > > report messages.\r\n> > >\r\n> > > Could you share the good way to handle keep-alive and status messages if\r\n> you have?\r\n> > > If we changed to the decoding layer, it is strange to call walsender function\r\n> > > directly.\r\n> >\r\n> > I'm sorry, but I don't have a concrete idea at the moment. When I read\r\n> > through the last patch, I missed that WalSndDelay is actually a subset\r\n> > of WalSndLoop. 
Although it can handle keep-alives correctly, I'm not\r\n> > sure we can accept that structure..\r\n> >\r\n> \r\n> I think we can use update_progress_txn() for this purpose but note\r\n> that we are discussing to change the same in thread [1].\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/20230210210423.r26ndnfmuifie4f6%40\r\n> awork3.anarazel.de\r\n\r\nI did not reuse update_progress_txn() because we cannot use it straightforwardly,\r\nbut I can change this if we have a better idea than the present one.\r\n\r\nNew patch was posted in [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866F00191375D0193320A4DF5A19%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 17 Feb 2023 06:45:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Andres,\n\nThank you for the comments! I understand that you have agreed with the approach\nin which the publisher delays sending data.\n\n> > I'm not sure why output plugin is involved in the delay mechanism.\n> \n> +many\n> \n> The output plugin absolutely never should be involved in something like\n> this. It was a grave mistake that OutputPluginUpdateProgress() calls were\n> added to the commit callback and then proliferated.\n> \n> \n> > It appears to me that it would be simpler if the delay occurred in reorder\n> > buffer or logical decoder instead.\n> \n> This is a feature specific to walsender. So the triggering logic should either\n> directly live in the walsender, or in a callback set in\n> LogicalDecodingContext. That could be called from decode.c or such.\n\nOK, I can follow that opinion.\nI think the walsender function should not be called directly from decode.c.\nSo I implemented it as a callback in LogicalDecodingContext, and it is called\nfrom decode.c if set.\n\nNew patch was posted in [1].\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866F00191375D0193320A4DF5A19%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 17 Feb 2023 06:45:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 12:14 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thank you for replying! This direction seems OK, so I started to revise the patch.\n> PSA new version.\n>\n\nFew comments:\n=============\n1.\n+ <para>\n+ The minimum delay for publisher sends data, in milliseconds\n+ </para></entry>\n+ </row>\n\nIt would probably be better to write it as \"The minimum delay, in\nmilliseconds, by the publisher to send changes\"\n\n2. The subminsenddelay is placed inconsistently in the patch. In the\ndocs (catalogs.sgml), system_views.sql, and in some places in the\ncode, it is after subskiplsn, but in the catalog table and\ncorresponding structure, it is placed after subowner. It should be\nconsistently placed after the subscription owner.\n\n3.\n+ <row>\n+ <entry><literal>WalSenderSendDelay</literal></entry>\n+ <entry>Waiting for sending changes to subscriber in WAL sender\n+ process.</entry>\n\nHow about writing it as follows: \"Waiting while sending changes for\ntime-delayed logical replication in the WAL sender process.\"?\n\n4.\n+ <para>\n+ Any delay becomes effective only after all initial table\n+ synchronization has finished and occurs before each transaction\n+ starts to get applied on the subscriber. The delay does not take into\n+ account the overhead of time spent in transferring the transaction,\n+ which means that the arrival time at the subscriber may be delayed\n+ more than the given time.\n+ </para>\n\nThis needs to change based on a new approach. It should be something\nlike: \"The delay is effective only when the publisher decides to send\na particular transaction downstream.\"\n\n5.\n+ * allowed. This is because in parallel streaming mode, we start applying\n+ * the transaction stream as soon as the first change arrives without\n+ * knowing the transaction's prepare/commit time. 
Always waiting for the\n+ * full 'min_send_delay' period might include unnecessary delay.\n+ *\n+ * The other possibility was to apply the delay at the end of the parallel\n+ * apply transaction but that would cause issues related to resource bloat\n+ * and locks being held for a long time.\n+ */\n\nThis part of the comments seems to imply more of a subscriber-side\ndelay approach. I think we should try to adjust these as per the\nchanged approach.\n\n6.\n@@ -666,6 +666,10 @@ DecodeCommit(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n buf->origptr, buf->endptr);\n }\n\n+ /* Delay given time if the context has 'delay' callback */\n+ if (ctx->delay)\n+ ctx->delay(ctx, commit_time);\n+\n\nI think we should invoke delay functionality only when\nctx->min_send_delay > 0. Otherwise, there will be some unnecessary\noverhead. We can change the comment along the lines of: \"Delay sending\nthe changes if required. For streaming transactions, this means a\ndelay in sending the last stream but that is okay because on the\ndownstream the changes will be applied only after receiving the last\nstream.\"\n\n7. For 2PC transactions, I think we should add the delay in\nDecodePrerpare. Because after receiving the PREPARE, the downstream\nwill apply the xact. In this case, we shouldn't add a delay for the\ncommit_prepared.\n\n8.\n+#\n+# If the subscription sets min_send_delay parameter, the logical replication\n+# worker will delay the transaction apply for min_send_delay milliseconds.\n\nI think here also comments should be updated as per the changed\napproach for applying the delay on the publisher side.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Feb 2023 11:18:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1.\r\n> + <para>\r\n> + The minimum delay for publisher sends data, in milliseconds\r\n> + </para></entry>\r\n> + </row>\r\n> \r\n> It would probably be better to write it as \"The minimum delay, in\r\n> milliseconds, by the publisher to send changes\"\r\n\r\nFixed.\r\n\r\n> 2. The subminsenddelay is placed inconsistently in the patch. In the\r\n> docs (catalogs.sgml), system_views.sql, and in some places in the\r\n> code, it is after subskiplsn, but in the catalog table and\r\n> corresponding structure, it is placed after subowner. It should be\r\n> consistently placed after the subscription owner.\r\n\r\nBasically moved. Note that some parts were not changed like\r\nmaybe_reread_subscription() because the ordering had been already broken.\r\n\r\n> 3.\r\n> + <row>\r\n> + <entry><literal>WalSenderSendDelay</literal></entry>\r\n> + <entry>Waiting for sending changes to subscriber in WAL sender\r\n> + process.</entry>\r\n> \r\n> How about writing it as follows: \"Waiting while sending changes for\r\n> time-delayed logical replication in the WAL sender process.\"?\r\n\r\nFixed.\r\n\r\n> 4.\r\n> + <para>\r\n> + Any delay becomes effective only after all initial table\r\n> + synchronization has finished and occurs before each transaction\r\n> + starts to get applied on the subscriber. The delay does not take into\r\n> + account the overhead of time spent in transferring the transaction,\r\n> + which means that the arrival time at the subscriber may be delayed\r\n> + more than the given time.\r\n> + </para>\r\n> \r\n> This needs to change based on a new approach. It should be something\r\n> like: \"The delay is effective only when the publisher decides to send\r\n> a particular transaction downstream.\"\r\n\r\nRight, the first sentence is partially changed as you said.\r\n\r\n> 5.\r\n> + * allowed. 
This is because in parallel streaming mode, we start applying\r\n> + * the transaction stream as soon as the first change arrives without\r\n> + * knowing the transaction's prepare/commit time. Always waiting for the\r\n> + * full 'min_send_delay' period might include unnecessary delay.\r\n> + *\r\n> + * The other possibility was to apply the delay at the end of the parallel\r\n> + * apply transaction but that would cause issues related to resource bloat\r\n> + * and locks being held for a long time.\r\n> + */\r\n> \r\n> This part of the comments seems to imply more of a subscriber-side\r\n> delay approach. I think we should try to adjust these as per the\r\n> changed approach.\r\n\r\nAdjusted.\r\n\r\n> 6.\r\n> @@ -666,6 +666,10 @@ DecodeCommit(LogicalDecodingContext *ctx,\r\n> XLogRecordBuffer *buf,\r\n> buf->origptr, buf->endptr);\r\n> }\r\n> \r\n> + /* Delay given time if the context has 'delay' callback */\r\n> + if (ctx->delay)\r\n> + ctx->delay(ctx, commit_time);\r\n> +\r\n> \r\n> I think we should invoke delay functionality only when\r\n> ctx->min_send_delay > 0. Otherwise, there will be some unnecessary\r\n> overhead. We can change the comment along the lines of: \"Delay sending\r\n> the changes if required. For streaming transactions, this means a\r\n> delay in sending the last stream but that is okay because on the\r\n> downstream the changes will be applied only after receiving the last\r\n> stream.\"\r\n\r\nChanged accordingly.\r\n\r\n> 7. For 2PC transactions, I think we should add the delay in\r\n> DecodePrerpare. Because after receiving the PREPARE, the downstream\r\n> will apply the xact. In this case, we shouldn't add a delay for the\r\n> commit_prepared.\r\n\r\nRight, the transaction will end when it receives PREPARE. Fixed. 
\r\nI've tested locally and the delay seemed to occur at the PREPARE phase.\r\n\r\n> 8.\r\n> +#\r\n> +# If the subscription sets min_send_delay parameter, the logical replication\r\n> +# worker will delay the transaction apply for min_send_delay milliseconds.\r\n> \r\n> I think here also comments should be updated as per the changed\r\n> approach for applying the delay on the publisher side.\r\n\r\nFixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 20 Feb 2023 02:27:26 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are some review comments for patch v3-0001.\n\n(I haven't looked at the test code yet)\n\n======\nCommit Message\n\n1.\nIf the subscription sets min_send_delay parameter, an apply worker passes the\nvalue to the publisher as an output plugin option. And then, the walsender will\ndelay the transaction sending for given milliseconds.\n\n~\n\n1a.\n\"an apply worker\" --> \"the apply worker (via walrcv_startstreaming)\".\n\n~\n\n1b.\n\"And then, the walsender\" --> \"The walsender\"\n\n~~~\n\n2.\nThe combination of parallel streaming mode and min_send_delay is not allowed.\nThis is because in parallel streaming mode, we start applying the transaction\nstream as soon as the first change arrives without knowing the transaction's\nprepare/commit time. Always waiting for the full 'min_send_delay' period might\ninclude unnecessary delay.\n\n~\n\nIs there another reason not to support this?\n\nEven if streaming + min_send_delay incurs some extra delay, is that a\nreason to reject outright the combination? What difference will the\npotential of a few extra seconds overhead make when min_send_delay is\nmore likely to be far greater (e.g. 
minutes or hours)?\n\n~~~\n\n3.\nThe other possibility was to apply the delay at the end of the parallel apply\ntransaction but that would cause issues related to resource bloat and\nlocks being\nheld for a long time.\n\n~\n\nIs this explanation still relevant now you are doing pub-side delays?\n\n======\ndoc/src/sgml/catalogs.sgml\n\n4.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subminsenddelay</structfield> <type>int4</type>\n+ </para>\n+ <para>\n+ The minimum delay, in milliseconds, by the publisher to send changes\n+ </para></entry>\n+ </row>\n\n\"by the publisher to send changes\" --> \"by the publisher before sending changes\"\n\n======\ndoc/src/sgml/logical-replication.sgml\n\n5.\n+ <para>\n+ A publication can delay sending changes to the subscription by specifying\n+ the <literal>min_send_delay</literal> subscription parameter. See\n+ <xref linkend=\"sql-createsubscription\"/> for details.\n+ </para>\n\n~\n\nThis description seemed backwards because IIUC the PUBLICATION has\nnothing to do with the delay really, the walsender is told what to do\nby the SUBSCRIPTION. Anyway, this paragraph is in the \"Subscriber\"\nsection, so mentioning publications was a bit confusing.\n\nSUGGESTION\nA subscription can delay the receipt of changes by specifying the\nmin_send_delay subscription parameter. See ...\n\n======\ndoc/src/sgml/monitoring.sgml\n\n6.\n+ <row>\n+ <entry><literal>WalSenderSendDelay</literal></entry>\n+ <entry>Waiting while sending changes for time-delayed logical replication\n+ in the WAL sender process.</entry>\n+ </row>\n\nShould this say \"Waiting before sending changes\", instead of \"Waiting\nwhile sending changes\"?\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n7.\n+ <para>\n+ By default, the publisher sends changes as soon as possible. This\n+ parameter allows the user to delay the publisher to send changes by\n+ given time period. 
If the value is specified without units, it is\n+ taken as milliseconds. The default is zero (no delay). See\n+ <xref linkend=\"config-setting-names-values\"/> for details on the\n+ available valid time units.\n+ </para>\n\n\"to delay the publisher to send changes\" --> \"to delay changes\"\n\n~~~\n\n8.\n+ <para>\n+ The delay is effective only when the initial table synchronization\n+ has been finished and the publisher decides to send a particular\n+ transaction downstream. The delay does not take into account the\n+ overhead of time spent in transferring the transaction, which means\n+ that the arrival time at the subscriber may be delayed more than the\n+ given time.\n+ </para>\n\nI'm not sure about this mention about only \"effective only when the\ninitial table synchronization has been finished\"... Now that the delay\nis pub-side I don't know that it is true anymore. The tablesync worker\nwill try to synchronize with the apply worker. IIUC during this\n\"synchronization\" phase the apply worker might be getting delayed by\nits own walsender, so therefore the tablesync might also be delayed\n(due to syncing with the apply worker) won't it?\n\n======\nsrc/backend/commands/subscriptioncmds.c\n\n9.\n+ /*\n+ * translator: the first %s is a string of the form \"parameter > 0\"\n+ * and the second one is \"option = value\".\n+ */\n+ errmsg(\"%s and %s are mutually exclusive options\",\n+ \"min_send_delay > 0\", \"streaming = parallel\"));\n+\n+\n }\n\nExcessive whitespace.\n\n======\nsrc/backend/replication/logical/worker.c\n\n10. 
ApplyWorkerMain\n\n+ /*\n+ * Time-delayed logical replication does not support tablesync\n+ * workers, so only the leader apply worker can request walsenders to\n+ * apply delay on the publisher side.\n+ */\n+ if (server_version >= 160000 && MySubscription->minsenddelay > 0)\n+ options.proto.logical.min_send_delay = MySubscription->minsenddelay;\n\n\"apply delay\" --> \"delay\"\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n11.\n+ errno = 0;\n+ parsed = strtoul(strVal(defel->arg), &endptr, 10);\n+ if (errno != 0 || *endptr != '\\0')\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid min_send_delay\")));\n+\n+ if (parsed > PG_INT32_MAX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"min_send_delay \\\"%s\\\" out of range\",\n+ strVal(defel->arg))));\n\nShould the validation be also checking/asserting no negative numbers,\nor actually should the min_send_delay be defined as a uint32 in the\nfirst place?\n\n~~~\n\n12. pgoutput_startup\n\n@@ -501,6 +528,15 @@ pgoutput_startup(LogicalDecodingContext *ctx,\nOutputPluginOptions *opt,\n else\n ctx->twophase_opt_given = true;\n\n+ if (data->min_send_delay &&\n+ data->protocol_version < LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"requested proto_version=%d does not support delay sending\ndata, need %d or higher\",\n+ data->protocol_version, LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));\n+ else\n+ ctx->min_send_delay = data->min_send_delay;\n\n\nIMO it doesn't make sense to compare this new feature with the\nunrelated LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM protocol\nversion. I think we should define a new constant\nLOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM (even if it has the same\nvalue as the LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM).\n\n======\nsrc/backend/replication/walsender.c\n\n13. 
WalSndDelay\n\n+ long diffms;\n+ long timeout_interval_ms;\n\nIMO some more informative name for these would make the code read better:\n\n'diffms' --> 'remaining_wait_time_ms'\n'timeout_interval_ms' --> 'timeout_sleeptime_ms'\n\n~~~\n\n14.\n+ /* Sleep until we get reply from worker or we time out */\n+ WalSndWait(WL_SOCKET_READABLE,\n+ Min(timeout_interval_ms, diffms),\n+ WAIT_EVENT_WALSENDER_SEND_DELAY);\n\nSorry, I didn't understand this comment \"reply from worker\"... AFAIK\nhere we are just sleeping, not waiting for replies from anywhere (???)\n\n======\nsrc/include/replication/logical.h\n\n15.\n@@ -64,6 +68,7 @@ typedef struct LogicalDecodingContext\n LogicalOutputPluginWriterPrepareWrite prepare_write;\n LogicalOutputPluginWriterWrite write;\n LogicalOutputPluginWriterUpdateProgress update_progress;\n+ LogicalOutputPluginWriterDelay delay;\n\n~\n\n15a.\nQuestion: Is there some advantage to introducing another callback,\ninstead of just doing the delay inline?\n\n~\n\n15b.\nShould this be a more informative member name like 'delay_send'?\n\n~~~\n\n16.\n@@ -100,6 +105,8 @@ typedef struct LogicalDecodingContext\n */\n bool twophase_opt_given;\n\n+ int32 min_send_delay;\n+\n\nMissing comment for this new member.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 21 Feb 2023 09:01:10 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are some review comments for the v3-0001 test code.\n\n======\nsrc/test/regress/sql/subscription.sql\n\n1.\n+-- fail - utilizing streaming = parallel with time-delayed\nreplication is not supported\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, streaming = parallel, min_send_delay = 123);\n\n\"utilizing\" --> \"specifying\"\n\n~~~\n\n2.\n+-- success -- min_send_delay value without unit is take as milliseconds\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexit' PUBLICATION testpub WITH (connect =\nfalse, min_send_delay = 123);\n+\\dRs+\n\n\"without unit is take as\" --> \"without units is taken as\"\n\n~~~\n\n3.\n+-- success -- min_send_delay value with unit is converted into ms and\nstored as an integer\n+ALTER SUBSCRIPTION regress_testsub SET (min_send_delay = '1 d');\n+\\dRs+\n\n\n\"with unit is converted into ms\" --> \"with units other than ms is\nconverted to ms\"\n\n~~~\n\n4. Missing tests?\n\nWhy have the previous ALTER SUBSCRIPTION tests been removed? AFAIK,\ncurrently, there are no regression tests for error messages like:\n\ntest_sub=# ALTER SUBSCRIPTION sub1 SET (min_send_delay = 123);\nERROR: cannot set min_send_delay for subscription in parallel streaming mode\n\n======\nsrc/test/subscription/t/001_rep_changes.pl\n\n5.\n+# This test is successful if and only if the LSN has been applied with at least\n+# the configured apply delay.\n+ok( time() - $publisher_insert_time >= $delay,\n+ \"subscriber applies WAL only after replication delay for\nnon-streaming transaction\"\n+);\n\nIt's not strictly an \"apply delay\". Maybe this comment only needs to\nsay like below:\n\nSUGGESTION\n# This test is successful only if at least the configured delay has elapsed.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 21 Feb 2023 12:06:36 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 3:31 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n>\n> 2.\n> The combination of parallel streaming mode and min_send_delay is not allowed.\n> This is because in parallel streaming mode, we start applying the transaction\n> stream as soon as the first change arrives without knowing the transaction's\n> prepare/commit time. Always waiting for the full 'min_send_delay' period might\n> include unnecessary delay.\n>\n> ~\n>\n> Is there another reason not to support this?\n>\n> Even if streaming + min_send_delay incurs some extra delay, is that a\n> reason to reject outright the combination? What difference will the\n> potential of a few extra seconds overhead make when min_send_delay is\n> more likely to be far greater (e.g. minutes or hours)?\n>\n\nI think the point is that we don't know the commit time at the start\nof streaming and even the transaction can be quite long in which case\nadding the delay is not expected.\n\n>\n> ======\n> doc/src/sgml/catalogs.sgml\n>\n> 4.\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subminsenddelay</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + The minimum delay, in milliseconds, by the publisher to send changes\n> + </para></entry>\n> + </row>\n>\n> \"by the publisher to send changes\" --> \"by the publisher before sending changes\"\n>\n\nFor the streaming (=on) case, we may end up sending changes before we\nstart to apply delay.\n\n> ======\n> doc/src/sgml/monitoring.sgml\n>\n> 6.\n> + <row>\n> + <entry><literal>WalSenderSendDelay</literal></entry>\n> + <entry>Waiting while sending changes for time-delayed logical replication\n> + in the WAL sender process.</entry>\n> + </row>\n>\n> Should this say \"Waiting before sending changes\", instead of \"Waiting\n> while sending changes\"?\n>\n\nIn the streaming (non-parallel) case, we may have sent some changes\nbefore wait as we wait only at commit/prepare time. 
The downstream\nwon't apply such changes till commit. So, this description makes sense\nand this matches similar nearby descriptions.\n\n>\n> 8.\n> + <para>\n> + The delay is effective only when the initial table synchronization\n> + has been finished and the publisher decides to send a particular\n> + transaction downstream. The delay does not take into account the\n> + overhead of time spent in transferring the transaction, which means\n> + that the arrival time at the subscriber may be delayed more than the\n> + given time.\n> + </para>\n>\n> I'm not sure about this mention about only \"effective only when the\n> initial table synchronization has been finished\"... Now that the delay\n> is pub-side I don't know that it is true anymore.\n>\n\nThis will still be true because we don't wait during the initial copy\n(sync). The delay happens only when the replication starts.\n\n> ======\n> src/backend/commands/subscriptioncmds.c\n> ======\n> src/backend/replication/pgoutput/pgoutput.c\n>\n> 11.\n> + errno = 0;\n> + parsed = strtoul(strVal(defel->arg), &endptr, 10);\n> + if (errno != 0 || *endptr != '\\0')\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid min_send_delay\")));\n> +\n> + if (parsed > PG_INT32_MAX)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"min_send_delay \\\"%s\\\" out of range\",\n> + strVal(defel->arg))));\n>\n> Should the validation be also checking/asserting no negative numbers,\n> or actually should the min_send_delay be defined as a uint32 in the\n> first place?\n>\n\nI don't see the need to change the datatype of min_send_delay as\ncompared to what we have min_apply_delay.\n\n> ======\n> src/include/replication/logical.h\n>\n> 15.\n> @@ -64,6 +68,7 @@ typedef struct LogicalDecodingContext\n> LogicalOutputPluginWriterPrepareWrite prepare_write;\n> LogicalOutputPluginWriterWrite write;\n> LogicalOutputPluginWriterUpdateProgress update_progress;\n> + 
LogicalOutputPluginWriterDelay delay;\n>\n> ~\n>\n> 15a.\n> Question: Is there some advantage to introducing another callback,\n> instead of just doing the delay inline?\n>\n\nThis is required because we need to check the walsender's timeout and/or\nprocess replies during the delay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 21 Feb 2023 07:30:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version. \r\n\r\n> 1.\r\n> If the subscription sets min_send_delay parameter, an apply worker passes the\r\n> value to the publisher as an output plugin option. And then, the walsender will\r\n> delay the transaction sending for given milliseconds.\r\n> \r\n> ~\r\n> \r\n> 1a.\r\n> \"an apply worker\" --> \"the apply worker (via walrcv_startstreaming)\".\r\n> \r\n> ~\r\n> \r\n> 1b.\r\n> \"And then, the walsender\" --> \"The walsender\"\r\n\r\nFixed.\r\n\r\n> 2.\r\n> The combination of parallel streaming mode and min_send_delay is not allowed.\r\n> This is because in parallel streaming mode, we start applying the transaction\r\n> stream as soon as the first change arrives without knowing the transaction's\r\n> prepare/commit time. Always waiting for the full 'min_send_delay' period might\r\n> include unnecessary delay.\r\n> \r\n> ~\r\n> \r\n> Is there another reason not to support this?\r\n> \r\n> Even if streaming + min_send_delay incurs some extra delay, is that a\r\n> reason to reject outright the combination? What difference will the\r\n> potential of a few extra seconds overhead make when min_send_delay is\r\n> more likely to be far greater (e.g. minutes or hours)?\r\n\r\nAnother case I came up with is that streaming transactions may arrive continuously.\r\nIf there are many transactions to be streamed, the walsender must delay sending\r\nevery transaction for the given period. It means that the arrival of transactions at\r\nthe subscriber may be delayed by approximately min_send_delay x # of transactions.\r\n\r\n> 3.\r\n> The other possibility was to apply the delay at the end of the parallel apply\r\n> transaction but that would cause issues related to resource bloat and\r\n> locks being\r\n> held for a long time.\r\n> \r\n> ~\r\n> \r\n> Is this explanation still relevant now you are doing pub-side delays?\r\n\r\nSlightly reworded. 
I think the problem may occur if we delay sending the COMMIT\r\nrecord for parallel applied transactions.\r\n\r\n> doc/src/sgml/catalogs.sgml\r\n> \r\n> 4.\r\n> + <row>\r\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n> + <structfield>subminsenddelay</structfield> <type>int4</type>\r\n> + </para>\r\n> + <para>\r\n> + The minimum delay, in milliseconds, by the publisher to send changes\r\n> + </para></entry>\r\n> + </row>\r\n> \r\n> \"by the publisher to send changes\" --> \"by the publisher before sending changes\"\r\n\r\nAs Amit said[1], there is a possibility that the delay happens after some\r\nchanges have already been sent. So I changed it to\r\n\"before sending COMMIT record\". What do you think?\r\n\r\n> doc/src/sgml/logical-replication.sgml\r\n> \r\n> 5.\r\n> + <para>\r\n> + A publication can delay sending changes to the subscription by specifying\r\n> + the <literal>min_send_delay</literal> subscription parameter. See\r\n> + <xref linkend=\"sql-createsubscription\"/> for details.\r\n> + </para>\r\n> \r\n> ~\r\n> \r\n> This description seemed backwards because IIUC the PUBLICATION has\r\n> nothing to do with the delay really, the walsender is told what to do\r\n> by the SUBSCRIPTION. Anyway, this paragraph is in the \"Subscriber\"\r\n> section, so mentioning publications was a bit confusing.\r\n> \r\n> SUGGESTION\r\n> A subscription can delay the receipt of changes by specifying the\r\n> min_send_delay subscription parameter. 
See ...\r\n\r\nChanged.\r\n\r\n> doc/src/sgml/monitoring.sgml\r\n> \r\n> 6.\r\n> + <row>\r\n> + <entry><literal>WalSenderSendDelay</literal></entry>\r\n> + <entry>Waiting while sending changes for time-delayed logical\r\n> replication\r\n> + in the WAL sender process.</entry>\r\n> + </row>\r\n> \r\n> Should this say \"Waiting before sending changes\", instead of \"Waiting\r\n> while sending changes\"?\r\n\r\nPer discussion[1], I did not fix.\r\n\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 7.\r\n> + <para>\r\n> + By default, the publisher sends changes as soon as possible. This\r\n> + parameter allows the user to delay the publisher to send changes by\r\n> + given time period. If the value is specified without units, it is\r\n> + taken as milliseconds. The default is zero (no delay). See\r\n> + <xref linkend=\"config-setting-names-values\"/> for details on the\r\n> + available valid time units.\r\n> + </para>\r\n> \r\n> \"to delay the publisher to send changes\" --> \"to delay changes\"\r\n\r\nFixed.\r\n\r\n> 8.\r\n> + <para>\r\n> + The delay is effective only when the initial table synchronization\r\n> + has been finished and the publisher decides to send a particular\r\n> + transaction downstream. The delay does not take into account the\r\n> + overhead of time spent in transferring the transaction, which means\r\n> + that the arrival time at the subscriber may be delayed more than the\r\n> + given time.\r\n> + </para>\r\n> \r\n> I'm not sure about this mention about only \"effective only when the\r\n> initial table synchronization has been finished\"... Now that the delay\r\n> is pub-side I don't know that it is true anymore. The tablesync worker\r\n> will try to synchronize with the apply worker. IIUC during this\r\n> \"synchronization\" phase the apply worker might be getting delayed by\r\n> its own walsender, so therefore the tablesync might also be delayed\r\n> (due to syncing with the apply worker) won't it?\r\n\r\nI tested and checked codes. 
First of all, the tablesync worker requests to send WALs\r\nwithout min_send_delay, so changes will be sent and applied with no delay. In that sense,\r\ntable synchronization is not affected by the feature. While checking,\r\nhowever, I found a possibility that the state of the table will be delayed in getting to\r\n'ready', because the change of status from SYNCDONE to READY is done by the apply worker.\r\nThis may mean that two-phase will be delayed in getting to \"enabled\".\r\nI added descriptions about it.\r\n\r\n> src/backend/commands/subscriptioncmds.c\r\n> \r\n> 9.\r\n> + /*\r\n> + * translator: the first %s is a string of the form \"parameter > 0\"\r\n> + * and the second one is \"option = value\".\r\n> + */\r\n> + errmsg(\"%s and %s are mutually exclusive options\",\r\n> + \"min_send_delay > 0\", \"streaming = parallel\"));\r\n> +\r\n> +\r\n> }\r\n> \r\n> Excessive whitespace.\r\n\r\nAdjusted.\r\n\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 10. ApplyWorkerMain\r\n> \r\n> + /*\r\n> + * Time-delayed logical replication does not support tablesync\r\n> + * workers, so only the leader apply worker can request walsenders to\r\n> + * apply delay on the publisher side.\r\n> + */\r\n> + if (server_version >= 160000 && MySubscription->minsenddelay > 0)\r\n> + options.proto.logical.min_send_delay = MySubscription->minsenddelay;\r\n> \r\n> \"apply delay\" --> \"delay\"\r\n\r\nFixed.\r\n\r\n> src/backend/replication/pgoutput/pgoutput.c\r\n> \r\n> 11.\r\n> + errno = 0;\r\n> + parsed = strtoul(strVal(defel->arg), &endptr, 10);\r\n> + if (errno != 0 || *endptr != '\\0')\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"invalid min_send_delay\")));\r\n> +\r\n> + if (parsed > PG_INT32_MAX)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"min_send_delay \\\"%s\\\" out of range\",\r\n> + strVal(defel->arg))));\r\n> \r\n> Should the validation be also checking/asserting no negative 
numbers,\r\n> or actually should the min_send_delay be defined as a uint32 in the\r\n> first place?\r\n\r\nI think you are right because min_apply_delay does not have related code.\r\nWe must also consider the possibility that a user may send START_REPLICATION\r\nby hand with a negative value.\r\nFixed.\r\n\r\n\r\n> 12. pgoutput_startup\r\n> \r\n> @@ -501,6 +528,15 @@ pgoutput_startup(LogicalDecodingContext *ctx,\r\n> OutputPluginOptions *opt,\r\n> else\r\n> ctx->twophase_opt_given = true;\r\n> \r\n> + if (data->min_send_delay &&\r\n> + data->protocol_version <\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"requested proto_version=%d does not support delay sending\r\n> data, need %d or higher\",\r\n> + data->protocol_version,\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM)));\r\n> + else\r\n> + ctx->min_send_delay = data->min_send_delay;\r\n> \r\n> \r\n> IMO it doesn't make sense to compare this new feature with the\r\n> unrelated LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM protocol\r\n> version. I think we should define a new constant\r\n> LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM (even if it has the\r\n> same\r\n> value as the LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM).\r\n\r\nAdded.\r\n\r\n> src/backend/replication/walsender.c\r\n> \r\n> 13. WalSndDelay\r\n> \r\n> + long diffms;\r\n> + long timeout_interval_ms;\r\n> \r\n> IMO some more informative name for these would make the code read better:\r\n> \r\n> 'diffms' --> 'remaining_wait_time_ms'\r\n> 'timeout_interval_ms' --> 'timeout_sleeptime_ms'\r\n\r\nChanged.\r\n\r\n> 14.\r\n> + /* Sleep until we get reply from worker or we time out */\r\n> + WalSndWait(WL_SOCKET_READABLE,\r\n> + Min(timeout_interval_ms, diffms),\r\n> + WAIT_EVENT_WALSENDER_SEND_DELAY);\r\n> \r\n> Sorry, I didn't understand this comment \"reply from worker\"... 
AFAIK\r\n> here we are just sleeping, not waiting for replies from anywhere (???)\r\n> \r\n> ======\r\n> src/include/replication/logical.h\r\n> \r\n> 15.\r\n> @@ -64,6 +68,7 @@ typedef struct LogicalDecodingContext\r\n> LogicalOutputPluginWriterPrepareWrite prepare_write;\r\n> LogicalOutputPluginWriterWrite write;\r\n> LogicalOutputPluginWriterUpdateProgress update_progress;\r\n> + LogicalOutputPluginWriterDelay delay;\r\n> \r\n> ~\r\n> \r\n> 15a.\r\n> Question: Is there some advantage to introducing another callback,\r\n> instead of just doing the delay inline?\r\n\r\nIIUC functions related to the walsender should not be called directly, because there\r\nis a possibility that replication slots are manipulated from the backend.\r\n\r\n> 15b.\r\n> Should this be a more informative member name like 'delay_send'?\r\n\r\nChanged.\r\n\r\n> 16.\r\n> @@ -100,6 +105,8 @@ typedef struct LogicalDecodingContext\r\n> */\r\n> bool twophase_opt_given;\r\n> \r\n> + int32 min_send_delay;\r\n> +\r\n> \r\n> Missing comment for this new member.\r\n\r\nAdded.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1+JwLAVAOphnZ1YTiEV_jOMAE6JgJmBE98oek2cg7XF0w@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 21 Feb 2023 07:57:57 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> 1.\r\n> +-- fail - utilizing streaming = parallel with time-delayed\r\n> replication is not supported\r\n> +CREATE SUBSCRIPTION regress_testsub CONNECTION\r\n> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\r\n> false, streaming = parallel, min_send_delay = 123);\r\n> \r\n> \"utilizing\" --> \"specifying\"\r\n\r\nFixed.\r\n\r\n> 2.\r\n> +-- success -- min_send_delay value without unit is take as milliseconds\r\n> +CREATE SUBSCRIPTION regress_testsub CONNECTION\r\n> 'dbname=regress_doesnotexit' PUBLICATION testpub WITH (connect =\r\n> false, min_send_delay = 123);\r\n> +\\dRs+\r\n> \r\n> \"without unit is take as\" --> \"without units is taken as\"\r\n\r\nFixed.\r\n\r\n> 3.\r\n> +-- success -- min_send_delay value with unit is converted into ms and\r\n> stored as an integer\r\n> +ALTER SUBSCRIPTION regress_testsub SET (min_send_delay = '1 d');\r\n> +\\dRs+\r\n> \r\n> \r\n> \"with unit is converted into ms\" --> \"with units other than ms is\r\n> converted to ms\"\r\n\r\nFixed.\r\n\r\n> 4. Missing tests?\r\n> \r\n> Why have the previous ALTER SUBSCRIPTION tests been removed? AFAIK,\r\n> currently, there are no regression tests for error messages like:\r\n> \r\n> test_sub=# ALTER SUBSCRIPTION sub1 SET (min_send_delay = 123);\r\n> ERROR: cannot set min_send_delay for subscription in parallel streaming mode\r\n\r\nThese tests were missed while changing the basic design.\r\nAdded.\r\n\r\n> src/test/subscription/t/001_rep_changes.pl\r\n> \r\n> 5.\r\n> +# This test is successful if and only if the LSN has been applied with at least\r\n> +# the configured apply delay.\r\n> +ok( time() - $publisher_insert_time >= $delay,\r\n> + \"subscriber applies WAL only after replication delay for\r\n> non-streaming transaction\"\r\n> +);\r\n> \r\n> It's not strictly an \"apply delay\". 
Maybe this comment only needs to\r\n> say like below:\r\n> \r\n> SUGGESTION\r\n> # This test is successful only if at least the configured delay has elapsed.\r\n\r\nChanged.\r\n\r\nNew patch is available on [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866C6BCA4D9386D9C486033F5A59%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Tue, 21 Feb 2023 07:58:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for commenting!\r\n\r\n> > 8.\r\n> > + <para>\r\n> > + The delay is effective only when the initial table synchronization\r\n> > + has been finished and the publisher decides to send a particular\r\n> > + transaction downstream. The delay does not take into account the\r\n> > + overhead of time spent in transferring the transaction, which\r\n> means\r\n> > + that the arrival time at the subscriber may be delayed more than\r\n> the\r\n> > + given time.\r\n> > + </para>\r\n> >\r\n> > I'm not sure about this mention about only \"effective only when the\r\n> > initial table synchronization has been finished\"... Now that the delay\r\n> > is pub-side I don't know that it is true anymore.\r\n> >\r\n> \r\n> This will still be true because we don't wait during the initial copy\r\n> (sync). The delay happens only when the replication starts.\r\n\r\nMaybe this depends on the definition of initial copy and sync.\r\nI checked and added descriptions in [1].\r\n\r\n\r\n> > 11.\r\n> > + errno = 0;\r\n> > + parsed = strtoul(strVal(defel->arg), &endptr, 10);\r\n> > + if (errno != 0 || *endptr != '\\0')\r\n> > + ereport(ERROR,\r\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > + errmsg(\"invalid min_send_delay\")));\r\n> > +\r\n> > + if (parsed > PG_INT32_MAX)\r\n> > + ereport(ERROR,\r\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> > + errmsg(\"min_send_delay \\\"%s\\\" out of range\",\r\n> > + strVal(defel->arg))));\r\n> >\r\n> > Should the validation be also checking/asserting no negative numbers,\r\n> > or actually should the min_send_delay be defined as a uint32 in the\r\n> > first place?\r\n> >\r\n> \r\n> I don't see the need to change the datatype of min_send_delay as\r\n> compared to what we have min_apply_delay.\r\n\r\nI think it is OK to change \"long\" to \"unsigned long\", because\r\nwe use strtoul() for reading and should reject negative values.\r\nOf course we can modify them, but I want to keep 
consistency with the proto_version part.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866C6BCA4D9386D9C486033F5A59@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 21 Feb 2023 08:03:45 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are some very minor review comments for the patch v4-0001\n\n======\nCommit Message\n\n1.\nThe other possibility was to apply the delay at the end of the parallel apply\ntransaction but that would cause issues related to resource bloat and\nlocks being\nheld for a long time.\n\n~\n\nThe reply [1] for review comment #2 says that this was \"slightly\nreworded\", but AFAICT nothing is changed here.\n\n~~~\n\n2.\nEariler versions were written by Euler Taveira, Takamichi Osumi, and\nKuroda Hayato\n\nTypo: \"Eariler\"\n\n======\ndoc/src/sgml/ref/create_subscription.sgml\n\n3.\n+ <para>\n+ By default, the publisher sends changes as soon as possible. This\n+ parameter allows the user to delay changes by given time period. If\n+ the value is specified without units, it is taken as milliseconds.\n+ The default is zero (no delay). See <xref\nlinkend=\"config-setting-names-values\"/>\n+ for details on the available valid time units.\n+ </para>\n\n\"by given time period\" --> \"by the given time period\"\n\n======\nsrc/backend/replication/pgoutput/pgoutput.c\n\n4. parse_output_parameters\n\n+ else if (strcmp(defel->defname, \"min_send_delay\") == 0)\n+ {\n+ unsigned long parsed;\n+ char *endptr;\n\nI think 'parsed' is a fairly meaningless variable name. How about\ncalling this variable something useful like 'delay_val' or\n'min_send_delay_value', or something like those? Yes, I recognize that\nyou copied this from some existing code fragment, but IMO that doesn't\nmake it good.\n\n======\nsrc/backend/replication/walsender.c\n\n5.\n+ /* Sleep until we get reply from worker or we time out */\n+ WalSndWait(WL_SOCKET_READABLE,\n+ Min(timeout_sleeptime_ms, remaining_wait_time_ms),\n+ WAIT_EVENT_WALSENDER_SEND_DELAY);\n\nIn my previous review [2] comment #14, I questioned if this comment\nwas correct. 
It looks like that was accidentally missed.\n\n======\nsrc/include/replication/logical.h\n\n6.\n+ /*\n+ * The minimum delay, in milliseconds, by the publisher before sending\n+ * COMMIT/PREPARE record\n+ */\n+ int32 min_send_delay;\n\nThe comment is missing a period.\n\n\n------\n[1] Kuroda-san replied to my review v3-0001.\nhttps://www.postgresql.org/message-id/TYAPR01MB5866C6BCA4D9386D9C486033F5A59%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[2] My previous review v3-0001.\nhttps://www.postgresql.org/message-id/CAHut%2BPu6Y%2BBkYKg6MYGi2zGnx6imeK4QzxBVhpQoPMeDr1npnQ%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 22 Feb 2023 12:00:04 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1.\r\n> The other possibility was to apply the delay at the end of the parallel apply\r\n> transaction but that would cause issues related to resource bloat and\r\n> locks being\r\n> held for a long time.\r\n> \r\n> ~\r\n> \r\n> The reply [1] for review comment #2 says that this was \"slightly\r\n> reworded\", but AFAICT nothing is changed here.\r\n\r\nOh, my git operation might have been wrong and the change disappeared.\r\nSorry for the inconvenience; I have reworded it again.\r\n\r\n> 2.\r\n> Eariler versions were written by Euler Taveira, Takamichi Osumi, and\r\n> Kuroda Hayato\r\n> \r\n> Typo: \"Eariler\"\r\n\r\nFixed.\r\n\r\n> ======\r\n> doc/src/sgml/ref/create_subscription.sgml\r\n> \r\n> 3.\r\n> + <para>\r\n> + By default, the publisher sends changes as soon as possible. This\r\n> + parameter allows the user to delay changes by given time period. If\r\n> + the value is specified without units, it is taken as milliseconds.\r\n> + The default is zero (no delay). See <xref\r\n> linkend=\"config-setting-names-values\"/>\r\n> + for details on the available valid time units.\r\n> + </para>\r\n> \r\n> \"by given time period\" --> \"by the given time period\"\r\n\r\nFixed.\r\n\r\n> src/backend/replication/pgoutput/pgoutput.c\r\n> \r\n> 4. parse_output_parameters\r\n> \r\n> + else if (strcmp(defel->defname, \"min_send_delay\") == 0)\r\n> + {\r\n> + unsigned long parsed;\r\n> + char *endptr;\r\n> \r\n> I think 'parsed' is a fairly meaningless variable name. How about\r\n> calling this variable something useful like 'delay_val' or\r\n> 'min_send_delay_value', or something like those? 
Yes, I recognize that\r\n> you copied this from some existing code fragment, but IMO that doesn't\r\n> make it good.\r\n\r\nOK, changed to 'delay_val'.\r\n\r\n> \r\n> ======\r\n> src/backend/replication/walsender.c\r\n> \r\n> 5.\r\n> + /* Sleep until we get reply from worker or we time out */\r\n> + WalSndWait(WL_SOCKET_READABLE,\r\n> + Min(timeout_sleeptime_ms, remaining_wait_time_ms),\r\n> + WAIT_EVENT_WALSENDER_SEND_DELAY);\r\n> \r\n> In my previous review [2] comment #14, I questioned if this comment\r\n> was correct. It looks like that was accidentally missed.\r\n\r\nSorry, I missed that. But I think this does not have to be changed.\r\n\r\nImportant point here is that WalSndWait() is used, not WaitLatch().\r\nAccording to comment atop WalSndWait(), the function waits till following events:\r\n\r\n- the socket becomes readable or writable\r\n- a timeout occurs\r\n\r\nLogical walsender process is always connected to worker, so the socket becomes readable\r\nwhen apply worker sends feedback message.\r\nThat's why I wrote \"Sleep until we get reply from worker or we time out\".\r\n\r\n\r\n> src/include/replication/logical.h\r\n> \r\n> 6.\r\n> + /*\r\n> + * The minimum delay, in milliseconds, by the publisher before sending\r\n> + * COMMIT/PREPARE record\r\n> + */\r\n> + int32 min_send_delay;\r\n> \r\n> The comment is missing a period.\r\n\r\nRight, added.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 22 Feb 2023 04:48:20 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 1:28 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > doc/src/sgml/catalogs.sgml\n> >\n> > 4.\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>subminsenddelay</structfield> <type>int4</type>\n> > + </para>\n> > + <para>\n> > + The minimum delay, in milliseconds, by the publisher to send changes\n> > + </para></entry>\n> > + </row>\n> >\n> > \"by the publisher to send changes\" --> \"by the publisher before sending changes\"\n>\n> As Amit said[1], there is a possibility to delay after sending delay. So I changed to\n> \"before sending COMMIT record\". How do you think?\n>\n\nI think it would be better to say: \"The minimum delay, in\nmilliseconds, by the publisher before sending all the changes\". If you\nagree then similar change is required in below comment as well:\n+ /*\n+ * The minimum delay, in milliseconds, by the publisher before sending\n+ * COMMIT/PREPARE record.\n+ */\n+ int32 min_send_delay;\n+\n\n>\n> > src/backend/replication/pgoutput/pgoutput.c\n> >\n> > 11.\n> > + errno = 0;\n> > + parsed = strtoul(strVal(defel->arg), &endptr, 10);\n> > + if (errno != 0 || *endptr != '\\0')\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"invalid min_send_delay\")));\n> > +\n> > + if (parsed > PG_INT32_MAX)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> > + errmsg(\"min_send_delay \\\"%s\\\" out of range\",\n> > + strVal(defel->arg))));\n> >\n> > Should the validation be also checking/asserting no negative numbers,\n> > or actually should the min_send_delay be defined as a uint32 in the\n> > first place?\n>\n> I think you are right because min_apply_delay does not have related code.\n> we must consider additional possibility that user may send START_REPLICATION\n> by hand and it has minus value.\n> Fixed.\n>\n\nYour reasoning for adding the additional check seems good to me but 
I\ndon't see it in the patch. The check as I see it is as below:\n+ if (delay_val > PG_INT32_MAX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"min_send_delay \\\"%s\\\" out of range\",\n+ strVal(defel->arg))));\n\nAm I missing something, or is the new check at some other place?\n\n+ has been finished. However, there is a possibility that the table\n+ status written in <link\nlinkend=\"catalog-pg-subscription-rel\"><structname>pg_subscription_rel</structname></link>\n+ will be delayed in getting to \"ready\" state, and also two-phase\n+ (if specified) will be delayed in getting to \"enabled\".\n+ </para>\n\nThere appears to be a special value <0x00> after \"ready\". I think that\nis added by mistake or probably you have used some editor which has\nadded this value. Can we slightly reword this to: \"However, there is a\npossibility that the table status updated in <link\nlinkend=\"catalog-pg-subscription-rel\"><structname>pg_subscription_rel</structname></link>\ncould be delayed in getting to the \"ready\" state, and also two-phase\n(if specified) could be delayed in getting to \"enabled\".\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Feb 2023 17:25:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> I think it would be better to say: \"The minimum delay, in\r\n> milliseconds, by the publisher before sending all the changes\". If you\r\n> agree then similar change is required in below comment as well:\r\n> + /*\r\n> + * The minimum delay, in milliseconds, by the publisher before sending\r\n> + * COMMIT/PREPARE record.\r\n> + */\r\n> + int32 min_send_delay;\r\n\r\nOK, both of them were fixed.\r\n\r\n> > > Should the validation be also checking/asserting no negative numbers,\r\n> > > or actually should the min_send_delay be defined as a uint32 in the\r\n> > > first place?\r\n> >\r\n> > I think you are right because min_apply_delay does not have related code.\r\n> > we must consider additional possibility that user may send\r\n> START_REPLICATION\r\n> > by hand and it has minus value.\r\n> > Fixed.\r\n> >\r\n> \r\n> Your reasoning for adding the additional check seems good to me but I\r\n> don't see it in the patch. The check as I see is as below:\r\n> + if (delay_val > PG_INT32_MAX)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"min_send_delay \\\"%s\\\" out of range\",\r\n> + strVal(defel->arg))));\r\n> \r\n> Am, I missing something, and the new check is at some other place?\r\n\r\nFor extracting value from the string, strtoul() is used.\r\nThis is an important point.\r\n\r\n```\r\n\t\t\tdelay_val = strtoul(strVal(defel->arg), &endptr, 10);\r\n```\r\n\r\nIf user specifies min_send_delay as '-1', the value is read as a bit string\r\n'0xFFFFFFFFFFFFFFFF', and it is interpreted as PG_UINT64_MAX. After that such a\r\nstrange value is rejected by the part you copied. 
I have tested the case and it was\r\ncorrectly rejected.\r\n\r\n```\r\npostgres=# START_REPLICATION SLOT \"sub\" LOGICAL 0/0 (min_send_delay '-1');\r\nERROR: min_send_delay \"-1\" out of range\r\nCONTEXT: slot \"sub\", output plugin \"pgoutput\", in the startup callback\r\n```\r\n\r\n> + has been finished. However, there is a possibility that the table\r\n> + status written in <link\r\n> linkend=\"catalog-pg-subscription-rel\"><structname>pg_subscription_rel</stru\r\n> ctname></link>\r\n> + will be delayed in getting to \"ready\" state, and also two-phase\r\n> + (if specified) will be delayed in getting to \"enabled\".\r\n> + </para>\r\n> \r\n> There appears to be a special value <0x00> after \"ready\". I think that\r\n> is added by mistake or probably you have used some editor which has\r\n> added this value. Can we slightly reword this to: \"However, there is a\r\n> possibility that the table status updated in <link\r\n> linkend=\"catalog-pg-subscription-rel\"><structname>pg_subscription_rel</stru\r\n> ctname></link>\r\n> could be delayed in getting to the \"ready\" state, and also two-phase\r\n> (if specified) could be delayed in getting to \"enabled\".\"?\r\n\r\nOh, my Visual Studio Code did not detect the strange character.\r\nI have reworded it accordingly.\r\n\r\nAdditionally, I modified the commit message to describe more clearly the reason\r\nwhy we do not allow the combination of min_send_delay and streaming = parallel.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 22 Feb 2023 13:47:44 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Patch v6 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 23 Feb 2023 12:19:38 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Feb 22, 2023 9:48 PM Kuroda, Hayato/黒田 隼人 <kuroda.hayato@fujitsu.com> wrote:\r\n> \r\n> Thank you for reviewing! PSA new version.\r\n> \r\n\r\nThanks for your patch. Here is a comment.\r\n\r\n+\t\telog(DEBUG2, \"time-delayed replication for txid %u, delay_time = %d ms, remaining wait time: %ld ms\",\r\n+\t\t\t ctx->write_xid, (int) ctx->min_send_delay,\r\n+\t\t\t remaining_wait_time_ms);\r\n\r\nI tried this and saw that the xid here looks wrong; what it got is the xid of the\r\nprevious transaction. It seems `ctx->write_xid` has not been updated and we\r\ncan't use it.\r\n\r\nRegards,\r\nShi Yu\r\n",
"msg_date": "Thu, 23 Feb 2023 08:55:27 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Shi,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> +\t\telog(DEBUG2, \"time-delayed replication for txid %u, delay_time\r\n> = %d ms, remaining wait time: %ld ms\",\r\n> +\t\t\t ctx->write_xid, (int) ctx->min_send_delay,\r\n> +\t\t\t remaining_wait_time_ms);\r\n> \r\n> I tried and saw that the xid here looks wrong, what it got is the xid of the\r\n> previous transaction. It seems `ctx->write_xid` has not been updated and we\r\n> can't use it.\r\n>\r\n\r\nGood catch. There are several approaches to fix that; I chose the simplest one.\r\nA TransactionId argument was added to the functions.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 23 Feb 2023 12:09:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 9:10 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Shi,\n>\n> Thank you for reviewing! PSA new version.\n>\n> > + elog(DEBUG2, \"time-delayed replication for txid %u, delay_time\n> > = %d ms, remaining wait time: %ld ms\",\n> > + ctx->write_xid, (int) ctx->min_send_delay,\n> > + remaining_wait_time_ms);\n> >\n> > I tried and saw that the xid here looks wrong, what it got is the xid of the\n> > previous transaction. It seems `ctx->write_xid` has not been updated and we\n> > can't use it.\n> >\n>\n> Good catch. There are several approaches to fix that, I choose the simplest way.\n> TransactionId was added as an argument of functions.\n>\n\nThank you for updating the patch. Here are some comments on v7 patch:\n\n+ *\n+ * LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM is the minimum protocol version\n+ * with support for delaying to send transactions. Introduced in PG16.\n */\n #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n #define LOGICALREP_PROTO_VERSION_NUM 1\n #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n #define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n-#define LOGICALREP_PROTO_MAX_VERSION_NUM\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n+#define LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM 4\n+#define LOGICALREP_PROTO_MAX_VERSION_NUM\nLOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM\n\nWhat is the usecase of the old macro,\nLOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, after adding\nLOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM ? I think if we go this\nway, we will end up adding macros every time when adding a new option,\nwhich seems not a good idea. I'm really not sure we need to change the\nprotocol version or the macro. 
Commit\n366283961ac0ed6d89014444c6090f3fd02fce0a introduced the 'origin'\nsubscription parameter that is also sent to the publisher, but we\ndidn't touch the protocol version at all.\n\n---\nWhy do we not to delay sending COMMIT PREPARED messages?\n\n---\n+ /*\n+ * If we've requested to shut down, exit the process.\n+ *\n+ * Note that WalSndDone() cannot be used here because\nthe delaying\n+ * changes will be sent in the function.\n+ */\n+ if (got_STOPPING)\n+ WalSndShutdown();\n\nSince the walsender exits without sending the done message at a server\nshutdown, we get the following log message on the subscriber:\n\nERROR: could not receive data from WAL stream: server closed the\nconnection unexpectedly\n\nI think that since the walsender is just waiting for sending data, it\ncan send the done message if the socket is writable.\n\n---\n+ delayUntil = TimestampTzPlusMilliseconds(delay_start,\nctx->min_send_delay);\n+ remaining_wait_time_ms =\nTimestampDifferenceMilliseconds(GetCurrentTimestamp(), delayUntil);\n+\n(snip)\n+\n+ /* Sleep until appropriate time. */\n+ timeout_sleeptime_ms =\nWalSndComputeSleeptime(GetCurrentTimestamp());\n\nI think it's better to call GetCurrentTimestamp() only once.\n\n---\n+# This test is successful only if at least the configured delay has elapsed.\n+ok( time() - $publisher_insert_time >= $delay,\n+ \"subscriber applies WAL only after replication delay for\nnon-streaming transaction\"\n+);\n\nThe subscriber doesn't actually apply WAL records, but logically\nreplicated changes. How about \"subscriber applies changes only\nafter...\"?\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 14:40:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 5:40 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thank you for reviewing! PSA new version.\n>\n\nI was trying to think if there is any better way to implement the\nnewly added callback (WalSndDelay()) but couldn't find any. For\nexample, one idea I tried to evaluate is whether we can merge it with\nthe existing callback WalSndUpdateProgress() or maybe extract the part\nother than progress tracking from that function into a new callback\nand then try to reuse it here as well. Though there is some common\nfunctionality like checking for timeout and processing replies still\nthey are different enough that they seem to need separate callbacks.\nThe prime purpose of a callback for the patch being discussed here is\nto delay the xact before sending the commit/prepare whereas the\nexisting callback (WalSndUpdateProgress()) or what we are discussing\nat [1] allows sending the keepalive message in some special cases\nwhere there is no communication between walsender and walreceiver.\nNow, the WalSndDelay() also tries to check for timeout and send\nkeepalive if necessary but there is also dependency on the delay\nparameter, so don't think it is a good idea of trying to combine those\nfunctionalities into one API.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/20230210210423.r26ndnfmuifie4f6%40awork3.anarazel.de\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 11:40:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 11:11 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Feb 23, 2023 at 9:10 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thank you for reviewing! PSA new version.\n> >\n> >\n>\n> Thank you for updating the patch. Here are some comments on v7 patch:\n>\n> + *\n> + * LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM is the minimum protocol version\n> + * with support for delaying to send transactions. Introduced in PG16.\n> */\n> #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n> #define LOGICALREP_PROTO_VERSION_NUM 1\n> #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n> #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n> #define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n> -#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n> +#define LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM 4\n> +#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM\n>\n> What is the usecase of the old macro,\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, after adding\n> LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM ? I think if we go this\n> way, we will end up adding macros every time when adding a new option,\n> which seems not a good idea. I'm really not sure we need to change the\n> protocol version or the macro. Commit\n> 366283961ac0ed6d89014444c6090f3fd02fce0a introduced the 'origin'\n> subscription parameter that is also sent to the publisher, but we\n> didn't touch the protocol version at all.\n>\n\nRight, I also don't see a reason to do anything for this. 
We have\npreviously bumped the protocol version when we send extra/additional\ninformation from walsender but here that is not the requirement, so\nthis change doesn't seem to be required.\n\n> ---\n> Why do we not to delay sending COMMIT PREPARED messages?\n>\n\nI think we need to either add delay for prepare or commit prepared as\notherwise, it will lead to delaying the xact more than required. The\npatch seems to add a delay before sending a PREPARE as that is the\ntime when the subscriber will apply the changes.\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 12:03:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> I was trying to think if there is any better way to implement the\r\n> newly added callback (WalSndDelay()) but couldn't find any. For\r\n> example, one idea I tried to evaluate is whether we can merge it with\r\n> the existing callback WalSndUpdateProgress() or maybe extract the part\r\n> other than progress tracking from that function into a new callback\r\n> and then try to reuse it here as well. Though there is some common\r\n> functionality like checking for timeout and processing replies still\r\n> they are different enough that they seem to need separate callbacks.\r\n> The prime purpose of a callback for the patch being discussed here is\r\n> to delay the xact before sending the commit/prepare whereas the\r\n> existing callback (WalSndUpdateProgress()) or what we are discussing\r\n> at [1] allows sending the keepalive message in some special cases\r\n> where there is no communication between walsender and walreceiver.\r\n> Now, the WalSndDelay() also tries to check for timeout and send\r\n> keepalive if necessary but there is also dependency on the delay\r\n> parameter, so don't think it is a good idea of trying to combine those\r\n> functionalities into one API.\r\n> \r\n> Thoughts?\r\n> \r\n> [1] -\r\n> https://www.postgresql.org/message-id/20230210210423.r26ndnfmuifie4f6%40\r\n> awork3.anarazel.de\r\n\r\nThank you for confirming. My understanding was that we should keep the current design.\r\nI agree with your reasoning.\r\n\r\nIn the current callback and the modified version in [1], sending keepalives is done\r\nvia ProcessPendingWrites(). It is called from many places and should not be changed\r\njust to add an end_time for our purposes. Moreover, the name is not suitable because\r\ntime-delayed logical replication does not wait until the send buffer becomes empty.\r\n\r\nIf we reconstruct WalSndUpdateProgress() and change the mechanisms around it,\r\nthe code will become messy. 
As Amit said, in one path, the lag will be tracked and\r\nthe walsender will wait until the buffer is empty.\r\nIn another path, the lag calculation will be ignored, and the walsender will wait\r\nuntil the given delay period has elapsed. Such a function would be painful to read later.\r\n\r\nI think callbacks that have different purposes should not be mixed.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 27 Feb 2023 07:04:07 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Feb 27, 2023 at 11:11 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Feb 23, 2023 at 9:10 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Thank you for reviewing! PSA new version.\n> > >\n> > >\n> >\n> > Thank you for updating the patch. Here are some comments on v7 patch:\n> >\n> > + *\n> > + * LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM is the minimum protocol version\n> > + * with support for delaying to send transactions. Introduced in PG16.\n> > */\n> > #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\n> > #define LOGICALREP_PROTO_VERSION_NUM 1\n> > #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\n> > #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\n> > #define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\n> > -#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\n> > +#define LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM 4\n> > +#define LOGICALREP_PROTO_MAX_VERSION_NUM\n> > LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM\n> >\n> > What is the usecase of the old macro,\n> > LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, after adding\n> > LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM ? I think if we go this\n> > way, we will end up adding macros every time when adding a new option,\n> > which seems not a good idea. I'm really not sure we need to change the\n> > protocol version or the macro. Commit\n> > 366283961ac0ed6d89014444c6090f3fd02fce0a introduced the 'origin'\n> > subscription parameter that is also sent to the publisher, but we\n> > didn't touch the protocol version at all.\n> >\n>\n> Right, I also don't see a reason to do anything for this. 
We have\n> previously bumped the protocol version when we send extra/additional\n> information from walsender but here that is not the requirement, so\n> this change doesn't seem to be required.\n>\n> > ---\n> > Why do we not to delay sending COMMIT PREPARED messages?\n> >\n>\n> I think we need to either add delay for prepare or commit prepared as\n> otherwise, it will lead to delaying the xact more than required.\n\nAgreed.\n\n> The\n> patch seems to add a delay before sending a PREPARE as that is the\n> time when the subscriber will apply the changes.\n\nConsidering the purpose of this feature mentioned in the commit\nmessage \"particularly to fix errors that might cause data loss\",\nwould delaying sending PREPARE really help in that situation? For\nexample, even after (mistakenly) executing PREPARE for a transaction\nexecuting DELETE without a WHERE clause on the publisher, the user can\nstill roll back the transaction. They don't lose data on both nodes yet.\nAfter executing (and replicating) COMMIT PREPARED for that\ntransaction, they lose the data on both nodes. IIUC the time-delayed\nlogical replication should help this situation by delaying sending\nCOMMIT PREPARED so that, for example, the user can stop logical\nreplication before the COMMIT PREPARED message arrives at the subscriber.\nSo I think we should delay sending COMMIT PREPARED (and COMMIT)\ninstead of PREPARE. This would help users to correct data loss errors,\nand would be more consistent with what recovery_min_apply_delay does.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 17:20:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Sawada-san, Amit,\r\n\r\nThank you for reviewing!\r\n\r\n> + *\r\n> + * LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM is the minimum\r\n> protocol version\r\n> + * with support for delaying to send transactions. Introduced in PG16.\r\n> */\r\n> #define LOGICALREP_PROTO_MIN_VERSION_NUM 1\r\n> #define LOGICALREP_PROTO_VERSION_NUM 1\r\n> #define LOGICALREP_PROTO_STREAM_VERSION_NUM 2\r\n> #define LOGICALREP_PROTO_TWOPHASE_VERSION_NUM 3\r\n> #define LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM 4\r\n> -#define LOGICALREP_PROTO_MAX_VERSION_NUM\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM\r\n> +#define LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM 4\r\n> +#define LOGICALREP_PROTO_MAX_VERSION_NUM\r\n> LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM\r\n> \r\n> What is the usecase of the old macro,\r\n> LOGICALREP_PROTO_STREAM_PARALLEL_VERSION_NUM, after adding\r\n> LOGICALREP_PROTO_MIN_SEND_DELAY_VERSION_NUM ? I think if we go this\r\n> way, we will end up adding macros every time when adding a new option,\r\n> which seems not a good idea. I'm really not sure we need to change the\r\n> protocol version or the macro. Commit\r\n> 366283961ac0ed6d89014444c6090f3fd02fce0a introduced the 'origin'\r\n> subscription parameter that is also sent to the publisher, but we\r\n> didn't touch the protocol version at all.\r\n\r\nI removed the protocol number.\r\n\r\nI checked the previous discussion[1]. According to it, the protocol version must\r\nbe modified when a new message is added or existing messages are changed.\r\nThis patch intentionally makes walsenders delay sending data, and at that time no\r\nextra information is added. 
Therefore I think it is not needed.\r\n\r\n> ---\r\n> Why do we not to delay sending COMMIT PREPARED messages?\r\n\r\nThis is motivated by the comment[2] but I preferred your opinion[3].\r\nNow COMMIT PREPARED is delayed instead of the PREPARE message.\r\n\r\n> ---\r\n> + /*\r\n> + * If we've requested to shut down, exit the process.\r\n> + *\r\n> + * Note that WalSndDone() cannot be used here because\r\n> the delaying\r\n> + * changes will be sent in the function.\r\n> + */\r\n> + if (got_STOPPING)\r\n> + WalSndShutdown();\r\n> \r\n> Since the walsender exits without sending the done message at a server\r\n> shutdown, we get the following log message on the subscriber:\r\n> \r\n> ERROR: could not receive data from WAL stream: server closed the\r\n> connection unexpectedly\r\n> \r\n> I think that since the walsender is just waiting for sending data, it\r\n> can send the done message if the socket is writable.\r\n\r\nYou are right. I was confused by the previous implementation, in which workers could not\r\naccept any messages. I made walsenders send the end-command message directly.\r\nIs this what you expected?\r\n\r\n> ---\r\n> + delayUntil = TimestampTzPlusMilliseconds(delay_start,\r\n> ctx->min_send_delay);\r\n> + remaining_wait_time_ms =\r\n> TimestampDifferenceMilliseconds(GetCurrentTimestamp(), delayUntil);\r\n> +\r\n> (snip)\r\n> +\r\n> + /* Sleep until appropriate time. */\r\n> + timeout_sleeptime_ms =\r\n> WalSndComputeSleeptime(GetCurrentTimestamp());\r\n> \r\n> I think it's better to call GetCurrentTimestamp() only once.\r\n\r\nRight, fixed.\r\n\r\n> ---\r\n> +# This test is successful only if at least the configured delay has elapsed.\r\n> +ok( time() - $publisher_insert_time >= $delay,\r\n> + \"subscriber applies WAL only after replication delay for\r\n> non-streaming transaction\"\r\n> +);\r\n> \r\n> The subscriber doesn't actually apply WAL records, but logically\r\n> replicated changes. 
How about \"subscriber applies changes only\r\n> after...\"?\r\n\r\nI grepped other tests, and I could not find the same usage of the word \"WAL\".\r\nSo I fixed it as you suggested.\r\n\r\nIn the next version I will use a grammar checker like ChatGPT to polish the commit messages...\r\n\r\n[1]: https://www.postgresql.org/message-id/CAA4eK1LjOm6-OHggYVH35dQ_v40jOXrJW0GFy3kuwTd2J48%3DUg%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/CAA4eK1K4uPbudrNdH%2B%3D_vN-Hpe9wYh%3D3vBS5Ww9dHn-LOWMV0g%40mail.gmail.com\r\n[3]: https://www.postgresql.org/message-id/CAD21AoA0mPq_m6USfAC8DAkvFfwjqGvGq++Uv=avryYotvq98A@mail.gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 27 Feb 2023 08:51:29 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 1:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Feb 27, 2023 at 3:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > ---\n> > > Why do we not to delay sending COMMIT PREPARED messages?\n> > >\n> >\n> > I think we need to either add delay for prepare or commit prepared as\n> > otherwise, it will lead to delaying the xact more than required.\n>\n> Agreed.\n>\n> > The\n> > patch seems to add a delay before sending a PREPARE as that is the\n> > time when the subscriber will apply the changes.\n>\n> Considering the purpose of this feature mentioned in the commit\n> message \"particularly to fix errors that might cause data loss\",\n> delaying sending PREPARE would really help that situation? For\n> example, even after (mistakenly) executing PREPARE for a transaction\n> executing DELETE without WHERE clause on the publisher the user still\n> can rollback the transaction. They don't lose data on both nodes yet.\n> After executing (and replicating) COMMIT PREPARED for that\n> transaction, they lose the data on both nodes. IIUC the time-delayed\n> logical replication should help this situation by delaying sending\n> COMMIT PREPARED so that, for example, the user can stop logical\n> replication before COMMIT PREPARED message arrives to the subscriber.\n> So I think we should delay sending COMMIT PREPARED (and COMMIT)\n> instead of PREPARE. This would help users to correct data loss errors,\n> and would be more consistent with what recovery_min_apply_delay does.\n>\n\nThe one difference w.r.t recovery_min_apply_delay is that here we will\nhold locks for the duration of the delay which didn't seem to be a\ngood idea. This will also probably lead to more bloat as we will keep\ntransactions open for a long time. Doing it before DecodePrepare won't\nhave such problems. 
This is the reason that we decided to perform the\ndelay at the start of the transaction instead of at commit/prepare in the\nsubscriber-side approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Feb 2023 14:56:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Mon, 27 Feb 2023 14:56:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> The one difference w.r.t recovery_min_apply_delay is that here we will\n> hold locks for the duration of the delay which didn't seem to be a\n> good idea. This will also probably lead to more bloat as we will keep\n> transactions open for a long time. Doing it before DecodePrepare won't\n\nI don't have a concrete picture but could we tell reorder buffer to\nretain a PREPAREd transaction until a COMMIT PREPARED comes? If\ndelaying non-prepared transactions until COMMIT is adequate, then the\nsame thing seems to work for prepared transactions.\n\n> have such problems. This is the reason that we decide to perform a\n> delay at the start of the transaction instead at commit/prepare in the\n> subscriber-side approach.\n\nIt seems that there are no technical obstacles to do that on the\npublisher side. The only observable difference would be that\nrelatively large prepared transactions may experience noticeable\nadditional delays. IMHO I don't think it's a good practice\nprotocol-wise to intentionally choke a stream at the receiving end\nwhen it has not been flow-controlled on the transmitting end.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Feb 2023 11:44:46 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 8:14 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 27 Feb 2023 14:56:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > The one difference w.r.t recovery_min_apply_delay is that here we will\n> > hold locks for the duration of the delay which didn't seem to be a\n> > good idea. This will also probably lead to more bloat as we will keep\n> > transactions open for a long time. Doing it before DecodePrepare won't\n>\n> I don't have a concrete picture but could we tell reorder buffer to\n> retain a PREPAREd transaction until a COMMIT PREPARED comes?\n>\n\nYeah, we could do that, and that is the behavior unless the\nuser enables 2PC via the 'two_phase' subscription option. But, I don't see\nthe need to unnecessarily delay the prepare till the commit if a user\nhas specified the 'two_phase' option. It is quite possible that COMMIT\nPREPARED happens at a much later time frame than the amount of delay\nthe user is expecting.\n\n> If\n> delaying non-prepared transactions until COMMIT is adequate, then the\n> same thing seems to work for prepared transactions.\n>\n> > have such problems. This is the reason that we decide to perform a\n> > delay at the start of the transaction instead at commit/prepare in the\n> > subscriber-side approach.\n>\n> It seems that there are no technical obstacles to do that on the\n> publisher side. The only observable difference would be that\n> relatively large prepared transactions may experience noticeable\n> additional delays. IMHO I don't think it's a good practice\n> protocol-wise to intentionally choke a stream at the receiving end\n> when it has not been flow-controlled on the transmitting end.\n>\n\nBut in this proposal, we are not choking/delaying anything on the receiving end.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Feb 2023 08:35:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 2:21 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n\nFew comments:\n1.\n+ /*\n+ * If we've requested to shut down, exit the process.\n+ *\n+ * Note that WalSndDone() cannot be used here because the delaying\n+ * changes will be sent in the function.\n+ */\n+ if (got_STOPPING)\n+ {\n+ QueryCompletion qc;\n+\n+ /* Inform the standby that XLOG streaming is done */\n+ SetQueryCompletion(&qc, CMDTAG_COPY, 0);\n+ EndCommand(&qc, DestRemote, false);\n+ pq_flush();\n\nDo we really need to do anything except for breaking the loop and letting\nthe exit handling happen in the main loop when 'got_STOPPING' is set?\nAFAICS, this is what we are doing in some other places (See\nWalSndWaitForWal). Won't that work? It seems that will help us send\nall the pending WAL.\n\n2.\n+ /* Try to flush pending output to the client */\n+ if (pq_flush_if_writable() != 0)\n+ WalSndShutdown();\n\nIs there a reason to try flushing here?\n\nApart from the above, I have made a few changes in the comments in the\nattached diff patch. If you agree with those then please include them\nin the next version.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 28 Feb 2023 14:05:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Few comments:\r\n\r\nThank you for reviewing! PSA new version.\r\nNote that the starting point of the delay for 2PC was not changed;\r\nI think it is still under discussion.\r\n\r\n> 1.\r\n> + /*\r\n> + * If we've requested to shut down, exit the process.\r\n> + *\r\n> + * Note that WalSndDone() cannot be used here because the delaying\r\n> + * changes will be sent in the function.\r\n> + */\r\n> + if (got_STOPPING)\r\n> + {\r\n> + QueryCompletion qc;\r\n> +\r\n> + /* Inform the standby that XLOG streaming is done */\r\n> + SetQueryCompletion(&qc, CMDTAG_COPY, 0);\r\n> + EndCommand(&qc, DestRemote, false);\r\n> + pq_flush();\r\n> \r\n> Do we really need to do anything except for breaking the loop and let\r\n> the exit handling happen in the main loop when 'got_STOPPING' is set?\r\n> AFAICS, this is what we are doing in some other palces (See\r\n> WalSndWaitForWal). Won't that work? It seems that will help us sending\r\n> all the pending WAL.\r\n\r\nIf we exit the loop after got_STOPPING is set, as you said, the walsender will\r\nsend the delayed changes and then exit. The behavior is the same as when WalSndDone()\r\nis called. But I think it is not suitable for the motivation of the feature.\r\nIf users notice a mistaken operation like TRUNCATE, they must shut down the publisher\r\nonce and then recover from a backup or from the old subscriber. If the walsender sends all\r\npending changes, the mistaken operations will also be propagated to the subscriber and the data\r\ncannot be protected. 
So for now I want to keep the current behavior.\r\nFYI - In case of physical replication, received WALs are not applied when the\r\nsecondary is shut down.\r\n\r\n> 2.\r\n> + /* Try to flush pending output to the client */\r\n> + if (pq_flush_if_writable() != 0)\r\n> + WalSndShutdown();\r\n> \r\n> Is there a reason to try flushing here?\r\n\r\nIIUC if pq_flush_if_writable() returns non-zero (EOF), it means that there is\r\nsome trouble and the walsender failed to send messages to the subscriber.\r\n\r\nOn Linux, the call chain from pq_flush_if_writable() finally reaches the send() system call.\r\nAnd according to the man page[1], an error will be triggered by some unexpected state or a closed connection.\r\n\r\nBased on the above, I think the returned value should be checked.\r\n\r\n> Apart from the above, I have made a few changes in the comments in the\r\n> attached diff patch. If you agree with those then please include them\r\n> in the next version.\r\n\r\nThanks! I checked and I thought all of them should be included.\r\n\r\nMoreover, I used a grammar checker and slightly reworded the commit message.\r\n\r\n[1]: https://man7.org/linux/man-pages/man3/send.3p.html\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 28 Feb 2023 15:51:32 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Here are some review comments for v9-0001, but these are only very trivial.\n\n======\nCommit Message\n\n1.\nNitpick. The new text is jagged-looking. It should wrap at ~80 chars.\n\n~~~\n\n2.\n2. Another reason is for that parallel streaming, the transaction will be opened\nimmediately by the parallel apply worker. Therefore, if the walsender\nis delayed in sending the final record of the transaction, the\nparallel apply worker must wait to receive it with an open\ntransaction. This would result in the locks acquired during the\ntransaction not being released until the min_send_delay has elapsed.\n\n~\n\nThe text already said there are \"two reasons\", and already this is\nnumbered as reason 2. So it doesn't need to keep saying \"Another\nreason\" here.\n\n\"Another reason is for that parallel streaming\" --> \"For parallel streaming...\"\n\n======\nsrc/backend/replication/walsender.c\n\n3. WalSndDelay\n\n+ /* die if timeout was reached */\n+ WalSndCheckTimeOut();\n\nOther nearby comments start uppercase, so this should too.\n\n======\nsrc/include/replication/walreceiver.h\n\n4. WalRcvStreamOptions\n\n@@ -187,6 +187,7 @@ typedef struct\n * prepare time */\n char *origin; /* Only publish data originating from the\n * specified origin */\n+ int32 min_send_delay; /* The minimum send delay */\n } logical;\n } proto;\n } WalRcvStreamOptions;\n\n~\n\nShould that comment mention the units are \"(ms)\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 1 Mar 2023 12:53:49 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Tue, 28 Feb 2023 08:35:11 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \r\n> On Tue, Feb 28, 2023 at 8:14 AM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> >\r\n> > At Mon, 27 Feb 2023 14:56:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\r\n> > > The one difference w.r.t recovery_min_apply_delay is that here we will\r\n> > > hold locks for the duration of the delay which didn't seem to be a\r\n> > > good idea. This will also probably lead to more bloat as we will keep\r\n> > > transactions open for a long time. Doing it before DecodePrepare won't\r\n> >\r\n> > I don't have a concrete picture but could we tell reorder buffer to\r\n> > retain a PREPAREd transaction until a COMMIT PREPARED comes?\r\n> >\r\n> \r\n> Yeah, we could do that and that is what is the behavior unless the\r\n> user enables 2PC via 'two_phase' subscription option. But, I don't see\r\n> the need to unnecessarily delay the prepare till the commit if a user\r\n> has specified 'two_phase' option. It is quite possible that COMMIT\r\n> PREPARED happens at a much later time frame than the amount of delay\r\n> the user is expecting.\r\n\r\nIt looks like the user should decide between potential long locks or\r\nextra delays, and this choice ought to be documented.\r\n\r\n> > If\r\n> > delaying non-prepared transactions until COMMIT is adequate, then the\r\n> > same thing seems to work for prepared transactions.\r\n> >\r\n> > > have such problems. This is the reason that we decide to perform a\r\n> > > delay at the start of the transaction instead at commit/prepare in the\r\n> > > subscriber-side approach.\r\n> >\r\n> > It seems that there are no technical obstacles to do that on the\r\n> > publisher side. The only observable difference would be that\r\n> > relatively large prepared transactions may experience noticeable\r\n> > additional delays. 
IMHO I don't think it's a good practice\r\n> > protocol-wise to intentionally choke a stream at the receiving end\r\n> > when it has not been flow-controlled on the transmitting end.\r\n> >\r\n> \r\n> But in this proposal, we are not choking/delaying anything on the receiving end.\r\n\r\nI didn't say that about the latest patch. I interpreted the quote of\r\nyour description as saying that the subscriber-side solution is\r\neffective in solving the long-lock problems, so I replied that it\r\ncan be solved with the publisher-side solution and that the subscriber-side\r\nsolution could cause some unwanted behavior.\r\n\r\nDo you think we have decided to go with the publisher-side solution?\r\nI'm fine if so.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Wed, 01 Mar 2023 11:36:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 12:51 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > Few comments:\n>\n> Thank you for reviewing! PSA new version.\n> Note that the starting point of delay for 2PC was not changed,\n> I think it has been under discussion.\n>\n> > 1.\n> > + /*\n> > + * If we've requested to shut down, exit the process.\n> > + *\n> > + * Note that WalSndDone() cannot be used here because the delaying\n> > + * changes will be sent in the function.\n> > + */\n> > + if (got_STOPPING)\n> > + {\n> > + QueryCompletion qc;\n> > +\n> > + /* Inform the standby that XLOG streaming is done */\n> > + SetQueryCompletion(&qc, CMDTAG_COPY, 0);\n> > + EndCommand(&qc, DestRemote, false);\n> > + pq_flush();\n> >\n> > Do we really need to do anything except for breaking the loop and let\n> > the exit handling happen in the main loop when 'got_STOPPING' is set?\n> > AFAICS, this is what we are doing in some other palces (See\n> > WalSndWaitForWal). Won't that work? It seems that will help us sending\n> > all the pending WAL.\n>\n> If we exit the loop after got_STOPPING is set, as you said, the walsender will\n> send delaying changes and then exit. The behavior is same as the case that WalSndDone()\n> is called. But I think it is not suitable for the motivation of the feature.\n> If users notice the miss operation like TRUNCATE, they must shut down the publisher\n> once and then recovery from back up or old subscriber. If the walsender sends all\n> pending changes, miss operations will be also propagated to subscriber and data\n> cannot be protected. 
So currently I want to keep the style.\n> FYI - In case of physical replication, received WALs are not applied when the\n> secondary is shutted down.\n>\n> > 2.\n> > + /* Try to flush pending output to the client */\n> > + if (pq_flush_if_writable() != 0)\n> > + WalSndShutdown();\n> >\n> > Is there a reason to try flushing here?\n>\n> IIUC if pq_flush_if_writable() returns non-zero (EOF), it means that there is a\n> trouble and walsender fails to send messages to subscriber.\n>\n> In Linux, the stuck trace from pq_flush_if_writable() will finally reach the send() system call.\n> And according to man page[1], it will be triggered by some unexpected state or the connection is closed.\n>\n> Based on above, I think the returned value should be confirmed.\n>\n> > Apart from the above, I have made a few changes in the comments in the\n> > attached diff patch. If you agree with those then please include them\n> > in the next version.\n>\n> Thanks! I checked and I thought all of them should be included.\n>\n> Moreover, I used grammar checker and slightly reworded the commit message.\n\nThinking of side effects of this feature (no matter where we delay\napplying the changes), on the publisher, vacuum cannot collect garbage\nand WAL cannot be recycled. Is that okay in the first place? The point\nis that the subscription setting affects the publisher. That is,\nmin_send_delay is specified on the subscriber but the symptoms that\ncould ultimately lead to a server crash appear on the publisher, which\nsounds dangerous to me.\n\nImagine a service or system where there is a publication server\nand it's somewhat exposed so that a user (or a subsystem) can\narbitrarily create a subscriber to replicate a subset of the data. A malicious\nuser can have the publisher crash by creating a subscription with,\nsay, min_send_delay = 20d. 
max_slot_wal_keep_size helps this situation\nbut it's -1 by default.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:48:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 8:06 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 28 Feb 2023 08:35:11 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Feb 28, 2023 at 8:14 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Mon, 27 Feb 2023 14:56:19 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > The one difference w.r.t recovery_min_apply_delay is that here we will\n> > > > hold locks for the duration of the delay which didn't seem to be a\n> > > > good idea. This will also probably lead to more bloat as we will keep\n> > > > transactions open for a long time. Doing it before DecodePrepare won't\n> > >\n> > > I don't have a concrete picture but could we tell reorder buffer to\n> > > retain a PREPAREd transaction until a COMMIT PREPARED comes?\n> > >\n> >\n> > Yeah, we could do that and that is what is the behavior unless the\n> > user enables 2PC via 'two_phase' subscription option. But, I don't see\n> > the need to unnecessarily delay the prepare till the commit if a user\n> > has specified 'two_phase' option. It is quite possible that COMMIT\n> > PREPARED happens at a much later time frame than the amount of delay\n> > the user is expecting.\n>\n> It looks like the user should decide between potential long locks or\n> extra delays, and this choice ought to be documented.\n>\n\nSure, we can do that. However, I think the way this feature works is\nthat we keep standby/subscriber behind the primary/publisher by a\ncertain time period and if there is any unwanted transaction (say\nDelete * .. without where clause), we can recover it from the receiver\nside. So, it may not matter much even if we wait at PREPARE to avoid\nlong locks instead of documenting it.\n\n> > > If\n> > > delaying non-prepared transactions until COMMIT is adequate, then the\n> > > same thing seems to work for prepared transactions.\n> > >\n> > > > have such problems. 
This is the reason that we decide to perform a\n> > > > delay at the start of the transaction instead at commit/prepare in the\n> > > > subscriber-side approach.\n> > >\n> > > It seems that there are no technical obstacles to do that on the\n> > > publisher side. The only observable difference would be that\n> > > relatively large prepared transactions may experience noticeable\n> > > additional delays. IMHO I don't think it's a good practice\n> > > protocol-wise to intentionally choke a stream at the receiving end\n> > > when it has not been flow-controlled on the transmitting end.\n> > >\n> >\n> > But in this proposal, we are not choking/delaying anything on the receiving end.\n>\n> I didn't say that to the latest patch. I interpreted the quote of\n> your description as saying that the subscriber-side solution is\n> effective in solving the long-lock problems, so I replied that that\n> can be solved with the publisher-side solution and the subscriber-side\n> solution could cause some unwanted behavior.\n>\n> Do you think we have decided to go with the publisher-side solution?\n> I'm fine if so.\n>\n\nI am fine too unless we discover any major challenges with\npublisher-side implementation.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 09:28:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 8:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 1, 2023 at 12:51 AM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n>\n> Thinking of side effects of this feature (no matter where we delay\n> applying the changes), on the publisher, vacuum cannot collect garbage\n> and WAL cannot be recycled. Is that okay in the first place? The point\n> is that the subscription setting affects the publisher. That is,\n> min_send_delay is specified on the subscriber but the symptoms that\n> could ultimately lead to a server crash appear on the publisher, which\n> sounds dangerous to me.\n>\n> Imagine a service or system like where there is a publication server\n> and it's somewhat exposed so that a user (or a subsystem) arbitrarily\n> can create a subscriber to replicate a subset of the data. A malicious\n> user can have the publisher crash by creating a subscription with,\n> say, min_send_delay = 20d. max_slot_wal_keep_size helps this situation\n> but it's -1 by default.\n>\n\nBy publisher crash, do you mean that a disk-full situation can lead\nthe publisher to stop/panic? Won't a malicious user be able to block\nthe replication in other ways as well and let the publisher stall (or\ncrash) even without setting min_send_delay? Basically, one needs to\neither disable the subscription or create a constraint-violating row\nin the table to make that happen. If the system is exposed to\narbitrarily allowing the creation of a subscription, then a malicious\nuser can create a subscription similar to an existing one and block\nthe replication due to constraint violations. I don't think it would\nbe so easy to bypass the current system such that a malicious user\nwould be allowed to create/alter subscriptions arbitrarily. Similarly,\nif there is a network issue\n(unreachable or slow), one will see similar symptoms. 
I think retention of data and WAL on the publisher\ndoes rely on acknowledgment from subscribers, and a delay in that for\nany reason can lead to the symptoms you describe above. We have\ndocumented at least one such case already where, during Drop\nSubscription, a similar problem can happen if the network is not\nreachable, and users need to be careful about it [1].\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-subscription.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 10:24:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 9:21 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > 1.\n> > + /*\n> > + * If we've requested to shut down, exit the process.\n> > + *\n> > + * Note that WalSndDone() cannot be used here because the delaying\n> > + * changes will be sent in the function.\n> > + */\n> > + if (got_STOPPING)\n> > + {\n> > + QueryCompletion qc;\n> > +\n> > + /* Inform the standby that XLOG streaming is done */\n> > + SetQueryCompletion(&qc, CMDTAG_COPY, 0);\n> > + EndCommand(&qc, DestRemote, false);\n> > + pq_flush();\n> >\n> > Do we really need to do anything except for breaking the loop and let\n> > the exit handling happen in the main loop when 'got_STOPPING' is set?\n> > AFAICS, this is what we are doing in some other palces (See\n> > WalSndWaitForWal). Won't that work? It seems that will help us sending\n> > all the pending WAL.\n>\n> If we exit the loop after got_STOPPING is set, as you said, the walsender will\n> send delaying changes and then exit. The behavior is same as the case that WalSndDone()\n> is called. But I think it is not suitable for the motivation of the feature.\n> If users notice the miss operation like TRUNCATE, they must shut down the publisher\n> once and then recovery from back up or old subscriber. If the walsender sends all\n> pending changes, miss operations will be also propagated to subscriber and data\n> cannot be protected. So currently I want to keep the style.\n> FYI - In case of physical replication, received WALs are not applied when the\n> secondary is shutted down.\n>\n\nFair point but I think the current comment should explain why we are\ndoing something different here. How about extending the existing\ncomments to something like: \"If we've requested to shut down, exit the\nprocess. This is unlike handling at other places where we allow\ncomplete WAL to be sent before shutdown because we don't want the\ndelayed transactions to be applied downstream. 
This will allow one to\nuse the data from downstream in case of some unwanted operations on\nthe current node.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 10:44:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 1, 2023 at 8:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Mar 1, 2023 at 12:51 AM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thinking of side effects of this feature (no matter where we delay\n> > applying the changes), on the publisher, vacuum cannot collect garbage\n> > and WAL cannot be recycled. Is that okay in the first place? The point\n> > is that the subscription setting affects the publisher. That is,\n> > min_send_delay is specified on the subscriber but the symptoms that\n> > could ultimately lead to a server crash appear on the publisher, which\n> > sounds dangerous to me.\n> >\n> > Imagine a service or system like where there is a publication server\n> > and it's somewhat exposed so that a user (or a subsystem) arbitrarily\n> > can create a subscriber to replicate a subset of the data. A malicious\n> > user can have the publisher crash by creating a subscription with,\n> > say, min_send_delay = 20d. max_slot_wal_keep_size helps this situation\n> > but it's -1 by default.\n> >\n>\n> By publisher crash, do you mean due to the disk full situation, it can\n> lead the publisher to stop/panic?\n\nExactly.\n\n> Won't a malicious user can block the\n> replication in other ways as well and let the publisher stall (or\n> crash the publisher) even without setting min_send_delay? Basically,\n> one needs to either disable the subscription or create a\n> constraint-violating row in the table to make that happen. If the\n> system is exposed for arbitrarily allowing the creation of a\n> subscription then a malicious user can create a subscription similar\n> to one existing subscription and block the replication due to\n> constraint violations. I don't think it would be so easy to bypass the\n> current system that a malicious user will be allowed to create/alter\n> subscriptions arbitrarily.\n\nRight. 
But a difference is that with min_send_delay, all it takes is\ncreating a subscription.\n\n> Similarly, if there is a network issue\n> (unreachable or slow), one will see similar symptoms. I think\n> retention of data and WAL on publisher do rely on acknowledgment from\n> subscribers and delay in that due to any reason can lead to the\n> symptoms you describe above.\n\nI think that piling up WAL files due to a slow network is a different\nstory since it's a problem not only on the subscriber side.\n\n> We have documented at least one such case\n> already where during Drop Subscription, if the network is not\n> reachable then also, a similar problem can happen and users need to be\n> careful about it [1].\n\nApart from the bad-use-case example I mentioned, in general, piling up\nWAL files due to the replication slot has many bad effects on the\nsystem. I'm concerned that the side effect of this feature (at least\nof the current design) is too huge compared to the benefit, and I'm\nafraid that users might end up using this feature without\nunderstanding the side effect well. It might be okay if we thoroughly\ndocument it but I'm not sure.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 14:26:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 10:57 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 1, 2023 at 1:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > Won't a malicious user can block the\n> > replication in other ways as well and let the publisher stall (or\n> > crash the publisher) even without setting min_send_delay? Basically,\n> > one needs to either disable the subscription or create a\n> > constraint-violating row in the table to make that happen. If the\n> > system is exposed for arbitrarily allowing the creation of a\n> > subscription then a malicious user can create a subscription similar\n> > to one existing subscription and block the replication due to\n> > constraint violations. I don't think it would be so easy to bypass the\n> > current system that a malicious user will be allowed to create/alter\n> > subscriptions arbitrarily.\n>\n> Right. But a difference is that with min_send_delay, it's just to\n> create a subscription.\n>\n\nBut, currently, only superusers would be allowed to create\nsubscriptions. Even if we change that and allow it based on some\npre-defined role, it still won't be possible to create subscriptions\narbitrarily. So, I am not sure any malicious user can easily bypass it\nas you are envisioning.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Mar 2023 11:35:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\nThank you for giving your consideration!\r\n\r\n> > We have documented at least one such case\r\n> > already where during Drop Subscription, if the network is not\r\n> > reachable then also, a similar problem can happen and users need to be\r\n> > careful about it [1].\r\n> \r\n> Apart from a bad-use case example I mentioned, in general, piling up\r\n> WAL files due to the replication slot has many bad effects on the\r\n> system. I'm concerned that the side effect of this feature (at least\r\n> of the current design) is too huge compared to the benefit, and afraid\r\n> that users might end up using this feature without understanding the\r\n> side effect well. It might be okay if we thoroughly document it but\r\n> I'm not sure.\r\n\r\nOne approach is to forcibly change max_slot_wal_keep_size when min_send_delay\r\nis set. But it may lead to the slot being invalidated because WALs needed by the\r\ntime-delayed replication may also be removed. We cannot set just the right value\r\nourselves because it depends heavily on min_send_delay and the workload.\r\n\r\nHow about throwing a WARNING when min_send_delay > 0 but\r\nmax_slot_wal_keep_size < 0? Differing from the previous version, the subscription\r\nparameter min_send_delay will be sent to the publisher. Therefore, we can compare\r\nmin_send_delay and max_slot_wal_keep_size when the publisher receives the parameter.\r\n\r\nOf course we can reject such a setup by using ereport(ERROR), but it may leave an\r\nabandoned replication slot. This is because we send the parameter at START_REPLICATION\r\nand the slot has already been created.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 1 Mar 2023 09:21:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Tue, 28 Feb 2023 at 21:21, Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > Few comments:\n>\n> Thank you for reviewing! PSA new version.\n\nThanks for the updated patch, few comments:\n1) Currently we have added the delay during the decode of commit,\nwhile decoding the commit walsender process will stop decoding any\nfurther transaction until delay is completed. There might be a\npossibility that a lot of transactions will happen in parallel and\nthere will be a lot of transactions to be decoded after the delay is\ncompleted.\nWill it be possible to decode the WAL if any WAL is generated instead\nof staying idle in the meantime, I'm not sure if this is feasible just\nthrowing my thought to see if it might be possible.\n--- a/src/backend/replication/logical/decode.c\n+++ b/src/backend/replication/logical/decode.c\n@@ -676,6 +676,15 @@ DecodeCommit(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n\nbuf->origptr, buf->endptr);\n }\n\n+ /*\n+ * Delay sending the changes if required. For streaming transactions,\n+ * this means a delay in sending the last stream but that is OK\n+ * because on the downstream the changes will be applied only after\n+ * receiving the last stream.\n+ */\n+ if (ctx->min_send_delay > 0 && ctx->delay_send)\n+ ctx->delay_send(ctx, xid, commit_time);\n+\n\n2) Generally single line comments are not terminated by \".\", The\ncomment \"/* Sleep until appropriate time. */\" can be changed\nappropriately:\n+\n+ /* Sleep until appropriate time. 
*/\n+ timeout_sleeptime_ms = WalSndComputeSleeptime(now);\n+\n+ elog(DEBUG2, \"time-delayed replication for txid %u,\ndelay_time = %d ms, remaining wait time: %ld ms\",\n+ xid, (int) ctx->min_send_delay,\nremaining_wait_time_ms);\n+\n+ /* Sleep until we get reply from worker or we time out */\n+ WalSndWait(WL_SOCKET_READABLE,\n\n3) In some places we mention as min_send_delay and in some places we\nmention it as time-delayed replication, we can keep the comment\nconsistent by using the similar wordings.\n+-- fail - specifying streaming = parallel with time-delayed replication is not\n+-- supported\n+CREATE SUBSCRIPTION regress_testsub CONNECTION\n'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\nfalse, streaming = parallel, min_send_delay = 123);\n\n+-- fail - alter subscription with streaming = parallel should fail when\n+-- time-delayed replication is set\n+ALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\n\n+-- fail - alter subscription with min_send_delay should fail when\n+-- streaming = parallel is set\n\n4) Since the value is stored in ms, we need not add ms again as the\ndefault value is in ms:\n@@ -4686,6 +4694,9 @@ dumpSubscription(Archive *fout, const\nSubscriptionInfo *subinfo)\n if (strcmp(subinfo->subsynccommit, \"off\") != 0)\n appendPQExpBuffer(query, \", synchronous_commit = %s\",\nfmtId(subinfo->subsynccommit));\n\n+ if (subinfo->subminsenddelay > 0)\n+ appendPQExpBuffer(query, \", min_send_delay = '%d ms'\",\nsubinfo->subminsenddelay);\n+\n\n5) we can use the new error reporting style:\n5.a) brackets around errcode can be removed\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid value for parameter\n\\\"%s\\\": \\\"%s\\\"\",\n+ \"min_send_delay\", input_string),\n+ hintmsg ? 
errhint(\"%s\", _(hintmsg)) : 0));\n\n5.b) Similarly here too;\n+ if (result < 0 || result > PG_INT32_MAX)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"%d ms is outside the valid\nrange for parameter \\\"%s\\\" (%d .. %d)\",\n+ result,\n+ \"min_send_delay\",\n+ 0, PG_INT32_MAX)));\n\n5.c) Similarly here too;\n+ delay_val = strtoul(strVal(defel->arg), &endptr, 10);\n+ if (errno != 0 || *endptr != '\\0')\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid\nmin_send_delay\")));\n\n\n5.d) Similarly here too;\n+ if (delay_val > PG_INT32_MAX)\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\nerrmsg(\"min_send_delay \\\"%s\\\" out of range\",\n+\nstrVal(defel->arg))));\n\n\n6) This can be changed to a single line comment:\n+ /*\n+ * Parse given string as parameter which has millisecond unit\n+ */\n+ if (!parse_int(input_string, &result, GUC_UNIT_MS, &hintmsg))\n+ ereport(ERROR,\n\n7) In the expect we have specifically mention \"for non-streaming\ntransaction\", is the behavior different for streaming transaction, if\nnot we can change the message accordingly\n+# The publisher waits for the replication to complete\n+$node_publisher->wait_for_catchup('tap_sub_renamed');\n+\n+# This test is successful only if at least the configured delay has elapsed.\n+ok( time() - $publisher_insert_time >= $delay,\n+ \"subscriber applies changes only after replication delay for\nnon-streaming transaction\"\n+);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 1 Mar 2023 21:59:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 6:21 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Sawada-san,\n>\n> Thank you for giving your consideration!\n>\n> > > We have documented at least one such case\n> > > already where during Drop Subscription, if the network is not\n> > > reachable then also, a similar problem can happen and users need to be\n> > > careful about it [1].\n> >\n> > Apart from a bad-use case example I mentioned, in general, piling up\n> > WAL files due to the replication slot has many bad effects on the\n> > system. I'm concerned that the side effect of this feature (at least\n> > of the current design) is too huge compared to the benefit, and afraid\n> > that users might end up using this feature without understanding the\n> > side effect well. It might be okay if we thoroughly document it but\n> > I'm not sure.\n>\n> One approach is that change max_slot_wal_keep_size forcibly when min_send_delay\n> is set. But it may lead to disable the slot because WALs needed by the time-delayed\n> replication may be also removed. Just the right value cannot be set by us because\n> it is quite depends on the min_send_delay and workload.\n>\n> How about throwing the WARNING when min_send_delay > 0 but\n> max_slot_wal_keep_size < 0? Differ from previous, version the subscription\n> parameter min_send_delay will be sent to publisher. Therefore, we can compare\n> min_send_delay and max_slot_wal_keep_size when publisher receives the parameter.\n\nSince max_slot_wal_keep_size can be changed by reloading the config\nfile, each walsender warns it also at that time? Not sure it's\nhelpful. I think it's a legitimate use case to set min_send_delay > 0\nand max_slot_wal_keep_size = -1, and users might not even notice the\nWARNING message.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 11:08:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 1, 2023 at 6:21 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > >\n> > > Apart from a bad-use case example I mentioned, in general, piling up\n> > > WAL files due to the replication slot has many bad effects on the\n> > > system. I'm concerned that the side effect of this feature (at least\n> > > of the current design) is too huge compared to the benefit, and afraid\n> > > that users might end up using this feature without understanding the\n> > > side effect well. It might be okay if we thoroughly document it but\n> > > I'm not sure.\n> >\n> > One approach is that change max_slot_wal_keep_size forcibly when min_send_delay\n> > is set. But it may lead to disable the slot because WALs needed by the time-delayed\n> > replication may be also removed. Just the right value cannot be set by us because\n> > it is quite depends on the min_send_delay and workload.\n> >\n> > How about throwing the WARNING when min_send_delay > 0 but\n> > max_slot_wal_keep_size < 0? Differ from previous, version the subscription\n> > parameter min_send_delay will be sent to publisher. Therefore, we can compare\n> > min_send_delay and max_slot_wal_keep_size when publisher receives the parameter.\n>\n> Since max_slot_wal_keep_size can be changed by reloading the config\n> file, each walsender warns it also at that time?\n>\n\nI think Kuroda-San wants to emit a WARNING at the time of CREATE\nSUBSCRIPTION. But it won't be possible to emit a WARNING at the time\nof ALTER SUBSCRIPTION. Also, as you say if the user later changes the\nvalue of max_slot_wal_keep_size, then even if we issue LOG/WARNING in\nwalsender, it may go unnoticed. 
If we really want to give a WARNING for\nthis then we can probably give it as soon as the user has set a\nnon-default value of min_send_delay, to indicate that this can lead to\nretaining WAL on the publisher and they should consider setting\nmax_slot_wal_keep_size.\n\nHaving said that, I think users can always tune max_slot_wal_keep_size\nand min_send_delay (as neither of these requires a restart) if they see\nany indication of unexpected WAL size growth. There could be multiple\nways to check it but I think one can refer to wal_status in\npg_replication_slots; the 'extended' value can be an indicator of this.\n\n> Not sure it's\n> helpful. I think it's a legitimate use case to set min_send_delay > 0\n> and max_slot_wal_keep_size = -1, and users might not even notice the\n> WARNING message.\n>\n\nI think it would be better to mention this in the docs along with\nthe 'min_send_delay' description. The key point is whether this would\nbe an acceptable trade-off for users who want to use this feature. I\nthink it can harm only if users use this without understanding the\ncorresponding trade-off. As we kept the default as no delay, users of\nthis feature are expected to understand the trade-off.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Mar 2023 09:37:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit, Sawada-san,\r\n\r\n> I think Kuroda-San wants to emit a WARNING at the time of CREATE\r\n> SUBSCRIPTION. But it won't be possible to emit a WARNING at the time\r\n> of ALTER SUBSCRIPTION. Also, as you say if the user later changes the\r\n> value of max_slot_wal_keep_size, then even if we issue LOG/WARNING in\r\n> walsender, it may go unnoticed. If we really want to give WARNING for\r\n> this then we can probably give it as soon as user has set non-default\r\n> value of min_send_delay to indicate that this can lead to retaining\r\n> WAL on the publisher and they should consider setting\r\n> max_slot_wal_keep_size.\r\n\r\nYeah, my motivation is to emit a WARNING at CREATE SUBSCRIPTION, but I had not\r\nnoticed that the approach does not cover ALTER SUBSCRIPTION.\r\n\r\n> Having said that, I think users can always tune max_slot_wal_keep_size\r\n> and min_send_delay (as none of these requires restart) if they see any\r\n> indication of unexpected WAL size growth. There could be multiple ways\r\n> to check it but I think one can refer wal_status in\r\n> pg_replication_slots, the extended value can be an indicator of this.\r\n\r\nYeah, min_send_delay and max_slot_wal_keep_size should be easily tunable because\r\nthe appropriate value depends on the environment and workload.\r\nHowever, pg_replication_slots.wal_status cannot show the exact amount of WAL,\r\nso it may not be suitable for tuning. I think the user can compare\r\npg_replication_slots.restart_lsn (or pg_stat_replication.sent_lsn) with\r\npg_current_wal_lsn() to calculate the amount of WAL to be delayed, like\r\n\r\n```\r\npostgres=# select pg_current_wal_lsn() - pg_replication_slots.restart_lsn as delayed from pg_replication_slots;\r\n delayed \r\n------------\r\n 1689153760\r\n(1 row)\r\n```\r\n\r\n> I think it would be better to tell about this in the docs along with\r\n> the 'min_send_delay' description. 
The key point is whether this would\r\n> be an acceptable trade-off for users who want to use this feature. I\r\n> think it can harm only if users use this without understanding the\r\n> corresponding trade-off. As we kept the default to no delay, it is\r\n> expected from users using this have an understanding of the trade-off.\r\n\r\nYes, the trade-off should be emphasized.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 2 Mar 2023 04:48:19 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> 1.\r\n> Nitpick. The new text is jagged-looking. It should wrap at ~80 chars.\r\n\r\nAddressed.\r\n\r\n> \r\n> 2.\r\n> 2. Another reason is for that parallel streaming, the transaction will be opened\r\n> immediately by the parallel apply worker. Therefore, if the walsender\r\n> is delayed in sending the final record of the transaction, the\r\n> parallel apply worker must wait to receive it with an open\r\n> transaction. This would result in the locks acquired during the\r\n> transaction not being released until the min_send_delay has elapsed.\r\n> \r\n> ~\r\n> \r\n> The text already said there are \"two reasons\", and already this is\r\n> numbered as reason 2. So it doesn't need to keep saying \"Another\r\n> reason\" here.\r\n> \r\n> \"Another reason is for that parallel streaming\" --> \"For parallel streaming...\"\r\n\r\nChanged.\r\n\r\n> ======\r\n> src/backend/replication/walsender.c\r\n> \r\n> 3. WalSndDelay\r\n> \r\n> + /* die if timeout was reached */\r\n> + WalSndCheckTimeOut();\r\n> \r\n> Other nearby comments start uppercase, so this should too.\r\n\r\nI just picked from other part and they have lowercase, but fixed.\r\n\r\n> ======\r\n> src/include/replication/walreceiver.h\r\n> \r\n> 4. WalRcvStreamOptions\r\n> \r\n> @@ -187,6 +187,7 @@ typedef struct\r\n> * prepare time */\r\n> char *origin; /* Only publish data originating from the\r\n> * specified origin */\r\n> + int32 min_send_delay; /* The minimum send delay */\r\n> } logical;\r\n> } proto;\r\n> } WalRcvStreamOptions;\r\n> \r\n> ~\r\n> \r\n> Should that comment mention the units are \"(ms)\"\r\n\r\nAdded.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 2 Mar 2023 13:25:17 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Vignesh,\r\n\r\nThank you for reviewing! New version can be available at [1].\r\n\r\n> 1) Currently we have added the delay during the decode of commit,\r\n> while decoding the commit walsender process will stop decoding any\r\n> further transaction until delay is completed. There might be a\r\n> possibility that a lot of transactions will happen in parallel and\r\n> there will be a lot of transactions to be decoded after the delay is\r\n> completed.\r\n> Will it be possible to decode the WAL if any WAL is generated instead\r\n> of staying idle in the meantime, I'm not sure if this is feasible just\r\n> throwing my thought to see if it might be possible.\r\n> --- a/src/backend/replication/logical/decode.c\r\n> +++ b/src/backend/replication/logical/decode.c\r\n> @@ -676,6 +676,15 @@ DecodeCommit(LogicalDecodingContext *ctx,\r\n> XLogRecordBuffer *buf,\r\n> \r\n> buf->origptr, buf->endptr);\r\n> }\r\n> \r\n> + /*\r\n> + * Delay sending the changes if required. For streaming transactions,\r\n> + * this means a delay in sending the last stream but that is OK\r\n> + * because on the downstream the changes will be applied only after\r\n> + * receiving the last stream.\r\n> + */\r\n> + if (ctx->min_send_delay > 0 && ctx->delay_send)\r\n> + ctx->delay_send(ctx, xid, commit_time);\r\n> +\r\n\r\nI see your point, but I think that extension can be done in future version if needed.\r\nThis is because we must change some parts and introduce some complexities.\r\n\r\nIf we have decoded but have not wanted to send changes yet, we must store them in\r\nthe memory one and skip sending. In order to do that we must add new data structure,\r\nand we must add another path in DecodeCommit, DecodePrepare not to send changes\r\nand in WalSndLoop() and other functions to send pending changes. These may not be sufficient. 
\r\n\r\nI'm now thinking aboves are not needed, we can modify later if the overhead of\r\ndecoding is quite large and we must do them very efficiently.\r\n\r\n> 2) Generally single line comments are not terminated by \".\", The\r\n> comment \"/* Sleep until appropriate time. */\" can be changed\r\n> appropriately:\r\n> +\r\n> + /* Sleep until appropriate time. */\r\n> + timeout_sleeptime_ms = WalSndComputeSleeptime(now);\r\n> +\r\n> + elog(DEBUG2, \"time-delayed replication for txid %u,\r\n> delay_time = %d ms, remaining wait time: %ld ms\",\r\n> + xid, (int) ctx->min_send_delay,\r\n> remaining_wait_time_ms);\r\n> +\r\n> + /* Sleep until we get reply from worker or we time out */\r\n> + WalSndWait(WL_SOCKET_READABLE,\r\n\r\nRight, removed.\r\n\r\n> 3) In some places we mention as min_send_delay and in some places we\r\n> mention it as time-delayed replication, we can keep the comment\r\n> consistent by using the similar wordings.\r\n> +-- fail - specifying streaming = parallel with time-delayed replication is not\r\n> +-- supported\r\n> +CREATE SUBSCRIPTION regress_testsub CONNECTION\r\n> 'dbname=regress_doesnotexist' PUBLICATION testpub WITH (connect =\r\n> false, streaming = parallel, min_send_delay = 123);\r\n> \r\n> +-- fail - alter subscription with streaming = parallel should fail when\r\n> +-- time-delayed replication is set\r\n> +ALTER SUBSCRIPTION regress_testsub SET (streaming = parallel);\r\n> \r\n> +-- fail - alter subscription with min_send_delay should fail when\r\n> +-- streaming = parallel is set\r\n\r\n\"time-delayed replication\" was removed.\r\n\r\n> 4) Since the value is stored in ms, we need not add ms again as the\r\n> default value is in ms:\r\n> @@ -4686,6 +4694,9 @@ dumpSubscription(Archive *fout, const\r\n> SubscriptionInfo *subinfo)\r\n> if (strcmp(subinfo->subsynccommit, \"off\") != 0)\r\n> appendPQExpBuffer(query, \", synchronous_commit = %s\",\r\n> fmtId(subinfo->subsynccommit));\r\n> \r\n> + if (subinfo->subminsenddelay > 0)\r\n> + 
appendPQExpBuffer(query, \", min_send_delay = '%d ms'\",\r\n> subinfo->subminsenddelay);\r\n\r\nRight, fixed.\r\n\r\n> 5) we can use the new error reporting style:\r\n> 5.a) brackets around errcode can be removed\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"invalid value for parameter\r\n> \\\"%s\\\": \\\"%s\\\"\",\r\n> + \"min_send_delay\",\r\n> input_string),\r\n> + hintmsg ? errhint(\"%s\", _(hintmsg)) : 0));\r\n> \r\n> 5.b) Similarly here too;\r\n> + if (result < 0 || result > PG_INT32_MAX)\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"%d ms is outside the valid\r\n> range for parameter \\\"%s\\\" (%d .. %d)\",\r\n> + result,\r\n> + \"min_send_delay\",\r\n> + 0, PG_INT32_MAX)));\r\n> \r\n> 5.c) Similarly here too;\r\n> + delay_val = strtoul(strVal(defel->arg), &endptr, 10);\r\n> + if (errno != 0 || *endptr != '\\0')\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> + errmsg(\"invalid\r\n> min_send_delay\")));\r\n> \r\n> \r\n> 5.d) Similarly here too;\r\n> + if (delay_val > PG_INT32_MAX)\r\n> + ereport(ERROR,\r\n> +\r\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\r\n> +\r\n> errmsg(\"min_send_delay \\\"%s\\\" out of range\",\r\n> +\r\n> strVal(defel->arg))));\r\n\r\nAll of them are fixed.\r\n\r\n> 6) This can be changed to a single line comment:\r\n> + /*\r\n> + * Parse given string as parameter which has millisecond unit\r\n> + */\r\n> + if (!parse_int(input_string, &result, GUC_UNIT_MS, &hintmsg))\r\n> + ereport(ERROR,\r\n\r\nChanged. 
I grepped ereport() in the patch and I thought there were no similar one.\r\n\r\n> 7) In the expect we have specifically mention \"for non-streaming\r\n> transaction\", is the behavior different for streaming transaction, if\r\n> not we can change the message accordingly\r\n> +# The publisher waits for the replication to complete\r\n> +$node_publisher->wait_for_catchup('tap_sub_renamed');\r\n> +\r\n> +# This test is successful only if at least the configured delay has elapsed.\r\n> +ok( time() - $publisher_insert_time >= $delay,\r\n> + \"subscriber applies changes only after replication delay for\r\n> non-streaming transaction\"\r\n> +);\r\n\r\nThere is no difference, both of normal and streamed transaction could be delayed to apply.\r\nSo removed.\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB586606CF3B585B6F8BE13A9CF5B29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 2 Mar 2023 13:27:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
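To make the walsender-side delay discussed above concrete, here is a minimal Python sketch of the wait computation: sleep only for the remainder of min_send_delay measured from the transaction's commit time, in bounded chunks so the loop can still service keepalives between naps (as the DEBUG2 message and WalSndWait in the quoted patch suggest). The function names and the chunked-sleep loop are illustrative assumptions, not the actual walsender code.

```python
import time

def remaining_wait_ms(commit_time_ms, min_send_delay_ms, now_ms):
    """Time still to wait before sending a transaction committed at
    commit_time_ms, given a configured min_send_delay (all in ms)."""
    elapsed = now_ms - commit_time_ms
    return max(min_send_delay_ms - elapsed, 0)

def delay_send(commit_time_ms, min_send_delay_ms, max_sleep_ms=1000,
               clock=lambda: time.monotonic() * 1000, sleep=time.sleep):
    """Sleep until the delay elapses, in bounded chunks so the caller's
    loop can still handle keepalives/replies between naps (a sketch of
    what the WalSndWait-based loop in the patch does)."""
    while True:
        remaining = remaining_wait_ms(commit_time_ms, min_send_delay_ms, clock())
        if remaining <= 0:
            return
        sleep(min(remaining, max_sleep_ms) / 1000.0)
```

The injectable `clock`/`sleep` parameters are only there to make the sketch easy to exercise without real waiting.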
{
    "msg_contents": "Dear Amit,\r\n\r\n> Fair point but I think the current comment should explain why we are\r\n> doing something different here. How about extending the existing\r\n> comments to something like: \"If we've requested to shut down, exit the\r\n> process. This is unlike handling at other places where we allow\r\n> complete WAL to be sent before shutdown because we don't want the\r\n> delayed transactions to be applied downstream. This will allow one to\r\n> use the data from downstream in case of some unwanted operations on\r\n> the current node.\"\r\n\r\nThank you for the suggestion. I think it is better, so I changed it.\r\nPlease see the new patch at [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB586606CF3B585B6F8BE13A9CF5B29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 2 Mar 2023 13:27:49 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
    "msg_contents": "> Yeah, min_send_delay and max_slot_wal_keep_size should be easily tunable\r\n> because\r\n> the appropriate value depends on the environment and workload.\r\n> However, pg_replication_slots cannot show the exact amount\r\n> of WAL,\r\n> so it may not be suitable for tuning. I think users can compare the values of\r\n> pg_replication_slots.restart_lsn (or pg_stat_replication.sent_lsn) and\r\n> pg_current_wal_lsn() to calculate the amount of WAL to be delayed, like\r\n> \r\n> ```\r\n> postgres=# select pg_current_wal_lsn() - pg_replication_slots.restart_lsn as\r\n> delayed from pg_replication_slots;\r\n> delayed\r\n> ------------\r\n> 1689153760\r\n> (1 row)\r\n> ```\r\n> \r\n> > I think it would be better to tell about this in the docs along with\r\n> > the 'min_send_delay' description. The key point is whether this would\r\n> > be an acceptable trade-off for users who want to use this feature. I\r\n> > think it can harm only if users use this without understanding the\r\n> > corresponding trade-off. As we kept the default to no delay, it is\r\n> > expected from users using this have an understanding of the trade-off.\r\n> \r\n> Yes, the trade-off should be emphasized.\r\n\r\nBased on this understanding, I added them to the docs in the new version of the patch.\r\nPlease see [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/flat/TYAPR01MB586606CF3B585B6F8BE13A9CF5B29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 2 Mar 2023 13:28:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
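The LSN subtraction shown in the quoted query can also be reproduced outside the server. Below is a small Python sketch (illustrative, not part of the patch) that parses the textual pg_lsn format 'hi/lo' (two hexadecimal 32-bit halves) into a 64-bit byte position and computes the retained-WAL distance the same way the pg_lsn '-' operator does.

```python
def parse_lsn(lsn: str) -> int:
    """Convert a pg_lsn string 'XXXXXXXX/YYYYYYYY' (two hex halves)
    into a 64-bit byte position in the WAL stream."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def delayed_bytes(current_wal_lsn: str, restart_lsn: str) -> int:
    """Bytes of WAL held back for the slot, i.e. the 'delayed' column
    in the quoted query: pg_current_wal_lsn() - restart_lsn."""
    return parse_lsn(current_wal_lsn) - parse_lsn(restart_lsn)
```

For example, `delayed_bytes("1/0", "0/FFFFFFFF")` crosses a 4 GiB segment boundary and correctly yields 1 byte.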
{
"msg_contents": "On Thu, Mar 2, 2023 at 1:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 2, 2023 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Mar 1, 2023 at 6:21 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > >\n> > > > Apart from a bad-use case example I mentioned, in general, piling up\n> > > > WAL files due to the replication slot has many bad effects on the\n> > > > system. I'm concerned that the side effect of this feature (at least\n> > > > of the current design) is too huge compared to the benefit, and afraid\n> > > > that users might end up using this feature without understanding the\n> > > > side effect well. It might be okay if we thoroughly document it but\n> > > > I'm not sure.\n> > >\n> > > One approach is that change max_slot_wal_keep_size forcibly when min_send_delay\n> > > is set. But it may lead to disable the slot because WALs needed by the time-delayed\n> > > replication may be also removed. Just the right value cannot be set by us because\n> > > it is quite depends on the min_send_delay and workload.\n> > >\n> > > How about throwing the WARNING when min_send_delay > 0 but\n> > > max_slot_wal_keep_size < 0? Differ from previous, version the subscription\n> > > parameter min_send_delay will be sent to publisher. Therefore, we can compare\n> > > min_send_delay and max_slot_wal_keep_size when publisher receives the parameter.\n> >\n> > Since max_slot_wal_keep_size can be changed by reloading the config\n> > file, each walsender warns it also at that time?\n> >\n>\n> I think Kuroda-San wants to emit a WARNING at the time of CREATE\n> SUBSCRIPTION. But it won't be possible to emit a WARNING at the time\n> of ALTER SUBSCRIPTION. Also, as you say if the user later changes the\n> value of max_slot_wal_keep_size, then even if we issue LOG/WARNING in\n> walsender, it may go unnoticed. 
If we really want to give WARNING for\n> this then we can probably give it as soon as user has set non-default\n> value of min_send_delay to indicate that this can lead to retaining\n> WAL on the publisher and they should consider setting\n> max_slot_wal_keep_size.\n>\n> Having said that, I think users can always tune max_slot_wal_keep_size\n> and min_send_delay (as none of these requires restart) if they see any\n> indication of unexpected WAL size growth. There could be multiple ways\n> to check it but I think one can refer wal_status in\n> pg_replication_slots, the extended value can be an indicator of this.\n>\n> > Not sure it's\n> > helpful. I think it's a legitimate use case to set min_send_delay > 0\n> > and max_slot_wal_keep_size = -1, and users might not even notice the\n> > WARNING message.\n> >\n>\n> I think it would be better to tell about this in the docs along with\n> the 'min_send_delay' description. The key point is whether this would\n> be an acceptable trade-off for users who want to use this feature. I\n> think it can harm only if users use this without understanding the\n> corresponding trade-off. As we kept the default to no delay, it is\n> expected from users using this have an understanding of the trade-off.\n\nI imagine that a typical use case would be to set min_send_delay to\nseveral hours to days. I'm concerned that it could not be an\nacceptable trade-off for many users that the system cannot collect any\ngarbage during that.\n\nI think we can have the apply process write the decoded changes\nsomewhere on the disk (as not temporary files) and return the flush\nLSN so that the apply worker can apply them later and the publisher\ncan advance slot's LSN. The feature would be more complex but from the\nuser perspective it would be better.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 4 Mar 2023 00:21:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Hi all,\n\nThanks for working on this.\n\n\n> I imagine that a typical use case would be to set min_send_delay to\n> several hours to days. I'm concerned that it could not be an\n> acceptable trade-off for many users that the system cannot collect any\n> garbage during that.\n>\n\nI'm not too worried about the WAL recycling, that mostly looks like\na documentation issue to me. It is not a problem that many PG users\nare unfamiliar. Also, even though one day creating - altering subscription\nis relaxed to be done by a regular user, one option could be to require\nthis setting to be changed by a superuser? That would alleviate my concern\nregarding WAL recycling. A superuser should be able to monitor the system\nand adjust the settings/hardware accordingly.\n\nHowever, VACUUM being blocked by replication with a configuration\nchange on the subscription sounds more concerning to me. Blocking\nVACUUM for hours could quickly escalate to performance problems.\n\nOn the other hand, we already have a similar problem with\nrecovery_min_apply_delay combined with hot_standby_feedback [1].\nSo, that probably is an acceptable trade-off for the pgsql-hackers.\nIf you use this feature, you should be even more careful.\n\n\n> I think we can have the apply process write the decoded changes\n> somewhere on the disk (as not temporary files) and return the flush\n> LSN so that the apply worker can apply them later and the publisher\n> can advance slot's LSN. The feature would be more complex but from the\n> user perspective it would be better.\n>\n\nYes, this might probably be one of the ideal solutions to the problem at\nhand. But,\nmy current guess is that it'd be a non-trivial change with different\nconcurrency/failure\nscenarios. So, I'm not sure if that is going to be a realistic patch to\npursue.\n\n\nThanks,\nOnder KALACI\n\n\n\n[1] PostgreSQL: Documentation: 15: 20.6. 
Replication\n<https://www.postgresql.org/docs/current/runtime-config-replication.html>",
"msg_date": "Mon, 6 Mar 2023 19:27:59 +0300",
"msg_from": "=?UTF-8?B?w5ZuZGVyIEthbGFjxLE=?= <onderkalaci@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 07:27:59PM +0300, Önder Kalacı wrote:\n> On the other hand, we already have a similar problem with\n> recovery_min_apply_delay combined with hot_standby_feedback [1].\n> So, that probably is an acceptable trade-off for the pgsql-hackers.\n> If you use this feature, you should be even more careful.\n\nYes, but it's possible to turn off hot_standby_feedback so that you don't\nincur bloat on the primary. And you don't need to store hours or days of\nWAL on the primary. I'm very late to this thread, but IIUC you cannot\navoid blocking VACUUM with the proposed feature. IMO the current set of\ntrade-offs (e.g., unavoidable bloat and WAL buildup) would make this\nfeature virtually unusable for a lot of workloads, so it's probably worth\nexploring an alternative approach. In any case, we probably shouldn't rush\nthis into v16 in its current form.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:30:14 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Mar 06, 2023 at 07:27:59PM +0300, Önder Kalacı wrote:\n> > On the other hand, we already have a similar problem with\n> > recovery_min_apply_delay combined with hot_standby_feedback [1].\n> > So, that probably is an acceptable trade-off for the pgsql-hackers.\n> > If you use this feature, you should be even more careful.\n>\n> Yes, but it's possible to turn off hot_standby_feedback so that you don't\n> incur bloat on the primary. And you don't need to store hours or days of\n> WAL on the primary.\n\nRight. This side effect belongs to the combination of\nrecovery_min_apply_delay and hot_standby_feedback/replication slot.\nrecovery_min_apply_delay itself can be used even without this side\neffect if we accept other trade-offs. When it comes to this\ntime-delayed logical replication feature, there is no choice to avoid\nthe side effect for users who want to use this feature.\n\n> I'm very late to this thread, but IIUC you cannot\n> avoid blocking VACUUM with the proposed feature.\n\nRight.\n\n> IMO the current set of\n> trade-offs (e.g., unavoidable bloat and WAL buildup) would make this\n> feature virtually unusable for a lot of workloads, so it's probably worth\n> exploring an alternative approach.\n\nIt might require more engineering effort for alternative approaches\nsuch as one I proposed but the feature could become better from the\nuser perspective. I also think it would be worth exploring it if we've\nnot yet.\n\nRegards,\n\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 12:49:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 9:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 8, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >\n>\n> > IMO the current set of\n> > trade-offs (e.g., unavoidable bloat and WAL buildup) would make this\n> > feature virtually unusable for a lot of workloads, so it's probably worth\n> > exploring an alternative approach.\n>\n> It might require more engineering effort for alternative approaches\n> such as one I proposed but the feature could become better from the\n> user perspective. I also think it would be worth exploring it if we've\n> not yet.\n>\n\nFair enough. I think as of now most people think that we should\nconsider alternative approaches for this feature. The two ideas at a\nhigh level are that the apply worker itself first flushes the decoded\nWAL (maybe only when time-delay is configured) or have a separate\nwalreceiver process as we have for standby. I think we need to analyze\nthe pros and cons of each of those approaches and see if they would be\nuseful even for other things on the apply side.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 11:00:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "At Thu, 9 Mar 2023 11:00:46 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \r\n> On Wed, Mar 8, 2023 at 9:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Wed, Mar 8, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\r\n> > >\r\n> >\r\n> > > IMO the current set of\r\n> > > trade-offs (e.g., unavoidable bloat and WAL buildup) would make this\r\n> > > feature virtually unusable for a lot of workloads, so it's probably worth\r\n> > > exploring an alternative approach.\r\n> >\r\n> > It might require more engineering effort for alternative approaches\r\n> > such as one I proposed but the feature could become better from the\r\n> > user perspective. I also think it would be worth exploring it if we've\r\n> > not yet.\r\n> >\r\n> \r\n> Fair enough. I think as of now most people think that we should\r\n> consider alternative approaches for this feature. The two ideas at a\r\n\r\nIf we can notify subscriber of the transaction start time, will that\r\nsolve the current problem? If not, or if it is not possible, +1 to\r\nlook for other solutions.\r\n\r\n> high level are that the apply worker itself first flushes the decoded\r\n> WAL (maybe only when time-delay is configured) or have a separate\r\n> walreceiver process as we have for standby. I think we need to analyze\r\n> the pros and cons of each of those approaches and see if they would be\r\n> useful even for other things on the apply side.\r\n\r\nMy understanding of the requirements here is that the publisher should\r\nnot hold changes, the subscriber should not hold data reads, and all\r\ntransactions including two-phase ones should be applied at once upon\r\ncommitting. Both sides need to respond to the requests from the other\r\nside. We expect apply-delay of several hours or more. 
My thoughts\r\nconsidering the requirements are as follows:\r\n\r\nIf we expect delays of several hours or more, I don't think it's\r\nfeasible to stack received changes in the process memory. So, if\r\napply-delay is in effect, I think it would be better to process\r\ntransactions through files regardless of process configuration.\r\n\r\nI'm not sure whether we should have a separate process for protocol\r\nprocessing. On one hand, it would simplify the protocol processing\r\npart, but on the other hand, changes would always have to be applied\r\nthrough files. If we plan to integrate the paths with and without\r\napply-delay by the file-passing method, this might work. If we want to\r\nmaintain the current approach when not applying apply-delay, I think\r\nwe would have to implement it in a single process, but I feel the\r\nprotocol processing part could become complicated.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n",
"msg_date": "Thu, 09 Mar 2023 18:26:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, Mar 9, 2023 at 2:56 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 9 Mar 2023 11:00:46 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Mar 8, 2023 at 9:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 8, 2023 at 3:30 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > > >\n> > >\n> > > > IMO the current set of\n> > > > trade-offs (e.g., unavoidable bloat and WAL buildup) would make this\n> > > > feature virtually unusable for a lot of workloads, so it's probably worth\n> > > > exploring an alternative approach.\n> > >\n> > > It might require more engineering effort for alternative approaches\n> > > such as one I proposed but the feature could become better from the\n> > > user perspective. I also think it would be worth exploring it if we've\n> > > not yet.\n> > >\n> >\n> > Fair enough. I think as of now most people think that we should\n> > consider alternative approaches for this feature. The two ideas at a\n>\n> If we can notify subscriber of the transaction start time, will that\n> solve the current problem?\n>\n\nI don't think that will solve the current problem because the problem\nis related to confirming back the flush LSN (commit LSN) to the\npublisher which we do only after we commit the delayed transaction.\nDue to this, we are not able to advance WAL(restart_lsn)/XMIN on the\npublisher which causes an accumulation of WAL and does not allow the\nvacuum to remove deleted rows. Do you have something else in mind\nwhich makes you think that it can solve the problem?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Mar 2023 15:06:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
    "msg_contents": "Dear hackers,\r\n\r\nIn the discussion, Sawada-san pointed out [1] that the current approach of\r\ntime-delayed logical replication prevents recycling WALs, so I'm planning to close\r\nthe CF entry for now. This or the forked thread will be registered again after\r\ndeciding on the alternative approach. Thank you very much for taking the time to\r\njoin our discussions earlier.\r\n\r\nI think that to solve the issue, logical changes must first be flushed on the\r\nsubscriber, and workers must apply the changes after the specified time has elapsed.\r\nThe straightforward approach is to follow physical replication - introduce a\r\nwalreceiver process on the subscriber. We must research more, but at least there\r\nare some benefits:\r\n\r\n* The publisher can be shut down even if the apply worker is stuck. Such a stuck\r\n state is more likely to happen than in physical replication, so this may improve\r\n robustness. For more detail, please see another thread [2].\r\n* In case of synchronous_commit = 'remote_write', the publisher can COMMIT faster.\r\n This is because the walreceiver will flush changes immediately and reply soon.\r\n Even if time-delay is enabled, the wait time will not be increased.\r\n* It may be used as infrastructure for parallel apply of non-streaming transactions.\r\n Their basic designs are similar - one process receives changes and others apply them.\r\n\r\nI searched old discussions [3] and wiki pages, and I found that the initial prototype\r\nhad a logical walreceiver, but in a later version [4] the apply worker received\r\nchanges directly. I could not find the reason for the decision, but I suspect there\r\nwere the following reasons. Could you please tell me the correct background?\r\n\r\n* Performance bottlenecks. If the walreceiver flushes changes and the worker applies\r\n them, fsync() is called for every reception.\r\n* Complexity. In this design the walreceiver and apply worker must share the progress\r\n of flush/apply. For crash recovery, more consideration is needed. 
The related discussion\r\n can be found in [5].\r\n* Extendibility. In-core logical replication should be a sample of an external\r\n project. Apply worker is just a background worker that can be launched from an extension,\r\n so it can be easily understood. If it deeply depends on the walreceiver, other projects cannot follow.\r\n\r\n[1]: https://www.postgresql.org/message-id/CAD21AoAeG2%2BRsUYD9%2BmEwr8-rrt8R1bqpe56T2D%3DeuO-Qs-GAg%40mail.gmail.com\r\n[2]: https://www.postgresql.org/message-id/flat/TYAPR01MB586668E50FC2447AD7F92491F5E89%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n[3]: https://www.postgresql.org/message-id/201206131327.24092.andres%402ndquadrant.com\r\n[4]: https://www.postgresql.org/message-id/37e19ad5-f667-2fe2-b95b-bba69c5b6c68@2ndquadrant.com\r\n[5]: https://www.postgresql.org/message-id/1339586927-13156-12-git-send-email-andres%402ndquadrant.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Fri, 10 Mar 2023 12:05:52 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
    "msg_contents": "Hi hackers,\r\n\r\nI have made a rough prototype that can serialize changes to a permanent file and\r\napply them after the time has elapsed, based on the v30 patch. I think the 2PC and\r\nrestore mechanisms need more analysis, but I can share the code for discussion.\r\nWhat do you think?\r\n\r\n## Interfaces\r\n\r\nNot changed from old versions. The subscription parameter \"min_apply_delay\" is\r\nused to enable time-delayed logical replication.\r\n\r\n## Advantages\r\n\r\nTwo big problems are solved.\r\n\r\n* The apply worker can respond to the walsender's keepalive while delaying application.\r\n This is because the process does not sleep.\r\n* The publisher can recycle WALs even if a transaction related to the WAL is not\r\n applied yet. This is because the apply worker flushes all the changes to a file\r\n and replies that the WALs are flushed.\r\n\r\n## Disadvantages\r\n\r\nCode complexity.\r\n\r\n## Basic design\r\n\r\nThe basic idea is quite simple - create a new file when the apply worker receives a\r\nBEGIN message, write the received changes, and flush them when the COMMIT message\r\ncomes. Each delayed transaction's commit time is checked in every main loop, and the\r\ntransaction is applied when the elapsed time exceeds min_apply_delay.\r\n\r\nTo handle files, APIs that use plain kernel FDs were used. This approach is\r\nsimilar to the physical walreceiver process. Unlike the physical one, the worker\r\ndoes not flush for every message - it is done at the end of the transaction.\r\n\r\n### For 2PC\r\n\r\nThe delay starts once COMMIT PREPARED arrives. But to avoid the\r\nlong-lock-holding issue, the prepared transaction is just written into the file\r\nwithout being applied.\r\n\r\nWhen BEGIN PREPARE is received, the worker creates a file and starts to write\r\nchanges, the same as for normal transactions. When we reach the PREPARE message, it\r\njust writes a message into the file, flushes, and closes it. This means that no\r\ntransactions are prepared on the subscriber. When COMMIT PREPARED is received, the\r\nworker opens the file again and writes the message. After that, it is treated the\r\nsame as a normal committed transaction.\r\n\r\n### For streamed transactions\r\n\r\nNothing special is done when a streamed transaction comes in. When it is committed\r\nor prepared, all the changes are read and written into the permanent file. To read\r\nand write changes, apply_spooled_changes() is used, which means the basic workflow\r\nis not changed.\r\n\r\n### Restore from files\r\n\r\nTo check the time elapsed since the commit, the commit_time of all delayed\r\ntransactions must be stored in memory. Basically it can be stored when the worker\r\nhandles the COMMIT message, but special treatment is needed for restarting.\r\n\r\nWhen an apply worker receives a COMMIT/PREPARE/COMMIT PREPARED message, it writes\r\nthe message, flushes it, and caches the commit_time. When the worker restarts, it\r\nopens the files, checks the final message (this is done by seeking some bytes from\r\nthe end of the file), and then caches the written commit_time.\r\n\r\n## Restrictions\r\n\r\n* The combination with ALTER SUBSCRIPTION .. SKIP LSN is not considered.\r\n\r\nThanks to Osumi-san for helping with the implementation.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 17 Mar 2023 13:11:58 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
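The "checked in every main loop" step of the prototype above can be sketched as a pure function (hypothetical names; the real worker tracks spool files, not a dict): given the cached commit times of spooled transactions, pick the ones whose delay has fully elapsed, oldest first.

```python
def pick_ready(delayed, min_apply_delay_ms, now_ms):
    """delayed maps xid -> commit time (ms). Return the xids whose
    min_apply_delay has elapsed, ordered by commit time so the worker
    would replay their spool files in commit order."""
    ready = [xid for xid, commit_ms in delayed.items()
             if now_ms - commit_ms >= min_apply_delay_ms]
    return sorted(ready, key=lambda xid: delayed[xid])
```

A transaction committed at t=0 with a 3 s delay becomes eligible from t=3000 ms onward, while later transactions stay spooled until their own delay elapses.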
{
"msg_contents": "Dear hackers,\r\n\r\n> I have made a rough prototype that can serialize changes to permanent file and\r\n> apply after time elapsed from v30 patch. I think the 2PC and restore mechanism\r\n> needs more analysis, but I can share codes for discussion. How do you think?\r\n\r\nI have noticed that it could not be applied due to the recent commit.\r\nHere is a rebased version.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 30 Mar 2023 05:28:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
    "msg_contents": "Dear hackers,\r\n\r\nThe previous patch could not be applied due to commits 482675, 1e10d4, and c3afe8.\r\nPSA a rebased version. Also, I have done some code cleanups.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 5 Apr 2023 09:23:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
    "msg_contents": "Dear hackers,\r\n\r\nI have rebased and updated the PoC. Please see attached.\r\n\r\nIn [1], I wrote:\r\n\r\n>\r\n### Restore from files\r\n\r\nTo check the elapsed time from the commit, all commit_time of delayed transactions\r\nmust be stored in the memory. Basically it can store when the worker handle COMMIT\r\nmessage, but it must do special treatment for restarting.\r\n\r\nWhen an apply worker receives COMMIT/PREPARE/COMMIT PREPARED message, it writes\r\nthe message, flush them, and cache the commit_time. When worker restarts, it open\r\nfiles, check the final message (this is done by seeking some bytes from end of\r\nthe file), and then cache the written commit_time.\r\n>\r\nBut I have come to think that this spec is terrible. Therefore, I have implemented a\r\nnew approach which uses the filename itself for restoring state at commit time. The\r\nfollowing is a summary.\r\n\r\nWhen a worker receives a BEGIN message, it creates a new file and writes its\r\nchanges to it. The filename contains the following dash-separated components:\r\n\r\n1. Subscription OID\r\n2. XID of the delayed transaction on the publisher\r\n3. Status of the delaying transaction\r\n4. Upper 32 bits of the commit_lsn\r\n5. Lower 32 bits of the commit_lsn\r\n6. Upper 32 bits of the end_lsn\r\n7. Lower 32 bits of the end_lsn\r\n8. Commit time\r\n\r\nAt the beginning, the new file contains components 4-8 as 0 because the worker\r\ndoes not know their values. When it receives a COMMIT message, the changes are\r\nwritten to the permanent file, and the file is renamed with the appropriate values.\r\n\r\nWhile restarting, the worker reads the directory containing the files and caches\r\ntheir commit times into memory from the filenames. Files do not need to be opened\r\nat this point. Therefore, PREPARE/COMMIT PREPARED messages are no longer written\r\ninto the file. The status of transactions can be distinguished from the filename.\r\n\r\nAnother notable change is the addition of a replication option. If\r\nmin_apply_delay is greater than 0, a new parameter called \"require_schema\" is\r\npassed via the START_REPLICATION command. When \"require_schema\" is enabled, the publisher\r\nsends its schema (RELATION and TYPE messages) every time it sends decoded DMLs.\r\nThis is necessary because delayed transactions may be applied after the subscriber\r\nis restarted, and the LogicalRepRelMap hash is destroyed at that time. If the\r\nRELATION message is not written into the delayed file, and the worker restarts\r\njust before applying the transaction, it will fail to open the local relation\r\nand display an error message: \"ERROR: no relation map entry\".\r\n\r\nSome small bugs were also fixed.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D871F60DDFD8FAA2CDE4F5BD9@TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 19 Apr 2023 09:30:46 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
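The dash-separated filename scheme described in the message above (subscription OID, XID, status, split 64-bit LSNs, commit time) can be sketched as follows. This is a minimal illustration under assumptions; the helper names are hypothetical and not taken from the actual PoC patch.

```python
# Hypothetical sketch of the delay-file naming scheme described above.
# Components: subOID-xid-status-commitlsn_hi-commitlsn_lo-endlsn_hi-endlsn_lo-committime

def make_delay_filename(sub_oid, xid, status,
                        commit_lsn=0, end_lsn=0, commit_time=0):
    """Encode the eight components into one filename; LSNs are split
    into upper/lower 32-bit halves as in the message above."""
    return "-".join(str(v) for v in (
        sub_oid, xid, status,
        commit_lsn >> 32, commit_lsn & 0xFFFFFFFF,
        end_lsn >> 32, end_lsn & 0xFFFFFFFF,
        commit_time,
    ))

def parse_delay_filename(name):
    """Recover all components from the filename alone, so a restarting
    worker can cache commit times without opening any file."""
    s, x, st, ch, cl, eh, el, ct = map(int, name.split("-"))
    return {"sub_oid": s, "xid": x, "status": st,
            "commit_lsn": (ch << 32) | cl,
            "end_lsn": (eh << 32) | el,
            "commit_time": ct}
```

A freshly created file would carry zeros for components 4-8 and be renamed with the real values at commit time, which is what makes restart recovery a pure directory scan.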
{
"msg_contents": "Dear hackers,\r\n\r\nI rebased and refined my PoC. Followings are the changes:\r\n\r\n* Added support for ALTER SUBSCRIPTION .. SKIP LSN. The skip operation is done when\r\nthe application starts. The user must indicate the commit_lsn of the transaction to\r\nbe skipped. If the apply worker faces an ERROR, it will output the commit_lsn.\r\nUnlike non-delayed transactions, a prepared but not yet committed transaction\r\ncannot be skipped. This is because currently the prepare_lsn is not recorded in\r\nthe file.\r\n\r\n \r\n* Added integrity checks. When the debug build is enabled, each message written in\r\nthe files has a CRC checksum. When a message is read by the apply worker, the\r\nworker checks it and raises PANIC if the comparison fails. I'm not sure\r\nthe performance degradation is acceptable, so I added it only when\r\nUSE_ASSERT_CHECKING is on.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 28 Apr 2023 09:05:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
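The per-message CRC idea in the message above can be sketched like this: each change record is framed with its length and a CRC32 that the reader verifies before applying. This is a minimal sketch under assumptions; the framing layout and function names are hypothetical, not the patch's actual on-disk format.

```python
import struct
import zlib

def frame_message(payload: bytes) -> bytes:
    """Frame one logical change: 4-byte length, payload, 4-byte CRC32."""
    return (struct.pack("<I", len(payload)) + payload
            + struct.pack("<I", zlib.crc32(payload)))

def read_message(buf: bytes, offset: int = 0):
    """Return (payload, next_offset); raise on CRC mismatch (the patch
    would PANIC here when USE_ASSERT_CHECKING is on)."""
    (length,) = struct.unpack_from("<I", buf, offset)
    payload = buf[offset + 4 : offset + 4 + length]
    (crc,) = struct.unpack_from("<I", buf, offset + 4 + length)
    if zlib.crc32(payload) != crc:
        raise ValueError("delay-file message CRC mismatch")
    return payload, offset + 8 + length
```

The performance question raised in the message is visible here: every write and every read pays one CRC pass over the payload, which is why it is guarded by the assert-enabled build.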
{
"msg_contents": "On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n>\n> I rebased and refined my PoC. Followings are the changes:\n\nThanks.\n\nApologies for being late here. Please bear with me if I'm repeating\nany of the discussed points.\n\nI'm mainly trying to understand the production level use-case behind\nthis feature, and for that matter, recovery_min_apply_delay. AFAIK,\npeople try to keep the replication lag as minimum as possible i.e.\nnear zero to avoid the extreme problems on production servers - wal\nfile growth, blocked vacuum, crash and downtime.\n\nThe proposed feature commit message and existing docs about\nrecovery_min_apply_delay justify the reason as 'offering opportunities\nto correct data loss errors'. If someone wants to enable\nrecovery_min_apply_delay/min_apply_delay on production servers, I'm\nguessing their values will be in hours, not in minutes; for the simple\nreason that when a data loss occurs, people/infrastructure monitoring\npostgres need to know it first and need time to respond with\ncorrective actions to recover data loss. When these parameters are\nset, the primary server mustn't be generating too much WAL to avoid\neventual crash/downtime. Who would really want to be so defensive\nagainst somebody who may or may not accidentally cause data loss and\nenable these features on production servers (especially when these can\ntake down the primary server) and live happily with the induced\nreplication lag?\n\nAFAIK, PITR is what people use for recovering from data loss errors in\nproduction.\n\nIMO, before we even go implement the apply delay feature for logical\nreplication, it's worth to understand if induced replication lags have\nany production level significance. 
We can also debate if providing\napply delay hooks is any better with simple out-of-the-box extensions\nas opposed to the core providing these features.\n\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 10 May 2023 17:35:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Wed, May 10, 2023 at 5:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear hackers,\n> >\n> > I rebased and refined my PoC. Followings are the changes:\n>\n> Thanks.\n>\n> Apologies for being late here. Please bear with me if I'm repeating\n> any of the discussed points.\n>\n> I'm mainly trying to understand the production level use-case behind\n> this feature, and for that matter, recovery_min_apply_delay. AFAIK,\n> people try to keep the replication lag as minimum as possible i.e.\n> near zero to avoid the extreme problems on production servers - wal\n> file growth, blocked vacuum, crash and downtime.\n>\n> The proposed feature commit message and existing docs about\n> recovery_min_apply_delay justify the reason as 'offering opportunities\n> to correct data loss errors'. If someone wants to enable\n> recovery_min_apply_delay/min_apply_delay on production servers, I'm\n> guessing their values will be in hours, not in minutes; for the simple\n> reason that when a data loss occurs, people/infrastructure monitoring\n> postgres need to know it first and need time to respond with\n> corrective actions to recover data loss. When these parameters are\n> set, the primary server mustn't be generating too much WAL to avoid\n> eventual crash/downtime. Who would really want to be so defensive\n> against somebody who may or may not accidentally cause data loss and\n> enable these features on production servers (especially when these can\n> take down the primary server) and live happily with the induced\n> replication lag?\n>\n> AFAIK, PITR is what people use for recovering from data loss errors in\n> production.\n>\n\nI think PITR is not a preferred way to achieve this because it can be\nquite time-consuming. See how Gitlab[1] uses delayed replication in\nPostgreSQL. 
This is one of the use cases I came across but I am sure\nthere will be others as well, otherwise, we would not have introduced\nthis feature in the first place.\n\nSome of the other solutions like MySQL also have this feature. See\n[2], you can also read the other use cases in that article. It seems\npglogical has this feature and there is a customer demand for the same\n[3]\n\n> IMO, before we even go implement the apply delay feature for logical\n> replication, it's worth to understand if induced replication lags have\n> any production level significance.\n>\n\nI think the main thing here is to come up with the right design to\nimplement this feature. In the last release, we found some blocking\nproblems with the proposed patch at that time but Kuroda-San came up\nwith a new patch with a different design based on the discussion here.\nI haven't looked at it yet though.\n\n\n[1] - https://about.gitlab.com/blog/2019/02/13/delayed-replication-for-disaster-recovery-with-postgresql/\n[2] - https://dev.mysql.com/doc/refman/8.0/en/replication-delayed.html\n[3] - https://www.postgresql.org/message-id/73b06a32-56ab-4056-86ff-e307f3c316f1%40www.fastmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 May 2023 08:49:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit-san, Bharath,\r\n\r\nThank you for giving your opinions!\r\n\r\n> Some of the other solutions like MySQL also have this feature. See\r\n> [2], you can also read the other use cases in that article. It seems\r\n> pglogical has this feature and there is a customer demand for the same\r\n> [3]\r\n\r\nAdditionally, Db2 [1] seems to have a similar feature. If we extend the survey to DBaaSes,\r\nRDS for MySQL [2] and TencentDB [3] have it as well. These may indicate the need\r\nfor delayed replication. \r\n\r\n[1]: https://www.ibm.com/docs/en/db2/11.5?topic=parameters-hadr-replay-delay-hadr-replay-delay\r\n[2]: https://aws.amazon.com/jp/blogs/database/recover-from-a-disaster-with-delayed-replication-in-amazon-rds-for-mysql/\r\n[3]: https://www.tencentcloud.com/document/product/236/41085\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 11 May 2023 04:19:40 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n>\n> I rebased and refined my PoC. Followings are the changes:\n>\n\n1. Is my understanding correct that this patch creates the delay files\nfor each transaction? If so, did you consider other approaches such as\nusing one file to avoid creating many files?\n2. For streaming transactions, first the changes are written in the\ntemp file and then moved to the delay file. It seems like there is a\ndouble work. Is it possible to unify it such that when min_apply_delay\nis specified, we just use the delay file without sacrificing the\nadvantages like stream sub-abort can truncate the changes?\n3. Ideally, there shouldn't be a performance impact of this feature on\nregular transactions because the delay file is created only when\nmin_apply_delay is active but better to do some testing of the same.\n\nOverall, I think such an approach can address comments by Sawada-San\n[1] but not sure if Sawada-San or others have any better ideas to\nachieve this feature. It would be good to see what others think of\nthis approach.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoAeG2%2BRsUYD9%2BmEwr8-rrt8R1bqpe56T2D%3DeuO-Qs-GAg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 May 2023 10:34:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Thu, May 11, 2023 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Dear hackers,\n> >\n> > I rebased and refined my PoC. Followings are the changes:\n> >\n>\n> 1. Is my understanding correct that this patch creates the delay files\n> for each transaction? If so, did you consider other approaches such as\n> using one file to avoid creating many files?\n> 2. For streaming transactions, first the changes are written in the\n> temp file and then moved to the delay file. It seems like there is a\n> double work. Is it possible to unify it such that when min_apply_delay\n> is specified, we just use the delay file without sacrificing the\n> advantages like stream sub-abort can truncate the changes?\n> 3. Ideally, there shouldn't be a performance impact of this feature on\n> regular transactions because the delay file is created only when\n> min_apply_delay is active but better to do some testing of the same.\n>\n\nIn addition to the points Amit raised, if the 'required_schema' option\nis specified in START_REPLICATION, the publisher sends schema\ninformation for every change. I think it leads to significant\noverhead. Did you consider alternative approaches such as sending the\nschema information for every transaction or the subscriber requests\nthe publisher to send it?\n\n> Overall, I think such an approach can address comments by Sawada-San\n> [1] but not sure if Sawada-San or others have any better ideas to\n> achieve this feature. 
It would be good to see what others think of\n> this approach.\n>\n\nI agree with this approach.\n\nWhen it comes to the idea of writing logical changes to permanent\nfiles, I think it would also be a good idea (and perhaps could be a\nbuilding block of this feature) that we write streamed changes to a\npermanent file so that the apply worker can retry to apply them\nwithout retrieving the same changes again from the publisher.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 May 2023 11:07:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit, Sawada-san,\r\n\r\nThank you for replying!\r\n\r\n> On Thu, May 11, 2023 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\r\n> > <kuroda.hayato@fujitsu.com> wrote:\r\n> > >\r\n> > > Dear hackers,\r\n> > >\r\n> > > I rebased and refined my PoC. Followings are the changes:\r\n> > >\r\n> >\r\n> > 1. Is my understanding correct that this patch creates the delay files\r\n> > for each transaction? If so, did you consider other approaches such as\r\n> > using one file to avoid creating many files?\r\n> > 2. For streaming transactions, first the changes are written in the\r\n> > temp file and then moved to the delay file. It seems like there is a\r\n> > double work. Is it possible to unify it such that when min_apply_delay\r\n> > is specified, we just use the delay file without sacrificing the\r\n> > advantages like stream sub-abort can truncate the changes?\r\n> > 3. Ideally, there shouldn't be a performance impact of this feature on\r\n> > regular transactions because the delay file is created only when\r\n> > min_apply_delay is active but better to do some testing of the same.\r\n> >\r\n> \r\n> In addition to the points Amit raised, if the 'required_schema' option\r\n> is specified in START_REPLICATION, the publisher sends schema\r\n> information for every change. I think it leads to significant\r\n> overhead. Did you consider alternative approaches such as sending the\r\n> schema information for every transaction or the subscriber requests\r\n> the publisher to send it?\r\n\r\nThanks for giving your opinions. 
Except for suggestion 2, I had not considered them before.\r\nI will analyze them and share my opinion later.\r\nAbout 2, I chose that style in order to simplify the source code, but I'm now planning\r\nto follow the suggestion.\r\n\r\n> > Overall, I think such an approach can address comments by Sawada-San\r\n> > [1] but not sure if Sawada-San or others have any better ideas to\r\n> > achieve this feature. It would be good to see what others think of\r\n> > this approach.\r\n> >\r\n> \r\n> I agree with this approach.\r\n> \r\n> When it comes to the idea of writing logical changes to permanent\r\n> files, I think it would also be a good idea (and perhaps could be a\r\n> building block of this feature) that we write streamed changes to a\r\n> permanent file so that the apply worker can retry to apply them\r\n> without retrieving the same changes again from the publisher.\r\n\r\nI'm very relieved to hear that.\r\nOne question: did you mean that serializing changes into the permanent files\r\ncan be extended to the non-delay case, right? I think I will handle the delayed\r\nreplication case first, and then we can consider extending it later.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 12 May 2023 03:48:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, May 12, 2023 at 12:48 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > > Overall, I think such an approach can address comments by Sawada-San\n> > > [1] but not sure if Sawada-San or others have any better ideas to\n> > > achieve this feature. It would be good to see what others think of\n> > > this approach.\n> > >\n> >\n> > I agree with this approach.\n> >\n> > When it comes to the idea of writing logical changes to permanent\n> > files, I think it would also be a good idea (and perhaps could be a\n> > building block of this feature) that we write streamed changes to a\n> > permanent file so that the apply worker can retry to apply them\n> > without retrieving the same changes again from the publisher.\n>\n> I'm very relieved to hear that.\n> One question: did you mean to say that serializing changes into the permanent files\n> can be extend to the non-delay case, right? I think once I will treat for delayed\n> replication, and then we can consider later.\n\nWhat I was thinking of is that we implement non-delay cases (only for\nstreamed transactions) and then extend it to delay cases (i.e. adding\nnon-streamed transaction support and the delay mechanism). It might be\nhelpful if this patch becomes large and this approach can enable us to\nreduce the complexity or divide the patch. That being said, I've not\nconsidered this approach enough yet and it's just an idea. Extending\nthis feature to non-delay cases later also makes sense to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 12 May 2023 13:45:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, May 12, 2023 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 11, 2023 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > >\n> > > Dear hackers,\n> > >\n> > > I rebased and refined my PoC. Followings are the changes:\n> > >\n> >\n> > 1. Is my understanding correct that this patch creates the delay files\n> > for each transaction? If so, did you consider other approaches such as\n> > using one file to avoid creating many files?\n> > 2. For streaming transactions, first the changes are written in the\n> > temp file and then moved to the delay file. It seems like there is a\n> > double work. Is it possible to unify it such that when min_apply_delay\n> > is specified, we just use the delay file without sacrificing the\n> > advantages like stream sub-abort can truncate the changes?\n> > 3. Ideally, there shouldn't be a performance impact of this feature on\n> > regular transactions because the delay file is created only when\n> > min_apply_delay is active but better to do some testing of the same.\n> >\n>\n> In addition to the points Amit raised, if the 'required_schema' option\n> is specified in START_REPLICATION, the publisher sends schema\n> information for every change. I think it leads to significant\n> overhead. Did you consider alternative approaches such as sending the\n> schema information for every transaction or the subscriber requests\n> the publisher to send it?\n>\n\nWhy do we need this new flag? I can't see any comments in the related\ncode which explain its need.\n\n> > Overall, I think such an approach can address comments by Sawada-San\n> > [1] but not sure if Sawada-San or others have any better ideas to\n> > achieve this feature. 
It would be good to see what others think of\n> > this approach.\n> >\n>\n> I agree with this approach.\n>\n> When it comes to the idea of writing logical changes to permanent\n> files, I think it would also be a good idea (and perhaps could be a\n> building block of this feature) that we write streamed changes to a\n> permanent file so that the apply worker can retry to apply them\n> without retrieving the same changes again from the publisher.\n>\n\nI think we anyway won't be able to send confirmation till we write or\nprocess the commit. If it gets interrupted anytime in between we need\nto get all the changes again. I think using Fileset with temp files\nhas quite a few advantages for streaming as are noted in the header\ncomments of worker.c. We can investigate to replace that with\npermanent files but I don't see that the advantages outweigh the\nchange. Also, after parallel apply, I am expecting, most users would\nprefer that mode for large transactions, so making changes in the\nserialized path doesn't seem like a good idea to me.\n\nHaving said that, I also thought that it would be a good idea if both\nstreaming and time-delayed can use the same code path in some way\nw.r.t writing to files but couldn't come up with any good idea without\nmore downsides. I see that Kuroda-San has tried to keep the code path\nisolated for this feature but still see that one can question the\nimplementation approach.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 May 2023 10:33:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "On Fri, May 12, 2023 at 10:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, May 12, 2023 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 11, 2023 at 2:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 28, 2023 at 2:35 PM Hayato Kuroda (Fujitsu)\n> > > <kuroda.hayato@fujitsu.com> wrote:\n> > > >\n> > > > Dear hackers,\n> > > >\n> > > > I rebased and refined my PoC. Followings are the changes:\n> > > >\n> > >\n> > > 1. Is my understanding correct that this patch creates the delay files\n> > > for each transaction? If so, did you consider other approaches such as\n> > > using one file to avoid creating many files?\n> > > 2. For streaming transactions, first the changes are written in the\n> > > temp file and then moved to the delay file. It seems like there is a\n> > > double work. Is it possible to unify it such that when min_apply_delay\n> > > is specified, we just use the delay file without sacrificing the\n> > > advantages like stream sub-abort can truncate the changes?\n> > > 3. Ideally, there shouldn't be a performance impact of this feature on\n> > > regular transactions because the delay file is created only when\n> > > min_apply_delay is active but better to do some testing of the same.\n> > >\n> >\n> > In addition to the points Amit raised, if the 'required_schema' option\n> > is specified in START_REPLICATION, the publisher sends schema\n> > information for every change. I think it leads to significant\n> > overhead. Did you consider alternative approaches such as sending the\n> > schema information for every transaction or the subscriber requests\n> > the publisher to send it?\n> >\n>\n> Why do we need this new flag? I can't see any comments in the related\n> code which explain its need.\n>\n\nSo as per the email [1], this would be required after the subscriber\nrestart. 
I guess we ideally need it once per delay file (considering\nthat we have one file for all delayed xacts). In the worst case, we\ncan have it per transaction as suggested by Sawada-San.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB5866568A5C1E71338328B20CF5629%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 May 2023 10:47:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Time delayed LR (WAS Re: logical replication restrictions)"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThank you for giving suggestions.\r\n\r\n> > Dear hackers,\r\n> >\r\n> > I rebased and refined my PoC. Followings are the changes:\r\n> >\r\n> \r\n> 1. Is my understanding correct that this patch creates the delay files\r\n> for each transaction? If so, did you consider other approaches such as\r\n> using one file to avoid creating many files?\r\n\r\nI have been analyzing the approach which uses only one file per subscription, per\r\nyour suggestion. Currently I'm not sure whether it is a good approach or not, so could\r\nyou please give me any feedback?\r\n\r\nTL;DR: rotating segment files like WAL may be used, but there are several issues.\r\n\r\n# Assumption \r\n\r\n* Streamed txns are also serialized to the same permanent file, in the received order.\r\n* No additional sorting is considered.\r\n\r\n# Considerations\r\n\r\nAs a premise, applied txns must be removed from files, otherwise the disk becomes\r\nfull some day and it leads to PANIC.\r\n\r\n## Naive approach - serialize all the changes to one large file\r\n\r\nIf workers continue to write received changes from the head naively, it may be\r\ndifficult to purge applied txns because there seems to be no good way to\r\ntruncate the first part of the file. I could not find related functions in fd.h.\r\n\r\n## Alternative approach - separate the file into segments\r\n\r\nAn alternative approach I came up with is that the file is divided into segments\r\n- like WAL - and a segment is removed once all txns written to it are applied. It may\r\nwork well in the non-streaming, 1PC case, but may not in other cases.\r\n\r\n### Regarding the PREPARE transactions\r\n\r\nIn that case it is more likely to occur that the segment which contains the\r\nactual txn differs from the segment containing the COMMIT PREPARED. Hence the worker\r\nmust check all the remaining segments to find the actual messages in them. Isn't\r\nit inefficient? There is another approach in which workers apply the PREPARE\r\nimmediately and spill only the COMMIT PREPARED to file, but in this case the worker\r\nwould keep holding the acquired locks and never release them until the delay is done.\r\n\r\n### Regarding the streamed transactions\r\n\r\nAs for the streaming case, chunks of txns are separated into several segments.\r\nHence the worker must check all the remaining segments to find the chunk messages\r\nin them, same as above. Isn't it inefficient too?\r\n\r\nAdditionally, segments which have prepared or streamed transactions cannot be\r\nremoved, so even in this case many files may be generated and remain.\r\n\r\nAnyway, it may be difficult to accept streaming in-progress transactions while\r\ndelaying the application. IIUC the motivation of streaming is to reduce the lag\r\nbetween nodes, and that is the opposite of this feature. So it might be okay, not sure.\r\n\r\n### Regarding the publisher - timing to send schema may be fuzzy\r\n\r\nAnother issue is that the timing when the publisher sends the schema information\r\ncannot be determined on the publisher itself. As discussed on hackers, the publisher\r\nmust send schema information once per segment file, but the segmentation is controlled\r\non the subscriber side.\r\nI'm thinking that the walsender cannot recognize the changing of segments and\r\nso cannot know the timing to send them.\r\n\r\nThat's it. I'm very happy to get ideas. \r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 22 May 2023 11:49:12 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
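The segment-recycling constraint analyzed in the message above — a segment can only be removed once every transaction written into it has been applied, and prepared or streamed transactions spanning segments keep older ones pinned — can be sketched as simple bookkeeping. The class and method names here are hypothetical, purely to illustrate the idea.

```python
class SegmentQueue:
    """Toy model of WAL-like delay-file segments: each segment tracks the
    xids written into it that are not yet applied; leading segments are
    recycled only when fully applied."""

    def __init__(self):
        self.segments = []  # ordered list of [seg_no, pending_xid_set]

    def record_change(self, seg_no, xid):
        """Note that xid wrote a change into segment seg_no."""
        if not self.segments or self.segments[-1][0] != seg_no:
            self.segments.append([seg_no, set()])
        self.segments[-1][1].add(xid)

    def mark_applied(self, xid):
        """Forget xid everywhere, then drop fully-applied leading
        segments (the real system would unlink those files)."""
        for _, pending in self.segments:
            pending.discard(xid)
        removed = []
        while self.segments and not self.segments[0][1]:
            removed.append(self.segments.pop(0)[0])
        return removed
```

Note how a transaction spanning segments (a streamed or prepared one) pins every segment it touches, which is exactly the file-retention problem the message points out.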
{
"msg_contents": "Dear hackers,\r\n\r\nAt PGcon and other places we have discussed the time-delayed logical replication,\r\nbut now we have understood that there are no easy ways. Followings are our analysis.\r\n\r\n# Abstract\r\n\r\nTo implement time-delayed logical replication with a more proper approach,\r\nthe worker must serialize all the received messages into permanent files.\r\nBut PostgreSQL does not have good infrastructure for this purpose, so a huge engineering effort is needed.\r\n\r\n## Review: problem of the without-file approach\r\n\r\nIn the without-file approach, the apply worker process sleeps while delaying the application.\r\nThis approach was chosen in earlier versions like [1], but it contains problems which were\r\nshared by Sawada-san [2]. They lead to a PANIC error due to the disk becoming full.\r\n\r\n A) WALs cannot be recycled on the publisher because they are not flushed on the subscriber.\r\n B) Moreover, vacuum cannot remove dead tuples on the publisher.\r\n\r\n## Alternative approach: serializing messages to files\r\n\r\nTo prevent any potential issues, the worker should serialize all incoming messages\r\nto a permanent file, like what the physical walreceiver does.\r\nHere, messages are first written into files at the beginning of transactions and then flushed at the end.\r\nThis approach could solve problems a) and b), but it still has many considerations and difficulties.\r\n\r\n### How to separate messages into files?\r\n\r\nThere are two possibilities for dividing messages into files, but neither of them is ideal.\r\n\r\n1. Create a file per received transaction. \r\n \r\nIn this case files will be removed after the delay period is exceeded and the transaction is applied.\r\nThis is the simplest approach, but the number of files bloats.\r\n\r\n2. Use one large file or segmented files (like WAL). \r\n\r\nThis can reduce the number of files, but we must consider further things:\r\n\r\n A) Purge – We must purge applied transactions, but we do not have a good way\r\n to remove one transaction from the large file.\r\n\r\n B) 2PC – It is more likely to occur that the segment which contains the actual\r\n transaction differs from the segment containing the COMMIT PREPARED.\r\n Hence the worker must check all the segments to find the actual messages in them.\r\n\r\n C) Streamed in-progress transactions - chunks of transactions are separated\r\n into several segments. Hence the worker must check all the segments to find\r\n the chunk messages in them, same as above.\r\n\r\n### Handle the case when the file exceeds the limitation \r\n\r\nRegardless of the option chosen from the ones mentioned above, there is a possibility\r\nthat the file size could exceed the file system's limit. This can occur as the\r\npublisher can send transactions of any length.\r\nPostgreSQL provides a mechanism for working with such large files - the BufFile data structure -\r\nbut it could not be used as-is for several reasons:\r\n\r\n A) It only supports buffered I/O. A read or write of the low-level File\r\n occurs only when the buffer is filled or emptied. So, we cannot control when it is persisted.\r\n\r\n B) It can be used only for temporary purposes. Internally the BufFile creates\r\n some physical files in the $PGDATA/base/pgsql_tmp directories, and files in the\r\n subdirectory will be removed when the postmaster restarts.\r\n\r\n C) It does not have mechanisms for restoring information after a restart.\r\n BufFile contains virtual positions such as file index and offset, but these\r\n fields are stored in a memory structure, so the BufFile will forget the ordering\r\n of files and its initial/final position after restarts.\r\n\r\n D) It cannot remove a part of a virtual file. Even if a large file is separated\r\n into multiple physical files and all transactions in a physical file are already\r\n applied, BufFile cannot remove only one part.\r\n\r\n[1]: https://www.postgresql.org/message-id/f026292b-c9ee-472e-beaa-d32c5c3a2ced%40www.fastmail.com\r\n[2]: https://www.postgresql.org/message-id/CAD21AoAeG2+RsUYD9+mEwr8-rrt8R1bqpe56T2D=euO-Qs-GAg@mail.gmail.com\r\n\r\nAcknowledgement:\r\n\r\nAmit, Peter, Sawada-san\r\nThank you for discussing this with me off-list.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Mon, 12 Jun 2023 11:39:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
},
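Whichever serialization design is chosen, the delaying decision itself stays the same: compare the cached publisher-side commit time against min_apply_delay and sleep for the remainder. A trivial illustration (not patch code):

```python
def remaining_delay_ms(commit_time_ms, now_ms, min_apply_delay_ms):
    """Milliseconds the apply worker must still sleep before applying a
    delayed transaction, given its publisher-side commit timestamp."""
    elapsed = now_ms - commit_time_ms
    return max(0, min_apply_delay_ms - elapsed)
```

The serialization debate above is entirely about where commit_time and the changes live across restarts, not about this computation.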
{
"msg_contents": "Dear hackers,\r\n\r\n> At PGcon and other places we have discussed the time-delayed logical\r\n> replication,\r\n> but now we have understood that there are no easy ways. Followings are our\r\n> analysis.\r\n\r\nAt this point, I have not planned to develop the PoC anymore, unless better idea\r\nor infrastructure will come.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n",
"msg_date": "Tue, 13 Jun 2023 02:59:55 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Time delayed LR (WAS Re: logical replication restrictions)"
}
] |
[
{
"msg_contents": "I have been trying to get a reply or interest in either updating PostgreSQL to support the following,\nor for there to be a public, free for any use Extension put out there, that will support the following.\nCan someone able and interested please respond to me about the following project specification,\nwhich I am very keen to see happen:\n\n###################################################\n# High Precision Numeric and Elementary Functions Support. #\n###################################################\n\n-Integer (HPZ) Z, or Rational Decimal Q (HPQ) numbers support.\n\n-A library like GMP, written in C, is an appropriate basis to start from and to include, for all OS platforms involved.\n\n-Real numbers include the values of both Recurring Rational Numbers and recurring Irrational Numbers. Those two can be appropriately truncated, by a precision value, to obtain an approximating value. The latter phenomenon is a finite Rational value, possibly with integer and/or decimal parts at the same time. These also may be positive or negative, or zero, standard number line, values.\n\n-Forward and Inverse operations accuracy, withstanding truncation, can be maintained by storing and normalising the expression behind a value, (or by just including pointers to value(s)) and displaying that value. This system will uphold any precision, certainly within a very large range limit.\n\n-A defaulting number of significant figures, (precision), in a field in memory that exists as one copy per connection, that is updated, as a filter, for all relevant HPZ and HPQ numbers. For example, 20 significant figures, as a default, would be sensible to start with.\n\n-A function that varies the precision filter for every HPZ and HPQ number at once.\n\n-Value assignment to a typed variable by =.\n\n-Operators. 
Base 10 Arithmetic and comparisons support on Base 10 Integer and Rational Decimal numbers, and casting:\n\n+,-,*,/,%,^,=,!=,<>,>,<,>=,<=, ::\n\nThese include full finite division and integer only division, with no remainder, and remainder division. The defaulting ability of values within the two new types to automatically be cast up to HPZ or HPQ, where specified and appropriate in PostgreSQL code.\n\n-Reified support with broader syntax and operations within PostgreSQL, in all the obvious and less than obvious places. Tables and related phenomena, HPZ arrays, Indexing, the Window type, Record type, direct compatability with Aggregate and Window Functions, the Recursive keyword, are all parts of a larger subset that should re-interact with HPZ or HPQ.\n\n-Ease of installation support. Particularly for Windows and Linux. *.exe, *.msi or *.rpm, *.deb, *.bin installers.\nUpon a PostgreSQL standard installation. Installation and Activation instructions included. That is, presuming the HPZ and HPQ support is an Extension, and simply not added as native, default types into PostgreSQL Baseline.\n\n##############################################################\n\n-Mathematical and Operational functions support:\n\nprecision(BIGINT input)\n\ncast(HPZ as HPQ) returns HPQ;\ncast(HPQ as HPZ) returns HPZ;\ncast(TEXT as HPZ) returns HPZ;\ncast(TEXT as HPQ) returns HPQ;\ncast(HPQ as TEXT) returns TEXT;\ncast(HPZ as TEXT) returns TEXT;\ncast(HPZ as SMALLINT) returns SMALLINT;\ncast(SMALLINT as HPQ) returns HPZ;\ncast(HPZ as INTEGER) returns INTEGER;\ncast(INTEGER as HPZ) returns HPZ;\ncast(HPZ as BIGINT) returns BIGINT;\ncast(BIGINT as HPZ) returns HPZ;\ncast(HPQ as REAL) returns REAL;\ncast(REAL as HPQ) returns HPQ\ncast(DOUBLE PRECISION as HPQ) returns HPQ;\ncast(HPQ as DOUBLE PRECISION) returns DOUBLE PRECISION;\ncast(HPQ as DECIMAL) returns DECIMAL;\ncast(DECIMAL as HPQ) returns HPQ;\ncast(HPQ as NUMERIC) returns NUMERIC;\ncast(NUMERIC as HPQ) returns HPQ;\n\nsign(HPQ input) 
returns HPQ;\nabs(HPQ input) returns HPQ;\nceil(HPQ input) returns HPQ;\nfloor(HPQ input) returns HPQ;\nround(HPQ input) returns HPZ;\nrecip(HPQ input) returns HPQ;\npi() returns HPQ;\ne() returns HPQ;\npower(HPQ base, HPQ exponent) returns HPQ;\nsqrt(HPQ input) returns HPQ\nnroot(HPZ theroot, HPQ input) returns HPQ;\nlog10(HPQ input) returns HPQ;\nloge(HPQ input) returns HPQ;\nlog2(HPQ input) returns HPQ;\nfactorial(HPZ input) returns HPZ;\nnCr(HPZ objects, HPZ selectionSize) returns HPZ\nnPr(HPZ objects, HPZ selectionSize) returns HPZ\n\ndegrees(HPQ input) returns HPQ;\nradians(HPQ input) returns HPQ;\nsind(HPQ input) returns HPQ;\ncosd(HPQ input) returns HPQ;\ntand(HPQ input) returns HPQ;\nasind(HPQ input) returns HPQ;\nacosd(HPQ input) returns HPQ;\natand(HPQ input) returns HPQ;\nsinr(HPQ input) returns HPQ;\ncosr(HPQ input) returns HPQ;\ntanr(HPQ input) returns HPQ;\nasinr(HPQ input) returns HPQ;\nacosr(HPQ input) returns HPQ;\natanr(HPQ input) returns HPQ;\n\n##############################################################\n\n-Informative articles on all these things exist at:\nComparison Operators: https://en.wikipedia.org/wiki/Relational_operator\nFloor and Ceiling Functions: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions\nArithmetic Operations: https://en.wikipedia.org/wiki/Arithmetic\nInteger Division: https://en.wikipedia.org/wiki/Division_(mathematics)#Of_integers\nModulus Operation: https://en.wikipedia.org/wiki/Modulo_operation\nRounding (Commercial Rounding): https://en.wikipedia.org/wiki/Rounding\nFactorial Operation: https://en.wikipedia.org/wiki/Factorial\nDegrees: https://en.wikipedia.org/wiki/Degree_(angle)\nRadians: https://en.wikipedia.org/wiki/Radian\nElementary Functions: https://en.wikipedia.org/wiki/Elementary_function\n\nThe following chart could be used to help test trigonometry outputs, under\nFurther Condsideration of the Unit 
Circle:\nhttps://courses.lumenlearning.com/boundless-algebra/chapter/trigonometric-functions-and-the-unit-circle/\n##############################################################",
"msg_date": "Tue, 21 Sep 2021 01:29:48 +0000",
"msg_from": "A Z <poweruserm@live.com.au>",
"msg_from_op": true,
"msg_subject": "PostgreSQL High Precision Support Extension."
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 1:30 PM A Z <poweruserm@live.com.au> wrote:\n> -A library like GMP, written in C, is an appropriate basis to start from and to include, for all OS platforms involved.\n\nAre you aware of Daniele Varrazzo's extension\nhttps://github.com/dvarrazzo/pgmp/ ? (Never looked into it myself,\nbut this seems like the sort of thing you might be looking for?)\n\n\n",
"msg_date": "Tue, 21 Sep 2021 14:58:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL High Precision Support Extension."
},
{
"msg_contents": "\nOn 9/20/21 9:29 PM, A Z wrote:\n> I have been trying to get a reply or interest in either updating\n> PostgreSQL to support the following,\n> or for there to be a public, free for any use Extension put out there,\n> that will support the following.\n> Can someone able and interested please respond to me about the\n> following project specification,\n> which I am very keen to see happen:\n\n\nPlease stop posting the same thing over and over. It doesn't help you,\nin fact it's likely to put off anyone who might be interested in your\nproject. If you haven't got an answer by now you should conclude that\nnobody here is interested.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 21 Sep 2021 09:31:36 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL High Precision Support Extension."
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Sep 21, 2021 at 1:30 PM A Z <poweruserm@live.com.au> wrote:\n> > -A library like GMP, written in C, is an appropriate basis to start from and to include, for all OS platforms involved.\n>\n> Are you aware of Daniele Varrazzo's extension\n> https://github.com/dvarrazzo/pgmp/ ? (Never looked into it myself,\n> but this seems like the sort of thing you might be looking for?)\n\n[A Z replied off-list and mentioned areas where pgmp falls short, but\nI'll reply on-list to try to increase the chance of useful discussion\nhere...]\n\nIt seems to me that there are 3 or 4 different topics here:\n\n1. Can you find the functions GMP lacks in some other library? For\nexample, if I understand correctly, the library \"mpfr\" provides a\nbunch of transcendental functions for libgmp's types. Are there other\nlibraries? Can you share what you already know about the landscape of\nrelevant libraries and what's good or lacking from your perspective?\nOr are you proposing writing entirely new numeric code (in which case\nthat's getting pretty far away from the topics we're likely to discuss\nhere...).\n\n2. Supposing there are suitable libraries that build on top of GMP,\nwould it be reasonable to make a separate extension that extends pgmp?\n That is, users install the pgmp extension to get the basic types and\nfunctions, and then install a second, new extension \"pmpfr\" (or\nwhatever) to get access to more functions?\n\n3. You mentioned wanting portability and packages for platforms X, Y,\nZ. Packaging is something to worry about later, and not typically\nsomething that the author of an extension has to do personally. Once\nyou produce a good extension, it seems very likely that you could\nconvince the various package maintainers to pick it up (as they've\ndone for pgmp and many other extensions). 
The only question to worry\nabout initially is how portable the libraries you depend on are. For\nwhat it's worth, I'd personally start by setting up a CI system for a\nbunch of relevant OSes with all the relevant libraries installed, for\nexploration; I could provide some pointers on how to do that if you\nthink that would be interesting.\n\n4. You talked about whether such types could be in PostgreSQL \"core\".\nIn my humble opinion (1) this is a textbook example of something that\nbelongs in an extension, and (2) things built on GNU/GPL libraries are\ncomplicated and probably couldn't ever be included in core in our\nBSD-licensed project anyway. (In contrast, the SQL:2016 DECFLOAT(n)\ntypes really should be included in the core system, because they're in\nthe SQL standard and we are a SQL implementation, and AFAIK the only\nreal thing stopping us from doing that is deciding which library to\nuse to do it, which is complicated.)\n\n\n",
"msg_date": "Wed, 22 Sep 2021 10:20:54 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL High Precision Support Extension."
},
{
"msg_contents": "On 9/21/21 6:20 PM, Thomas Munro wrote:\n> On Tue, Sep 21, 2021 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Tue, Sep 21, 2021 at 1:30 PM A Z <poweruserm@live.com.au> wrote:\n>> > -A library like GMP, written in C, is an appropriate basis to start from and to include, for all OS platforms involved.\n>>\n>> Are you aware of Daniele Varrazzo's extension\n>> https://github.com/dvarrazzo/pgmp/ ? (Never looked into it myself,\n>> but this seems like the sort of thing you might be looking for?)\n> \n> [A Z replied off-list and mentioned areas where pgmp falls short, but\n> I'll reply on-list to try to increase the chance of useful discussion\n> here...]\n\nThis seems to become a common pattern to open source communities. Not \njust PostgreSQL, I have seen it elsewhere. People make vague or just \nspecification level \"proposals\" and as soon as anyone replies, try to \ndrag it into private and off-list/off-forum conversations.\n\nMost of the time this is because they aren't really proposing anything, \nbut are just looking for someone else to implement what they need for \ntheir own, paying customer. They are not willing or able to contribute \nanything but requirements.\n\nAs the original author of the NUMERIC data type I am definitely \ninterested in this sort of stuff. And I would love contributing in the \ndesign of the on-disk and in-memory structures of these new data types \ncreated as an EXTENSION.\n\nHowever, so far I only see a request for someone else to create this \nextension. What exactly is \"A Z\" (poweruserm) going to contribute to \nthis effort?\n\n\nRegards, Jan\n\n-- \nJan Wieck\n\n\n",
"msg_date": "Tue, 21 Sep 2021 18:47:02 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL High Precision Support Extension."
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a copy of the first draft for the PG14 press release.\n\nThis brings together highlights of many of the features in the upcoming\nPostgreSQL 14 release while providing context on their significance.\nWith the plethora of new features coming in PostgreSQL 14, it is\nchallenging to highlight them all, but the idea is to give a glimpse of\nwhat to expect in the new release.\n\nFeedback on the release is welcome. However, please provide your\nfeedback on the release no later than **Thu, Sep 23, 2021 @ 18:00 UTC**.\n\nThanks,\n\nJonathan",
"msg_date": "Mon, 20 Sep 2021 22:19:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 14 press release draft"
},
{
"msg_contents": "On Mon, Sep 20, 2021 at 10:19:32PM -0400, Jonathan S. Katz wrote:\n\n> PostgreSQL 14 provides a significant throughput boost on workloads that use many\n> connections, with some benchmarks showing a 2x speedup. This release continues\n> on the recent improvements the overall management of B-tree indexes by reducing\n> index bloat on tables with [frequently updated indexes](https://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION).\n\nimprovements *in* ?\n\n> [Foreign data wrappers](https://www.postgresql.org/docs/14/sql-createforeigndatawrapper.html),\n> used to work with federated workloads across PostgreSQL and other databases, can\n\nIt'd be clearer to write \"used for working\".\n\"Used to work\" sounds like it no longer works.\n\n> PostgreSQL 14 extends its performance gains to its [vacuuming](https://www.postgresql.org/docs/14/routine-vacuuming.html)\n\nto *the* vacuuming system ?\n\n> indexes and now allows autovacuum to analyze partitioned tables and propagate\n> information to its parents.\n\nThis was reverted last month.\n\n> The choice of compression for PostgreSQL's [TOAST](https://www.postgresql.org/docs/14/storage-toast.html)\n> system, which is used to store larger data like blocks of text or geometries,\n> can [now be configured](https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-DEFAULT-TOAST-COMPRESSION).\n\nRemove \"the choice of\" ?\n\n> The [extended systems](https://www.postgresql.org/docs/14/planner-stats.html#PLANNER-STATS-EXTENDED)\n\ns/systems/statistics/\n\n> includes many improvements in PostgreSQL 14, including the ability to apply\n> extend statistics on expressions. Additionally,\n\ns/extend/extended/\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 20 Sep 2021 23:09:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 14 press release draft"
},
{
"msg_contents": "On 9/21/21 12:09 AM, Justin Pryzby wrote:\n> On Mon, Sep 20, 2021 at 10:19:32PM -0400, Jonathan S. Katz wrote:\n> \n>> PostgreSQL 14 provides a significant throughput boost on workloads that use many\n>> connections, with some benchmarks showing a 2x speedup. This release continues\n>> on the recent improvements the overall management of B-tree indexes by reducing\n>> index bloat on tables with [frequently updated indexes](https://www.postgresql.org/docs/14/btree-implementation.html#BTREE-DELETION).\n> \n> improvements *in* ?\n\nModified, albeit a bit differently.\n\n>> [Foreign data wrappers](https://www.postgresql.org/docs/14/sql-createforeigndatawrapper.html),\n>> used to work with federated workloads across PostgreSQL and other databases, can\n> \n> It'd be clearer to write \"used for working\".\n> \"Used to work\" sounds like it no longer works.\n\nModified, albeit a bit differently.\n\n>> PostgreSQL 14 extends its performance gains to its [vacuuming](https://www.postgresql.org/docs/14/routine-vacuuming.html)\n> \n> to *the* vacuuming system ?\n\nI think this could go either way, but I changed it to the above suggestion.\n\n>> indexes and now allows autovacuum to analyze partitioned tables and propagate\n>> information to its parents.\n> \n> This was reverted last month.\n\nI did not see that in the copy of the release notes that I was working\noff of; I have gone ahead and removed it from the press release. 
Thanks!\n\n> \n>> The choice of compression for PostgreSQL's [TOAST](https://www.postgresql.org/docs/14/storage-toast.html)\n>> system, which is used to store larger data like blocks of text or geometries,\n>> can [now be configured](https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-DEFAULT-TOAST-COMPRESSION).\n> \n> Remove \"the choice of\" ?\n\nModified.\n\n>> The [extended systems](https://www.postgresql.org/docs/14/planner-stats.html#PLANNER-STATS-EXTENDED)\n> \n> s/systems/statistics/\n> \n>> includes many improvements in PostgreSQL 14, including the ability to apply\n>> extend statistics on expressions. Additionally,\n> \n> s/extend/extended/\n\nModified, albeit a bit differently.\n\nUpdated draft attached. As a reminder, please provide any feedback on\nthe press release no later than **Thu, Sep 23, 2021 @ 18:00 UTC**.\n\nThanks!\n\nJonathan",
"msg_date": "Wed, 22 Sep 2021 10:17:07 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 14 press release draft"
},
{
"msg_contents": "On 9/22/21 10:17 AM, Jonathan S. Katz wrote:\n\n> Updated draft attached. As a reminder, please provide any feedback on\n> the press release no later than **Thu, Sep 23, 2021 @ 18:00 UTC**.\n\nI'm sure it helps if I actually attach the draft.\n\nJonathan",
"msg_date": "Wed, 22 Sep 2021 10:18:51 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 14 press release draft"
},
{
"msg_contents": "Some super quick nitpicks; feel free to ignore/apply/laugh off.\n\n...\nadministrators to deploy their data-backed applications. PostgreSQL\ncontinues to\nadd innovations on complex data types, including more conveniences for\naccessing\nJSON and support for noncontiguous ranges of data. This latest release also\nadds\nto PostgreSQL's trend on improvements for high performance and distributed\ndata workloads, with advances in support for connection concurrency,\nhigh-write\nworkloads, query parallelism and logical replication.\n\n>> add innovations on complex data types -> add innovations to? \"on\" sounds\nodd\n\n>> comma after \"query parallelism\" please\n\nnow works. This aligns PostgreSQL with commonly recognized syntax for\nretrieving information from JSON data. The subscripting framework added to\nPostgreSQL 14 can be generally extended to other nested data structures,\nand is\nalso applied to the `hstore` data type in this release.\n\n>> with commonly recognized syntax -> with the commonly recognized syntax\n\n>> hyperlink hstore?\n\n\n[Range types](https://www.postgresql.org/docs/14/rangetypes.html), also\nfirst\nreleased in PostgreSQL 9.2, now have support for noncontiguous ranges\nthrough\nthe introduction of the \"[multirange](\nhttps://www.postgresql.org/docs/14/rangetypes.html#RANGETYPES-BUILTIN)\".\nA multirange is an ordered list of ranges that are nonoverlapping, which\nallows\nfor developers to write simpler queries for dealing with complex sequences\nof\n\n>> introduction of the multirange -> introduction of the multirange type\n\n>> which allows for developers to write -> which lets developers write\n\n\non the recent improvements to the overall management of B-tree indexes by\n\n>> to the overall management -> to the management\n\noperations. 
As this is a client-side feature, you can use pipeline mode\nwith any\nmodern PostgreSQL database so long as you use the version 14 client.\n\n>> more complicated than \"version 14 client\", more like - if the\napplication has explicit\n>> support for it and was compiled via libpq against PG 14.\n>> Just don't want to overpromise here\n\n[Foreign data wrappers](\nhttps://www.postgresql.org/docs/14/sql-createforeigndatawrapper.html),\nwhich are used for working with federated workloads across PostgreSQL and\nother\n\n>> which are used for -> used for\n\nIn addition to supporting query parallelism, `postgres_fdw` can now also\nbulk\ninsert data on foreign tables and import table partitions with the\n[`IMPORT FOREIGN SCHEMA`](\nhttps://www.postgresql.org/docs/14/sql-importforeignschema.html)\ndirective.\n\n>> can now also bulk insert -> can now bulk insert\n\n>> on foreign tables and import -> on foreign tables, and can import\n\nPostgreSQL 14 extends its performance gains to the [vacuuming](\nhttps://www.postgresql.org/docs/14/routine-vacuuming.html)\nsystem, including optimizations for reducing overhead from B-Trees.\n\n>> B-Tree or B-tree - pick one (latter used earlier in this doc)\n\nlets you uniquely track a query through several PostgreSQL systems,\nincluding\n[`pg_stat_activity`](\nhttps://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW\n),\n[`EXPLAIN VERBOSE`](https://www.postgresql.org/docs/14/sql-explain.html),\nand\nthrough several logging functions.\n\n>> \"several PostgreSQL systems\" sounds weird.\n\n>> \"several logging functions\" - not sure what this means\n\n\nparallel queries when using the `RETURN QUERY` directive, and enabling\n\n>> directive -> command\n\nnow benefit from incremental sorts, a feature that was introduced in\n[PostgreSQL 13](\nhttps://www.postgresql.org/about/news/postgresql-13-released-2077/).\n\n>> that was introduced in -> introduced in\n\nfunction. 
This release also adds the SQL conforming\n[`SEARCH`](\nhttps://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-SEARCH)\nand [`CYCLE`](\nhttps://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-CYCLE)\ndirectives to help with ordering and cycle detection for recursive\n[common table expressions](\nhttps://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-RECURSIVE\n).\n\n>> directives -> clauses\n\n\nPostgreSQL 14 makes it convenient to assign read-only and write-only\nprivileges\n\n>> convenient -> easy\n\ncompanies and organizations. Built on over 30 years of engineering,\nstarting at\n\n>> companies -> companies,\n\n>> we claims 25 years at the top, 30 here. That's 5 non-open-source years?\n:)\n\nSome super quick nitpicks; feel free to ignore/apply/laugh off....administrators to deploy their data-backed applications. PostgreSQL continues toadd innovations on complex data types, including more conveniences for accessingJSON and support for noncontiguous ranges of data. This latest release also addsto PostgreSQL's trend on improvements for high performance and distributeddata workloads, with advances in support for connection concurrency, high-writeworkloads, query parallelism and logical replication.>> add innovations on complex data types -> add innovations to? \"on\" sounds odd>> comma after \"query parallelism\" pleasenow works. This aligns PostgreSQL with commonly recognized syntax forretrieving information from JSON data. 
The subscripting framework added toPostgreSQL 14 can be generally extended to other nested data structures, and isalso applied to the `hstore` data type in this release.>> with commonly recognized syntax -> with the commonly recognized syntax>> hyperlink hstore?[Range types](https://www.postgresql.org/docs/14/rangetypes.html), also firstreleased in PostgreSQL 9.2, now have support for noncontiguous ranges throughthe introduction of the \"[multirange](https://www.postgresql.org/docs/14/rangetypes.html#RANGETYPES-BUILTIN)\".A multirange is an ordered list of ranges that are nonoverlapping, which allowsfor developers to write simpler queries for dealing with complex sequences of>> introduction of the multirange -> introduction of the multirange type>> which allows for developers to write -> which lets developers writeon the recent improvements to the overall management of B-tree indexes by>> to the overall management -> to the managementoperations. As this is a client-side feature, you can use pipeline mode with anymodern PostgreSQL database so long as you use the version 14 client.>> more complicated than \"version 14 client\", more like - if the application has explicit >> support for it and was compiled via libpq against PG 14.>> Just don't want to overpromise here[Foreign data wrappers](https://www.postgresql.org/docs/14/sql-createforeigndatawrapper.html),which are used for working with federated workloads across PostgreSQL and other>> which are used for -> used forIn addition to supporting query parallelism, `postgres_fdw` can now also bulkinsert data on foreign tables and import table partitions with the[`IMPORT FOREIGN SCHEMA`](https://www.postgresql.org/docs/14/sql-importforeignschema.html)directive.>> can now also bulk insert -> can now bulk insert>> on foreign tables and import -> on foreign tables, and can importPostgreSQL 14 extends its performance gains to the [vacuuming](https://www.postgresql.org/docs/14/routine-vacuuming.html)system, including 
optimizations for reducing overhead from B-Trees.>> B-Tree or B-tree - pick one (latter used earlier in this doc)lets you uniquely track a query through several PostgreSQL systems, including[`pg_stat_activity`](https://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-PG-STAT-ACTIVITY-VIEW),[`EXPLAIN VERBOSE`](https://www.postgresql.org/docs/14/sql-explain.html), andthrough several logging functions.>> \"several PostgreSQL systems\" sounds weird. >> \"several logging functions\" - not sure what this meansparallel queries when using the `RETURN QUERY` directive, and enabling>> directive -> commandnow benefit from incremental sorts, a feature that was introduced in[PostgreSQL 13](https://www.postgresql.org/about/news/postgresql-13-released-2077/).>> that was introduced in -> introduced infunction. This release also adds the SQL conforming[`SEARCH`](https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-SEARCH)and [`CYCLE`](https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-CYCLE)directives to help with ordering and cycle detection for recursive[common table expressions](https://www.postgresql.org/docs/14/queries-with.html#QUERIES-WITH-RECURSIVE).>> directives -> clausesPostgreSQL 14 makes it convenient to assign read-only and write-only privileges>> convenient -> easycompanies and organizations. Built on over 30 years of engineering, starting at>> companies -> companies,>> we claims 25 years at the top, 30 here. That's 5 non-open-source years? :)",
"msg_date": "Wed, 22 Sep 2021 12:57:57 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 14 press release draft"
},
{
"msg_contents": "On 9/22/21 12:57 PM, Greg Sabino Mullane wrote:\n> Some super quick nitpicks; feel free to ignore/apply/laugh off.\n\nThanks. I incorporated many of the suggestions.\n\nHere is the press release at is stands. As we are past the deadline for\nfeedback, we are going to start the translation effort for the press kit.\n\nOf course, critical changes will still be applied but this is the press\nrelease as it stands.\n\nThank you for your feedback!\n\nJonathan",
"msg_date": "Thu, 23 Sep 2021 18:46:32 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL 14 press release draft"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently when determining where CoerceToDomainValue can be read,\nit evaluates every step in a loop.\nBut, I think that the expression is immutable and should be solved only\nonce.\n\nOtherwise the logic is wrong since by the rules of C, even though the\nvariable is\nbeing initialized in the declaration, it still receives initialization at\neach repetition.\nWhat causes palloc running multiple times.\n\nIn other words:\nDatum *domainval = NULL;\n\nis the same:\nDatum *domainval;\ndomainval = NULL;\n\nOnce there, reduce the scope for save_innermost_domainval and\nsave_innermost_domainnull.\n\nThoughts?\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 21 Sep 2021 15:09:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n> Currently when determining where CoerceToDomainValue can be read,\n> it evaluates every step in a loop.\n> But, I think that the expression is immutable and should be solved only\n> once.\n\nWhat is immutable here?\n\n\n> Otherwise the logic is wrong since by the rules of C, even though the\n> variable is\n> being initialized in the declaration, it still receives initialization at\n> each repetition.\n> What causes palloc running multiple times.\n> \n> In other words:\n> Datum *domainval = NULL;\n> \n> is the same:\n> Datum *domainval;\n> domainval = NULL;\n\nObviously?\n\n\n> Thoughts?\n\nI don't see what this is supposed to achieve. The allocation of\ndomainval/domainnull happens on every loop iteration with/without your patch.\n\nAnd it has to, the allocation intentionally is separate for each\nconstraint. As the comment even explicitly says:\n\t\t\t\t\t/*\n\t\t\t\t\t * Since value might be read multiple times, force to R/O\n\t\t\t\t\t * - but only if it could be an expanded datum.\n\t\t\t\t\t */\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Sep 2021 13:19:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n>> Currently when determining where CoerceToDomainValue can be read,\n>> it evaluates every step in a loop.\n>> But, I think that the expression is immutable and should be solved only\n>> once.\n\n> What is immutable here?\n\nI think Ranier has a point here. The clear intent of this bit:\n\n /*\n * If first time through, determine where CoerceToDomainValue\n * nodes should read from.\n */\n if (domainval == NULL)\n {\n\nis that we only need to emit the EEOP_MAKE_READONLY once when there are\nmultiple CHECK constraints. But because domainval has the wrong lifespan,\nthat test is constant-true, and we'll do it over each time to little\npurpose.\n\n> And it has to, the allocation intentionally is separate for each\n> constraint. As the comment even explicitly says:\n> /*\n> * Since value might be read multiple times, force to R/O\n> * - but only if it could be an expanded datum.\n> */\n\nNo, what that's on about is that each constraint might contain multiple\nVALUE symbols. But once we've R/O-ified the datum, we can keep using\nit across VALUE symbols in different CHECK expressions, not just one.\n\n(AFAICS anyway)\n\nI'm unexcited by the proposed move of the save_innermost_domainval/null\nvariables, though. It adds no correctness and it forces an additional\nlevel of indentation of a good deal of code, as the patch fails to show.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 21 Sep 2021 18:21:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-21 18:21:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n> >> Currently when determining where CoerceToDomainValue can be read,\n> >> it evaluates every step in a loop.\n> >> But, I think that the expression is immutable and should be solved only\n> >> once.\n> \n> > What is immutable here?\n> \n> I think Ranier has a point here. The clear intent of this bit:\n> \n> /*\n> * If first time through, determine where CoerceToDomainValue\n> * nodes should read from.\n> */\n> if (domainval == NULL)\n> {\n> \n> is that we only need to emit the EEOP_MAKE_READONLY once when there are\n> multiple CHECK constraints. But because domainval has the wrong lifespan,\n> that test is constant-true, and we'll do it over each time to little\n> purpose.\n\nOh, I clearly re-skimmed the code too quickly. Sorry for that!\n\n\n> (AFAICS anyway)\n> \n> I'm unexcited by the proposed move of the save_innermost_domainval/null\n> variables, though. It adds no correctness and it forces an additional\n> level of indentation of a good deal of code, as the patch fails to show.\n\nYea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Sep 2021 16:00:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Em ter., 21 de set. de 2021 às 19:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n> >> Currently when determining where CoerceToDomainValue can be read,\n> >> it evaluates every step in a loop.\n> >> But, I think that the expression is immutable and should be solved only\n> >> once.\n>\n> > What is immutable here?\n>\n> I think Ranier has a point here. The clear intent of this bit:\n>\n> /*\n> * If first time through, determine where\n> CoerceToDomainValue\n> * nodes should read from.\n> */\n> if (domainval == NULL)\n> {\n>\n> is that we only need to emit the EEOP_MAKE_READONLY once when there are\n> multiple CHECK constraints. But because domainval has the wrong lifespan,\n> that test is constant-true, and we'll do it over each time to little\n> purpose.\n>\nExactly, thanks for the clear explanation.\n\n\n> > And it has to, the allocation intentionally is separate for each\n> > constraint. As the comment even explicitly says:\n> > /*\n> > * Since value might be read multiple times, force\n> to R/O\n> > * - but only if it could be an expanded datum.\n> > */\n>\n> No, what that's on about is that each constraint might contain multiple\n> VALUE symbols. But once we've R/O-ified the datum, we can keep using\n> it across VALUE symbols in different CHECK expressions, not just one.\n>\n> (AFAICS anyway)\n>\n> I'm unexcited by the proposed move of the save_innermost_domainval/null\n> variables, though. It adds no correctness and it forces an additional\n> level of indentation of a good deal of code, as the patch fails to show.\n>\nOk, but I think that still has a value in reducing the scope.\nsave_innermost_domainval and save_innermost_domainnull,\nonly are needed with DOM_CONSTRAINT_CHECK expressions,\nand both are declared even when they will not be used.\n\nAnyway, the v1 patch fixes only the expression eval.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 21 Sep 2021 20:12:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Em ter., 21 de set. de 2021 às 20:00, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2021-09-21 18:21:24 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n> > >> Currently when determining where CoerceToDomainValue can be read,\n> > >> it evaluates every step in a loop.\n> > >> But, I think that the expression is immutable and should be solved\n> only\n> > >> once.\n> >\n> > > What is immutable here?\n> >\n> > I think Ranier has a point here. The clear intent of this bit:\n> >\n> > /*\n> > * If first time through, determine where\n> CoerceToDomainValue\n> > * nodes should read from.\n> > */\n> > if (domainval == NULL)\n> > {\n> >\n> > is that we only need to emit the EEOP_MAKE_READONLY once when there are\n> > multiple CHECK constraints. But because domainval has the wrong\n> lifespan,\n> > that test is constant-true, and we'll do it over each time to little\n> > purpose.\n>\n> Oh, I clearly re-skimmed the code too quickly. Sorry for that!\n>\nNo problem, thanks for taking a look.\n\nregards,\nRanier Vilela\n\nEm ter., 21 de set. de 2021 às 20:00, Andres Freund <andres@anarazel.de> escreveu:Hi,\n\nOn 2021-09-21 18:21:24 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n> >> Currently when determining where CoerceToDomainValue can be read,\n> >> it evaluates every step in a loop.\n> >> But, I think that the expression is immutable and should be solved only\n> >> once.\n> \n> > What is immutable here?\n> \n> I think Ranier has a point here. The clear intent of this bit:\n> \n> /*\n> * If first time through, determine where CoerceToDomainValue\n> * nodes should read from.\n> */\n> if (domainval == NULL)\n> {\n> \n> is that we only need to emit the EEOP_MAKE_READONLY once when there are\n> multiple CHECK constraints. 
But because domainval has the wrong lifespan,\n> that test is constant-true, and we'll do it over each time to little\n> purpose.\n\nOh, I clearly re-skimmed the code too quickly. Sorry for that!No problem, thanks for taking a look.regards,Ranier Vilela",
"msg_date": "Tue, 21 Sep 2021 20:13:13 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Em ter., 21 de set. de 2021 às 20:12, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em ter., 21 de set. de 2021 às 19:21, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n>\n>> Andres Freund <andres@anarazel.de> writes:\n>> > On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n>> >> Currently when determining where CoerceToDomainValue can be read,\n>> >> it evaluates every step in a loop.\n>> >> But, I think that the expression is immutable and should be solved only\n>> >> once.\n>>\n>> > What is immutable here?\n>>\n>> I think Ranier has a point here. The clear intent of this bit:\n>>\n>> /*\n>> * If first time through, determine where\n>> CoerceToDomainValue\n>> * nodes should read from.\n>> */\n>> if (domainval == NULL)\n>> {\n>>\n>> is that we only need to emit the EEOP_MAKE_READONLY once when there are\n>> multiple CHECK constraints. But because domainval has the wrong lifespan,\n>> that test is constant-true, and we'll do it over each time to little\n>> purpose.\n>>\n> Exactly, thanks for the clear explanation.\n>\n>\n>> > And it has to, the allocation intentionally is separate for each\n>> > constraint. As the comment even explicitly says:\n>> > /*\n>> > * Since value might be read multiple times, force\n>> to R/O\n>> > * - but only if it could be an expanded datum.\n>> > */\n>>\n>> No, what that's on about is that each constraint might contain multiple\n>> VALUE symbols. But once we've R/O-ified the datum, we can keep using\n>> it across VALUE symbols in different CHECK expressions, not just one.\n>>\n>> (AFAICS anyway)\n>>\n>> I'm unexcited by the proposed move of the save_innermost_domainval/null\n>> variables, though. 
It adds no correctness and it forces an additional\n>> level of indentation of a good deal of code, as the patch fails to show.\n>>\n> Ok, but I think that still has a value in reducing the scope.\n> save_innermost_domainval and save_innermost_domainnull,\n> only are needed with DOM_CONSTRAINT_CHECK expressions,\n> and both are declared even when they will not be used.\n>\n> Anyway, the v1 patch fixes only the expression eval.\n>\nCreated a new entry at next CF.\n\nhttps://commitfest.postgresql.org/35/3327/\n\nregards,\nRanier Vilela\n\nEm ter., 21 de set. de 2021 às 20:12, Ranier Vilela <ranier.vf@gmail.com> escreveu:Em ter., 21 de set. de 2021 às 19:21, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Andres Freund <andres@anarazel.de> writes:\n> On 2021-09-21 15:09:11 -0300, Ranier Vilela wrote:\n>> Currently when determining where CoerceToDomainValue can be read,\n>> it evaluates every step in a loop.\n>> But, I think that the expression is immutable and should be solved only\n>> once.\n\n> What is immutable here?\n\nI think Ranier has a point here. The clear intent of this bit:\n\n /*\n * If first time through, determine where CoerceToDomainValue\n * nodes should read from.\n */\n if (domainval == NULL)\n {\n\nis that we only need to emit the EEOP_MAKE_READONLY once when there are\nmultiple CHECK constraints. But because domainval has the wrong lifespan,\nthat test is constant-true, and we'll do it over each time to little\npurpose.Exactly, thanks for the clear explanation. \n\n> And it has to, the allocation intentionally is separate for each\n> constraint. As the comment even explicitly says:\n> /*\n> * Since value might be read multiple times, force to R/O\n> * - but only if it could be an expanded datum.\n> */\n\nNo, what that's on about is that each constraint might contain multiple\nVALUE symbols. 
But once we've R/O-ified the datum, we can keep using\nit across VALUE symbols in different CHECK expressions, not just one.\n\n(AFAICS anyway)\n\nI'm unexcited by the proposed move of the save_innermost_domainval/null\nvariables, though. It adds no correctness and it forces an additional\nlevel of indentation of a good deal of code, as the patch fails to show.Ok, but I think that still has a value in reducing the scope.save_innermost_domainval and save_innermost_domainnull, only are needed with DOM_CONSTRAINT_CHECK expressions,and both are declared even when they will not be used.Anyway, the v1 patch fixes only the expression eval.Created a new entry at next CF. https://commitfest.postgresql.org/35/3327/regards,Ranier Vilela",
"msg_date": "Thu, 23 Sep 2021 08:17:31 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 1:12 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Anyway, the v1 patch fixes only the expression eval.\n\nThe patch looks good to me.\n\nIt seems that initially the code looked similar to your patch. See the\ncommit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755. Then the variables\nwere moved to foreach scope by the commit\n1ec7679f1b67e84be688a311dce234eeaa1d5de8.\n\nI'll mark the patch as Ready for Commiter.\n\n-- \nArtur\n\n\n",
"msg_date": "Fri, 1 Oct 2021 11:55:37 +0200",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Em sex., 1 de out. de 2021 às 06:55, Artur Zakirov <zaartur@gmail.com>\nescreveu:\n\n> On Wed, Sep 22, 2021 at 1:12 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Anyway, the v1 patch fixes only the expression eval.\n>\n> The patch looks good to me.\n>\n> It seems that initially the code looked similar to your patch. See the\n> commit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755. Then the variables\n> were moved to foreach scope by the commit\n> 1ec7679f1b67e84be688a311dce234eeaa1d5de8.\n>\nThanks for the search.\nIt seems that 1ec7679f1b67e84be688a311dce234eeaa1d5de8 caused the problem.\n\n\n> I'll mark the patch as Ready for Commiter.\n>\nThank you.\n\nregards,\nRanier Vilela\n\nEm sex., 1 de out. de 2021 às 06:55, Artur Zakirov <zaartur@gmail.com> escreveu:On Wed, Sep 22, 2021 at 1:12 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Anyway, the v1 patch fixes only the expression eval.\n\nThe patch looks good to me.\n\nIt seems that initially the code looked similar to your patch. See the\ncommit b8d7f053c5c2bf2a7e8734fe3327f6a8bc711755. Then the variables\nwere moved to foreach scope by the commit\n1ec7679f1b67e84be688a311dce234eeaa1d5de8.Thanks for the search.It seems that \n1ec7679f1b67e84be688a311dce234eeaa1d5de8\n\ncaused the problem.\n\nI'll mark the patch as Ready for Commiter.Thank you.regards,Ranier Vilela",
"msg_date": "Fri, 1 Oct 2021 07:04:03 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> It seems that 1ec7679f1b67e84be688a311dce234eeaa1d5de8 caused the problem.\n\nIndeed. Fix pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 02 Nov 2021 13:43:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "On 2021-11-02 13:43:46 -0400, Tom Lane wrote:\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > It seems that 1ec7679f1b67e84be688a311dce234eeaa1d5de8 caused the problem.\n> \n> Indeed. Fix pushed.\n\nThanks to both of you!\n\n\n",
"msg_date": "Tue, 2 Nov 2021 11:33:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
},
{
"msg_contents": "Em ter., 2 de nov. de 2021 às 15:33, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> On 2021-11-02 13:43:46 -0400, Tom Lane wrote:\n> > Ranier Vilela <ranier.vf@gmail.com> writes:\n> > > It seems that 1ec7679f1b67e84be688a311dce234eeaa1d5de8 caused the\n> problem.\n> >\n> > Indeed. Fix pushed.\n>\n> Thanks to both of you!\n>\nYou are welcome, Andres.\n\nregards,\nRanier Vilela\n\nEm ter., 2 de nov. de 2021 às 15:33, Andres Freund <andres@anarazel.de> escreveu:On 2021-11-02 13:43:46 -0400, Tom Lane wrote:\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > It seems that 1ec7679f1b67e84be688a311dce234eeaa1d5de8 caused the problem.\n> \n> Indeed. Fix pushed.\n\nThanks to both of you!You are welcome, Andres.regards,Ranier Vilela",
"msg_date": "Wed, 3 Nov 2021 08:44:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Eval expression R/O once time (src/backend/executor/execExpr.c)"
}
] |
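The C scoping pitfall at the heart of the thread above can be reduced to a small standalone sketch (hypothetical names, not the actual execExpr.c code): a pointer declared with an initializer *inside* the loop body is re-initialized on every iteration, so a "first time through" guard such as `if (domainval == NULL)` fires on every pass; hoisting the declaration out of the loop gives the guard the whole-loop lifespan the original code intended.

```c
#include <assert.h>
#include <stddef.h>

/* Guard declared inside the loop: the "= NULL" initializer runs on
 * every iteration, so the "first time" branch is taken every time --
 * this mirrors the bug Tom Lane describes, where EEOP_MAKE_READONLY
 * was emitted once per CHECK constraint instead of once overall. */
static int
count_inits_inner_scope(int iterations)
{
    int emitted = 0;

    for (int i = 0; i < iterations; i++)
    {
        int *guard = NULL;      /* reset each iteration */

        if (guard == NULL)
            emitted++;          /* fires on every pass */
    }
    return emitted;
}

/* Guard hoisted out of the loop: it survives across iterations, so
 * the branch is taken exactly once, matching the intended
 * "if first time through" semantics. */
static int
count_inits_outer_scope(int iterations)
{
    int emitted = 0;
    int dummy = 0;
    int *guard = NULL;          /* lifespan spans the whole loop */

    for (int i = 0; i < iterations; i++)
    {
        if (guard == NULL)
        {
            emitted++;          /* taken only on the first iteration */
            guard = &dummy;
        }
    }
    return emitted;
}
```

As Ranier notes, `int *p = NULL;` inside a block is exactly equivalent to a declaration followed by an assignment executed each time the block is entered, which is why moving the declaration (as commit 1ec7679f did in the other direction) silently changed the behavior.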
[
{
"msg_contents": "Hi,\n\nFor the AIO stuff I needed to build postgres for windows. And I was a bit\nhorrified by the long compile times. At first I was ready to blame the MS\ncompiler for being slow, until I noticed that using mingw gcc from linux to\ncross compile to windows is also a *lot* slower than building for linux.\n\nI found some blog-post-documented-only compiler flags [1], most importantly\n/d1reportTime. Which shows that the include processing of postgres.h takes\n0.6s [2]\n\nBasically all the time in a debug windows build is spent parsing windows.h and\nrelated headers. Argh.\n\nThe amount of stuff we include in win32_port.h and declare is pretty absurd\nimo. There's really no need to expose the whole backend to all of it. Most of\nit should just be needed in a few port/ files and a few select users.\n\nBut that's too much work for my taste. As it turns out there's a partial\nsolution to windows.h being just so damn big, the delightfully named\nWIN32_LEAN_AND_MEAN.\n\nThis reduces the non-incremental buildtime in my 8 core windows VM from 187s to\n140s. Cross compiling from linux it's\nmaster:\nreal\t0m53.807s\nuser\t22m16.930s\nsys\t2m50.264s\nWIN32_LEAN_AND_MEAN\nreal\t0m32.956s\nuser\t12m17.773s\nsys\t1m52.313s\n\nStill far from !windows compile times, but still not a bad improvement.\n\nMost of the compile time after this is still spent doing parsing /\npreprocessing. 
I sidetracked myself into looking at precompiled headers, but\nit's not trivial to do that right unfortunately.\n\n\nI think it'd be good if win32_port.h were slimmed down, and more of its\ncontents were moved into fake \"port/win32/$name-of-unix-header\" style headers\nor such.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://aras-p.info/blog/2019/01/21/Another-cool-MSVC-flag-d1reportTime/\n\n[2]\n\npostgres.c\nInclude Headers:\n Count: 483\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\postgres.h: 0.561795s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\c.h: 0.556991s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\postgres_ext.h: 0.000488s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\pg_config_ext.h: 0.000151s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\pg_config.h: 0.000551s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\pg_config_manual.h: 0.000286s\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\pg_config_os.h: 0.014283s\n C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\VC\\Tools\\MSVC\\14.29.30133\\include\\crtdefs.h: 0.009727s\n...\n c:\\Users\\anfreund\\src\\postgres\\src\\include\\port\\win32_port.h: 0.487469s\n C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.20348.0\\um\\winsock2.h: 0.449373s\n...\n C:\\Program Files (x86)\\Windows Kits\\10\\include\\10.0.20348.0\\um\\windows.h: 0.439666s",
"msg_date": "Tue, 21 Sep 2021 12:30:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "windows build slow due to windows.h includes"
},
{
"msg_contents": "\nOn 9/21/21 3:30 PM, Andres Freund wrote:\n> Hi,\n>\n> For the AIO stuff I needed to build postgres for windows. And I was a bit\n> horrified by the long compile times. At first I was ready to blame the MS\n> compiler for being slow, until I noticed that using mingw gcc from linux to\n> cross compile to windows is also a *lot* slower than building for linux.\n>\n> I found some blog-post-documented-only compiler flags [1], most importantly\n> /d1reportTime. Which shows that the include processing of postgres.h takes\n> 0.6s [2]\n>\n> Basically all the time in a debug windows build is spent parsing windows.h and\n> related headers. Argh.\n>\n> The amount of stuff we include in win32_port.h and declare is pretty absurd\n> imo. There's really no need to expose the whole backend to all of it. Most of\n> it should just be needed in a few port/ files and a few select users.\n>\n> But that's too much work for my taste. As it turns out there's a partial\n> solution to windows.h being just so damn big, the delightfully named\n> WIN32_LEAN_AND_MEAN.\n>\n> This reduces the non-incremental buildtime in my 8 core windows VM from 187s to\n> 140s. Cross compiling from linux it's\n> master:\n> real\t0m53.807s\n> user\t22m16.930s\n> sys\t2m50.264s\n> WIN32_LEAN_AND_MEAN\n> real\t0m32.956s\n> user\t12m17.773s\n> sys\t1m52.313s\n\n\nNice!\n\n\nI also see references to VC_EXTRALEAN which defines this and some other\nstuff that might make things even faster.\n\n\nWorth investigating.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 21 Sep 2021 16:13:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-21 16:13:55 -0400, Andrew Dunstan wrote:\n> I also see references to VC_EXTRALEAN which defines this and some other\n> stuff that might make things even faster.\n\nI don't think that's relevant to \"us\", just mfc apps (which we gladly\naren't). From what I can see we'd have to actually clean up our includes to\nnot have windows.h everywhere or use precompiled headers to benefit further.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Sep 2021 15:58:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "Em ter., 21 de set. de 2021 às 16:30, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> For the AIO stuff I needed to build postgres for windows. And I was a bit\n> horrified by the long compile times. At first I was ready to blame the MS\n> compiler for being slow, until I noticed that using mingw gcc from linux to\n> cross compile to windows is also a *lot* slower than building for linux.\n>\n> I found some blog-post-documented-only compiler flags [1], most importantly\n> /d1reportTime. Which shows that the include processing of postgres.h takes\n> 0.6s [2]\n>\n> Basically all the time in a debug windows build is spent parsing windows.h\n> and\n> related headers. Argh.\n>\n> The amount of stuff we include in win32_port.h and declare is pretty absurd\n> imo. There's really no need to expose the whole backend to all of it. Most\n> of\n> it should just be needed in a few port/ files and a few select users.\n>\n> But that's too much work for my taste. As it turns out there's a partial\n> solution to windows.h being just so damn big, the delightfully named\n> WIN32_LEAN_AND_MEAN.\n>\n+1\nBut I did a quick dirty test here, and removed windows.h in win32_port.h,\nand compiled normally with msvc 2019 (64 bit), would it work with mingw\ncross compile?\n\nregards,\nRanier Vilela\n\nEm ter., 21 de set. de 2021 às 16:30, Andres Freund <andres@anarazel.de> escreveu:Hi,\n\nFor the AIO stuff I needed to build postgres for windows. And I was a bit\nhorrified by the long compile times. At first I was ready to blame the MS\ncompiler for being slow, until I noticed that using mingw gcc from linux to\ncross compile to windows is also a *lot* slower than building for linux.\n\nI found some blog-post-documented-only compiler flags [1], most importantly\n/d1reportTime. Which shows that the include processing of postgres.h takes\n0.6s [2]\n\nBasically all the time in a debug windows build is spent parsing windows.h and\nrelated headers. 
Argh.\n\nThe amount of stuff we include in win32_port.h and declare is pretty absurd\nimo. There's really no need to expose the whole backend to all of it. Most of\nit should just be needed in a few port/ files and a few select users.\n\nBut that's too much work for my taste. As it turns out there's a partial\nsolution to windows.h being just so damn big, the delightfully named\nWIN32_LEAN_AND_MEAN.+1 But I did a quick dirty test here, and removed windows.h in win32_port.h, and compiled normally with msvc 2019 (64 bit), would it work with mingw cross compile?regards,Ranier Vilela",
"msg_date": "Tue, 21 Sep 2021 20:26:36 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-21 20:26:36 -0300, Ranier Vilela wrote:\n> Em ter., 21 de set. de 2021 �s 16:30, Andres Freund <andres@anarazel.de>\n> escreveu:\n> > But that's too much work for my taste. As it turns out there's a partial\n> > solution to windows.h being just so damn big, the delightfully named\n> > WIN32_LEAN_AND_MEAN.\n> >\n> +1\n> But I did a quick dirty test here, and removed windows.h in win32_port.h,\n> and compiled normally with msvc 2019 (64 bit), would it work with mingw\n> cross compile?\n\nThat's likely only because winsock indirectly includes windows.h - because of\nthat it won't actually reduce compile time. And you can't remove the other\nheaders that indirectly include windows.h without causing compilation errors.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 21 Sep 2021 16:56:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 12:30:35PM -0700, Andres Freund wrote:\n> solution to windows.h being just so damn big, the delightfully named\n> WIN32_LEAN_AND_MEAN.\n> \n> This reduces the non-incremental buildtime in my 8 core windows VM from 187s to\n> 140s. Cross compiling from linux it's\n> master:\n> real\t0m53.807s\n> user\t22m16.930s\n> sys\t2m50.264s\n> WIN32_LEAN_AND_MEAN\n> real\t0m32.956s\n> user\t12m17.773s\n> sys\t1m52.313s\n\n+1, great win for a one-liner.\n\n\n",
"msg_date": "Tue, 21 Sep 2021 22:44:06 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 1:56 AM Andres Freund <andres@anarazel.de> wrote:\n\n> On 2021-09-21 20:26:36 -0300, Ranier Vilela wrote:\n> > Em ter., 21 de set. de 2021 às 16:30, Andres Freund <andres@anarazel.de>\n> > escreveu:\n> > > But that's too much work for my taste. As it turns out there's a\n> partial\n> > > solution to windows.h being just so damn big, the delightfully named\n> > > WIN32_LEAN_AND_MEAN.\n> > >\n> > +1\n> > But I did a quick dirty test here, and removed windows.h in win32_port.h,\n> > and compiled normally with msvc 2019 (64 bit), would it work with mingw\n> > cross compile?\n>\n> That's likely only because winsock indirectly includes windows.h - because\n> of\n> that it won't actually reduce compile time. And you can't remove the other\n> headers that indirectly include windows.h without causing compilation\n> errors.\n>\n> You are right about winsock2.h including some parts of windows.h, please\nsee note in [1]. You could move the windows.h inclusion for clarity:\n\n+ #ifndef WIN32_LEAN_AND_MEAN\n+ #define WIN32_LEAN_AND_MEAN\n+ #endif\n+\n+ #include <windows.h>\n#include <winsock2.h>\n#include <ws2tcpip.h>\n- #include <windows.h>\n\n[1]\nhttps://docs.microsoft.com/en-us/windows/win32/winsock/creating-a-basic-winsock-application\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Sep 22, 2021 at 1:56 AM Andres Freund <andres@anarazel.de> wrote:On 2021-09-21 20:26:36 -0300, Ranier Vilela wrote:\n> Em ter., 21 de set. de 2021 às 16:30, Andres Freund <andres@anarazel.de>\n> escreveu:\n> > But that's too much work for my taste. 
As it turns out there's a partial\n> > solution to windows.h being just so damn big, the delightfully named\n> > WIN32_LEAN_AND_MEAN.\n> >\n> +1\n> But I did a quick dirty test here, and removed windows.h in win32_port.h,\n> and compiled normally with msvc 2019 (64 bit), would it work with mingw\n> cross compile?\n\nThat's likely only because winsock indirectly includes windows.h - because of\nthat it won't actually reduce compile time. And you can't remove the other\nheaders that indirectly include windows.h without causing compilation errors.You are right about winsock2.h including some parts of windows.h, please see note in [1]. You could move the windows.h inclusion for clarity: + #ifndef WIN32_LEAN_AND_MEAN+ #define WIN32_LEAN_AND_MEAN+ #endif+ + #include <windows.h>#include <winsock2.h>#include <ws2tcpip.h>- #include <windows.h>[1] https://docs.microsoft.com/en-us/windows/win32/winsock/creating-a-basic-winsock-applicationRegards,Juan José Santamaría Flecha",
"msg_date": "Wed, 22 Sep 2021 09:06:03 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 11:14 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Sep 21, 2021 at 12:30:35PM -0700, Andres Freund wrote:\n> > solution to windows.h being just so damn big, the delightfully named\n> > WIN32_LEAN_AND_MEAN.\n> >\n> > This reduces the non-incremental buildtime in my 8 core windows VM from 187s to\n> > 140s. Cross compiling from linux it's\n> > master:\n> > real 0m53.807s\n> > user 22m16.930s\n> > sys 2m50.264s\n> > WIN32_LEAN_AND_MEAN\n> > real 0m32.956s\n> > user 12m17.773s\n> > sys 1m52.313s\n>\n> +1, great win for a one-liner.\n>\n\n+1. It reduced the build time of Postgres from \"Time Elapsed\n00:01:57.60\" to \"Time Elapsed 00:01:38.11\" in my Windows env. (Win 10,\nMSVC 2019).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 22 Sep 2021 14:14:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
},
{
"msg_contents": "On Wed, Sep 22, 2021 at 02:14:59PM +0530, Amit Kapila wrote:\n> On Wed, Sep 22, 2021 at 11:14 AM Noah Misch <noah@leadboat.com> wrote:\n> > +1, great win for a one-liner.\n> \n> +1. It reduced the build time of Postgres from \"Time Elapsed\n> 00:01:57.60\" to \"Time Elapsed 00:01:38.11\" in my Windows env. (Win 10,\n> MSVC 2019).\n\nThat's nice. Great find!\n--\nMichael",
"msg_date": "Thu, 23 Sep 2021 20:51:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: windows build slow due to windows.h includes"
}
] |
[
{
"msg_contents": "Document XLOG_INCLUDE_XID a little better\n\nI noticed that commit 0bead9af484c left this flag undocumented in\nXLogSetRecordFlags, which led me to discover that the flag doesn't\nactually do what the one comment on it said it does. Improve the\nsituation by adding some more comments.\n\nBackpatch to 14, where the aforementioned commit appears.\n\nAuthor: Álvaro Herrera <alvherre@alvh.no-ip.org>\nDiscussion: https://postgr.es/m/202109212119.c3nhfp64t2ql@alvherre.pgsql\n\nBranch\n------\nREL_14_STABLE\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c1d1ae1db23796e4d1b04f5c087944722cf1446a\n\nModified Files\n--------------\nsrc/backend/access/transam/xloginsert.c | 2 ++\nsrc/include/access/xlog.h | 2 +-\nsrc/include/access/xlogrecord.h | 5 +++--\n3 files changed, 6 insertions(+), 3 deletions(-)",
"msg_date": "Tue, 21 Sep 2021 22:48:33 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 6:48 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Document XLOG_INCLUDE_XID a little better\n>\n> I noticed that commit 0bead9af484c left this flag undocumented in\n> XLogSetRecordFlags, which led me to discover that the flag doesn't\n> actually do what the one comment on it said it does. Improve the\n> situation by adding some more comments.\n>\n> Backpatch to 14, where the aforementioned commit appears.\n\nI'm not sure that saying something is a \"hack\" is really all that\nuseful as documentation.\n\nBut more to the point, I think this hack is ugly and needs to be\nreplaced with something less hacky.\n\nAmit?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 29 Sep 2021 11:20:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 8:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Sep 21, 2021 at 6:48 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Document XLOG_INCLUDE_XID a little better\n> >\n> > I noticed that commit 0bead9af484c left this flag undocumented in\n> > XLogSetRecordFlags, which led me to discover that the flag doesn't\n> > actually do what the one comment on it said it does. Improve the\n> > situation by adding some more comments.\n> >\n> > Backpatch to 14, where the aforementioned commit appears.\n>\n> I'm not sure that saying something is a \"hack\" is really all that\n> useful as documentation.\n>\n> But more to the point, I think this hack is ugly and needs to be\n> replaced with something less hacky.\n>\n\nI think we can do better than using XLOG_INCLUDE_XID flag in the\nrecord being inserted. We need this flag so that we can mark\nSubTransaction assigned after XLogInsertRecord() is successful. We\ncan instead output a flag (say sub_xact_assigned) from\nXLogRecordAssemble() and pass it to XLogInsertRecord(). Then in\nXLogInsertRecord(), we can mark SubTransactionAssigned once the record\nis inserted (after or before calling\nMarkCurrentTransactionIdLoggedIfAny()).\n\nThe other idea could be that in XLogInsertRecord(), we check\nIsSubTransactionAssignmentPending() after the record is successfully\ninserted and then mark SubTransaction assigned but I think the\nprevious one is better.\n\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:37:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 6:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think we can do better than using XLOG_INCLUDE_XID flag in the\n> record being inserted. We need this flag so that we can mark\n> SubTransaction assigned after XLogInsertRecord() is successful. We\n> can instead output a flag (say sub_xact_assigned) from\n> XLogRecordAssemble() and pass it to XLogInsertRecord(). Then in\n> XLogInsertRecord(), we can mark SubTransactionAssigned once the record\n> is inserted (after or before calling\n> MarkCurrentTransactionIdLoggedIfAny()).\n\nIsn't there other communication between these routines that just uses\nglobal variables?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:02:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 12:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Sep 30, 2021 at 6:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think we can do better than using XLOG_INCLUDE_XID flag in the\n> > record being inserted. We need this flag so that we can mark\n> > SubTransaction assigned after XLogInsertRecord() is successful. We\n> > can instead output a flag (say sub_xact_assigned) from\n> > XLogRecordAssemble() and pass it to XLogInsertRecord(). Then in\n> > XLogInsertRecord(), we can mark SubTransactionAssigned once the record\n> > is inserted (after or before calling\n> > MarkCurrentTransactionIdLoggedIfAny()).\n>\n> Isn't there other communication between these routines that just uses\n> global variables?\n>\n\nAFAICS, there are two possibilities w.r.t global variables: (a) use\ncurinsert_flags which we are doing now, (b) another is to introduce a\nnew global variable, set it after we make the association, and then\nreset it after we mark SubTransaction assigned and on error. I have\nalso thought of passing it via XLogCtlInsert but as that is shared by\ndifferent processes, it can be set by one process and be read by\nanother process which we don't want here.\n\nI am not sure if any of these is better than doing this communication\nvia local variable. Do you have something specific in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 10:21:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On 2021-Oct-01, Amit Kapila wrote:\n\n> AFAICS, there are two possibilities w.r.t global variables: (a) use\n> curinsert_flags which we are doing now, (b) another is to introduce a\n> new global variable, set it after we make the association, and then\n> reset it after we mark SubTransaction assigned and on error. I have\n> also thought of passing it via XLogCtlInsert but as that is shared by\n> different processes, it can be set by one process and be read by\n> another process which we don't want here.\n\nSo, in my mind, curinsert_flags is a way for the high-level user of\nXLogInsert to pass info about the record being inserted to the low-level\nxloginsert.c infrastructure. In contrast, XLOG_INCLUDE_XID is being\nused solely within xloginsert.c, by one piece of low-level\ninfrastructure to communicate to another piece of low-level\ninfrastructure that some cleanup is needed. Nothing outside of\nxloginsert.c needs to know anything about XLOG_INCLUDE_XID, in contrast\nwith the other bits that can be set by XLogSetRecordFlags. You could\nmove the #define to xloginsert.c and everything would compile fine.\n\nAnother tell-tale sign that things are not fitting right is that\nXLOG_INCLUDE_XID is not set via XLogSetRecordFlags, contrary the comment\nabove those defines.\n\n(Aside: I wonder why do we have XLogSetRecordFlags at all -- I think we\ncould just pass the other two flags via XLogBeginInsert).\n\nRegarding XLOG_INCLUDE_XID, I don't think replacing it with a bit in\nshared memory is a good idea, since it only applies to the insertion\nbeing carried out by the current process, right?\n\nI think a straight standalone variable (probably a static boolean in\nxloginsert.c) might be less confusing. \n\n... 
so, reading the xact.c code again, TransactionState->assigned really\nmeans \"whether the subXID-to-topXID association has been wal-logged\",\nwhich is a completely different meaning from what the term 'assigned'\nmeans in all other comments in xact.c ... and I think the subroutine\nname MarkSubTransactionAssigned() is not a great choice either.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Fri, 1 Oct 2021 09:53:48 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 8:53 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I think a straight standalone variable (probably a static boolean in\n> xloginsert.c) might be less confusing.\n\n+1.\n\n> ... so, reading the xact.c code again, TransactionState->assigned really\n> means \"whether the subXID-to-topXID association has been wal-logged\",\n> which is a completely different meaning from what the term 'assigned'\n> means in all other comments in xact.c ... and I think the subroutine\n> name MarkSubTransactionAssigned() is not a great choice either.\n\n+1.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 1 Oct 2021 09:45:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 6:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-01, Amit Kapila wrote:\n\n> I think a straight standalone variable (probably a static boolean in\n> xloginsert.c) might be less confusing.\n\nI have written two patches, Approach1 is as you described using a\nstatic boolean and Approach2 as a local variable to XLogAssembleRecord\nas described by Amit, attached both of them for your reference.\nIMHO, either of these approaches looks cleaner.\n\n>\n> ... so, reading the xact.c code again, TransactionState->assigned really\n> means \"whether the subXID-to-topXID association has been wal-logged\",\n> which is a completely different meaning from what the term 'assigned'\n> means in all other comments in xact.c ... and I think the subroutine\n> name MarkSubTransactionAssigned() is not a great choice either.\n\nI have also renamed the variable and functions as per the actual usage.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 2 Oct 2021 16:16:31 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On 2021-Oct-02, Dilip Kumar wrote:\n\n> I have written two patches, Approach1 is as you described using a\n> static boolean and Approach2 as a local variable to XLogAssembleRecord\n> as described by Amit, attached both of them for your reference.\n> IMHO, either of these approaches looks cleaner.\n\nThanks! I haven't read these patches carefully, but I think the\nvariable is about assigning the *subxid*, not the topxid. Amit can\nconfirm ...\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n",
"msg_date": "Sat, 2 Oct 2021 11:40:10 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 8:10 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-02, Dilip Kumar wrote:\n>\n> > I have written two patches, Approach1 is as you described using a\n> > static boolean and Approach2 as a local variable to XLogAssembleRecord\n> > as described by Amit, attached both of them for your reference.\n> > IMHO, either of these approaches looks cleaner.\n>\n> Thanks! I haven't read these patches carefully, but I think the\n> variable is about assigning the *subxid*, not the topxid. Amit can\n> confirm ...\n\nIIRC, this variable is for logging the top xid in the first WAL by\neach subtransaction. So that during logical decoding, while creating\nthe ReorderBufferTxn for the subtransaction we can associate it to the\ntop transaction without seeing the commit WAL.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 3 Oct 2021 17:05:24 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Sun, Oct 3, 2021 at 5:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Oct 2, 2021 at 8:10 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Oct-02, Dilip Kumar wrote:\n> >\n> > > I have written two patches, Approach1 is as you described using a\n> > > static boolean and Approach2 as a local variable to XLogAssembleRecord\n> > > as described by Amit, attached both of them for your reference.\n> > > IMHO, either of these approaches looks cleaner.\n> >\n> > Thanks! I haven't read these patches carefully, but I think the\n> > variable is about assigning the *subxid*, not the topxid. Amit can\n> > confirm ...\n>\n> IIRC, this variable is for logging the top xid in the first WAL by\n> each subtransaction. So that during logical decoding, while creating\n> the ReorderBufferTxn for the subtransaction we can associate it to the\n> top transaction without seeing the commit WAL.\n>\n\nThis is correct.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 09:56:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 6:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-01, Amit Kapila wrote:\n>\n> > AFAICS, there are two possibilities w.r.t global variables: (a) use\n> > curinsert_flags which we are doing now, (b) another is to introduce a\n> > new global variable, set it after we make the association, and then\n> > reset it after we mark SubTransaction assigned and on error. I have\n> > also thought of passing it via XLogCtlInsert but as that is shared by\n> > different processes, it can be set by one process and be read by\n> > another process which we don't want here.\n>\n> So, in my mind, curinsert_flags is a way for the high-level user of\n> XLogInsert to pass info about the record being inserted to the low-level\n> xloginsert.c infrastructure. In contrast, XLOG_INCLUDE_XID is being\n> used solely within xloginsert.c, by one piece of low-level\n> infrastructure to communicate to another piece of low-level\n> infrastructure that some cleanup is needed. Nothing outside of\n> xloginsert.c needs to know anything about XLOG_INCLUDE_XID, in contrast\n> with the other bits that can be set by XLogSetRecordFlags. You could\n> move the #define to xloginsert.c and everything would compile fine.\n>\n> Another tell-tale sign that things are not fitting right is that\n> XLOG_INCLUDE_XID is not set via XLogSetRecordFlags, contrary the comment\n> above those defines.\n>\n> (Aside: I wonder why do we have XLogSetRecordFlags at all -- I think we\n> could just pass the other two flags via XLogBeginInsert).\n>\n\nAgreed, I think we can do that if we want but we still need to set\ncurinsert_flags or some other similar variable in xloginsert.c so that\nwe can later use and reset it.\n\n> Regarding XLOG_INCLUDE_XID, I don't think replacing it with a bit in\n> shared memory is a good idea, since it only applies to the insertion\n> being carried out by the current process, right?\n>\n\nRight. 
Ideally, we can set this in a local variable via\nXLogRecordAssemble() and then use it in XLogInsertRecord() as is done\nin 0001-Refactor-code-for-logging-the-top-transaction-id-in-Approach2.\nBasically, we just need to ensure that we mark the\nCurrentTransactionState for this flag once we are sure that the\nfunction XLogInsertRecord() will perform the insertion and won't\nreturn InvalidXLogRecPtr. OTOH, I see the point in using a global\nstatic variable to achieve this purpose as that allows to do the\nrequired work only in xloginsert.c.\n\n> I think a straight standalone variable (probably a static boolean in\n> xloginsert.c) might be less confusing.\n>\n> ... so, reading the xact.c code again, TransactionState->assigned really\n> means \"whether the subXID-to-topXID association has been wal-logged\",\n> which is a completely different meaning from what the term 'assigned'\n> means in all other comments in xact.c ...\n>\n\nI think you have interpreted it correctly and we make this association\nlogged with the first WAL of each subtransaction if the WAL level is\nlogical.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 10:34:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 6:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have written two patches, Approach1 is as you described using a\n> static boolean and Approach2 as a local variable to XLogAssembleRecord\n> as described by Amit, attached both of them for your reference.\n> IMHO, either of these approaches looks cleaner.\n\nI agree, and I don't have a strong preference between them. If I were\nwriting code like this from scratch, I would do what 0001 does. But\n0002 is arguably more consistent with the existing style. It's kind of\nhard to judge, at least for me, which is to be preferred.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 4 Oct 2021 13:13:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 4:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 6:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Oct-01, Amit Kapila wrote:\n>\n> > I think a straight standalone variable (probably a static boolean in\n> > xloginsert.c) might be less confusing.\n>\n> I have written two patches, Approach1 is as you described using a\n> static boolean and Approach2 as a local variable to XLogAssembleRecord\n> as described by Amit, attached both of them for your reference.\n> IMHO, either of these approaches looks cleaner.\n>\n\nI have tried to improve some comments and a variable name in the\nApproach-2 (use local variable) patch and also reverts one of the\ncomments introduced by the commit ade24dab97. I am fine if we decide\nto go with Approach-1 as well but personally, I would prefer to keep\nthe code consistent with nearby code.\n\nLet me know what you think of the attached?\n\nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 7 Oct 2021 10:46:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Thu, Oct 7, 2021 at 10:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Oct 2, 2021 at 4:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Fri, Oct 1, 2021 at 6:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Oct-01, Amit Kapila wrote:\n> >\n> > > I think a straight standalone variable (probably a static boolean in\n> > > xloginsert.c) might be less confusing.\n> >\n> > I have written two patches, Approach1 is as you described using a\n> > static boolean and Approach2 as a local variable to XLogAssembleRecord\n> > as described by Amit, attached both of them for your reference.\n> > IMHO, either of these approaches looks cleaner.\n> >\n>\n> I have tried to improve some comments and a variable name in the\n> Approach-2 (use local variable) patch and also reverts one of the\n> comments introduced by the commit ade24dab97. I am fine if we decide\n> to go with Approach-1 as well but personally, I would prefer to keep\n> the code consistent with nearby code.\n>\n> Let me know what you think of the attached?\n>\n\nToday, I have looked at this patch again and slightly changed a\ncomment, one of the function name and variable name. Do, let me know\nif you or others have any suggestions for better names or otherwise? I\nthink we should backpatch this to 14 as well where this code was\nintroduced.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 18 Oct 2021 10:47:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n> Today, I have looked at this patch again and slightly changed a\n> comment, one of the function name and variable name. Do, let me know\n> if you or others have any suggestions for better names or otherwise? I\n> think we should backpatch this to 14 as well where this code was\n> introduced.\n>\n\n bool\n-IsSubTransactionAssignmentPending(void)\n+IsTopTransactionIdLogged(void)\n {\n /* wal_level has to be logical */\n if (!XLogLogicalInfoActive())\n@@ -6131,19 +6131,20 @@ IsSubTransactionAssignmentPending(void)\n if (!TransactionIdIsValid(GetCurrentTransactionIdIfAny()))\n return false;\n\n- /* and it should not be already 'assigned' */\n- return !CurrentTransactionState->assigned;\n+ /* and it should not be already 'logged' */\n+ return !CurrentTransactionState->topXidLogged;\n }\n\nI have one comment here, basically, you have changed the function name\nto \"IsTopTransactionIdLogged\", but it still behaves like\nIsTopTransactionIdLogPending. Now with the new name, it should return\n(CurrentTransactionState->topXidLogged) instead of\n(!CurrentTransactionState->topXidLogged).\n\nAnd the caller should also be changed accordingly. Other changes look fine.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Oct 2021 10:25:05 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 10:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 10:48 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > Today, I have looked at this patch again and slightly changed a\n> > comment, one of the function name and variable name. Do, let me know\n> > if you or others have any suggestions for better names or otherwise? I\n> > think we should backpatch this to 14 as well where this code was\n> > introduced.\n> >\n>\n> bool\n> -IsSubTransactionAssignmentPending(void)\n> +IsTopTransactionIdLogged(void)\n> {\n> /* wal_level has to be logical */\n> if (!XLogLogicalInfoActive())\n> @@ -6131,19 +6131,20 @@ IsSubTransactionAssignmentPending(void)\n> if (!TransactionIdIsValid(GetCurrentTransactionIdIfAny()))\n> return false;\n>\n> - /* and it should not be already 'assigned' */\n> - return !CurrentTransactionState->assigned;\n> + /* and it should not be already 'logged' */\n> + return !CurrentTransactionState->topXidLogged;\n> }\n>\n> I have one comment here, basically, you have changed the function name\n> to \"IsTopTransactionIdLogged\", but it still behaves like\n> IsTopTransactionIdLogPending. Now with the new name, it should return\n> (CurrentTransactionState->topXidLogged) instead of\n> (!CurrentTransactionState->topXidLogged).\n>\n\nValid point but I think the change suggested by you won't be\nsufficient. We also need to change all the other checks in that\nfunction to return true which will make it a bit awkward. So instead,\nwe can change the function name to IsTopTransactionIdLogPending().\nDoes that make sense?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 20 Oct 2021 16:17:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 4:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 10:25 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have one comment here, basically, you have changed the function name\n> > to \"IsTopTransactionIdLogged\", but it still behaves like\n> > IsTopTransactionIdLogPending. Now with the new name, it should return\n> > (CurrentTransactionState->topXidLogged) instead of\n> > (!CurrentTransactionState->topXidLogged).\n> >\n>\n> Valid point but I think the change suggested by you won't be\n> sufficient. We also need to change all the other checks in that\n> function to return true which will make it a bit awkward. So instead,\n> we can change the function name to IsTopTransactionIdLogPending().\n> Does that make sense?\n\nYeah, that makes sense.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Oct 2021 17:17:02 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "API-wise, this seems a good improvement and it brings a lot of clarity\nto what is really going on. Thanks for working on it.\n\nSome minor comments:\n\nPlease do not revert the comment change in xlogrecord.h. It is not\nwrong. The exceptions mentioned are values 252-255, so \"a few\" is\nbetter than \"a couple\".\n\nIn IsTopTransactionIdLogPending(), I suggest to reorder the tests so\nthat the faster ones are first -- or at least, the last one should be at\nthe top, so in some cases we can return without additional function\ncalls. I don't think it'd be extremely noticeable, but as Tom likes to\nsay, a cycle shaved is a cycle earned.\n\nXLogRecordAssemble's comment should explain its new output argument.\nMaybe \"*topxid_included is set if the topmost transaction ID is logged\nwith the current subtransaction\".\n\nI think these new routines IsTopTransactionIdLogPending and\nMarkTopTransactionIdLogged should not be at the end of the file; a\nlocation closer to where MarkCurrentTransactionIdLoggedIfAny() seems\nmore appropriate. Keep related things closer. (In this case these are\noperating on TransactionStateData, and it looks right that they would\nappear somewhat about AssignTransactionId and\nGetStableLatestTransactionId.)\n\nDoes MarkTopTransactionIdLogged() have to be inside XLogInsertRecord's\ncritical section?\n\nThe names IsTopTransactionIdLogPending() and\nMarkTopTransactionIdLogged() irk me somewhat. It's not at all obvious\nfrom these names that these routines are mostly about actions taken for\na subtransaction. I propose IsSubxactTopXidLogPending() and\nMarkSubxactTopXidLogged(). I don't feel the need to expand \"Xid\" to\n\"TransactionId\" in these function names.\n\nIn TransactionStateData, I propose this wording for the comment:\n\n bool topXidLogged;\t/* for a subxact: is top-level XID logged? 
*/\n\nThanks!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cuando mañana llegue pelearemos segun lo que mañana exija\" (Mowgli)\n\n\n",
"msg_date": "Wed, 20 Oct 2021 10:39:15 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 7:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> API-wise, this seems a good improvement and it brings a lot of clarity\n> to what is really going on. Thanks for working on it.\n>\n> Some minor comments:\n\nThanks for the review, most of the comments look fine, and will work\non those, but I think some of them need more thoughts so replying to\nthose.\n\n> In IsTopTransactionIdLogPending(), I suggest to reorder the tests so\n> that the faster ones are first -- or at least, the last one should be at\n> the top, so in some cases we can return without additional function\n> calls. I don't think it'd be extremely noticeable, but as Tom likes to\n> say, a cycle shaved is a cycle earned.\n\nI don't think we can really move the last at top. Basically, we only\nwant to log the top transaction id if all the above check passes and\nthe top xid is not yet logged. For example, if the WAL level is not\nlogical then we don't want to log the top xid even if it is not yet\nlogged, similarly, if the current transaction is not a subtransaction\nthen also we don't want to log the top transaction.\n\n>\n> Does MarkTopTransactionIdLogged() have to be inside XLogInsertRecord's\n> critical section?\n\nI think this function is doing somewhat similar things to what we are\ndoing in MarkCurrentTransactionIdLoggedIfAny() so put at the same\nplace. But I don't see any reason for this to be in the critical\nsection.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Oct 2021 20:49:36 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On 2021-Oct-20, Dilip Kumar wrote:\n\n> On Wed, Oct 20, 2021 at 7:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > In IsTopTransactionIdLogPending(), I suggest to reorder the tests so\n> > that the faster ones are first -- or at least, the last one should be at\n> > the top, so in some cases we can return without additional function\n> > calls. I don't think it'd be extremely noticeable, but as Tom likes to\n> > say, a cycle shaved is a cycle earned.\n> \n> I don't think we can really move the last at top. Basically, we only\n> want to log the top transaction id if all the above check passes and\n> the top xid is not yet logged. For example, if the WAL level is not\n> logical then we don't want to log the top xid even if it is not yet\n> logged, similarly, if the current transaction is not a subtransaction\n> then also we don't want to log the top transaction.\n\nWell, I don't suggest to move it verbatim, but ISTM the code can be\nrestructured so that we do that test first, and if we see that flag set\nto true, we don't have to consider any of the other tests. That flag\ncan only be set true if we saw all the other checks pass in the same\nsubtransaction, right?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 20 Oct 2021 13:16:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 9:46 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-20, Dilip Kumar wrote:\n>\n> > On Wed, Oct 20, 2021 at 7:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > In IsTopTransactionIdLogPending(), I suggest to reorder the tests so\n> > > that the faster ones are first -- or at least, the last one should be at\n> > > the top, so in some cases we can return without additional function\n> > > calls. I don't think it'd be extremely noticeable, but as Tom likes to\n> > > say, a cycle shaved is a cycle earned.\n> >\n> > I don't think we can really move the last at top. Basically, we only\n> > want to log the top transaction id if all the above check passes and\n> > the top xid is not yet logged. For example, if the WAL level is not\n> > logical then we don't want to log the top xid even if it is not yet\n> > logged, similarly, if the current transaction is not a subtransaction\n> > then also we don't want to log the top transaction.\n>\n> Well, I don't suggest to move it verbatim, but ISTM the code can be\n> restructured so that we do that test first, and if we see that flag set\n> to true, we don't have to consider any of the other tests. That flag\n> can only be set true if we saw all the other checks pass in the same\n> subtransaction, right?\n\nYeah you are right, I will change it.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 21 Oct 2021 08:40:12 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 8:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 7:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Does MarkTopTransactionIdLogged() have to be inside XLogInsertRecord's\n> > critical section?\n>\n> I think this function is doing somewhat similar things to what we are\n> doing in MarkCurrentTransactionIdLoggedIfAny() so put at the same\n> place. But I don't see any reason for this to be in the critical\n> section.\n>\n\nYeah, I also don't see any reason for this to be in the critical\nsection but it might be better to keep both together. So, if we want\nto keep MarkTopTransactionIdLogged() out of the critical section in\nthis patch then we should move the existing function\nMarkCurrentTransactionIdLoggedIfAny() in a separate patch so that\nfuture readers doesn't get confused as to why one of these is in the\ncritical section and other is not. OTOH, we can move\nMarkCurrentTransactionIdLoggedIfAny() out of the critical section in\nthis patch itself but that appears like an unrelated change and we may\nor may not want to back-patch the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 21 Oct 2021 09:10:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 9:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 8:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 7:09 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > Does MarkTopTransactionIdLogged() have to be inside XLogInsertRecord's\n> > > critical section?\n> >\n> > I think this function is doing somewhat similar things to what we are\n> > doing in MarkCurrentTransactionIdLoggedIfAny() so put at the same\n> > place. But I don't see any reason for this to be in the critical\n> > section.\n> >\n>\n> Yeah, I also don't see any reason for this to be in the critical\n> section but it might be better to keep both together. So, if we want\n> to keep MarkTopTransactionIdLogged() out of the critical section in\n> this patch then we should move the existing function\n> MarkCurrentTransactionIdLoggedIfAny() in a separate patch so that\n> future readers doesn't get confused as to why one of these is in the\n> critical section and other is not. OTOH, we can move\n> MarkCurrentTransactionIdLoggedIfAny() out of the critical section in\n> this patch itself but that appears like an unrelated change and we may\n> or may not want to back-patch the same.\n>\n\nv5-0001, incorporates all the comment fixes suggested by Alvaro. and\n0001 is an additional patch which moves\nMarkCurrentTransactionIdLoggedIfAny(), out of the critical section.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 21 Oct 2021 11:20:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Oct 21, 2021 at 9:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> v5-0001, incorporates all the comment fixes suggested by Alvaro. and\n> 0001 is an additional patch which moves\n> MarkCurrentTransactionIdLoggedIfAny(), out of the critical section.\n>\n\nThanks, both your patches look good to me except that we need to\nremove the sentence related to the revert of ade24dab97 from the\ncommit message. I think we should backpatch the first patch to 14\nwhere it was introduced and commit the second patch (related to moving\ncode out of critical section) only to HEAD but we can even backpatch\nthe second one till 9.6 for the sake of consistency. What do you guys\nthink?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 25 Oct 2021 16:21:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 21, 2021 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Oct 21, 2021 at 9:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > v5-0001, incorporates all the comment fixes suggested by Alvaro. and\n> > 0001 is an additional patch which moves\n> > MarkCurrentTransactionIdLoggedIfAny(), out of the critical section.\n> >\n>\n> Thanks, both your patches look good to me except that we need to\n> remove the sentence related to the revert of ade24dab97 from the\n> commit message. I think we should backpatch the first patch to 14\n> where it was introduced and commit the second patch (related to moving\n> code out of critical section) only to HEAD but we can even backpatch\n> the second one till 9.6 for the sake of consistency. What do you guys\n> think?\n>\n\nThe other option could be to just commit both these patches in HEAD as\nthere is no correctness issue here.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Oct 2021 09:19:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 9:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 25, 2021 at 4:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 21, 2021 at 11:20 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 21, 2021 at 9:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > v5-0001, incorporates all the comment fixes suggested by Alvaro. and\n> > > 0001 is an additional patch which moves\n> > > MarkCurrentTransactionIdLoggedIfAny(), out of the critical section.\n> > >\n> >\n> > Thanks, both your patches look good to me except that we need to\n> > remove the sentence related to the revert of ade24dab97 from the\n> > commit message. I think we should backpatch the first patch to 14\n> > where it was introduced and commit the second patch (related to moving\n> > code out of critical section) only to HEAD but we can even backpatch\n> > the second one till 9.6 for the sake of consistency. What do you guys\n> > think?\n> >\n>\n> The other option could be to just commit both these patches in HEAD as\n> there is no correctness issue here.\n\nRight, even I feel we should just commit it to the HEAD as there is no\ncorrectness issue.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Oct 2021 16:39:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 4:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 9:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Thanks, both your patches look good to me except that we need to\n> > > remove the sentence related to the revert of ade24dab97 from the\n> > > commit message. I think we should backpatch the first patch to 14\n> > > where it was introduced and commit the second patch (related to moving\n> > > code out of critical section) only to HEAD but we can even backpatch\n> > > the second one till 9.6 for the sake of consistency. What do you guys\n> > > think?\n> > >\n> >\n> > The other option could be to just commit both these patches in HEAD as\n> > there is no correctness issue here.\n>\n> Right, even I feel we should just commit it to the HEAD as there is no\n> correctness issue.\n>\n\nThanks for your opinion. I'll commit it to the HEAD by next Tuesday\nunless someone feels that we should backpatch this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 08:15:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 8:15 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 4:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Oct 26, 2021 at 9:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > Thanks, both your patches look good to me except that we need to\n> > > > remove the sentence related to the revert of ade24dab97 from the\n> > > > commit message. I think we should backpatch the first patch to 14\n> > > > where it was introduced and commit the second patch (related to moving\n> > > > code out of critical section) only to HEAD but we can even backpatch\n> > > > the second one till 9.6 for the sake of consistency. What do you guys\n> > > > think?\n> > > >\n> > >\n> > > The other option could be to just commit both these patches in HEAD as\n> > > there is no correctness issue here.\n> >\n> > Right, even I feel we should just commit it to the HEAD as there is no\n> > correctness issue.\n> >\n>\n> Thanks for your opinion. I'll commit it to the HEAD by next Tuesday\n> unless someone feels that we should backpatch this.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Nov 2021 08:40:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Document XLOG_INCLUDE_XID a little better"
}
] |
[
{
"msg_contents": "Good day.\n\nI found BufferAlloc unnecessary goes through dynahash's freelist when it\nreuses valid buffer.\n\nIf it is avoided and dynahash's entry directly moved, 1-2% is gained in\nselect only pgbench (with scale factor 100 in 50 connections/50 threads\non 4 core 8ht notebook cpu 185krps=>190krps).\n\nI've changed speculative call to BufferInsert to BufferLookup to avoid\ninsertion too early. (It also saves call to BufferDelete if conflicting\nentry is already in). Then if buffer is valid and no conflicting entry\nin a dynahash I'm moving old dynahash entry directly and without check\n(since we already did the check).\n\nIf old buffer were invalid, new entry is unavoidably fetched from\nfreelist and inserted (also without check). But in steady state (if\nthere is no dropped/truncated tables/indices/databases) it is rare case.\n\nRegards,\nSokolov Yura @ Postgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Wed, 22 Sep 2021 13:52:43 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Avoid dynahash's freelist in BufferAlloc."
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently the following messages are shown in server log if bail_out\nerrors occur in guc_file.l. This leaves the (naive) users with no clue\nas to where to look for the exact errors:\nerrmsg(\"configuration file \\\"%s\\\" contains errors\",\nerrmsg(\"configuration file \\\"%s\\\" contains errors; unaffected changes\nwere applied\",\nerrmsg(\"configuration file \\\"%s\\\" contains errors; no changes were applied\",\n2021-09-22 15:16:45.788 UTC [8241] LOG: configuration file\n\"/home/uuser/postgres/inst/bin/data/postgresql.auto.conf\" contains\nerrors; unaffected changes were applied\n\nBut the user can get to know the parameters for which the errors\nhappened via pg_file_settings view or pg_show_all_file_settings(). Can\nwe specify this information in the errdetail for these bail_out\nerrors, something like errdetail(\"See pg_file_settings view for more\ninformation.\") or errdetail(\"Check pg_file_settings view for the\nerrors.\") or some other better wording?\n\nIn the same guc_file.l for parse_error, a clear indication (line\nnumber and near end of line or near token) is provided, so that users\ncan easily identify where the error was.\n2021-09-22 15:17:11.051 UTC [8241] LOG: syntax error in file\n\"/home/bharath/postgres/inst/bin/data/postgresql.auto.conf\" line 3,\nnear end of line\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 23 Sep 2021 08:54:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add errdetail for bail_out errors in guc_file.l"
}
] |
[
{
"msg_contents": "Dear Thomas,\n\nIs there anyone more specifically that you know,\ncertainly on pgsql-hackers@lists.postgresql.org,\nthat you might recommd me to?\n\nZ.M.\n________________________________\nFrom: Thomas Munro <thomas.munro@gmail.com>\nSent: Thursday, 23 September 2021 1:14 PM\nTo: A Z <poweruserm@live.com.au>\nSubject: Re: High Precision Mathematics PostgreSQL Extension.\n\nOn Thu, Sep 23, 2021 at 1:39 PM A Z <poweruserm@live.com.au> wrote:\n> I was wondering if you have had time yet to read and think about\n> my most recent email about my desire to see a PostgreSQL\n> High Precision Base 10 Mathematics Extension developed.\n>\n> Do you know anyone who is in a position to accomplish this?\n\nHi A Z,\n\nNot me, sorry. If it's a commercial proposition, you could perhaps\ntry talking to one of the PostgreSQL consultancies? Or hire a student\n:-)\n\nI'm personally quite interested in getting DEFLOAT added to Postgres,\nbut that's a fixed size, floating point, base 10 data type, and it's\nneeded for standard conformance. That's probably why I read your\nemails to see if there was some overlap.\n\n\n\n\n\n\n\n\nDear Thomas,\n\n\n\n\n\nIs there anyone more specifically that you know,\n\ncertainly on pgsql-hackers@lists.postgresql.org,\n\nthat you might recommd me to?\n\n\n\n\nZ.M.\n\n\n\nFrom: Thomas Munro <thomas.munro@gmail.com>\nSent: Thursday, 23 September 2021 1:14 PM\nTo: A Z <poweruserm@live.com.au>\nSubject: Re: High Precision Mathematics PostgreSQL Extension.\n \n\n\nOn Thu, Sep 23, 2021 at 1:39 PM A Z <poweruserm@live.com.au> wrote:\n> I was wondering if you have had time yet to read and think about\n> my most recent email about my desire to see a PostgreSQL\n> High Precision Base 10 Mathematics Extension developed.\n>\n> Do you know anyone who is in a position to accomplish this?\n\nHi A Z,\n\nNot me, sorry. If it's a commercial proposition, you could perhaps\ntry talking to one of the PostgreSQL consultancies? 
Or hire a student\n:-)\n\nI'm personally quite interested in getting DEFLOAT added to Postgres,\nbut that's a fixed size, floating point, base 10 data type, and it's\nneeded for standard conformance. That's probably why I read your\nemails to see if there was some overlap.",
"msg_date": "Thu, 23 Sep 2021 06:56:32 +0000",
"msg_from": "A Z <poweruserm@live.com.au>",
"msg_from_op": true,
"msg_subject": "Re: High Precision Mathematics PostgreSQL Extension."
}
] |
[
{
"msg_contents": "Hi, guys, I encount a problem on compiling pssql, the environment is:\nos: macos big sur version 11.5.2 (20G95)\ncompiler: gcc-11 (Homebrew GCC 11.2.0) 11.2.0\nerror message:\n/usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard\n-Wno-format-truncation -Wno-stringop-truncation -O2 zic.o -L../../src/port\n-L../../src/common -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk\n-L/usr/local/opt/binutils/lib -Wl,-dead_strip_dylibs -lpgcommon -lpgport\n-lz -lreadline -lm -o zic\nld: warning: ignoring file ../../src/common/libpgcommon.a, building for\nmacOS-x86_64 but attempting to link with file built for macOS-x86_64\nld: warning: ignoring file ../../src/port/libpgport.a, building for\nmacOS-x86_64 but attempting to link with file built for macOS-x86_64\nUndefined symbols for architecture x86_64:\n \"_pg_fprintf\", referenced from:\n _close_file in zic.o\n _usage in zic.o\n _memory_exhausted in zic.o\n _verror in zic.o\n _warning in zic.o\n _dolink in zic.o\n _writezone in zic.o\n ...\n \"_pg_printf\", referenced from:\n _main in zic.o\n \"_pg_qsort\", referenced from:\n _writezone in zic.o\n _main in zic.o\n \"_pg_sprintf\", referenced from:\n _stringoffset in zic.o\n _stringrule in zic.o\n _doabbr in zic.o\n \"_pg_strerror\", referenced from:\n _close_file in zic.o\n _memcheck.part.0 in zic.o\n _mkdirs in zic.o\n _dolink in zic.o\n _writezone in zic.o\n _infile in zic.o\n _main in zic.o\n ...\n \"_pg_vfprintf\", referenced from:\n _verror in zic.o\nld: symbol(s) not found for architecture x86_64\ncollect2: error: ld returned 1 exit status\nmake[2]: *** [zic] Error 1\nmake[1]: *** [all-timezone-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\n\nNeed help, thanks in advance.\n\nHi, guys, I encount a problem on 
compiling pssql, the environment is:os: macos big sur version 11.5.2 (20G95)compiler: gcc-11 (Homebrew GCC 11.2.0) 11.2.0error message:/usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -O2 zic.o -L../../src/port -L../../src/common -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk -L/usr/local/opt/binutils/lib -Wl,-dead_strip_dylibs -lpgcommon -lpgport -lz -lreadline -lm -o zicld: warning: ignoring file ../../src/common/libpgcommon.a, building for macOS-x86_64 but attempting to link with file built for macOS-x86_64ld: warning: ignoring file ../../src/port/libpgport.a, building for macOS-x86_64 but attempting to link with file built for macOS-x86_64Undefined symbols for architecture x86_64: \"_pg_fprintf\", referenced from: _close_file in zic.o _usage in zic.o _memory_exhausted in zic.o _verror in zic.o _warning in zic.o _dolink in zic.o _writezone in zic.o ... \"_pg_printf\", referenced from: _main in zic.o \"_pg_qsort\", referenced from: _writezone in zic.o _main in zic.o \"_pg_sprintf\", referenced from: _stringoffset in zic.o _stringrule in zic.o _doabbr in zic.o \"_pg_strerror\", referenced from: _close_file in zic.o _memcheck.part.0 in zic.o _mkdirs in zic.o _dolink in zic.o _writezone in zic.o _infile in zic.o _main in zic.o ... \"_pg_vfprintf\", referenced from: _verror in zic.old: symbol(s) not found for architecture x86_64collect2: error: ld returned 1 exit statusmake[2]: *** [zic] Error 1make[1]: *** [all-timezone-recurse] Error 2make: *** [all-src-recurse] Error 2Need help, thanks in advance.",
"msg_date": "Thu, 23 Sep 2021 15:09:08 +0800",
"msg_from": "zhang listar <zhanglinuxstar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Compile fail on macos big sur"
},
{
"msg_contents": "Hi,\n\nOn 23.09.2021 10:09, zhang listar wrote:\n> Hi, guys, I encount a problem on compiling pssql, the environment is:\n> os: macos big sur version 11.5.2 (20G95)\n> compiler: gcc-11 (Homebrew GCC 11.2.0) 11.2.0\n\nI've just tried building with gcc-11 on Catalina (yes, it's time to\nupgrade) and it went fine, no warnings.\n\nMaybe `git clean -fdx` would help? (Be careful, though, it removes any\nchanges you may have maid.)\n\n\n> /usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -Wno-format-truncation\n> -Wno-stringop-truncation -O2 zic.o -L../../src/port -L../../src/common\n> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk\n> -L/usr/local/opt/binutils/lib -Wl,-dead_strip_dylibs -lpgcommon\n> -lpgport -lz -lreadline -lm -o zic\n\nHere is what I have:\n\n/usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv\n-fexcess-precision=standard -Wno-format-truncation\n-Wno-stringop-truncation -g -O2 zic.o -L../../src/port\n-L../../src/common -isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n-Wl,-dead_strip_dylibs -lpgcommon -lpgport -lz -lreadline -lm -o zic\n\nLooks the same, and gives no warnings.\n\nJust in case, I configured like that:\n\n./configure --prefix=$(cd ..;pwd)/install-gcc-11 --enable-cassert\n--enable-debug --enable-tap-tests CC=/usr/local/bin/gcc-11\n\nHope that helps.\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Thu, 23 Sep 2021 10:35:44 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Compile fail on macos big sur"
},
{
"msg_contents": "Thanks for your reply, I do make distclean and git clean -fdx, but it does\nno help.\n\nthe code: master, c7aeb775df895db240dcd6f47242f7e08899adfb\nIt looks like the macos issue, because of the ignoring of some lib, it\ndrives the compiling error.\n\nSergey Shinderuk <s.shinderuk@postgrespro.ru> 于2021年9月23日周四 下午3:35写道:\n\n> Hi,\n>\n> On 23.09.2021 10:09, zhang listar wrote:\n> > Hi, guys, I encount a problem on compiling pssql, the environment is:\n> > os: macos big sur version 11.5.2 (20G95)\n> > compiler: gcc-11 (Homebrew GCC 11.2.0) 11.2.0\n>\n> I've just tried building with gcc-11 on Catalina (yes, it's time to\n> upgrade) and it went fine, no warnings.\n>\n> Maybe `git clean -fdx` would help? (Be careful, though, it removes any\n> changes you may have maid.)\n>\n>\n> > /usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n> > -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> > -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> > -Wformat-security -fno-strict-aliasing -fwrapv\n> > -fexcess-precision=standard -Wno-format-truncation\n> > -Wno-stringop-truncation -O2 zic.o -L../../src/port -L../../src/common\n> > -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk\n> > -L/usr/local/opt/binutils/lib -Wl,-dead_strip_dylibs -lpgcommon\n> > -lpgport -lz -lreadline -lm -o zic\n>\n> Here is what I have:\n>\n> /usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -Wno-format-truncation\n> -Wno-stringop-truncation -g -O2 zic.o -L../../src/port\n> -L../../src/common -isysroot\n> /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n> -Wl,-dead_strip_dylibs -lpgcommon -lpgport -lz -lreadline -lm -o zic\n>\n> Looks the same, and gives no warnings.\n>\n> Just in case, I 
configured like that:\n>\n> ./configure --prefix=$(cd ..;pwd)/install-gcc-11 --enable-cassert\n> --enable-debug --enable-tap-tests CC=/usr/local/bin/gcc-11\n>\n> Hope that helps.\n>\n> --\n> Sergey Shinderuk https://postgrespro.com/\n>\n\nThanks for your reply, I do make distclean and git clean -fdx, but it does no help.the code: master, c7aeb775df895db240dcd6f47242f7e08899adfbIt looks like the macos issue, because of the ignoring of some lib, it drives the compiling error. Sergey Shinderuk <s.shinderuk@postgrespro.ru> 于2021年9月23日周四 下午3:35写道:Hi,\n\nOn 23.09.2021 10:09, zhang listar wrote:\n> Hi, guys, I encount a problem on compiling pssql, the environment is:\n> os: macos big sur version 11.5.2 (20G95)\n> compiler: gcc-11 (Homebrew GCC 11.2.0) 11.2.0\n\nI've just tried building with gcc-11 on Catalina (yes, it's time to\nupgrade) and it went fine, no warnings.\n\nMaybe `git clean -fdx` would help? (Be careful, though, it removes any\nchanges you may have maid.)\n\n\n> /usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n> -Wdeclaration-after-statement -Werror=vla -Wendif-labels\n> -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n> -Wformat-security -fno-strict-aliasing -fwrapv\n> -fexcess-precision=standard -Wno-format-truncation\n> -Wno-stringop-truncation -O2 zic.o -L../../src/port -L../../src/common\n> -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX11.3.sdk\n> -L/usr/local/opt/binutils/lib -Wl,-dead_strip_dylibs -lpgcommon\n> -lpgport -lz -lreadline -lm -o zic\n\nHere is what I have:\n\n/usr/local/bin/gcc-11 -Wall -Wmissing-prototypes -Wpointer-arith\n-Wdeclaration-after-statement -Werror=vla -Wendif-labels\n-Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type\n-Wformat-security -fno-strict-aliasing -fwrapv\n-fexcess-precision=standard -Wno-format-truncation\n-Wno-stringop-truncation -g -O2 zic.o -L../../src/port\n-L../../src/common 
-isysroot\n/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk\n-Wl,-dead_strip_dylibs -lpgcommon -lpgport -lz -lreadline -lm -o zic\n\nLooks the same, and gives no warnings.\n\nJust in case, I configured like that:\n\n./configure --prefix=$(cd ..;pwd)/install-gcc-11 --enable-cassert\n--enable-debug --enable-tap-tests CC=/usr/local/bin/gcc-11\n\nHope that helps.\n\n-- \nSergey Shinderuk https://postgrespro.com/",
"msg_date": "Thu, 23 Sep 2021 15:50:06 +0800",
"msg_from": "zhang listar <zhanglinuxstar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Compile fail on macos big sur"
},
{
"msg_contents": "On 23.09.2021 10:50, zhang listar wrote:\n> Thanks for your reply, I do make distclean and git clean -fdx, but it\n> does no help.\n> \n> the code: master, c7aeb775df895db240dcd6f47242f7e08899adfb\n> It looks like the macos issue, because of the ignoring of some lib, it\n> drives the compiling error. \n\nMaybe you could try adding -v to the problematic gcc command to see what\nreally goes on.\n\nI see that gcc calls /usr/bin/ld, not binutils ld installed with\nHomebrew. I saw an advice to `brew unlink binutils` somewhere.\n\n\n",
"msg_date": "Thu, 23 Sep 2021 11:03:46 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Compile fail on macos big sur"
},
{
"msg_contents": "Thanks. It is the binuitls problem. I do \"brew uninstall binutils\" and\ncompile successfully.\nActually it is the lib $(which ranlib) -V problem.\nThe similar issue here: https://github.com/bitcoin/bitcoin/issues/20825\n\nSergey Shinderuk <s.shinderuk@postgrespro.ru> 于2021年9月23日周四 下午4:03写道:\n\n> On 23.09.2021 10:50, zhang listar wrote:\n> > Thanks for your reply, I do make distclean and git clean -fdx, but it\n> > does no help.\n> >\n> > the code: master, c7aeb775df895db240dcd6f47242f7e08899adfb\n> > It looks like the macos issue, because of the ignoring of some lib, it\n> > drives the compiling error.\n>\n> Maybe you could try adding -v to the problematic gcc command to see what\n> really goes on.\n>\n> I see that gcc calls /usr/bin/ld, not binutils ld installed with\n> Homebrew. I saw an advice to `brew unlink binutils` somewhere.\n>\n\nThanks. It is the binuitls problem. I do \"brew uninstall binutils\" and compile successfully.Actually it is the lib $(which ranlib) -V problem.The similar issue here: https://github.com/bitcoin/bitcoin/issues/20825Sergey Shinderuk <s.shinderuk@postgrespro.ru> 于2021年9月23日周四 下午4:03写道:On 23.09.2021 10:50, zhang listar wrote:\n> Thanks for your reply, I do make distclean and git clean -fdx, but it\n> does no help.\n> \n> the code: master, c7aeb775df895db240dcd6f47242f7e08899adfb\n> It looks like the macos issue, because of the ignoring of some lib, it\n> drives the compiling error. \n\nMaybe you could try adding -v to the problematic gcc command to see what\nreally goes on.\n\nI see that gcc calls /usr/bin/ld, not binutils ld installed with\nHomebrew. I saw an advice to `brew unlink binutils` somewhere.",
"msg_date": "Thu, 23 Sep 2021 17:05:47 +0800",
"msg_from": "zhang listar <zhanglinuxstar@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Compile fail on macos big sur"
}
] |
[
{
"msg_contents": "Page explaining progress reporting views, for all versions, have \"The\ntables\" expression several times when it points to a single table. So,\nsingular expressions should be used, right ?\n\n\"The tables below describe the information that will be reported and\nprovide information about how to interpret it.\"\n\nregards,\nMarcos\n\nPage explaining progress reporting views, for all versions, have \"The tables\" expression several times when it points to a single table. So, singular expressions should be used, right ?\"The tables below describe the information that will be reported and provide information about how to interpret it.\"regards, Marcos",
"msg_date": "Thu, 23 Sep 2021 09:40:40 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "DOC: Progress Reporting page"
},
{
"msg_contents": "Em qui., 23 de set. de 2021 às 09:41, Marcos Pegoraro <marcos@f10.com.br>\nescreveu:\n\n> Page explaining progress reporting views, for all versions, have \"The\n> tables\" expression several times when it points to a single table. So,\n> singular expressions should be used, right ?\n>\n> \"The tables below describe the information that will be reported and\n> provide information about how to interpret it.\"\n>\nI think documentation refers to \"html tables\" and not \"postgres tables\".\nMaybe could use other terms to be clearer.\n\nregards,\nRanier Vilela\n\nEm qui., 23 de set. de 2021 às 09:41, Marcos Pegoraro <marcos@f10.com.br> escreveu:Page explaining progress reporting views, for all versions, have \"The tables\" expression several times when it points to a single table. So, singular expressions should be used, right ?\"The tables below describe the information that will be reported and provide information about how to interpret it.\"I think documentation refers to \"html tables\" and not \"postgres tables\".Maybe could use other terms to be clearer.regards,Ranier Vilela",
"msg_date": "Thu, 23 Sep 2021 10:27:43 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOC: Progress Reporting page"
},
{
"msg_contents": ">\n>\n> Page explaining progress reporting views, for all versions, have \"The\n>> tables\" expression several times when it points to a single table. So,\n>> singular expressions should be used, right ?\n>>\n>> \"The tables below describe the information that will be reported and\n>> provide information about how to interpret it.\"\n>>\n> I think documentation refers to \"html tables\" and not \"postgres tables\".\n> Maybe could use other terms to be clearer.\n>\n\nSure, The tables are HTML tables, but it points to one table, why is it\nusing plural form ?",
"msg_date": "Thu, 23 Sep 2021 10:36:00 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: DOC: Progress Reporting page"
},
{
"msg_contents": "Em qui., 23 de set. de 2021 às 10:37, Marcos Pegoraro <marcos@f10.com.br>\nescreveu:\n\n>\n>> Page explaining progress reporting views, for all versions, have \"The\n>>> tables\" expression several times when it points to a single table. So,\n>>> singular expressions should be used, right ?\n>>>\n>>> \"The tables below describe the information that will be reported and\n>>> provide information about how to interpret it.\"\n>>>\n>> I think documentation refers to \"html tables\" and not \"postgres tables\".\n>> Maybe could use other terms to be clearer.\n>>\n>\n> Sure, The tables are HTML tables, but it points to one table, why is it\n> using plural form ?\n>\nBecause there are two html tables:\n*Table 27.32. pg_stat_progress_analyze View*\nand\n*Table 27.33. ANALYZE phases*\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 23 Sep 2021 10:40:19 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DOC: Progress Reporting page"
},
{
"msg_contents": "> *Table 27.32. pg_stat_progress_analyze View*\n> and\n> *Table 27.33. ANALYZE phases*\n>\n\nyou're right, I didn't see that there's always a phases table later. sorry ...",
"msg_date": "Thu, 23 Sep 2021 10:50:49 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": true,
"msg_subject": "Re: DOC: Progress Reporting page"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI did a bit of testing today and noticed that we don't set indexlist properly at the right time in some cases when using partitioned tables.\n\n\nI attached a simple case where the indexlist doesn't seem to be set at the right time. get_relation_info in plancat.c seems to process it only after analyzejoins.c checked for it.\n\n\nCan someone explain to me why it is deferred at all?\n\n\nRegards\n\nArne",
"msg_date": "Thu, 23 Sep 2021 17:20:08 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\n\nI stumbled across a few places that depend on the inheritance appends being applied at a later date, so I quickly abandoned that idea. I thought a bit about the indexlist, in particular the inhparent, and I am not sure what depends on get_relation_info working in that way.\n\n\nTherefore I propose a new attribute partIndexlist of RelOptInfo to include information about uniqueness, in the case the executor can't use the structure that causes the uniqueness to begin with. Said attribute can be used by relation_has_unique_index_for and rel_supports_distinctness.\n\n\nThe attached patch takes that route. I'd appreciate feedback!\n\n\nRegards\nArne",
"msg_date": "Thu, 28 Oct 2021 13:44:31 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn Thu, Oct 28, 2021 at 01:44:31PM +0000, Arne Roland wrote:\n> \n> The attached patch takes that route. I'd appreciate feedback!\n\nThe cfbot reports that the patch doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3452.log\n=== Applying patches on top of PostgreSQL commit ID 025b920a3d45fed441a0a58fdcdf05b321b1eead ===\n=== applying patch ./partIndexlistClean.patch\npatching file src/backend/access/heap/vacuumlazy.c\nHunk #1 FAILED at 2375.\n1 out of 1 hunk FAILED -- saving rejects to file src/backend/access/heap/vacuumlazy.c.rej\npatching file src/backend/access/transam/xlog.c\nHunk #1 succeeded at 911 with fuzz 1 (offset 5 lines).\nHunk #2 FAILED at 5753.\n[...]\n1 out of 6 hunks FAILED -- saving rejects to file src/backend/access/transam/xlog.c.rej\n[...]\npatching file src/backend/commands/publicationcmds.c\nHunk #1 FAILED at 813.\n1 out of 1 hunk FAILED -- saving rejects to file src/backend/commands/publicationcmds.c.rej\npatching file src/include/nodes/pathnodes.h\nHunk #9 FAILED at 1516.\n[...]\n1 out of 17 hunks FAILED -- saving rejects to file src/include/nodes/pathnodes.h.rej\n\nCould you send a rebased version? In the meantime I will switch the cf entry\nto Waiting on Author.\n\n\n",
"msg_date": "Sat, 15 Jan 2022 16:33:04 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\nthank you for the heads up! Those files ended up accidentally in the dump from me running pgindent. The file count was truly excessive, I should have noticed this sooner.\nI attached the patch without the excessive modifications. That should be way easier to read.\n\n\nRegards\nArne",
"msg_date": "Mon, 17 Jan 2022 11:25:01 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Using some valuable feedback from Zhihong Yu, I fixed a flipped negation error and updated the comments.\n\n\nRegards\nArne\n\n________________________________\nFrom: Arne Roland\nSent: Monday, January 17, 2022 12:25\nTo: Julien Rouhaud\nCc: pgsql-hackers\nSubject: Re: missing indexes in indexlist with partitioned tables\n\n\nHi,\n\nthank you for the heads up! Those files ended up accidentally in the dump from me running pg_indent. The file count was truly excessive, I should have noticed this sooner.\nI attached the patch without the excessive modifications. That should be way easier to read.\n\n\nRegards\nArne",
"msg_date": "Mon, 17 Jan 2022 17:59:33 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hmm, can you show cases of queries for which having this new\npartIndexlist changes plans?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:16:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\nAfaiac the join pruning where the outer table is a partitioned table is the relevant case.\n\nI am not sure whether there are other cases.\nThe join pruning, which works great for plain relations since 9.0, falls short for partitioned tables, since the optimizer fails to prove uniqueness there.\n\n\nIn practical cases inner and outer tables are almost surely different ones, but I reattached a simpler example. It's the one, I came up with back in September.\n\nI've seen this can be a reason to avoid partitioning for the time being, if the application relies on join pruning. I think generic views make it almost necessary to have it. If you had a different answer in mind, please don't hesitate to ask again.\n\n\nRegards\nArne\n\n\n________________________________\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Monday, January 17, 2022 7:16:08 PM\nTo: Arne Roland\nCc: Julien Rouhaud; pgsql-hackers\nSubject: Re: missing indexes in indexlist with partitioned tables\n\nHmm, can you show cases of queries for which having this new\npartIndexlist changes plans?\n\n--\nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)",
"msg_date": "Mon, 17 Jan 2022 20:32:40 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 17, 2022 at 08:32:40PM +0000, Arne Roland wrote:\n> \n> Afaiac the join pruning where the outer table is a partitioned table is the relevant case.\n\nThe last version of the patch now fails on all platform, with plan changes.\n\nFor instance:\nhttps://cirrus-ci.com/task/4825629131538432\nhttps://api.cirrus-ci.com/v1/artifact/task/4825629131538432/regress_diffs/src/test/regress/regression.diffs\ndiff -U3 /tmp/cirrus-ci-build/src/test/regress/expected/partition_join.out /tmp/cirrus-ci-build/src/test/regress/results/partition_join.out\n--- /tmp/cirrus-ci-build/src/test/regress/expected/partition_join.out\t2022-01-17 23:08:47.158198249 +0000\n+++ /tmp/cirrus-ci-build/src/test/regress/results/partition_join.out\t2022-01-17 23:12:34.163488567 +0000\n@@ -4887,37 +4887,23 @@\n SET enable_partitionwise_join = on;\n EXPLAIN (COSTS OFF)\n SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id) ORDER BY id ASC LIMIT 10;\n- QUERY PLAN\n------------------------------------------------------------------------\n+ QUERY PLAN\n+-----------------------------------------------------------------\n Limit\n- -> Merge Append\n- Sort Key: x.id\n- -> Merge Left Join\n- Merge Cond: (x_1.id = y_1.id)\n- -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n- -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n- -> Merge Left Join\n- Merge Cond: (x_2.id = y_2.id)\n- -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n- -> Index Only Scan using fract_t1_pkey on fract_t1 y_2\n-(11 rows)\n+ -> Append\n+ -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n+ -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n+(4 rows)\n\n EXPLAIN (COSTS OFF)\n SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id) ORDER BY id DESC LIMIT 10;\n- QUERY PLAN\n---------------------------------------------------------------------------------\n+ QUERY PLAN\n+--------------------------------------------------------------------------\n Limit\n- -> Merge Append\n- Sort 
Key: x.id DESC\n- -> Nested Loop Left Join\n- -> Index Only Scan Backward using fract_t0_pkey on fract_t0 x_1\n- -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n- Index Cond: (id = x_1.id)\n- -> Nested Loop Left Join\n- -> Index Only Scan Backward using fract_t1_pkey on fract_t1 x_2\n- -> Index Only Scan using fract_t1_pkey on fract_t1 y_2\n- Index Cond: (id = x_2.id)\n-(11 rows)\n+ -> Append\n+ -> Index Only Scan Backward using fract_t1_pkey on fract_t1 x_2\n+ -> Index Only Scan Backward using fract_t0_pkey on fract_t0 x_1\n+(4 rows)\n\n\n",
"msg_date": "Tue, 18 Jan 2022 14:57:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On 2022-Jan-18, Julien Rouhaud wrote:\n\n> SET enable_partitionwise_join = on;\n> EXPLAIN (COSTS OFF)\n> SELECT * FROM fract_t x LEFT JOIN fract_t y USING (id) ORDER BY id ASC LIMIT 10;\n> - QUERY PLAN\n> ------------------------------------------------------------------------\n> + QUERY PLAN\n> +-----------------------------------------------------------------\n> Limit\n> - -> Merge Append\n> - Sort Key: x.id\n> - -> Merge Left Join\n> - Merge Cond: (x_1.id = y_1.id)\n> - -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n> - -> Index Only Scan using fract_t0_pkey on fract_t0 y_1\n> - -> Merge Left Join\n> - Merge Cond: (x_2.id = y_2.id)\n> - -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n> - -> Index Only Scan using fract_t1_pkey on fract_t1 y_2\n> -(11 rows)\n> + -> Append\n> + -> Index Only Scan using fract_t0_pkey on fract_t0 x_1\n> + -> Index Only Scan using fract_t1_pkey on fract_t1 x_2\n> +(4 rows)\n\nHmm, these plan changes look valid to me. A left self-join using the\nprimary key column? That looks optimizable all right.\n\nI suspect that the author of partition-wise joins would want to change\nthese queries so that whatever was being tested by these self-joins is\ntested by some other means (maybe just create an identical partitioned\ntable via CREATE TABLE fract_t2 ... ; INSERT INTO fract_t2 SELECT FROM\nfract_t). But at the same time, the author of this patch should a) make\nsure that the submitted patch updates these test results so that the\ntest pass, and also b) add some test cases to verify that his desired\nbehavior is tested somewhere, not just in a test case that's\nincidentally broken by his patch.\n\nWhat I still don't know is whether this patch is actually desirable or\nnot. If the only cases it affects is self-joins, is there much actual\nvalue?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 10:24:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> [...]\n> Hmm, these plan changes look valid to me. A left self-join using the\n> primary key column? That looks optimizable all right.\n> [...]\n> What I still don't know is whether this patch is actually desirable or\n> not. If the only cases it affects is self-joins, is there much actual\n> value?\n\nThis is not really about self joins. That was just the most simple example, because otherwise we need a second table.\nIt's unique, it's not relevant whether it's the same table. In most cases it won't. I was talking about join pruning.\n\n> I suspect that the author of partition-wise joins would want to change\n> these queries so that whatever was being tested by these self-joins is\n> tested by some other means (maybe just create an identical partitioned\n> table via CREATE TABLE fract_t2 ... ; INSERT INTO fract_t2 SELECT FROM\n> fract_t). But at the same time, the author of this patch should\n\nYour suggestion doesn't work, because with my patch we solve that case as well. We solve the general join pruning case. If we make the index non-unique however, we should be able to test the fractional case the same way.\n\n> b) add some test cases to verify that his desired\n> behavior is tested somewhere, not just in a test case that's\n> incidentally broken by his patch.\n\nMy patch already includes such a test, look at @@ -90,6 +90,13 @@ src/test/regress/sql/partition_join.sql\nSince the selfjoin part was confusing to you, it might be worthwhile to do test that with two different tables. While I see no need to test in that way, I will adjust the patch so. Just to make it more clear for people looking at those tests in the future.\n\na) make\n> sure that the submitted patch updates these test results so that the\n> test pass [...]\n\nJust for the record: I did run the tests, but I did miss that the commit of Tomas fix for fractional optimization is already on the master. Please note that this is a very new test case from a patch committed less than one week ago.\n\nI'm glad Julien Rouhaud pointed out, that Tomas patch is committed by now. That was very helpful to me, as I can now integrate the two tests.\n\n@Álvaro Herrera:\nIf you want to help me, please don't put forward an abstract list of responsibilities. If anything I obviously need practical help, on how I can catch on recent changes quicker and without manual intervention. I don't have a modified buildfarm animal running, that tries to apply my patch and run regression tests for my patch on a daily basis.\nIs there a simple way for me to check for that?\n\nI will probably integrate those two tests, since they can work of similar structures without need to recreate the tables again and again. I have clear understanding how that new test works. I have to attend a few calls now, but I should be able to update the tests later.\n\nRegards\nArne",
"msg_date": "Wed, 19 Jan 2022 21:13:55 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\n\nI came up with a slightly more intuitive way to force the different outcome for the fractional test, that does hardly change anything.\n\n\nI'm not sure, whether the differentiation between fract_x and fract_t is worth it, since there shouldn't be any difference, but as mentioned before I added it for potential future clarity.\n\n\nThanks for your feedback again!\n\n\nRegards\n\nArne\n\n\n________________________________\nFrom: Arne Roland\nSent: Wednesday, January 19, 2022 10:13:55 PM\nTo: Alvaro Herrera; Julien Rouhaud\nCc: pgsql-hackers\nSubject: Re: missing indexes in indexlist with partitioned tables\n\n\nHi!\n\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> [...]\n> Hmm, these plan changes look valid to me. A left self-join using the\n> primary key column? That looks optimizable all right.\n> [...]\n> What I still don't know is whether this patch is actually desirable or\n> not. If the only cases it affects is self-joins, is there much actual\n> value?\n\nThis is not really about self joins. That was just the most simple example, because otherwise we need a second table.\nIt's unique, it's not relevant whether it's the same table. In most cases it won't. I was talking about join pruning.\n\n> I suspect that the author of partition-wise joins would want to change\n> these queries so that whatever was being tested by these self-joins is\n> tested by some other means (maybe just create an identical partitioned\n> table via CREATE TABLE fract_t2 ... ; INSERT INTO fract_t2 SELECT FROM\n> fract_t). But at the same time, the author of this patch should\n\nYour suggestion doesn't work, because with my patch we solve that case as well. We solve the general join pruning case. 
If we make the index non-unique however, we should be able to test the fractional case the same way.\n\n> b) add some test cases to verify that his desired\n> behavior is tested somewhere, not just in a test case that's\n> incidentally broken by his patch.\n\nMy patch already includes such a test, look at @@ -90,6 +90,13 @@ src/test/regress/sql/partition_join.sql\nSince the selfjoin part was confusing to you, it might be worthwhile to do test that with two different tables. While I see no need to test in that way, I will adjust the patch so. Just to make it more clear for people looking at those tests in the future.\n\na) make\n> sure that the submitted patch updates these test results so that the\n> test pass [...]\n\nJust for the record: I did run the tests, but I did miss that the commit of Tomas fix for fractional optimization is already on the master. Please note that this is a very new test case from a patch committed less than one week ago.\n\nI'm glad Julien Rouhaud pointed out, that Tomas patch is committed it by now. That was very helpful to me, as I can now integrate the two tests.\n\n@Álvaro Herrera:\nIf you want to help me, please don't put forward an abstract list of responsibilities. If anything I obviously need practical help, on how I can catch on recent changes quicker and without manual intervention. I don't have a modified buildfarm animal running, that tries to apply my patch and run regression tests for my patch on a daily basis.\nIs there a simple way for me to check for that?\n\nI will probably integrate those two tests, since they can work of similar structures without need to recreate the tables again and again. I have clear understanding how that new test works. I have to attend a few calls now, but I should be able to update the tests later.\n\nRegards\nArne",
"msg_date": "Wed, 19 Jan 2022 21:50:01 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 1:50 PM Arne Roland <A.Roland@index.de> wrote:\n\n> Hi,\n>\n>\n> I came up with a slightly more intuitive way to force the different\n> outcome for the fractional test, that does hardly change anything.\n>\n>\n> I'm not sure, whether the differentiation between fract_x and fract_t is\n> worth it, since there shouldn't be any difference, but as mentioned before\n> I added it for potential future clarity.\n>\n>\n> Thanks for your feedback again!\n>\n>\n> Regards\n>\n> Arne\n>\n>\n> ------------------------------\n> *From:* Arne Roland\n> *Sent:* Wednesday, January 19, 2022 10:13:55 PM\n> *To:* Alvaro Herrera; Julien Rouhaud\n> *Cc:* pgsql-hackers\n> *Subject:* Re: missing indexes in indexlist with partitioned tables\n>\n>\n> Hi!\n>\n> > From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > [...]\n> > Hmm, these plan changes look valid to me. A left self-join using the\n> > primary key column? That looks optimizable all right.\n> > [...]\n> > What I still don't know is whether this patch is actually desirable or\n> > not. If the only cases it affects is self-joins, is there much actual\n> > value?\n>\n> This is not really about self joins. That was just the most simple\n> example, because otherwise we need a second table.\n> It's unique, it's not relevant whether it's the same table. In most cases\n> it won't. I was talking about join pruning.\n>\n> > I suspect that the author of partition-wise joins would want to change\n> > these queries so that whatever was being tested by these self-joins is\n> > tested by some other means (maybe just create an identical partitioned\n> > table via CREATE TABLE fract_t2 ... ; INSERT INTO fract_t2 SELECT FROM\n> > fract_t). But at the same time, the author of this patch should\n>\n> Your suggestion doesn't work, because with my patch we solve that case as\n> well. We solve the general join pruning case. 
If we make the index\n> non-unique however, we should be able to test the fractional case the\n> same way.\n>\n> > b) add some test cases to verify that his desired\n> > behavior is tested somewhere, not just in a test case that's\n> > incidentally broken by his patch.\n>\n> My patch already includes such a test, look at @@ -90,6 +90,13 @@\n> src/test/regress/sql/partition_join.sql\n> Since the selfjoin part was confusing to you, it might be worthwhile to do\n> test that with two different tables. While I see no need to test in that\n> way, I will adjust the patch so. Just to make it more clear for people\n> looking at those tests in the future.\n>\n> a) make\n> > sure that the submitted patch updates these test results so that the\n> > test pass [...]\n>\n> Just for the record: I did run the tests, but I did miss that the commit\n> of Tomas fix for fractional optimization is already on the master. Please\n> note that this is a very new test case from a patch committed less than one\n> week ago.\n>\n> I'm glad Julien Rouhaud pointed out, that Tomas patch is committed it by\n> now. That was very helpful to me, as I can now integrate the two tests.\n>\n> @Álvaro Herrera:\n> If you want to help me, please don't put forward an abstract list of\n> responsibilities. If anything I obviously need practical help, on how I can\n> catch on recent changes quicker and without manual intervention. I don't\n> have a modified buildfarm animal running, that tries to apply my patch and\n> run regression tests for my patch on a daily basis.\n> Is there a simple way for me to check for that?\n>\n> I will probably integrate those two tests, since they can work of similar\n> structures without need to recreate the tables again and again. I have\n> clear understanding how that new test works. 
I have to attend a few calls\n> now, but I should be able to update the tests later.\n>\n> Regards\n> Arne\n>\n> Hi,\n\n- if (indexRelation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX)\n+ if (inhparent && (!index->indisunique ||\nindexRelation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX))\n\nThe check on RELKIND_PARTITIONED_INDEX seems to negate what the comment\nabove says:\n\n+ * Don't add partitioned indexes to the indexlist\n\nCheers\n\nOn Wed, Jan 19, 2022 at 1:50 PM Arne Roland <A.Roland@index.de> wrote:\n\n\nHi,\n\n\nI came up with a slightly more intuitive way to force the different outcome for the fractional test, that does hardly change anything.\n\n\nI'm not sure, whether the differentiation between fract_x and \nfract_t is worth it, since there shouldn't be any difference, but as mentioned before I added it for potential future clarity.\n\n\nThanks for your feedback again!\n\n\n\nRegards\nArne\n\n\n\n\nFrom: Arne Roland\nSent: Wednesday, January 19, 2022 10:13:55 PM\nTo: Alvaro Herrera; Julien Rouhaud\nCc: pgsql-hackers\nSubject: Re: missing indexes in indexlist with partitioned tables\n \n\n\n\nHi!\n\n> From: Alvaro Herrera <alvherre@alvh.no-ip.org>\n> [...]\n> Hmm, these plan changes look valid to me. A left self-join using the\n> primary key column? That looks optimizable all right.\n> [...]\n> What I still don't know is whether this patch is actually desirable or\n> not. If the only cases it affects is self-joins, is there much actual\n> value?\n\nThis is not really about self joins. That was just the most simple example, because otherwise we need a second table.\nIt's unique, it's not relevant whether it's the same table. In most cases it won't. I was talking about join pruning.\n\n> I suspect that the author of partition-wise joins would want to change\n> these queries so that whatever was being tested by these self-joins is\n> tested by some other means (maybe just create an identical partitioned\n> table via CREATE TABLE fract_t2 ... 
; INSERT INTO fract_t2 SELECT FROM\n> fract_t). But at the same time, the author of this patch should\n\nYour suggestion doesn't work, because with my patch we solve that case as well. We solve the general join pruning case. If we make the index non-unique however, we should be able to test the\nfractional case the same way.\n\n> b) add some test cases to verify that his desired\n> behavior is tested somewhere, not just in a test case that's\n> incidentally broken by his patch.\n\nMy patch already includes such a test, look at @@ -90,6 +90,13 @@ src/test/regress/sql/partition_join.sql\nSince the selfjoin part was confusing to you, it might be worthwhile to do test that with two different tables. While I see no need to test in that way, I will adjust the patch so. Just to make it more clear for people looking at those tests in the\nfuture.\n\na) make\n> sure that the submitted patch updates these test results so that the\n> test pass [...]\n\nJust for the record: I did run the tests, but I did miss that the commit of Tomas fix for fractional optimization is already on the master. Please note that this is a very new test case from a patch committed less than one week ago.\n\nI'm glad Julien Rouhaud pointed out, that Tomas patch is committed it by now. That was very helpful to me, as I can now integrate the two tests.\n\n\n@Álvaro Herrera:\nIf you want to help me, please don't put forward an abstract list of responsibilities. If anything I obviously need practical help, on how I can catch on recent changes quicker and without manual intervention. I don't have a modified buildfarm animal running,\n that tries to apply my patch and run regression tests for my patch on a daily basis.\nIs there a simple way for me to check for that?\n\nI will probably integrate those two tests, since they can work of similar structures without need to recreate the tables again and again. I have clear understanding how that new test works. 
I have to attend a few calls now, but I should be able to update the\n tests later.\n\nRegards\nArne",
"msg_date": "Wed, 19 Jan 2022 14:13:34 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On 2022-Jan-19, Arne Roland wrote:\n\n> > a) make sure that the submitted patch updates these test results so\n> > that the test pass [...]\n> \n> Just for the record: I did run the tests, but I did miss that the\n> commit of Tomas fix for fractional optimization is already on the\n> master. Please note that this is a very new test case from a patch\n> committed less than one week ago.\n\nAh, apologies, I didn't realize that that test was so new.\n\n> If anything I obviously need practical help, on how\n> I can catch on recent changes quicker and without manual intervention.\n> I don't have a modified buildfarm animal running, that tries to apply\n> my patch and run regression tests for my patch on a daily basis.\n\nSee src/tools/ci/README (for multi-platform testing of patches on\nseveral platforms) and http://commitfest.cputube.org/\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 19 Jan 2022 19:26:39 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\n> From: Zhihong Yu <zyu@yugabyte.com>\n> Subject: Re: missing indexes in indexlist with partitioned tables\n>\n> Hi,\n>\n> - if (indexRelation->rd_rel->relkind == RELKIND_PARTITIONED_INDEX)\n> + if (inhparent && (!index->indisunique || indexRelation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX))\n>\n> The check on RELKIND_PARTITIONED_INDEX seems to negate what the comment above says:\n>\n> + * Don't add partitioned indexes to the indexlist\n>\n> Cheers\n\nThe comment at my end goes on:\n\n\n/*\n* Don't add partitioned indexes to the indexlist, since they are\n* not usable by the executor. If they are unique add them to the\n* partindexlist instead, to use for further pruning. If they\n* aren't that either, simply skip them.\n*/\n\nRegarding the structure: I think, that we probably should remove the first two sentences here. They reoccur 50 lines below anyways, which seems a dubious practice. The logic that enforces the first two sentences is mainly down below, so that place is probably on one to keep.\n\nRegarding the semantics: This is sort of what the statement checks for (skip for inhparent, if not unique or not partitioned index), i.e. it checks for the case, where the index shouldn't be added to either list.\n\nSide note: I personally think the name inhparent is mildly confusing, since it's not really about inheritance. I don't have a significantly better idea though.\n\nFrom: Alvaro Herrera <alvherre@alvh.no-ip.org>\nSent: Wednesday, January 19, 2022 23:26\n> Ah, apologies, I didn't realize that that test was so new.\n\nNo offense taken. Unless one was involved in the creation of the corresponding patch, it's unreasonable to know that. I like the second part of your message very much:\n\n> See src/tools/ci/README (for multi-platform testing of patches on\n> several platforms) and http://commitfest.cputube.org/\n\nThanks, I didn't know of cputube. Neat! 
That's pretty much what I was looking for!\nIs there a way to get an email notification if some machine fails (turns bright red)? For the threads I'm explicitly subscribed to, that would seem helpful to me.\n\nRegards\nArne",
"msg_date": "Mon, 24 Jan 2022 12:29:58 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 24, 2022 at 9:30 PM Arne Roland <A.Roland@index.de> wrote:\n> The comment at my end goes on:\n>\n> /*\n> * Don't add partitioned indexes to the indexlist, since they are\n> * not usable by the executor. If they are unique add them to the\n> * partindexlist instead, to use for further pruning. If they\n> * aren't that either, simply skip them.\n> */\n\n\"partindexlist\" really made me think about a list of \"partial indexes\"\nfor some reason. I think maybe \"partedindexlist\" is what you are\nlooking for; \"parted\" is commonly used as short for \"partitioned\" when\nnaming variables.\n\nThe comment only mentions \"further pruning\" as to what partitioned\nindexes are to be remembered in RelOptInfo, but it's not clear what\nthat means. It may help to be more specific.\n\nFinally, I don't understand why we need a separate field to store\nindexes found in partitioned base relations. AFAICS, nothing but the\nsites you are interested in (relation_has_unique_index_for() and\nrel_supports_distinctness()) would ever bother to look at a\npartitioned base relation's indexlist. Do you think putting them into\nindexlist might break something?\n\n> Regarding the semantics: This is sort of what the statement checks for (skip for inhparent, if not unique or not partitioned index), i.e. it checks for the case, where the index shouldn't be added to either list.\n>\n> Side note: I personally think the name inhparent is mildly confusing, since it's not really about inheritance. I don't have a significantly better idea though.\n\nPartitioned tables are \"inheritance parent\", so share the same code as\nwhat traditional inheritance parents have always used for planning.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jan 2022 17:04:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\nFrom: Amit Langote <amitlangote09@gmail.com>\nSent: Tuesday, January 25, 2022 09:04\nSubject: Re: missing indexes in indexlist with partitioned tables\n> [...]\n> \"partindexlist\" really made me think about a list of \"partial indexes\"\n> for some reason. I think maybe \"partedindexlist\" is what you are\n> looking for; \"parted\" is commonly used as short for \"partitioned\" when\n> naming variables.\n>\n> The comment only mentions \"further pruning\" as to what partitioned\n> indexes are to be remembered in RelOptInfo, but it's not clear what\n> that means. It may help to be more specific.\n\nThanks for the feedback! I've changed that. The current version is attached.\n\n> Finally, I don't understand why we need a separate field to store\n> indexes found in partitioned base relations. AFAICS, nothing but the\n> sites you are interested in (relation_has_unique_index_for() and\n> rel_supports_distinctness()) would ever bother to look at a\n> partitioned base relation's indexlist. Do you think putting them into\n> in indexlist might break something?\n\nI have thought about that before. AFAICT there is nothing in core, which breaks. However I am not sure, I want to mix those two kinds of index nodes. First of all the structure is different, partedIndexes don't have physical attributes after all. This is technical implementation detail relating to the current promise, that entries of the indexlist are indexes we can use. And by use, I mean use for statistics or the executor.\nI'm more concerned about future changes regarding the order and optimization of processing harder here. The order in which we do things in the planner is a bit messy, and I wouldn't mind seeing details about that change. 
Looking at the current wacky order in the optimizer, I'm not convinced that nothing will want to have a look at the indexlist before partitioned tables are unpacked.\n\nSince it would be easy to introduce this new variable later, I wouldn't mind adding it to the indexlist directly for now. But changing the underlying promise of what it contains seems noteworthy and more intrusive to me.\n\n> > Side note: I personally think the name inhparent is mildly confusing, since it's not really about inheritance. I don't have a significantly better idea though.\n>\n> Partitioned tables are \"inheritance parent\", so share the same code as\n> what traditional inheritance parents have always used for planning.\n\nI recall manual partitioning via inheritance; that was cumbersome. Though that minor historical detail was not what I was referring to. There are a lot of other cases that cause us to set inhparent. IIRC we use this flag in some DDL commands, which have nothing to do with inheritance. It essentially is used as a way to skip the indexlist creation. If such hacks weren't there, we could simply check for the relkind and indisunique.\n\nRegards\nArne",
"msg_date": "Mon, 31 Jan 2022 18:14:10 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\n\nAttached a rebased version of the patch.\n\n\nRegards\nArne\n\n\n________________________________\nFrom: Arne Roland\nSent: Monday, January 31, 2022 19:14\nTo: Amit Langote\nCc: Zhihong Yu; Alvaro Herrera; Julien Rouhaud; pgsql-hackers\nSubject: Re: missing indexes in indexlist with partitioned tables\n\nHi!\n\nFrom: Amit Langote <amitlangote09@gmail.com>\nSent: Tuesday, January 25, 2022 09:04\nSubject: Re: missing indexes in indexlist with partitioned tables\n> [...]\n> \"partindexlist\" really made me think about a list of \"partial indexes\"\n> for some reason. I think maybe \"partedindexlist\" is what you are\n> looking for; \"parted\" is commonly used as short for \"partitioned\" when\n> naming variables.\n>\n> The comment only mentions \"further pruning\" as to what partitioned\n> indexes are to be remembered in RelOptInfo, but it's not clear what\n> that means. It may help to be more specific.\n\nThanks for the feedback! I've changed that. The current version is attached.\n\n> Finally, I don't understand why we need a separate field to store\n> indexes found in partitioned base relations. AFAICS, nothing but the\n> sites you are interested in (relation_has_unique_index_for() and\n> rel_supports_distinctness()) would ever bother to look at a\n> partitioned base relation's indexlist. Do you think putting them into\n> in indexlist might break something?\n\nI have thought about that before. AFAICT there is nothing in core, which breaks. However I am not sure, I want to mix those two kinds of index nodes. First of all the structure is different, partedIndexes don't have physical attributes after all. This is technical implementation detail relating to the current promise, that entries of the indexlist are indexes we can use. And by use, I mean use for statistics or the executor.\nI'm more concerned about future changes regarding the order and optimization of processing harder here. 
The order in which we do things in the planner is a bit messy, and I wouldn't mind seeing details about that change. Looking at the current wacky order in the optimizer, I'm not convinced, that nothing will want to have a look at the indexlist, before partitioned tables are unpacked.\n\nSince it would be easy to introduce this new variable later, wouldn't mind adding it to the indexlist directly for now. But changing the underlying promise of what it contains, seems noteworthy and more intrusive to me.\n\n> > Side note: I personally think the name inhparent is mildly confusing, since it's not really about inheritance. I don't have a significantly better idea though.\n>\n> Partitioned tables are \"inheritance parent\", so share the same code as\n> what traditional inheritance parents have always used for planning.\n\nI recall that manual partitioning via inheritance, that was cumbersome. Though that minor historical detail was not, what I was referring to. There are a lot of other cases, that cause us to set inhparent. IIRC we use this flag in some ddl commands, which have nothing to do with inheritance. It essentially is used as a variant to skip the indexlist creation. If such hacks weren't there, we could simply check for the relkind and indisunique.\n\nRegards\nArne",
"msg_date": "Tue, 2 Aug 2022 23:07:03 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Wed, 3 Aug 2022 at 11:07, Arne Roland <A.Roland@index.de> wrote:\n> Attached a rebased version of the patch.\n\nFirstly, I agree that we should fix the issue of join removals not\nworking with partitioned tables.\n\nI had a quick look over this and the first thing that I thought was\nthe same as what Amit mentioned in:\n\nOn Tue, 25 Jan 2022 at 21:04, Amit Langote <amitlangote09@gmail.com> wrote:\n> Finally, I don't understand why we need a separate field to store\n> indexes found in partitioned base relations. AFAICS, nothing but the\n> sites you are interested in (relation_has_unique_index_for() and\n> rel_supports_distinctness()) would ever bother to look at a\n> partitioned base relation's indexlist. Do you think putting them into\n> in indexlist might break something?\n\nI kinda disagree with Alvaro's fix in 05fb5d661. I think indexlist is\nthe place to store these details. That commit added the following\ncomment:\n\n/*\n* Ignore partitioned indexes, since they are not usable for\n* queries.\n*/\n\nBut neither are hypothetical indexes either, yet they're added to\nRelOptInfo.indexlist.\n\nI think the patch should be changed so that the existing list is used\nand we find another fix for the problems Alvaro fixed in 05fb5d661.\nUnfortunately, there was no discussion marked on that commit message,\nso it's not quite clear what the problem was. I'm unsure if there was\nanything other than CLUSTER that was broken. I see that cfdd03f45\nadded CLUSTER for partitioned tables in v15. I think the patch would\nneed to go over the usages of RelOptInfo.indexlist to make sure that\nwe don't need to add any further conditions to skip their usage for\npartitioned tables.\n\nI wrote the attached patch as I wanted to see what would break if we\ndid this. The only problem I got from running make check was in\nget_actual_variable_range(), so I just changed that so it returns\nfalse when the given rel is a partitioned table. 
I only quickly did\nthe changes to get_relation_info() and didn't give much thought to\nwhat the can* bool flags should be set to. I just mostly skipped all\nthat code because it was crashing on\nrelation->rd_tableam->scan_bitmap_next_block due to the rd_tableam\nbeing NULL.\n\nAlso, just a friendly tip, Arne; I saw you named your patch 0006 for\nversion 6. You'll see many 000n patches around the list, but those\nare generally done with git format-patch. That number normally means\nthe patch in the patch series, not the version of the patch. I'm not\nsure if it'll help any, but my workflow for writing new patches\nagainst master tends to be:\n\ngit checkout master\ngit checkout -b some_feature_branch\n# write some code\ngit commit -a\n# maybe more code\ngit commit -a\ngit format-patch -v1 master\n\nThat'll create v1-0001 and v1-0002 patches. When I'm onto v2, I just\nchange the version number from -v1 to -v2.\n\nDavid",
"msg_date": "Fri, 16 Sep 2022 16:08:30 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On 2022-Sep-16, David Rowley wrote:\n\n> I kinda disagree with Alvaro's fix in 05fb5d661. I think indexlist is\n> the place to store these details. That commit added the following\n> comment:\n> \n> /*\n> * Ignore partitioned indexes, since they are not usable for\n> * queries.\n> */\n> \n> But neither are hypothetical indexes either, yet they're added to\n> RelOptInfo.indexlist.\n> \n> I think the patch should be changed so that the existing list is used\n> and we find another fix for the problems Alvaro fixed in 05fb5d661.\n> Unfortunately, there was no discussion marked on that commit message,\n> so it's not quite clear what the problem was. I'm unsure if there was\n> anything other than CLUSTER that was broken.\n\nAfter a bit of trawling through the archives, I found it here:\nhttps://www.postgresql.org/message-id/20180124162006.pmapfiznhgngwtjf%40alvherre.pgsql\nI think there was insufficient discussion and you're probably right that\nit wasn't the best fix. I don't object to finding another fix.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 17 Sep 2022 20:37:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Sep-16, David Rowley wrote:\n>> I kinda disagree with Alvaro's fix in 05fb5d661. I think indexlist is\n>> the place to store these details. That commit added the following\n>> comment:\n>> \n>> /*\n>> * Ignore partitioned indexes, since they are not usable for\n>> * queries.\n>> */\n>> \n>> But neither are hypothetical indexes either, yet they're added to\n>> RelOptInfo.indexlist.\n>> \n>> I think the patch should be changed so that the existing list is used\n>> and we find another fix for the problems Alvaro fixed in 05fb5d661.\n>> Unfortunately, there was no discussion marked on that commit message,\n>> so it's not quite clear what the problem was. I'm unsure if there was\n>> anything other than CLUSTER that was broken.\n\n> After a bit of trawling through the archives, I found it here:\n> https://www.postgresql.org/message-id/20180124162006.pmapfiznhgngwtjf%40alvherre.pgsql\n> I think there was insufficient discussion and you're probably right that\n> it wasn't the best fix. I don't object to finding another fix.\n\nFWIW, I don't see any big problem with what you did. We'd need to\ndo something more like what David suggests if the planner ever has\na reason to consider partitioned indexes. But as long as it does\nnot, why expend the time to build data structures representing them?\nAnd we'd have to add code in quite a few places to ignore them,\nonce they're in indexlist.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 17 Sep 2022 15:00:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Sun, 18 Sept 2022 at 07:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > After a bit of trawling through the archives, I found it here:\n> > https://www.postgresql.org/message-id/20180124162006.pmapfiznhgngwtjf%40alvherre.pgsql\n> > I think there was insufficient discussion and you're probably right that\n> > it wasn't the best fix. I don't object to finding another fix.\n>\n> FWIW, I don't see any big problem with what you did. We'd need to\n> do something more like what David suggests if the planner ever has\n> a reason to consider partitioned indexes. But as long as it does\n> not, why expend the time to build data structures representing them?\n\nDid you miss the report about left join removals not working with\npartitioned tables due to lack of unique proofs? That seems like a\ngood enough reason to me.\n\n> And we'd have to add code in quite a few places to ignore them,\n> once they're in indexlist.\n\nI think the same is true for \"hypothetical\" indexes. Maybe that would\nbe a good field to grep on to find the places that need to be\naddressed.\n\nDavid\n\n\n",
"msg_date": "Mon, 19 Sep 2022 06:50:27 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 1:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 3 Aug 2022 at 11:07, Arne Roland <A.Roland@index.de> wrote:\n> > Attached a rebased version of the patch.\n> Firstly, I agree that we should fix the issue of join removals not\n> working with partitioned tables.\n\nAgreed, though the patch's changes to tests does not seem to have to\ndo with join removal? I don't really understand what the test changes\nare all about. I wonder why the patch doesn't instead add the test\ncase that Arne showed in the file he attached with [1].\n\n> I think the patch should be changed so that the existing list is used\n> and we find another fix for the problems Alvaro fixed in 05fb5d661.\n> Unfortunately, there was no discussion marked on that commit message,\n> so it's not quite clear what the problem was. I'm unsure if there was\n> anything other than CLUSTER that was broken. I see that cfdd03f45\n> added CLUSTER for partitioned tables in v15. I think the patch would\n> need to go over the usages of RelOptInfo.indexlist to make sure that\n> we don't need to add any further conditions to skip their usage for\n> partitioned tables.\n>\n> I wrote the attached patch as I wanted to see what would break if we\n> did this. The only problem I got from running make check was in\n> get_actual_variable_range(), so I just changed that so it returns\n> false when the given rel is a partitioned table. I only quickly did\n> the changes to get_relation_info() and didn't give much thought to\n> what the can* bool flags should be set to. 
I just mostly skipped all\n> that code because it was crashing on\n> relation->rd_tableam->scan_bitmap_next_block due to the rd_tableam\n> being NULL.\n\nYeah, it makes sense to just skip the portion of the code that reads\nfrom rd_indam, as your patch does.\n\n+ if (indexRelation->rd_rel->relkind != RELKIND_PARTITIONED_INDEX)\n {\n+ /* We copy just the fields we need, not all of rd_indam */\n+ amroutine = indexRelation->rd_indam;\n\nMaybe you're intending to add one before committing but there should\nbe a comment mentioning why the am* initializations are being skipped\nover for partitioned index IndexOptInfos. I'd think that's because we\nare not going to be building any paths using them for now.\n\nThe following portion of the top comment of get_relation_info()\nperhaps needs an update.\n\n * If inhparent is true, all we need to do is set up the attr arrays:\n * the RelOptInfo actually represents the appendrel formed by an inheritance\n * tree, and so the parent rel's physical size and index information isn't\n * important for it.\n */\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/2641568c18de40e8b1528fc9d4d80127%40index.de\n\n\n",
"msg_date": "Tue, 20 Sep 2022 15:40:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Thank you for having a look at the patch.\n\nOn Tue, 20 Sept 2022 at 18:41, Amit Langote <amitlangote09@gmail.com> wrote:\n> Agreed, though the patch's changes to tests does not seem to have to\n> do with join removal? I don't really understand what the test changes\n> are all about. I wonder why the patch doesn't instead add the test\n> case that Arne showed in the file he attached with [1].\n> [1] https://www.postgresql.org/message-id/2641568c18de40e8b1528fc9d4d80127%40index.de\n\nI adjusted a test in partition_join.sql to add an additional column to\nthe fract_t table. Before the change that table only had a single\ncolumn and due to the query's join condition being USING(id), none of\nthe columns from the left joined table were being used. That resulted\nin the updated code performing a left join removal as it was passing\nthe checks for no columns being used in the left joined table in\nanalyzejoins.c. The test in partition_join.sql claims to be testing\n\"partitionwise join with fractional paths\", so I thought we'd better\nnot have a query that the planner removes the join when we're meant to\nbe testing joins.\n\nIt probably wouldn't hurt to have a new test to ensure left join\nremovals work with a partitioned table. That should go in join.sql\nalong with the other join removal tests. I didn't study Arne's patch\nto see what test he added. I was only interested in writing enough\ncode so I could check there was no good reason not to add the\npartitioned index into RelOptInfo.indexlist. Arne sent me an off-list\nmessage to say he's planning on working on the patch that uses the\nexisting field instead of the new one he originally added. Let's hold\noff for that patch.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Sep 2022 19:53:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 4:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Thank you for having a look at the patch.\n>\n> On Tue, 20 Sept 2022 at 18:41, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Agreed, though the patch's changes to tests does not seem to have to\n> > do with join removal? I don't really understand what the test changes\n> > are all about. I wonder why the patch doesn't instead add the test\n> > case that Arne showed in the file he attached with [1].\n> > [1] https://www.postgresql.org/message-id/2641568c18de40e8b1528fc9d4d80127%40index.de\n>\n> I adjusted a test in partition_join.sql to add an additional column to\n> the fract_t table. Before the change that table only had a single\n> column and due to the query's join condition being USING(id), none of\n> the columns from the left joined table were being used. That resulted\n> in the updated code performing a left join removal as it was passing\n> the checks for no columns being used in the left joined table in\n> analyzejoins.c. The test in partition_join.sql claims to be testing\n> \"partitionwise join with fractional paths\", so I thought we'd better\n> not have a query that the planner removes the join when we're meant to\n> be testing joins.\n\nAh, got it, thanks for the explanation.\n\n> It probably wouldn't hurt to have a new test to ensure left join\n> removals work with a partitioned table. That should go in join.sql\n> along with the other join removal tests.\n\nMakes sense.\n\n> I didn't study Arne's patch\n> to see what test he added. I was only interested in writing enough\n> code so I could check there was no good reason not to add the\n> partitioned index into RelOptInfo.indexlist. Arne sent me an off-list\n> message to say he's planning on working on the patch that uses the\n> existing field instead of the new one he originally added. Let's hold\n> off for that patch.\n\nOk, sure.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 17:00:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 4:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> Arne sent me an off-list\n> message to say he's planning on working on the patch that uses the\n> existing field instead of the new one he originally added. Let's hold\n> off for that patch.\n\nI wouldn't say I explicitly stated that. But I ended up doing something that resulted in the attached patch. :)\n\nFor my own sanity I grepped one last time for the usage of indexlist.\n\nMost of the (untouched) usages have comments that they are only called for baserels/plain tables. Namely all but the cluster of partitioned tables. I had to reread that section. There we are just traversing the tree and omitting partitioned tables.\n\nThere is now a test section in join.sql for partitioned tables, that tests very similarly to the baserel case. That's more thorough than what I originally went for.\n\nFurther feedback would be appreciated!\n\nRegards\nArne",
"msg_date": "Sat, 1 Oct 2022 16:34:16 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Sun, 2 Oct 2022 at 05:34, Arne Roland <A.Roland@index.de> wrote:\n>\n> On Tue, Sep 20, 2022 at 4:53 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Arne sent me an off-list\n> > message to say he's planning on working on the patch that uses the\n> > existing field instead of the new one he originally added. Let's hold\n> > off for that patch.\n>\n> I wouldn't say, I explicitly stated that. But I ended up doing something, that resulted in the attached patch. :)\n\nI stand corrected. You said you'd think about it, not do it. Anyway,\nthanks for doing it :)\n\n> Further feedback would be appreciated!\n\nI had a quick look through the patch. Here are a few things that would\nbe good to adjust. I understand that some of these things were how I\nleft them in the patch I sent. In my defence, I mostly did that very\nquickly just so I could see if there was some issue with having the\npartitioned indexes in indexlist. I didn't actually put much effort\ninto addressing the fine details of how that should be done.\n\n* In the header comment in get_relation_info(), I don't think we need\nto mention join removals explicitly. At a stretch, maybe mentioning\n\"unique proofs\" might be ok, but \"various optimizations\" might be\nbetter. If you mention \"join removal\", I fear that will just become\noutdated too quickly as further optimisations are added. Likewise for\nthe comment about \"join pruning\" you've added in the function body.\nFWIW, we call these \"join removals\" anyway.\n\n* I think we should put RelationGetNumberOfBlocks() back to what it\nwas and just ensure we don't call that for partitioned indexes in\nget_relation_info(). (Yes, I know that was my change)\n\n* I can't quite figure out why you're doing \"DROP TABLE a CASCADE;\" in\ninherits.sql. You've not changed anything else in that file. Did you\nmean to do this in join.sql?\n\n* The new test in join.sql. 
I understand that you've mostly copied the\ntest from another place in the file and adjusted it to use a\npartitioned table. However, I don't really think you need to INSERT\nany data into those tables. I also think that using the table name of\n\"a\" is dangerous as it could conflict with another table by the same\nname in a parallel run of the tests. The non-partitioned version is a\nTEMP table. Also, it's slightly painful to look at the inconsistent\ncapitalisation of SQL keywords in those queries you've added, again, I\nunderstand those are copied from above, but I see no need to duplicate\nthe inconsistencies. Perhaps it's fine to copy the upper case\nkeywords in the DDL and keep all lowercase in the queries. Also, I\nthink you probably should just add a single simple join removal test\nfor partitioned tables. You're not exercising any code that the\nnon-partitioned case isn't by adding any additional tests. All that I\nthink is worth testing here is ensuring nobody thinks that partitioned\ntables can get away with an empty indexlist again.\n\n* I had a bit of a 2nd thought on the test change in\npartition_join.sql. I know I added the \"c\" column so that join\nremovals didn't apply. I'm now thinking it's probably better to just\nchange the queries to use JOIN ON rather than JOIN USING. Also, apply\nthe correct alias to the ORDER BY. This method should save from\nslowing the test down due to the additional column. We have some\npretty slow buildfarm machines that this might actually make a\nmeaningful difference to.\n\nThanks again for working on this.\n\nDavid\n\n\n",
"msg_date": "Mon, 3 Oct 2022 11:51:45 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> * I can't quite figure out why you're doing \"DROP TABLE a CASCADE;\" in\n> inherits.sql. You've not changed anything else in that file. Did you\n> mean to do this in join.sql?\n\nDoing that would be a bad idea no matter where it's done. IIRC,\nthose tables are intentionally set up to stress later dump/restore\ntests (with issues like inheritance children having column order\ndifferent from their parents).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Oct 2022 19:07:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi!\n\nI mainly changed the comments, the Assert and some casing.\n\n> From: David Rowley <dgrowleyml@gmail.com>\n> Sent: Monday, October 3, 2022 00:51\n>\n> * In the header comment in get_relation_info(), I don't think we need\n> to mention join removals explicitly. At a stretch, maybe mentioning\n> \"unique proofs\" might be ok, but \"various optimizations\" might be\n> better. If you mention \"join removal\", I fear that will just become\n> outdated too quickly as further optimisations are added. Likewise for\n> the comment about \"join pruning\" you've added in the function body.\n> FWIW, we call these \"join removals\" anyway.\n\nI made them a little bit more vague. I also updated the description of indexlist in general.\n\n> * I think we should put RelationGetNumberOfBlocks() back to what it\n> was and just ensure we don't call that for partitioned indexes in\n> get_relation_info(). (Yes, I know that was my change)\n\nI don't think it's relevant who did it. I don't see much importance either way. I reverted it to the old state.\n\n> * I can't quite figure out why you're doing \"DROP TABLE a CASCADE;\" in\n> inherits.sql. You've not changed anything else in that file. Did you\n> mean to do this in join.sql?\n\nThe problem I encountered, was that simple copy of the test wasn't possible, because the tables were named the same way. It seemed intuitive to me to make the tests , such that there are no side-effects. I added a comment to the creation of those tables to make clear, that there are intended side effects by not dropping those tables.\n\n> * The new test in join.sql. I understand that you've mostly copied the\n> test from another place in the file and adjusted it to use a\n> partitioned table. However, I don't really think you need to INSERT\n> any data into those tables. I also think that using the table name of\n> \"a\" is dangerous as it could conflict with another table by the same\n> name in a parallel run of the tests. 
The non-partitioned version is a\n> TEMP table. Also, it's slightly painful to look at the inconsistent\n> capitalisation of SQL keywords in those queries you've added, again, I\n> understand those are copied from above, but I see no need to duplicate\n> the inconsistencies. Perhaps it's fine to copy the upper case\n> keywords in the DDL and keep all lowercase in the queries. Also, I\n> think you probably should just add a single simple join removal test\n> for partitioned tables. You're not exercising any code that the\n> non-partitioned case isn't by adding any additional tests. All that I\n> think is worth testing here is ensuring nobody thinks that partitioned\n> tables can get away with an empty indexlist again.\n\nI am not sure, how thorough the tests on partitioned tables need to be. I guess, I will turn up more issues in production, than any test will be able to cover.\n\nAs a general sentiment, I wouldn't agree. The empty indexlist isn't the only interesting thing to test. The more we add optimizations, the more non trivial intersections of those start to break things again, we have fixed. A notable part of the complexity of the optimizer stems from the fact, that we apply most transformations in a fixed order. We obviously have to do that for performance reasons. But as long as we have that, we are prone to have cases where the ordering breaks part. Partitioned tables are a prominent cases here, because we always have the appendrel.\n\nI removed some test cases here to half the amount of partitioned tables needed here. I don't see the value in having one simple explain less. But I do not have strong feelings about this. Are there any further opinions?\n\n> * I had a bit of a 2nd thought on the test change in\n> partition_join.sql. I know I added the \"c\" column so that join\n> removals didn't apply. I'm now thinking it's probably better to just\n> change the queries to use JOIN ON rather than JOIN USING. Also, apply\n> the correct alias to the ORDER BY. 
This method should save from\n> slowing the test down due to the additional column. We have some\n> pretty slow buildfarm machines that this might actually make a\n> meaningful difference to.\n\nThere is no real point in changing this, because we can just access the column that is hidden anyways to make the join removal impossible.\nThat is sort of the tick my version v6 was going for. Tbh I don't care much either way as long as the test still tests for the fractional merge append. I just switched back.\n\nAfaiac creating and dropping a table is sort of the worst thing we can do, when thinking about tests. There is just so much work involved there. If I am concerned about test runtimes, I'd try to minimize that. This is even more true regarding tests for partitioned tables. To get a parted table with two partitions, we always have to create three tables. I do think there is a lot of potential to speed up the test times with that, but I'd suggest to handle that in a different thread.\n\n> Thanks again for working on this.\n\nThank you for your input!\n\nArne",
"msg_date": "Wed, 2 Nov 2022 01:50:38 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-02 01:50:38 +0000, Arne Roland wrote:\n> I mainly changed the comments, the Assert and some casing.\n\nThe tests have been failing for a while\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/40/3452\n\nhttps://api.cirrus-ci.com/v1/task/6190372803051520/logs/cores.log\n\n#2 0x00005645dff192f6 in ExceptionalCondition (conditionName=conditionName@entry=0x5645e014b167 \"false\", fileName=fileName@entry=0x5645e0196b08 \"../src/backend/storage/buffer/bufmgr.c\", lineNumber=lineNumber@entry=2971) at ../src/backend/utils/error/assert.c:66\nNo locals.\n#3 0x00005645dfc13823 in RelationGetNumberOfBlocksInFork (relation=relation@entry=0x7fb54d54e470, forkNum=forkNum@entry=MAIN_FORKNUM) at ../src/backend/storage/buffer/bufmgr.c:2971\nNo locals.\n#4 0x00005645dfa9ac5e in get_relation_info (root=root@entry=0x5645e1ed9840, relationObjectId=16660, inhparent=<optimized out>, rel=rel@entry=0x5645e2086b38) at ../src/backend/optimizer/util/plancat.c:442\n indexoid = <optimized out>\n info = 0x5645e2083b28\n i = <optimized out>\n indexRelation = 0x7fb54d54e470\n index = 0x7fb54d548c48\n amroutine = <optimized out>\n ncolumns = 1\n nkeycolumns = 1\n l__state = {l = <optimized out>, i = <optimized out>}\n indexoidlist = 0x5645e2088a98\n lmode = 1\n l = <optimized out>\n varno = 1\n relation = 0x7fb54d54e680\n hasindex = <optimized out>\n indexinfos = 0x0\n __func__ = \"get_relation_info\"\n#5 0x00005645dfaa5e25 in build_simple_rel (root=0x5645e1ed9840, relid=1, parent=parent@entry=0x0) at ../src/backend/optimizer/util/relnode.c:293\n rel = 0x5645e2086b38\n rte = 0x5645e1ed8fc8\n __func__ = \"build_simple_rel\"\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Nov 2022 17:36:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "Thank you!\n\n\nSadly I didn't manage how to reproduce that locally. check-world doesn't seem to fail at my end.\n\n\nThat being said, attached patch should fix the issue reported below.\n\n\nI'll have another look at the log later this week.\n\n\nRegards\n\nArne\n\n\n________________________________\nFrom: Andres Freund <andres@anarazel.de>\nSent: Tuesday, November 22, 2022 2:36:59 AM\nTo: Arne Roland\nCc: David Rowley; Amit Langote; pgsql-hackers; Zhihong Yu; Alvaro Herrera; Julien Rouhaud\nSubject: Re: missing indexes in indexlist with partitioned tables\n\nHi,\n\nOn 2022-11-02 01:50:38 +0000, Arne Roland wrote:\n> I mainly changed the comments, the Assert and some casing.\n\nThe tests have been failing for a while\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/40/3452\nCirrus CI<https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/40/3452>\ncirrus-ci.com\nCirrus CI makes your development cycle fast, efficient, and secure by leveraging modern cloud technologies.\n\n\n\nhttps://api.cirrus-ci.com/v1/task/6190372803051520/logs/cores.log\n\n#2 0x00005645dff192f6 in ExceptionalCondition (conditionName=conditionName@entry=0x5645e014b167 \"false\", fileName=fileName@entry=0x5645e0196b08 \"../src/backend/storage/buffer/bufmgr.c\", lineNumber=lineNumber@entry=2971) at ../src/backend/utils/error/assert.c:66\nNo locals.\n#3 0x00005645dfc13823 in RelationGetNumberOfBlocksInFork (relation=relation@entry=0x7fb54d54e470, forkNum=forkNum@entry=MAIN_FORKNUM) at ../src/backend/storage/buffer/bufmgr.c:2971\nNo locals.\n#4 0x00005645dfa9ac5e in get_relation_info (root=root@entry=0x5645e1ed9840, relationObjectId=16660, inhparent=<optimized out>, rel=rel@entry=0x5645e2086b38) at ../src/backend/optimizer/util/plancat.c:442\n indexoid = <optimized out>\n info = 0x5645e2083b28\n i = <optimized out>\n indexRelation = 0x7fb54d54e470\n index = 0x7fb54d548c48\n amroutine = <optimized out>\n ncolumns = 1\n nkeycolumns = 1\n l__state = {l 
= <optimized out>, i = <optimized out>}\n indexoidlist = 0x5645e2088a98\n lmode = 1\n l = <optimized out>\n varno = 1\n relation = 0x7fb54d54e680\n hasindex = <optimized out>\n indexinfos = 0x0\n __func__ = \"get_relation_info\"\n#5 0x00005645dfaa5e25 in build_simple_rel (root=0x5645e1ed9840, relid=1, parent=parent@entry=0x0) at ../src/backend/optimizer/util/relnode.c:293\n rel = 0x5645e2086b38\n rte = 0x5645e1ed8fc8\n __func__ = \"build_simple_rel\"\n...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Tue, 6 Dec 2022 00:43:30 +0000",
"msg_from": "Arne Roland <A.Roland@index.de>",
"msg_from_op": true,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
},
{
"msg_contents": "On Tue, 6 Dec 2022 at 13:43, Arne Roland <A.Roland@index.de> wrote:\n> That being said, attached patch should fix the issue reported below.\n\nI took a look over the v10 patch and ended up making adjustments to\nthe tests. I didn't quite see the need for the test to be as extensive\nas you had them in v10. Neither join removals nor unique joins treat\npartitioned tables any differently from normal tables, so I think it's\nfine just to have a single test that makes sure join removals work on\npartitioned tables. I didn't feel inclined to add a test for unique\njoins. The test I added is mainly just there to make sure something\nfails if someone decides partitioned tables don't need the indexlist\npopulated at some point in the future.\n\nThe other changes I made were just cosmetic. I pushed the result.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Jan 2023 17:21:09 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing indexes in indexlist with partitioned tables"
}
] |
[
{
"msg_contents": "On HEAD, I see these headers failing to compile standalone:\n\n$ src/tools/pginclude/cpluspluscheck \nIn file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:\n./src/include/common/unicode_east_asian_fw_table.h:3:32: error: elements of array 'const mbinterval east_asian_fw []' have incomplete type\n static const struct mbinterval east_asian_fw[] = {\n ^~~~~~~~~~~~~\n./src/include/common/unicode_east_asian_fw_table.h:3:32: error: storage size of 'east_asian_fw' isn't known\nIn file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:\n./src/include/replication/worker_internal.h:60:2: error: 'FileSet' does not name a type\n FileSet *stream_fileset;\n ^~~~~~~\n\nThe first of these is evidently the fault of bab982161 (Update display\nwidths as part of updating Unicode), which introduced that header.\nThe second seems to have been introduced by 31c389d8d (Optimize fileset\nusage in apply worker).\n\nPlease fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Sep 2021 14:37:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Recent cpluspluscheck failures"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> On HEAD, I see these headers failing to compile standalone:\n>\n> $ src/tools/pginclude/cpluspluscheck\n> In file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:\n> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: elements\nof array 'const mbinterval east_asian_fw []' have incomplete type\n> static const struct mbinterval east_asian_fw[] = {\n> ^~~~~~~~~~~~~\n> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: storage\nsize of 'east_asian_fw' isn't known\n\nOkay, this file is used similarly to\nsrc/include/common/unicode_combining_table.h, which has an exception in the\ncheck script, so I'll add another exception.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Sep 23, 2021 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> On HEAD, I see these headers failing to compile standalone:>> $ src/tools/pginclude/cpluspluscheck> In file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: elements of array 'const mbinterval east_asian_fw []' have incomplete type> static const struct mbinterval east_asian_fw[] = {> ^~~~~~~~~~~~~> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: storage size of 'east_asian_fw' isn't knownOkay, this file is used similarly to src/include/common/unicode_combining_table.h, which has an exception in the check script, so I'll add another exception.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Sep 2021 15:13:51 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Recent cpluspluscheck failures"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Thu, Sep 23, 2021 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On HEAD, I see these headers failing to compile standalone:\n>> $ src/tools/pginclude/cpluspluscheck\n>> In file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:\n>> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: elements\n> of array 'const mbinterval east_asian_fw []' have incomplete type\n\n> Okay, this file is used similarly to\n> src/include/common/unicode_combining_table.h, which has an exception in the\n> check script, so I'll add another exception.\n\nOK, but see also src/tools/pginclude/headerscheck.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 23 Sep 2021 15:24:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Recent cpluspluscheck failures"
},
{
"msg_contents": "On Thu, Sep 23, 2021 at 3:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Thu, Sep 23, 2021 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> On HEAD, I see these headers failing to compile standalone:\n> >> $ src/tools/pginclude/cpluspluscheck\n> >> In file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:\n> >> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error:\nelements\n> > of array 'const mbinterval east_asian_fw []' have incomplete type\n>\n> > Okay, this file is used similarly to\n> > src/include/common/unicode_combining_table.h, which has an exception in\nthe\n> > check script, so I'll add another exception.\n>\n> OK, but see also src/tools/pginclude/headerscheck.\n>\n> regards, tom lane\n\nOh, I didn't know there was another one, will add it there also.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Sep 23, 2021 at 3:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:>> John Naylor <john.naylor@enterprisedb.com> writes:> > On Thu, Sep 23, 2021 at 2:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:> >> On HEAD, I see these headers failing to compile standalone:> >> $ src/tools/pginclude/cpluspluscheck> >> In file included from /tmp/cpluspluscheck.XxTv1i/test.cpp:3:> >> ./src/include/common/unicode_east_asian_fw_table.h:3:32: error: elements> > of array 'const mbinterval east_asian_fw []' have incomplete type>> > Okay, this file is used similarly to> > src/include/common/unicode_combining_table.h, which has an exception in the> > check script, so I'll add another exception.>> OK, but see also src/tools/pginclude/headerscheck.>> regards, tom laneOh, I didn't know there was another one, will add it there also.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Sep 2021 15:29:08 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Recent cpluspluscheck failures"
}
] |
[
{
"msg_contents": "extended stats objects are allowed on partitioned tables since v10.\nhttps://www.postgresql.org/message-id/flat/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA%40mail.gmail.com\n8c5cdb7f4f6e1d6a6104cb58ce4f23453891651b\n\nBut since 859b3003de they're not populated - pg_statistic_ext(_data) is empty.\nThis was the consequence of a commit to avoid an error I reported with stats on\ninheritence parents (not partitioned tables).\n\npreceding 859b3003de, stats on the parent table *did* improve the estimate,\nso this part of the commit message seems to have been wrong?\n|commit 859b3003de87645b62ee07ef245d6c1f1cd0cedb\n| Don't build extended statistics on inheritance trees\n...\n| Moreover, the current selectivity estimation code only works with individual\n| relations, so building statistics on inheritance trees would be pointless\n| anyway.\n\n|CREATE TABLE p (i int, a int, b int) PARTITION BY RANGE (i);\n|CREATE TABLE pd PARTITION OF p FOR VALUES FROM (1)TO(100);\n|TRUNCATE p; INSERT INTO p SELECT 1, a/100, a/100 FROM generate_series(1,999)a;\n|CREATE STATISTICS pp ON (a),(b) FROM p;\n|VACUUM ANALYZE p;\n|SELECT * FROM pg_statistic_ext WHERE stxrelid ='p'::regclass;\n\n|postgres=# begin; DROP STATISTICS pp; explain analyze SELECT a,b FROM p GROUP BY 1,2; abort;\n| HashAggregate (cost=20.98..21.98 rows=100 width=8) (actual time=1.088..1.093 rows=10 loops=1)\n\n|postgres=# explain analyze SELECT a,b FROM p GROUP BY 1,2;\n| HashAggregate (cost=20.98..21.09 rows=10 width=8) (actual time=1.082..1.086 rows=10 loops=1)\n\nSo I think this is a regression, and extended stats should be populated for\npartitioned tables - I had actually done that for some parent tables and hadn't\nnoticed that the stats objects no longer do anything.\n\nThat begs the question if the current behavior for inheritence parents is\ncorrect..\n\nCREATE TABLE p (i int, a int, b int);\nCREATE TABLE pd () INHERITS (p);\nINSERT INTO pd SELECT 1, a/100, a/100 FROM 
generate_series(1,999)a;\nCREATE STATISTICS pp ON (a),(b) FROM p;\nVACUUM ANALYZE p;\nexplain analyze SELECT a,b FROM p GROUP BY 1,2;\n\n| HashAggregate (cost=25.99..26.99 rows=100 width=8) (actual time=3.268..3.284 rows=10 loops=1)\n\nSince child tables can be queried directly, it's a legitimate question whether\nwe should collect stats for the table heirarchy or (since the catalog only\nsupports one) only the table itself. I'd think that stats for the table\nhierarchy would be more commonly useful (but we shouldn't change the behavior\nin existing releases again). Anyway it seems unfortunate that\nstatistic_ext_data still has no stxinherited.\n\nNote that for partitioned tables if I enable enable_partitionwise_aggregate,\nthen stats objects on the child tables can be helpful (but that's also\nconfusing to the question at hand).\n\n\n",
"msg_date": "Thu, 23 Sep 2021 16:26:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "extended stats on partitioned tables"
},
{
"msg_contents": "On 9/23/21 11:26 PM, Justin Pryzby wrote:\n> extended stats objects are allowed on partitioned tables since v10.\n> https://www.postgresql.org/message-id/flat/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA%40mail.gmail.com\n> 8c5cdb7f4f6e1d6a6104cb58ce4f23453891651b\n> \n> But since 859b3003de they're not populated - pg_statistic_ext(_data) is empty.\n> This was the consequence of a commit to avoid an error I reported with stats on\n> inheritence parents (not partitioned tables).\n> \n> preceding 859b3003de, stats on the parent table *did* improve the estimate,\n> so this part of the commit message seems to have been wrong?\n> |commit 859b3003de87645b62ee07ef245d6c1f1cd0cedb\n> | Don't build extended statistics on inheritance trees\n> ...\n> | Moreover, the current selectivity estimation code only works with individual\n> | relations, so building statistics on inheritance trees would be pointless\n> | anyway.\n> \n> |CREATE TABLE p (i int, a int, b int) PARTITION BY RANGE (i);\n> |CREATE TABLE pd PARTITION OF p FOR VALUES FROM (1)TO(100);\n> |TRUNCATE p; INSERT INTO p SELECT 1, a/100, a/100 FROM generate_series(1,999)a;\n> |CREATE STATISTICS pp ON (a),(b) FROM p;\n> |VACUUM ANALYZE p;\n> |SELECT * FROM pg_statistic_ext WHERE stxrelid ='p'::regclass;\n> \n> |postgres=# begin; DROP STATISTICS pp; explain analyze SELECT a,b FROM p GROUP BY 1,2; abort;\n> | HashAggregate (cost=20.98..21.98 rows=100 width=8) (actual time=1.088..1.093 rows=10 loops=1)\n> \n> |postgres=# explain analyze SELECT a,b FROM p GROUP BY 1,2;\n> | HashAggregate (cost=20.98..21.09 rows=10 width=8) (actual time=1.082..1.086 rows=10 loops=1)\n> \n> So I think this is a regression, and extended stats should be populated for\n> partitioned tables - I had actually done that for some parent tables and hadn't\n> noticed that the stats objects no longer do anything.\n> \n> That begs the question if the current behavior for inheritence parents is\n> correct..\n> \n> CREATE TABLE p (i 
int, a int, b int);\n> CREATE TABLE pd () INHERITS (p);\n> INSERT INTO pd SELECT 1, a/100, a/100 FROM generate_series(1,999)a;\n> CREATE STATISTICS pp ON (a),(b) FROM p;\n> VACUUM ANALYZE p;\n> explain analyze SELECT a,b FROM p GROUP BY 1,2;\n> \n> | HashAggregate (cost=25.99..26.99 rows=100 width=8) (actual time=3.268..3.284 rows=10 loops=1)\n> \n\nAgreed, that seems like a regression, but I don't see how to fix that \nwithout having the extra flag in the catalog. Otherwise we can store \njust one version for each statistics object :-(\n\n> Since child tables can be queried directly, it's a legitimate question whether\n> we should collect stats for the table heirarchy or (since the catalog only\n> supports one) only the table itself. I'd think that stats for the table\n> hierarchy would be more commonly useful (but we shouldn't change the behavior\n> in existing releases again). Anyway it seems unfortunate that\n> statistic_ext_data still has no stxinherited.\n> \n\nYeah, we probably need the flag - I planned to get it into 14, but then \nI got distracted by something else :-/\n\nAttached is a PoC that I quickly bashed together today. It's pretty raw, \nbut it passed \"make check\" and I think it does most of the things right. \nCan you try if this fixes the estimates with partitioned tables?\n\nExtended statistics use two catalogs, pg_statistic_ext for definition, \nwhile pg_statistic_ext_data stores the built statistics objects - the \nflag needs to be in the \"data\" catalog, and managing the records is a \nbit challenging - the current PoC code mostly works, but I had to relax \nsome error checks and I'm sure there are cases when we fail to remove a \nrow, or something like that.\n\n> Note that for partitioned tables if I enable enable_partitionwise_aggregate,\n> then stats objects on the child tables can be helpful (but that's also\n> confusing to the question at hand).\n> \n\nYeah. 
I think it'd be helpful to assemble a script with various test \ncases demonstrating how we estimate various cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 25 Sep 2021 21:27:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 09:27:10PM +0200, Tomas Vondra wrote:\n> On 9/23/21 11:26 PM, Justin Pryzby wrote:\n> > extended stats objects are allowed on partitioned tables since v10.\n> > https://www.postgresql.org/message-id/flat/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA%40mail.gmail.com\n> > 8c5cdb7f4f6e1d6a6104cb58ce4f23453891651b\n> > \n> > But since 859b3003de they're not populated - pg_statistic_ext(_data) is empty.\n> > This was the consequence of a commit to avoid an error I reported with stats on\n> > inheritence parents (not partitioned tables).\n> > \n> > preceding 859b3003de, stats on the parent table *did* improve the estimate,\n> > so this part of the commit message seems to have been wrong?\n> > |commit 859b3003de87645b62ee07ef245d6c1f1cd0cedb\n> > | Don't build extended statistics on inheritance trees\n> > ...\n> > | Moreover, the current selectivity estimation code only works with individual\n> > | relations, so building statistics on inheritance trees would be pointless\n> > | anyway.\n> > \n> > |CREATE TABLE p (i int, a int, b int) PARTITION BY RANGE (i);\n> > |CREATE TABLE pd PARTITION OF p FOR VALUES FROM (1)TO(100);\n> > |TRUNCATE p; INSERT INTO p SELECT 1, a/100, a/100 FROM generate_series(1,999)a;\n> > |CREATE STATISTICS pp ON (a),(b) FROM p;\n> > |VACUUM ANALYZE p;\n> > |SELECT * FROM pg_statistic_ext WHERE stxrelid ='p'::regclass;\n> > \n> > |postgres=# begin; DROP STATISTICS pp; explain analyze SELECT a,b FROM p GROUP BY 1,2; abort;\n> > | HashAggregate (cost=20.98..21.98 rows=100 width=8) (actual time=1.088..1.093 rows=10 loops=1)\n> > \n> > |postgres=# explain analyze SELECT a,b FROM p GROUP BY 1,2;\n> > | HashAggregate (cost=20.98..21.09 rows=10 width=8) (actual time=1.082..1.086 rows=10 loops=1)\n> > \n> > So I think this is a regression, and extended stats should be populated for\n> > partitioned tables - I had actually done that for some parent tables and hadn't\n> > noticed that the stats objects no longer 
do anything.\n...\n> Agreed, that seems like a regression, but I don't see how to fix that\n> without having the extra flag in the catalog. Otherwise we can store just\n> one version for each statistics object :-(\n\nDo you think it's possible to backpatch a fix to handle partitioned tables\nspecifically ?\n\nThe \"tuple already updated\" error which I reported and which was fixed by\n859b3003 involved inheritence children. Since partitioned tables have no data\nthemselves, the !inh check could be relaxed. It's not totally clear to me if\nthe correct statistics would be used in that case. I suppose the wrong\n(inherited) stats would be wrongly applied affect queries FROM ONLY a\npartitioned table, which seems pointless to write and also hard for the\nestimates to be far off :)\n\n> Attached is a PoC that I quickly bashed together today. It's pretty raw, but\n> it passed \"make check\" and I think it does most of the things right. Can you\n> try if this fixes the estimates with partitioned tables?\n\nI think pg_stats_ext_exprs also needs to expose the inherited flag.\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Sep 2021 14:53:22 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 9/25/21 9:53 PM, Justin Pryzby wrote:\n> On Sat, Sep 25, 2021 at 09:27:10PM +0200, Tomas Vondra wrote:\n>> On 9/23/21 11:26 PM, Justin Pryzby wrote:\n>>> extended stats objects are allowed on partitioned tables since v10.\n>>> https://www.postgresql.org/message-id/flat/CAKJS1f-BmGo410bh5RSPZUvOO0LhmHL2NYmdrC_Jm8pk_FfyCA%40mail.gmail.com\n>>> 8c5cdb7f4f6e1d6a6104cb58ce4f23453891651b\n>>>\n>>> But since 859b3003de they're not populated - pg_statistic_ext(_data) is empty.\n>>> This was the consequence of a commit to avoid an error I reported with stats on\n>>> inheritence parents (not partitioned tables).\n>>>\n>>> preceding 859b3003de, stats on the parent table *did* improve the estimate,\n>>> so this part of the commit message seems to have been wrong?\n>>> |commit 859b3003de87645b62ee07ef245d6c1f1cd0cedb\n>>> | Don't build extended statistics on inheritance trees\n>>> ...\n>>> | Moreover, the current selectivity estimation code only works with individual\n>>> | relations, so building statistics on inheritance trees would be pointless\n>>> | anyway.\n>>>\n>>> |CREATE TABLE p (i int, a int, b int) PARTITION BY RANGE (i);\n>>> |CREATE TABLE pd PARTITION OF p FOR VALUES FROM (1)TO(100);\n>>> |TRUNCATE p; INSERT INTO p SELECT 1, a/100, a/100 FROM generate_series(1,999)a;\n>>> |CREATE STATISTICS pp ON (a),(b) FROM p;\n>>> |VACUUM ANALYZE p;\n>>> |SELECT * FROM pg_statistic_ext WHERE stxrelid ='p'::regclass;\n>>>\n>>> |postgres=# begin; DROP STATISTICS pp; explain analyze SELECT a,b FROM p GROUP BY 1,2; abort;\n>>> | HashAggregate (cost=20.98..21.98 rows=100 width=8) (actual time=1.088..1.093 rows=10 loops=1)\n>>>\n>>> |postgres=# explain analyze SELECT a,b FROM p GROUP BY 1,2;\n>>> | HashAggregate (cost=20.98..21.09 rows=10 width=8) (actual time=1.082..1.086 rows=10 loops=1)\n>>>\n>>> So I think this is a regression, and extended stats should be populated for\n>>> partitioned tables - I had actually done that for some parent tables and hadn't\n>>> 
noticed that the stats objects no longer do anything.\n> ...\n>> Agreed, that seems like a regression, but I don't see how to fix that\n>> without having the extra flag in the catalog. Otherwise we can store just\n>> one version for each statistics object :-(\n> \n> Do you think it's possible to backpatch a fix to handle partitioned tables\n> specifically ?\n> \n> The \"tuple already updated\" error which I reported and which was fixed by\n> 859b3003 involved inheritence children. Since partitioned tables have no data\n> themselves, the !inh check could be relaxed. It's not totally clear to me if\n> the correct statistics would be used in that case. I suppose the wrong\n> (inherited) stats would be wrongly applied affect queries FROM ONLY a\n> partitioned table, which seems pointless to write and also hard for the\n> estimates to be far off :)\n> \n\nHmmm, maybe. To prevent the \"tuple concurrently updated\" we must ensure \nwe never build stats with and without inheritance at the same time (for \nthe same rel). The 859b3003de ensures that by only building extended \nstats in the (!inh) case, but we might tweak that based on relkind. See \nthe attached patch. But I wonder if there are cases that might be hurt \nby this - that'd be a regression too, of course.\n\n>> Attached is a PoC that I quickly bashed together today. It's pretty raw, but\n>> it passed \"make check\" and I think it does most of the things right. Can you\n>> try if this fixes the estimates with partitioned tables?\n> \n> I think pg_stats_ext_exprs also needs to expose the inherited flag.\n> \n\nYeah, I only did the bare minimum to get the PoC working. I'm sure there \nare various other loose ends.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 25 Sep 2021 23:01:21 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 11:01:21PM +0200, Tomas Vondra wrote:\n> > Do you think it's possible to backpatch a fix to handle partitioned tables\n> > specifically ?\n> > \n> > The \"tuple already updated\" error which I reported and which was fixed by\n> > 859b3003 involved inheritence children. Since partitioned tables have no data\n> > themselves, the !inh check could be relaxed. It's not totally clear to me if\n> > the correct statistics would be used in that case. I suppose the wrong\n> > (inherited) stats would be wrongly applied affect queries FROM ONLY a\n> > partitioned table, which seems pointless to write and also hard for the\n> > estimates to be far off :)\n> \n> Hmmm, maybe. To prevent the \"tuple concurrently updated\" we must ensure we\n> never build stats with and without inheritance at the same time (for the\n> same rel). The 859b3003de ensures that by only building extended stats in\n> the (!inh) case, but we might tweak that based on relkind. See the attached\n> patch. 
But I wonder if there are cases that might be hurt by this - that'd\n> be a regression too, of course.\n\nI think we should leave the inheritance case alone, since it hasn't changed in\n2 years, and building stats on the table ONLY is a legitimate interpretation,\nand it's as good as we can do without the catalog change.\n\nBut the partitioned case used to work, and there's no utility in selecting FROM\nONLY a partitioned table, so we might as well build the stats including its\npartitions.\n\nI don't think anything would get worse for the partitioned case.\nObviously building inherited ext stats could change plans - that's the point.\nIt's weird that stats objects which existed for 18 months would only be\n\"built\" after the patch was applied, but not so weird that the release notes\nwouldn't be ample documentation.\n\nIf building statistics caused the plan to change undesirably, the solution\nwould be to drop the stats object, of course.\n\n+ build_ext_stats = (onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) ? inh : (!inh); \n\nIt's weird to build inherited extended stats for partitioned tables but not for\ninheritance parents. We could be clever and say \"build inherited ext stats for\ninheritance parents only if we didn't insert any stats for the table itself\n(because it's empty)\". But I think that's fragile: a single tuple in the\nparent table could cause stats to be built there instead of on its hierarchy,\nand the extended stats would be used for *both* FROM and FROM ONLY, which is an\nawful combination.\n\nSince do_analyze_rel is only called once for partitioned tables, I think you\ncould write that as:\n\n/* Do not build inherited stats (since the catalog cannot support it) except\n * for partitioned tables, for which numrows==0 and have no non-inherited stats */\nbuild_ext_stats = !inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE;\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Sep 2021 16:46:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "(Resending with -hackers)\n\nIt seems like your patch should also check \"inh\" in examine_variable and\nstatext_expressions_load.\n\nWhich leads to another issue in stable branches:\n\nANALYZE builds only non-inherited stats, but they're incorrectly used for\ninherited queries - the rowcount estimate is worse on inheritance parents with\nextended stats than without.\n\n CREATE TABLE p(i int, j int);\n CREATE TABLE p1() INHERITS(p);\n INSERT INTO p SELECT a, a/10 FROM generate_series(1,9)a;\n INSERT INTO p1 SELECT a, a FROM generate_series(1,999)a;\n CREATE STATISTICS ps ON i,j FROM p;\n VACUUM ANALYZE p,p1;\n\npostgres=# explain analyze SELECT * FROM p GROUP BY 1,2;\n HashAggregate (cost=26.16..26.25 rows=9 width=8) (actual time=2.571..3.282 rows=1008 loops=1)\n\npostgres=# begin; DROP STATISTICS ps; explain analyze SELECT * FROM p GROUP BY 1,2; rollback;\n HashAggregate (cost=26.16..36.16 rows=1000 width=8) (actual time=2.167..2.872 rows=1008 loops=1)\n\nI guess examine_variable() should have logic corresponding to the hardcoded\n!inh in analyze.c.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Sep 2021 17:31:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 9/25/21 11:46 PM, Justin Pryzby wrote:\n> On Sat, Sep 25, 2021 at 11:01:21PM +0200, Tomas Vondra wrote:\n>>> Do you think it's possible to backpatch a fix to handle partitioned tables\n>>> specifically ?\n>>>\n>>> The \"tuple already updated\" error which I reported and which was fixed by\n>>> 859b3003 involved inheritence children. Since partitioned tables have no data\n>>> themselves, the !inh check could be relaxed. It's not totally clear to me if\n>>> the correct statistics would be used in that case. I suppose the wrong\n>>> (inherited) stats would be wrongly applied affect queries FROM ONLY a\n>>> partitioned table, which seems pointless to write and also hard for the\n>>> estimates to be far off :)\n>>\n>> Hmmm, maybe. To prevent the \"tuple concurrently updated\" we must ensure we\n>> never build stats with and without inheritance at the same time (for the\n>> same rel). The 859b3003de ensures that by only building extended stats in\n>> the (!inh) case, but we might tweak that based on relkind. See the attached\n>> patch. 
But I wonder if there are cases that might be hurt by this - that'd\n>> be a regression too, of course.\n> \n> I think we should leave the inheritance case alone, since it hasn't changed in\n> 2 years, and building stats on the table ONLY is a legitimate interpretation,\n> and it's as good as we can do without the catalog change.\n> \n> But the partitioned case used to work, and there's no utility in selecting FROM\n> ONLY a partitioned table, so we might as well build the stats including its\n> partitions.\n> \n> I don't think anything would get worse for the partitioned case.\n> Obviously building inherited ext stats could change plans - that's the point.\n> It's weird that the stats objects which existed for 18 months before being\n> \"built\" after the patch was applied, but no so weird that the release notes\n> wouldn't be ample documentation.\n> \n\nAgreed.\n\n> If building statistics caused the plan to change undesirably, the solution\n> would be to drop the stats object, of course.\n> \n> + build_ext_stats = (onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE) ? inh : (!inh); \n> \n> It's weird to build inherited extended stats for partitioned tables but not for\n> inheritence parents. We could be clever and say \"build inherited ext stats for\n> inheritence parents only if we didn't insert any stats for the table itself\n> (because it's empty)\". But I think that's fragile: a single tuple in the\n> parent table could cause stats to be built there instead of on its heirarchy,\n> and the extended stats would be used for *both* FROM and FROM ONLY, which is an\n> awful combination.\n> \n\nI don't think there's a good way to check if there are any rows in the\nparent relation. 
And even then, a single row might cause huge changes to\nquery plans (essentially switching to very different stats).\n\n> Since do_analyze_rel is only called once for partitioned tables, I think you\n> could write that as:\n> \n> /* Do not build inherited stats (since the catalog cannot support it) except\n> * for partitioned tables, for which numrows==0 and have no non-inherited stats */\n> build_ext_stats = !inh || onerel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE;\n> \n\nGood point.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 26 Sep 2021 13:33:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n> It seems like your patch should also check \"inh\" in examine_variable and\n> statext_expressions_load.\n\nI tried adding that - I mostly kept my patches separate.\nHopefully this is more helpful than a complication.\nI added at: https://commitfest.postgresql.org/35/3332/\n\n+ /* create only the \"stxdinherit=false\", because that always exists */\n+ datavalues[Anum_pg_statistic_ext_data_stxdinherit - 1] = ObjectIdGetDatum(false);\n\nThat'd be confusing for partitioned tables, no?\nThey'd always have a row with no data.\nI guess it could be stxdinherit = BoolGetDatum(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE).\n(not ObjectIdGetDatum).\nThen, that affects the loops which delete the tuples - neither inh nor !inh is\nguaranteed, unless you check relkind there, too.\n\nBTW, you'd need to add an \"inherited\" column to \\\\dX if you added the \"built\"\ndata back.\n\nAlso, I think in backbranches we should document what's being stored in\npg_statistic_ext, since it's pretty unintuitive:\n - non-inherited stats (FROM ONLY) for inheritance parents;\n - inherited stats (FROM *) for partitioned tables;\n\nI think the !inh decision in 859b3003de was basically backwards.\nI think it'd be rare for someone to put extended stats on a parent for\nimproving plans involving FROM ONLY.\n\nBut it's not worth trying to fix now, since it would change plans in\nirreversible ways. Also, if the stx data were already populated, users would\nhave to run a manual analyze after upgrading to populate the catalog with the\ndata the planner would expect in the new version, or else it would end up being\nthe opposite of the issue I mentioned: non-inherited stats (from before the\nupgrade) would be applied by the planner (after the upgrade) to inherited\nqueries.",
"msg_date": "Sun, 26 Sep 2021 15:25:50 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n> On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n> > It seems like your patch should also check \"inh\" in examine_variable and\n> > statext_expressions_load.\n> \n> I tried adding that - I mostly kept my patches separate.\n> Hopefully this is more helpful than a complication.\n> I added at: https://commitfest.postgresql.org/35/3332/\n> \n\nActually, this is confusing. Which patch is the one we should be\nreviewing?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 7 Oct 2021 15:26:46 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Thu, Oct 07, 2021 at 03:26:46PM -0500, Jaime Casanova wrote:\n> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n> > On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n> > > It seems like your patch should also check \"inh\" in examine_variable and\n> > > statext_expressions_load.\n> > \n> > I tried adding that - I mostly kept my patches separate.\n> > Hopefully this is more helpful than a complication.\n> > I added at: https://commitfest.postgresql.org/35/3332/\n> > \n> \n> Actually, this is confusing. Which patch is the one we should be\n> reviewing?\n\nIt is confusing, but not as much as I first thought. Please check the commit\nmessages.\n\nThe first two patches are meant to be applied to master *and* backpatched. The\nfirst one intends to fix the bug that non-inherited stats are being used for\nqueries of inheritance trees. The 2nd one fixes the regression that stats are\nnot collected for inheritance trees of partitioned tables (which is the only\ntype of stats they could ever possibly have).\n\nAnd the 3rd+4th patches (Tomas' plus my changes) allow collecting both\ninherited and non-inherited stats, only in master, since it requires a catalog\nchange. It's a bit confusing that patch #4 removes most of what I added in\npatches 1 and 2. But that's exactly what's needed to collect and apply both\ninherited and non-inherited stats: the first two patches avoid applying stats\ncollected with the wrong inheritance. That's also what's needed for the\npatchset to follow the normal \"apply to master and backpatch\" process, rather\nthan 2 patches which are backpatched but not applied to master, and one which\nis applied to master and not backpatched.\n\n@Tomas: I just found commit 427c6b5b9, which is a remarkably similar issue\naffecting column stats 15 years ago.\n\nRebased since there were conflicts with my typo fixes.\n\n-- \nJustin",
"msg_date": "Thu, 7 Oct 2021 17:45:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 10/8/21 12:45 AM, Justin Pryzby wrote:\n> On Thu, Oct 07, 2021 at 03:26:46PM -0500, Jaime Casanova wrote:\n>> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n>>> On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n>>>> It seems like your patch should also check \"inh\" in examine_variable and\n>>>> statext_expressions_load.\n>>>\n>>> I tried adding that - I mostly kept my patches separate.\n>>> Hopefully this is more helpful than a complication.\n>>> I added at: https://commitfest.postgresql.org/35/3332/\n>>>\n>>\n>> Actually, this is confusing. Which patch is the one we should be\n>> reviewing?\n> \n> It is confusing, but not as much as I first thought. Please check the commit\n> messages.\n> \n> The first two patches are meant to be applied to master *and* backpatched. The\n> first one intends to fixes the bug that non-inherited stats are being used for\n> queries of inheritance trees. The 2nd one fixes the regression that stats are\n> not collected for inheritence trees of partitioned tables (which is the only\n> type of stats they could ever possibly have).\n> \n\nI think 0001 and 0002 seem mostly fine, but it seems a bit strange to do\nthe (!rte->inh) check in the rel->statlist loops. AFAICS both places\ncould do that right at the beginning, because it does not depend on the\nstatistics object at all, just the RelOptInfo.\n\n> And the 3rd+4th patches (Tomas' plus my changes) allow collecting both\n> inherited and non-inherited stats, only in master, since it requires a catalog\n> change. It's a bit confusing that patch #4 removes most what I added in\n> patches 1 and 2. But that's exactly what's needed to collect and apply both\n> inherited and non-inherited stats: the first two patches avoid applying stats\n> collected with the wrong inheritence. 
That's also what's needed for the\n> patchset to follow the normal \"apply to master and backpatch\" process, rather\n> than 2 patches which are backpatched but not applied to master, and one which\n> is applied to master and not backpatched..\n> \n\nYeah. At first I was a bit confused because after applying 0003 there\nare both the fixes and the \"correct\" way, but then I realized 0004\nremoves the unnecessary bits.\n\nThe one thing 0003 still needs is to rework the places that need to\ntouch both inh and !inh stats. The patch simply does\n\n for (inh = 0; inh <= 1; inh++) { ... }\n\nbut that feels a bit too hackish. But if we don't know which of the two\nstats exist, I'm not sure what to do about it. And I'm not sure we do\nthe right thing after removing children, for example (that should drop\nthe inheritance stats, I guess).\n\nThe 1:2 mapping between pg_statistic_ext and pg_statistic_ext_data is a\nbit strange, but I can't think of a better way.\n\n\n> @Tomas: I just found commit 427c6b5b9, which is a remarkably similar issue\n> affecting column stats 15 years ago.\n> \n\nWhat can I say? The history repeats itself ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 3 Nov 2021 23:48:44 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Wed, Nov 03, 2021 at 11:48:44PM +0100, Tomas Vondra wrote:\n> On 10/8/21 12:45 AM, Justin Pryzby wrote:\n> > On Thu, Oct 07, 2021 at 03:26:46PM -0500, Jaime Casanova wrote:\n> >> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n> >>> On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n> >>>> It seems like your patch should also check \"inh\" in examine_variable and\n> >>>> statext_expressions_load.\n> >>>\n> >>> I tried adding that - I mostly kept my patches separate.\n> >>> Hopefully this is more helpful than a complication.\n> >>> I added at: https://commitfest.postgresql.org/35/3332/\n> >>>\n> >>\n> >> Actually, this is confusing. Which patch is the one we should be\n> >> reviewing?\n> > \n> > It is confusing, but not as much as I first thought. Please check the commit\n> > messages.\n> > \n> > The first two patches are meant to be applied to master *and* backpatched. The\n> > first one intends to fixes the bug that non-inherited stats are being used for\n> > queries of inheritance trees. The 2nd one fixes the regression that stats are\n> > not collected for inheritence trees of partitioned tables (which is the only\n> > type of stats they could ever possibly have).\n> \n> I think 0001 and 0002 seem mostly fine, but it seems a bit strange to do\n> the (!rte->inh) check in the rel->statlist loops. AFAICS both places\n> could do that right at the beginning, because it does not depend on the\n> statistics object at all, just the RelOptInfo.\n\nI probably did this to make the code change small, to avoid indenting the whole\nblock.\n\n> > And the 3rd+4th patches (Tomas' plus my changes) allow collecting both\n> > inherited and non-inherited stats, only in master, since it requires a catalog\n> > change. It's a bit confusing that patch #4 removes most what I added in\n> > patches 1 and 2. 
But that's exactly what's needed to collect and apply both\n> > inherited and non-inherited stats: the first two patches avoid applying stats\n> > collected with the wrong inheritence. That's also what's needed for the\n> > patchset to follow the normal \"apply to master and backpatch\" process, rather\n> > than 2 patches which are backpatched but not applied to master, and one which\n> > is applied to master and not backpatched..\n> > \n> \n> Yeah. Af first I was a bit confused because after applying 0003 there\n> are both the fixes and the \"correct\" way, but then I realized 0004\n> removes the unnecessary bits.\n\nThis was to leave your 0003 (mostly) unchanged, so you can see and/or apply my\nchanges. They should be squished together.\n\n> The one thing 0003 still needs is to rework the places that need to\n> touch both inh and !inh stats. The patch simply does\n> \n> for (inh = 0; inh <= 1; inh++) { ... }\n> \n> but that feels a bit too hackish. But if we don't know which of the two\n> stats exist, I'm not sure what to do about it. 
\n\nThere's also this:\n\nOn Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n> + /* create only the \"stxdinherit=false\", because that always exists */\n> + datavalues[Anum_pg_statistic_ext_data_stxdinherit - 1] = ObjectIdGetDatum(false);\n> \n> That'd be confusing for partitioned tables, no?\n> They'd always have an row with no data.\n> I guess it could be stxdinherit = BoolGetDatum(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE).\n> (not ObjectIdGetDatum).\n> Then, that affects the loops which delete the tuples - neither inh nor !inh is\n> guaranteed, unless you check relkind there, too.\n\nMaybe the for inh<=1 loop should instead be two calls to new functions factored\nout of get_relation_statistics() and RemoveStatisticsById(), which take \"bool\ninh\".\n\n> And I'm not sure we do the right thing after removing children, for example\n> (that should drop the inheritance stats, I guess).\n\nDo you mean for inheritance only ? Or partitions too ?\nI think for partitions, the stats should stay.\nAnd for inheritence, they can stay, for consistency with partitions, and since\nit does no harm.\n\n\n",
"msg_date": "Wed, 3 Nov 2021 18:19:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "\n\nOn 11/4/21 12:19 AM, Justin Pryzby wrote:\n> On Wed, Nov 03, 2021 at 11:48:44PM +0100, Tomas Vondra wrote:\n>> On 10/8/21 12:45 AM, Justin Pryzby wrote:\n>>> On Thu, Oct 07, 2021 at 03:26:46PM -0500, Jaime Casanova wrote:\n>>>> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n>>>>> On Sat, Sep 25, 2021 at 05:31:52PM -0500, Justin Pryzby wrote:\n>>>>>> It seems like your patch should also check \"inh\" in examine_variable and\n>>>>>> statext_expressions_load.\n>>>>>\n>>>>> I tried adding that - I mostly kept my patches separate.\n>>>>> Hopefully this is more helpful than a complication.\n>>>>> I added at: https://commitfest.postgresql.org/35/3332/\n>>>>>\n>>>>\n>>>> Actually, this is confusing. Which patch is the one we should be\n>>>> reviewing?\n>>>\n>>> It is confusing, but not as much as I first thought. Please check the commit\n>>> messages.\n>>>\n>>> The first two patches are meant to be applied to master *and* backpatched. The\n>>> first one intends to fixes the bug that non-inherited stats are being used for\n>>> queries of inheritance trees. The 2nd one fixes the regression that stats are\n>>> not collected for inheritence trees of partitioned tables (which is the only\n>>> type of stats they could ever possibly have).\n>>\n>> I think 0001 and 0002 seem mostly fine, but it seems a bit strange to do\n>> the (!rte->inh) check in the rel->statlist loops. AFAICS both places\n>> could do that right at the beginning, because it does not depend on the\n>> statistics object at all, just the RelOptInfo.\n> \n> I probably did this to make the code change small, to avoid indentin the whole\n> block.\n\nBut indenting the block is not necessary. 
It's possible to do something\nlike this:\n\n if (!rel->inh)\n return 1.0;\n\nor whatever is the \"default\" result for that function.\n\n> \n>>> And the 3rd+4th patches (Tomas' plus my changes) allow collecting both\n>>> inherited and non-inherited stats, only in master, since it requires a catalog\n>>> change. It's a bit confusing that patch #4 removes most what I added in\n>>> patches 1 and 2. But that's exactly what's needed to collect and apply both\n>>> inherited and non-inherited stats: the first two patches avoid applying stats\n>>> collected with the wrong inheritence. That's also what's needed for the\n>>> patchset to follow the normal \"apply to master and backpatch\" process, rather\n>>> than 2 patches which are backpatched but not applied to master, and one which\n>>> is applied to master and not backpatched..\n>>>\n>>\n>> Yeah. Af first I was a bit confused because after applying 0003 there\n>> are both the fixes and the \"correct\" way, but then I realized 0004\n>> removes the unnecessary bits.\n> \n> This was to leave your 0003 (mostly) unchanged, so you can see and/or apply my\n> changes. They should be squished together.\n> \n\nYep.\n\n>> The one thing 0003 still needs is to rework the places that need to\n>> touch both inh and !inh stats. The patch simply does\n>>\n>> for (inh = 0; inh <= 1; inh++) { ... }\n>>\n>> but that feels a bit too hackish. But if we don't know which of the two\n>> stats exist, I'm not sure what to do about it. 
\n> \n> There's also this:\n> \n> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n>> + /* create only the \"stxdinherit=false\", because that always exists */\n>> + datavalues[Anum_pg_statistic_ext_data_stxdinherit - 1] = ObjectIdGetDatum(false);\n>>\n>> That'd be confusing for partitioned tables, no?\n>> They'd always have an row with no data.\n>> I guess it could be stxdinherit = BoolGetDatum(rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE).\n>> (not ObjectIdGetDatum).\n>> Then, that affects the loops which delete the tuples - neither inh nor !inh is\n>> guaranteed, unless you check relkind there, too.\n> \n> Maybe the for inh<=1 loop should instead be two calls to new functions factored\n> out of get_relation_statistics() and RemoveStatisticsById(), which take \"bool\n> inh\".\n> \n\nWell, yeah. That's part of the strange 1:2 mapping between the stats\ndefinition and data. Although, even with regular stats we have such\nmapping, except the \"definition\" is the pg_attribute row.\n\n>> And I'm not sure we do the right thing after removing children, for example\n>> (that should drop the inheritance stats, I guess).\n> \n> Do you mean for inheritance only ? Or partitions too ?\n> I think for partitions, the stats should stay.\n> And for inheritence, they can stay, for consistency with partitions, and since\n> it does no harm.\n> \n\nI think the behavior should be the same as for data in pg_statistic,\ni.e. if we keep/remove those, we should do the same thing for extended\nstatistics.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 4 Nov 2021 00:44:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Thu, Nov 04, 2021 at 12:44:45AM +0100, Tomas Vondra wrote:\n> > I probably did this to make the code change small, to avoid indentin the whole\n> > block.\n> \n> But indenting the block is not necessary. It's possible to do something\n> like this:\n> \n> if (!rel->inh)\n> return 1.0;\n> \n> or whatever is the \"default\" result for that function.\n\nYou're right. I did it like that, except in examine_variable, which already does\nit with \"break\".\n\n> > Maybe the for inh<=1 loop should instead be two calls to new functions factored\n> > out of get_relation_statistics() and RemoveStatisticsById(), which take \"bool\n> > inh\".\n\nI did it like that in a separate patch for now.\nAnd I avoided making a !inh tuple for partitioned tables, since they're never\npopulated.\n\n> >> And I'm not sure we do the right thing after removing children, for example\n> >> (that should drop the inheritance stats, I guess).\n> > \n> > Do you mean for inheritance only ? Or partitions too ?\n> > I think for partitions, the stats should stay.\n> > And for inheritence, they can stay, for consistency with partitions, and since\n> > it does no harm.\n> > \n> \n> I think the behavior should be the same as for data in pg_statistic,\n> i.e. if we keep/remove those, we should do the same thing for extended\n> statistics.\n\nThis works for column stats the way I proposed for extended stats: child stats\nare never removed, neither when the only child is dropped, nor when re-running\nANALYZE (actually, that part is odd).\n\nI can stop sending patches if it makes it hard to reconcile, but I wanted to\nput it \"on paper\" to see/show what the patch series would look like, for v15\nand back branches.\n\n-- \nJustin",
"msg_date": "Wed, 3 Nov 2021 21:20:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Thu, Nov 04, 2021 at 12:44:45AM +0100, Tomas Vondra wrote:\n> >> And I'm not sure we do the right thing after removing children, for example\n> >> (that should drop the inheritance stats, I guess).\n\n> > Do you mean for inheritance only ? Or partitions too ?\n> > I think for partitions, the stats should stay.\n> > And for inheritence, they can stay, for consistency with partitions, and since\n> > it does no harm.\n> \n> I think the behavior should be the same as for data in pg_statistic,\n> i.e. if we keep/remove those, we should do the same thing for extended\n> statistics.\n\nThat works for column stats the way I proposed for extended stats: child stats\nare never removed, neither when the only child is dropped, nor when re-running\nanalyze (that part is actually a bit odd).\n\nRebased, fixing an intermediate compile error, and typos in the commit message.\n\n-- \nJustin",
"msg_date": "Thu, 2 Dec 2021 23:24:11 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Thu, Dec 2, 2021 at 9:24 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Nov 04, 2021 at 12:44:45AM +0100, Tomas Vondra wrote:\n> > >> And I'm not sure we do the right thing after removing children, for\n> example\n> > >> (that should drop the inheritance stats, I guess).\n>\n> > > Do you mean for inheritance only ? Or partitions too ?\n> > > I think for partitions, the stats should stay.\n> > > And for inheritence, they can stay, for consistency with partitions,\n> and since\n> > > it does no harm.\n> >\n> > I think the behavior should be the same as for data in pg_statistic,\n> > i.e. if we keep/remove those, we should do the same thing for extended\n> > statistics.\n>\n> That works for column stats the way I proposed for extended stats: child\n> stats\n> are never removed, neither when the only child is dropped, nor when\n> re-running\n> analyze (that part is actually a bit odd).\n>\n> Rebased, fixing an intermediate compile error, and typos in the commit\n> message.\n>\n> --\n> Justin\n>\nHi,\n\n+ if (!HeapTupleIsValid(tup)) /* should not happen */\n+ // elog(ERROR, \"cache lookup failed for statistics data %u\",\nstatsOid);\n\nYou may want to remove commented out code.\n\n+ for (i = 0; i < staForm->stxkeys.dim1; i++)\n+ keys = bms_add_member(keys, staForm->stxkeys.values[i]);\n\nSince the above code is in a loop now, should keys be cleared across the\nouter loop iterations ?\n\nCheers",
"msg_date": "Fri, 3 Dec 2021 09:15:28 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "Hi,\n\nAttached is a rebased and cleaned-up version of these patches, with more\ncomments, refactorings etc. Justin and Zhihong, can you take a look?\n\n\n0001 - Ignore extended statistics for inheritance trees\n\n0002 - Build inherited extended stats on partitioned tables\n\nThose are mostly just Justin's patches, with more detailed comments and\nupdated commit message. I've considered moving the rel->inh check to\nstatext_clauselist_selectivity, and then removing the check from\ndependencies and MCV. But I decided not to do that, because someone might\nbe calling those functions directly (even if that's very unlikely).\n\nThe one thing bugging me a bit is that the regression test checks only a\nGROUP BY query. It'd be nice to add queries testing MCV/dependencies\ntoo, but that seems tricky because most queries will use per-partition\nstats.\n\n\n0003 - Add stxdinherit flag to pg_statistic_ext_data\n\nThis is the patch for master, allowing us to build stats for both inherits\nflag values. It adds the flag to the pg_stats_ext_exprs view too, and reworks\nhow we deal with iterating both flags etc. I've adopted most of\nJustin's fixup patches, except that in plancat.c I've refactored how we\nload the stats to process keys/expressions just once.\n\nIt has the same issue with the regression test using just a GROUP BY query,\nbut if we add a test to 0001/0002, that'll fix this too.\n\n\n0004 - Refactor parent ACL check\n\nNot sure about this - I doubt saving 30 rows in an 8kB file is really\nworth it. Maybe it is, but then maybe we should try cleaning up the\nother ACL checks in this file too? Seems mostly orthogonal to this\nthread, though.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 12 Dec 2021 05:17:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Dec 11, 2021 at 8:17 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> Attached is a rebased and cleaned-up version of these patches, with more\n> comments, refactorings etc. Justin and Zhihong, can you take a look?\n>\n>\n> 0001 - Ignore extended statistics for inheritance trees\n>\n> 0002 - Build inherited extended stats on partitioned tables\n>\n> Those are mostly just Justin's patches, with more detailed comments and\n> updated commit message. I've considered moving the rel->inh check to\n> statext_clauselist_selectivity, and then removing the check from\n> dependencies and MCV. But I decided no to do that, because someone might\n> be calling those functions directly (even if that's very unlikely).\n>\n> The one thing bugging me a bit is that the regression test checks only a\n> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> too, but that seems tricky because most queries will use per-partitions\n> stats.\n>\n>\n> 0003 - Add stxdinherit flag to pg_statistic_ext_data\n>\n> This is the patch for master, allowing to build stats for both inherits\n> flag values. It adds the flag to pg_stats_ext_exprs view to, reworked\n> how we deal with iterating both flags etc. I've adopted most of the\n> Justin's fixup patches, except that in plancat.c I've refactored how we\n> load the stats to process keys/expressions just once.\n>\n> It has the same issue with regression test using just a GROUP BY query,\n> but if we add a test to 0001/0002, that'll fix this too.\n>\n>\n> 0004 - Refactor parent ACL check\n>\n> Not sure about this - I doubt saving 30 rows in an 8kB file is really\n> worth it. Maybe it is, but then maybe we should try cleaning up the\n> other ACL checks in this file too? Seems mostly orthogonal to this\n> thread, though.\n>\n>\n> Hi,\nFor patch 3, in commit message:\n\nand there no clear winner. 
-> and there is no clear winner.\n\nand it seem wasteful -> and it seems wasteful\n\nThe there may be -> There may be\n\n+ /* skip statistics with mismatching stxdinherit value */\n+ if (stat->inherit != rte->inh)\n\nShould a log be added for the above case ?\n\n+ * Determine if we'redealing with inheritance tree.\n\nThere should be a space between re and dealing.\n\nCheers\n\nregards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company",
"msg_date": "Sat, 11 Dec 2021 20:38:30 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "\n\nOn 12/12/21 05:38, Zhihong Yu wrote:\n> \n> \n> On Sat, Dec 11, 2021 at 8:17 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n> \n> Hi,\n> \n> Attached is a rebased and cleaned-up version of these patches, with more\n> comments, refactorings etc. Justin and Zhihong, can you take a look?\n> \n> \n> 0001 - Ignore extended statistics for inheritance trees\n> \n> 0002 - Build inherited extended stats on partitioned tables\n> \n> Those are mostly just Justin's patches, with more detailed comments and\n> updated commit message. I've considered moving the rel->inh check to\n> statext_clauselist_selectivity, and then removing the check from\n> dependencies and MCV. But I decided no to do that, because someone might\n> be calling those functions directly (even if that's very unlikely).\n> \n> The one thing bugging me a bit is that the regression test checks only a\n> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> too, but that seems tricky because most queries will use per-partitions\n> stats.\n> \n> \n> 0003 - Add stxdinherit flag to pg_statistic_ext_data\n> \n> This is the patch for master, allowing to build stats for both inherits\n> flag values. It adds the flag to pg_stats_ext_exprs view to, reworked\n> how we deal with iterating both flags etc. I've adopted most of the\n> Justin's fixup patches, except that in plancat.c I've refactored how we\n> load the stats to process keys/expressions just once.\n> \n> It has the same issue with regression test using just a GROUP BY query,\n> but if we add a test to 0001/0002, that'll fix this too.\n> \n> \n> 0004 - Refactor parent ACL check\n> \n> Not sure about this - I doubt saving 30 rows in an 8kB file is really\n> worth it. Maybe it is, but then maybe we should try cleaning up the\n> other ACL checks in this file too? 
Seems mostly orthogonal to this\n> thread, though.\n> \n> \n> Hi,\n> For patch 3, in commit message:\n> \n> and there no clear winner. -> and there is no clear winner. \n> \n> and it seem wasteful -> and it seems wasteful\n> \n> The there may be -> There may be\n> \n\nThanks, will fix.\n\n> + /* skip statistics with mismatching stxdinherit value */\n> + if (stat->inherit != rte->inh)\n> \n> Should a log be added for the above case ?\n> \n\nWhy should we log this? It's an entirely expected case - there's a\nmismatch between inheritance for the relation and statistics, simply\nskipping it is the right thing to do.\n\n> + * Determine if we'redealing with inheritance tree.\n> \n> There should be a space between re and dealing.\n> \n\nThanks, will fix.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 12 Dec 2021 06:14:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Dec 11, 2021 at 9:14 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n>\n>\n> On 12/12/21 05:38, Zhihong Yu wrote:\n> >\n> >\n> > On Sat, Dec 11, 2021 at 8:17 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> > wrote:\n> >\n> > Hi,\n> >\n> > Attached is a rebased and cleaned-up version of these patches, with\n> more\n> > comments, refactorings etc. Justin and Zhihong, can you take a look?\n> >\n> >\n> > 0001 - Ignore extended statistics for inheritance trees\n> >\n> > 0002 - Build inherited extended stats on partitioned tables\n> >\n> > Those are mostly just Justin's patches, with more detailed comments\n> and\n> > updated commit message. I've considered moving the rel->inh check to\n> > statext_clauselist_selectivity, and then removing the check from\n> > dependencies and MCV. But I decided no to do that, because someone\n> might\n> > be calling those functions directly (even if that's very unlikely).\n> >\n> > The one thing bugging me a bit is that the regression test checks\n> only a\n> > GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> > too, but that seems tricky because most queries will use\n> per-partitions\n> > stats.\n> >\n> >\n> > 0003 - Add stxdinherit flag to pg_statistic_ext_data\n> >\n> > This is the patch for master, allowing to build stats for both\n> inherits\n> > flag values. It adds the flag to pg_stats_ext_exprs view to, reworked\n> > how we deal with iterating both flags etc. I've adopted most of the\n> > Justin's fixup patches, except that in plancat.c I've refactored how\n> we\n> > load the stats to process keys/expressions just once.\n> >\n> > It has the same issue with regression test using just a GROUP BY\n> query,\n> > but if we add a test to 0001/0002, that'll fix this too.\n> >\n> >\n> > 0004 - Refactor parent ACL check\n> >\n> > Not sure about this - I doubt saving 30 rows in an 8kB file is really\n> > worth it. 
Maybe it is, but then maybe we should try cleaning up the\n> > other ACL checks in this file too? Seems mostly orthogonal to this\n> > thread, though.\n> >\n> >\n> > Hi,\n> > For patch 3, in commit message:\n> >\n> > and there no clear winner. -> and there is no clear winner.\n> >\n> > and it seem wasteful -> and it seems wasteful\n> >\n> > The there may be -> There may be\n> >\n>\n> Thanks, will fix.\n>\n> > + /* skip statistics with mismatching stxdinherit value */\n> > + if (stat->inherit != rte->inh)\n> >\n> > Should a log be added for the above case ?\n> >\n>\n> Why should we log this? It's an entirely expected case - there's a\n> mismatch between inheritance for the relation and statistics, simply\n> skipping it is the right thing to do.\n>\n\nHi,\nI agree that skipping should be fine (to avoid too much logging).\n\nThanks\n\n\n> > + * Determine if we'redealing with inheritance tree.\n> >\n> > There should be a space between re and dealing.\n> >\n>\n> Thanks, will fix.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>",
"msg_date": "Sun, 12 Dec 2021 05:47:37 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sat, Dec 11, 2021 at 8:17 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> Attached is a rebased and cleaned-up version of these patches, with more\n> comments, refactorings etc. Justin and Zhihong, can you take a look?\n>\n>\n> 0001 - Ignore extended statistics for inheritance trees\n>\n> 0002 - Build inherited extended stats on partitioned tables\n>\n> Those are mostly just Justin's patches, with more detailed comments and\n> updated commit message. I've considered moving the rel->inh check to\n> statext_clauselist_selectivity, and then removing the check from\n> dependencies and MCV. But I decided no to do that, because someone might\n> be calling those functions directly (even if that's very unlikely).\n>\n> The one thing bugging me a bit is that the regression test checks only a\n> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> too, but that seems tricky because most queries will use per-partitions\n> stats.\n>\n>\n> 0003 - Add stxdinherit flag to pg_statistic_ext_data\n>\n> This is the patch for master, allowing to build stats for both inherits\n> flag values. It adds the flag to pg_stats_ext_exprs view to, reworked\n> how we deal with iterating both flags etc. I've adopted most of the\n> Justin's fixup patches, except that in plancat.c I've refactored how we\n> load the stats to process keys/expressions just once.\n>\n> It has the same issue with regression test using just a GROUP BY query,\n> but if we add a test to 0001/0002, that'll fix this too.\n>\n>\n> 0004 - Refactor parent ACL check\n>\n> Not sure about this - I doubt saving 30 rows in an 8kB file is really\n> worth it. Maybe it is, but then maybe we should try cleaning up the\n> other ACL checks in this file too? 
Seems mostly orthogonal to this\n> thread, though.\n>\n> Hi,\nFor patch 1, minor comment:\n\n+ if (planner_rt_fetch(onerel->relid, root)->inh)\n\nSince the rte (RangeTblEntry*) doesn't seem to be used beyond checking inh,\nI think it would be better if the above style of checking is used\nthroughout the patch (without introducing rte variable).\n\nCheers\n\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company",
"msg_date": "Sun, 12 Dec 2021 07:37:01 -0800",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 12/12/21 14:47, Zhihong Yu wrote:\n> \n> \n> On Sat, Dec 11, 2021 at 9:14 PM Tomas Vondra \n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>> \n> wrote:\n >\n> ...\n> \n> > + /* skip statistics with mismatching stxdinherit value */\n> > + if (stat->inherit != rte->inh)\n> >\n> > Should a log be added for the above case ?\n> >\n> \n> Why should we log this? It's an entirely expected case - there's a\n> mismatch between inheritance for the relation and statistics, simply\n> skipping it is the right thing to do.\n> \n> \n> Hi,\n> I agree that skipping should be fine (to avoid too much logging).\n> \n\nI'm not sure it's related to the amount of logging, really. It'd be just \nnoise without any practical use, even for debugging purposes. If you \nhave an inheritance tree, it'll automatically have one set of statistics \nfor inh=true and one for inh=false. And this condition will always skip \none of those, depending on what query is being estimated.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 12 Dec 2021 18:45:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 12/12/21 16:37, Zhihong Yu wrote:\n> \n> Hi,\n> For patch 1, minor comment:\n> \n> + if (planner_rt_fetch(onerel->relid, root)->inh)\n> \n> Since the rte (RangeTblEntry*) doesn't seem to be used beyond checking \n> inh, I think it would be better if the above style of checking is used \n> throughout the patch (without introducing rte variable).\n> \n\nIt's mostly a matter of personal taste, but I always found this style of \ncondition (i.e. dereferencing a pointer returned by a function) much \nless readable. It's hard to parse what exactly is happening, what struct \ntype are we dealing with, etc. YMMV but the separate variable makes it \nmuch clearer for me. And I'd expect the compilers to produce pretty much \nthe same code too for those cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 12 Dec 2021 18:52:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "+ * XXX Can't we simply look at rte->inh?\n+ */\n+ inh = root->append_rel_array == NULL ? false :\n+ root->append_rel_array[onerel->relid]->parent_relid != 0;\n\nI think so. That's what I came up with while trying to figure this out, and\nit's no great surprise that it needed to be cleaned up - thanks.\n\nIn your 0003 patch, the \"if inh: break\" isn't removed from examine_variable(),\nbut the corresponding thing is removed everywhere else.\n\nIn 0003, mcv_clauselist_selectivity still uses simple_rte_array rather than\nrt_fetch.\n\nThe regression tests changed as a result of not populating stx_data; I think\nit may be better to update like this:\n\nSELECT stxname, stxdndistinct, stxddependencies, stxdmcv, stxoid IS NULL\n FROM pg_statistic_ext s LEFT JOIN pg_statistic_ext_data d\n ON d.stxoid = s.oid\n WHERE s.stxname = 'ab1_a_b_stats';\n\nThere's this part about documentation for the changes in backbranches:\n\nOn Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n> Also, I think in backbranches we should document what's being stored in\n> pg_statistic_ext, since it's pretty unintuitive:\n> - noninherted stats (FROM ONLY) for inheritence parents;\n> - inherted stats (FROM *) for partitioned tables;\n\nspellcheck: inheritence should be inheritance.\n\nAll for now. I'm going to update the regression tests for dependencies and the\nother code paths.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Dec 2021 11:52:56 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 12/12/21 16:37, Zhihong Yu wrote:\n>> Since the rte (RangeTblEntry*) doesn't seem to be used beyond checking \n>> inh, I think it would be better if the above style of checking is used \n>> throughout the patch (without introducing rte variable).\n\n> It's mostly a matter of personal taste, but I always found this style of \n> condition (i.e. dereferencing a pointer returned by a function) much \n> less readable. It's hard to parse what exactly is happening, what struct \n> type are we dealing with, etc. YMMV but the separate variable makes it \n> much clearer for me. And I'd expect the compilers to produce pretty much \n> the same code too for those cases.\n\nFWIW, I agree. Also, it's possible that future patches would create a\nneed to touch the RTE again nearby, in which case having the variable\nmakes it easier to write non-crummy code for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 12 Dec 2021 13:27:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 12/12/21 18:52, Justin Pryzby wrote:\n> + * XXX Can't we simply look at rte->inh?\n> + */\n> + inh = root->append_rel_array == NULL ? false :\n> + root->append_rel_array[onerel->relid]->parent_relid != 0;\n> \n> I think so. That's what I came up with while trying to figured this out, and\n> it's no great surprise that it needed to be cleaned up - thanks.\n> \n\nOK, fixed.\n\n> In your 0003 patch, the \"if inh: break\" isn't removed from examine_variable(),\n> but the corresponding thing is removed everywhere else.\n> \n\nAh, you're right. And it wasn't updated in the 0002 patch either - it\nshould do the relkind check too, to allow partitioned tables. Fixed.\n\n> In 0003, mcv_clauselist_selectivity still uses simple_rte_array rather than\n> rt_fetch.\n> \n\nThat's mostly a conscious choice, so that I don't have to include\nparsetree.h. But maybe that'd be better ...\n\n> The regression tests changed as a result of not populating stx_data; I think\n> it's may be better to update like this:\n> \n> SELECT stxname, stxdndistinct, stxddependencies, stxdmcv, stxoid IS NULL\n> FROM pg_statistic_ext s LEFT JOIN pg_statistic_ext_data d\n> ON d.stxoid = s.oid\n> WHERE s.stxname = 'ab1_a_b_stats';\n> \n\nNot sure I understand. Why would this be better than inner join?\n\n> There's this part about documentation for the changes in backbranches:\n> \n> On Sun, Sep 26, 2021 at 03:25:50PM -0500, Justin Pryzby wrote:\n>> Also, I think in backbranches we should document what's being stored in\n>> pg_statistic_ext, since it's pretty unintuitive:\n>> - noninherted stats (FROM ONLY) for inheritence parents;\n>> - inherted stats (FROM *) for partitioned tables;\n> \n> spellcheck: inheritence should be inheritance.\n> \n\nThanks, fixed. Can you read through the commit messages and check the\nattribution is correct for all the patches?\n\n> All for now. 
I'm going to update the regression tests for dependencies and the\n> other code paths.\n> \n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 12 Dec 2021 22:29:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sun, Dec 12, 2021 at 05:17:10AM +0100, Tomas Vondra wrote:\n> The one thing bugging me a bit is that the regression test checks only a\n> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> too, but that seems tricky because most queries will use per-partitions\n> stats.\n\nYou mean because the quals are pushed down to the scan node.\n\nDoes that indicate a deficiency ?\n\nIf extended stats are collected for a parent table, selectivity estimates based\non the parent would be better; but instead we use uncorrected column\nestimates from the child tables.\n\n From what I see, we could come up with a way to avoid the pushdown, involving\nvolatile functions/foreign tables/RLS/window functions/SRF/wholerow vars/etc.\n\nBut would it be better if extended stats objects on partitioned tables were to\ncollect stats for both parent AND CHILD ? I'm not sure. Maybe that's the\nwrong solution, but maybe we should still document that extended stats on\n(empty) parent tables are often themselves not used/useful for selectivity\nestimates, and the user should instead (or in addition) create stats on child\ntables.\n\nOr, maybe if there's no extended stats on the child tables, stats on the parent\ntable should be consulted ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Dec 2021 15:32:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sun, Dec 12, 2021 at 10:29:39PM +0100, Tomas Vondra wrote:\n> On 12/12/21 18:52, Justin Pryzby wrote:\n> That's mostly a conscious choice, so that I don't have to include\n> parsetree.h. But maybe that'd be better ...\n> \n> > The regression tests changed as a result of not populating stx_data; I think\n> > it's may be better to update like this:\n> > \n> > SELECT stxname, stxdndistinct, stxddependencies, stxdmcv, stxoid IS NULL\n> > FROM pg_statistic_ext s LEFT JOIN pg_statistic_ext_data d\n> > ON d.stxoid = s.oid\n> > WHERE s.stxname = 'ab1_a_b_stats';\n> > \n> \n> Not sure I understand. Why would this be better than inner join?\n\nIt shows that there's an entry in pg_statistic_ext and not one in ext_data,\nrather than that it's not in at least one of the catalogs. Which is nice to\nshow since as you say it's no longer 1:1.\n\n> Thanks, fixed. Can you read through the commit messages and check the\n> attribution is correct for all the patches?\n\nSeems fine.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 12 Dec 2021 15:49:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "\n\nOn 12/12/21 22:32, Justin Pryzby wrote:\n> On Sun, Dec 12, 2021 at 05:17:10AM +0100, Tomas Vondra wrote:\n>> The one thing bugging me a bit is that the regression test checks only a\n>> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n>> too, but that seems tricky because most queries will use per-partitions\n>> stats.\n> \n> You mean because the quals are pushed down to the scan node.\n> \n> Does that indicate a deficiency ?\n> \n> If extended stats are collected for a parent table, selectivity estimates based\n> from the parent would be better; but instead we use uncorrected column\n> estimates from the child tables.\n> \n> From what I see, we could come up with a way to avoid the pushdown, involving\n> volatile functions/foreign tables/RLS/window functions/SRF/wholerow vars/etc.\n>\n> But would it be better if extended stats objects on partitioned\n> tables were to\n> collect stats for both parent AND CHILD ? I'm not sure. Maybe that's the\n> wrong solution, but maybe we should still document that extended stats on\n> (empty) parent tables are often themselves not used/useful for selectivity\n> estimates, and the user should instead (or in addition) create stats on child\n> tables.\n> \n> Or, maybe if there's no extended stats on the child tables, stats on the parent\n> table should be consulted ?\n> \n\nMaybe, but that seems like a mostly separate improvement. At this point \nI'm interested only in testing the behavior implemented in the current \npatches.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 12 Dec 2021 23:23:19 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sun, Dec 12, 2021 at 10:29:39PM +0100, Tomas Vondra wrote:\n> > In your 0003 patch, the \"if inh: break\" isn't removed from examine_variable(),\n> > but the corresponding thing is removed everywhere else.\n> \n> Ah, you're right. And it wasn't updated in the 0002 patch either - it\n> should do the relkind check too, to allow partitioned tables. Fixed.\n\nI think you fixed it in 0002 (thanks) but still wasn't removed from 0003?\n\nIn these comments:\n+ * When dealing with regular inheritance trees, ignore extended stats\n+ * (which were built without data from child rels, and thus do not\n+ * represent them). For partitioned tables data from partitions are\n+ * in the stats (and there's no data in the non-leaf relations), so\n+ * in this case we do consider extended stats.\n\nI suggest to add a comment after \"For partitioned tables\".\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 13 Dec 2021 07:48:30 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Sun, Dec 12, 2021 at 11:23:19PM +0100, Tomas Vondra wrote:\n> On 12/12/21 22:32, Justin Pryzby wrote:\n> > On Sun, Dec 12, 2021 at 05:17:10AM +0100, Tomas Vondra wrote:\n> > > The one thing bugging me a bit is that the regression test checks only a\n> > > GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n> > > too, but that seems tricky because most queries will use per-partitions\n> > > stats.\n> > \n> > You mean because the quals are pushed down to the scan node.\n> > \n> > Does that indicate a deficiency ?\n> > \n> > If extended stats are collected for a parent table, selectivity estimates based\n> > from the parent would be better; but instead we use uncorrected column\n> > estimates from the child tables.\n> > \n> > From what I see, we could come up with a way to avoid the pushdown, involving\n> > volatile functions/foreign tables/RLS/window functions/SRF/wholerow vars/etc.\n> > But would it be better if extended stats objects on partitioned tables were to\n> > collect stats for both parent AND CHILD ? I'm not sure. Maybe that's the\n> > wrong solution, but maybe we should still document that extended stats on\n> > (empty) parent tables are often themselves not used/useful for selectivity\n> > estimates, and the user should instead (or in addition) create stats on child\n> > tables.\n> > \n> > Or, maybe if there's no extended stats on the child tables, stats on the parent\n> > table should be consulted ?\n> \n> Maybe, but that seems like a mostly separate improvement. 
At this point I'm\n> interested only in testing the behavior implemented in the current patches.\n\nI don't want to change the scope of the patch, or this thread, but my point is\nthat the behaviour already changed once (the original regression) and now we're\nplanning to change it again to fix that, so we ought to decide on the expected\nbehavior before writing tests to verify it.\n\nI think it may be impossible to use the \"dependencies\" statistic with inherited\nstats. Normally the quals would be pushed down to the child tables. But, if\nthey weren't pushed down, they'd be attached to something other than a scan\nnode on the parent table, so the stats on that table wouldn't apply (right?). \n\nMaybe the useless stats types should have been prohibited on partitioned tables\nsince v10. It's too late to change that, but perhaps now they shouldn't even\nbe collected during analyze. The dependencies and MCV paths are never called\nwith rte->inh==true, so maybe we should Assert(!inh), or add a comment to that\neffect. Or the regression tests should \"memorialize\" the behavior. I'm still\nthinking about it.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 13 Dec 2021 07:53:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 12/13/21 14:48, Justin Pryzby wrote:\n> On Sun, Dec 12, 2021 at 10:29:39PM +0100, Tomas Vondra wrote:\n>>> In your 0003 patch, the \"if inh: break\" isn't removed from examine_variable(),\n>>> but the corresponding thing is removed everywhere else.\n>>\n>> Ah, you're right. And it wasn't updated in the 0002 patch either - it\n>> should do the relkind check too, to allow partitioned tables. Fixed.\n> \n> I think you fixed it in 0002 (thanks) but still wasn't removed from 0003?\n> \n\nD'oh! Those repeated rebase conflicts got me quite confused.\n\n> In these comments:\n> + * When dealing with regular inheritance trees, ignore extended stats\n> + * (which were built without data from child rels, and thus do not\n> + * represent them). For partitioned tables data from partitions are\n> + * in the stats (and there's no data in the non-leaf relations), so\n> + * in this case we do consider extended stats.\n> \n> I suggest to add a comment after \"For partitioned tables\".\n> \n\nI've reworded the comment a bit, hopefully it's a bit clearer now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 13 Dec 2021 20:52:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 12/13/21 14:53, Justin Pryzby wrote:\n> On Sun, Dec 12, 2021 at 11:23:19PM +0100, Tomas Vondra wrote:\n>> On 12/12/21 22:32, Justin Pryzby wrote:\n>>> On Sun, Dec 12, 2021 at 05:17:10AM +0100, Tomas Vondra wrote:\n>>>> The one thing bugging me a bit is that the regression test checks only a\n>>>> GROUP BY query. It'd be nice to add queries testing MCV/dependencies\n>>>> too, but that seems tricky because most queries will use per-partitions\n>>>> stats.\n>>>\n>>> You mean because the quals are pushed down to the scan node.\n>>>\n>>> Does that indicate a deficiency ?\n>>>\n>>> If extended stats are collected for a parent table, selectivity estimates based\n>>> from the parent would be better; but instead we use uncorrected column\n>>> estimates from the child tables.\n>>>\n>>> From what I see, we could come up with a way to avoid the pushdown, involving\n>>> volatile functions/foreign tables/RLS/window functions/SRF/wholerow vars/etc.\n>>> But would it be better if extended stats objects on partitioned tables were to\n>>> collect stats for both parent AND CHILD ? I'm not sure. Maybe that's the\n>>> wrong solution, but maybe we should still document that extended stats on\n>>> (empty) parent tables are often themselves not used/useful for selectivity\n>>> estimates, and the user should instead (or in addition) create stats on child\n>>> tables.\n>>>\n>>> Or, maybe if there's no extended stats on the child tables, stats on the parent\n>>> table should be consulted ?\n>>\n>> Maybe, but that seems like a mostly separate improvement. At this point I'm\n>> interested only in testing the behavior implemented in the current patches.\n> \n> I don't want to change the scope of the patch, or this thread, but my point is\n> that the behaviour already changed once (the original regression) and now we're\n> planning to change it again to fix that, so we ought to decide on the expected\n> behavior before writing tests to verify it.\n> \n\nOK. 
Makes sense.\n\n> I think it may be impossible to use the \"dependencies\" statistic with inherited\n> stats. Normally the quals would be pushed down to the child tables. But, if\n> they weren't pushed down, they'd be attached to something other than a scan\n> node on the parent table, so the stats on that table wouldn't apply (right?).\n> \n\nYeah, that's probably right. But I'm not 100% sure the whole inheritance\ntree can't be treated as a single relation by some queries ...\n\n> Maybe the useless stats types should have been prohibited on partitioned tables\n> since v10. It's too late to change that, but perhaps now they shouldn't even\n> be collected during analyze. The dependencies and MCV paths are never called\n> with rte->inh==true, so maybe we should Assert(!inh), or add a comment to that\n> effect. Or the regression tests should \"memorialize\" the behavior. I'm still\n> thinking about it.\n> \n\nYeah, we can't really prohibit them in backbranches - that'd mean some\nCREATE STATISTICS commands suddenly start failing, which would be quite\nannoying. Not building them for partitioned tables seems like a better\noption - BuildRelationExtStatistics can check relkind and pick what to\nignore. But I'd choose to do that in a separate patch, probably - after\nall, it shouldn't really change the behavior of any tests/queries, no?\n\nThis reminds me we need to consider if these patches could cause any\nissues. The way I see it is this:\n\n1) If the table is a separate relation (not part of an inheritance\ntree), this should make no difference. -> OK\n\n2) If the table is using \"old\" inheritance, this reverts back to\npre-regression behavior. So people will keep using the old statistics\nuntil the ANALYZE, and we need to tell them to ANALYZE or something.\n\n3) If the table is using partitioning, it's guaranteed to be empty and\nthere are no stats at all. 
Again, we should tell people to run ANALYZE.\n\nOf course, we can't be sure query plans will change in the right\ndirection :-(\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 13 Dec 2021 21:40:09 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 09:40:09PM +0100, Tomas Vondra wrote:\n> 1) If the table is a separate relation (not part of an inheritance\n> tree), this should make no difference. -> OK\n> \n> 2) If the table is using \"old\" inheritance, this reverts back to\n> pre-regression behavior. So people will keep using the old statistics\n> until the ANALYZE, and we need to tell them to ANALYZE or something.\n> \n> 3) If the table is using partitioning, it's guaranteed to be empty and\n> there are no stats at all. Again, we should tell people to run ANALYZE.\n\nI think these can be mentioned in the commit message, which can end up in the\nminor release notes as a recommendation to rerun ANALYZE.\n\nThanks for pushing 0001.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 14 Jan 2022 23:11:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "On 1/15/22 06:11, Justin Pryzby wrote:\n> On Mon, Dec 13, 2021 at 09:40:09PM +0100, Tomas Vondra wrote:\n>> 1) If the table is a separate relation (not part of an inheritance\n>> tree), this should make no difference. -> OK\n>>\n>> 2) If the table is using \"old\" inheritance, this reverts back to\n>> pre-regression behavior. So people will keep using the old statistics\n>> until the ANALYZE, and we need to tell them to ANALYZE or something.\n>>\n>> 3) If the table is using partitioning, it's guaranteed to be empty and\n>> there are no stats at all. Again, we should tell people to run ANALYZE.\n> \n> I think these can be mentioned in the commit message, which can end up in the\n> minor release notes as a recommendation to rerun ANALYZE.\n> \n\nGood point. I pushed the 0002 part and added a short paragraph \nsuggesting ANALYZE might be necessary. I did not go into details about \nthe individual cases, because that'd be too much for a commit message.\n\n> Thanks for pushing 0001.\n> \n\nThanks for posting the patches!\n\nI've pushed the second part - attached are the two remaining parts. I'll \nwait a bit before pushing the rebased 0001 (which goes into master \nbranch only). Not sure about 0002 - I'm not convinced the refactored ACL \nchecks are an improvement, but I'll think about it.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 15 Jan 2022 19:35:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "\n\nOn 1/15/22 19:35, Tomas Vondra wrote:\n> On 1/15/22 06:11, Justin Pryzby wrote:\n>> On Mon, Dec 13, 2021 at 09:40:09PM +0100, Tomas Vondra wrote:\n>>> 1) If the table is a separate relation (not part of an inheritance\n>>> tree), this should make no difference. -> OK\n>>>\n>>> 2) If the table is using \"old\" inheritance, this reverts back to\n>>> pre-regression behavior. So people will keep using the old statistics\n>>> until the ANALYZE, and we need to tell them to ANALYZE or something.\n>>>\n>>> 3) If the table is using partitioning, it's guaranteed to be empty and\n>>> there are no stats at all. Again, we should tell people to run ANALYZE.\n>>\n>> I think these can be mentioned in the commit message, which can end up \n>> in the\n>> minor release notes as a recommendation to rerun ANALYZE.\n>>\n> \n> Good point. I pushed the 0002 part and added a short paragraph \n> suggesting ANALYZE might be necessary. I did not go into details about \n> the individual cases, because that'd be too much for a commit message.\n> \n>> Thanks for pushing 0001.\n>>\n> \n> Thanks for posting the patches!\n> \n> I've pushed the second part - attached are the two remaining parts. I'll \n> wait a bit before pushing the rebased 0001 (which goes into master \n> branch only). Not sure about 0002 - I'm not convinced the refactored ACL \n> checks are an improvement, but I'll think about it.\n> \n\nBTW when backpatching the first part, I had to decide what to do about \ntests. The 10 & 11 branches did not have the check_estimated_rows() \nfunction. I considered removing the tests, reworking the tests not to \nneed the function, or adding the function. Ultimately I added the \nfunction, which seemed like the best option.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 15 Jan 2022 20:05:27 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
},
{
"msg_contents": "I've pushed the part adding stxdinherit flag to the catalog, so that on \nmaster we build statistics both with and without data from the child \nrelations.\n\nI'm not going to push the ACL refactoring. We have similar code on \nvarious other places (not addressed by the proposed patch), and it'd \nmake backpatching harder. So I'm not sure it'd be an improvement.\n\nIn any case, I'm going to mark this as committed. Justin, if you think \nwe should reconsider the ACL refactoring, please submit it as a separate \npatch. It seems mostly unrelated to the issue this thread was about.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 17 Jan 2022 00:44:50 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: extended stats on partitioned tables"
}
]
[
{
"msg_contents": "Hello,\n\nI've recently been working with a database containing bcrypt hashes generated by a 3rd-party which use the $2b$ prefix. This prefix was introduced in 2014 and has since been recognised by a number of bcrypt implementations. [1][2][3][4]\n\nAt the moment, pgcrypto’s `crypt` doesn’t recognise this prefix. However, simply `replace`ing the prefix with $2a$ allows crypt to validate the hashes. This patch simply adds recognition for the prefix and treats the hash identically to the $2a$ hashes.\n\nIs this a reasonable change to pgcrypto? I note that the last upstream change brought into crypt-blowfish.c was in 2011, predating this prefix. [5] Are there deeper concerns or other upstream changes that need to be addressed alongside this? Is there a better approach to this? \n\nAt the moment, the $2x$ variant is supported but not mentioned in the docs, so I haven’t included any documentation updates.\n\nThanks,\n\nDaniel\n\n\n[1]: https://marc.info/?l=openbsd-misc&m=139320023202696\n[2]: https://www.openwall.com/lists/announce/2014/08/31/1\n[3]: https://github.com/kelektiv/node.bcrypt.js/pull/549/files#diff-c55280c5e4da52b0f86244d3b95c5ae0abf2fcd121a071dba1363540875b32bc\n[4]: https://github.com/bcrypt-ruby/bcrypt-ruby/commit/d19ea481618420922b533a8b0ed049109404cb13\n[5]: https://github.com/postgres/postgres/commit/ca59dfa6f727fe3bf3a01904ec30e87f7fa5a67e",
"msg_date": "Fri, 24 Sep 2021 14:12:08 +1200",
"msg_from": "Daniel Fone <daniel@fone.net.nz>",
"msg_from_op": true,
"msg_subject": "pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "> On 24 Sep 2021, at 04:12, Daniel Fone <daniel@fone.net.nz> wrote:\n\n> At the moment, pgcrypto’s `crypt` doesn’t recognise this prefix. However, simply `replace`ing the prefix with $2a$ allows crypt to validate the hashes. This patch simply adds recognition for the prefix and treats the hash identically to the $2a$ hashes.\n\nBut 2b and 2a hashes aren't equal, although very similar. 2a should have the\nmany-buggy to one-correct collision safety and 2b hashes shouldn't. The fact\nthat your hashes work isn't conclusive evidence.\n\n> Is this a reasonable change to pgcrypto?\n\nI think it's reasonable to support 2b hashes, but not like how this patch does\nit.\n\n> I note that the last upstream change brought into crypt-blowfish.c was in 2011, predating this prefix. [5] Are there deeper concerns or other upstream changes that need to be addressed alongside this?\n\nUpgrading our crypt_blowfish.c to the upstream 1.3 version would be the correct\nfix IMO, but since we have a few local modifications it's not a drop-in. I\ndon't think it would be too hairy, but one needs to be very careful when\ndealing with crypto.\n\n> Is there a better approach to this? \n\nCompile with OpenSSL support, then pgcrypto will use the libcrypto implementation.\n\n> At the moment, the $2x$ variant is supported but not mentioned in the docs, so I haven’t included any documentation updates.\n\nActually it is, in table F.16 in the below documentation page we refer to our\nsupported level as \"Blowfish-based, variant 2a\".\n\n\thttps://www.postgresql.org/docs/devel/pgcrypto.html\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 25 Sep 2021 14:09:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "Hi Daniel,\n\nThanks for the feedback.\n\n> On 26/09/2021, at 12:09 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n> But 2b and 2a hashes aren't equal, although very similar. 2a should have the\n> many-buggy to one-correct collision safety and 2b hashes shouldn't. The fact\n> that your hashes work isn't conclusive evidence.\n\nI was afraid this might be a bit naive. Re-reading the crypt_blowfish release notes, it’s principally the changes introducing $2y$ into 1.2 that we need, with support for OpenBSD $2b$ introduced in 1.3. Do I understand this correctly?\n\n> Upgrading our crypt_blowfish.c to the upstream 1.3 version would be the correct\n> fix IMO, but since we have a few local modifications it's not a drop-in. I\n> don't think it would be too hairy, but one needs to be very careful when\n> dealing with crypto.\n\nMy C experience is limited, but I can make an initial attempt if the effort would be worthwhile. Is this realistically a patch that a newcomer to the codebase should attempt?\n\n> Actually it is, in table F.16 in the below documentation page we refer to our\n> supported level as \"Blowfish-based, variant 2a”.\n\nSorry I wasn’t clear. My point was that the docs only mention $2a$, and $2x$ isn’t mentioned even though pgcrypto supports it. As part of the upgrade to 1.3, perhaps the docs can be updated to mention variants x, y, and b as well.\n\nThanks,\n\nDaniel\n\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 16:15:36 +1300",
"msg_from": "Daniel Fone <daniel@fone.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "> On 28 Sep 2021, at 05:15, Daniel Fone <daniel@fone.net.nz> wrote:\n> \n> Hi Daniel,\n> \n> Thanks for the feedback.\n> \n>> On 26/09/2021, at 12:09 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>> But 2b and 2a hashes aren't equal, although very similar. 2a should have the\n>> many-buggy to one-correct collision safety and 2b hashes shouldn't. The fact\n>> that your hashes work isn't conclusive evidence.\n> \n> I was afraid this might be a bit naive. Re-reading the crypt_blowfish release notes, it’s principally the changes introducing $2y$ into 1.2 that we need, with support for OpenBSD $2b$ introduced in 1.3. Do I understand this correctly?\n\nYeah, we'd want a port of 1.3 into pgcrypto essentially.\n\n>> Upgrading our crypt_blowfish.c to the upstream 1.3 version would be the correct\n>> fix IMO, but since we have a few local modifications it's not a drop-in. I\n>> don't think it would be too hairy, but one needs to be very careful when\n>> dealing with crypto.\n> \n> My C experience is limited, but I can make an initial attempt if the effort would be worthwhile. Is this realistically a patch that a newcomer to the codebase should attempt?\n\nI don't see why not, the best first patches are those scratching an itch. If\nyou feel up for it then give it a go, I - and the rest of pgsql-hackers - can\nhelp if you need to bounce ideas. Many of the changes in the pgcrypto BF code\nis whitespace and formatting, which are performed via pgindent. I would\nsuggest to familiarize yourself with pgindent in order to tease them out\neasier. Another set of changes are around error handling and reporting, which\nis postgres specific.\n\n>> Actually it is, in table F.16 in the below documentation page we refer to our\n>> supported level as \"Blowfish-based, variant 2a”.\n> \n> Sorry I wasn’t clear. My point was that the docs only mention $2a$, and $2x$ isn’t mentioned even though pgcrypto supports it. 
As part of the upgrade to 1.3, perhaps the docs can be updated to mention variants x, y, and b as well.\n\nAha, now I see what you mean, yes you are right. I think the docs should be\nupdated regardless of the above as a first step to properly match what's in the\ntree. Unless there are objections I propose to apply the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Tue, 28 Sep 2021 15:33:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "> On 29/09/2021, at 2:33 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 28 Sep 2021, at 05:15, Daniel Fone <daniel@fone.net.nz> wrote:\n>> \n>>> On 26/09/2021, at 12:09 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> \n>>> Upgrading our crypt_blowfish.c to the upstream 1.3 version would be the correct\n>>> fix IMO, but since we have a few local modifications it's not a drop-in. I\n>>> don't think it would be too hairy, but one needs to be very careful when\n>>> dealing with crypto.\n>> \n>> My C experience is limited, but I can make an initial attempt if the effort would be worthwhile. Is this realistically a patch that a newcomer to the codebase should attempt?\n> \n> I don't see why not, the best first patches are those scratching an itch. If\n> you feel up for it then give it a go, I - and the rest of pgsql-hackers - can\n> help if you need to bounce ideas.\n\nI’m glad you said that. I couldn’t resist trying and have attached a patch. By referencing the respective git logs, I didn’t have too much difficulty identifying the material changes in each codebase. I’ve documented all the postgres-specific changes to upstream in the header comment for each file.\n\nDaniel",
"msg_date": "Wed, 29 Sep 2021 10:58:49 +1300",
"msg_from": "Daniel Fone <daniel@fone.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "On 9/28/21 11:58 PM, Daniel Fone wrote:\n>> On 29/09/2021, at 2:33 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> I don't see why not, the best first patches are those scratching an itch. If\n>> you feel up for it then give it a go, I - and the rest of pgsql-hackers - can\n>> help if you need to bounce ideas.\n> \n> I’m glad you said that. I couldn’t resist trying and have attached a patch. By referencing the respective git logs, I didn’t have too much difficulty identifying the material changes in each codebase. I’ve documented all the postgres-specific changes to upstream in the header comment for each file.\n\nI took a quick look and on a cursory glance it looks good but I got \nthese compilation warnings.\n\ncrypt-blowfish.c: In function ‘BF_crypt’:\ncrypt-blowfish.c:789:3: warning: ISO C90 forbids mixed declarations and \ncode [-Wdeclaration-after-statement]\n 789 | int done;\n | ^~~\ncrypt-blowfish.c: In function ‘_crypt_blowfish_rn’:\ncrypt-blowfish.c:897:8: warning: variable ‘save_errno’ set but not used \n[-Wunused-but-set-variable]\n 897 | int save_errno,\n | ^~~~~~~~~~\n\nAndreas\n\n\n\n",
"msg_date": "Thu, 30 Sep 2021 13:17:01 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "Hi Andreas,\n\n> On 1/10/2021, at 12:17 AM, Andreas Karlsson <andreas@proxel.se> wrote:\n> \n> On 9/28/21 11:58 PM, Daniel Fone wrote:\n>>> On 29/09/2021, at 2:33 AM, Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> I don't see why not, the best first patches are those scratching an itch. If\n>>> you feel up for it then give it a go, I - and the rest of pgsql-hackers - can\n>>> help if you need to bounce ideas.\n>> I’m glad you said that. I couldn’t resist trying and have attached a patch. By referencing the respective git logs, I didn’t have too much difficulty identifying the material changes in each codebase. I’ve documented all the postgres-specific changes to upstream in the header comment for each file.\n> \n> I took a quick look and on a cursory glance it looks good but I got these compilation warnings.\n> \n> crypt-blowfish.c: In function ‘BF_crypt’:\n> crypt-blowfish.c:789:3: warning: ISO C90 forbids mixed declarations and code [-Wdeclaration-after-statement]\n> 789 | int done;\n> | ^~~\n> crypt-blowfish.c: In function ‘_crypt_blowfish_rn’:\n> crypt-blowfish.c:897:8: warning: variable ‘save_errno’ set but not used [-Wunused-but-set-variable]\n> 897 | int save_errno,\n> | ^~~~~~~~~~\n\nI don’t get these compiler warnings and I can’t find any settings to use that might generate them. I’m compiling on macOS 11.6 configured with `--enable-cassert --enable-depend --enable-debug CFLAGS=-O0`\n\nI’ve optimistically updated the patch to hopefully address them, but I’d like to know what I need to do to get those warnings.\n\nThanks,\n\nDaniel",
"msg_date": "Sat, 2 Oct 2021 16:48:24 +1300",
"msg_from": "Daniel Fone <daniel@fone.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "On 02.10.2021 06:48, Daniel Fone wrote:\n> I don’t get these compiler warnings and I can’t find any settings to use that might generate them. I’m compiling on macOS 11.6 configured with `--enable-cassert --enable-depend --enable-debug CFLAGS=-O0`\n>\n\nHi, Daniel!\nI don't get them from clang on macOS either.\n\n\n> I’ve optimistically updated the patch to hopefully address them, but I’d like to know what I need to do to get those warnings.\n\nBut gcc-11 on Ubuntu 20.04 emits them.\n\nRegards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Sat, 2 Oct 2021 09:48:42 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "On 10/2/21 5:48 AM, Daniel Fone wrote:\n> I don’t get these compiler warnings and I can’t find any settings to use that might generate them. I’m compiling on macOS 11.6 configured with `--enable-cassert --enable-depend --enable-debug CFLAGS=-O0`\n> \n> I’ve optimistically updated the patch to hopefully address them, but I’d like to know what I need to do to get those warnings.\n\nI run \"gcc (Debian 10.3.0-11) 10.3.0\" and your new patch silenced the \nwarnings.\n\nPlease add your patch to the current open commitfest.\n\nhttps://commitfest.postgresql.org/35/\n\nAndreas\n\n\n",
"msg_date": "Sat, 2 Oct 2021 12:22:16 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "On Sat, 2 Oct 2021 at 23:22, Andreas Karlsson <andreas@proxel.se> wrote:\n\n> Please add your patch to the current open commitfest.\n>\n\nDone. https://commitfest.postgresql.org/35/3338/\n\nThanks for the guidance.\n\nDaniel\n\nOn Sat, 2 Oct 2021 at 23:22, Andreas Karlsson <andreas@proxel.se> wrote:Please add your patch to the current open commitfest.Done. https://commitfest.postgresql.org/35/3338/Thanks for the guidance.Daniel",
"msg_date": "Sun, 3 Oct 2021 00:02:57 +1300",
"msg_from": "Daniel Fone <daniel@fone.net.nz>",
"msg_from_op": true,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
},
{
"msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/3338/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 13:51:01 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: pgcrypto support for bcrypt $2b$ hashes"
}
]
[
{
"msg_contents": "Hi,\n\nI think there's a word missing in the following comment:\n\n /*\n * See if the partition bounds for inputs are exactly the same, in\n * which case we don't need to work hard: the join rel have the same\n * partition bounds as inputs, and the partitions with the same\n * cardinal positions form the pairs.\n\n\": the join rel have the same...\" seems to be missing a \"will\".\n\nAttached a patch to fix.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 24 Sep 2021 15:34:03 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "a comment in joinrel.c: compute_partition_bounds()"
},
{
"msg_contents": "Hi Amit-san,\n\nOn Fri, Sep 24, 2021 at 3:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> I think there's a word missing in the following comment:\n>\n> /*\n> * See if the partition bounds for inputs are exactly the same, in\n> * which case we don't need to work hard: the join rel have the same\n> * partition bounds as inputs, and the partitions with the same\n> * cardinal positions form the pairs.\n>\n> \": the join rel have the same...\" seems to be missing a \"will\".\n>\n> Attached a patch to fix.\n\nGood catch! Will fix.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 24 Sep 2021 16:20:24 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: a comment in joinrel.c: compute_partition_bounds()"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 4:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Sep 24, 2021 at 3:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I think there's a word missing in the following comment:\n> >\n> > /*\n> > * See if the partition bounds for inputs are exactly the same, in\n> > * which case we don't need to work hard: the join rel have the same\n> > * partition bounds as inputs, and the partitions with the same\n> > * cardinal positions form the pairs.\n> >\n> > \": the join rel have the same...\" seems to be missing a \"will\".\n> >\n> > Attached a patch to fix.\n>\n> Good catch! Will fix.\n\nRereading the comment, I think it would be better to add “will” to the\nsecond part “the partitions with the same cardinal positions form the\npairs” as well. Updated patch attached.\n\nBest regards,\nEtsuro Fujita",
"msg_date": "Wed, 6 Oct 2021 17:42:24 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: a comment in joinrel.c: compute_partition_bounds()"
},
{
"msg_contents": "Fujita-san,\n\nOn Wed, Oct 6, 2021 at 5:41 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> On Fri, Sep 24, 2021 at 4:20 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > On Fri, Sep 24, 2021 at 3:34 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > I think there's a word missing in the following comment:\n> > >\n> > > /*\n> > > * See if the partition bounds for inputs are exactly the same, in\n> > > * which case we don't need to work hard: the join rel have the same\n> > > * partition bounds as inputs, and the partitions with the same\n> > > * cardinal positions form the pairs.\n> > >\n> > > \": the join rel have the same...\" seems to be missing a \"will\".\n> > >\n> > > Attached a patch to fix.\n> >\n> > Good catch! Will fix.\n>\n> Rereading the comment, I think it would be better to add “will” to the\n> second part “the partitions with the same cardinal positions form the\n> pairs” as well. Updated patch attached.\n\nNo objection from my side.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Oct 2021 12:04:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: a comment in joinrel.c: compute_partition_bounds()"
},
{
"msg_contents": "On Thu, Oct 7, 2021 at 12:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Wed, Oct 6, 2021 at 5:41 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> > Rereading the comment, I think it would be better to add “will” to the\n> > second part “the partitions with the same cardinal positions form the\n> > pairs” as well. Updated patch attached.\n>\n> No objection from my side.\n\nOk, pushed. Thanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Thu, 7 Oct 2021 17:57:47 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: a comment in joinrel.c: compute_partition_bounds()"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nThis is my second proposal for a patch, so I hope not to make \"rookie\"\nmistakes.\n\nThis proposal patch is based on a simple use case :\n\nIf one creates a table this way\nCREATE TABLE tst_table AS (SELECT array_to_tsvector(ARRAY['','abc','def']));\n\nthe table content is :\n array_to_tsvector\n-------------------\n '' 'abc' 'def'\n(1 row)\n\nFirst it can be strange to have an empty string for tsvector lexeme but\nanyway, keep going to the real point.\n\nOnce dumped, this table dump contains that empty string that can't be\nrestored.\ntsvector_parse (./utils/adt/tsvector_parser.c) raises an error.\n\nThus it is not possible for data to be restored this way.\n\nThere are two ways to consider this : is it alright to have empty strings\nin lexemes ?\n * If so, empty strings should be correctly parsed by tsvector_parser.\n * If not, one should prevent empty strings from being stored into\ntsvectors.\n\nSince \"empty strings\" seems not to be a valid lexeme, I undertook to change\nsome functions dealing with tsvector to check whether string arguments are\nempty. This might be the wrong path as I'm not familiar with tsvector\nusage... (OTOH, I can provide a fix patch for tsvector_parser() if I'm\nwrong).\n\nThis involved changing the way functions like array_to_tsvector(),\nts_delete() and setweight() behave. As for NULL values, empty string values\nare checked and an error is raised for such a value. It appears to me that\nERRCODE_ZERO_LENGTH_CHARACTER_STRING (2200F) matched this behaviour but I\nmay be wrong.\n\nSince this patch changes the way functions behave, consider it as a simple\ntry to overcome a strange situation we've noticed on a specific use case.\n\nThis included patch manages that checks for empty strings on the pointed\nout functions. It comes with modified regression tests. Patch applies along\nhead/master and 14_RC1.\n\nComments are more than welcome!\nThank you,\n\n-- \nJean-Christophe Arnu",
"msg_date": "Fri, 24 Sep 2021 10:46:49 +0200",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Empty string in lexeme for tsvector"
},
{
"msg_contents": "Em sex., 24 de set. de 2021 às 05:47, Jean-Christophe Arnu <jcarnu@gmail.com>\nescreveu:\n\n> Hello Hackers,\n>\n> This is my second proposal for a patch, so I hope not to make \"rookie\"\n> mistakes.\n>\n> This proposal patch is based on a simple use case :\n>\n> If one creates a table this way\n> CREATE TABLE tst_table AS (SELECT\n> array_to_tsvector(ARRAY['','abc','def']));\n>\n> the table content is :\n> array_to_tsvector\n> -------------------\n> '' 'abc' 'def'\n> (1 row)\n>\n> First it can be strange to have an empty string for tsvector lexeme but\n> anyway, keep going to the real point.\n>\n> Once dumped, this table dump contains that empty string that can't be\n> restored.\n> tsvector_parse (./utils/adt/tsvector_parser.c) raises an error.\n>\n> Thus it is not possible for data to be restored this way.\n>\n> There are two ways to consider this : is it alright to have empty strings\n> in lexemes ?\n> * If so, empty strings should be correctly parsed by tsvector_parser.\n> * If not, one should prevent empty strings from being stored into\n> tsvectors.\n>\n> Since \"empty strings\" seems not to be a valid lexeme, I undertook to\n> change some functions dealing with tsvector to check whether string\n> arguments are empty. This might be the wrong path as I'm not familiar with\n> tsvector usage... (OTOH, I can provide a fix patch for tsvector_parser() if\n> I'm wrong).\n>\n> This involved changing the way functions like array_to_tsvector(),\n> ts_delete() and setweight() behave. As for NULL values, empty string values\n> are checked and an error is raised for such a value. It appears to me that\n> ERRCODE_ZERO_LENGTH_CHARACTER_STRING (2200F) matched this behaviour but I\n> may be wrong.\n>\n> Since this patch changes the way functions behave, consider it as a simple\n> try to overcome a strange situation we've noticed on a specific use case.\n>\n> This included patch manages that checks for empty strings on the pointed\n> out functions. 
It comes with modified regression tests. Patch applies along\n> head/master and 14_RC1.\n>\n> Comments are more than welcome!\n>\n1. Would be better to add this test-and-error before tsvector_bsearch call.\n\n+ if (lex_len == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n+ errmsg(\"lexeme array may not contain empty strings\")));\n+\n\nIf lex_len is equal to zero, better get out soon.\n\n2. The second test-and-error can use lex_len, just like the first test,\nI don't see the point in recalculating the size of lex_len if that's\nalready done.\n\n+ if (VARSIZE(dlexemes[i]) - VARHDRSZ == 0)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n+ errmsg(\"lexeme array may not contain empty strings\")));\n+\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 24 Sep 2021 08:03:38 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Le ven. 24 sept. 2021 à 13:03, Ranier Vilela <ranier.vf@gmail.com> a écrit :\n\n>\n> Comments are more than welcome!\n>>\n> 1. Would be better to add this test-and-error before tsvector_bsearch call.\n>\n> + if (lex_len == 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n> + errmsg(\"lexeme array may not contain empty strings\")));\n> +\n>\n> If lex_len is equal to zero, better get out soon.\n>\n> 2. The second test-and-error can use lex_len, just like the first test,\n> I don't see the point in recalculating the size of lex_len if that's\n> already done.\n>\n> + if (VARSIZE(dlexemes[i]) - VARHDRSZ == 0)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n> + errmsg(\"lexeme array may not contain empty strings\")));\n> +\n>\n\nHello Ranier,\nThank you for your comments.\nHere's a new patch file taking your comments into account.\n\nI was just wondering if empty string eviction is done in the right place.\nAs you rightfully commented, lex_len is calculated later (once again for a\nright purpose) and my code checks for empty strings as soon as possible.\nTo me, it seems to be the right thing to do (prevent further processing on\nlexemes\nas soon as possible) but I might omit something.\n\nRegards\n\n\nJean-Christophe Arnu",
"msg_date": "Fri, 24 Sep 2021 14:39:31 +0200",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Em sex., 24 de set. de 2021 às 09:39, Jean-Christophe Arnu <jcarnu@gmail.com>\nescreveu:\n\n>\n>\n> Le ven. 24 sept. 2021 à 13:03, Ranier Vilela <ranier.vf@gmail.com> a\n> écrit :\n>\n>>\n>> Comments are more than welcome!\n>>>\n>> 1. Would be better to add this test-and-error before tsvector_bsearch\n>> call.\n>>\n>> + if (lex_len == 0)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n>> + errmsg(\"lexeme array may not contain empty strings\")));\n>> +\n>>\n>> If lex_len is equal to zero, better get out soon.\n>>\n>> 2. The second test-and-error can use lex_len, just like the first test,\n>> I don't see the point in recalculating the size of lex_len if that's\n>> already done.\n>>\n>> + if (VARSIZE(dlexemes[i]) - VARHDRSZ == 0)\n>> + ereport(ERROR,\n>> + (errcode(ERRCODE_ZERO_LENGTH_CHARACTER_STRING),\n>> + errmsg(\"lexeme array may not contain empty strings\")));\n>> +\n>>\n>\n> Hello Ranier,\n> Thank you for your comments.\n> Here's a new patch file taking your comments into account.\n>\nThanks.\n\n\n> I was just wondering if empty string eviction is done in the right place.\n>\nAs you rightfully commented, lex_len is calculated later (once again for a\n> right purpose) and my code checks for empty strings as soon as possible.\n> To me, it seems to be the right thing to do (prevent further processing on\n> lexemes\n> as soon as possible) but I might omit something.\n>\nIt's always good to avoid unnecessary processing.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 24 Sep 2021 10:08:49 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 2:39 PM Jean-Christophe Arnu <jcarnu@gmail.com> wrote:\n> Here's a new patch file taking your comments into account.\n\nNice catch! The patch looks good to me.\nCan you also add a more general test case:\n\n=# SELECT $$'' '1' '2'$$::tsvector;\nERROR: syntax error in tsvector: \"'' '1' '2'\"\nLINE 1: SELECT $$'' '1' '2'$$::tsvector;\n\n-- \nArtur\n\n\n",
"msg_date": "Sun, 26 Sep 2021 15:54:57 +0200",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Le dim. 26 sept. 2021 à 15:55, Artur Zakirov <zaartur@gmail.com> a écrit :\n\n> Nice catch! The patch looks good to me.\n> Can you also add a more general test case:\n>\n> =# SELECT $$'' '1' '2'$$::tsvector;\n> ERROR: syntax error in tsvector: \"'' '1' '2'\"\n> LINE 1: SELECT $$'' '1' '2'$$::tsvector;\n>\n>\nThank you, Artur for spotting this test.\nIt is now included into this patch.\n\n-- \nJean-Christophe Arnu",
"msg_date": "Sun, 26 Sep 2021 22:41:13 +0200",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Le dim. 26 sept. 2021 à 22:41, Jean-Christophe Arnu <jcarnu@gmail.com> a\nécrit :\n\n>\n>\n> Le dim. 26 sept. 2021 à 15:55, Artur Zakirov <zaartur@gmail.com> a écrit :\n>\n>> Nice catch! The patch looks good to me.\n>> Can you also add a more general test case:\n>>\n>> =# SELECT $$'' '1' '2'$$::tsvector;\n>> ERROR: syntax error in tsvector: \"'' '1' '2'\"\n>> LINE 1: SELECT $$'' '1' '2'$$::tsvector;\n>>\n>>\n> Thank you, Artur for spotting this test.\n> It is now included into this patch.\n>\n>\n>\nTwo more things :\n\n * I updated the documentation for array_to_tsvector(), ts_delete() and\nsetweight() functions (so here's a new patch);\n * I should mention François Ferry from Logilab who first reported the\nbackup/restore problem that led to this patch.\n\nI think this should be ok, now the doc is up to date.\n\nKind regards.\n-- \nJean-Christophe Arnu",
"msg_date": "Mon, 27 Sep 2021 12:18:00 +0200",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Jean-Christophe Arnu <jcarnu@gmail.com> writes:\n> [ empty_string_in_tsvector_v4.patch ]\n\nI looked through this patch a bit. I don't agree with adding\nthese new error conditions to tsvector_setweight_by_filter and\ntsvector_delete_arr. Those don't prevent bad lexemes from being\nadded to tsvectors, so AFAICS they can have no effect other than\nbreaking existing applications. In fact, tsvector_delete_arr is\none thing you could use to fix existing bad tsvectors, so making\nit throw an error seems actually counterproductive.\n\n(By the same token, I think there's a good argument for\ntsvector_delete_arr to just ignore nulls, not throw an error.\nThat's a somewhat orthogonal issue, though.)\n\nWhat I'm wondering about more than that is whether array_to_tsvector\nis the only place that can inject an empty lexeme ... don't we have\nanything else that can add lexemes without going through the parser?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 15:36:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Thank you Tom for your review.\n\nLe mer. 29 sept. 2021 à 21:36, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Jean-Christophe Arnu <jcarnu@gmail.com> writes:\n> > [ empty_string_in_tsvector_v4.patch ]\n>\n> I looked through this patch a bit. I don't agree with adding\n> these new error conditions to tsvector_setweight_by_filter and\n> tsvector_delete_arr. Those don't prevent bad lexemes from being\n> added to tsvectors, so AFAICS they can have no effect other than\n> breaking existing applications. In fact, tsvector_delete_arr is\n> one thing you could use to fix existing bad tsvectors, so making\n> it throw an error seems actually counterproductive.\n>\n\nAgreed.\nThe new patch included here does not change tsvector_setweight_by_filter()\nanymore. Tests and docs are also upgraded.\nThis patch is not ready yet.\n\n\n> (By the same token, I think there's a good argument for\n> tsvector_delete_arr to just ignore nulls, not throw an error.\n> That's a somewhat orthogonal issue, though.)\n>\n\nNulls are now ignored in tsvector_delete_arr().\n\n\n> What I'm wondering about more than that is whether array_to_tsvector\n> is the only place that can inject an empty lexeme ... don't we have\n> anything else that can add lexemes without going through the parser?\n>\n\nI crawled the docs [1] in order to check each tsvector output from\nfunctions and\noperators :\n\n * The only functions left that may lead to empty strings seem\n both json_to_tsvector() and jsonb_to_tsvector(). Both functions use\nparsetext\n (in ts_parse.c) that seems to behave correctly and don't create \"empty\nstring\".\n * concat operator \"||\" allows to compute a ts_vector containing \"empty\nstring\" if\n one of its operands contains itself an empty string tsvector. This seems\nperfectly\n correct from the operator point of view... 
Should we change this\nbehaviour to\n filter out empty strings ?\n\nI also wonder if we should not also consider changing COPY FROM behaviour\non empty string lexemes.\nCurrent version is just crashing on empty string lexemes. Should\nwe allow them anyway as COPY FROM input (it seems weird not to be able\nto re-import dumped data) or \"skipping them\" just like array_to_tsvector()\ndoes in the patched version (that may lead to database content changes) or\nfinally should we not change COPY behaviour ?\n\nI admit this is a tricky bunch of questions I'm too rookie to answer.\n\n[1] https://www.postgresql.org/docs/14/functions-textsearch.html\n\n-- \nJean-Christophe Arnu",
"msg_date": "Thu, 30 Sep 2021 20:34:15 +0200",
"msg_from": "Jean-Christophe Arnu <jcarnu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Empty string in lexeme for tsvector"
},
{
"msg_contents": "Jean-Christophe Arnu <jcarnu@gmail.com> writes:\n>> (By the same token, I think there's a good argument for\n>> tsvector_delete_arr to just ignore nulls, not throw an error.\n>> That's a somewhat orthogonal issue, though.)\n\n> Nulls are now ignored in tsvector_delete_arr().\n\nI think setweight() with an array should handle this the same as\nts_delete() with an array, so the attached v6 does it like that.\n\n> I also wonder if we should not also consider changing COPY FROM behaviour\n> on empty string lexemes.\n> Current version is just crashing on empty string lexemes. Should\n> we allow them anyway as COPY FROM input (it seems weird not to be able\n> to re-import dumped data) or \"skipping them\" just like array_to_tsvector()\n> does in the patched version (that may lead to database content changes) or\n> finally should we not change COPY behaviour ?\n\nNo, I don't think so. tsvector's restriction against empty lexemes was\ncertainly intentional from the beginning, so I wouldn't be surprised if\nwe'd run into semantic difficulties if we remove it. Moreover, we're\ngoing on fourteen years with that restriction and we've had few\ncomplaints, so there's no field demand to loosen it. It's clearly just\nan oversight that array_to_tsvector() failed to enforce the restriction.\n\nI polished the docs and tests a bit more, too. I think the attached\nis committable -- any objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 03 Nov 2021 16:40:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Empty string in lexeme for tsvector"
}
] |
[
{
"msg_contents": "When PG11 added the ability for ALTER TABLE ADD COLUMN to set a constant\ndefault value without rewriting the table the doc changes did not note\nhow the new feature interplayed with ADD COLUMN DEFAULT NOT NULL.\nPreviously such a new column required a verification table scan to\nensure no values were null. That scan happens under an exclusive lock on\nthe table, so it can have a meaningful impact on database \"accessible\nuptime\".\n\nI've attached a patch to document that the new mechanism also\nprecludes that scan.\n\nThanks,\nJames Coleman",
"msg_date": "Fri, 24 Sep 2021 10:30:39 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Document atthasmissing default optimization avoids verification table\n scan"
},
{
"msg_contents": "On 9/24/21, 7:30 AM, \"James Coleman\" <jtc331@gmail.com> wrote:\r\n> When PG11 added the ability for ALTER TABLE ADD COLUMN to set a constant\r\n> default value without rewriting the table the doc changes did not note\r\n> how the new feature interplayed with ADD COLUMN DEFAULT NOT NULL.\r\n> Previously such a new column required a verification table scan to\r\n> ensure no values were null. That scan happens under an exclusive lock on\r\n> the table, so it can have a meaningful impact on database \"accessible\r\n> uptime\".\r\n\r\nI'm likely misunderstanding, but are you saying that adding a new\r\ncolumn with a default value and a NOT NULL constraint used to require\r\na verification scan?\r\n\r\n+ Additionally adding a column with a constant default value avoids a\r\n+ a table scan to verify no <literal>NULL</literal> values are present.\r\n\r\nShould this clarify that it's referring to NOT NULL constraints?\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 20 Jan 2022 00:08:07 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table\n scan"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 5:08 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n\n> On 9/24/21, 7:30 AM, \"James Coleman\" <jtc331@gmail.com> wrote:\n> > When PG11 added the ability for ALTER TABLE ADD COLUMN to set a constant\n> > default value without rewriting the table the doc changes did not note\n> > how the new feature interplayed with ADD COLUMN DEFAULT NOT NULL.\n> > Previously such a new column required a verification table scan to\n> > ensure no values were null. That scan happens under an exclusive lock on\n> > the table, so it can have a meaningful impact on database \"accessible\n> > uptime\".\n>\n> I'm likely misunderstanding, but are you saying that adding a new\n> column with a default value and a NOT NULL constraint used to require\n> a verification scan?\n>\n\nAs a side-effect of rewriting every live record in the table and indexes to\nbrand new files, yes. I doubt an actual independent scan was performed\nsince the only way for the newly written tuples to not have the default\nvalue inserted would be a severe server bug.\n\n\n> + Additionally adding a column with a constant default value avoids a\n> + a table scan to verify no <literal>NULL</literal> values are present.\n>\n> Should this clarify that it's referring to NOT NULL constraints?\n>\n>\nThis doesn't seem like relevant material to comment on. It's an\nimplementation detail that is sufficiently covered by \"making the ALTER\nTABLE very fast even on large tables\".\n\nAlso, the idea of performing that scan seems ludicrous. 
I just added the\ncolumn and told it to populate with default values - why do you need to\ncheck that your server didn't miss any?\n\nDavid J.",
"msg_date": "Wed, 19 Jan 2022 17:51:13 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 7:51 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 5:08 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n>>\n>> On 9/24/21, 7:30 AM, \"James Coleman\" <jtc331@gmail.com> wrote:\n>> > When PG11 added the ability for ALTER TABLE ADD COLUMN to set a constant\n>> > default value without rewriting the table the doc changes did not note\n>> > how the new feature interplayed with ADD COLUMN DEFAULT NOT NULL.\n>> > Previously such a new column required a verification table scan to\n>> > ensure no values were null. That scan happens under an exclusive lock on\n>> > the table, so it can have a meaningful impact on database \"accessible\n>> > uptime\".\n>>\n>> I'm likely misunderstanding, but are you saying that adding a new\n>> column with a default value and a NOT NULL constraint used to require\n>> a verification scan?\n>\n>\n> As a side-effect of rewriting every live record in the table and indexes to brand new files, yes. I doubt an actual independent scan was performed since the only way for the newly written tuples to not have the default value inserted would be a severe server bug.\n\nI've confirmed it wasn't a separate scan, but it does evaluate all\nconstraints (it doesn't have any optimizations for skipping ones\nprobably true by virtue of the new default).\n\n>>\n>> + Additionally adding a column with a constant default value avoids a\n>> + a table scan to verify no <literal>NULL</literal> values are present.\n>>\n>> Should this clarify that it's referring to NOT NULL constraints?\n>>\n>\n> This doesn't seem like relevant material to comment on. It's an implementation detail that is sufficiently covered by \"making the ALTER TABLE very fast even on large tables\".\n>\n> Also, the idea of performing that scan seems ludicrous. 
I just added the column and told it to populate with default values - why do you need to check that your server didn't miss any?\n\nI'm open to the idea of wordsmithing here, of course, but I strongly\ndisagree that this is irrelevant data. There are plenty of\noptimizations Postgres could theoretically implement but doesn't, so\nmeasuring what should happen by what you think is obvious (\"told it to\npopulate with default values - why do you need to check\") is clearly\nnot valid.\n\nThis patch actually came out of our specifically needing to verify\nthat this is true before an op precisely because docs aren't actually\nclear and because we can't risk a large table scan under an exclusive\nlock. We're clearly not the only ones with that question; it came up\nin a comment on this blog post announcing the newly committed feature\n[1].\n\nI realize that most users aren't as worried about this kind of\nspecific detail about DDL as we are (requiring absolutely zero slow\nDDL while under an exclusive lock), but it is relevant to high uptime\nsystems.\n\nThanks,\nJames Coleman\n\n1: https://www.depesz.com/2018/04/04/waiting-for-postgresql-11-fast-alter-table-add-column-with-a-non-null-default/\n\n\n",
"msg_date": "Wed, 19 Jan 2022 20:14:10 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 6:14 PM James Coleman <jtc331@gmail.com> wrote:\n\n> I'm open to the idea of wordsmithing here, of course, but I strongly\n> disagree that this is irrelevant data.\n\n\nOk, but wording aside, only changing a tip in the DDL - Add Table section\ndoesn't seem like a complete fix. The notes in alter table, where I'd look\nfor such an official directive first, need to be touched as well.\n\nFor the alter table docs maybe change/add to the existing sentence below\n(I'm in favor of not pointing out that scanning the table doesn't mean we\nare rewriting it, but maybe I'm making another unwarranted assumption\nregarding obviousness...).\n\n\"Adding a CHECK or NOT NULL constraint requires scanning the table [but not\nrewriting it] to verify that existing rows meet the constraint. It is\nskipped when done as part of ADD COLUMN unless a table rewrite is required\nanyway.\"\n\nOn that note, does the check constraint interplay with the default rewrite\navoidance in the same way?\n\nFor the Tip I'd almost rather redo it to say:\n\n\"Before PostgreSQL 11, adding a new column to a table required rewriting\nthat table, making it a very slow operation. More recent versions can\nsometimes optimize away this rewrite and related validation scans. See the\nnotes in ALTER TABLE for details.\"\n\nThough I suppose I'd accept something like (leave existing text,\nalternative patch text):\n\n\"[...]large tables.\\nIf the added column also has a not null constraint the\nusual verification scan is also skipped.\"\n\n\"constant\" is used in the Tip, \"non-volatile\" is used in alter table -\nhence a desire to have just one source of truth, with alter table being the\ncorrect place. 
We should sync them up otherwise.\n\nDavid J.",
"msg_date": "Wed, 19 Jan 2022 19:34:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 9:34 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 6:14 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> I'm open to the idea of wordsmithing here, of course, but I strongly\n>> disagree that this is irrelevant data.\n>\n>\n> Ok, but wording aside, only changing a tip in the DDL - Add Table section doesn't seem like a complete fix. The notes in alter table, where I'd look for such an official directive first, need to be touched as well.\n>\n> For the alter table docs maybe change/add to the existing sentence below (I'm in favor of not pointing out that scanning the table doesn't mean we are rewriting it, but maybe I'm making another unwarranted assumption regarding obviousness...).\n>\n> \"Adding a CHECK or NOT NULL constraint requires scanning the table [but not rewriting it] to verify that existing rows meet the constraint. It is skipped when done as part of ADD COLUMN unless a table rewrite is required anyway.\"\n\nI'm looking over the docs again to see how it might be better\nstructured; point is well taken that we should have it clearly in the\nprimary place.\n\n> On that note, does the check constraint interplay with the default rewrite avoidance in the same way?\n\nI hadn't checked until you asked, but interestingly, no it doesn't (I\nassume you mean scan not rewrite in this context):\n\ntest=# select seq_scan from pg_stat_all_tables where relname = 't2';\n seq_scan\n----------\n 2\ntest=# alter table t2 add column i3 int not null default 5;\nALTER TABLE\ntest=# select seq_scan from pg_stat_all_tables where relname = 't2';\n seq_scan\n----------\n 2\ntest=# alter table t2 add column i4 int default 5 check (i4 < 50);\nALTER TABLE\ntest=# select seq_scan from pg_stat_all_tables where relname = 't2';\n seq_scan\n----------\n 3\n\nThat seems like an opportunity for improvement here, though it's\nobviously a separate patch. 
I might poke around at that though\nlater...\n\n> For the Tip I'd almost rather redo it to say:\n>\n> \"Before PostgreSQL 11, adding a new column to a table required rewriting that table, making it a very slow operation. More recent versions can sometimes optimize away this rewrite and related validation scans. See the notes in ALTER TABLE for details.\"\n>\n> Though I suppose I'd accept something like (leave existing text, alternative patch text):\n>\n> \"[...]large tables.\\nIf the added column also has a not null constraint the usual verification scan is also skipped.\"\n>\n> \"constant\" is used in the Tip, \"non-volatile\" is used in alter table - hence a desire to have just one source of truth, with alter table being the correct place. We should sync them up otherwise.\n\nAs noted I'll look over how restructuring might improve and reply with\nan updated proposed patch.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 20 Jan 2022 09:05:40 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On 1/19/22, 5:15 PM, \"James Coleman\" <jtc331@gmail.com> wrote:\r\n> I'm open to the idea of wordsmithing here, of course, but I strongly\r\n> disagree that this is irrelevant data. There are plenty of\r\n> optimizations Postgres could theoretically implement but doesn't, so\r\n> measuring what should happen by what you think is obvious (\"told it to\r\n> populate with default values - why do you need to check\") is clearly\r\n> not valid.\r\n>\r\n> This patch actually came out of our specifically needing to verify\r\n> that this is true before an op precisely because docs aren't actually\r\n> clear and because we can't risk a large table scan under an exclusive\r\n> lock. We're clearly not the only ones with that question; it came up\r\n> in a comment on this blog post announcing the newly committed feature\r\n> [1].\r\n\r\nMy initial reaction was similar to David's. It seems silly to\r\ndocument that we don't do something that seems obviously unnecessary.\r\nHowever, I think you make a convincing argument for including it. I\r\nagree with David's feedback on where this information should go.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Thu, 20 Jan 2022 17:25:40 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table\n scan"
},
{
"msg_contents": "\nOn 1/20/22 12:25, Bossart, Nathan wrote:\n> On 1/19/22, 5:15 PM, \"James Coleman\" <jtc331@gmail.com> wrote:\n>> I'm open to the idea of wordsmithing here, of course, but I strongly\n>> disagree that this is irrelevant data. There are plenty of\n>> optimizations Postgres could theoretically implement but doesn't, so\n>> measuring what should happen by what you think is obvious (\"told it to\n>> populate with default values - why do you need to check\") is clearly\n>> not valid.\n>>\n>> This patch actually came out of our specifically needing to verify\n>> that this is true before an op precisely because docs aren't actually\n>> clear and because we can't risk a large table scan under an exclusive\n>> lock. We're clearly not the only ones with that question; it came up\n>> in a comment on this blog post announcing the newly committed feature\n>> [1].\n> My initial reaction was similar to David's. It seems silly to\n> document that we don't do something that seems obviously unnecessary.\n> However, I think you make a convincing argument for including it. I\n> agree with David's feedback on where this information should go.\n>\n\nI still don't understand the confusion. When you add a new column with a\nnon-null non-volatile default, none of the existing rows has any storage\nfor the new column, so there is nothing to scan and nothing to verify on\nsuch rows. Only the catalog is changed. This is true whether or not the\nnew column is constrained by NOT NULL. I don't understand what people\nthink might have had to be verified by scanning the table.\n\nIf what's happening is not clear from the docs then by all means let's\nmake it clear. But in general I don't think we should talk about what we\nused to do.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 Jan 2022 15:31:04 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 3:31 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 1/20/22 12:25, Bossart, Nathan wrote:\n> > On 1/19/22, 5:15 PM, \"James Coleman\" <jtc331@gmail.com> wrote:\n> >> I'm open to the idea of wordsmithing here, of course, but I strongly\n> >> disagree that this is irrelevant data. There are plenty of\n> >> optimizations Postgres could theoretically implement but doesn't, so\n> >> measuring what should happen by what you think is obvious (\"told it to\n> >> populate with default values - why do you need to check\") is clearly\n> >> not valid.\n> >>\n> >> This patch actually came out of our specifically needing to verify\n> >> that this is true before an op precisely because docs aren't actually\n> >> clear and because we can't risk a large table scan under an exclusive\n> >> lock. We're clearly not the only ones with that question; it came up\n> >> in a comment on this blog post announcing the newly committed feature\n> >> [1].\n> > My initial reaction was similar to David's. It seems silly to\n> > document that we don't do something that seems obviously unnecessary.\n> > However, I think you make a convincing argument for including it. I\n> > agree with David's feedback on where this information should go.\n> >\n>\n> I still don't understand the confusion. When you add a new column with a\n> non-null non-volatile default, none of the existing rows has any storage\n> for the new column, so there is nothing to scan and nothing to verify on\n> such rows. Only the catalog is changed. This is true whether or not the\n> new column is constrained by NOT NULL. I don't understand what people\n> think might have had to be verified by scanning the table.\n>\n> If what's happening is not clear from the docs then by all means let's\n> make it clear. 
But in general I don't think we should talk about what we\n> used to do.\n\nThis patch isn't about talking about what we used to do (though that's\nalready in the docs); it is about trying to make it clear.\n\nBut actually \"When you add a new column with a non-null non-volatile\ndefault...there is nothing to scan\" doesn't always hold as I showed\nwith the check constraint above. Other than that I think that phrasing\nis actually almost close to the kind of clarity I'd like to see in the\ndocs.\n\nAs noted earlier I expect to be posting an updated patch soon.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 20 Jan 2022 15:43:37 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Thu, Jan 20, 2022 at 3:43 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> As noted earlier I expect to be posting an updated patch soon.\n\nHere's the updated series. In 0001 I've moved the documentation tweak\ninto the ALTER TABLE notes section. In 0002 I've taken David J's\nsuggestion of shortening the \"Tip\" on the DDL page and mostly using it\nto point people to the Notes section on the ALTER TABLE page.\n\nThanks,\nJames Coleman",
"msg_date": "Fri, 21 Jan 2022 13:55:10 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 11:55 AM James Coleman <jtc331@gmail.com> wrote:\n\n> On Thu, Jan 20, 2022 at 3:43 PM James Coleman <jtc331@gmail.com> wrote:\n> >\n> > As noted earlier I expect to be posting an updated patch soon.\n>\n> Here's the updated series. In 0001 I've moved the documentation tweak\n> into the ALTER TABLE notes section. In 0002 I've taken David J's\n> suggestion of shortening the \"Tip\" on the DDL page and mostly using it\n> to point people to the Notes section on the ALTER TABLE page.\n>\n>\nWFM\n\nDavid J.\n\nOn Fri, Jan 21, 2022 at 11:55 AM James Coleman <jtc331@gmail.com> wrote:On Thu, Jan 20, 2022 at 3:43 PM James Coleman <jtc331@gmail.com> wrote:\n>\n> As noted earlier I expect to be posting an updated patch soon.\n\nHere's the updated series. In 0001 I've moved the documentation tweak\ninto the ALTER TABLE notes section. In 0002 I've taken David J's\nsuggestion of shortening the \"Tip\" on the DDL page and mostly using it\nto point people to the Notes section on the ALTER TABLE page.WFMDavid J.",
"msg_date": "Fri, 21 Jan 2022 12:04:06 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "\nOn 1/21/22 13:55, James Coleman wrote:\n> On Thu, Jan 20, 2022 at 3:43 PM James Coleman <jtc331@gmail.com> wrote:\n>> As noted earlier I expect to be posting an updated patch soon.\n> Here's the updated series. In 0001 I've moved the documentation tweak\n> into the ALTER TABLE notes section. In 0002 I've taken David J's\n> suggestion of shortening the \"Tip\" on the DDL page and mostly using it\n> to point people to the Notes section on the ALTER TABLE page.\n\n\nI don't really like the first part of patch 1, but as it gets removed by\npatch 2 we can move past that.\n\n\n+ Before <productname>PostgreSQL</productname> 11, adding a new\ncolumn to a\n+ table required rewriting that table, making it a very slow operation.\n+ More recent versions can sometimes optimize away this rewrite and\nrelated\n+ validation scans. See the notes in <command>ALTER TABLE</command>\nfor details.\n\n\nI know what it's replacing refers to release 11, but let's stop doing\nthat. How about something like this?\n\n Adding a new column can sometimes require rewriting the table,\n making it a very slow operation. However in many cases this rewrite\n and related verification scans can be optimized away by using an\n appropriate default value. See the notes in <command>ALTER\n TABLE</command> for details.\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 Jan 2022 16:08:35 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 2:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 1/21/22 13:55, James Coleman wrote:\n>\n> + Before <productname>PostgreSQL</productname> 11, adding a new\n> column to a\n> + table required rewriting that table, making it a very slow operation.\n> + More recent versions can sometimes optimize away this rewrite and\n> related\n> + validation scans. See the notes in <command>ALTER TABLE</command>\n> for details.\n>\n>\n> I know what it's replacing refers to release 11, but let's stop doing\n> that. How about something like this?\n>\n> Adding a new column can sometimes require rewriting the table,\n> making it a very slow operation. However in many cases this rewrite\n> and related verification scans can be optimized away by using an\n> appropriate default value. See the notes in <command>ALTER\n> TABLE</command> for details.\n>\n\nI think it is a virtue, and am supported in that feeling by the existing\nwording, to be explicit about the release before which these optimizations\ncan not happen. The docs generally use this to good effect without\noverdoing it. This is a prime example.\n\nThe combined effect of \"sometimes\", \"in many\", \"can be\", and \"an\nappropriate\" make this version harder to read than it probably needs to\nbe. I like the patch as-is over this; but I would want to give an\nalternative wording more thought if it is insisted upon that mention of\nPostgreSQL 11 goes away.\n\nDavid J.\n\nOn Fri, Jan 21, 2022 at 2:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:On 1/21/22 13:55, James Coleman wrote:\n+ Before <productname>PostgreSQL</productname> 11, adding a new\ncolumn to a\n+ table required rewriting that table, making it a very slow operation.\n+ More recent versions can sometimes optimize away this rewrite and\nrelated\n+ validation scans. See the notes in <command>ALTER TABLE</command>\nfor details.\n\n\nI know what it's replacing refers to release 11, but let's stop doing\nthat. 
How about something like this?\n\n Adding a new column can sometimes require rewriting the table,\n making it a very slow operation. However in many cases this rewrite\n and related verification scans can be optimized away by using an\n appropriate default value. See the notes in <command>ALTER\n TABLE</command> for details.I think it is a virtue, and am supported in that feeling by the existing wording, to be explicit about the release before which these optimizations can not happen. The docs generally use this to good effect without overdoing it. This is a prime example.The combined effect of \"sometimes\", \"in many\", \"can be\", and \"an appropriate\" make this version harder to read than it probably needs to be. I like the patch as-is over this; but I would want to give an alternative wording more thought if it is insisted upon that mention of PostgreSQL 11 goes away.David J.",
"msg_date": "Fri, 21 Jan 2022 14:34:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Jan 21, 2022 at 2:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I know what it's replacing refers to release 11, but let's stop doing\n>> that. How about something like this?\n>> \n>> Adding a new column can sometimes require rewriting the table,\n>> making it a very slow operation. However in many cases this rewrite\n>> and related verification scans can be optimized away by using an\n>> appropriate default value. See the notes in <command>ALTER\n>> TABLE</command> for details.\n\n> I think it is a virtue, and am supported in that feeling by the existing\n> wording, to be explicit about the release before which these optimizations\n> can not happen. The docs generally use this to good effect without\n> overdoing it. This is a prime example.\n\nThe fact of the matter is that optimizations of this sort have existed\nfor years. (For example, I think we've optimized away the rewrite\nwhen the new column is DEFAULT NULL since the very beginning.) So it\ndoes not help to write the text as if there were no such optimizations\nbefore version N and they were all there in N.\n\nI agree that Andrew's text could stand a pass of \"omit needless words\".\nBut I also think that we could be a bit more explicit about what \"slow\"\nmeans. Maybe like\n\nAdding a new column can require rewriting the whole table,\nmaking it slow for large tables. However the rewrite can be optimized\naway in some cases, depending on what default value is given to the\ncolumn. See <command>ALTER TABLE</command> for details.\n\n(the ALTER TABLE reference should be a link, too)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 16:50:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 2:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Fri, Jan 21, 2022 at 2:08 PM Andrew Dunstan <andrew@dunslane.net>\n> wrote:\n> >> I know what it's replacing refers to release 11, but let's stop doing\n> >> that. How about something like this?\n> >>\n> >> Adding a new column can sometimes require rewriting the table,\n> >> making it a very slow operation. However in many cases this rewrite\n> >> and related verification scans can be optimized away by using an\n> >> appropriate default value. See the notes in <command>ALTER\n> >> TABLE</command> for details.\n>\n> > I think it is a virtue, and am supported in that feeling by the existing\n> > wording, to be explicit about the release before which these\n> optimizations\n> > can not happen. The docs generally use this to good effect without\n> > overdoing it. This is a prime example.\n>\n> The fact of the matter is that optimizations of this sort have existed\n> for years. (For example, I think we've optimized away the rewrite\n> when the new column is DEFAULT NULL since the very beginning.) So it\n> does not help to write the text as if there were no such optimizations\n> before version N and they were all there in N.\n>\n\nFair point, and indeed the v10 docs do mention the NULL (or no default)\noptimization.\n\n\n> I agree that Andrew's text could stand a pass of \"omit needless words\".\n> But I also think that we could be a bit more explicit about what \"slow\"\n> means. Maybe like\n>\n> Adding a new column can require rewriting the whole table,\n> making it slow for large tables. However the rewrite can be optimized\n> away in some cases, depending on what default value is given to the\n> column. See <command>ALTER TABLE</command> for details.\n>\n>\nComma needed after however.\nYou've removed the \"constraint verification scan\" portion of this. Maybe:\n\"\"\"\n...\ncolumn. 
The same applies for the NOT NULL constraint verification scan.\nSee <command>ALTER TABLE</command> for details.\n\"\"\"\n\n\nRe-reading this, the recommendation:\n\n- However, if the default value is volatile (e.g.,\n- <function>clock_timestamp()</function>)\n- each row will need to be updated with the value calculated at the time\n- <command>ALTER TABLE</command> is executed. To avoid a potentially\n- lengthy update operation, particularly if you intend to fill the\ncolumn\n- with mostly nondefault values anyway, it may be preferable to add the\n- column with no default, insert the correct values using\n- <command>UPDATE</command>, and then add any desired default as\ndescribed\n- below.\n\nhas now been completely removed from the documentation. I suggest having\nthis remain as the Tip and turning the optimization stuff into a Note.\n\n\n> (the ALTER TABLE reference should be a link, too)\n>\n\nYeah, the page does have a link already (fairly close by...) but with these\nchanges putting one here seems to make sense.\n\nDavid J.\n\nOn Fri, Jan 21, 2022 at 2:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Jan 21, 2022 at 2:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> I know what it's replacing refers to release 11, but let's stop doing\n>> that. How about something like this?\n>> \n>> Adding a new column can sometimes require rewriting the table,\n>> making it a very slow operation. However in many cases this rewrite\n>> and related verification scans can be optimized away by using an\n>> appropriate default value. See the notes in <command>ALTER\n>> TABLE</command> for details.\n\n> I think it is a virtue, and am supported in that feeling by the existing\n> wording, to be explicit about the release before which these optimizations\n> can not happen. The docs generally use this to good effect without\n> overdoing it. 
This is a prime example.\n\nThe fact of the matter is that optimizations of this sort have existed\nfor years. (For example, I think we've optimized away the rewrite\nwhen the new column is DEFAULT NULL since the very beginning.) So it\ndoes not help to write the text as if there were no such optimizations\nbefore version N and they were all there in N.Fair point, and indeed the v10 docs do mention the NULL (or no default) optimization.\n\nI agree that Andrew's text could stand a pass of \"omit needless words\".\nBut I also think that we could be a bit more explicit about what \"slow\"\nmeans. Maybe like\n\nAdding a new column can require rewriting the whole table,\nmaking it slow for large tables. However the rewrite can be optimized\naway in some cases, depending on what default value is given to the\ncolumn. See <command>ALTER TABLE</command> for details.\nComma needed after however.You've removed the \"constraint verification scan\" portion of this. Maybe:\"\"\"...column. The same applies for the NOT NULL constraint verification scan.See <command>ALTER TABLE</command> for details.\"\"\"Re-reading this, the recommendation:- However, if the default value is volatile (e.g.,- <function>clock_timestamp()</function>)- each row will need to be updated with the value calculated at the time- <command>ALTER TABLE</command> is executed. To avoid a potentially- lengthy update operation, particularly if you intend to fill the column- with mostly nondefault values anyway, it may be preferable to add the- column with no default, insert the correct values using- <command>UPDATE</command>, and then add any desired default as described- below.has now been completely removed from the documentation. I suggest having this remain as the Tip and turning the optimization stuff into a Note. \n(the ALTER TABLE reference should be a link, too)Yeah, the page does have a link already (fairly close by...) but with these changes putting one here seems to make sense.David J.",
"msg_date": "Fri, 21 Jan 2022 15:29:01 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> You've removed the \"constraint verification scan\" portion of this.\n\nIndeed, because that's got nothing to do with adding a new column\n(per se; adding a constraint along with the column is a different\ncan of worms).\n\n> Re-reading this, the recommendation:\n\n> - However, if the default value is volatile (e.g.,\n> - <function>clock_timestamp()</function>)\n> - each row will need to be updated with the value calculated at the time\n> - <command>ALTER TABLE</command> is executed. To avoid a potentially\n> - lengthy update operation, particularly if you intend to fill the\n> column\n> - with mostly nondefault values anyway, it may be preferable to add the\n> - column with no default, insert the correct values using\n> - <command>UPDATE</command>, and then add any desired default as\n> described\n> - below.\n\n> has now been completely removed from the documentation.\n\nReally? That's horrid, because that's directly useful advice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:38:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 4:08 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 1/21/22 13:55, James Coleman wrote:\n> > On Thu, Jan 20, 2022 at 3:43 PM James Coleman <jtc331@gmail.com> wrote:\n> >> As noted earlier I expect to be posting an updated patch soon.\n> > Here's the updated series. In 0001 I've moved the documentation tweak\n> > into the ALTER TABLE notes section. In 0002 I've taken David J's\n> > suggestion of shortening the \"Tip\" on the DDL page and mostly using it\n> > to point people to the Notes section on the ALTER TABLE page.\n>\n>\n> I don't really like the first part of patch 1, but as it gets removed by\n> patch 2 we can move past that.\n\nAt first I was very confused by this feedback, but after looking at\nthe patch files I sent, that's my fault: I meant to remove the\nmodification of the \"Tip\" section but somehow missed that in what I\nsent. I'll correct that in the next patch series.\n\nJames Coleman\n\n\n",
"msg_date": "Fri, 21 Jan 2022 18:29:04 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > You've removed the \"constraint verification scan\" portion of this.\n>\n> Indeed, because that's got nothing to do with adding a new column\n> (per se; adding a constraint along with the column is a different\n> can of worms).\n\nYeah. Initially I'd thought I'd wanted it there, but by explicitly\nlinking people to the ALTER TABLE docs for more details (I've made\nthat a link now too) I'm now inclined to agree that tightly focusing\nthe tip is better form.\n\n> > Re-reading this, the recommendation:\n>\n> > - However, if the default value is volatile (e.g.,\n> > - <function>clock_timestamp()</function>)\n> > - each row will need to be updated with the value calculated at the time\n> > - <command>ALTER TABLE</command> is executed. To avoid a potentially\n> > - lengthy update operation, particularly if you intend to fill the\n> > column\n> > - with mostly nondefault values anyway, it may be preferable to add the\n> > - column with no default, insert the correct values using\n> > - <command>UPDATE</command>, and then add any desired default as\n> > described\n> > - below.\n>\n> > has now been completely removed from the documentation.\n>\n> Really? That's horrid, because that's directly useful advice.\n\nRemedied, but rewritten a bit to better fit with the new style/goal of\nthat tip).\n\nVersion 3 is attached.\n\nJames Coleman",
"msg_date": "Fri, 21 Jan 2022 19:14:36 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:14 PM James Coleman <jtc331@gmail.com> wrote:\n\n>\n> > Really? That's horrid, because that's directly useful advice.\n>\n> Remedied, but rewritten a bit to better fit with the new style/goal of\n> that tip).\n>\n> Version 3 is attached.\n>\n>\nComing back to this after a respite I think the tip needs to be moved just\nlike everything else. For much the same reason (though this may only be a\npersonal bias), I know what SQL Commands do the various things that DDL\nencompasses (especially the basics like adding a column) and so the DDL\nsection is really just a tutorial-like chapter that I will generally forget\nabout because I will go straight to the official source which is the SQL\nCommand Reference. My future self would want the tip to show up there. If\nwe put the tip after the existing paragraph that starts: \"Adding a column\nwith a volatile DEFAULT or changing the type of an existing column...\" the\nneed to specify an example function in the tip goes away - though maybe it\nshould be moved to the notes paragraph instead: \"with a volatile DEFAULT\n(e.g., clock_timestamp()) or changing the type of an existing column...\"\n\nDavid J.\n\nOn Fri, Jan 21, 2022 at 5:14 PM James Coleman <jtc331@gmail.com> wrote:\n> Really? That's horrid, because that's directly useful advice.\n\nRemedied, but rewritten a bit to better fit with the new style/goal of\nthat tip).\n\nVersion 3 is attached.Coming back to this after a respite I think the tip needs to be moved just like everything else. For much the same reason (though this may only be a personal bias), I know what SQL Commands do the various things that DDL encompasses (especially the basics like adding a column) and so the DDL section is really just a tutorial-like chapter that I will generally forget about because I will go straight to the official source which is the SQL Command Reference. My future self would want the tip to show up there. 
If we put the tip after the existing paragraph that starts: \"Adding a column with a volatile DEFAULT or changing the type of an existing column...\" the need to specify an example function in the tip goes away - though maybe it should be moved to the notes paragraph instead: \"with a volatile DEFAULT (e.g., clock_timestamp()) or changing the type of an existing column...\"David J.",
"msg_date": "Fri, 21 Jan 2022 22:35:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 12:35 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 5:14 PM James Coleman <jtc331@gmail.com> wrote:\n>>\n>>\n>> > Really? That's horrid, because that's directly useful advice.\n>>\n>> Remedied, but rewritten a bit to better fit with the new style/goal of\n>> that tip).\n>>\n>> Version 3 is attached.\n>>\n>\n> Coming back to this after a respite I think the tip needs to be moved just like everything else. For much the same reason (though this may only be a personal bias), I know what SQL Commands do the various things that DDL encompasses (especially the basics like adding a column) and so the DDL section is really just a tutorial-like chapter that I will generally forget about because I will go straight to the official source which is the SQL Command Reference. My future self would want the tip to show up there. If we put the tip after the existing paragraph that starts: \"Adding a column with a volatile DEFAULT or changing the type of an existing column...\" the need to specify an example function in the tip goes away - though maybe it should be moved to the notes paragraph instead: \"with a volatile DEFAULT (e.g., clock_timestamp()) or changing the type of an existing column...\"\n\nIn my mind that actually might be a reason to keep it that way. I\nexpect someone who's somewhat experienced to know there are things\n(like table rewrites and scans) you need to consider and therefore go\nto the ALTER TABLE page and read the details. But for someone newer\nthe tutorial page needs to introduce them to the idea that those\ngotchas exist.\n\nThoughts?\nJames Coleman\n\n\n",
"msg_date": "Sat, 22 Jan 2022 08:18:33 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Saturday, January 22, 2022, James Coleman <jtc331@gmail.com> wrote:\n\n> On Sat, Jan 22, 2022 at 12:35 AM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 5:14 PM James Coleman <jtc331@gmail.com> wrote:\n> >>\n> >>\n> >> > Really? That's horrid, because that's directly useful advice.\n> >>\n> >> Remedied, but rewritten a bit to better fit with the new style/goal of\n> >> that tip).\n> >>\n> >> Version 3 is attached.\n> >>\n> >\n> > Coming back to this after a respite I think the tip needs to be moved\n> just like everything else. For much the same reason (though this may only\n> be a personal bias), I know what SQL Commands do the various things that\n> DDL encompasses (especially the basics like adding a column) and so the DDL\n> section is really just a tutorial-like chapter that I will generally forget\n> about because I will go straight to the official source which is the SQL\n> Command Reference. My future self would want the tip to show up there. If\n> we put the tip after the existing paragraph that starts: \"Adding a column\n> with a volatile DEFAULT or changing the type of an existing column...\" the\n> need to specify an example function in the tip goes away - though maybe it\n> should be moved to the notes paragraph instead: \"with a volatile DEFAULT\n> (e.g., clock_timestamp()) or changing the type of an existing column...\"\n>\n> In my mind that actually might be a reason to keep it that way. I\n> expect someone who's somewhat experienced to know there are things\n> (like table rewrites and scans) you need to consider and therefore go\n> to the ALTER TABLE page and read the details. But for someone newer\n> the tutorial page needs to introduce them to the idea that those\n> gotchas exist.\n>\n>\nReaders of the DDL page are given a hint of the issues and directed to\nadditional, arguably mandatory, reading. 
They can not worry about the\nnuances during their learning phase but instead can defer that reading\nuntil they actually have need to alter a (large) table. But expecting them\nto read the command reference page is reasonable and is IMO the more\nprobable place they will look when they start doing stuff in earnest. For\nthe inexperienced reader breaking this up in this manner based upon depth\nof detail feels right to me.\n\nDavid J.",
"msg_date": "Sat, 22 Jan 2022 08:28:52 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 10:28 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>\n>\n> On Saturday, January 22, 2022, James Coleman <jtc331@gmail.com> wrote:\n>>\n>> On Sat, Jan 22, 2022 at 12:35 AM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Fri, Jan 21, 2022 at 5:14 PM James Coleman <jtc331@gmail.com> wrote:\n>> >>\n>> >>\n>> >> > Really? That's horrid, because that's directly useful advice.\n>> >>\n>> >> Remedied, but rewritten a bit to better fit with the new style/goal of\n>> >> that tip).\n>> >>\n>> >> Version 3 is attached.\n>> >>\n>> >\n>> > Coming back to this after a respite I think the tip needs to be moved just like everything else. For much the same reason (though this may only be a personal bias), I know what SQL Commands do the various things that DDL encompasses (especially the basics like adding a column) and so the DDL section is really just a tutorial-like chapter that I will generally forget about because I will go straight to the official source which is the SQL Command Reference. My future self would want the tip to show up there. If we put the tip after the existing paragraph that starts: \"Adding a column with a volatile DEFAULT or changing the type of an existing column...\" the need to specify an example function in the tip goes away - though maybe it should be moved to the notes paragraph instead: \"with a volatile DEFAULT (e.g., clock_timestamp()) or changing the type of an existing column...\"\n>>\n>> In my mind that actually might be a reason to keep it that way. I\n>> expect someone who's somewhat experienced to know there are things\n>> (like table rewrites and scans) you need to consider and therefore go\n>> to the ALTER TABLE page and read the details. 
But for someone newer\n>> the tutorial page needs to introduce them to the idea that those\n>> gotchas exist.\n>>\n>\n> Readers of the DDL page are given a hint of the issues and directed to additional, arguably mandatory, reading. They can not worry about the nuances during their learning phase but instead can defer that reading until they actually have need to alter a (large) table. But expecting them to read the command reference page is reasonable and is IMO the more probable place they will look when they start doing stuff in earnest. For the inexperienced reader breaking this up in this manner based upon depth of detail feels right to me.\n\nHere's a version that looks like that. I'm not convinced it's an\nimprovement over the previous version: again, I expect more advanced\nusers to already understand this concept, and I think moving it to the\nALTER TABLE page could very well have the effect of burying (amidst\nthe ton of detail on the ALTER TABLE page) a concept that would be\nuseful to learn early on in a tutorial like the DDL page. But if\npeople really think this is an improvement, then I can acquiesce.\n\nThanks,\nJames Coleman",
"msg_date": "Tue, 25 Jan 2022 08:48:44 -0500",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 8:49 AM James Coleman <jtc331@gmail.com> wrote:\n> Here's a version that looks like that. I'm not convinced it's an\n> improvement over the previous version: again, I expect more advanced\n> users to already understand this concept, and I think moving it to the\n> ALTER TABLE page could very well have the effect of burying (amidst\n> the ton of detail on the ALTER TABLE page) a concept that would be\n> useful to learn early on in a tutorial like the DDL page. But if\n> people really think this is an improvement, then I can acquiesce.\n\nI vote for rejecting both of these patches.\n\n0001 adds the following sentence to the documentation: \"A <literal>NOT\nNULL</literal> constraint may be added to the new column in the same\nstatement without requiring scanning the table to verify the\nconstraint.\" My first reaction when I read this sentence was that it\nwas warning the user about the absence of a hazard that no one would\nexpect in the first place. We could also document that adding a NOT\nNULL constraint will not cause your gas tank to catch fire, but nobody\nwas worried about that until we brought it up. I also think that the\nsentence makes the rest of the paragraph harder to understand, because\nthe rest of the paragraph is talking about adding a new column with a\ndefault, and now suddenly we're talking about NOT NULL constraints.\n\n0002 moves some advice about adding columns with defaults from one\npart of the documentation to another. Maybe that's a good idea, and\nmaybe it isn't, but it also rewords the advice, and in my opinion, the\nnew wording is less clear and specific than the existing wording. It\nalso changes a sentence that mentions volatile defaults to give a\nspecific example of a volatile function -- clock_timestamp -- probably\nbecause, where the documentation was before, that function was\nmentioned. 
However, that sentence seems clear enough as it is and does\nnot really need an example.\n\nI am not trying to pretend that all of our documentation in this area\nis totally ideal or that nothing can be done to make it better.\nHowever, I don't think that these particular patches actually make it\nbetter. And I also think that there's only so much time that is worth\nspending on a patch set like this. Not everything that is confusing\nabout the system is ever going to make its way into the documentation,\nand that would remain true even if we massively expanded the level of\ndetail that we put in there. That doesn't mean that James or anyone\nelse shouldn't suggest things to add as they find things that they\nthink should be added, but it does mean that not every such suggestion\nis going to get traction and that's OK too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 25 Mar 2022 16:40:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
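The optimization that patch 0001 tries to document can be seen by contrasting the two possible orderings of the commands. This is only a sketch with hypothetical table and column names; the catalog-only fast-default path exists in PostgreSQL 11 and later:

```sql
-- Adding the column, its constant default, and NOT NULL in one
-- statement is a catalog-only change: no table rewrite and no
-- verification scan are needed, since every existing row is known
-- to have the (non-null) default value.
ALTER TABLE big_table ADD COLUMN flag boolean NOT NULL DEFAULT false;

-- Splitting it into two steps forces the second command to scan the
-- whole table to verify the constraint, while holding an
-- ACCESS EXCLUSIVE lock:
ALTER TABLE big_table ADD COLUMN flag2 boolean DEFAULT false;
ALTER TABLE big_table ALTER COLUMN flag2 SET NOT NULL;
```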
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I vote for rejecting both of these patches.\n\nI see what James is on about here, but I agree that these specific changes\ndon't help much. What would actually be desirable IMO is a separate\nsection somewhere explaining the performance characteristics of ALTER\nTABLE. (We've also kicked around the idea of EXPLAIN for ALTER TABLE,\nbut that's a lot more work.) This could coalesce the parenthetical\nremarks that exist in ddl.sgml as well as alter_table.sgml into\nsomething a bit more unified and perhaps easier to follow. In particular,\nit should start by defining what we mean by \"table rewrite\" and \"table\nscan\". I don't recall at the moment whether we define those in multiple\nplaces or not at all, but as things stand any such discussion would be\npretty fragmented.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 25 Mar 2022 17:00:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
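A sketch of the taxonomy Tom is suggesting be documented, using hypothetical tables and the operation classifications already mentioned in this thread:

```sql
-- "Table rewrite": every row is copied into new storage while the
-- lock is held.
ALTER TABLE t ALTER COLUMN c TYPE bigint;          -- type change, int -> bigint
ALTER TABLE t ADD COLUMN ts timestamptz
    DEFAULT clock_timestamp();                     -- volatile default

-- "Table scan": rows are read and checked but not rewritten.
ALTER TABLE t ALTER COLUMN c SET NOT NULL;         -- verify no NULLs exist
ALTER TABLE t ADD CONSTRAINT c_positive
    CHECK (c > 0);                                 -- validate existing rows

-- Neither: a catalog-only change (PostgreSQL 11+ for the constant
-- default, stored as a "missing value" in the catalog).
ALTER TABLE t ADD COLUMN d integer DEFAULT 0;
```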
{
"msg_contents": "On Fri, Mar 25, 2022 at 1:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jan 25, 2022 at 8:49 AM James Coleman <jtc331@gmail.com> wrote:\n> > Here's a version that looks like that. I'm not convinced it's an\n> > improvement over the previous version: again, I expect more advanced\n> > users to already understand this concept, and I think moving it to the\n> > ALTER TABLE page could very well have the effect of burying i(amidst\n> > the ton of detail on the ALTER TABLE page) concept that would be\n> > useful to learn early on in a tutorial like the DDL page. But if\n> > people really think this is an improvement, then I can acquiesce.\n>\n> I vote for rejecting both of these patches.\n>\n> 0001 adds the following sentence to the documentation: \"A <literal>NOT\n> NULL</literal> constraint may be added to the new column in the same\n> statement without requiring scanning the table to verify the\n> constraint.\" My first reaction when I read this sentence was that it\n> was warning the user about the absence of a hazard that no one would\n> expect in the first place.\n\n\nI agree. The wording that would make one even consider this has yet to\nhave been introduced at this point in the documentation.\n\n\n> 0002 moves some advice about adding columns with defaults from one\n> part of the documentation to another. Maybe that's a good idea, and\n> maybe it isn't, but it also rewords the advice, and in my opinion, the\n> new wording is less clear and specific than the existing wording.\n\n\nIn the passing time I've had to directly reference the DDL chapter (which\nis a mix of reference material and tutorial) on numerous items so my desire\nto move the commentary away from here is less, but still I feel that the\ncommand reference page is the correct place for this kind of detail.\n\nIf we took away too much info and made things less clear let's address\nthat. 
It can't be that much, we are talking about basically a paragraph of\ntext here.\n\n\n> It\n> also changes a sentence that mentions volatile defaults to give a\n> specific example of a volatile function -- clock_timestamp -- probably\n> because where the documentation was before that function was\n> mentioned. However, that sentence seems clear enough as it is and does\n\nnot really need an example.\n>\n\nNope, the usage and context in the patch is basically the same as the\nexisting usage and context.\nDavid J.",
"msg_date": "Fri, 25 Mar 2022 14:02:02 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 5:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I see what James is on about here, but I agree that these specific changes\n> don't help much. What would actually be desirable IMO is a separate\n> section somewhere explaining the performance characteristics of ALTER\n> TABLE.\n\nSure. If someone wants to do that and bring it to a level of quality\nthat we could consider committing, I'm fine with that. But I don't\nthink that has much to do with the patches before us.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 26 Mar 2022 14:31:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 4:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 8:49 AM James Coleman <jtc331@gmail.com> wrote:\n> > Here's a version that looks like that. I'm not convinced it's an\n> > improvement over the previous version: again, I expect more advanced\n> > users to already understand this concept, and I think moving it to the\n> > ALTER TABLE page could very well have the effect of burying i(amidst\n> > the ton of detail on the ALTER TABLE page) concept that would be\n> > useful to learn early on in a tutorial like the DDL page. But if\n> > people really think this is an improvement, then I can acquiesce.\n>\n> I vote for rejecting both of these patches.\n>\n> 0001 adds the following sentence to the documentation: \"A <literal>NOT\n> NULL</literal> constraint may be added to the new column in the same\n> statement without requiring scanning the table to verify the\n> constraint.\" My first reaction when I read this sentence was that it\n> was warning the user about the absence of a hazard that no one would\n> expect in the first place. We could also document that adding a NOT\n> NULL constraint will not cause your gas tank to catch fire, but nobody\n> was worried about that until we brought it up.\n\nAs noted at minimum we (Braintree Payments) feared this hazard. That's\nreasonable because adding a NOT NULL constraint normally requires a\ntable scan while holding an exclusive lock. 
It's fairly obvious why\nsomeone like us (and anyone who can't have downtime) would be paranoid\nabout any possibility of long-running operations under exclusive locks.\n\nI realize it's rhetorical flourish, but it hardly seems reasonable to\ncompare an actual hazard a database could plausibly have (indeed it\nis an optimization in the code that prevents it from happening -- a\nnaive implementation would in fact scan the full table on all NOT NULL\nconstraint additions) with something not at all related to databases\n(gas tank fires).\n\nI simply do not accept the claim that this is not a reasonable concern\nto have nor that this isn't worth documenting. It was worth someone\ntaking the time to consider as an optimization in the code. And the\nconsequence of that not having been done could be an outage for an\nunsuspecting user. Of all the things we would want to document, DDL\nthat could require executing long operations while holding exclusive\nlocks seems pretty high on the list.\n\n> I also think that the\n> sentence makes the rest of the paragraph harder to understand, because\n> the rest of the paragraph is talking about adding a new column with a\n> default, and now suddenly we're talking about NOT NULL constraints.\n\nI am, however, happy to hear critiques of the style of the patch or\nthe best way to document this kind of behavior.\n\nI'm curious though what you'd envision being a better place for this\ninformation. Yes, we're talking about new columns -- that's the\noperation under consideration -- but the NOT NULL constraint is part\nof the new column definition. I'm not sure where else you would\ndocument something that's a part of adding a new column.\n\n> 0002 moves some advice about adding columns with defaults from one\n> part of the documentation to another. Maybe that's a good idea, and\n> maybe it isn't, but it also rewords the advice, and in my opinion, the\n> new wording is less clear and specific than the existing wording. 
It\n> also changes a sentence that mentions volatile defaults to give a\n> specific example of a volatile function -- clock_timestamp -- probably\n> because where the documentation was before that function was\n> mentioned. However, that sentence seems clear enough as it is and does\n> not really need an example.\n\nAdding that example (and, indeed, moving the advice) was per a\nprevious reviewer's request. So I'm not sure what to do in this\nsituation -- I'm trying to improve the proposal per reviewer feedback\nbut there are conflicting reviewers. I suppose we'd need a\ntie-breaking reviewer.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Sat, 26 Mar 2022 18:25:36 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Fri, Mar 25, 2022 at 5:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I vote for rejecting both of these patches.\n>\n> I see what James is on about here, but I agree that these specific changes\n> don't help much. What would actually be desirable IMO is a separate\n> section somewhere explaining the performance characteristics of ALTER\n> TABLE. (We've also kicked around the idea of EXPLAIN for ALTER TABLE,\n> but that's a lot more work.) This could coalesce the parenthetical\n> remarks that exist in ddl.sgml as well as alter_table.sgml into\n> something a bit more unified and perhaps easier to follow. In particular,\n> it should start by defining what we mean by \"table rewrite\" and \"table\n> scan\". I don't recall at the moment whether we define those in multiple\n> places or not at all, but as things stand any such discussion would be\n> pretty fragmented.\n>\n> regards, tom lane\n\nI think a unified area discussing pitfalls/performance of ALTER TABLE\nseems like a great idea.\n\nThat being said: given that \"as things stand\" that \"discussion\n[already is] pretty fragmented\" is there a place for a simpler\nimprovement (adding a short explanation of this particular hazard) in\nthe meantime? I don't mean this specific v4 patch -- just in general\n(since the patch can be revised of course).\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Sat, 26 Mar 2022 18:29:08 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 3:25 PM James Coleman <jtc331@gmail.com> wrote:\n\n> On Fri, Mar 25, 2022 at 4:40 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 8:49 AM James Coleman <jtc331@gmail.com> wrote:\n> > > Here's a version that looks like that. I'm not convinced it's an\n> > > improvement over the previous version: again, I expect more advanced\n> > > users to already understand this concept, and I think moving it to the\n> > > ALTER TABLE page could very well have the effect of burying i(amidst\n> > > the ton of detail on the ALTER TABLE page) concept that would be\n> > > useful to learn early on in a tutorial like the DDL page. But if\n> > > people really think this is an improvement, then I can acquiesce.\n> >\n> > I vote for rejecting both of these patches.\n> >\n> > 0001 adds the following sentence to the documentation: \"A <literal>NOT\n> > NULL</literal> constraint may be added to the new column in the same\n> > statement without requiring scanning the table to verify the\n> > constraint.\" My first reaction when I read this sentence was that it\n> > was warning the user about the absence of a hazard that no one would\n> > expect in the first place. We could also document that adding a NOT\n> > NULL constraint will not cause your gas tank to catch fire, but nobody\n> > was worried about that until we brought it up.\n>\n> As noted at minimum we (Braintree Payments) feared this hazard. That's\n> reasonable because adding a NOT NULL constraint normally requires a\n> table scan while holding an exclusive lock. It's fairly obvious why\n> someone like us (any anyone who can't have downtime) would be paranoid\n> about any possibility of long-running operations under exclusive locks\n>\n>\nReading the docs again I see:\n\nALTER TABLE ... ALTER COLUMN ... SET/DROP NOT NULL\n\"SET NOT NULL may only be applied to a column provided none of the records\nin the table contain a NULL value for the column. 
Ordinarily this is\nchecked during the ALTER TABLE by scanning the entire table; however, if a\nvalid CHECK constraint is found which proves no NULL can exist, then the\ntable scan is skipped.\"\n\nAnd the claim is that the reader would read this behavior of the ALTER\nCOLUMN ... SET NOT NULL command and assume that it might also apply to:\n\nALTER TABLE ... ADD COLUMN ... DEFAULT NOT NULL\n\nI accept that such an assumption is plausible and worth disabusing\n(regardless of my opinion, that is, to my understanding, why this patch is\nbeing introduced).\n\nTo that end we should do so in the ALTER COLUMN ... SET NOT NULL section,\nnot the ADD COLUMN ... DEFAULT NOT NULL (or, specifically, its\ncorresponding paragraph in the notes section).\n\nI would suggest rewriting 0001 to target ALTER COLUMN instead of in the\ngeneric notes section (in the paragraph beginning \"Adding a column with a\nvolatile DEFAULT\") for the desired clarification.\n\n0002, the moving of existing content from DDL to ALTER TABLE, does not have\nagreement and the author of this patch isn't behind it. I'm not inclined\nto introduce a patch to push forth the discussion to conclusion myself. So\nfor now it should just die.\n\nDavid J.",
"msg_date": "Sat, 26 Mar 2022 16:14:05 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
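The scan-skipping path in the SET NOT NULL documentation David quotes can be exercised like this (hypothetical table and constraint names; splitting the CHECK constraint into NOT VALID plus a separate VALIDATE keeps the verification scan outside the ACCESS EXCLUSIVE lock):

```sql
-- 1. Add a CHECK constraint without validating existing rows;
--    this takes only a brief lock.
ALTER TABLE accounts ADD CONSTRAINT accounts_balance_not_null
    CHECK (balance IS NOT NULL) NOT VALID;

-- 2. Validate it separately. This does scan the table, but under a
--    SHARE UPDATE EXCLUSIVE lock, so normal reads and writes continue.
ALTER TABLE accounts VALIDATE CONSTRAINT accounts_balance_not_null;

-- 3. SET NOT NULL now finds a valid CHECK constraint proving no NULL
--    can exist and skips its own table scan.
ALTER TABLE accounts ALTER COLUMN balance SET NOT NULL;

-- 4. The now-redundant CHECK constraint can be dropped.
ALTER TABLE accounts DROP CONSTRAINT accounts_balance_not_null;
```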
{
"msg_contents": "On Sat, Mar 26, 2022 at 4:14 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> I would suggest rewriting 0001 to target ALTER COLUMN instead of in the\n> generic notes section (in the paragraph beginning \"Adding a column with a\n> volatile DEFAULT\") for the desired clarification.\n>\n>\nOr, we can leave it where things are and make sure the reader understands\nthere are two paths to having a NOT NULL constraint on the newly added\ncolumn. Something like:\n\n\"If you plan on having a NOT NULL constraint on the newly added column you\nshould add it as a column constraint during the ADD COLUMN command. If you\nadd it later via ALTER COLUMN SET NOT NULL the table will have to be\ncompletely scanned in order to ensure that no null values were inserted.\"\n\nDavid J.",
"msg_date": "Sat, 26 Mar 2022 16:26:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
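The two paths described in the preceding message can be sketched in SQL. This is an editorial illustration of the semantics the thread describes, assuming a hypothetical table `orders`; the table and column names are not from the thread.

```sql
-- Path 1: add the NOT NULL constraint together with the column. No table
-- scan is needed: the constant default is evaluated once and recorded in
-- the catalog, so every existing row is known to satisfy the constraint.
ALTER TABLE orders ADD COLUMN status text NOT NULL DEFAULT 'new';

-- Path 2: add the constraint afterwards. The SET NOT NULL step must scan
-- the entire table to verify that no null values were inserted in between.
ALTER TABLE orders ADD COLUMN note text DEFAULT '';
ALTER TABLE orders ALTER COLUMN note SET NOT NULL;
```

Both paths end with the same schema; only the second one incurs the verification scan (under an exclusive lock) that the proposed wording warns about.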
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Or, we can leave it where things are and make sure the reader understands\n> there are two paths to having a NOT NULL constraint on the newly added\n> column. Something like:\n\n> \"If you plan on having a NOT NULL constraint on the newly added column you\n> should add it as a column constraint during the ADD COLUMN command. If you\n> add it later via ALTER COLUMN SET NOT NULL the table will have to be\n> completely scanned in order to ensure that no null values were inserted.\"\n\nThe first way also requires having a non-null DEFAULT, of course, and\nthen also that default value must be a constant (else you end up with\na table rewrite which is even worse). This sort of interaction\nbetween features is why I feel that a separate unified discussion\nis the only reasonable solution.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 26 Mar 2022 19:36:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > Or, we can leave it where things are and make sure the reader understands\n> > there are two paths to having a NOT NULL constraint on the newly added\n> > column. Something like:\n>\n> > \"If you plan on having a NOT NULL constraint on the newly added column\n> you\n> > should add it as a column constraint during the ADD COLUMN command. If\n> you\n> > add it later via ALTER COLUMN SET NOT NULL the table will have to be\n> > completely scanned in order to ensure that no null values were inserted.\"\n>\n> The first way also requires having a non-null DEFAULT, of course, and\n> then also that default value must be a constant (else you end up with\n> a table rewrite which is even worse). This sort of interaction\n> between features is why I feel that a separate unified discussion\n> is the only reasonable solution.\n>\n>\nThe paragraph it is being added to discusses the table rewrite already.\nThis does nothing to contradict the fact that a table rewrite might still\nhave to happen.\n\nThe goal of this sentence is to tell the user to make sure they don't\nforget to add the NOT NULL during the column add so that they don't have to\nincur a future table scan by executing ALTER COLUMN SET NOT NULL.\n\nI am assuming that the user understands when a table rewrite has to happen\nand that the presence of NOT NULL in the ADD COLUMN doesn't impact that.\nAnd if a table rewrite happens that a table scan happens implicitly.\nAdmittedly, this doesn't directly address the original complaint, but by\nshowing how the two commands differ I believe the confusion will go away.\nSET NOT NULL performs a scan, ADD COLUMN NOT NULL does not; it just might\nrequire something worse if the supplied default is volatile.\n\nDavid J.\n\nOn Sat, Mar 26, 2022 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> Or, we can leave it where things are and make sure the reader understands\n> there are two paths to having a NOT NULL constraint on the newly added\n> column. Something like:\n\n> \"If you plan on having a NOT NULL constraint on the newly added column you\n> should add it as a column constraint during the ADD COLUMN command. If you\n> add it later via ALTER COLUMN SET NOT NULL the table will have to be\n> completely scanned in order to ensure that no null values were inserted.\"\n\nThe first way also requires having a non-null DEFAULT, of course, and\nthen also that default value must be a constant (else you end up with\na table rewrite which is even worse). This sort of interaction\nbetween features is why I feel that a separate unified discussion\nis the only reasonable solution.The paragraph it is being added to discusses the table rewrite already. This does nothing to contradict the fact that a table rewrite might still have to happen.The goal of this sentence is to tell the user to make sure they don't forget to add the NOT NULL during the column add so that they don't have to incur a future table scan by executing ALTER COLUMN SET NOT NULL.I am assuming that the user understands when a table rewrite has to happen and that the presence of NOT NULL in the ADD COLUMN doesn't impact that. And if a table rewrite happens that a table scan happens implicitly. Admittedly, this doesn't directly address the original complaint, but by showing how the two commands differ I believe the confusion will go away. SET NOT NULL performs a scan, ADD COLUMN NOT NULL does not; it just might require something worse if the supplied default is volatile.David J.",
"msg_date": "Sat, 26 Mar 2022 16:56:37 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
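The distinction drawn above ("SET NOT NULL performs a scan, ADD COLUMN NOT NULL does not; it just might require something worse if the supplied default is volatile") can be illustrated with a short sketch, again assuming a hypothetical table `orders` not taken from the thread:

```sql
-- Non-volatile default: evaluated once at ALTER time and stored in the
-- table's metadata; no rewrite and no scan of existing rows.
ALTER TABLE orders ADD COLUMN created_on date NOT NULL DEFAULT '2022-01-01';

-- Volatile default: each existing row needs its own value, so the entire
-- table (and its indexes) must be rewritten.
ALTER TABLE orders ADD COLUMN token uuid NOT NULL DEFAULT gen_random_uuid();
```

(`gen_random_uuid()` is built in as of PostgreSQL 13; any volatile expression behaves the same way here.)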
{
"msg_contents": "On Sat, Mar 26, 2022 at 6:25 PM James Coleman <jtc331@gmail.com> wrote:\n> I simply do not accept the claim that this is not a reasonable concern\n> to have nor that this isn't worth documenting.\n\nI don't think I said that the concern wasn't reasonable, but I don't\nthink the fact that one person or organization had a concern means\nthat it has to be worth documenting. And I didn't say either that it's\nnot intrinsically worth documenting. I said it doesn't fit nicely into\nthe documentation we have.\n\nSince you didn't like my last example, let's try another one. If\nsomeone shows up and proposes a documentation patch to explain what a\nBitmapOr node means, we're probably going to reject it, because it\nmakes no sense to document that one node and not all the others. That\ndoesn't mean that people shouldn't want to know what BitmapOr means,\nbut it's just not sensible to document that one thing in isolation,\neven if somebody somewhere happened to be confused by that thing and\nnot any of the other nodes.\n\nIn the same way, I think you're trying to jam documentation of one\nparticular point into the documentation when there are many other\nsimilar points that are not documented, and I think it's very awkward.\nIt looks to me like you want to document that a table scan isn't\nperformed in a certain case when we haven't documented the rule that\nwould cause that table scan to be performed in other cases, or even\nwhat a table scan means in this context, or any of the similar things\nthat are equally important, like a table rewrite or an index rebuild,\nor any of the rules for when those things happen.\n\nIt's arguable in my mind whether it is worth documenting all of those\nrules, although I am not opposed to it if somebody wants to do the\nwork. 
But I *am* opposed to documenting that a certain situation is an\nexception to an undocumented rule about an undocumented concept.\nThat's going to create confusion, not dispel it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 27 Mar 2022 11:43:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 11:43 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sat, Mar 26, 2022 at 6:25 PM James Coleman <jtc331@gmail.com> wrote:\n> > I simply do not accept the claim that this is not a reasonable concern\n> > to have nor that this isn't worth documenting.\n>\n> I don't think I said that the concern wasn't reasonable, but I don't\n> think the fact that one person or organization had a concern means\n> that it has to be worth documenting.\n\nYou said \"My first reaction when I read this sentence was that it\nwas warning the user about the absence of a hazard that no one would\nexpect in the first place.\" That seemed to me even stronger than \"not\na reasonable concern\", and so while I agree that one organization\nhaving a concern doesn't mean that it has to be documented, it does\nseem clear to me that one such organization dispels the idea that \"no\none would expect [this]\", which is why I said it in response to that\nstatement.\n\n> And I didn't say either that it's\n> not intrinsically worth documenting. I said it doesn't fit nicely into\n> the documentation we have.\n\nThat was not the critique I took away from your email at all. It is,\nhowever, what Tom noted, and I agree it's a relevant question.\n\n> Since you didn't like my last example, let's try another one. If\n> someone shows up and proposes a documentation patch to explain what a\n> BitmapOr node means, we're probably going to reject it, because it\n> makes no sense to document that one node and not all the others. 
That\n> doesn't mean that people shouldn't want to know what BitmapOr means,\n> but it's just not sensible to document that one thing in isolation,\n> even if somebody somewhere happened to be confused by that thing and\n> not any of the other nodes.\n>\n> In the same way, I think you're trying to jam documentation of one\n> particular point into the documentation when there are many other\n> similar points that are not documented, and I think it's very awkward.\n> It looks to me like you want to document that a table scan isn't\n> performed in a certain case when we haven't documented the rule that\n> would cause that table scan to be performed in other cases, or even\n> what a table scan means in this context, or any of the similar things\n> that are equally important, like a table rewrite or an index rebuild,\n> or any of the rules for when those things happen.\n\nIn the ALTER TABLE docs page \"table scan\" is used in the section on\n\"SET NOT NULL\", \"full table scan\" is used in the sections on \"ADD\ntable_constraint_using_index\" and \"ATTACH PARTITION\", and \"table scan\"\nis used again in the \"Note\" section. Table rewrites are similarly\ndiscussed repeatedly on that page. Indeed the docs make a clear effort\nto point out where table scans and table rewrites do and do not occur\n(albeit not in one unified place -- it's scattered through the various\nsubcommands and notes sections.) 
Indeed the Notes section explicitly\nsays \"The main reason for providing the option to specify multiple\nchanges in a single ALTER TABLE is that multiple table scans or\nrewrites can thereby be combined into a single pass over the table.\"\n\nSo I believe it is just factually incorrect to say that \"we haven't\ndocumented...what a table scan means in this context, or any of the\nsimilar things that are equally important, like a table rewrite or an\nindex rebuild, or any of the rules for when those things happen.\"\n\n> It's arguable in my mind whether it is worth documenting all of those\n> rules, although I am not opposed to it if somebody wants to do the\n> work. 
"msg_date": "Sun, 27 Mar 2022 13:00:11 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 10:00 AM James Coleman <jtc331@gmail.com> wrote:\n\n> As shown above, table scans (and specifically table scans used to\n> validate constraints, which is what this patch is about) are clearly\n> documented (more than once!) in the ALTER TABLE documentation. In fact\n> it's documented specifically in reference to SET NOT NULL, which is\n> even more specifically the type of constraint this patch is about.\n>\n> So \"undocumented concept\" is just not accurate, and so I don't see it\n> as a valid reason to reject the patch.\n>\n>\nAs you point out, where these scans are performed is documented. Your\nrequest, though, is to document a location where they are not performed\ninstead of trusting in the absence of a statement meaning that no such scan\nhappens. In this case no such scan of the table is ever needed when adding\na column and so ADD COLUMN doesn't mention table scanning. We almost\nalways choose not to document those things which do not happen. I don't\nalways agree with this position but it is valid and largely adhered to. On\nthat documentation theory/policy basis alone this patch can be rejected.\n0001 as proposed is especially strong in violating this principle.\n\nMy two thoughts from yesterday take slightly different approaches to try\nand mitigate the same misunderstanding while also providing useful guidance\nto the reader to make sure the hazard of ALTER COLUMN SET NOT NULL is\nsomething they are thinking about even when adding a new column since\nforgetting to incorporate the NOT NULL during the add can be a costly\nmistake. The tweaking the notes section seems to be the more productive of\nthe two approaches.\n\nDavid J.\n\nOn Sun, Mar 27, 2022 at 10:00 AM James Coleman <jtc331@gmail.com> wrote:As shown above, table scans (and specifically table scans used to\nvalidate constraints, which is what this patch is about) are clearly\ndocumented (more than once!) in the ALTER TABLE documentation. 
In fact\nit's documented specifically in reference to SET NOT NULL, which is\neven more specifically the type of constraint this patch is about.\n\nSo \"undocumented concept\" is just not accurate, and so I don't see it\nas a valid reason to reject the patch.As you point out, where these scans are performed is documented. Your request, though, is to document a location where they are not performed instead of trusting in the absence of a statement meaning that no such scan happens. In this case no such scan of the table is ever needed when adding a column and so ADD COLUMN doesn't mention table scanning. We almost always choose not to document those things which do not happen. I don't always agree with this position but it is valid and largely adhered to. On that documentation theory/policy basis alone this patch can be rejected. 0001 as proposed is especially strong in violating this principle.My two thoughts from yesterday take slightly different approaches to try and mitigate the same misunderstanding while also providing useful guidance to the reader to make sure the hazard of ALTER COLUMN SET NOT NULL is something they are thinking about even when adding a new column since forgetting to incorporate the NOT NULL during the add can be a costly mistake. The tweaking the notes section seems to be the more productive of the two approaches.David J.",
"msg_date": "Sun, 27 Mar 2022 10:46:19 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 1:46 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sun, Mar 27, 2022 at 10:00 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> As shown above, table scans (and specifically table scans used to\n>> validate constraints, which is what this patch is about) are clearly\n>> documented (more than once!) in the ALTER TABLE documentation. In fact\n>> it's documented specifically in reference to SET NOT NULL, which is\n>> even more specifically the type of constraint this patch is about.\n>>\n>> So \"undocumented concept\" is just not accurate, and so I don't see it\n>> as a valid reason to reject the patch.\n>>\n>\n> As you point out, where these scans are performed is documented. Your request, though, is to document a location where they are not performed instead of trusting in the absence of a statement meaning that no such scan happens. In this case no such scan of the table is ever needed when adding a column and so ADD COLUMN doesn't mention table scanning. We almost always choose not to document those things which do not happen. I don't always agree with this position but it is valid and largely adhered to. On that documentation theory/policy basis alone this patch can be rejected. 0001 as proposed is especially strong in violating this principle.\n\nHmm, I didn't realize that was project policy, but I'm a bit\nsurprised given that the sentence which 0001 replaces seems like a\ndirect violation of that also: \"In neither case is a rewrite of the\ntable required.\"\n\n> My two thoughts from yesterday take slightly different approaches to try and mitigate the same misunderstanding while also providing useful guidance to the reader to make sure the hazard of ALTER COLUMN SET NOT NULL is something they are thinking about even when adding a new column since forgetting to incorporate the NOT NULL during the add can be a costly mistake. 
The tweaking the notes section seems to be the more productive of the two approaches.\n\nYes, I like those suggestions. I've attached an updated patch that I\nthink fits a good bit more naturally into the Notes section\nspecifically addressing scans and rewrites on NOT NULL constraints.\n\nThanks for the feedback,\nJames Coleman",
"msg_date": "Sun, 27 Mar 2022 14:17:40 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 11:17 AM James Coleman <jtc331@gmail.com> wrote:\n\n> Hmm, I didn't realize that was project policy,\n\n\nGuideline/Rule of Thumb is probably a better concept.\n\n\n> but I'm a bit\n> surprised given that the sentence which 0001 replaces seems like a\n> direct violation of that also: \"In neither case is a rewrite of the\n> table required.\"\n>\n>\nIMO mostly due to the structuring of the paragraphs; something like the\nfollowing makes it less problematic (and as shown below may be\nsufficient to address the purpose of this patch)\n\n\"\"\"\n[...]\nThe following alterations of the table require the entire table, and/or its\nindexes, to be rewritten; which may take a significant amount of time for a\nlarge table, and will temporarily require as much as double the disk space.\n\nChanging the type of an existing column will require the entire table and\nits indexes to be rewritten. As an exception, if the USING clause does not\nchange the column contents and the old type is either binary coercible to\nthe new type, or an unconstrained domain over the new type, a table rewrite\nis not needed; but any indexes on the affected columns must still be\nrewritten.\n\nAdding a column with a volatile DEFAULT also requires the entire table and\nits indexes to be rewritten.\n\nThe reason a non-volatile (or absent) DEFAULT does not require a rewrite of\nthe table is because the DEFAULT expression (or NULL) is evaluated at the\ntime of the statement and the result is stored in the table's metadata.\n\nThe following alterations of the table require that it be scanned in its\nentirety to ensure that no existing values are contrary to the new\nconstraints placed on the table. Constraints backed by indexes will scan\nthe table as a side-effect of populating the new index with data.\n\nAdding a CHECK constraint requires scanning the table to verify that\nexisting rows meet the constraint. 
The same goes for adding a NOT NULL\nconstraint to an existing column.\n\nA newly attached partition requires scanning the table to verify that\nexisting rows meet the partition constraint.\n\nA foreign key constraint requires scanning the table to verify that all\nexisting values exist on the referenced table.\n\nThe main reason for providing the option to specify multiple changes in a\nsingle ALTER TABLE is that multiple table scans or rewrites can thereby be\ncombined into a single pass over the table.\n\nScanning a large table to verify a new constraint can take a long time, and\nother updates to the table are locked out until the ALTER TABLE ADD\nCONSTRAINT command is committed. For CHECK and FOREIGN KEY constraints\nthere is an option, NOT VALID, that reduces the impact of adding a\nconstraint on concurrent updates. With NOT VALID, the ADD CONSTRAINT\ncommand does not scan the table and can be committed immediately. After\nthat, a VALIDATE CONSTRAINT command can be issued to verify that existing\nrows satisfy the constraint. The validation step does not need to lock out\nconcurrent updates, since it knows that other transactions will be\nenforcing the constraint for rows that they insert or update; only\npre-existing rows need to be checked. Hence, validation acquires only a\nSHARE UPDATE EXCLUSIVE lock on the table being altered. (If the constraint\nis a foreign key then a ROW SHARE lock is also required on the table\nreferenced by the constraint.) In addition to improving concurrency, it can\nbe useful to use NOT VALID and VALIDATE CONSTRAINT in cases where the table\nis known to contain pre-existing violations. Once the constraint is in\nplace, no new violations can be inserted, and the existing problems can be\ncorrected at leisure until VALIDATE CONSTRAINT finally succeeds.\n\nThe DROP COLUMN form does not physically remove the column, but simply\nmakes it invisible to SQL operations. 
Subsequent insert and update\noperations in the table will store a null value for the column. Thus,\ndropping a column is quick but it will not immediately reduce the on-disk\nsize of your table, as the space occupied by the dropped column is not\nreclaimed. The space will be reclaimed over time as existing rows are\nupdated.\n\nTo force immediate reclamation of space occupied by a dropped column, you\ncan execute one of the forms of ALTER TABLE that performs a rewrite of the\nwhole table. This results in reconstructing each row with the dropped\ncolumn replaced by a null value.\n\nThe rewriting forms of ALTER TABLE are not MVCC-safe. After a table\nrewrite, the table will appear empty to concurrent transactions, if they\nare using a snapshot taken before the rewrite occurred. See Section 13.5\nfor more details.\n[...]\n\"\"\"\n\nI'm liking the idea of breaking out multiple features into their own\nsentences or paragraphs instead of saying:\n\n\"Adding a column with a volatile DEFAULT or changing the type of an\nexisting column\"\n\n\"Adding a CHECK or NOT NULL constraint\"\n\nThis latter combination probably doesn't catch my attention except for this\ndiscussion and the fact that there are multiple ways to add these\nconstraints and we might as well be clear about whether ALTER COLUMN or ADD\nCOLUMN makes a difference. On that note, the behavior implied by this\nwording is that adding a check constraint even during ADD COLUMN will\nresult in scanning the table even when a table rewrite is not required. If\nthat is the case at present nothing actually says that - if one agrees that\nthe exact same sentence doesn't imply that a table scan is performed when\nadding a NOT NULL constraint during ADD COLUMN (which doesn't happen).\nThat seems like enough material to extract out from the ALTER TABLE page\nand stick elsewhere if one is so motivated. 
There may be other stuff too -\nbut the next paragraph covers some SET DATA TYPE nuances which seem like a\ndifferent dynamic.\n\nDavid J.",
"msg_date": "Sun, 27 Mar 2022 20:12:16 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
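The optimization named in the thread's subject can also be observed directly in the system catalog. A minimal sketch, assuming a table named `orders` (the name is hypothetical, not from the thread):

```sql
-- Columns added via the no-rewrite fast path have atthasmissing = true,
-- with the pre-evaluated default stored in attmissingval (PostgreSQL 11+).
SELECT attname, atthasmissing, attmissingval
FROM pg_attribute
WHERE attrelid = 'orders'::regclass
  AND attnum > 0
  AND NOT attisdropped;
```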
{
"msg_contents": "On Sun, Mar 27, 2022 at 1:00 PM James Coleman <jtc331@gmail.com> wrote:\n> So \"undocumented concept\" is just not accurate, and so I don't see it\n> as a valid reason to reject the patch.\n\nI mean, I think it's pretty accurate. The fact that you can point to a\nfew uses of the terms \"table rewrite\" and \"table scan\" in the ALTER\nTABLE documentation doesn't prove that those terms are defined there\nor systematically discussed and it seems pretty clear to me that they\nare not. And I don't even know what we're arguing about here, because\nelsewhere in the same email you agree that it is reasonable to\ncritique the patch on the basis of how well it fits into the\ndocumentation and at least for me that is precisely this issue.\n\nI think the bottom line here is that you're not prepared to accept as\nvalid any opinion to the effect that we shouldn't commit these\npatches. But that remains my opinion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Mar 2022 09:29:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification\n table scan"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 9:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Mar 27, 2022 at 1:00 PM James Coleman <jtc331@gmail.com> wrote:\n> > So \"undocumented concept\" is just not accurate, and so I don't see it\n> > as a valid reason to reject the patch.\n>\n> I mean, I think it's pretty accurate. The fact that you can point to a\n> few uses of the terms \"table rewrite\" and \"table scan\" in the ALTER\n> TABLE documentation doesn't prove that those terms are defined there\n> or systematically discussed and it seems pretty clear to me that they\n> are not. And I don't even know what we're arguing about here, because\n> elsewhere in the same email you agree that it is reasonable to\n> critique the patch on the basis of how well it fits into the\n> documentation and at least for me that is precisely this issue.\n>\n> I think the bottom line here is that you're not prepared to accept as\n> valid any opinion to the effect that we shouldn't commit these\n> patches. But that remains my opinion.\n\nNo, I've appreciated constructive feedback from both Tom and David on\nthis thread. Your original email was so incredibly strongly worded\n(and contained no constructive recommendations about a better path\nforward, unlike Tom's and David's replies), and I had a hard time\nunderstanding what could possibly have made you that irritated with a\nproposal to document how to avoid long-running table scans while\nholding an exclusive lock.\n\nThe two patches you reviewed aren't the current state of this\nproposal; I'll continue working on revising to reviewers replies, and\nas either a replacement or follow-on for this I like Tom's idea of\nhaving a comprehensive guide (which I think has been needed for quite\nsome time).\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Mon, 28 Mar 2022 09:54:35 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification table scan"
},
{
"msg_contents": "On Mon, Mar 28, 2022 at 9:54 AM James Coleman <jtc331@gmail.com> wrote:\n> No, I've appreciated constructive feedback from both Tom and David on\n> this thread. Your original email was so incredibly strongly worded\n> (and contained no constructive recommendations about a better path\n> forward, unlike Tom's and David's replies), and I had a hard time\n> understanding what could possibly have made you that irritated with a\n> proposal to document how to avoid long-running table scans while\n> holding an exclusive lock.\n\nI don't think I was particularly irritated then, but I admit I'm\ngetting irritated now. I clearly said that the documentation wasn't\nperfect but that I didn't think these patches made it better, and I\nexplained why in some detail. It's not like I said \"you suck and I\nhate you and please go die in a fire\" or something like that. So why\nis that \"incredibly strongly worded\"? Especially when both David and\nTom agreed with my recommendation that we reject these patches as\nproposed?\n\nThere are probably patches in this CommitFest that have gotten no\nreview from anyone, but it's pretty hard to find them, because the\nCommitFest is full of patches like this one, which have been reviewed\nfairly extensively yet which, for one reason or another, don't seem\nlikely to go anywhere any time soon. I think that's a much bigger\nproblem for the project than the lack of documentation on this\nparticular issue. Of course, you will likely disagree.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 28 Mar 2022 11:44:44 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification table scan"
},
{
"msg_contents": "On Sun, Mar 27, 2022 at 11:12 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sun, Mar 27, 2022 at 11:17 AM James Coleman <jtc331@gmail.com> wrote:\n>>\n>> Hmm, I didn't realize that was project policy,\n>\n>\n> Guideline/Rule of Thumb is probably a better concept.\n\nAh, OK, thanks.\n\n>>\n>> but I'm a bit\n>> surprised given that the sentence which 0001 replaces seems like a\n>> direct violation of that also: \"In neither case is a rewrite of the\n>> table required.\"\n>>\n>\n> IMO mostly due to the structuring of the paragraphs; something like the following makes it less problematic (and as shown below may be sufficient to address the purpose of this patch)\n>\n> \"\"\"\n> [...]\n> The following alterations of the table require the entire table, and/or its indexes, to be rewritten; which may take a significant amount of time for a large table, and will temporarily require as much as double the disk space.\n>\n> Changing the type of an existing column will require the entire table and its indexes to be rewritten. As an exception, if the USING clause does not change the column contents and the old type is either binary coercible to the new type, or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rewritten.\n>\n> Adding a column with a volatile DEFAULT also requires the entire table and its indexes to be rewritten.\n>\n> The reason a non-volatile (or absent) DEFAULT does not require a rewrite of the table is because the DEFAULT expression (or NULL) is evaluated at the time of the statement and the result is stored in the table's metadata.\n>\n> The following alterations of the table require that it be scanned in its entirety to ensure that no existing values are contrary to the new constraints placed on the table. 
Constraints backed by indexes will scan the table as a side-effect of populating the new index with data.\n>\n> Adding a CHECK constraint requires scanning the table to verify that existing rows meet the constraint. The same goes for adding a NOT NULL constraint to an existing column.\n>\n> A newly attached partition requires scanning the table to verify that existing rows meet the partition constraint.\n>\n> A foreign key constraint requires scanning the table to verify that all existing values exist on the referenced table.\n>\n> The main reason for providing the option to specify multiple changes in a single ALTER TABLE is that multiple table scans or rewrites can thereby be combined into a single pass over the table.\n>\n> Scanning a large table to verify a new constraint can take a long time, and other updates to the table are locked out until the ALTER TABLE ADD CONSTRAINT command is committed. For CHECK and FOREIGN KEY constraints there is an option, NOT VALID, that reduces the impact of adding a constraint on concurrent updates. With NOT VALID, the ADD CONSTRAINT command does not scan the table and can be committed immediately. After that, a VALIDATE CONSTRAINT command can be issued to verify that existing rows satisfy the constraint. The validation step does not need to lock out concurrent updates, since it knows that other transactions will be enforcing the constraint for rows that they insert or update; only pre-existing rows need to be checked. Hence, validation acquires only a SHARE UPDATE EXCLUSIVE lock on the table being altered. (If the constraint is a foreign key then a ROW SHARE lock is also required on the table referenced by the constraint.) In addition to improving concurrency, it can be useful to use NOT VALID and VALIDATE CONSTRAINT in cases where the table is known to contain pre-existing violations. 
Once the constraint is in place, no new violations can be inserted, and the existing problems can be corrected at leisure until VALIDATE CONSTRAINT finally succeeds.\n>\n> The DROP COLUMN form does not physically remove the column, but simply makes it invisible to SQL operations. Subsequent insert and update operations in the table will store a null value for the column. Thus, dropping a column is quick but it will not immediately reduce the on-disk size of your table, as the space occupied by the dropped column is not reclaimed. The space will be reclaimed over time as existing rows are updated.\n>\n> To force immediate reclamation of space occupied by a dropped column, you can execute one of the forms of ALTER TABLE that performs a rewrite of the whole table. This results in reconstructing each row with the dropped column replaced by a null value.\n>\n> The rewriting forms of ALTER TABLE are not MVCC-safe. After a table rewrite, the table will appear empty to concurrent transactions, if they are using a snapshot taken before the rewrite occurred. See Section 13.5 for more details.\n> [...]\n> \"\"\"\n>\n> I'm liking the idea of breaking out multiple features into their own sentences or paragraphs instead of saying:\n>\n> \"Adding a column with a volatile DEFAULT or changing the type of an existing column\"\n>\n> \"Adding a CHECK or NOT NULL constraint\"\n>\n> This later combination probably doesn't catch my attention except for this discussion and the fact that there are multiple ways to add these constraints and we might as well be clear about whether ALTER COLUMN or ADD COLUMN makes a difference. On that note, the behavior implied by this wording is that adding a check constraint even during ADD COLUMN will result in scanning the table even when a table rewrite is not required. 
If that is the case at present nothing actually says that - if one agrees that the exact same sentence doesn't imply that a table scan is performed when adding a NOT NULL constraint during ADD COLUMN (which doesn't happen).\n> That seems like enough material to extract out from the ALTER TABLE page and stick elsewhere if one is so motivated. There may be other stuff too - but the next paragraph covers some SET DATA TYPE nuances which seem like a different dynamic.\n\nComing back to this with fresh eyes in the morning and comparing your\nidea above to the existing doc page, and I really like this approach.\nI'll be marking this specific patch as withdrawn and opening a new\npatch for the restructuring.\n\nI also noticed an error in the existing docs (we no longer need to\nrebuild indexes when a table rewrite is skipped), and I'll be sending\na separate patch to fix that separately.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Tue, 29 Mar 2022 09:53:30 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document atthasmissing default optimization avoids verification table scan"
}
]
[
{
"msg_contents": "Hi,\n\nI'm trying to set up a postgres server with version 11 in targeted\nrecovery mode (for the first time after my journey started with\npostgres) and I came across the explanation at [1] in PG 12 and newer\nversions that we have a clear differentiation as to what is the\n\"standby\" mode or \"targeted recovery\" mode. How do we differentiate\nthese two modes in PG 11? Can anyone please help me with it?\n\nPS: hackers-list may not be the right place to ask, but I'm used to\nseeking help from it.\n\n[1] From the https://www.postgresql.org/docs/12/runtime-config-wal.html:\n\n\"19.5.4. Archive Recovery\n\nThis section describes the settings that apply only for the duration\nof the recovery. They must be reset for any subsequent recovery you\nwish to perform.\n\n“Recovery” covers using the server as a standby or for executing a\ntargeted recovery. Typically, standby mode would be used to provide\nhigh availability and/or read scalability, whereas a targeted recovery\nis used to recover from data loss.\n\nTo start the server in standby mode, create a file called\nstandby.signal in the data directory. The server will enter recovery\nand will not stop recovery when the end of archived WAL is reached,\nbut will keep trying to continue recovery by connecting to the sending\nserver as specified by the primary_conninfo setting and/or by fetching\nnew WAL segments using restore_command. For this mode, the parameters\nfrom this section and Section 19.6.3 are of interest. Parameters from\nSection 19.5.5 will also be applied but are typically not useful in\nthis mode.\n\nTo start the server in targeted recovery mode, create a file called\nrecovery.signal in the data directory. If both standby.signal and\nrecovery.signal files are created, standby mode takes precedence.\nTargeted recovery mode ends when the archived WAL is fully replayed,\nor when recovery_target is reached. 
In this mode, the parameters from\nboth this section and Section 19.5.5 will be used.\"\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 24 Sep 2021 22:16:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "how to distinguish between using the server as a standby or for executing a targeted recovery in PG 11?"
},
{
"msg_contents": "On Fri, Sep 24, 2021, at 1:46 PM, Bharath Rupireddy wrote:\n> I'm trying to set up a postgres server with version 11 in targeted\n> recovery mode (for the first time after my journey started with\n> postgres) and I came across the explanation at [1] in PG 12 and newer\n> versions that we have a clear differentiation as to what is the\n> \"standby\" mode or \"targeted recovery\" mode. How do we differentiate\n> these two modes in PG 11? Can anyone please help me with it?\nIt seems you have to rely on parsing recovery.conf. However, someone can modify\nit after starting Postgres. In this case, you have to use a debugger such as\n\ngdb /path/to/postgres -p $(pgrep -f 'postgres: startup recovering') -quiet -batch -ex 'p StandbyMode' -ex 'quit'\n\nUnfortunately, there is no simple way such as checking if a .signal file exists.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 27 Sep 2021 17:49:08 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: how to distinguish between using the server as a standby or for executing a targeted recovery in PG 11?"
},
{
"msg_contents": "\nOn 9/24/21 12:46 PM, Bharath Rupireddy wrote:\n> Hi,\n>\n> I'm trying to set up a postgres server with version 11 in targeted\n> recovery mode (for the first time after my journey started with\n> postgres) and I came across the explanation at [1] in PG 12 and newer\n> versions that we have a clear differentiation as to what is the\n> \"standby\" mode or \"targeted recovery\" mode. How do we differentiate\n> these two modes in PG 11? Can anyone please help me with it?\n>\n> PS: hackers-list may not be the right place to ask, but I'm used to\n> seeking help from it.\n>\n\n\nsee <https://www.postgresql.org/docs/11/recovery-target-settings.html>\nand <https://www.postgresql.org/docs/11/standby-settings.html>\n\n\n(And yes, pgsql-general would be the right forum)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 10:09:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: how to distinguish between using the server as a standby or for executing a targeted recovery in PG 11?"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 7:39 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 9/24/21 12:46 PM, Bharath Rupireddy wrote:\n> > Hi,\n> >\n> > I'm trying to set up a postgres server with version 11 in targeted\n> > recovery mode (for the first time after my journey started with\n> > postgres) and I came across the explanation at [1] in PG 12 and newer\n> > versions that we have a clear differentiation as to what is the\n> > \"standby\" mode or \"targeted recovery\" mode. How do we differentiate\n> > these two modes in PG 11? Can anyone please help me with it?\n> >\n> > PS: hackers-list may not be the right place to ask, but I'm used to\n> > seeking help from it.\n> >\n>\n>\n> see <https://www.postgresql.org/docs/11/recovery-target-settings.html>\n> and <https://www.postgresql.org/docs/11/standby-settings.html>\n\nThanks! It looks like the 'standby_mode = off' in the recovery.conf\nwith a 'recovery_target' makes the server to be in \"targeted recovery\"\nmode.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 09:18:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: how to distinguish between using the server as a standby or for executing a targeted recovery in PG 11?"
}
]
[
{
"msg_contents": "A compilation of fixes for master.\n\nThe first patch should be applied to v13 - the typo was already fixed in master\nbut not backpatched.",
"msg_date": "Fri, 24 Sep 2021 16:58:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "typos"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 04:58:27PM -0500, Justin Pryzby wrote:\n> A compilation of fixes for master.\n\nThanks Michael for applying fixes to user-facing docs (I hadn't realized that\nthe 2nd one needed to be backpatched).\n\nThis fixes a file I failed to include in the \"recheck\" patch and more typos\nfor extended stats (+Tomas).\n\n+Andres (Jit), +Zhihong (file header comments).",
"msg_date": "Sun, 26 Sep 2021 12:01:17 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Sun, Sep 26, 2021 at 12:01:17PM -0500, Justin Pryzby wrote:\n> Thanks Michael for applying fixes to user-facing docs (I hadn't realized that\n> the 2nd one needed to be backpatched).\n\nYes, thanks for compiling all these. The two changes committed were\nthe only user-visible changes, which is why I have hastened this part\nto include those fixes. The rest could just go on HEAD.\n--\nMichael",
"msg_date": "Mon, 27 Sep 2021 09:24:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 09:24:27AM +0900, Michael Paquier wrote:\n> Yes, thanks for compiling all these. The two changes committed were\n> the only user-visible changes, which is why I have hastened this part\n> to include those fixes. The rest could just go on HEAD.\n\nI have looked at the full set, and applied 0003, 0006, 0009, 0010 and\n0011. 0001 has been discussed separately, and I am really not sure if\nthat's worth bothering. 0002 may actually break some code? I have\nlet 0004 and 0005 alone. 0007 could be related to the discussion\nwhere we could just remove all those IDENTIFICATION fields. The use\nof \"statistic\", \"statistics\" and \"statistics object\" in 0008 and 0012\nis indeed inconsistent. The latter term is the most used, but it\nsounds a bit weird to me even if it refers to the DDL object\nmanipulated. Simply using \"statistics\" would be tempting.\n--\nMichael",
"msg_date": "Mon, 27 Sep 2021 14:23:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On 2021-Sep-27, Michael Paquier wrote:\n\n> The use\n> of \"statistic\", \"statistics\" and \"statistics object\" in 0008 and 0012\n> is indeed inconsistent. The latter term is the most used, but it\n> sounds a bit weird to me even if it refers to the DDL object\n> manipulated. Simply using \"statistics\" would be tempting.\n\nInitially we just used \"statistic\" as a noun, which IIRC was already\ngrammatically wrong (but I didn't know that and I think Tomas didn't\neither); later at some point when discussing how to use that noun in\nplural we realized this and argued that merely using \"statistics\" was\neven more wrong. It was then that we started using the term \"statistics\nobject\" with plural \"statistics objects\". Going back to using just\n\"statistics\" is unlikely to have become correct; I think Justin's\npatches 0008 and 0012 are correct.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Mon, 27 Sep 2021 18:04:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 06:04:02PM -0300, Alvaro Herrera wrote:\n> Initially we just used \"statistic\" as a noun, which IIRC was already\n> grammatically wrong (but I didn't know that and I think Tomas didn't\n> either); later at some point when discussing how to use that noun in\n> plural we realized this and argued that merely using \"statistics\" was\n> even more wrong. It was then that we started using the term \"statistics\n> object\" with plural \"statistics objects\". Going back to using just\n> \"statistics\" is unlikely to have become correct; I think Justin's\n> patches 0008 and 0012 are correct.\n\nThanks for confirming.\n\n if (list_length(pstate->p_rtable) != 1)\n ereport(ERROR,\n (errcode(ERRCODE_INVALID_COLUMN_REFERENCE),\n- errmsg(\"statistics expressions can refer only to the table being indexed\")));\n+ errmsg(\"statistics expressions can refer only to the table being referenced\")));\nThis part should be backpatched? The code claims that this should\nbe dead code so an elog() would be more adapted, and the same can be\nsaid about transformRuleStmt() and transformIndexStmt(), no? That\nwould be less messages to translate. \n--\nMichael",
"msg_date": "Tue, 28 Sep 2021 08:53:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 06:04:02PM -0300, Alvaro Herrera wrote:\n> On 2021-Sep-27, Michael Paquier wrote:\n> \n> > The use\n> > of \"statistic\", \"statistics\" and \"statistics object\" in 0008 and 0012\n> > is indeed inconsistent. The latter term is the most used, but it\n> > sounds a bit weird to me even if it refers to the DDL object\n> > manipulated. Simply using \"statistics\" would be tempting.\n> \n> Initially we just used \"statistic\" as a noun, which IIRC was already\n> grammatically wrong (but I didn't know that and I think Tomas didn't\n> either); later at some point when discussing how to use that noun in\n> plural we realized this and argued that merely using \"statistics\" was\n> even more wrong. It was then that we started using the term \"statistics\n> object\" with plural \"statistics objects\". Going back to using just\n> \"statistics\" is unlikely to have become correct; I think Justin's\n> patches 0008 and 0012 are correct.\n\nAttached is an updated patch fixing more of the same.\n\n-- \nJustin",
"msg_date": "Mon, 27 Sep 2021 19:50:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 07:50:02PM -0500, Justin Pryzby wrote:\n> Attached is an updated patch fixing more of the same.\n\nDoes this include everything you have spotted, as well as everything\nfrom the previous patches 0008 and 0012 posted?\n--\nMichael",
"msg_date": "Tue, 28 Sep 2021 11:15:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 11:15:39AM +0900, Michael Paquier wrote:\n> On Mon, Sep 27, 2021 at 07:50:02PM -0500, Justin Pryzby wrote:\n> > Attached is an updated patch fixing more of the same.\n> \n> Does this include everything you have spotted, as well as everything\n> from the previous patches 0008 and 0012 posted?\n\nThat's an \"expanded\" version of 0008.\n\nIt doesn't include 0012, which is primarily about fixing incorrect references\nto \"index expressions\" that should refer to stats expressions. Naturally 0012\nalso uses the phrase \"statistics objects\", and fixes one nearby reference\nthat's not itself about indexes, which could arguably be in 0008 instead..\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 27 Sep 2021 21:27:56 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos (and more)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 09:27:56PM -0500, Justin Pryzby wrote:\n> That's an \"expanded\" version of 0008.\n\nOkay, thanks.\n\n> It doesn't include 0012, which is primarily about fixing incorrect references\n> to \"index expressions\" that should refer to stats expressions. Naturally 0012\n> also uses the phrase \"statistics objects\", and fixes one nearby reference\n> that's not itself about indexes, which could arguably be in 0008 instead..\n\nMerging both made the most sense to me after reviewing the whole area\nof the code dedicated to stats. This has been applied after taking\ncare of some issues with the indentation, with few extra tweaks.\n--\nMichael",
"msg_date": "Wed, 29 Sep 2021 16:26:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos (and more)"
}
] |
[
{
"msg_contents": "Hackers,\n\nA few TAP tests in the project appear to be sensitive to reductions of the PostgresNode's max_wal_size setting, resulting in tests failing due to wal files having been removed too soon. The failures in the logs typically are of the \"requested WAL segment %s has already been removed\" variety. I would expect tests which fail under legal alternate GUC settings to be hardened to explicitly set the GUCs as they need, rather than implicitly relying on the defaults. As far as missing WAL files go, I would expect the TAP test to prevent this with the use of replication slots or some other mechanism, and not simply to rely on checkpoints not happening too soon. I'm curious if others on this list disagree with that point of view.\n\nFailures in src/test/recovery/t/015_promotion_pages.pl can be fixed by creating a physical replication slot on node \"alpha\" and using it from node \"beta\", a technique already used in other TAP tests and apparently merely overlooked in this one.\n\nThe first two tests in src/bin/pg_basebackup/t fail, and it's not clear that physical replication slots are the appropriate solution, since no replication is happening. It's not immediately obvious that the tests are at fault anyway. On casual inspection, it seems they might be detecting a live bug which simply doesn't manifest under larger values of max_wal_size. Test 010 appears to show a bug with `pg_basebackup -X`, and test 020 with `pg_receivewal`.\n\nThe test in contrib/bloom/t/ is deliberately disabled in contrib/bloom/Makefile with a comment that the test is unstable in the buildfarm, but I didn't find anything to explain what exactly those buildfarm failures might have been when I chased down the email thread that gave rise to the related commit. 
That test happens to be stable on my laptop until I change GUC settings to both reduce max_wal_size=32MB and to set wal_consistency_checking=all.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 24 Sep 2021 17:33:13 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 05:33:13PM -0700, Mark Dilger wrote:\n> A few TAP tests in the project appear to be sensitive to reductions of the\n> PostgresNode's max_wal_size setting, resulting in tests failing due to wal\n> files having been removed too soon. The failures in the logs typically are\n> of the \"requested WAL segment %s has already been removed\" variety. I would\n> expect tests which fail under legal alternate GUC settings to be hardened to\n> explicitly set the GUCs as they need, rather than implicitly relying on the\n> defaults.\n\nThat is not the general practice in PostgreSQL tests today. The buildfarm\nexercises some settings, so we keep the tests clean for those. Coping with\nmax_wal_size=2 that way sounds reasonable. I'm undecided about the value of\nhardening tests against all possible settings. On the plus side, it would let\nus run one buildfarm member that sets every setting to its min_val or\nenumvals[1] and another member that elects enumvals[cardinality(enumvals)] or\nmax_val. We'd catch some real bugs that way. On the minus side, every\nnontrivial test suite addition would need to try those two cases before commit\nor risk buildfarm wrath. I don't know whether the bugs found would pay for\nthat trouble. (There's also a less-important loss around the ability to\nexercise a setting and manually inspect the results. For example, I sometimes\ntest parallel_tuple_cost=0 parallel_setup_cost=0 and confirm a lack of\ncrashes. After hardening, that would require temporary source code edits to\nremove the hardening. That's fine, though.)\n\n\n",
"msg_date": "Fri, 24 Sep 2021 22:21:51 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 24, 2021, at 10:21 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n>> I would\n>> expect tests which fail under legal alternate GUC settings to be hardened to\n>> explicitly set the GUCs as they need, rather than implicitly relying on the\n>> defaults.\n> \n> That is not the general practice in PostgreSQL tests today. The buildfarm\n> exercises some settings, so we keep the tests clean for those. Coping with\n> max_wal_size=2 that way sounds reasonable. I'm undecided about the value of\n> hardening tests against all possible settings.\n\nLeaving the tests brittle wastes developer time.\n\nI ran into this problem when I changed the storage underlying bloom indexes and ran the contrib/bloom/t/001_wal.pl test with wal_consistency_checking=all. That caused the test to fail with errors about missing wal files, and it took time to backtrack and see that the test fails under this setting even before applying my storage layer changes. Ordinarily, failures about missing wal files would have led me to suspect the TAP test sooner, but since I had mucked around with storage and wal it initially seemed plausible that my code changes were the problem. The real problem is that a replication slot is not used in the test.\n\nThe failure in src/test/recovery/t/015_promotion_pages.pl is also that a replication slot should be used but is not.\n\nThe failure in src/bin/pg_basebackup/t/010_pg_basebackup.pl stems from not heeding the documented requirement for pg_basebackup -X fetch that the wal_keep_size \"be set high enough that the required log data is not removed before the end of the backup\". It's just assuming that it will be, because that tends to be true under default GUC settings. I think this can be fixed by setting wal_keep_size=<SOMETHING_BIG_ENOUGH>, but (a) you say this is not the general practice in PostgreSQL tests today, and (b) there doesn't seem to be any principled way to decide what value would be big enough. 
Sure, we can use something that is big enough in practice, and we'll probably have to go with that, but it feels like we're just papering over the problem.\n\nI'm inclined to guess that the problem in src/bin/pg_basebackup/t/020_pg_receivewal.pl is similar.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 25 Sep 2021 07:12:08 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Sep 24, 2021, at 10:21 PM, Noah Misch <noah@leadboat.com> wrote:\n>>> I would\n>>> expect tests which fail under legal alternate GUC settings to be hardened to\n>>> explicitly set the GUCs as they need, rather than implicitly relying on the\n>>> defaults.\n\n>> That is not the general practice in PostgreSQL tests today. The buildfarm\n>> exercises some settings, so we keep the tests clean for those. Coping with\n>> max_wal_size=2 that way sounds reasonable. I'm undecided about the value of\n>> hardening tests against all possible settings.\n\n> Leaving the tests brittle wastes developer time.\n\nTrying to make them proof against all possible settings would waste\na lot more time, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 10:17:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 25, 2021, at 7:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>> Leaving the tests brittle wastes developer time.\n> \n> Trying to make them proof against all possible settings would waste\n> a lot more time, though.\n\nYou may be right, but the conversation about \"all possible settings\" was started by Noah. I was really just talking about tests that depend on wal files not being removed, but taking no action to guarantee that, merely trusting that under default settings they won't be. I can't square that design against other TAP tests that do take measures to prevent wal files being removed. Why is the precaution taken in some tests but not others? If this is intentional, shouldn't some comment in the tests without such precautions explain that choice? Are they intentionally testing that the default GUC wal size settings and wal verbosity won't break the test?\n\nThis isn't a rhetorical question:\n\nIn src/test/recovery/t/015_promotion_pages.pl, the comments talk about the how checkpoints impact what happens on the standby. The test issues an explicit checkpoint on the primary, and again later on the standby, so it is unclear if that's what the comments refer to, or if they also refer to implicit expectations about when/if other checkpoints will happen. The test breaks when I change the GUC settings, but I can fix that breakage by adding a replication slot to the test. Have I broken the purpose of the test by doing so, though? Does using a replication slot to force the wal to not be removed early break what the test is designed to check?\n\nThe other tests raise similar questions. Is the brittleness intentional?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 25 Sep 2021 08:20:06 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> You may be right, but the conversation about \"all possible settings\" was started by Noah. I was really just talking about tests that depend on wal files not being removed, but taking no action to guarantee that, merely trusting that under default settings they won't be. I can't square that design against other TAP tests that do take measures to prevent wal files being removed. Why is the precaution taken in some tests but not others?\n\nIf we are doing something about that in some test cases, I'd agree with\ndoing the same thing in others that need it. It seems more likely to\nbe an oversight than an intentional test condition.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 11:43:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 08:20:06AM -0700, Mark Dilger wrote:\n> > On Sep 25, 2021, at 7:17 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Leaving the tests brittle wastes developer time.\n> > \n> > Trying to make them proof against all possible settings would waste\n> > a lot more time, though.\n> \n> You may be right, but the conversation about \"all possible settings\" was\n> started by Noah.\n\nYou wrote, \"I would expect tests which fail under legal alternate GUC settings\nto be hardened to explicitly set the GUCs as they need, rather than implicitly\nrelying on the defaults.\" I read that as raising the general principle, not\njust a narrow argument about max_wal_size. We can discontinue talking about\nthe general principle and focus on max_wal_size.\n\n> I was really just talking about tests that depend on wal\n> files not being removed, but taking no action to guarantee that, merely\n> trusting that under default settings they won't be.\n\nAs I said, +1 for making tests pass under the min_val of max_wal_size. If you\nwant to introduce a max_wal_size=2 buildfarm member so it stays that way, +1\nfor that as well.\n\n\n",
"msg_date": "Sat, 25 Sep 2021 09:00:39 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> On Sat, Sep 25, 2021 at 08:20:06AM -0700, Mark Dilger wrote:\n>> You may be right, but the conversation about \"all possible settings\" was\n>> started by Noah.\n\n> You wrote, \"I would expect tests which fail under legal alternate GUC settings\n> to be hardened to explicitly set the GUCs as they need, rather than implicitly\n> relying on the defaults.\" I read that as raising the general principle, not\n> just a narrow argument about max_wal_size.\n\nAs did I.\n\n> We can discontinue talking about\n> the general principle and focus on max_wal_size.\n\nIt is worth stopping to think about whether there are adjacent settings\nthat need similar treatment.\n\nIn general, it seems like \"premature discarding of WAL segments\" is\nsomething akin to \"premature timeout\" errors, and we've got a pretty\naggressive policy about preventing those. There are a lot of settings\nthat I'd *not* be in favor of trying to be bulletproof about, because\nit doesn't seem worth the trouble; but perhaps this one is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 12:20:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 25, 2021, at 9:00 AM, Noah Misch <noah@leadboat.com> wrote:\n> \n>> You may be right, but the conversation about \"all possible settings\" was\n>> started by Noah.\n> \n> You wrote, \"I would expect tests which fail under legal alternate GUC settings\n> to be hardened to explicitly set the GUCs as they need, rather than implicitly\n> relying on the defaults.\" I read that as raising the general principle, not\n> just a narrow argument about max_wal_size.\n\nIn the first draft of my email to Tom, I had language about my inartful crafting of my original post that led Noah to respond as he did.... I couldn't quite figure out how to phrase that without distracting from the main point. I don't think you were (much) offended, but my apologies for any perceived fingerpointing.\n\nI also don't have a problem with your idea of testing in the build farm with some animals having the gucs set to minimum values and some to maximum and so forth. I like that idea generally, though don't feel competent to predict how much work that would be to maintain, so I'm just deferring to Tom's and your judgement about that.\n\nMy inartful first post was really meant to say, \"here is a problem that I perceive about tap tests vis-a-vis wal files, do people disagree with me that this is a problem, and would patches to address the problem be welcome?\" I took Tom's response to be, \"yeah, go ahead\", and am mostly waiting for the weekend to be over to see if anybody else has anything to say about it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 25 Sep 2021 11:04:29 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "> On Sep 25, 2021, at 11:04 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> I took Tom's response to be, \"yeah, go ahead\", and am mostly waiting for the weekend to be over to see if anybody else has anything to say about it.\n\nHere is a patch set, one patch per test. The third patch enables its test in the Makefile, which is commented as having been disabled due to the test being unstable in the build farm. Re-enabling the test might be wrong, since the instability might not have been due to WAL being recycled early. I didn't find enough history about why that test was disabled, so trying it in the build farm again is the best I can suggest. Maybe somebody else on this list knows more?\n\n\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 27 Sep 2021 12:58:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Here is a patch set, one patch per test. The third patch enables its test in the Makefile, which is commented as having been disabled due to the test being unstable in the build farm. Re-enabling the test might be wrong, since the instability might not have been due to WAL being recycled early. I didn't find enough history about why that test was disabled, so trying it in the build farm again is the best I can suggest. Maybe somebody else on this list knows more?\n\nDigging in the archives where the commit points, I find\n\nhttps://www.postgresql.org/message-id/flat/20181126025125.GH1776%40paquier.xyz\n\nwhich says there was an unexpected result on my animal longfin.\nI tried the same thing (i.e., re-enable bloom's TAP test) on my laptop\njust now, and it passed fine. The laptop is not exactly the same\nas longfin was in 2018, but it ought to be close enough. Not sure\nwhat to make of that --- maybe the failure is only intermittent,\nor else we fixed the underlying issue since then.\n\nI'm a little inclined to re-enable the test without your other\nchanges, just to see what happens.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Sep 2021 16:19:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 27, 2021, at 1:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I'm a little inclined to re-enable the test without your other\n> changes, just to see what happens.\n\nThat sounds like a good idea. Even if it passes at first, I'd prefer to leave it for a week or more to have a better sense of how stable it is. Applying my patches too soon would just add more variables to a not-so-well understood situation.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 27 Sep 2021 13:21:36 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Sep 27, 2021, at 1:19 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm a little inclined to re-enable the test without your other\n>> changes, just to see what happens.\n\n> That sounds like a good idea. Even if it passes at first, I'd prefer to leave it for a week or more to have a better sense of how stable it is. Applying my patches too soon would just add more variables to a not-so-well understood situation.\n\nDone. I shall retire to a safe viewing distance and observe the\nbuildfarm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Sep 2021 18:49:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 04:19:27PM -0400, Tom Lane wrote:\n> I tried the same thing (i.e., re-enable bloom's TAP test) on my laptop\n> just now, and it passed fine. The laptop is not exactly the same\n> as longfin was in 2018, but it ought to be close enough. Not sure\n> what to make of that --- maybe the failure is only intermittent,\n> or else we fixed the underlying issue since then.\n\nHonestly, I have no idea what change in the backend matters here. And\nit is not like bloom has changed in any significant way since d3c09b9.\n--\nMichael",
"msg_date": "Tue, 28 Sep 2021 08:43:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Sep 27, 2021 at 04:19:27PM -0400, Tom Lane wrote:\n>> I tried the same thing (i.e., re-enable bloom's TAP test) on my laptop\n>> just now, and it passed fine. The laptop is not exactly the same\n>> as longfin was in 2018, but it ought to be close enough. Not sure\n>> what to make of that --- maybe the failure is only intermittent,\n>> or else we fixed the underlying issue since then.\n\n> Honestly, I have no idea what change in the backend matters here. And\n> it is not like bloom has changed in any significant way since d3c09b9.\n\nI went so far as to check out 03faa4a8dd on longfin's host, and I find\nthat I cannot reproduce the failure shown at\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-11-25+23%3A59%3A03\n\nSo that's the same hardware, and identical PG source tree, and different\nresults. This seems to leave only two theories standing:\n\n1. It was a since-fixed macOS bug. (Unlikely, especially if we also saw\nit on other platforms.)\n\n2. The failure manifested only in the buildfarm, not under manual \"make\ncheck\". This is somewhat more plausible, especially since subsequent\nbuildfarm script changes might then explain why it went away. But I have\nno idea what the \"subsequent script changes\" might've been.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Sep 2021 22:20:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\nOn 9/27/21 10:20 PM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Mon, Sep 27, 2021 at 04:19:27PM -0400, Tom Lane wrote:\n>>> I tried the same thing (i.e., re-enable bloom's TAP test) on my laptop\n>>> just now, and it passed fine. The laptop is not exactly the same\n>>> as longfin was in 2018, but it ought to be close enough. Not sure\n>>> what to make of that --- maybe the failure is only intermittent,\n>>> or else we fixed the underlying issue since then.\n>> Honestly, I have no idea what change in the backend matters here. And\n>> it is not like bloom has changed in any significant way since d3c09b9.\n> I went so far as to check out 03faa4a8dd on longfin's host, and I find\n> that I cannot reproduce the failure shown at\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2018-11-25+23%3A59%3A03\n>\n> So that's the same hardware, and identical PG source tree, and different\n> results. This seems to leave only two theories standing:\n>\n> 1. It was a since-fixed macOS bug. (Unlikely, especially if we also saw\n> it on other platforms.)\n>\n> 2. The failure manifested only in the buildfarm, not under manual \"make\n> check\". This is somewhat more plausible, especially since subsequent\n> buildfarm script changes might then explain why it went away. But I have\n> no idea what the \"subsequent script changes\" might've been.\n>\n> \t\t\t\n\n\nNothing I can think of.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 10:01:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "I wrote:\n> So that's the same hardware, and identical PG source tree, and different\n> results. This seems to leave only two theories standing:\n\nI forgot theory 3: it's intermittent. Apparently the probability has\ndropped a lot since 2018, but behold:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=flaviventris&dt=2021-09-28%2014%3A20%3A41\n\n(with successful runs just before and after this one, on the same\nanimal)\n\nNote that the delta is not exactly like the previous result, either.\nSo there's more than one symptom, but in any case it seems like\nwe have an issue in WAL replay. I wonder whether it's bloom's fault\nor a core bug.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 13:27:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 28, 2021, at 10:27 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wonder whether it's bloom's fault\n> or a core bug.\n\nLooking closer at the TAP test, it's not ORDERing the result set from the SELECTs on either node, but it is comparing the sets for stringwise equality, which is certainly order dependent.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 11:07:54 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "I wrote:\n> So there's more than one symptom, but in any case it seems like\n> we have an issue in WAL replay. I wonder whether it's bloom's fault\n> or a core bug.\n\nActually ... I bet it's just the test script's fault. It waits for the\nstandby to catch up like this:\n\n\tmy $caughtup_query =\n\t \"SELECT pg_current_wal_lsn() <= write_lsn FROM pg_stat_replication WHERE application_name = '$applname';\";\n\t$node_primary->poll_query_until('postgres', $caughtup_query)\n\t or die \"Timed out while waiting for standby 1 to catch up\";\n\nwhich seems like completely the wrong condition. Don't we need the\nstandby to have *replayed* the WAL, not merely written it to disk?\n\nI'm also wondering why this doesn't use wait_for_catchup, instead\nof reinventing the query to use.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 14:11:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Looking closer at the TAP test, it's not ORDERing the result set from the SELECTs on either node, but it is comparing the sets for stringwise equality, which is certainly order dependent.\n\nWell, it's forcing a bitmap scan, so what we're getting is the native\nordering of a bitmapscan result. That should match, given that what\nwe're doing is physical replication. I think adding ORDER BY would\nbe more likely to obscure real issues than hide test instability.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 14:20:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Sep 28, 2021, at 11:07 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Looking closer at the TAP test, it's not ORDERing the result set from the SELECTs on either node, but it is comparing the sets for stringwise equality, which is certainly order dependent.\n\nTaking the output from the buildfarm page, parsing out the first test's results and comparing got vs. expected for this test:\n\nis($primary_result, $standby_result, \"$test_name: query result matches\");\n\nthe primary result had all the same rows as the standby, along with additional rows. Comparing the results, they match other than rows missing from the standby that are present on the primary. That seems consistent with the view that the query on the standby is running before all the data has replicated across.\n\nHowever, the missing rows all have column i either 0 or 3, though the test round-robins i=0..9 as it performs the inserts. I would expect the wal for the inserts to not cluster around any particular value of i. The DELETE and VACUUM commands do operate on a single value of i, so that would make sense if the data failed to be deleted on the standby after successfully being deleted on the primary, but then I'd expect the standby to have more rows, not fewer.\n\nPerhaps having the bloom index messed up answers that, though. I think it should be easy enough to get the path to the heap main table fork and the bloom main index fork for both the primary and standby and do a filesystem comparison as part of the wal test. That would tell us if they differ, and also if the differences are limited to just one or the other.\n\nI'll go write that up....\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 11:43:30 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> Perhaps having the bloom index messed up answers that, though. I think it should be easy enough to get the path to the heap main table fork and the bloom main index fork for both the primary and standby and do a filesystem comparison as part of the wal test. That would tell us if they differ, and also if the differences are limited to just one or the other.\n\nI think that's probably overkill, and definitely out-of-scope for\ncontrib/bloom. If we fear that WAL replay is not reproducing the data\naccurately, we should be testing for that in some more centralized place.\n\nAnyway, I confirmed my diagnosis by adding a delay in WAL apply\n(0001 below); that makes this test fall over spectacularly.\nAnd 0002 fixes it. So I propose to push 0002 as soon as the\nv14 release freeze ends.\n\nShould we back-patch 0002? I'm inclined to think so. Should\nwe then also back-patch enablement of the bloom test? Less\nsure about that, but I'd lean to doing so. A test that appears\nto be there but isn't actually invoked is pretty misleading.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 28 Sep 2021 15:00:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 03:00:13PM -0400, Tom Lane wrote:\n> Should we back-patch 0002? I'm inclined to think so. Should\n> we then also back-patch enablement of the bloom test? Less\n> sure about that, but I'd lean to doing so. A test that appears\n> to be there but isn't actually invoked is pretty misleading.\n\nA backpatch sounds adapted to me for both patches. The only risk that\nI could see here is somebody implementing a new test by copy-pasting\nthis one if we were to keep things as they are on stable branches.\n--\nMichael",
"msg_date": "Wed, 29 Sep 2021 12:15:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "On 9/28/21, 8:17 PM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> On Tue, Sep 28, 2021 at 03:00:13PM -0400, Tom Lane wrote:\r\n>> Should we back-patch 0002? I'm inclined to think so. Should\r\n>> we then also back-patch enablement of the bloom test? Less\r\n>> sure about that, but I'd lean to doing so. A test that appears\r\n>> to be there but isn't actually invoked is pretty misleading.\r\n>\r\n> A backpatch sounds adapted to me for both patches. The only risk that\r\n> I could see here is somebody implementing a new test by copy-pasting\r\n> this one if we were to keep things as they are on stable branches.\r\n\r\nI found this thread via the Commitfest entry\r\n(https://commitfest.postgresql.org/35/3333/), and I also see that the\r\nfollowing patches have been committed:\r\n\r\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7d1aa6b\r\n https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6bc6bd4\r\n\r\nHowever, it looks like there are a couple of other patches upthread\r\n[0] that attempt to ensure the tests pass for different settings of\r\nmax_wal_size. Do we intend to proceed with those, or should we just\r\nclose out the Commmitfest entry?\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/C1D227C2-C271-4310-8C85-C5368C298622%40enterprisedb.com\r\n\r\n",
"msg_date": "Thu, 21 Oct 2021 22:23:20 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
},
{
"msg_contents": "\n\n> On Oct 21, 2021, at 3:23 PM, Bossart, Nathan <bossartn@amazon.com> wrote:\n> \n> Do we intend to proceed with those, or should we just\n> close out the Commmitfest entry?\n\nI have withdrawn the patch. The issues were intermittent on the buildfarm, and committing other changes along with what Tom already committed would seem to confuse matters if any new issues were to arise. We can come back to this sometime in the future, if need be.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 3 Nov 2021 16:56:35 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing WAL instability in various TAP tests"
}
] |
[
{
"msg_contents": "Hi, \n\nWhen I work on patch[1] on windows(vs2019), I found there are some file(generated\nby vs2019) are not listed in gitignore.\n\n> $ git status\n> \n> Untracked files:\n> (use \"git add <file>...\" to include in what will be committed)\n> .vs/\n> postgres.vcxproj.user\n> src/tools/msvc/buildvs.bat\n> src/tools/msvc/installvs.bat\n\nCan we add these file to gitignore?\n- *vcproj.user\n- *vcxproj.user\n- /.vs/\n\n[1] https://www.postgresql.org/message-id/flat/OSBPR01MB4214FA221FFE046F11F2AD74F2D49@OSBPR01MB4214.jpnprd01.prod.outlook.com\n\nRegards.\nShenhao Wang\n\n\n",
"msg_date": "Sun, 26 Sep 2021 08:57:55 +0000",
"msg_from": "\"wangsh.fnst@fujitsu.com\" <wangsh.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "can we add some file(msvc) to gitignore"
},
{
"msg_contents": "> On 26 Sep 2021, at 10:57, wangsh.fnst@fujitsu.com wrote:\n\n> Can we add these file to gitignore?\n\nAs postgres isn't mandating a specific IDE or dev environment, we typically\ndon't add these files to the .gitignore we ship. If we did it would be an\nenormous list we'd have to curate and maintain. Instead, everyone hacking on\npostgres can add these to their local gitignore with the core.excludesfile Git\nconfig.\n\nNow, it is true that there are some MSVC specific files in the .gitignore\nalready, but past discussion on this have leaned towards removing those (which\nI personally support) rather than adding new ones.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 26 Sep 2021 12:18:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: can we add some file(msvc) to gitignore"
},
{
"msg_contents": "On Sun, Sep 26, 2021 at 12:18:01PM +0200, Daniel Gustafsson wrote:\n> As postgres isn't mandating a specific IDE or dev environment, we typically\n> don't add these files to the .gitignore we ship. If we did it would be an\n> enormous list we'd have to curate and maintain. Instead, everyone hacking on\n> postgres can add these to their local gitignore with the core.excludesfile Git\n> config.\n\nYeah. This is an issue for many things. For example, under emacs or\nvim, we'd still track backup files for unsaved changes.\n\n> Now, it is true that there are some MSVC specific files in the .gitignore\n> already, but past discussion on this have leaned towards removing those (which\n> I personally support) rather than adding new ones.\n\nAgreed. I don't think that we should remove the entries that track\nfiles we could expect based on the state of the build code, though,\nlike config.pl or buildenv.pl in src/tools/msvc/ as committing those\ncould silently break builds.\n--\nMichael",
"msg_date": "Mon, 27 Sep 2021 09:34:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: can we add some file(msvc) to gitignore"
},
{
"msg_contents": "> On 27 Sep 2021, at 02:34, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I don't think that we should remove the entries that track\n> files we could expect based on the state of the build code, though,\n> like config.pl or buildenv.pl in src/tools/msvc/ as committing those\n> could silently break builds.\n\nAgreed, those clearly belong in the .gitignore. The ones I was looking at were\n*.vcproj and *.vcxproj in the root .gitignore, but I don't know the MSVC build\nwell enough to know if those make sense or not.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 27 Sep 2021 13:50:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: can we add some file(msvc) to gitignore"
}
] |
[
{
"msg_contents": "While thinking about Isaac Morland's patch to add abs(interval),\nI happened to notice that interval_cmp_value() seems rather\ninefficently written: it's expending an int64 division -- or\neven two of them, if the compiler's not very smart -- to split\nup the \"time\" field into days and microseconds. That's quite\npointless, since we're immediately going to recombine the results\ninto microseconds. Integer divisions are pretty expensive, too,\non a lot of hardware.\n\nI suppose this is a hangover from when the code supported float\nas well as int64 time fields; but since that's long gone, I see\nno reason not to do the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 26 Sep 2021 13:01:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Useless division(s) in interval_cmp_value"
}
] |
[
{
"msg_contents": "Are there any plans to add a create and last updated time stamp field to any and all objects in postgres?\nPossibly even adding a updated_by documenting which role created and last updated the object.\nAll done natively and without the need for extra extensions.\nThanks in advance.\nAre there any plans to add a create and last updated time stamp field to any and all objects in postgres?Possibly even adding a updated_by documenting which role created and last updated the object.All done natively and without the need for extra extensions.Thanks in advance.",
"msg_date": "Sun, 26 Sep 2021 20:11:07 +0000 (UTC)",
"msg_from": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Add create and update timestamp to all objects"
},
{
"msg_contents": "On Sun, Sep 26, 2021 at 1:11 PM Efrain J. Berdecia <ejberdecia@yahoo.com>\nwrote:\n\n> Are there any plans to add a create and last updated time stamp field to\n> any and all objects in postgres?\n>\n> Possibly even adding a updated_by documenting which role created and last\n> updated the object.\n>\n> All done natively and without the need for extra extensions.\n>\n> Thanks in advance.\n>\n\nWhy would you need that? Each timestamp[tz] value is 8 bytes.\n\nAnd setting up values via defaults and triggers is straightforward, isn't\nit?\n\nOn Sun, Sep 26, 2021 at 1:11 PM Efrain J. Berdecia <ejberdecia@yahoo.com> wrote:Are there any plans to add a create and last updated time stamp field to any and all objects in postgres?Possibly even adding a updated_by documenting which role created and last updated the object.All done natively and without the need for extra extensions.Thanks in advance.Why would you need that? Each timestamp[tz] value is 8 bytes.And setting up values via defaults and triggers is straightforward, isn't it?",
"msg_date": "Sun, 26 Sep 2021 13:39:48 -0700",
"msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add create and update timestamp to all objects"
},
{
"msg_contents": "\"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n> Are there any plans to add a create and last updated time stamp field to any and all objects in postgres?\n\nNo. This has been proposed and rejected (more than once).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Sep 2021 18:28:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add create and update timestamp to all objects"
},
{
"msg_contents": "On 09/26/21 18:28, Tom Lane wrote:\n> \"Efrain J. Berdecia\" <ejberdecia@yahoo.com> writes:\n>> Are there any plans to add a create and last updated time stamp field to any and all objects in postgres?\n> \n> No.\n\nThat said, if you'd be satisfied with a create OR last updated time,\nthere should already be a txid there.\n\nIf you have track_commit_timestamp turned on, there's your time.\n\nIf it's not turned on, but you want to know the time, there are ways.[0]\n\nRegards,\n-Chap\n\n\n[0]https://stackoverflow.com/a/61788447/4062350\n\n\n",
"msg_date": "Sun, 26 Sep 2021 19:03:30 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Add create and update timestamp to all objects"
}
] |
[
{
"msg_contents": "Hell Hackers, long time no email!\n\nI got a bug report for the semver extension:\n\n https://github.com/theory/pg-semver/issues/58\n\nIt claims that a test unexpected passes. That is, Test #31 is expected to fail, because it intentionally tests a version in which its parts overflow the int32[3] they’re stored in, with the expectation that one day we can refactor the type to handle larger version parts.\n\nI can’t imagine there would be any circumstance under which int32 would somehow be larger than a signed 32-bit integer, but perhaps there is?\n\nScroll to the bottom of these pages to see the unexpected passes on i386 and armhf:\n\n https://ci.debian.net/data/autopkgtest/unstable/i386/p/postgresql-semver/15208658/log.gz\n https://ci.debian.net/data/autopkgtest/unstable/armhf/p/postgresql-semver/15208657/log.gz\n\nHere’s the Postgres build output for those two platforms, as well, though nothing jumps out at me:\n\n https://buildd.debian.org/status/fetch.php?pkg=postgresql-13&arch=i386&ver=13.4-3&stamp=1630408269&raw=0\n https://buildd.debian.org/status/fetch.php?pkg=postgresql-13&arch=armhf&ver=13.4-3&stamp=1630412028&raw=0\n\n\nThanks,\n\nDavid",
"msg_date": "Sun, 26 Sep 2021 17:32:11 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "When is int32 not an int32?"
},
{
"msg_contents": "\"David E. Wheeler\" <david@justatheory.com> writes:\n> It claims that a test unexpected passes. That is, Test #31 is expected to fail, because it intentionally tests a version in which its parts overflow the int32[3] they’re stored in, with the expectation that one day we can refactor the type to handle larger version parts.\n\n> I can’t imagine there would be any circumstance under which int32 would somehow be larger than a signed 32-bit integer, but perhaps there is?\n\nI'd bet more along the lines of \"your overflow check is less portable than\nyou thought\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Sep 2021 18:31:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When is int32 not an int32?"
},
{
"msg_contents": "On Sun, Sep 26, 2021 at 05:32:11PM -0400, David E. Wheeler wrote:\n> Hell Hackers, long time no email!\n> \n> I got a bug report for the semver extension:\n> \n> https://github.com/theory/pg-semver/issues/58\n> \n> It claims that a test unexpected passes. That is, Test #31 is expected to fail, because it intentionally tests a version in which its parts overflow the int32[3] they’re stored in, with the expectation that one day we can refactor the type to handle larger version parts.\n> \n> I can’t imagine there would be any circumstance under which int32 would somehow be larger than a signed 32-bit integer, but perhaps there is?\n> \n> Scroll to the bottom of these pages to see the unexpected passes on i386 and armhf:\n> \n> https://ci.debian.net/data/autopkgtest/unstable/i386/p/postgresql-semver/15208658/log.gz\n> https://ci.debian.net/data/autopkgtest/unstable/armhf/p/postgresql-semver/15208657/log.gz\n> \n> Here’s the Postgres build output for those two platforms, as well, though nothing jumps out at me:\n> \n> https://buildd.debian.org/status/fetch.php?pkg=postgresql-13&arch=i386&ver=13.4-3&stamp=1630408269&raw=0\n> https://buildd.debian.org/status/fetch.php?pkg=postgresql-13&arch=armhf&ver=13.4-3&stamp=1630412028&raw=0\n\nI noticed that in i386, configure finds none of (int8, uint8, int64,\nuint64), and I wonder whether we're actually testing whatever\nalternative we provide when we don't have them.\n\nI also noticed that the first of the long sequences of 9s doesn't even\nfit inside a uint64. 
The other two fit inside an int64, so if\npromotion were somehow happening, that wouldn't be a great test.\n\n99999999999999999999999.999999999999999999.99999999999999999\n over 2^72 over 2^59 over 2^56\n\nThese two observations taken together, get me to my first guess is\nthat the machinery we provide when we see non-working 64-bit integers\nis totally broken.\n\nIf that's right, we should at least discuss reversing our claim that\nwe support such systems, seeing as it doesn't appear that people will\nbe deploying new versions of PostgreSQL on them.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sun, 26 Sep 2021 22:36:00 +0000",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: When is int32 not an int32?"
},
{
"msg_contents": "On Sep 26, 2021, at 18:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'd bet more along the lines of \"your overflow check is less portable than\n> you thought”.\n\nOh well now that you mention it and I look past things, I see we’re using INT_MAX, but should probably use INT32_MAX.\n\n https://github.com/theory/pg-semver/blob/87cc30cbe80aa3992a4af6f19a35a9441111a86c/src/semver.c#L145-L149\n\nAnd also that the destination value we’re storing it in is an int parts[], not int32 parts[]. Which we do so we can parse numbers up to int size. But to Fetter’s point, we’re not properly handling something greater than int (usually int64, presumably). Not sure what changes are required to improve memory safety over and above using INT32_MAX instead of INT_MAX.\n\nThanks,\n\nDaavid",
"msg_date": "Sun, 26 Sep 2021 19:06:45 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "Re: When is int32 not an int32?"
},
{
"msg_contents": "\"David E. Wheeler\" <david@justatheory.com> writes:\n> On Sep 26, 2021, at 18:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd bet more along the lines of \"your overflow check is less portable than\n>> you thought”.\n\n> Oh well now that you mention it and I look past things, I see we’re using INT_MAX, but should probably use INT32_MAX.\n\nMore to the point, you should be checking whether strtol reports overflow.\nHaving now seen your code, I'll opine that the failing platforms have\n32-bit long.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 26 Sep 2021 19:25:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: When is int32 not an int32?"
},
{
"msg_contents": "On Sep 26, 2021, at 19:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> More to the point, you should be checking whether strtol reports overflow.\n> Having now seen your code, I'll opine that the failing platforms have\n> 32-bit long.\n\nThanks for the pointer, Tom. I believe this fixes that particular issue.\n\n https://github.com/theory/pg-semver/commit/4d79dcc\n\nBest,\n\nDavid",
"msg_date": "Sun, 26 Sep 2021 22:38:46 -0400",
"msg_from": "\"David E. Wheeler\" <david@justatheory.com>",
"msg_from_op": true,
"msg_subject": "Re: When is int32 not an int32?"
}
] |
[
{
"msg_contents": "Hi,\n\nI found other functions that we should add \"pg_catalog\" prefix in\ndescribe.c. This fix is similar to the following commit.\n\n=====\ncommit 359bcf775550aa577c86ea30a6d071487fcca1ed\nAuthor: Alvaro Herrera <alvherre@alvh.no-ip.org>\nDate: Sat Aug 28 12:04:15 2021 -0400\n\n psql \\dX: reference regclass with \"pg_catalog.\" prefix\n=====\n\nPlease find attached the patch.\n\nRegards,\nTatsuro Yamada",
"msg_date": "Mon, 27 Sep 2021 09:13:19 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Add prefix pg_catalog to pg_get_statisticsobjdef_columns() in\n describe.c (\\dX)"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 2:14 AM Tatsuro Yamada <\ntatsuro.yamada.tf@nttcom.co.jp> wrote:\n\n> Hi,\n>\n> I found other functions that we should add \"pg_catalog\" prefix in\n> describe.c. This fix is similar to the following commit.\n>\n\nHi!\n\nYup, that's correct. Applied and backpatched to 14 (but it won't be in the\n14.0 release).\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Mon, Sep 27, 2021 at 2:14 AM Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote:Hi,\n\nI found other functions that we should add \"pg_catalog\" prefix in\ndescribe.c. This fix is similar to the following commit.Hi!Yup, that's correct. Applied and backpatched to 14 (but it won't be in the 14.0 release). -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 28 Sep 2021 16:25:11 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Add prefix pg_catalog to pg_get_statisticsobjdef_columns() in\n describe.c (\\dX)"
},
{
"msg_contents": "Hi Magnus!\n\n\n> I found other functions that we should add \"pg_catalog\" prefix in\n> describe.c. This fix is similar to the following commit.\n> \n> Yup, that's correct. Applied and backpatched to 14 (but it won't be in the 14.0 release).\n\n\nThank you!\n\n\nRegards,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Wed, 29 Sep 2021 08:54:18 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Add prefix pg_catalog to pg_get_statisticsobjdef_columns() in\n describe.c (\\dX)"
},
{
"msg_contents": "https://www.postgresql.org/message-id/flat/7ad8cd13-db5b-5cf6-8561-dccad1a934cb%40nttcom.co.jp\nhttps://www.postgresql.org/message-id/flat/20210827193151.GN26465%40telsasoft.com\n\nOn Sat, Aug 28, 2021 at 08:57:32AM -0400, �lvaro Herrera wrote:\n> On 2021-Aug-27, Justin Pryzby wrote:\n> > I noticed that for \\dP+ since 1c5d9270e, regclass is written without\n> > \"pg_catalog.\" (Alvaro and I failed to notice it in 421a2c483, too).\n> \n> Oops, will fix shortly.\n\nOn Tue, Sep 28, 2021 at 04:25:11PM +0200, Magnus Hagander wrote:\n> On Mon, Sep 27, 2021 at 2:14 AM Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp> wrote:\n> > I found other functions that we should add \"pg_catalog\" prefix in\n> > describe.c. This fix is similar to the following commit.\n> \n> Yup, that's correct. Applied and backpatched to 14 (but it won't be in the\n> 14.0 release).\n\nThose two issues were fixed in 1f092a309 and 07f8a9e784.\nBut, we missed two casts to ::text which don't use pg_catalog.\nEvidently the cast is to allow stable sorting.\n\nI improved on my previous hueristic to look for these ; this finds the two\nmissing schema qualifiers:\n\n> time grep \"$(sed -r \"/.*pg_catalog\\\\.([_[:alpha:]]+).*/! d; s//\\\\1/; /^(char|oid)$/d; s/.*/[^. ']\\\\\\\\<&\\\\\\\\>/\" src/bin/psql/describe.c |sort -u)\" src/bin/psql/describe.c\n\nWhile looking at that, I wondered why describeOneTableDetails lists stats\nobjects in order of OID ? Dating back to 7b504eb28, and, before that,\n554A73A6.2060603@2ndquadrant.com. It should probably order by nsp, stxname.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 20:22:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "Hi Justin,\n\nOn 2022/01/07 11:22, Justin Pryzby wrote:\n> But, we missed two casts to ::text which don't use pg_catalog.\n> Evidently the cast is to allow stable sorting.\n\nAh, you are right.\n\nWe should prefix them with pg_catalog as well.\nAre you planning to make a patch?\nIf not, I'll make a patch later since that's where \\dX is.\n\nRegards,\nTatsuro Yamada\n\n\n\n\n",
"msg_date": "Fri, 07 Jan 2022 18:30:30 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 06:30:30PM +0900, Tatsuro Yamada wrote:\n> We should prefix them with pg_catalog as well.\n> Are you planning to make a patch?\n> If not, I'll make a patch later since that's where \\dX is.\n\nIf any of you can make a patch, that would be great. Thanks!\n--\nMichael",
"msg_date": "Fri, 7 Jan 2022 20:08:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 08:08:57PM +0900, Michael Paquier wrote:\n> On Fri, Jan 07, 2022 at 06:30:30PM +0900, Tatsuro Yamada wrote:\n> > We should prefix them with pg_catalog as well.\n> > Are you planning to make a patch?\n> > If not, I'll make a patch later since that's where \\dX is.\n> \n> If any of you can make a patch, that would be great. Thanks!\n\nI'd propose these.",
"msg_date": "Fri, 7 Jan 2022 15:56:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 03:56:20PM -0600, Justin Pryzby wrote:\n> I'd propose these.\n\nApplied and backpatched down to 14. One of the aliases is present in\n13~, but I have let that alone. The detection regex posted upthread\nis kind of cool. \n--\nMichael",
"msg_date": "Sat, 8 Jan 2022 16:47:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "On 2022-Jan-08, Michael Paquier wrote:\n\n> The detection regex posted upthread is kind of cool. \n\nYes, but it's not bulletproof -- it only detects uses of some\nunqualified object name that is also used with qualification. Here it\ndetected \"text\" unqualified, but only because we already had\npg_catalog.text elsewhere. As an exercise, if you revert this commit\nand change one of those \"text\" to \"int\", it's not detected as a problem.\n\nMy point is that it's good to have it, but it would be much better to\nhave something bulletproof, which we could use in an automated check\nsomewhere (next to stuff like perlcritic, perhaps). I don't know what,\nthough.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cuando no hay humildad las personas se degradan\" (A. Christie)\n\n\n",
"msg_date": "Sat, 8 Jan 2022 16:50:19 -0300",
"msg_from": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "=?utf-8?Q?=C3=81lvaro?= Herrera <alvherre@alvh.no-ip.org> writes:\n> My point is that it's good to have it, but it would be much better to\n> have something bulletproof, which we could use in an automated check\n> somewhere (next to stuff like perlcritic, perhaps). I don't know what,\n> though.\n\nMeh ... this is mostly cosmetic these days, so I can't get excited\nabout putting a lot of work into it. We disclaimed search path\nbulletproofness for all these queries a long time ago.\n\nI don't object to fixing it in the name of consistency, but that's\nnot a reason to invest large effort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 08 Jan 2022 15:03:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
},
{
"msg_contents": "Hi justin,\n\nOn 2022/01/08 6:56, Justin Pryzby wrote:\n> On Fri, Jan 07, 2022 at 08:08:57PM +0900, Michael Paquier wrote:\n>> On Fri, Jan 07, 2022 at 06:30:30PM +0900, Tatsuro Yamada wrote:\n>>> We should prefix them with pg_catalog as well.\n>>> Are you planning to make a patch?\n>>> If not, I'll make a patch later since that's where \\dX is.\n>>\n>> If any of you can make a patch, that would be great. Thanks!\n> \n> I'd propose these.\n\n\nThanks!\n\nRegards,\nTatsuro Yamada\n\n\n\n",
"msg_date": "Tue, 11 Jan 2022 10:57:29 +0900",
"msg_from": "Tatsuro Yamada <tatsuro.yamada.tf@nttcom.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: \\dP and \\dX use ::regclass without \"pg_catalog.\""
}
] |
[
{
"msg_contents": "I have been trying to find active interest in a free for all use PostgreSQL extension, complete and available, on the public internet, that will support the following:\n\n####################################################\n# High Precision Numeric and Elementary Functions Support #\n# In PostgreSQL v13 and beyond. #\n# HPPM: High Precision PostgreSQL Mathematics. #\n####################################################\n\n-Integer Z, or Rational Mixed Decimal Q, numbers support in 64 bit PostgreSQL. Via HPZ, and HPQ, original types. In this specification, they are collectively referred to as HPX types.\n\nThere should be no range or multi range types or their supporting functions or special operators included for HPX types, at this point.\n\n-The extension could be based on a library like GMP, written in C, being an appropriate basis to use, for all OS platforms involved. The point being, that there is already some support for this extension, in terms of its logic, publicly available in C that can be apprehended for this extension and its platforms.\n\n-Real numbers are the values of Integer, non-recurring Rational Numbers and recurring, Irrational Numbers.\nRecurring numbers can be appropriately truncated, via a finite Natural precision value, always at least 1, to obtain an approximating value. The approximating value can really be seen as a finite Rational value, possibly with integer or decimal parts, or both together. These numbers may be positive or negative, or zero, scalar values, may be integers, decimals or mixed numbers, and always do exist on the one dimensional number line.\n\n-A defaulting number of significant figures (precision), stored inside each HPX data or type instance. This gets specified within each type variable before its use, or on data at type casting. Or a default precision is submitted instead. 
Precision can be accessed and changed later via precision functions.\n\nPrecision is set at data casting, type declaration, or from the default, and may be altered again later. Precision is always apprehended before external or internal evaluation begins. Precision is used to control numbers, and operations involving them, and the number output, when numeric manipulation happens.\n\nIf an HPX value is data on its own, without a variable or a coded expression, it takes the total default precision, if simply specified alone. If it is inserted into a table column with a different precision, then that precision is applied then. When an HPX calculated value is assigned into an HPX variable, it will try to skip ahead to the assignment variable, and take its precision from the result variable, which can be set up beforehand. If however, an HPX value, in a PostgreSQL code expression is sent straight into a RETURN statement or later, a SELECT statement, for example, then that datum will contain the highest precision value out of any of the previous values in the PostgreSQL expression which lead to it. But before anything is set or specified, a total default precision value of 20 is the beginning point.\n\n#############################################\n# precision(HPZ input, BIGINT input) returns HPZ; #\n# precision(HPQ input, BIGINT input) returns HPQ; #\n# #\n# precision(HPZ input) returns BIGINT; #\n# precision(HPQ input) returns BIGINT; #\n# #\n# expression(HPZ input) returns TEXT; #\n# expression(HPQ input) returns TEXT; #\n#############################################\n\n-HPX values, as PostgreSQL data, can be displayed, but they sit on top of a few other phenomena. Forward and inverse accuracy, withstanding truncation, can be achieved by storing, encapsulating and operating with and normalising the mathematical expression (or just one value, via assignment). 
The expression has one or more links, from value(s) to variable(s) in the expression, via applying of precision adjustment at evaluation time, all internally. This system will uphold any precision, certainly ones within a very large range limit, controlled by the already available type, the BIGINT. It can enumerate digits of a frequency within the range of -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Though naturally evaluation will slow down, or not conclude in useful time frames, before these limits. That phenomenon can be allowed, and left to the context of the programmer to deal with or avoid as they may. They may try to minimise the extent of one internal expression by using re-substitution in the body of originating, PostgreSQL, code.\n\n--------------------------------------------------------------\n--At the point of PostgreSQL code input and execution:\n\nselect pi(1001) as pi;\n\n--Within a table creation command:\n\ncreate table example_table\n(\nid BIGSERIAL PRIMARY KEY,\na HPZ,\nb HPQ(50)\n);\n\nINSERT INTO example_table(a,b) VALUES(0, 0.1);\nINSERT INTO example_table(a,b) VALUES(100,1.1);\nINSERT INTO example_table(a,b) VALUES(200,2.2);\nINSERT INTO example_table(a,b) VALUES(300,3.3);\nINSERT INTO example_table(a,b) VALUES(400,4.4);\nINSERT INTO example_table(a,b) VALUES(500,5.5);\nINSERT INTO example_table(a,b) VALUES(600,6.6);\nINSERT INTO example_table(a,b) VALUES(700,7.7);\nINSERT INTO example_table(a,b) VALUES(800,8.8);\nINSERT INTO example_table(a,b) VALUES(900,9.9);\n\n--Or as variables, in some function:\n\ncreate or replace function example_function()\nreturns void\nlanguage plpgsql\nas\n$$\n\ndeclare\na HPQ;\nb HPQ;\nc HPQ;\n\nbegin\n\nBIGINT p=30;\nprecision(a,p);\nprecision(b,p);\na = 0.1;\nb = 0.1;\nprecision(c,3);\nc=a*b;\nprecision(c,p^2);\nreturn void\nend;\n$$\n--------------------------------------------------------------\n\n-Value assignment to a typed variable by =.\n\n-Operators. 
Base 10 Arithmetic and comparisons support on Base 10 HPZ and HPQ, with casting:\n\n+,-,*,/,%,^,=,!=,<>,>,<,>=,<=, ::\n\nThese include full division and integer only division (from type inference), with no remainder, and a remainder calculating operator. There should be a defaulting ability of values not within these two types to automatically be cast up to HPZ or HPQ, where specified and appropriate in PostgreSQL expressions.\n\n-Reified support with broader syntax and operations within PostgreSQL. Tables and related phenomena, Array types, Indexing, Variables and related phenomena,the Record type,\ndirect compatability with Aggregate and Window functions, and Partitions are all parts of a larger subset that should re-interact with HPZ or HPQ successfully.\n\n-Ease of installation support. Particularly for Windows and Linux. *.exe, *.msi or *.rpm, *.deb, *.bin installers.\nUpon a PostgreSQL standard installation. Installation and Activation instructions included, if unavoidable. The extension should literally just install and be applicable, with no loading command necessary, inside PostgreSQL. 
Every time the database process runs, by default.\n\n#####################################################\n# -Mathematical and Operational functions support: #\n# #\n# cast(HPZ as HPQ) returns HPQ; #\n# cast(HPQ as HPZ) returns HPZ; #\n# cast(TEXT as HPZ) returns HPZ; #\n# cast(TEXT as HPQ) returns HPQ; #\n# cast(HPQ as TEXT) returns TEXT; #\n# cast(HPZ as TEXT) returns TEXT; #\n# #\n# cast(HPZ as SMALLINT) returns SMALLINT; #\n# cast(SMALLINT as HPZ) returns HPZ; #\n# cast(HPZ as INTEGER) returns INTEGER; #\n# cast(INTEGER as HPZ) returns HPZ; #\n# cast(HPZ as BIGINT) returns BIGINT; #\n# cast(BIGINT as HPZ) returns HPZ; #\n# cast(HPQ as REAL) returns REAL; #\n# cast(REAL as HPQ) returns HPQ #\n# cast(DOUBLE PRECISION as HPQ) returns HPQ; #\n# cast(HPQ as DOUBLE PRECISION) returns DOUBLE PRECISION; #\n# cast(HPQ as DECIMAL) returns DECIMAL; #\n# cast(DECIMAL as HPQ) returns HPQ; #\n# cast(HPQ as NUMERIC) returns NUMERIC; #\n# cast(NUMERIC as HPQ) returns HPQ; #\n# #\n# sign(HPQ input) returns HPZ; #\n# abs(HPQ input) returns HPQ; #\n# ceil(HPQ input) returns HPZ; #\n# floor(HPQ input) returns HPZ; #\n# round(HPQ input) returns HPZ; #\n# recip(HPQ input) returns HPQ; #\n# pi(BIGINT precision) returns HPQ; #\n# e(BIGINT precision) returns HPQ; #\n# power(HPQ base, HPQ exponent) returns HPQ; #\n# sqrt(HPQ input) returns HPQ; #\n# nroot(HPZ theroot, HPQ input) returns HPQ; #\n# log10(HPQ input) returns HPQ; #\n# ln(HPQ input) returns HPQ; #\n# log2(HPQ input) returns HPQ; #\n# factorial(HPZ input) returns HPZ; #\n# nCr(HPZ objects, HPZ selectionSize) returns HPZ; #\n# nPr(HPZ objects, HPZ selectionSize) returns HPZ; #\n# #\n# degrees(HPQ input) returns HPQ; #\n# radians(HPQ input) returns HPQ; #\n# sind(HPQ input) returns HPQ; #\n# cosd(HPQ input) returns HPQ; #\n# tand(HPQ input) returns HPQ; #\n# asind(HPQ input) returns HPQ; #\n# acosd(HPQ input) returns HPQ; #\n# atand(HPQ input) returns HPQ; #\n# sinr(HPQ input) returns HPQ; #\n# cosr(HPQ input) returns HPQ; 
#\n# tanr(HPQ input) returns HPQ; #\n# asinr(HPQ input) returns HPQ; #\n# acosr(HPQ input) returns HPQ; #\n# atanr(HPQ input) returns HPQ; #\n# #\n###################################################\n\n-Informative articles on all these things exist at:\n\nComparison Operators: https://en.wikipedia.org/wiki/Relational_operator\nFloor and Ceiling Functions: https://en.wikipedia.org/wiki/Floor_and_ceiling_functions\nArithmetic Operations: https://en.wikipedia.org/wiki/Arithmetic\nInteger Division: https://en.wikipedia.org/wiki/Division_(mathematics)#Of_integers\nModulus Operation: https://en.wikipedia.org/wiki/Modulo_operation\nRounding (Commercial Rounding): https://en.wikipedia.org/wiki/Rounding\nFactorial Operation: https://en.wikipedia.org/wiki/Factorial\nDegrees: https://en.wikipedia.org/wiki/Degree_(angle)\nRadians: https://en.wikipedia.org/wiki/Radian\nElementary Functions: https://en.wikipedia.org/wiki/Elementary_function\n\nThe following chart could be used to help test trigonometry outputs, under\nFurther Consideration of the Unit Circle:\nhttps://courses.lumenlearning.com/boundless-algebra/chapter/trigonometric-functions-and-the-unit-circle/\n\n ############\n # #\n # The End. #\n # #\n ############",
"msg_date": "Mon, 27 Sep 2021 01:36:38 +0000",
"msg_from": "A Z <poweruserm@live.com.au>",
"msg_from_op": true,
"msg_subject": "PostgreSQL High Precision Mathematics Extension."
},
{
"msg_contents": "On 2021-Sep-27, A Z wrote:\n\n> I have been trying to find active interest in a free for all use\n> PostgreSQL extension, complete and available, on the public internet,\n> that will support the following:\n\nYou have posted this question ten times already to the PostgreSQL\nmailing lists. I think it's time for you to stop -- people are starting\nto get annoyed.\n\nhttps://www.postgresql.org/message-id/PSXP216MB0085760D0FCA442A1D4974769AF99%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085E2A574FAE5EE16FE1BE09AFF9%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB008519B96A025725439F41719AC09%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085B1C0B3E10A1CF3BCD1A09ACD9%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB00856A5C2B402E6D646D24609ACD9%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085098A2D76E3C5DD4F8AE99ACF9%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085F21467C36F05AB9427879ADF9%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB008545A90DD2886F4BE21E489AA09%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085ADE313F9F48A134057F39AA19%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\nhttps://www.postgresql.org/message-id/PSXP216MB0085D05D015DE0C46A11BE1F9AA79%40PSXP216MB0085.KORP216.PROD.OUTLOOK.COM\n\nThanks\n\n-- \nÁlvaro Herrera\n\n\n",
"msg_date": "Mon, 27 Sep 2021 10:18:44 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL High Precision Mathematics Extension."
}
] |
[
{
"msg_contents": "Hi,\n\nPlease refer to this scenario\nCASE 1- HEAD (publication)-> HEAD (Subscription)\nCASE 2 - PG 14 (Publication) ->HEAD (Subscription)\n\nTest-case -\nPublication = create table t(n int); create publication p for table t;\nSubscription = create table t(n int);\ncreate subscription s connection 'dbname=postgres host=localhost '\nPUBLICATION p WITH (two_phase=1);\n\nResult-\nCASE 1-\npostgres=# select two_phase from pg_replication_slots where slot_name='s';\n two_phase\n-----------\n t\n(1 row)\n\n\nCASE 2 -\npostgres=# select two_phase from pg_replication_slots where slot_name='s';\n two_phase\n-----------\n f\n(1 row)\n\nso are we silently ignoring this parameter as it is not supported on v14 ?\nand if yes then why not we just throw a message like\nERROR: unrecognized subscription parameter: \\\"two_phase\\\"\n\n--\nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 27 Sep 2021 12:39:51 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "two_phase commit parameter used in subscription for a publication\n which is on < 15."
},
{
"msg_contents": "On Mon, Sep 27, 2021, at 4:09 AM, tushar wrote:\n> so are we silently ignoring this parameter as it is not supported on v14 ?\nYes. Because two_phase is a supported parameter for v15 (your current\nsubscriber). The issue is that this parameter is not forwarded to the publisher\nbecause its version (v14) does not support it. Since we do not have a\nconnection before parse_subscription_options(), publisher server version is\nunknown. Hence, it does not know if that specific parameter is supported on\nthe publisher. I'm not sure it is worth parsing the options again after a\nreplication connection is available just to check those parameters that don't\nwork on all supported server versions.\n\nIMO we can provide messages during the connection (see\nlibpqrcv_startstreaming()) instead of while executing CREATE/ALTER\nSUBSCRIPTION. Something like:\n\n if (options->proto.logical.twophase &&\n PQserverVersion(conn->streamConn) >= 150000)\n appendStringInfoString(&cmd, \", two_phase 'on'\");\n else if (options->proto.logical.twophase)\n ereport(DEBUG1,\n (errmsg_internal(\"parameter \\\"two_phase\\\" is not supported on the publisher\")));\n\nIt is a DEBUG message because it can be annoying when the subscriber cannot\nconnect to the publisher.\n\nThe output plugin also raises an error if the subscriber sends the two_phase\nparameter. See pgoutput_startup(). The subscriber could probably send all\nparameters and the output plugin would be responsible to report an error. I\nthink the author decided to not do it because it is not a user-friendly\napproach.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 27 Sep 2021 15:09:49 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?Q?Re:_two=5Fphase_commit_parameter_used_in_subscription_for_a_pu?=\n =?UTF-8?Q?blication_which_is_on_<_15.?="
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 11:40 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Sep 27, 2021, at 4:09 AM, tushar wrote:\n>\n> so are we silently ignoring this parameter as it is not supported on v14 ?\n>\n> Yes. Because two_phase is a supported parameter for v15 (your current\n> subscriber). The issue is that this parameter are not forwarded to publisher\n> because its version (v14) does not support it. Since we do not have a\n> connection before parse_subscription_options(), publisher server version is\n> unknown. Hence, it does not know if that specific parameter is supported on\n> publisher. I'm not sure it is worth parsing the options again after a\n> replication connection is available just to check those parameters that don't\n> work on all supported server versions.\n>\n> IMO we can provide messages during the connection (see\n> libpqrcv_startstreaming()) instead of while executing CREATE/ALTER\n> SUBSCRIPTION. Something like:\n>\n> if (options->proto.logical.twophase &&\n> PQserverVersion(conn->streamConn) >= 150000)\n> appendStringInfoString(&cmd, \", two_phase 'on'\");\n> else if (options->proto.logical.twophase)\n> ereport(DEBUG1,\n> (errmsg_internal(\"parameter \\\"two_phase\\\" is not supported on the publisher\")));\n>\n> It is a DEBUG message because it can be annoying when the subscriber cannot\n> connect to the publisher.\n>\n> The output plugin also raises an error if the subscriber sends the two_phase\n> parameter. See pgoutput_startup(). The subscriber could probably send all\n> parameters and the output plugin would be responsible to report an error. I\n> think the author decided to not do it because it is not an user-friendly\n> approach.\n>\n\nTrue, and the same behavior was already there for 'binary' and\n'streaming' options. Shall we document this instead of DEBUG message\nor probably along with DEBUG message?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Sep 2021 08:31:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: two_phase commit parameter used in subscription for a publication\n which is on < 15."
}
] |
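The version gating Euler sketches above (only forward `two_phase` to the publisher when `PQserverVersion` reports 15 or later, otherwise drop it quietly) can be tried out in isolation. The following is a hypothetical, self-contained miniature of that check, not PostgreSQL source: the function name, signature, and the `proto_version` prefix are invented for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical miniature of the gating in libpqrcv_startstreaming():
 * the subscriber only appends the two_phase option when the publisher
 * reports a server version that understands it (>= 150000, i.e. v15).
 * Names and signature here are invented for illustration. */
static void
build_start_replication_options(char *buf, size_t buflen,
                                int publisher_version, bool two_phase)
{
    snprintf(buf, buflen, "proto_version '3'");
    if (two_phase && publisher_version >= 150000)
    {
        /* publisher understands the option: send it */
        strncat(buf, ", two_phase 'on'", buflen - strlen(buf) - 1);
    }
    else if (two_phase)
    {
        /* stand-in for the DEBUG1 message proposed upthread */
        fprintf(stderr,
                "parameter \"two_phase\" is not supported on the publisher\n");
    }
}
```

With a v14 publisher the option never reaches the output plugin, which matches the silently-false `two_phase` column tushar observed in `pg_replication_slots`.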
[
{
"msg_contents": "Hi hackers,\n\nI recently observed a failed assertion in EnsurePortalSnapshotExists().\n\nThe steps to reproduce the issue on the master branch are:\n\ncreate table bdt (a int primary key);\ninsert into bdt values (1),(2);\ncreate table bdt2 (a int);\ninsert into bdt2 values (1);\n\nThen launching:\n\nDO $$\nBEGIN\n FOR i IN 1..2 LOOP\n BEGIN\n INSERT INTO bdt (a) VALUES (i);\n exception when unique_violation then update bdt2 set a = i;\n COMMIT;\n END;\n END LOOP;\nEND;\n$$;\n\nWould produce:\n\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost\n\nDue to:\n\n#2 0x0000000000b2ffcb in ExceptionalCondition (conditionName=0xd0d598 \n\"portal->portalSnapshot == NULL\", errorType=0xd0d0b3 \"FailedAssertion\", \nfileName=0xd0d174 \"pquery.c\", lineNumber=1785) at assert.c:69\n#3 0x000000000099e666 in EnsurePortalSnapshotExists () at pquery.c:1785\n\n From what i have seen, we end up having ActiveSnapshot set to NULL in \nAtSubAbort_Snapshot() (while we still have ActivePortal->portalSnapshot \nnot being NULL and not set to NULL later on).\n\nThat leads to ActiveSnapshotSet() not being true in the next call to \nEnsurePortalSnapshotExists() and leads to the failed assertion (checking \nthat ActivePortal->portalSnapshot is NULL) later on in the code.\n\nBased on this, i have created the attached patch (which fixes the issue \nmentioned in the repro) but I have the feeling that I may have missed \nsomething more important here (that would not be addressed with the \nattached patch), thoughts?\n\nThanks\n\nBertrand",
"msg_date": "Mon, 27 Sep 2021 17:52:25 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> I recently observed a failed assertion in EnsurePortalSnapshotExists().\n\nHmm, interesting. If I take out the \"update bdt2\" step, so that the\nexception clause is just COMMIT, then I get something different:\n\nERROR: portal snapshots (1) did not account for all active snapshots (0)\nCONTEXT: PL/pgSQL function inline_code_block line 8 at COMMIT\n\nI think perhaps plpgsql's exception handling needs a bit of adjustment,\nbut not sure what yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Sep 2021 15:44:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Hi,\n\nOn 9/27/21 9:44 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> I recently observed a failed assertion in EnsurePortalSnapshotExists().\n> Hmm, interesting.\nThanks for looking at it!\n> If I take out the \"update bdt2\" step, so that the\n> exception clause is just COMMIT, then I get something different:\n>\n> ERROR: portal snapshots (1) did not account for all active snapshots (0)\n> CONTEXT: PL/pgSQL function inline_code_block line 8 at COMMIT\n\nFWIW, I just gave it a try and it looks like this is also \"fixed\" by the \nproposed patch.\n\nDoes it make sense (as it is currently) to set the ActiveSnapshot to \nNULL and not ensuring the same is done for ActivePortal->portalSnapshot?\n\nOr does it mean we should not reach a state where we set ActiveSnapshot \nto NULL while ActivePortal->portalSnapshot is not already NULL?\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 06:24:42 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> Does it make sense (as it is currently) to set the ActiveSnapshot to \n> NULL and not ensuring the same is done for ActivePortal->portalSnapshot?\n\nI think that patch is just a kluge :-(\n\nAfter tracing through this I've figured out what is happening, and\nwhy you need to put the exception block inside a loop to make it\nhappen. The first iteration goes fine, and we end up with an empty\nActiveSnapshot stack (and no portalSnapshots) because that's how\nthe COMMIT leaves it. In the second iteration, we try to\nre-establish the portal snapshot, but at the point where\nEnsurePortalSnapshotExists is called (from the first INSERT\ncommand) we are already inside a subtransaction thanks to the\nplpgsql exception block. This means that the stacked ActiveSnapshot\nhas as_level = 2, although the Portal owning it belongs to the\nouter transaction. So at the next exception, AtSubAbort_Snapshot\nzaps the stacked ActiveSnapshot, but the Portal stays alive and\nnow it has a dangling portalSnapshot pointer.\n\nBasically there seem to be two ways to fix this, both requiring\nAPI additions to snapmgr.c (hence, cc'ing Alvaro for opinion):\n\n1. Provide a variant of PushActiveSnapshot that allows the caller\nto specify the as_level to use, and then have\nEnsurePortalSnapshotExists, as well as other places that create\nPortal-associated snapshots, use this with as_level equal to the\nPortal's createSubid. The main hazard I see here is that this could\ntheoretically allow the ActiveSnapshot stack to end up with\nout-of-order as_level values, causing issues. For the moment we\ncould probably just add assertions to check that the passed as_level\nis >= next level, or maybe even that this variant is only called with\nempty ActiveSnapshot stack.\n\n2. 
Provide a way for AtSubAbort_Portals to detect whether a\nportalSnapshot pointer points to a snap that's going to be\ndeleted by AtSubAbort_Snapshot, and then just have it clear any\nportalSnapshots that are about to become dangling. (This'd amount\nto accepting the possibility that portalSnapshot is of a different\nsubxact level from the portal, and dealing with the situation.)\n\nI initially thought #2 would be the way to go, but it turns out\nto be a bit messy since what we have is a Snapshot pointer not an\nActiveSnapshotElt pointer. We'd have to do something like search the\nActiveSnapshot stack looking for pointer equality to the caller's\nSnapshot pointer, which seems fragile --- do we assume as_snap is\nunique for any other purpose?\n\nThat being the case, I'm now leaning to #1. Thoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 12:50:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Hi,\n\nOn 9/28/21 6:50 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> Does it make sense (as it is currently) to set the ActiveSnapshot to\n>> NULL and not ensuring the same is done for ActivePortal->portalSnapshot?\n> I think that patch is just a kluge :-(\nright, sounded too simple to be \"true\".\n>\n> After tracing through this I've figured out what is happening, and\n> why you need to put the exception block inside a loop to make it\n> happen. The first iteration goes fine, and we end up with an empty\n> ActiveSnapshot stack (and no portalSnapshots) because that's how\n> the COMMIT leaves it. In the second iteration, we try to\n> re-establish the portal snapshot, but at the point where\n> EnsurePortalSnapshotExists is called (from the first INSERT\n> command) we are already inside a subtransaction thanks to the\n> plpgsql exception block. This means that the stacked ActiveSnapshot\n> has as_level = 2, although the Portal owning it belongs to the\n> outer transaction. So at the next exception, AtSubAbort_Snapshot\n> zaps the stacked ActiveSnapshot, but the Portal stays alive and\n> now it has a dangling portalSnapshot pointer.\n\nThanks for the explanation!\n\n> Basically there seem to be two ways to fix this, both requiring\n> API additions to snapmgr.c (hence, cc'ing Alvaro for opinion):\n>\n> 1. Provide a variant of PushActiveSnapshot that allows the caller\n> to specify the as_level to use, and then have\n> EnsurePortalSnapshotExists, as well as other places that create\n> Portal-associated snapshots, use this with as_level equal to the\n> Portal's createSubid. The main hazard I see here is that this could\n> theoretically allow the ActiveSnapshot stack to end up with\n> out-of-order as_level values, causing issues. 
For the moment we\n> could probably just add assertions to check that the passed as_level\n> is >= next level, or maybe even that this variant is only called with\n> empty ActiveSnapshot stack.\nI think we may get a non empty ActiveSnapshot stack with prepared \nstatements, so tempted to do the assertion on the levels.\n>\n> 2. Provide a way for AtSubAbort_Portals to detect whether a\n> portalSnapshot pointer points to a snap that's going to be\n> deleted by AtSubAbort_Snapshot, and then just have it clear any\n> portalSnapshots that are about to become dangling. (This'd amount\n> to accepting the possibility that portalSnapshot is of a different\n> subxact level from the portal, and dealing with the situation.)\n>\n> I initially thought #2 would be the way to go, but it turns out\n> to be a bit messy since what we have is a Snapshot pointer not an\n> ActiveSnapshotElt pointer. We'd have to do something like search the\n> ActiveSnapshot stack looking for pointer equality to the caller's\n> Snapshot pointer, which seems fragile --- do we assume as_snap is\n> unique for any other purpose?\n>\n> That being the case, I'm now leaning to #1. Thoughts?\n\nI'm also inclined to #1.\n\nPlease find attached a patch proposal for #1 that also adds a new test.\n\nThanks\n\nBertrand",
"msg_date": "Wed, 29 Sep 2021 11:54:40 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Em qua., 29 de set. de 2021 às 06:55, Drouvot, Bertrand <bdrouvot@amazon.com>\nescreveu:\n\n> Hi,\n>\n> On 9/28/21 6:50 PM, Tom Lane wrote:\n> > \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> >> Does it make sense (as it is currently) to set the ActiveSnapshot to\n> >> NULL and not ensuring the same is done for ActivePortal->portalSnapshot?\n> > I think that patch is just a kluge :-(\n> right, sounded too simple to be \"true\".\n> >\n> > After tracing through this I've figured out what is happening, and\n> > why you need to put the exception block inside a loop to make it\n> > happen. The first iteration goes fine, and we end up with an empty\n> > ActiveSnapshot stack (and no portalSnapshots) because that's how\n> > the COMMIT leaves it. In the second iteration, we try to\n> > re-establish the portal snapshot, but at the point where\n> > EnsurePortalSnapshotExists is called (from the first INSERT\n> > command) we are already inside a subtransaction thanks to the\n> > plpgsql exception block. This means that the stacked ActiveSnapshot\n> > has as_level = 2, although the Portal owning it belongs to the\n> > outer transaction. So at the next exception, AtSubAbort_Snapshot\n> > zaps the stacked ActiveSnapshot, but the Portal stays alive and\n> > now it has a dangling portalSnapshot pointer.\n>\n> Thanks for the explanation!\n>\n> > Basically there seem to be two ways to fix this, both requiring\n> > API additions to snapmgr.c (hence, cc'ing Alvaro for opinion):\n> >\n> > 1. Provide a variant of PushActiveSnapshot that allows the caller\n> > to specify the as_level to use, and then have\n> > EnsurePortalSnapshotExists, as well as other places that create\n> > Portal-associated snapshots, use this with as_level equal to the\n> > Portal's createSubid. The main hazard I see here is that this could\n> > theoretically allow the ActiveSnapshot stack to end up with\n> > out-of-order as_level values, causing issues. 
For the moment we\n> could probably just add assertions to check that the passed as_level\n> is >= next level, or maybe even that this variant is only called with\n> empty ActiveSnapshot stack.\n> I think we may get a non empty ActiveSnapshot stack with prepared\n> statements, so tempted to do the assertion on the levels.\n> >\n> > 2. Provide a way for AtSubAbort_Portals to detect whether a\n> > portalSnapshot pointer points to a snap that's going to be\n> > deleted by AtSubAbort_Snapshot, and then just have it clear any\n> > portalSnapshots that are about to become dangling. (This'd amount\n> > to accepting the possibility that portalSnapshot is of a different\n> > subxact level from the portal, and dealing with the situation.)\n> >\n> > I initially thought #2 would be the way to go, but it turns out\n> > to be a bit messy since what we have is a Snapshot pointer not an\n> > ActiveSnapshotElt pointer. We'd have to do something like search the\n> > ActiveSnapshot stack looking for pointer equality to the caller's\n> > Snapshot pointer, which seems fragile --- do we assume as_snap is\n> > unique for any other purpose?\n> >\n> > That being the case, I'm now leaning to #1. Thoughts?\n>\n> I'm also inclined to #1.\n>\nI have a stupid question, why duplicate PushActiveSnapshot?\nWouldn't one function be better?\n\nPushActiveSnapshot(Snapshot snap, int as_level);\n\nSample calls:\nPushActiveSnapshot(GetTransactionSnapshot(),\nGetCurrentTransactionNestLevel());\nPushActiveSnapshot(queryDesc->snapshot, GetCurrentTransactionNestLevel());\nPushActiveSnapshot(GetTransactionSnapshot(), portal->createSubid);\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 29 Sep 2021 07:59:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Hi,\n\nOn 9/29/21 12:59 PM, Ranier Vilela wrote:\n>\n> Em qua., 29 de set. de 2021 às 06:55, Drouvot, Bertrand \n> <bdrouvot@amazon.com> escreveu:\n>\n> I'm also inclined to #1.\n>\n> I have a stupid question, why duplicate PushActiveSnapshot?\n> Wouldn't one function be better?\n>\n> PushActiveSnapshot(Snapshot snap, int as_level);\n>\n> Sample calls:\n> PushActiveSnapshot(GetTransactionSnapshot(), \n> GetCurrentTransactionNestLevel());\n> PushActiveSnapshot(queryDesc->snapshot, \n> GetCurrentTransactionNestLevel());\n> PushActiveSnapshot(GetTransactionSnapshot(), portal->createSubid);\n\nI would say because that could \"break\" existing extensions for example.\n\nAdding a new function prevents \"updating\" existing extensions making use \nof PushActiveSnapshot().\n\nThanks\n\nBertrand",
"msg_date": "Wed, 29 Sep 2021 13:12:02 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\nescreveu:\n\n> Hi,\n> On 9/29/21 12:59 PM, Ranier Vilela wrote:\n>\n>\n> Em qua., 29 de set. de 2021 às 06:55, Drouvot, Bertrand <\n> bdrouvot@amazon.com> escreveu:\n>\n>> I'm also inclined to #1.\n>>\n> I have a stupid question, why duplicate PushActiveSnapshot?\n> Wouldn't one function be better?\n>\n> PushActiveSnapshot(Snapshot snap, int as_level);\n>\n> Sample calls:\n> PushActiveSnapshot(GetTransactionSnapshot(),\n> GetCurrentTransactionNestLevel());\n> PushActiveSnapshot(queryDesc->snapshot, GetCurrentTransactionNestLevel());\n> PushActiveSnapshot(GetTransactionSnapshot(), portal->createSubid);\n>\n> I would say because that could \"break\" existing extensions for example.\n>\n> Adding a new function prevents \"updating\" existing extensions making use\n> of PushActiveSnapshot().\n>\nValid argument of course.\nBut the extensions should also fit the core code.\nDuplicating functions is very bad for maintenance and bloats the code\nunnecessarily, IMHO.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 29 Sep 2021 08:23:22 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "On 9/29/21 1:23 PM, Ranier Vilela wrote:\n>\n> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand \n> <bdrouvot@amazon.com> escreveu:\n>\n> Hi,\n>\n> On 9/29/21 12:59 PM, Ranier Vilela wrote:\n>>\n>> Em qua., 29 de set. de 2021 às 06:55, Drouvot, Bertrand\n>> <bdrouvot@amazon.com> escreveu:\n>>\n>> I'm also inclined to #1.\n>>\n>> I have a stupid question, why duplicate PushActiveSnapshot?\n>> Wouldn't one function be better?\n>>\n>> PushActiveSnapshot(Snapshot snap, int as_level);\n>>\n>> Sample calls:\n>> PushActiveSnapshot(GetTransactionSnapshot(),\n>> GetCurrentTransactionNestLevel());\n>> PushActiveSnapshot(queryDesc->snapshot,\n>> GetCurrentTransactionNestLevel());\n>> PushActiveSnapshot(GetTransactionSnapshot(), portal->createSubid);\n>\n> I would say because that could \"break\" existing extensions for\n> example.\n>\n> Adding a new function prevents \"updating\" existing extensions\n> making use of PushActiveSnapshot().\n>\n> Valid argument of course.\n> But the extensions should also fit the core code.\n> Duplicating functions is very bad for maintenance and bloats the code \n> unnecessarily, IMHO.\n>\nRight. I don't have a strong opinion about this.\n\nLet's see what Tom, Alvaro or others arguments/opinions are (should they \nalso want to go with option #1).\n\nThanks\n\nBertrand\n\n\n\n\n\n\n\n\nOn 9/29/21 1:23 PM, Ranier Vilela\n wrote:\n\n\n\n\n\n\n\nEm qua., 29 de set. de\n 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\n escreveu:\n\n\n\nHi,\n\nOn 9/29/21 12:59 PM, Ranier Vilela wrote:\n\n\n\n\n\n\nEm qua., 29 de\n set. 
de 2021 às 06:55, Drouvot, Bertrand <bdrouvot@amazon.com>\n escreveu:\n\n\n I'm also inclined to #1.\n\n\n\n\n\n\n\n\n\nI have a stupid question, why duplicate\n PushActiveSnapshot?\nWouldn't one function be better?\n\n\nPushActiveSnapshot(Snapshot snap, int\n as_level);\n\n\nSample calls:\nPushActiveSnapshot(GetTransactionSnapshot(),\n GetCurrentTransactionNestLevel());\nPushActiveSnapshot(queryDesc->snapshot,\n GetCurrentTransactionNestLevel()); \n\nPushActiveSnapshot(GetTransactionSnapshot(),\n portal->createSubid);\n\n\n\n\nI would say because that could \"break\" existing\n extensions for example.\nAdding a new function prevents \"updating\" existing\n extensions making use of PushActiveSnapshot().\n\n\nValid argument of course.\n But the extensions should also fit the core code.\n Duplicating functions is very bad for maintenance and\n bloats the code unnecessarily, IMHO.\n\n\n\n\n\n\nRight. I don't have a strong opinion about this.\nLet's see what Tom, Alvaro or others arguments/opinions are\n (should they also want to go with option #1).\n\nThanks\nBertrand",
"msg_date": "Wed, 29 Sep 2021 13:36:14 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "On 2021-Sep-28, Tom Lane wrote:\n\n> 1. Provide a variant of PushActiveSnapshot that allows the caller\n> to specify the as_level to use, and then have\n> EnsurePortalSnapshotExists, as well as other places that create\n> Portal-associated snapshots, use this with as_level equal to the\n> Portal's createSubid. The main hazard I see here is that this could\n> theoretically allow the ActiveSnapshot stack to end up with\n> out-of-order as_level values, causing issues. For the moment we\n> could probably just add assertions to check that the passed as_level\n> is >= next level, or maybe even that this variant is only called with\n> empty ActiveSnapshot stack.\n\nI don't see anything wrong with this idea offhand.\n\nI didn't try to create scenarios with out-of-order as_level active\nsnapshots, but with all the creativity out there in the world, and\nconsidering that it's possible to write procedures in C, I think that\nasserting that the order is maintained is warranted.\n\nNow if we do meet a case with out-of-order levels, what do we do? I\nsuppose we'd need to update AtSubCommit_Snapshot and AtSubAbort_Snapshot\nto cope with that (but by all means let's wait until we have a test case\nwhere that happens ...)\n\n> 2. Provide a way for AtSubAbort_Portals to detect whether a\n> portalSnapshot pointer points to a snap that's going to be\n> deleted by AtSubAbort_Snapshot, and then just have it clear any\n> portalSnapshots that are about to become dangling. (This'd amount\n> to accepting the possibility that portalSnapshot is of a different\n> subxact level from the portal, and dealing with the situation.)\n> \n> I initially thought #2 would be the way to go, but it turns out\n> to be a bit messy since what we have is a Snapshot pointer not an\n> ActiveSnapshotElt pointer. 
We'd have to do something like search the\n> ActiveSnapshot stack looking for pointer equality to the caller's\n> Snapshot pointer, which seems fragile --- do we assume as_snap is\n> unique for any other purpose?\n\nI don't remember what patch it was, but I remember contemplating\nsometime during the past year the possibility of snapshots being used\ntwice in the active stack. Now maybe this is not possible in practice\nbecause in most cases we create a copy, but I couldn't swear that that's\nalways the case. I wouldn't rely on that.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)\n\n\n",
"msg_date": "Wed, 29 Sep 2021 09:50:28 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "On 2021-Sep-29, Ranier Vilela wrote:\n\n> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\n> escreveu:\n\n> > Adding a new function prevents \"updating\" existing extensions making use\n> > of PushActiveSnapshot().\n> >\n> Valid argument of course.\n> But the extensions should also fit the core code.\n> Duplicating functions is very bad for maintenance and bloats the code\n> unnecessarily, IMHO.\n\nWell, there are 42 calls of PushActiveSnapshot currently, and only 6 are\nupdated in the patch. Given that six sevenths of the calls continue to\nuse the existing function and that it is less verbose than the new one,\nthat seems sufficient argument to keep it.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"How amazing is that? I call it a night and come back to find that a bug has\nbeen identified and patched while I sleep.\" (Robert Davidson)\n http://archives.postgresql.org/pgsql-sql/2006-03/msg00378.php\n\n\n",
"msg_date": "Wed, 29 Sep 2021 09:52:38 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2021-Sep-29, Ranier Vilela wrote:\n>> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\n>> escreveu:\n>> Duplicating functions is very bad for maintenance and bloats the code\n>> unnecessarily, IMHO.\n\n> Well, there are 42 calls of PushActiveSnapshot currently, and only 6 are\n> updated in the patch. Given that six sevenths of the calls continue to\n> use the existing function and that it is less verbose than the new one,\n> that seems sufficient argument to keep it.\n\nSeeing that we have to back-patch this, changing the ABI of\nPushActiveSnapshot seems like a complete non-starter.\n\nThe idea I'd had to avoid code duplication was to make\nPushActiveSnapshot a wrapper for the extended function:\n\nvoid\nPushActiveSnapshot(Snapshot snap)\n{\n PushActiveSnapshotWithLevel(snap, GetCurrentTransactionNestLevel());\n}\n\nThis would add one function call to the common code path, but there\nare enough function calls in PushActiveSnapshot that I don't think\nthat's a big concern.\n\nAnother point is that this'd also add the as_level ordering assertion\nto the common code path, but on the whole I think that's good not bad.\n\nBTW, this is not great code:\n\n+\tif (ActiveSnapshot != NULL && ActiveSnapshot->as_next != NULL)\n+\t\tAssert(as_level >= ActiveSnapshot->as_next->as_level);\n\nYou want it all wrapped in the Assert, so that there's not any code\nleft in a non-assert build (which the compiler may or may not optimize\naway, perhaps after complaining about a side-effect-free statement).\n\nActually, it's plain wrong, because you should be looking at the\ntop as_level not the next one. So more like\n\n Assert(ActiveSnapshot == NULL ||\n snap_level >= ActiveSnapshot->as_level);\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 14:01:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "Em qua., 29 de set. de 2021 às 15:01, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2021-Sep-29, Ranier Vilela wrote:\n> >> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <\n> bdrouvot@amazon.com>\n> >> escreveu:\n> >> Duplicating functions is very bad for maintenance and bloats the code\n> >> unnecessarily, IMHO.\n>\n> > Well, there are 42 calls of PushActiveSnapshot currently, and only 6 are\n> > updated in the patch. Given that six sevenths of the calls continue to\n> > use the existing function and that it is less verbose than the new one,\n> > that seems sufficient argument to keep it.\n>\n> Seeing that we have to back-patch this, changing the ABI of\n> PushActiveSnapshot seems like a complete non-starter.\n>\n> The idea I'd had to avoid code duplication was to make\n> PushActiveSnapshot a wrapper for the extended function:\n>\n> void\n> PushActiveSnapshot(Snapshot snap)\n> {\n> PushActiveSnapshotWithLevel(snap, GetCurrentTransactionNestLevel());\n> }\n>\n> Much better.\n\nregards,\nRanier Vilela\n\nEm qua., 29 de set. de 2021 às 15:01, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2021-Sep-29, Ranier Vilela wrote:\n>> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\n>> escreveu:\n>> Duplicating functions is very bad for maintenance and bloats the code\n>> unnecessarily, IMHO.\n\n> Well, there are 42 calls of PushActiveSnapshot currently, and only 6 are\n> updated in the patch. 
Given that six sevenths of the calls continue to\n> use the existing function and that it is less verbose than the new one,\n> that seems sufficient argument to keep it.\n\nSeeing that we have to back-patch this, changing the ABI of\nPushActiveSnapshot seems like a complete non-starter.\n\nThe idea I'd had to avoid code duplication was to make\nPushActiveSnapshot a wrapper for the extended function:\n\nvoid\nPushActiveSnapshot(Snapshot snap)\n{\n PushActiveSnapshotWithLevel(snap, GetCurrentTransactionNestLevel());\n}\nMuch better. regards,Ranier Vilela",
"msg_date": "Wed, 29 Sep 2021 15:23:43 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "On 9/29/21 8:01 PM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>> On 2021-Sep-29, Ranier Vilela wrote:\n>>> Em qua., 29 de set. de 2021 às 08:12, Drouvot, Bertrand <bdrouvot@amazon.com>\n>>> escreveu:\n>>> Duplicating functions is very bad for maintenance and bloats the code\n>>> unnecessarily, IMHO.\n>> Well, there are 42 calls of PushActiveSnapshot currently, and only 6 are\n>> updated in the patch. Given that six sevenths of the calls continue to\n>> use the existing function and that it is less verbose than the new one,\n>> that seems sufficient argument to keep it.\n> Seeing that we have to back-patch this, changing the ABI of\n> PushActiveSnapshot seems like a complete non-starter.\n>\n> The idea I'd had to avoid code duplication was to make\n> PushActiveSnapshot a wrapper for the extended function:\n>\n> void\n> PushActiveSnapshot(Snapshot snap)\n> {\n> PushActiveSnapshotWithLevel(snap, GetCurrentTransactionNestLevel());\n> }\n\nImplemented into the new attached patch.\n\n> So more like\n>\n> Assert(ActiveSnapshot == NULL ||\n> snap_level >= ActiveSnapshot->as_level);\n\nImplemented into the new attached patch.\n\nBut make check is now failing on join_hash.sql, I have been able to \nrepro with:\n\ncreate table bdt (a int);\nbegin;\nsavepoint a;\nrollback to a;\nexplain select count(*) from bdt;\n\nWhich triggers a failed assertion on the new one:\n\nTRAP: FailedAssertion(\"ActiveSnapshot == NULL || as_level >= \nActiveSnapshot->as_level\"\n\nbecause we have as_level = 2 while ActiveSnapshot->as_level = 3.\n\nThanks\n\nBertrand",
"msg_date": "Wed, 29 Sep 2021 21:53:51 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> But make check is now failing on join_hash.sql, I have been able to \n> repro with:\n\nOh, duh, should have thought a bit harder. createSubid is a sequential\nsubtransaction number; it's not the same as the as_level nesting level.\n\nProbably the most effective way to handle this is to add a subtransaction\nnesting-level field to struct Portal, so we can pass that. I don't recall\nthat xact.c provides any easy way to extract the nesting level of a\nsubtransaction that's not the most closely nested one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:11:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "On 9/29/21 10:11 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> But make check is now failing on join_hash.sql, I have been able to\n>> repro with:\n> Oh, duh, should have thought a bit harder. createSubid is a sequential\n> subtransaction number; it's not the same as the as_level nesting level.\nOh right, thanks for the explanation.\n>\n> Probably the most effective way to handle this is to add a subtransaction\n> nesting-level field to struct Portal, so we can pass that.\n\nAgree, done that way in the new attached patch.\n\nThanks\n\nBertrand",
"msg_date": "Thu, 30 Sep 2021 11:52:52 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> [ v2-0003-EnsurePortalSnapshotExists-failed-assertion.patch ]\n\nLooking through this, I think you were overenthusiastic about applying\nPushActiveSnapshotWithLevel. We don't really need to use it except in\nthe places where we're setting portalSnapshot, because other than those\ncases we don't have a risk of portalSnapshot becoming a dangling pointer.\nAlso, I'm quite nervous about applying PushActiveSnapshotWithLevel to\nsnapshots that aren't created by the portal machinery itself, because\nwe don't know very much about where passed-in snapshots came from or\nwhat the caller thinks their lifespan is.\n\nThe attached revision therefore backs off to only using the new code\nin the two places where we really need it. I made a number of\nmore-cosmetic revisions too. Notably, I think it's useful to frame\nthe testing shortcoming as \"we were not testing COMMIT/ROLLBACK\ninside a plpgsql exception block\". So I moved the test code to the\nplpgsql tests and made it check ROLLBACK too.\n\n\t\t\tregards, tom lane\n\nPS: Memo to self: in the back branches, the new field has to be\nadded at the end of struct Portal.",
"msg_date": "Thu, 30 Sep 2021 13:16:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\nOn 9/30/21 7:16 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> [ v2-0003-EnsurePortalSnapshotExists-failed-assertion.patch ]\n> Looking through this, I think you were overenthusiastic about applying\n> PushActiveSnapshotWithLevel. We don't really need to use it except in\n> the places where we're setting portalSnapshot, because other than those\n> cases we don't have a risk of portalSnapshot becoming a dangling pointer.\n> Also, I'm quite nervous about applying PushActiveSnapshotWithLevel to\n> snapshots that aren't created by the portal machinery itself, because\n> we don't know very much about where passed-in snapshots came from or\n> what the caller thinks their lifespan is.\n\nOh right, I did not think about it, thanks!\n\n>\n> The attached revision therefore backs off to only using the new code\n> in the two places where we really need it.\n\nOk, so in PortalRunUtility() and EnsurePortalSnapshotExists().\n\n> I made a number of\n> more-cosmetic revisions too.\nthanks!\n> Notably, I think it's useful to frame\n> the testing shortcoming as \"we were not testing COMMIT/ROLLBACK\n> inside a plpgsql exception block\". So I moved the test code to the\n> plpgsql tests and made it check ROLLBACK too.\nIndeed, makes sense.\n>\n> regards, tom lane\n>\n> PS: Memo to self: in the back branches, the new field has to be\n> added at the end of struct Portal.\n\nout of curiosity, why?\n\nThanks\n\nBertrand\n\n\n\n",
"msg_date": "Thu, 30 Sep 2021 20:07:01 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> On 9/30/21 7:16 PM, Tom Lane wrote:\n>> PS: Memo to self: in the back branches, the new field has to be\n>> added at the end of struct Portal.\n\n> out of curiosity, why?\n\nSticking it into the middle would create an ABI break for any\nextension code that's looking at struct Portal, due to movement\nof existing field offsets. In HEAD that's fine, so we should\nput the field where it makes the most sense. But we have to\nbe careful about changing globally-visible structs in released\nbranches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 14:25:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
},
{
"msg_contents": "\nOn 9/30/21 8:25 PM, Tom Lane wrote:\n> \"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n>> On 9/30/21 7:16 PM, Tom Lane wrote:\n>>> PS: Memo to self: in the back branches, the new field has to be\n>>> added at the end of struct Portal.\n>> out of curiosity, why?\n> Sticking it into the middle would create an ABI break for any\n> extension code that's looking at struct Portal, due to movement\n> of existing field offsets. In HEAD that's fine, so we should\n> put the field where it makes the most sense. But we have to\n> be careful about changing globally-visible structs in released\n> branches.\n\nGot it, thanks!\n\nBertrand\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 07:27:42 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] failed assertion in EnsurePortalSnapshotExists()"
}
] |
[
{
"msg_contents": "Hi,\n\nWe've encountered some unexpected behavior with statement_timeout not cancelling a query in DECLARE CURSOR, but only if the DECLARE CURSOR is outside of a transaction:\n\nxof=# select version();\n version \n-------------------------------------------------------------------------------------------------------------------\n PostgreSQL 13.4 on x86_64-apple-darwin19.6.0, compiled by Apple clang version 12.0.0 (clang-1200.0.32.29), 64-bit\n(1 row)\n\nxof=# set statement_timeout = '1s';\nSET\nxof=# \\timing\nTiming is on.\nxof=# select * from (with test as (select pg_sleep(10), current_timestamp as cur_time) select 1 from test ) as slp;\nERROR: canceling statement due to statement timeout\nTime: 1000.506 ms (00:01.001)\nxof=# declare x no scroll cursor with hold for select * from (with test as (select pg_sleep(10), current_timestamp as cur_time) select 1 from test ) as slp;\nDECLARE CURSOR\nTime: 10001.929 ms (00:10.002)\nxof=# \n\nbut:\n\nxof=# set statement_timeout = '1s';\nSET\nxof=# \\timing\nTiming is on.\nxof=# begin;\nBEGIN\nTime: 0.161 ms\nxof=*# declare x no scroll cursor with hold for select * from (with test as (select pg_sleep(10), current_timestamp as cur_time) select 1 from test ) as slp;\nDECLARE CURSOR\nTime: 0.949 ms\nxof=*# fetch all from x;\nERROR: canceling statement due to statement timeout\nTime: 1000.520 ms (00:01.001)\nxof=!# abort;\nROLLBACK\nTime: 0.205 ms\nxof=# \n\n\n\n",
"msg_date": "Mon, 27 Sep 2021 10:42:09 -0700",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": true,
"msg_subject": "statement_timeout vs DECLARE CURSOR"
},
{
"msg_contents": "\n\n> On Sep 27, 2021, at 10:42, Christophe Pettus <xof@thebuild.com> wrote:\n> We've encountered some unexpected behavior with statement_timeout not cancelling a query in DECLARE CURSOR, but only if the DECLARE CURSOR is outside of a transaction:\n\nA bit more poking revealed the reason: The ON HOLD cursor's query is executed at commit time (which is, logically, not interruptible), but that's all wrapped in the single statement outside of a transaction.\n\n",
"msg_date": "Mon, 27 Sep 2021 11:10:19 -0700",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": true,
"msg_subject": "Re: statement_timeout vs DECLARE CURSOR"
},
{
"msg_contents": "Christophe Pettus <xof@thebuild.com> writes:\n>> On Sep 27, 2021, at 10:42, Christophe Pettus <xof@thebuild.com> wrote:\n>> We've encountered some unexpected behavior with statement_timeout not cancelling a query in DECLARE CURSOR, but only if the DECLARE CURSOR is outside of a transaction:\n\n> A bit more poking revealed the reason: The ON HOLD cursor's query is executed at commit time (which is, logically, not interruptible), but that's all wrapped in the single statement outside of a transaction.\n\nHmm ... seems like a bit of a UX failure. I wonder why we don't persist\nsuch cursors before we get into the uninterruptible part of COMMIT.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 27 Sep 2021 15:40:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: statement_timeout vs DECLARE CURSOR"
},
{
"msg_contents": "I wrote:\n> Christophe Pettus <xof@thebuild.com> writes:\n>> A bit more poking revealed the reason: The ON HOLD cursor's query is executed at commit time (which is, logically, not interruptible), but that's all wrapped in the single statement outside of a transaction.\n\n> Hmm ... seems like a bit of a UX failure. I wonder why we don't persist\n> such cursors before we get into the uninterruptible part of COMMIT.\n\nOh, I see the issue. It's not that that part of COMMIT isn't\ninterruptible; you can control-C out of it just fine. The problem\nis that finish_xact_command() disarms the statement timeout before\nstarting CommitTransactionCommand at all.\n\nWe could imagine pushing the responsibility for that down into\nxact.c, allowing it to happen after CommitTransaction has finished\nrunning user-defined code. But it seems like a bit of a mess\nbecause there are so many other code paths there. Not sure how\nto avoid future bugs-of-omission.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 15:57:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: statement_timeout vs DECLARE CURSOR"
},
{
"msg_contents": "[ redirect to -hackers ]\n\nI wrote:\n>> Christophe Pettus <xof@thebuild.com> writes:\n>>> A bit more poking revealed the reason: The ON HOLD cursor's query is executed at commit time (which is, logically, not interruptible), but that's all wrapped in the single statement outside of a transaction.\n\n>> Hmm ... seems like a bit of a UX failure. I wonder why we don't persist\n>> such cursors before we get into the uninterruptible part of COMMIT.\n\n> Oh, I see the issue. It's not that that part of COMMIT isn't\n> interruptible; you can control-C out of it just fine. The problem\n> is that finish_xact_command() disarms the statement timeout before\n> starting CommitTransactionCommand at all.\n\n> We could imagine pushing the responsibility for that down into\n> xact.c, allowing it to happen after CommitTransaction has finished\n> running user-defined code. But it seems like a bit of a mess\n> because there are so many other code paths there. Not sure how\n> to avoid future bugs-of-omission.\n\nActually ... maybe it needn't be any harder than the attached?\n\nThis makes it possible for a statement timeout interrupt to occur\nanytime during CommitTransactionCommand, but I think\nCommitTransactionCommand had better be able to protect itself\nagainst that anyway, for a couple of reasons:\n\n1. It's not significantly different from a query-cancel interrupt,\nwhich surely could arrive during that window.\n\n2. COMMIT-within-procedures already exposes us to statement timeout\nduring COMMIT.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 28 Sep 2021 16:15:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: statement_timeout vs DECLARE CURSOR"
}
] |
[
{
"msg_contents": "Currently enum_in() is marked as stable, on the reasonable grounds\nthat it depends on system catalog contents. However, after the\ndiscussion at [1] I'm wondering why it wouldn't be perfectly safe,\nand useful, to mark it as immutable.\n\nHere's my reasoning: \"immutable\" promises that the function will\nalways give the same results for the same inputs. However, one of\nthe inputs is the type OID for the desired enum type. It's certainly\npossible that the enum type could be dropped later, and then its type\nOID could be recycled, so that at some future epoch enum_in() might\ngive a different result for the \"same\" type OID. But I cannot think\nof any situation where a stored output value of enum_in() would\noutlive the dropping of the enum type. It certainly couldn't happen\nfor a table column of that type, nor would it be safe for a stored\nrule to outlive a type it mentions. So it seems like the results of\nenum_in() are immutable for as long as anybody could care about them,\nand that's good enough.\n\nMoreover, if it's *not* good enough, then our existing practice of\nfolding enum literals to OID constants on-sight must be unsafe too.\n\nSo it seems like this would be okay, and if we did it it'd eliminate\nsome annoying corner cases for query optimization, as seen in the\nreferenced thread.\n\nI think that a similar argument could be made about enum_out, although\nmaybe I'm missing some interesting case there.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/9a51090e-e075-4f2f-e3a6-55ed4359a357%40kimmet.dk\n\n\n",
"msg_date": "Mon, 27 Sep 2021 17:54:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Couldn't we mark enum_in() as immutable?"
},
{
"msg_contents": "> On 27 Sep 2021, at 23:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> So it seems like the results of\n> enum_in() are immutable for as long as anybody could care about them,\n> and that's good enough.\n\n+1. I can't think of a situation where this wouldn't hold.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 15:36:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Couldn't we mark enum_in() as immutable?"
},
{
"msg_contents": "\nOn 9/27/21 5:54 PM, Tom Lane wrote:\n> Currently enum_in() is marked as stable, on the reasonable grounds\n> that it depends on system catalog contents. However, after the\n> discussion at [1] I'm wondering why it wouldn't be perfectly safe,\n> and useful, to mark it as immutable.\n>\n> Here's my reasoning: \"immutable\" promises that the function will\n> always give the same results for the same inputs. However, one of\n> the inputs is the type OID for the desired enum type. It's certainly\n> possible that the enum type could be dropped later, and then its type\n> OID could be recycled, so that at some future epoch enum_in() might\n> give a different result for the \"same\" type OID. But I cannot think\n> of any situation where a stored output value of enum_in() would\n> outlive the dropping of the enum type. It certainly couldn't happen\n> for a table column of that type, nor would it be safe for a stored\n> rule to outlive a type it mentions. So it seems like the results of\n> enum_in() are immutable for as long as anybody could care about them,\n> and that's good enough.\n>\n> Moreover, if it's *not* good enough, then our existing practice of\n> folding enum literals to OID constants on-sight must be unsafe too.\n>\n> So it seems like this would be okay, and if we did it it'd eliminate\n> some annoying corner cases for query optimization, as seen in the\n> referenced thread.\n>\n> I think that a similar argument could be made about enum_out, although\n> maybe I'm missing some interesting case there.\n>\n> Thoughts?\n>\n> \t\t\t\n\n\nThe value returned depends on the label values in pg_enum, so if someone\ndecided to rename a label that would affect it, no? Same for enum_out.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 09:50:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Couldn't we mark enum_in() as immutable?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/27/21 5:54 PM, Tom Lane wrote:\n>> Currently enum_in() is marked as stable, on the reasonable grounds\n>> that it depends on system catalog contents. However, after the\n>> discussion at [1] I'm wondering why it wouldn't be perfectly safe,\n>> and useful, to mark it as immutable.\n\n> The value returned depends on the label values in pg_enum, so if someone\n> decided to rename a label that would affect it, no? Same for enum_out.\n\nHm. I'd thought about this to the extent of considering that if we\nrename label A to B, then stored values of \"A\" would now print as \"B\",\nand const-folding \"A\" earlier would track that which seems OK.\nBut you're right that then introducing a new definition of \"A\"\n(via ADD or RENAME) would make things messy.\n\n>> Moreover, if it's *not* good enough, then our existing practice of\n>> folding enum literals to OID constants on-sight must be unsafe too.\n\nI'm still a little troubled by this angle. However, we've gotten away\nwith far worse instability for datetime literals, so maybe it's not a\nproblem in practice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 11:04:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Couldn't we mark enum_in() as immutable?"
},
{
"msg_contents": "PostGIS has a very similar thing: ST_Transform is marked as immutable but\ndoes depend on contents of spatial_ref_sys table. Although it is shipped\nwith extension and almost never changes incompatibly, there are scenarios\nwhere it breaks: dump/restore + index or generated column can fail the\nimport if data gets fed into the immutable function before the contents of\nspatial_ref_sys is restored. I'd love this issue to be addressed at the\ncore level as benefits of having it as immutable outweigh even this\nunfortunate issue.\n\n\n\nOn Tue, Sep 28, 2021 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 9/27/21 5:54 PM, Tom Lane wrote:\n> >> Currently enum_in() is marked as stable, on the reasonable grounds\n> >> that it depends on system catalog contents. However, after the\n> >> discussion at [1] I'm wondering why it wouldn't be perfectly safe,\n> >> and useful, to mark it as immutable.\n>\n> > The value returned depends on the label values in pg_enum, so if someone\n> > decided to rename a label that would affect it, no? Same for enum_out.\n>\n> Hm. I'd thought about this to the extent of considering that if we\n> rename label A to B, then stored values of \"A\" would now print as \"B\",\n> and const-folding \"A\" earlier would track that which seems OK.\n> But you're right that then introducing a new definition of \"A\"\n> (via ADD or RENAME) would make things messy.\n>\n> >> Moreover, if it's *not* good enough, then our existing practice of\n> >> folding enum literals to OID constants on-sight must be unsafe too.\n>\n> I'm still a little troubled by this angle. However, we've gotten away\n> with far worse instability for datetime literals, so maybe it's not a\n> problem in practice.\n>\n> regards, tom lane\n>\n>\n>\n\nPostGIS has a very similar thing: ST_Transform is marked as immutable but does depend on contents of spatial_ref_sys table. 
Although it is shipped with extension and almost never changes incompatibly, there are scenarios where it breaks: dump/restore + index or generated column can fail the import if data gets fed into the immutable function before the contents of spatial_ref_sys is restored. I'd love this issue to be addressed at the core level as benefits of having it as immutable outweigh even this unfortunate issue.On Tue, Sep 28, 2021 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/27/21 5:54 PM, Tom Lane wrote:\n>> Currently enum_in() is marked as stable, on the reasonable grounds\n>> that it depends on system catalog contents. However, after the\n>> discussion at [1] I'm wondering why it wouldn't be perfectly safe,\n>> and useful, to mark it as immutable.\n\n> The value returned depends on the label values in pg_enum, so if someone\n> decided to rename a label that would affect it, no? Same for enum_out.\n\nHm. I'd thought about this to the extent of considering that if we\nrename label A to B, then stored values of \"A\" would now print as \"B\",\nand const-folding \"A\" earlier would track that which seems OK.\nBut you're right that then introducing a new definition of \"A\"\n(via ADD or RENAME) would make things messy.\n\n>> Moreover, if it's *not* good enough, then our existing practice of\n>> folding enum literals to OID constants on-sight must be unsafe too.\n\nI'm still a little troubled by this angle. However, we've gotten away\nwith far worse instability for datetime literals, so maybe it's not a\nproblem in practice.\n\n regards, tom lane",
"msg_date": "Tue, 28 Sep 2021 18:13:07 +0300",
"msg_from": "=?UTF-8?Q?Darafei_=22Kom=D1=8Fpa=22_Praliaskouski?= <me@komzpa.net>",
"msg_from_op": false,
"msg_subject": "Re: Couldn't we mark enum_in() as immutable?"
},
{
"msg_contents": "\nOn 9/28/21 11:04 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 9/27/21 5:54 PM, Tom Lane wrote:\n>>> Currently enum_in() is marked as stable, on the reasonable grounds\n>>> that it depends on system catalog contents. However, after the\n>>> discussion at [1] I'm wondering why it wouldn't be perfectly safe,\n>>> and useful, to mark it as immutable.\n>> The value returned depends on the label values in pg_enum, so if someone\n>> decided to rename a label that would affect it, no? Same for enum_out.\n> Hm. I'd thought about this to the extent of considering that if we\n> rename label A to B, then stored values of \"A\" would now print as \"B\",\n> and const-folding \"A\" earlier would track that which seems OK.\n> But you're right that then introducing a new definition of \"A\"\n> (via ADD or RENAME) would make things messy.\n>\n>>> Moreover, if it's *not* good enough, then our existing practice of\n>>> folding enum literals to OID constants on-sight must be unsafe too.\n> I'm still a little troubled by this angle. However, we've gotten away\n> with far worse instability for datetime literals, so maybe it's not a\n> problem in practice.\n>\n> \t\t\t\n\n\nYeah, I suspect it's not.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 11:46:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Couldn't we mark enum_in() as immutable?"
}
] |
[
{
"msg_contents": "Hello,\n\nWhile developing I got this error and it was difficult to figure out what\nwas going on.\n\nThanks to Jacob, I was able to learn the context of the failure, so we\ncreated this small patch.\n\n\nThe text of the error message, of course, is up for debate, but hopefully\nthis will make it more clear to others.\n\n\nThank you,\nRachel Heaton",
"msg_date": "Mon, 27 Sep 2021 15:55:02 -0700",
"msg_from": "Rachel Heaton <rachelmheaton@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "> On 28 Sep 2021, at 00:55, Rachel Heaton <rachelmheaton@gmail.com> wrote:\n\n> While developing I got this error and it was difficult to figure out what was going on. \n> \n> Thanks to Jacob, I was able to learn the context of the failure, so we created this small patch. \n\nI can see that, and I think this patch makes sense even though we don't have\nmuch of a precedent for outputting informational messages from Makefiles.\n\n> The text of the error message, of course, is up for debate, but hopefully this will make it more clear to others. \n\n+\techo 'libpq must not call exit'; exit 1; \\\n\nSince it's not actually libpq which calls exit (no such patch would ever be\ncommitted), I think it would be clearer to indicate that a library linked to is\nthe culprit. How about something like \"libpq must not be linked against any\nlibrary calling exit\"?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 15:14:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 6:14 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>\n> Since it's not actually libpq which calls exit (no such patch would ever be\n> committed), I think it would be clearer to indicate that a library linked to is\n> the culprit. How about something like \"libpq must not be linked against any\n> library calling exit\"?\n>\n\nExcellent update to the error message. Patch attached.",
"msg_date": "Tue, 28 Sep 2021 08:52:59 -0700",
"msg_from": "Rachel Heaton <rachelmheaton@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "> On 28 Sep 2021, at 17:52, Rachel Heaton <rachelmheaton@gmail.com> wrote:\n\n> Patch attached.\n\nI tweaked the error message a little bit and pushed to master. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 14:35:22 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "Hello,\n\nOn 28/09/2021 05:55, Rachel Heaton wrote:\n> Hello,\n> \n> While developing I got this error and it was difficult to figure out \n> what was going on.\n> \n> Thanks to Jacob, I was able to learn the context of the failure, so we \n> created this small patch.\n\n-\t! nm -A -u $< 2>/dev/null | grep -v __cxa_atexit | grep exit\n+\t@if nm -a -u $< 2>/dev/null | grep -v __cxa_atexit | grep exit; then \\\n+\t\techo 'libpq must not be linked against any library calling exit'; \nexit 1; \\\n+\tfi\n\nCould you please confirm that the change from -A to -a in nm arguments \nin this patch is intentional?\n\n-- \nAnton Voloshin\nPostgres Professional, The Russian Postgres Company\nhttps://postgrespro.ru\n\n\n",
"msg_date": "Mon, 4 Oct 2021 23:40:02 +0700",
"msg_from": "Anton Voloshin <a.voloshin@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "On Mon, 2021-10-04 at 23:40 +0700, Anton Voloshin wrote:\r\n> \r\n> Could you please confirm that the change from -A to -a in nm arguments \r\n> in this patch is intentional?\r\n\r\nThat was not intended by us, thank you for the catch! A stray\r\nlowercasing in vim, perhaps.\r\n\r\n--Jacob\r\n",
"msg_date": "Mon, 4 Oct 2021 17:02:11 +0000",
"msg_from": "Jacob Champion <pchampion@vmware.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "\n\n> On 4 Oct 2021, at 19:02, Jacob Champion <pchampion@vmware.com> wrote:\n> \n> On Mon, 2021-10-04 at 23:40 +0700, Anton Voloshin wrote:\n>> \n>> Could you please confirm that the change from -A to -a in nm arguments \n>> in this patch is intentional?\n> \n> That was not intended by us, thank you for the catch! A stray\n> lowercasing in vim, perhaps.\n\nHmm, I will take care of this shortly.\n\n/ Daniel\n\n",
"msg_date": "Mon, 4 Oct 2021 19:21:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "> On 4 Oct 2021, at 19:21, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 4 Oct 2021, at 19:02, Jacob Champion <pchampion@vmware.com> wrote:\n>> \n>> On Mon, 2021-10-04 at 23:40 +0700, Anton Voloshin wrote:\n>>> \n>>> Could you please confirm that the change from -A to -a in nm arguments \n>>> in this patch is intentional?\n>> \n>> That was not intended by us, thank you for the catch! A stray\n>> lowercasing in vim, perhaps.\n> \n> Hmm, I will take care of this shortly.\n\nRight, so I missed this in reviewing and testing, and I know why the latter\ndidn't catch it. nm -A and -a outputs the same thing *for this input* on my\nDebian and macOS boxes, with the small difference that -A prefixes the line\nwith the name of the input file. -a also include debugger symbols, but for\nthis usage it didn't alter the results. I will go ahead and fix this, thanks\nfor catching it! \n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 20:36:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
},
{
"msg_contents": "Thanks to you both!\r\n\r\nFrom: Daniel Gustafsson <daniel@yesql.se>\r\nDate: Monday, October 4, 2021 at 11:36 AM\r\nTo: Jacob Champion <pchampion@vmware.com>\r\nCc: Rachel Heaton <rachelmheaton@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>, a.voloshin@postgrespro.ru <a.voloshin@postgrespro.ru>\r\nSubject: Re: [PATCH] Print error when libpq-refs-stamp fails\r\n> On 4 Oct 2021, at 19:21, Daniel Gustafsson <daniel@yesql.se> wrote:\r\n>\r\n>> On 4 Oct 2021, at 19:02, Jacob Champion <pchampion@vmware.com> wrote:\r\n>>\r\n>> On Mon, 2021-10-04 at 23:40 +0700, Anton Voloshin wrote:\r\n>>>\r\n>>> Could you please confirm that the change from -A to -a in nm arguments\r\n>>> in this patch is intentional?\r\n>>\r\n>> That was not intended by us, thank you for the catch! A stray\r\n>> lowercasing in vim, perhaps.\r\n>\r\n> Hmm, I will take care of this shortly.\r\n\r\nRight, so I missed this in reviewing and testing, and I know why the latter\r\ndidn't catch it. nm -A and -a outputs the same thing *for this input* on my\r\nDebian and macOS boxes, with the small difference that -A prefixes the line\r\nwith the name of the input file. -a also include debugger symbols, but for\r\nthis usage it didn't alter the results. 
I will go ahead and fix this, thanks\r\nfor catching it!\r\n\r\n--\r\nDaniel Gustafsson https://vmware.com/\r\n\n\n\n\n\n\n\n\n\nThanks to you both!\n \n\nFrom:\r\nDaniel Gustafsson <daniel@yesql.se>\nDate: Monday, October 4, 2021 at 11:36 AM\nTo: Jacob Champion <pchampion@vmware.com>\nCc: Rachel Heaton <rachelmheaton@gmail.com>, pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>, a.voloshin@postgrespro.ru <a.voloshin@postgrespro.ru>\nSubject: Re: [PATCH] Print error when libpq-refs-stamp fails\n\n\n> On 4 Oct 2021, at 19:21, Daniel Gustafsson <daniel@yesql.se> wrote:\r\n> \r\n>> On 4 Oct 2021, at 19:02, Jacob Champion <pchampion@vmware.com> wrote:\r\n>> \r\n>> On Mon, 2021-10-04 at 23:40 +0700, Anton Voloshin wrote:\r\n>>> \r\n>>> Could you please confirm that the change from -A to -a in nm arguments \r\n>>> in this patch is intentional?\r\n>> \r\n>> That was not intended by us, thank you for the catch! A stray\r\n>> lowercasing in vim, perhaps.\r\n> \r\n> Hmm, I will take care of this shortly.\n\r\nRight, so I missed this in reviewing and testing, and I know why the latter\r\ndidn't catch it. nm -A and -a outputs the same thing *for this input* on my\r\nDebian and macOS boxes, with the small difference that -A prefixes the line\r\nwith the name of the input file. -a also include debugger symbols, but for\r\nthis usage it didn't alter the results. I will go ahead and fix this, thanks\r\nfor catching it! \n\r\n--\r\nDaniel Gustafsson https://vmware.com/",
"msg_date": "Tue, 5 Oct 2021 00:09:29 +0000",
"msg_from": "Rachel Heaton <rachelmheaton@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Print error when libpq-refs-stamp fails"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile reviewing a patch that refactors syslogger.c, we use the\nfollowing code to pass down a HANDLE to a forked syslogger as of\nsyslogger_forkexec():\n if (syslogFile != NULL)\n snprintf(filenobuf, sizeof(filenobuf), \"%ld\",\n (long) _get_osfhandle(_fileno(syslogFile)));\n\nThen, in the kicked syslogger, the parsing is done as follows in\nsyslogger_parseArgs() for WIN32, with a simple atoi():\n fd = atoi(*argv++);\n\n_get_osfhandle() returns intptr_t whose size is system-dependent, as\nit would be 32b for Win32 and 64b for Win64:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/get-osfhandle\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/standard-types\n\nAs long is 4 bytes on Windows, we would run into overflows here if the\nhandle is out of the normal 32b range. So the logic as coded is fine\nfor Win32, but it could be wrong under Win64.\n\nAm I missing something obvious? One thing that we could do here is\nto do the parsing with pg_lltoa() while printing the argument with\nINT64_FORMAT, no?\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 28 Sep 2021 12:41:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Incorrect fd handling in syslogger.c for Win64 under EXEC_BACKEND"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 12:41:40PM +0900, Michael Paquier wrote:\n> Am I missing something obvious? One thing that we could do here is\n> to do the parsing with pg_lltoa() while printing the argument with\n> INT64_FORMAT, no?\n\nI wrote that a bit too quickly. After looking at it, what we could\nuse to parse the handle pointer is scanint8() instead, even if that's\na bit ugly. I also found the code a bit confused regarding \"fd\", that\ncould be manipulated as an int or intptr_t, so something like the\nattached should improve the situation.\n\nOpinions welcome.\n--\nMichael",
"msg_date": "Tue, 28 Sep 2021 14:36:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect fd handling in syslogger.c for Win64 under EXEC_BACKEND"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 02:36:52PM +0900, Michael Paquier wrote:\n> I wrote that a bit too quickly. After looking at it, what we could\n> use to parse the handle pointer is scanint8() instead, even if that's\n> a bit ugly. I also found the code a bit confused regarding \"fd\", that\n> could be manipulated as an int or intptr_t, so something like the\n> attached should improve the situation.\n\nAs reminded by Jacob, the code is corrently correct as handles are\n4 bytes on both Win32 and Win64:\nhttps://docs.microsoft.com/en-us/windows/win32/winauto/32-bit-and-64-bit-interoperability\n\nSorry for the noise. It looks like I got confused by intptr_t :p\n--\nMichael",
"msg_date": "Wed, 29 Sep 2021 07:25:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect fd handling in syslogger.c for Win64 under EXEC_BACKEND"
}
] |
[
{
"msg_contents": "Hi,\n\n(LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in \ntab-complete. I made a patch for these options.\n\nregards,\nKoyu Tanigawa",
"msg_date": "Tue, 28 Sep 2021 16:13:45 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_and_?=\n =?UTF-8?Q?=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": "\n\nOn 2021/09/28 16:13, bt21tanigaway wrote:\n> Hi,\n> \n> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in tab-complete. I made a patch for these options.\n\nThanks for the patch!\n\nThe patch seems to forget to handle the tab-completion for\n\"LOCK ONLY <table-name> IN\".\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 28 Sep 2021 16:36:49 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIA==?=\n =?UTF-8?Q?=e2=80=9cNOWAIT=e2=80=9d_are_not_yet_implemented?="
},
{
"msg_contents": "2021-09-28 16:36 に Fujii Masao さんは書きました:\n> On 2021/09/28 16:13, bt21tanigaway wrote:\n>> Hi,\n>> \n>> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in \n>> tab-complete. I made a patch for these options.\n> \n> Thanks for the patch!\n> The patch seems to forget to handle the tab-completion for\n> \"LOCK ONLY <table-name> IN\".\n\nThanks for your comment!\nI attach a new patch fixed to this mail.\n\nRegards,\n\nKoyu Tanigawa\n\n\n\n",
"msg_date": "Tue, 28 Sep 2021 17:03:57 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re=3A_=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_?=\n =?UTF-8?Q?and_=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": "2021-09-28 17:03 に bt21tanigaway さんは書きました:\n> 2021-09-28 16:36 に Fujii Masao さんは書きました:\n>> On 2021/09/28 16:13, bt21tanigaway wrote:\n>>> Hi,\n>>> \n>>> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in \n>>> tab-complete. I made a patch for these options.\n>> \n>> Thanks for the patch!\n>> The patch seems to forget to handle the tab-completion for\n>> \"LOCK ONLY <table-name> IN\".\n> \n> Thanks for your comment!\n> I attach a new patch fixed to this mail.\n> \n> Regards,\n> \n> Koyu Tanigawa\n\nSorry, I forgot to attach patch file.\n\"fix-tab-complete2.patch\" is fixed!\n\nRegards,\n\nKoyu Tanigawa",
"msg_date": "Tue, 28 Sep 2021 17:06:29 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re=3A_=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_?=\n =?UTF-8?Q?and_=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": "2021-09-28 17:06 に bt21tanigaway さんは書きました:\n> 2021-09-28 17:03 に bt21tanigaway さんは書きました:\n>> 2021-09-28 16:36 に Fujii Masao さんは書きました:\n>>> On 2021/09/28 16:13, bt21tanigaway wrote:\n>>>> Hi,\n>>>> \n>>>> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in \n>>>> tab-complete. I made a patch for these options.\n>>> \n>>> Thanks for the patch!\n>>> The patch seems to forget to handle the tab-completion for\n>>> \"LOCK ONLY <table-name> IN\".\n>> \n>> Thanks for your comment!\n>> I attach a new patch fixed to this mail.\n>> \n>> Regards,\n>> \n>> Koyu Tanigawa\n> \n> Sorry, I forgot to attach patch file.\n> \"fix-tab-complete2.patch\" is fixed!\n> \n> Regards,\n> \n> Koyu Tanigawa\nThank you for your patch.\nI have two comments.\n\n1. When I executed git apply, an error occured.\n---\n$ git apply ~/Downloads/fix-tab-complete2.patch\n/home/penguin/Downloads/fix-tab-complete2.patch:14: indent with spaces.\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \" UNION SELECT \n'TABLE'\" \" UNION SELECT 'ONLY'\");\nwarning: 1 line adds whitespace errors.\n---\n\n2. The command \"LOCK TABLE a, b;\" can be executed, but tab-completion \ndoesn't work properly. Is it OK?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 28 Sep 2021 22:46:35 +0900",
"msg_from": "Shinya Kato <katousnk@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?Q?Re=3A_=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_?=\n =?UTF-8?Q?and_=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": ">-----Original Message-----\r\n>From: bt21tanigaway <bt21tanigaway@oss.nttdata.com>\r\n>Sent: Tuesday, September 28, 2021 5:06 PM\r\n>To: Fujii Masao <masao.fujii@oss.nttdata.com>;\r\n>pgsql-hackers@lists.postgresql.org\r\n>Subject: Re: (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet\r\n>implemented\r\n>\r\n>2021-09-28 17:03 に bt21tanigaway さんは書きました:\r\n>> 2021-09-28 16:36 に Fujii Masao さんは書きました:\r\n>>> On 2021/09/28 16:13, bt21tanigaway wrote:\r\n>>>> Hi,\r\n>>>>\r\n>>>> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in\r\n>>>> tab-complete. I made a patch for these options.\r\n>>>\r\n>>> Thanks for the patch!\r\n>>> The patch seems to forget to handle the tab-completion for \"LOCK ONLY\r\n>>> <table-name> IN\".\r\n>>\r\n>> Thanks for your comment!\r\n>> I attach a new patch fixed to this mail.\r\n>>\r\n>> Regards,\r\n>>\r\n>> Koyu Tanigawa\r\n>\r\n>Sorry, I forgot to attach patch file.\r\n>\"fix-tab-complete2.patch\" is fixed!\r\n>\r\n>Regards,\r\n>\r\n>Koyu Tanigawa\r\nThank you for your patch.\r\nI have two comments.\r\n\r\n1. When I executed git apply, an error occured.\r\n---\r\n$ git apply ~/Downloads/fix-tab-complete2.patch\r\n/home/penguin/Downloads/fix-tab-complete2.patch:14: indent with spaces.\r\n COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \" UNION SELECT 'TABLE'\" \" UNION SELECT 'ONLY'\");\r\nwarning: 1 line adds whitespace errors.\r\n---\r\n\r\n2. The command \"LOCK TABLE a, b;\" can be executed, but tab-completion doesn't work properly. Is it OK?\r\n\r\n-- \r\nRegards,\r\n\r\n--\r\nShinya Kato\r\nAdvanced Computing Technology Center\r\nResearch and Development Headquarters\r\nNTT DATA CORPORATION\r\n",
"msg_date": "Tue, 28 Sep 2021 13:55:48 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UkU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIOKAnE5P?=\n =?utf-8?B?V0FJVOKAnSBhcmUgbm90IHlldCBpbXBsZW1lbnRlZA==?="
},
{
"msg_contents": "2021-09-28 22:55 に Shinya11.Kato@nttdata.com さんは書きました:\n>> -----Original Message-----\n>> From: bt21tanigaway <bt21tanigaway@oss.nttdata.com>\n>> Sent: Tuesday, September 28, 2021 5:06 PM\n>> To: Fujii Masao <masao.fujii@oss.nttdata.com>;\n>> pgsql-hackers@lists.postgresql.org\n>> Subject: Re: (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet\n>> implemented\n>> \n>> 2021-09-28 17:03 に bt21tanigaway さんは書きました:\n>>> 2021-09-28 16:36 に Fujii Masao さんは書きました:\n>>>> On 2021/09/28 16:13, bt21tanigaway wrote:\n>>>>> Hi,\n>>>>> \n>>>>> (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet implemented in\n>>>>> tab-complete. I made a patch for these options.\n>>>> \n>>>> Thanks for the patch!\n>>>> The patch seems to forget to handle the tab-completion for \"LOCK \n>>>> ONLY\n>>>> <table-name> IN\".\n>>> \n>>> Thanks for your comment!\n>>> I attach a new patch fixed to this mail.\n>>> \n>>> Regards,\n>>> \n>>> Koyu Tanigawa\n>> \n>> Sorry, I forgot to attach patch file.\n>> \"fix-tab-complete2.patch\" is fixed!\n>> \n>> Regards,\n>> \n>> Koyu Tanigawa\n> Thank you for your patch.\n> I have two comments.\n> \n> 1. When I executed git apply, an error occured.\n> ---\n> $ git apply ~/Downloads/fix-tab-complete2.patch\n> /home/penguin/Downloads/fix-tab-complete2.patch:14: indent with spaces.\n> COMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_tables, \" UNION\n> SELECT 'TABLE'\" \" UNION SELECT 'ONLY'\");\n> warning: 1 line adds whitespace errors.\n> ---\n> \nThank you for your feedback.\nI might have added whitespace when I was checking the patch file.\nI attach a new patch to this mail.\n\n> 2. The command \"LOCK TABLE a, b;\" can be executed, but tab-completion\n> doesn't work properly. Is it OK?\nIt's OK for now.\nBut it should be able to handle a case of multiple tables in the future.\n\nRegards,\n\nKoyu Tanigawa",
"msg_date": "Wed, 29 Sep 2021 13:54:58 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re=3A_=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_?=\n =?UTF-8?Q?and_=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": ">Thank you for your feedback.\r\n>I might have added whitespace when I was checking the patch file.\r\n>I attach a new patch to this mail.\r\nThank you for the update!\r\n\r\n> \telse if (Matches(\"LOCK\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\r\n>-\t\t\t Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\"))\r\n>+\t\t\t Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\r\n>+\t\t\t Matches(\"LOCK\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\r\n>+\t\t\t Matches(\"LOCK\", \"TABLE\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\"))\r\nI think this code is redundant, so I change following.\r\n---\r\n\telse if (HeadMatches(\"LOCK\") && TailMatches(\"IN\", \"ACCESS|ROW\"))\r\n---\r\nI created the patch, and attached it. Do you think?\r\n\r\n>> 2. The command \"LOCK TABLE a, b;\" can be executed, but tab-completion\r\n>> doesn't work properly. Is it OK?\r\n>It's OK for now.\r\n>But it should be able to handle a case of multiple tables in the future.\r\nOK. I agreed.\r\n\r\nRegards,\r\nShinya Kato",
"msg_date": "Thu, 30 Sep 2021 03:18:29 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UkU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIOKAnE5P?=\n =?utf-8?B?V0FJVOKAnSBhcmUgbm90IHlldCBpbXBsZW1lbnRlZA==?="
},
{
"msg_contents": ">> \telse if (Matches(\"LOCK\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>> -\t\t\t Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\"))\n>> +\t\t\t Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>> +\t\t\t Matches(\"LOCK\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>> +\t\t\t Matches(\"LOCK\", \"TABLE\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\"))\n> I think this code is redundant, so I change following.\n> ---\n> \telse if (HeadMatches(\"LOCK\") && TailMatches(\"IN\", \"ACCESS|ROW\"))\n> ---\n> I created the patch, and attached it. Do you think?\nThank you for update!\nI think that your code is more concise than mine.\nThere seems to be no problem.\n\nRegards,\nKoyu Tanigawa\n\n\n\n",
"msg_date": "Mon, 04 Oct 2021 11:17:19 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?Q?Re=3A_=28LOCK_TABLE_options=29_=E2=80=9CONLY=E2=80=9D_?=\n =?UTF-8?Q?and_=E2=80=9CNOWAIT=E2=80=9D_are_not_yet_implemented?="
},
{
"msg_contents": "On 2021/10/04 11:17, bt21tanigaway wrote:\n>>> else if (Matches(\"LOCK\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>>> - Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\"))\n>>> + Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>>> + Matches(\"LOCK\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\n>>> + Matches(\"LOCK\", \"TABLE\", \"ONLY\", MatchAny, \"IN\", \"ACCESS|ROW\"))\n>> I think this code is redundant, so I change following.\n>> ---\n>> else if (HeadMatches(\"LOCK\") && TailMatches(\"IN\", \"ACCESS|ROW\"))\n>> ---\n>> I created the patch, and attached it. Do you think?\n> Thank you for update!\n> I think that your code is more concise than mine.\n> There seems to be no problem.\n\nThe patch looks good to me, too. I applied cosmetic changes to it.\nAttached is the updated version of the patch. Barring any objection,\nI will commit it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 4 Oct 2021 13:59:19 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIA==?=\n =?UTF-8?Q?=e2=80=9cNOWAIT=e2=80=9d_are_not_yet_implemented?="
},
{
"msg_contents": ">-----Original Message-----\r\n>From: Fujii Masao <masao.fujii@oss.nttdata.com>\r\n>Sent: Monday, October 4, 2021 1:59 PM\r\n>To: bt21tanigaway <bt21tanigaway@oss.nttdata.com>; RDH 加藤 慎也/Kato,\r\n>Shinya (NTT DATA) <Shinya11.Kato@jp.nttdata.com>\r\n>Cc: pgsql-hackers@lists.postgresql.org\r\n>Subject: Re: (LOCK TABLE options) “ONLY” and “NOWAIT” are not yet\r\n>implemented\r\n>\r\n>\r\n>\r\n>On 2021/10/04 11:17, bt21tanigaway wrote:\r\n>>>> else if (Matches(\"LOCK\", MatchAny, \"IN\", \"ACCESS|ROW\") ||\r\n>>>> - Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\",\r\n>>>> \"ACCESS|ROW\"))\r\n>>>> + Matches(\"LOCK\", \"TABLE\", MatchAny, \"IN\",\r\n>\"ACCESS|ROW\")\r\n>>>> +||\r\n>>>> + Matches(\"LOCK\", \"ONLY\", MatchAny, \"IN\",\r\n>\"ACCESS|ROW\")\r\n>>>> +||\r\n>>>> + Matches(\"LOCK\", \"TABLE\", \"ONLY\", MatchAny, \"IN\",\r\n>>>> +\"ACCESS|ROW\"))\r\n>>> I think this code is redundant, so I change following.\r\n>>> ---\r\n>>> else if (HeadMatches(\"LOCK\") && TailMatches(\"IN\", \"ACCESS|ROW\"))\r\n>>> ---\r\n>>> I created the patch, and attached it. Do you think?\r\n>> Thank you for update!\r\n>> I think that your code is more concise than mine.\r\n>> There seems to be no problem.\r\n>\r\n>The patch looks good to me, too. I applied cosmetic changes to it.\r\n>Attached is the updated version of the patch. Barring any objection, I will commit\r\n>it.\r\nThank you for the patch!\r\nIt looks good to me.\r\n\r\nRegards,\r\nShinya Kato\r\n\r\n\r\n",
"msg_date": "Mon, 4 Oct 2021 05:28:15 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": false,
"msg_subject": "\n =?utf-8?B?UkU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIOKAnE5P?=\n =?utf-8?B?V0FJVOKAnSBhcmUgbm90IHlldCBpbXBsZW1lbnRlZA==?="
},
{
"msg_contents": "\n\nOn 2021/10/04 14:28, Shinya11.Kato@nttdata.com wrote:\n>> The patch looks good to me, too. I applied cosmetic changes to it.\n>> Attached is the updated version of the patch. Barring any objection, I will commit\n>> it.\n> Thank you for the patch!\n> It looks good to me.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:15:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?UmU6IChMT0NLIFRBQkxFIG9wdGlvbnMpIOKAnE9OTFnigJ0gYW5kIA==?=\n =?UTF-8?Q?=e2=80=9cNOWAIT=e2=80=9d_are_not_yet_implemented?="
}
] |
[
{
"msg_contents": "Reindexdb help has this for selection of what to reindex:\n\n -s, --system reindex system catalogs\n -S, --schema=SCHEMA reindex specific schema(s) only\n -t, --table=TABLE reindex specific table(s) only\n\nIs there a reason the \"only\" is missing from the -s option? AFAIK that's\nwhat it means, so the attached patch should be correct?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/",
"msg_date": "Tue, 28 Sep 2021 16:15:22 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "reindexdb usage message about system catalogs"
},
{
"msg_contents": "On Tue, Sep 28, 2021 at 04:15:22PM +0200, Magnus Hagander wrote:\n> Is there a reason the \"only\" is missing from the -s option? AFAIK that's\n> what it means, so the attached patch should be correct?\n\nI cannot think of a reason. This seems historically inherited from\npg_dump, and the option got added when the tool was moved from\ncontrib/ to src/bin/ as of 85e9a5a.\n--\nMichael",
"msg_date": "Wed, 29 Sep 2021 12:09:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: reindexdb usage message about system catalogs"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 5:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Sep 28, 2021 at 04:15:22PM +0200, Magnus Hagander wrote:\n> > Is there a reason the \"only\" is missing from the -s option? AFAIK that's\n> > what it means, so the attached patch should be correct?\n>\n> I cannot think of a reason. This seems historically inherited from\n> pg_dump, and the option got added when the tool was moved from\n> contrib/ to src/bin/ as of 85e9a5a.\n>\n\nThanks for the double check! Seems I forgot about this one, but I've\nbackpatched and pushed it now.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Sep 29, 2021 at 5:10 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Sep 28, 2021 at 04:15:22PM +0200, Magnus Hagander wrote:\n> Is there a reason the \"only\" is missing from the -s option? AFAIK that's\n> what it means, so the attached patch should be correct?\n\nI cannot think of a reason. This seems historically inherited from\npg_dump, and the option got added when the tool was moved from\ncontrib/ to src/bin/ as of 85e9a5a.Thanks for the double check! Seems I forgot about this one, but I've backpatched and pushed it now. -- Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/",
"msg_date": "Wed, 27 Oct 2021 16:30:33 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: reindexdb usage message about system catalogs"
}
] |
[
{
"msg_contents": "A coworker has a space in a Postgres password and noticed .pgpass\ndidn't work; escaping it fixed the issue. That requirement wasn't\ndocumented (despite other escaping requirements being documented), so\nI've attached a patch to add that comment.\n\nThanks,\nJames Coleman",
"msg_date": "Tue, 28 Sep 2021 10:17:15 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Document spaces in .pgpass need to be escaped"
},
{
"msg_contents": "James Coleman <jtc331@gmail.com> writes:\n> A coworker has a space in a Postgres password and noticed .pgpass\n> didn't work; escaping it fixed the issue. That requirement wasn't\n> documented (despite other escaping requirements being documented), so\n> I've attached a patch to add that comment.\n\nI looked at passwordFromFile() and I don't see any indication that\nit treats spaces specially. Nor does a quick test here confirm\nthis report. So I'm pretty certain that this proposed doc change\nis wrong. Perhaps there's some other issue to investigate, though?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 12:13:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Document spaces in .pgpass need to be escaped"
},
{
    "msg_contents": "On Wed, Sep 29, 2021 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> James Coleman <jtc331@gmail.com> writes:\n> > A coworker has a space in a Postgres password and noticed .pgpass\n> > didn't work; escaping it fixed the issue. That requirement wasn't\n> > documented (despite other escaping requirements being documented), so\n> > I've attached a patch to add that comment.\n>\n> I looked at passwordFromFile() and I don't see any indication that\n> it treats spaces specially. Nor does a quick test here confirm\n> this report. So I'm pretty certain that this proposed doc change\n> is wrong. Perhaps there's some other issue to investigate, though?\n>\n> regards, tom lane\n\nThanks for taking a look.\n\nI'm honestly not sure what happened here. I couldn't reproduce again\neither, and on another box with this coworker we could reproduce, but\nthen realized the pgpass entry was missing a field. I imagine it must\nhave been similar on the original box we observed the error on, but\nboth of our memories were of just adding the escape characters...\n\nI'll mark the CF entry as withdrawn.\n\nThanks,\nJames Coleman\n\n\n",
"msg_date": "Thu, 30 Sep 2021 13:37:44 -0400",
"msg_from": "James Coleman <jtc331@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document spaces in .pgpass need to be escaped"
}
] |
[
{
"msg_contents": "I noticed that some test scripts, instead of using wait_for_catchup\nto wait for replication catchup, use ad-hoc code like\n\nmy $primary_lsn =\n $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');\n$standby->poll_query_until('postgres',\n\tqq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})\n or die \"standby never caught up\";\n\nThis does not look much like what wait_for_catchup does, which\ntypically ends up issuing queries like\n\nSELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming'\nFROM pg_catalog.pg_stat_replication WHERE application_name = 'standby';\n\nIt seems to me that for most purposes wait_for_catchup's approach is\nstrictly worse, for two reasons:\n\n1. It continually recomputes the primary's pg_current_wal_lsn().\nThus, if anything is happening on the primary (e.g. autovacuum),\nwe're moving the goalposts as to how much the standby is required\nto replay before we continue. That slows down the tests, makes\nthem less reproducible, and could help to mask actual bugs of the\nform this-hasn't-been-done-when-it-should-have.\n\n2. It's querying the primary's view of the standby's state, which\nintroduces a reporting delay. This has exactly the same three\nproblems as the other point.\n\nSo I think we ought to change wait_for_catchup to do things more\nlike the first way, where feasible. This seems pretty easy for\nthe call sites that provide the standby's PostgresNode; but it's\nnot so easy for the ones that just provide a subscription name.\n\nSo my first question is: to what extent are these tests specifically\nintended to exercise the replay reporting mechanisms? Would we lose\nimportant coverage if we changed *all* the call sites to use the\nquery-the-standby approach? If so, which ones might be okay to\nchange?\n\nThe second question is what do we want the API to look like? This\nis simple if we can change all wait_for_catchup callers to pass\nthe standby node. 
Otherwise, the choices are to split wait_for_catchup\ninto two implementations, or to keep its current API and have it\ndo significantly different things depending on isa(\"PostgresNode\").\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 28 Sep 2021 18:17:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Some thoughts about the TAP tests' wait_for_catchup()"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 3:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I noticed that some test scripts, instead of using wait_for_catchup\n> to wait for replication catchup, use ad-hoc code like\n>\n> my $primary_lsn =\n> $primary->safe_psql('postgres', 'select pg_current_wal_lsn()');\n> $standby->poll_query_until('postgres',\n> qq{SELECT '$primary_lsn'::pg_lsn <= pg_last_wal_replay_lsn()})\n> or die \"standby never caught up\";\n>\n> This does not look much like what wait_for_catchup does, which\n> typically ends up issuing queries like\n>\n> SELECT pg_current_wal_lsn() <= replay_lsn AND state = 'streaming'\n> FROM pg_catalog.pg_stat_replication WHERE application_name = 'standby';\n>\n> It seems to me that for most purposes wait_for_catchup's approach is\n> strictly worse, for two reasons:\n>\n> 1. It continually recomputes the primary's pg_current_wal_lsn().\n> Thus, if anything is happening on the primary (e.g. autovacuum),\n> we're moving the goalposts as to how much the standby is required\n> to replay before we continue. That slows down the tests, makes\n> them less reproducible, and could help to mask actual bugs of the\n> form this-hasn't-been-done-when-it-should-have.\n>\n> 2. It's querying the primary's view of the standby's state, which\n> introduces a reporting delay. This has exactly the same three\n> problems as the other point.\n>\n\nI can't comment on all the use cases of wait_for_catchup() but I think\nthere are some use cases in logical replication where we need the\npublisher to use wait_for_catchup after setting up the replication to\nensure that wal sender is started and in-proper state by checking its\nstate (which should be 'streaming'). That also implicitly checks if\nthe wal receiver has responded to initial ping requests by sending\nreplay location. The typical use is as below where after setting up\ninitial replication we wait for a publisher to catch up and then check\nif the initial table sync is finished. 
There is no use in checking the\nsecond till the first statement is completed.\n\nsubscription/t/001_rep_changes.pl\n...\n$node_publisher->wait_for_catchup('tap_sub');\n\n# Also wait for initial table sync to finish\nmy $synced_query =\n \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('r', 's');\";\n$node_subscriber->poll_query_until('postgres', $synced_query)\n or die \"Timed out while waiting for subscriber to synchronize data\";\n\nI am not sure in such tests if checking solely the subscriber's\nwal_replay_lsn would be sufficient. So, I think there are cases\nespecially in physical replication tests where we can avoid using\nwait_for_catchup but I am not sure if we can completely eliminate it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Sep 2021 17:22:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts about the TAP tests' wait_for_catchup()"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Wed, Sep 29, 2021 at 3:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> It seems to me that for most purposes wait_for_catchup's approach is\n>> strictly worse, for two reasons:\n>> 1. It continually recomputes the primary's pg_current_wal_lsn().\n>> 2. It's querying the primary's view of the standby's state, which\n>> introduces a reporting delay.\n\n> I can't comment on all the use cases of wait_for_catchup() but I think\n> there are some use cases in logical replication where we need the\n> publisher to use wait_for_catchup after setting up the replication to\n> ensure that wal sender is started and in-proper state by checking its\n> state (which should be 'streaming'). That also implicitly checks if\n> the wal receiver has responded to initial ping requests by sending\n> replay location.\n\nYeah, for logical replication we can't look at the subscriber's WAL\npositions because they could be totally different. What I'm on\nabout is the tests that use physical replication. I think examining\nthe standby's state directly is better in that case, for the reasons\nI cited.\n\nI guess the question of interest is whether it's sufficient to test\nthe walreceiver feedback mechanisms in the context of logical\nreplication, or whether we feel that the physical-replication code\npath is enough different that there should be a specific test for\nthat combination too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 11:59:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Some thoughts about the TAP tests' wait_for_catchup()"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 9:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Wed, Sep 29, 2021 at 3:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> It seems to me that for most purposes wait_for_catchup's approach is\n> >> strictly worse, for two reasons:\n> >> 1. It continually recomputes the primary's pg_current_wal_lsn().\n> >> 2. It's querying the primary's view of the standby's state, which\n> >> introduces a reporting delay.\n>\n> > I can't comment on all the use cases of wait_for_catchup() but I think\n> > there are some use cases in logical replication where we need the\n> > publisher to use wait_for_catchup after setting up the replication to\n> > ensure that wal sender is started and in-proper state by checking its\n> > state (which should be 'streaming'). That also implicitly checks if\n> > the wal receiver has responded to initial ping requests by sending\n> > replay location.\n>\n> Yeah, for logical replication we can't look at the subscriber's WAL\n> positions because they could be totally different. What I'm on\n> about is the tests that use physical replication. I think examining\n> the standby's state directly is better in that case, for the reasons\n> I cited.\n>\n> I guess the question of interest is whether it's sufficient to test\n> the walreceiver feedback mechanisms in the context of logical\n> replication, or whether we feel that the physical-replication code\n> path is enough different that there should be a specific test for\n> that combination too.\n>\n\nThere is a difference in the handling of feedback messages for\nphysical and logical replication code paths. It is mainly about how we\nadvance slot's lsn based on wal flushed. See\nProcessStandbyReplyMessage, towards end, we call different functions\nbased on slot_type. So, I think it is better to have a test for the\nphysical replication feedback mechanism.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:44:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some thoughts about the TAP tests' wait_for_catchup()"
}
] |
[
{
"msg_contents": "Hi,\n\nI created a patch for COMMENT tab completion.\nIt was missing TRANSFORM FOR where it's supposed to be.\n\nBest wishes,\nKen Kato",
"msg_date": "Wed, 29 Sep 2021 09:46:18 +0900",
"msg_from": "katouknl <katouknl@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
    "msg_contents": "Hello,\nI reviewed your patch. At a first glance, I have the below comments.\n\n 1. The below change crosses the 80-character limit in a line. Please\n maintain the same.\n - \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"ROLE\");\n + \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"TRANSFORM FOR\",\n \"ROLE\");\n 2. There are trailing whitespaces after\n COMPLETE_WITH_QUERY(Query_for_list_of_languages);.\n Remove these extra whitespaces.\n surajkhamkar@localhost:tab_comment$ git apply\n fix_tab_complete_comment.patch\n fix_tab_complete_comment.patch:38: trailing whitespace.\n COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n warning: 1 line adds whitespace errors.\n\n\nRegards,\nSuraj Khamkar\n\nOn Wed, Sep 29, 2021 at 2:04 PM katouknl <katouknl@oss.nttdata.com> wrote:\n\n> Hi,\n>\n> I created a patch for COMMENT tab completion.\n> It was missing TRANSFORM FOR where it's supposed to be.\n>\n> Best wishes,\n> Ken Kato",
"msg_date": "Wed, 6 Oct 2021 22:27:32 +0530",
"msg_from": "Suraj Khamkar <khamkarsuraj.b@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "Hi,\n\nThank you for the review.\n\nI wasn't quite sure where to start counting the characters,\nbut I used pgindent to properly format it, so hopefully everything is \nokay.\nAlso, I sorted them in alphabetical order just to make it look prettier.\n> \t* The below change crosses the 80-character limit in a line. Please\n> maintain the same.\n> - \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"ROLE\");\n> + \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"TRANSFORM FOR\",\n> \"ROLE\");\n\nI made sure there is no whitespaces this time.\n> \t* There are trailing whitespaces after\n> COMPLETE_WITH_QUERY(Query_for_list_of_languages);.\n> Remove these extra whitespaces.\n> surajkhamkar@localhost:tab_comment$ git apply\n> fix_tab_complete_comment.patch\n> fix_tab_complete_comment.patch:38: trailing whitespace.\n> COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n> warning: 1 line adds whitespace errors.\n\nBest wishes,\nKen Kato",
"msg_date": "Thu, 07 Oct 2021 17:14:43 +0900",
"msg_from": "katouknl <katouknl@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On 2021-10-07 17:14, katouknl wrote:\n> Hi,\n> \n> Thank you for the review.\n> \n> I wasn't quite sure where to start counting the characters,\n> but I used pgindent to properly format it, so hopefully everything is \n> okay.\n> Also, I sorted them in alphabetical order just to make it look \n> prettier.\n>> \t* The below change crosses the 80-character limit in a line. Please\n>> maintain the same.\n>> - \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"ROLE\");\n>> + \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"TRANSFORM FOR\",\n>> \"ROLE\");\n> \n> I made sure there is no whitespaces this time.\n>> \t* There are trailing whitespaces after\n>> COMPLETE_WITH_QUERY(Query_for_list_of_languages);.\n>> Remove these extra whitespaces.\n>> surajkhamkar@localhost:tab_comment$ git apply\n>> fix_tab_complete_comment.patch\n>> fix_tab_complete_comment.patch:38: trailing whitespace.\n>> COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n>> warning: 1 line adds whitespace errors.\nThank you for the patch!\nIt is very good, but it seems to me that there are some tab-completion \nmissing in COMMENT command.\nFor example,\n- CONSTRAINT ... ON DOMAIN\n- OPERATOR CLASS\n- OPERATOR FAMILY\n- POLICY ... ON\n- [PROCEDURAL]\n- RULE ... ON\n- TRIGGER ... ON\n\nI think these tab-comletion also can be improved and it's a good timing \nfor that.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 07 Oct 2021 18:59:38 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "Hello,\nThanks for the revised patch.\n\nIt is very good, but it seems to me that there are some tab-completion\n> missing in COMMENT command.\n\n\nThanks Shinya, for having a look. I was also about to say that it would be\ngood\nif we take care of tab-completion for other options as well in this patch\nitself.\nI would like to ask @katouknl <katouknl@oss.nttdata.com> if it's ok to do\nso.\n\nAnd regarding the revised patch, arranging options in alphabetical order\nseems\ngood to me. Though, there is one line where it crosses 80 characters in a\nline.\n+ COMPLETE_WITH(\"ACCESS METHOD\", \"AGGREGATE\", \"CAST\", \"COLLATION\", \"COLUMN\",\n\nApart from this I don't have any major comment.\n\n\nRegards,\nSuraj Khamkar\n\nOn Thu, Oct 7, 2021 at 3:29 PM Shinya Kato <Shinya11.Kato@oss.nttdata.com>\nwrote:\n\n> On 2021-10-07 17:14, katouknl wrote:\n> > Hi,\n> >\n> > Thank you for the review.\n> >\n> > I wasn't quite sure where to start counting the characters,\n> > but I used pgindent to properly format it, so hopefully everything is\n> > okay.\n> > Also, I sorted them in alphabetical order just to make it look\n> > prettier.\n> >> * The below change crosses the 80-character limit in a line. 
Please\n> >> maintain the same.\n> >> - \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"ROLE\");\n> >> + \"LARGE OBJECT\", \"TABLESPACE\", \"TEXT SEARCH\", \"TRANSFORM FOR\",\n> >> \"ROLE\");\n> >\n> > I made sure there is no whitespaces this time.\n> >> * There are trailing whitespaces after\n> >> COMPLETE_WITH_QUERY(Query_for_list_of_languages);.\n> >> Remove these extra whitespaces.\n> >> surajkhamkar@localhost:tab_comment$ git apply\n> >> fix_tab_complete_comment.patch\n> >> fix_tab_complete_comment.patch:38: trailing whitespace.\n> >> COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n> >> warning: 1 line adds whitespace errors.\n> Thank you for the patch!\n> It is very good, but it seems to me that there are some tab-completion\n> missing in COMMENT command.\n> For example,\n> - CONSTRAINT ... ON DOMAIN\n> - OPERATOR CLASS\n> - OPERATOR FAMILY\n> - POLICY ... ON\n> - [PROCEDURAL]\n> - RULE ... ON\n> - TRIGGER ... ON\n>\n> I think these tab-comletion also can be improved and it's a good timing\n> for that.\n>\n> --\n> Regards,\n>\n> --\n> Shinya Kato\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>",
"msg_date": "Thu, 7 Oct 2021 18:23:38 +0530",
"msg_from": "Suraj Khamkar <khamkarsuraj.b@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "> It is very good, but it seems to me that there are some tab-completion\n> missing in COMMENT command.\n> For example,\n> - CONSTRAINT ... ON DOMAIN\n> - OPERATOR CLASS\n> - OPERATOR FAMILY\n> - POLICY ... ON\n> - [PROCEDURAL]\n> - RULE ... ON\n> - TRIGGER ... ON\n> \n> I think these tab-comletion also can be improved and it's a good\n> timing for that.\n\nThank you for the comments!\n\nI fixed where you pointed out.\n\nBest wishes,\nKen Kato",
"msg_date": "Thu, 14 Oct 2021 14:30:19 +0900",
"msg_from": "katouknl <katouknl@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On 2021-10-14 14:30, katouknl wrote:\n>> It is very good, but it seems to me that there are some tab-completion\n>> missing in COMMENT command.\n>> For example,\n>> - CONSTRAINT ... ON DOMAIN\n>> - OPERATOR CLASS\n>> - OPERATOR FAMILY\n>> - POLICY ... ON\n>> - [PROCEDURAL]\n>> - RULE ... ON\n>> - TRIGGER ... ON\n>> \n>> I think these tab-comletion also can be improved and it's a good\n>> timing for that.\n> \n> Thank you for the comments!\n> \n> I fixed where you pointed out.\nThank you for the update!\nI tried \"COMMENT ON OPERATOR ...\", and an operator seemed to be \ncomplemented with double quotation marks.\nHowever, it caused the COMMENT command to fail.\n---\npostgres=# COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\nERROR: syntax error at or near \"(\"\nLINE 1: COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\npostgres=# COMMENT ON OPERATOR + (integer, integer) IS 'test_success';\nCOMMENT\n---\n\nSo, I think as with \\do command, you do not need to complete the \noperators.\nDo you think?\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 15 Oct 2021 13:29:12 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "2021-10-15 13:29 に Shinya Kato さんは書きました:\n> On 2021-10-14 14:30, katouknl wrote:\n>>> It is very good, but it seems to me that there are some \n>>> tab-completion\n>>> missing in COMMENT command.\n>>> For example,\n>>> - CONSTRAINT ... ON DOMAIN\n>>> - OPERATOR CLASS\n>>> - OPERATOR FAMILY\n>>> - POLICY ... ON\n>>> - [PROCEDURAL]\n>>> - RULE ... ON\n>>> - TRIGGER ... ON\n>>> \n>>> I think these tab-comletion also can be improved and it's a good\n>>> timing for that.\n>> \n>> Thank you for the comments!\n>> \n>> I fixed where you pointed out.\n> Thank you for the update!\n> I tried \"COMMENT ON OPERATOR ...\", and an operator seemed to be\n> complemented with double quotation marks.\n> However, it caused the COMMENT command to fail.\n> ---\n> postgres=# COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\n> ERROR: syntax error at or near \"(\"\n> LINE 1: COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\n> postgres=# COMMENT ON OPERATOR + (integer, integer) IS 'test_success';\n> COMMENT\n> ---\n> \n> So, I think as with \\do command, you do not need to complete the \n> operators.\n> Do you think?\nThank you for the further comments!\n\nI fixed so that it doesn't complete the operators anymore.\nIt only completes with CLASS and FAMILY.\n\nAlso, I updated TEXT SEARCH.\nIt completes object names for each one of CONFIGURATION, DICTIONARY, \nPARSER, and TEMPLATE.\n\n-- \nBest wishes,\n\nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 15 Oct 2021 17:49:43 +0900",
"msg_from": "Ken Kato <katouknl@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On 2021-10-15 17:49, Ken Kato wrote:\n> 2021-10-15 13:29 に Shinya Kato さんは書きました:\n>> On 2021-10-14 14:30, katouknl wrote:\n>>>> It is very good, but it seems to me that there are some \n>>>> tab-completion\n>>>> missing in COMMENT command.\n>>>> For example,\n>>>> - CONSTRAINT ... ON DOMAIN\n>>>> - OPERATOR CLASS\n>>>> - OPERATOR FAMILY\n>>>> - POLICY ... ON\n>>>> - [PROCEDURAL]\n>>>> - RULE ... ON\n>>>> - TRIGGER ... ON\n>>>> \n>>>> I think these tab-comletion also can be improved and it's a good\n>>>> timing for that.\n>>> \n>>> Thank you for the comments!\n>>> \n>>> I fixed where you pointed out.\n>> Thank you for the update!\n>> I tried \"COMMENT ON OPERATOR ...\", and an operator seemed to be\n>> complemented with double quotation marks.\n>> However, it caused the COMMENT command to fail.\n>> ---\n>> postgres=# COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\n>> ERROR: syntax error at or near \"(\"\n>> LINE 1: COMMENT ON OPERATOR \"+\" (integer, integer) IS 'test_fail';\n>> postgres=# COMMENT ON OPERATOR + (integer, integer) IS 'test_success';\n>> COMMENT\n>> ---\n>> \n>> So, I think as with \\do command, you do not need to complete the \n>> operators.\n>> Do you think?\n> Thank you for the further comments!\n> \n> I fixed so that it doesn't complete the operators anymore.\n> It only completes with CLASS and FAMILY.\n> \n> Also, I updated TEXT SEARCH.\n> It completes object names for each one of CONFIGURATION, DICTIONARY,\n> PARSER, and TEMPLATE.\n\nThank you for update!\nThe patch looks good to me. I applied cosmetic changes to it.\nAttached is the updated version of the patch.\n\nBarring any objection, I will change status to Ready for Committer.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 26 Oct 2021 17:04:24 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 05:04:24PM +0900, Shinya Kato wrote:\n> Barring any objection, I will change status to Ready for Committer.\n\n+ else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\"))\n+ COMPLETE_WITH(\"LANGUAGE\");\n+ else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\", \"LANGUAGE\"))\n+ COMPLETE_WITH_QUERY(Query_for_list_of_languages);\nI don't think that there is much point in being this picky either with\nthe usage of PROCEDURAL, as we already complete a similar and simpler\ngrammar with LANGUAGE. I would just remove this part of the patch.\n\n+ else if (Matches(\"COMMENT\", \"ON\", \"OPERATOR\"))\n+ COMPLETE_WITH(\"CLASS\", \"FAMILY\");\nIsn't this one wrong? Operators can have comments, and we'd miss\nthem. This is mentioned upthread, but it seems to me that we'd better\ndrop this part of the patch if the operator naming part cannot be\nsolved easily.\n--\nMichael",
"msg_date": "Wed, 27 Oct 2021 14:45:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On 2021-10-27 14:45, Michael Paquier wrote:\n> On Tue, Oct 26, 2021 at 05:04:24PM +0900, Shinya Kato wrote:\n>> Barring any objection, I will change status to Ready for Committer.\n> \n> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\"))\n> + COMPLETE_WITH(\"LANGUAGE\");\n> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\", \"LANGUAGE\"))\n> + COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n> I don't think that there is much point in being this picky either with\n> the usage of PROCEDURAL, as we already complete a similar and simpler\n> grammar with LANGUAGE. I would just remove this part of the patch.\nIn my opinion, it is written in the documentation, so tab-completion of \n\"PROCEDURAL\"is good.\nHow about a completion with \"LANGUAGE\" and \"PROCEDURAL LANGUAGE\", like \n\"PASSWORD\" and \"ENCRYPTED PASSWORD\" in CREATE ROLE?\n\n> + else if (Matches(\"COMMENT\", \"ON\", \"OPERATOR\"))\n> + COMPLETE_WITH(\"CLASS\", \"FAMILY\");\n> Isn't this one wrong? Operators can have comments, and we'd miss\n> them. This is mentioned upthread, but it seems to me that we'd better\n> drop this part of the patch if the operator naming part cannot be\n> solved easily.\nAs you said, it may be misleading.\nI agree to drop it.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 27 Oct 2021 15:54:03 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 03:54:03PM +0900, Shinya Kato wrote:\n> In my opinion, it is written in the documentation, so tab-completion of\n> \"PROCEDURAL\"is good.\n\nIt does not mean that we need to add everything either, especially\nhere as there is a shorter option.\n\n> How about a completion with \"LANGUAGE\" and \"PROCEDURAL LANGUAGE\", like\n> \"PASSWORD\" and \"ENCRYPTED PASSWORD\" in CREATE ROLE?\n\nThis has been around for some time already, so I'd just leave those\nparts be.\n--\nMichael",
"msg_date": "Wed, 27 Oct 2021 16:59:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "\n\nOn 2021/10/27 15:54, Shinya Kato wrote:\n> On 2021-10-27 14:45, Michael Paquier wrote:\n>> On Tue, Oct 26, 2021 at 05:04:24PM +0900, Shinya Kato wrote:\n>>> Barring any objection, I will change status to Ready for Committer.\n>>\n>> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\"))\n>> + COMPLETE_WITH(\"LANGUAGE\");\n>> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\", \"LANGUAGE\"))\n>> + COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n>> I don't think that there is much point in being this picky either with\n>> the usage of PROCEDURAL, as we already complete a similar and simpler\n>> grammar with LANGUAGE. I would just remove this part of the patch.\n> In my opinion, it is written in the documentation, so tab-completion of \"PROCEDURAL\"is good.\n> How about a completion with \"LANGUAGE\" and \"PROCEDURAL LANGUAGE\", like \"PASSWORD\" and \"ENCRYPTED PASSWORD\" in CREATE ROLE?\n> \n>> + else if (Matches(\"COMMENT\", \"ON\", \"OPERATOR\"))\n>> + COMPLETE_WITH(\"CLASS\", \"FAMILY\");\n>> Isn't this one wrong? Operators can have comments, and we'd miss\n>> them. This is mentioned upthread, but it seems to me that we'd better\n>> drop this part of the patch if the operator naming part cannot be\n>> solved easily.\n> As you said, it may be misleading.\n> I agree to drop it.\n\nSo I changed the status of the patch to Waiting on Author in CF.\n\n\n+static const SchemaQuery Query_for_list_of_text_search_configurations = {\n\nWe already have Query_for_list_of_ts_configurations in tab-complete.c.\nDo we really need both queries? Or we can drop either of them?\n\n\n+#define Query_for_list_of_operator_class_index_methods \\\n+\"SELECT pg_catalog.quote_ident(amname)\"\\\n+\" FROM pg_catalog.pg_am\"\\\n+\" WHERE (%d = pg_catalog.length('%s'))\"\\\n+\" AND oid IN \"\\\n+\" (SELECT opcmethod FROM pg_catalog.pg_opclass \"\\\n+\" WHERE pg_catalog.quote_ident(opcname)='%s')\"\n\nIsn't it overkill to tab-complete this? I thought that because\nI'm not sure if COMMENT command for OPERATOR CLASS or FAMILY is\nusually executed via psql interactively, instead I just guess\nit's executed via script. Also because there is no tab-completion\nsupport for ALTER OPERATOR CLASS or FAMILY command. It's a bit\nstrange to support the tab-complete for COMMENT for OPERATOR CLASS\nor FAMILY first.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 2 Nov 2021 00:26:10 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": ">>> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\"))\n>>> + COMPLETE_WITH(\"LANGUAGE\");\n>>> + else if (Matches(\"COMMENT\", \"ON\", \"PROCEDURAL\", \"LANGUAGE\"))\n>>> + COMPLETE_WITH_QUERY(Query_for_list_of_languages);\n>>> I don't think that there is much point in being this picky either \n>>> with\n>>> the usage of PROCEDURAL, as we already complete a similar and simpler\n>>> grammar with LANGUAGE. I would just remove this part of the patch.\n>> In my opinion, it is written in the documentation, so tab-completion \n>> of \"PROCEDURAL\"is good.\n>> How about a completion with \"LANGUAGE\" and \"PROCEDURAL LANGUAGE\", like \n>> \"PASSWORD\" and \"ENCRYPTED PASSWORD\" in CREATE ROLE?\n\nI kept LANGUAGE and PROCEDURAL LANGUAGE just like PASSWORD and ENCRYPTED \nPASSWORD.\n\n\n>>> + else if (Matches(\"COMMENT\", \"ON\", \"OPERATOR\"))\n>>> + COMPLETE_WITH(\"CLASS\", \"FAMILY\");\n>>> Isn't this one wrong? Operators can have comments, and we'd miss\n>>> them. This is mentioned upthread, but it seems to me that we'd \n>>> better\n>>> drop this part of the patch if the operator naming part cannot be\n>>> solved easily.\n>> As you said, it may be misleading.\n>> I agree to drop it.\n\nHearing all the opinions given, I decided not to support OPERATOR CLASS \nor FAMILY in COMMENT.\nTherefore, I dropped Query_for_list_of_operator_class_index_methods as \nwell.\n\n\n> +static const SchemaQuery Query_for_list_of_text_search_configurations \n> = {\n> \n> We already have Query_for_list_of_ts_configurations in tab-complete.c.\n> Do we really need both queries? Or we can drop either of them?\n\nThank you for pointing out!\nI didn't notice that there already exists \nQuery_for_list_of_ts_configurations,\nso I changed TEXT SEARCH completion with using Query_for_list_of_ts_XXX.\n\nI made the changes to the points above and updated the patch.\n\n-- \nBest wishes,\n\nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Thu, 04 Nov 2021 19:18:03 +0900",
"msg_from": "Ken Kato <katouknl@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "Hi,\n\nI found unnecessary line deletion in my previous patch, so I made a \nminor update for that.\n\n-- \nBest wishes,\n\nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 05 Nov 2021 12:31:42 +0900",
"msg_from": "Ken Kato <katouknl@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On Fri, Nov 05, 2021 at 12:31:42PM +0900, Ken Kato wrote:\n> I found unnecessary line deletion in my previous patch, so I made a minor\n> update for that.\n\nI have looked at this version, and this is much simpler than what was\nproposed upthread. This looks good, so applied after fixing a couple\nof indentation issues in the list of objects after COMMENT ON.\n--\nMichael",
"msg_date": "Fri, 5 Nov 2021 15:30:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "> On 5 Nov 2021, at 07:30, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Nov 05, 2021 at 12:31:42PM +0900, Ken Kato wrote:\n>> I found unnecessary line deletion in my previous patch, so I made a minor\n>> update for that.\n> \n> I have looked at this version, and this is much simpler than what was\n> proposed upthread. This looks good, so applied after fixing a couple\n> of indentation issues in the list of objects after COMMENT ON.\n\nIs there anything left on this or can we close it in the commitfest?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 5 Nov 2021 10:39:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "On Fri, Nov 05, 2021 at 10:39:36AM +0100, Daniel Gustafsson wrote:\n> Is there anything left on this or can we close it in the commitfest?\n\nOops. I have missed that there was a CF entry for this patch, and\nmissed that Fujii-san was registered as a committer for it. My\napologies!\n--\nMichael",
"msg_date": "Fri, 5 Nov 2021 21:35:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
},
{
"msg_contents": "\n\nOn 2021/11/05 21:35, Michael Paquier wrote:\n> On Fri, Nov 05, 2021 at 10:39:36AM +0100, Daniel Gustafsson wrote:\n>> Is there anything left on this or can we close it in the commitfest?\n> \n> Oops. I have missed that there was a CF entry for this patch, and\n> missed that Fujii-san was registered as a committer for it. My\n> apologies!\n\nNo problem. Thanks for committing the patch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 5 Nov 2021 21:36:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Added TRANSFORM FOR for COMMENT tab completion"
}
]
[
{
"msg_contents": "Fix WAL replay in presence of an incomplete record\n\nPhysical replication always ships WAL segment files to replicas once\nthey are complete. This is a problem if one WAL record is split across\na segment boundary and the primary server crashes before writing down\nthe segment with the next portion of the WAL record: WAL writing after\ncrash recovery would happily resume at the point where the broken record\nstarted, overwriting that record ... but any standby or backup may have\nalready received a copy of that segment, and they are not rewinding.\nThis causes standbys to stop following the primary after the latter\ncrashes:\n LOG: invalid contrecord length 7262 at A8/D9FFFBC8\nbecause the standby is still trying to read the continuation record\n(contrecord) for the original long WAL record, but it is not there and\nit will never be. A workaround is to stop the replica, delete the WAL\nfile, and restart it -- at which point a fresh copy is brought over from\nthe primary. But that's pretty labor intensive, and I bet many users\nwould just give up and re-clone the standby instead.\n\nA fix for this problem was already attempted in commit 515e3d84a0b5, but\nit only addressed the case for the scenario of WAL archiving, so\nstreaming replication would still be a problem (as well as other things\nsuch as taking a filesystem-level backup while the server is down after\nhaving crashed), and it had performance scalability problems too; so it\nhad to be reverted.\n\nThis commit fixes the problem using an approach suggested by Andres\nFreund, whereby the initial portion(s) of the split-up WAL record are\nkept, and a special type of WAL record is written where the contrecord\nwas lost, so that WAL replay in the replica knows to skip the broken\nparts. With this approach, we can continue to stream/archive segment\nfiles as soon as they are complete, and replay of the broken records\nwill proceed across the crash point without a hitch.\n\nBecause a new type of WAL record is added, users should be careful to\nupgrade standbys first, primaries later. Otherwise they risk the standby\nbeing unable to start if the primary happens to write such a record.\n\nA new TAP test that exercises this is added, but the portability of it\nis yet to be seen.\n\nThis has been wrong since the introduction of physical replication, so\nbackpatch all the way back. In stable branches, keep the new\nXLogReaderState members at the end of the struct, to avoid an ABI\nbreak.\n\nAuthor: Álvaro Herrera <alvherre@alvh.no-ip.org>\nReviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nReviewed-by: Nathan Bossart <bossartn@amazon.com>\nDiscussion: https://postgr.es/m/202108232252.dh7uxf6oxwcy@alvherre.pgsql\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ff9f111bce24fd9bbca7a20315586de877d74923\n\nModified Files\n--------------\nsrc/backend/access/rmgrdesc/xlogdesc.c | 12 ++\nsrc/backend/access/transam/xlog.c | 154 +++++++++++++++++-\nsrc/backend/access/transam/xlogreader.c | 40 ++++-\nsrc/include/access/xlog_internal.h | 11 +-\nsrc/include/access/xlogreader.h | 10 ++\nsrc/include/catalog/pg_control.h | 2 +\nsrc/test/recovery/t/026_overwrite_contrecord.pl | 207 ++++++++++++++++++++++++\nsrc/test/recovery/t/idiosyncratic_copy | 20 +++\nsrc/tools/pgindent/typedefs.list | 1 +\n9 files changed, 450 insertions(+), 7 deletions(-)",
"msg_date": "Wed, 29 Sep 2021 14:40:29 +0000",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Fix WAL replay in presence of an incomplete record"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Wed, Sep 29, 2021 at 02:40:29PM +0000, Alvaro Herrera wrote:\n> Fix WAL replay in presence of an incomplete record\n> [...]\n> src/test/recovery/t/026_overwrite_contrecord.pl | 207 ++++++++++++++++++++++++\n> src/test/recovery/t/idiosyncratic_copy | 20 +++\n\nThe buildfarm is saying that this test fails on Windows:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-09-29%2020%3A00%3A01\nSep 29 17:27:23 t/026_overwrite_contrecord..........FAILED--Further testing stopped: command \"pg_basebackup -D...\n[...]\npg_basebackup: error: connection to server at \"127.0.0.1\", port 55644 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n\n+# Second test: a standby that receives WAL via archive/restore commands.\n+$node = PostgresNode->new('primary2');\n+$node->init(\n+ has_archiving => 1,\n+ extra => ['--wal-segsize=1']);\n\nThe error is here, where you need to set has_streaming => 1 to set up\nprimary2 correctly on Windows (see 992d353).\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 30 Sep 2021 08:50:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix WAL replay in presence of an incomplete record"
},
{
"msg_contents": "On 2021-Sep-30, Michael Paquier wrote:\n\n> Hi Alvaro,\n> \n> On Wed, Sep 29, 2021 at 02:40:29PM +0000, Alvaro Herrera wrote:\n> > Fix WAL replay in presence of an incomplete record\n> > [...]\n> > src/test/recovery/t/026_overwrite_contrecord.pl | 207 ++++++++++++++++++++++++\n> > src/test/recovery/t/idiosyncratic_copy | 20 +++\n> \n> The builfarm is saying that this test fails on Windows:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-09-29%2020%3A00%3A01\n> Sep 29 17:27:23 t/026_overwrite_contrecord..........FAILED--Further testing stopped: command \"pg_basebackup -D...\n> [...]\n> pg_basebackup: error: connection to server at \"127.0.0.1\", port 55644 failed: FATAL: no pg_hba.conf entry for replication connection from host \"127.0.0.1\", user \"pgrunner\", no encryption\n\nThanks. We had already discussed this in the other thread and I opted\nto call ->set_replication_conf instead:\nhttps://www.postgresql.org/message-id/202109292127.7q66qhxhde67%40alvherre.pgsql\n\nAccording to Andres, there's still going to be a failure for other\nreasons, but let's see what happens.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:07:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix WAL replay in presence of an incomplete record"
},
{
"msg_contents": "[ I'm working on the release notes ]\n\nAlvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Fix WAL replay in presence of an incomplete record\n> ...\n> Because a new type of WAL record is added, users should be careful to\n> upgrade standbys first, primaries later. Otherwise they risk the standby\n> being unable to start if the primary happens to write such a record.\n\nIs there really any point in issuing such advice? IIUC, the standbys\nwould be unable to proceed anyway in case of a primary crash at the\nwrong time, because an un-updated primary would send them inconsistent\nWAL. If anything, it seems like it might be marginally better to\nupdate the primary first, reducing the window for it to send WAL that\nthe standbys will *never* be able to handle. Then, if it crashes, at\nleast the WAL contains something the standbys can process once you\nupdate them.\n\nOr am I missing something?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 04 Nov 2021 20:13:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Fix WAL replay in presence of an incomplete record"
},
{
"msg_contents": "On 2021-Nov-04, Tom Lane wrote:\n\n> Is there really any point in issuing such advice? IIUC, the standbys\n> would be unable to proceed anyway in case of a primary crash at the\n> wrong time, because an un-updated primary would send them inconsistent\n> WAL. If anything, it seems like it might be marginally better to\n> update the primary first, reducing the window for it to send WAL that\n> the standbys will *never* be able to handle. Then, if it crashes, at\n> least the WAL contains something the standbys can process once you\n> update them.\n\nYes -- in production settings, it is better to be able to shut down the\nstandbys in a scheduled manner, than find out after updating the primary\nthat your standbys are suddenly inaccessible until you take the further\naction of updating them.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nSi no sabes adonde vas, es muy probable que acabes en otra parte.\n\n\n",
"msg_date": "Fri, 5 Nov 2021 09:06:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix WAL replay in presence of an incomplete record"
},
{
"msg_contents": "On 2021-Nov-05, Alvaro Herrera wrote:\n\n> On 2021-Nov-04, Tom Lane wrote:\n> \n> > the standbys\n> > would be unable to proceed anyway in case of a primary crash at the\n> > wrong time, because an un-updated primary would send them inconsistent\n> > WAL. If anything, it seems like it might be marginally better to\n> > update the primary first, reducing the window for it to send WAL that\n> > the standbys will *never* be able to handle. Then, if it crashes, at\n> > least the WAL contains something the standbys can process once you\n> > update them.\n\nI suppose the strategy is useless if the primary never crashes. If the\nsituation does occur, users can handle it the same way they've handled\nit thus far: manually delete the segment from the standby and restart.\nAt least they know what to do and may even have already automated it.\nThe other situation is new and would need somebody, possibly taken\nabruptly from their sleep, to try to understand why their standbys\nrefuse to proceed replication in a novel way.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Porque Kim no hacía nada, pero, eso sí,\ncon extraordinario éxito\" (\"Kim\", Kipling)\n\n\n",
"msg_date": "Fri, 5 Nov 2021 09:28:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Fix WAL replay in presence of an incomplete record"
}
]
[
{
"msg_contents": "System catalog indexes do not support deduplication as a matter of\npolicy. I chose to do things that way during the Postgres 13\ndevelopment cycle due to the restriction on using storage parameters\nwith system catalog indexes. At the time I felt that *forcing* the use\nof deduplication with system catalog indexes might expose users to\nproblems. But this is something that seems worth revisiting now. (I\nhaven't actually investigated what it would take to make system\ncatalogs support the 'deduplicate_items' parameter, but that may not\nmatter now.)\n\nI would like to enable deduplication within system catalog indexes for\nPostgres 15. Leaving it disabled forever seems kind of arbitrary at\nbest. In general enabling deduplication (or not disabling it) has only\na fixed, small downside in the worst case. It has a huge upside in\nfavorable cases. Deduplication is part of our high level strategy for\navoiding nbtree index bloat from version churn (non-HOT updates with\nseveral indexes that are never \"logically modified\"). It effectively\ncooperates with and enhances the new enhancements to index deletion in\nPostgres 14. Plus these recent index deletion enhancements more or\nless eliminated a theoretical downside of deduplication: now it\ndoesn't really matter that posting list tuples only have a single\nLP_DEAD bit (if it ever did). This is because we can now do granular\nposting list TID deletion, provided the deletion process visits the\nsame heap block in passing.\n\nI can find no evidence that even one single user found it useful to\ndisable deduplication while using Postgres 13 in production (by\nsearching for \"deduplicate_items\" on Google). While I myself said that\nthere might be a regression of up to 2% of throughput back in early\n2020, that was under highly unrealistic conditions, that could never\napply to system catalogs -- I was being conservative. Most system\ncatalog indexes are unique indexes, where there is no possible\noverhead from deduplication unless we already know for sure that the\nindex is subject to some kind of version churn (and so have high\nconfidence that deduplication will be at least somewhat effective at\nbuying time for VACUUM). The non-unique system catalog indexes seem\npretty likely to benefit from deduplication in the usual obvious way\n(not so much because of versioning and bloat). The two pg_depend\nnon-unique indexes tend to have a fair number of duplicates.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 29 Sep 2021 11:27:28 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Enabling deduplication with system catalog indexes"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 11:27 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I would like to enable deduplication within system catalog indexes for\n> Postgres 15.\n\nI decided to run a simple experiment, to give us some idea of what\nbenefits my proposal gives users: I ran \"make installcheck\" on a newly\ninitdb'd database (master branch), and then with the attached patch\n(which enables deduplication with system catalog indexes) applied.\n\nI ran a query that shows the 20 largest system catalog indexes in each\ncase. I'm interested in when and where we see improvements to space\nutilization. Any reduction in index size must be a result of index\ndeduplication (excluding any noise-level changes).\n\nMaster branch:\n\nregression=# SELECT\n pg_size_pretty(pg_relation_size(c.oid)) as sz,\n c.relname\nFROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid\nJOIN pg_am am ON op.opcmethod = am.oid\nJOIN pg_class c ON i.indexrelid = c.oid\nJOIN pg_namespace n ON c.relnamespace = n.oid\nWHERE am.amname = 'btree' AND n.nspname = 'pg_catalog'\nAND c.relkind = 'i' AND i.indisready AND i.indisvalid\nORDER BY pg_relation_size(c.oid) DESC LIMIT 20;\n sz | relname\n---------+-----------------------------------\n 1088 kB | pg_attribute_relid_attnam_index\n 928 kB | pg_depend_depender_index\n 800 kB | pg_attribute_relid_attnum_index\n 736 kB | pg_depend_reference_index\n 352 kB | pg_proc_proname_args_nsp_index\n 216 kB | pg_description_o_c_o_index\n 200 kB | pg_class_relname_nsp_index\n 184 kB | pg_type_oid_index\n 176 kB | pg_class_tblspc_relfilenode_index\n 160 kB | pg_type_typname_nsp_index\n 104 kB | pg_proc_oid_index\n 64 kB | pg_class_oid_index\n 64 kB | pg_statistic_relid_att_inh_index\n 56 kB | pg_collation_name_enc_nsp_index\n 48 kB | pg_constraint_conname_nsp_index\n 48 kB | pg_amop_fam_strat_index\n 48 kB | pg_amop_opr_fam_index\n 48 kB | pg_largeobject_loid_pn_index\n 48 kB | pg_operator_oprname_l_r_n_index\n 48 kB | pg_index_indexrelid_index\n(20 rows)\n\nPatch:\n\n sz | relname\n---------+-----------------------------------\n 1048 kB | pg_attribute_relid_attnam_index\n 888 kB | pg_depend_depender_index\n 752 kB | pg_attribute_relid_attnum_index\n 616 kB | pg_depend_reference_index\n 352 kB | pg_proc_proname_args_nsp_index\n 216 kB | pg_description_o_c_o_index\n 192 kB | pg_class_relname_nsp_index\n 184 kB | pg_type_oid_index\n 152 kB | pg_type_typname_nsp_index\n 144 kB | pg_class_tblspc_relfilenode_index\n 104 kB | pg_proc_oid_index\n 72 kB | pg_class_oid_index\n 56 kB | pg_collation_name_enc_nsp_index\n 56 kB | pg_statistic_relid_att_inh_index\n 48 kB | pg_index_indexrelid_index\n 48 kB | pg_amop_fam_strat_index\n 48 kB | pg_amop_opr_fam_index\n 48 kB | pg_largeobject_loid_pn_index\n 48 kB | pg_operator_oprname_l_r_n_index\n 40 kB | pg_index_indrelid_index\n(20 rows)\n\nThe improvements to space utilization for the larger indexes\n(especially the two pg_depend non-unique indexes) is smaller than I\nremember from last time around, back in early 2020. This is probably\ndue to a combination of the Postgres 14 work and the pg_depend PIN\noptimization work from commit a49d0812.\n\nThe single biggest difference is the decrease in the size of\npg_depend_reference_index -- it goes from 736 kB to 616 kB. Another\nnotable difference is that pg_class_tblspc_relfilenode_index shrinks,\ngoing from 176 kB to 144 kB. These are not huge differences, but they\nstill seem worth having.\n\nThe best argument in favor of my proposal is definitely the index\nbloat argument, which this test case tells us little or nothing about.\nI'm especially concerned about scenarios where logical replication is\nused, or where index deletion and VACUUM are inherently unable to\nremove older index tuple versions for some other reason.\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 29 Sep 2021 15:32:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling deduplication with system catalog indexes"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 3:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I decided to run a simple experiment, to give us some idea of what\n> benefits my proposal gives users: I ran \"make installcheck\" on a newly\n> initdb'd database (master branch), and then with the attached patch\n> (which enables deduplication with system catalog indexes) applied.\n\nI will commit this patch in a few days, barring objections.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:41:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling deduplication with system catalog indexes"
},
{
"msg_contents": "On 9/30/21, 3:44 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\r\n> On Wed, Sep 29, 2021 at 3:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\r\n>> I decided to run a simple experiment, to give us some idea of what\r\n>> benefits my proposal gives users: I ran \"make installcheck\" on a newly\r\n>> initdb'd database (master branch), and then with the attached patch\r\n>> (which enables deduplication with system catalog indexes) applied.\r\n>\r\n> I will commit this patch in a few days, barring objections.\r\n\r\n+1\r\n\r\nNathan\r\n\r\n",
"msg_date": "Fri, 1 Oct 2021 21:35:52 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Enabling deduplication with system catalog indexes"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 2:35 PM Bossart, Nathan <bossartn@amazon.com> wrote:\n> On 9/30/21, 3:44 PM, \"Peter Geoghegan\" <pg@bowt.ie> wrote:\n> > I will commit this patch in a few days, barring objections.\n>\n> +1\n\nOkay, pushed.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 2 Oct 2021 17:14:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Enabling deduplication with system catalog indexes"
}
]
[
{
"msg_contents": "Hi,\n\nI found a crash (segmentation fault) on jsonb.\nThis is the best I could do to reduce the query:\n\n\"\"\"\nselect \n 75 as c1\nfrom \n public.pagg_tab_ml as ref_0,\n lateral (select \n ref_0.a as c5 \n from generate_series(1, 300) as sample_0\n fetch first 78 rows only\n ) as subq_0\nwhere case when (subq_0.c5 < 2) \n then cast(null as jsonb) \n\t else cast(null as jsonb) \n end ? ref_0.c\n\"\"\"\n\nAnd because it needs pagg_tab_ml it should be run a regression database.\nThis affects at least 14 and 15.\n\nAttached is the backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL",
"msg_date": "Wed, 29 Sep 2021 13:55:44 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "jsonb crash"
},
{
"msg_contents": "Em qua., 29 de set. de 2021 às 15:55, Jaime Casanova <\njcasanov@systemguards.com.ec> escreveu:\n\n> Hi,\n>\n> I found a crash (segmentation fault) on jsonb.\n> This is the best I could do to reduce the query:\n>\n> \"\"\"\n> select\n> 75 as c1\n> from\n> public.pagg_tab_ml as ref_0,\n> lateral (select\n> ref_0.a as c5\n> from generate_series(1, 300) as sample_0\n> fetch first 78 rows only\n> ) as subq_0\n> where case when (subq_0.c5 < 2)\n> then cast(null as jsonb)\n> else cast(null as jsonb)\n> end ? ref_0.c\n> \"\"\"\n>\n> And because it needs pagg_tab_ml it should be run a regression database.\n> This affects at least 14 and 15.\n>\n> Attached is the backtrace.\n>\nYeah, Coverity has a report about this at function:\n\nJsonbValue *\npushJsonbValue(JsonbParseState **pstate, JsonbIteratorToken seq,\n JsonbValue *jbval)\n\n1. CID undefined: Dereference after null check (FORWARD_NULL)\nreturn pushJsonbValueScalar(pstate, seq, jbval);\n\n2. CID undefined (#1 of 1): Dereference after null check (FORWARD_NULL)16.\nvar_deref_model:\nPassing pstate to pushJsonbValueScalar, which dereferences null *pstate\n\nres = pushJsonbValueScalar(pstate, tok,\n tok <\nWJB_BEGIN_ARRAY ||\n (tok ==\nWJB_BEGIN_ARRAY &&\n v.\nval.array.rawScalar) ? &v : NULL);\n\nregards,\nRanier Vilela\n\nEm qua., 29 de set. de 2021 às 15:55, Jaime Casanova <jcasanov@systemguards.com.ec> escreveu:Hi,\n\nI found a crash (segmentation fault) on jsonb.\nThis is the best I could do to reduce the query:\n\n\"\"\"\nselect \n 75 as c1\nfrom \n public.pagg_tab_ml as ref_0,\n lateral (select \n ref_0.a as c5 \n from generate_series(1, 300) as sample_0\n fetch first 78 rows only\n ) as subq_0\nwhere case when (subq_0.c5 < 2) \n then cast(null as jsonb) \n else cast(null as jsonb) \n end ? ref_0.c\n\"\"\"\n\nAnd because it needs pagg_tab_ml it should be run a regression database.\nThis affects at least 14 and 15.\n\nAttached is the backtrace.Yeah, Coverity has a report about this at function: \nJsonbValue *\npushJsonbValue(JsonbParseState **pstate, JsonbIteratorToken seq,\n JsonbValue *jbval)\n1. \nCID undefined: Dereference after null check (FORWARD_NULL) \nreturn pushJsonbValueScalar(pstate, seq, jbval); 2. \nCID undefined (#1 of 1): Dereference after null check (FORWARD_NULL)16. var_deref_model: Passing pstate to pushJsonbValueScalar, which dereferences null *pstate \nres = pushJsonbValueScalar(pstate, tok,\n tok < WJB_BEGIN_ARRAY ||\n (tok == WJB_BEGIN_ARRAY &&\n v.val.array.rawScalar) ? &v : NULL);\nregards,Ranier Vilela",
"msg_date": "Wed, 29 Sep 2021 16:16:44 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n> I found a crash (segmentation fault) on jsonb.\n> This is the best I could do to reduce the query:\n\n> \"\"\"\n> select \n> 75 as c1\n> from \n> public.pagg_tab_ml as ref_0,\n> lateral (select \n> ref_0.a as c5 \n> from generate_series(1, 300) as sample_0\n> fetch first 78 rows only\n> ) as subq_0\n> where case when (subq_0.c5 < 2) \n> then cast(null as jsonb) \n> \t else cast(null as jsonb) \n> end ? ref_0.c\n> \"\"\"\n\nI think this must be a memoize bug. AFAICS, nowhere in this query\ncan we be processing a non-null JSONB value, so what are we doing\nin jsonb_hash? Something down-stack must have lost the information\nthat the Datum is actually null.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:00:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "I wrote:\n> I think this must be a memoize bug. AFAICS, nowhere in this query\n> can we be processing a non-null JSONB value, so what are we doing\n> in jsonb_hash? Something down-stack must have lost the information\n> that the Datum is actually null.\n\nAfter further inspection, \"what are we doing in jsonb_hash?\" is\nindeed a relevant question, but it seems like it's a type mismatch\nnot a nullness issue. EXPLAIN VERBOSE shows\n\n -> Memoize (cost=0.01..1.96 rows=1 width=4)\n Output: subq_0.c5\n Cache Key: ref_0.c, ref_0.a\n -> Subquery Scan on subq_0 (cost=0.00..1.95 rows=1 width=4)\n Output: subq_0.c5\n Filter: (CASE WHEN (subq_0.c5 < 2) THEN NULL::jsonb ELSE NULL::jsonb END ? ref_0.c)\n -> Limit (cost=0.00..0.78 rows=78 width=4)\n Output: (ref_0.a)\n -> Function Scan on pg_catalog.generate_series sample_0 (cost=0.00..3.00 rows=300 width=4)\n Output: ref_0.a\n Function Call: generate_series(1, 300)\n\nso unless the \"Cache Key\" output is a complete lie, the cache key\ntypes we should be concerned with are text and integer. The Datum\nthat's being passed to jsonb_hash looks suspiciously like it is a\ntext value '0000', too, which matches the \"c\" value from the first\nrow of pagg_tab_ml. I now think some part of Memoize is looking in\ncompletely the wrong place to discover the cache key datatypes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:24:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "\nOn 9/29/21 4:00 PM, Tom Lane wrote:\n> Jaime Casanova <jcasanov@systemguards.com.ec> writes:\n>> I found a crash (segmentation fault) on jsonb.\n>> This is the best I could do to reduce the query:\n>> \"\"\"\n>> select \n>> 75 as c1\n>> from \n>> public.pagg_tab_ml as ref_0,\n>> lateral (select \n>> ref_0.a as c5 \n>> from generate_series(1, 300) as sample_0\n>> fetch first 78 rows only\n>> ) as subq_0\n>> where case when (subq_0.c5 < 2) \n>> then cast(null as jsonb) \n>> \t else cast(null as jsonb) \n>> end ? ref_0.c\n>> \"\"\"\n> I think this must be a memoize bug. AFAICS, nowhere in this query\n> can we be processing a non-null JSONB value, so what are we doing\n> in jsonb_hash? Something down-stack must have lost the information\n> that the Datum is actually null.\n\n\nYeah, confirmed that this is not failing in release 13.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 29 Sep 2021 16:30:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 09:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> After further inspection, \"what are we doing in jsonb_hash?\" is\n> indeed a relevant question, but it seems like it's a type mismatch\n> not a nullness issue. EXPLAIN VERBOSE shows\n\nI think you're right here. It should be hashing text. That seems to\nbe going wrong in check_memoizable() because it assumes it's always\nfine to use the left side's type of the OpExpr to figure out the hash\nfunction to use.\n\nMaybe we can cache the left and the right type's hash function and use\nthe correct one in paraminfo_get_equal_hashops().\n\nDavid\n\n\n",
"msg_date": "Thu, 30 Sep 2021 09:48:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 30 Sept 2021 at 09:24, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> After further inspection, \"what are we doing in jsonb_hash?\" is\n>> indeed a relevant question, but it seems like it's a type mismatch\n>> not a nullness issue. EXPLAIN VERBOSE shows\n\n> I think you're right here. It should be hashing text. That seems to\n> be going wrong in check_memoizable() because it assumes it's always\n> fine to use the left side's type of the OpExpr to figure out the hash\n> function to use.\n\n> Maybe we can cache the left and the right type's hash function and use\n> the correct one in paraminfo_get_equal_hashops().\n\nUm ... it seems to have correctly identified the cache key expressions,\nso why isn't it just doing exprType on those? The jsonb_exists operator\nseems entirely irrelevant here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 17:09:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 10:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Maybe we can cache the left and the right type's hash function and use\n> > the correct one in paraminfo_get_equal_hashops().\n>\n> Um ... it seems to have correctly identified the cache key expressions,\n> so why isn't it just doing exprType on those? The jsonb_exists operator\n> seems entirely irrelevant here.\n\nThis is down to the caching stuff I added to RestrictInfo to minimise\nthe amount of work done during the join search. I cached the hash\nequal function in RestrictInfo so I didn't have to check what that was\neach time we consider a join. The problem is, that I did a bad job of\ntaking inspiration from check_hashjoinable() which just looks at the\nleft type.\n\nDavid\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:17:19 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 30 Sept 2021 at 10:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Um ... it seems to have correctly identified the cache key expressions,\n>> so why isn't it just doing exprType on those? The jsonb_exists operator\n>> seems entirely irrelevant here.\n\n> This is down to the caching stuff I added to RestrictInfo to minimise\n> the amount of work done during the join search. I cached the hash\n> equal function in RestrictInfo so I didn't have to check what that was\n> each time we consider a join. The problem is, that I did a bad job of\n> taking inspiration from check_hashjoinable() which just looks at the\n> left type.\n\nI'm still confused. AFAICS, the top-level operator of the qual clause has\nexactly nada to do with the cache keys, as this example makes plain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 17:20:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 10:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Thu, 30 Sept 2021 at 10:09, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Um ... it seems to have correctly identified the cache key expressions,\n> >> so why isn't it just doing exprType on those? The jsonb_exists operator\n> >> seems entirely irrelevant here.\n>\n> > This is down to the caching stuff I added to RestrictInfo to minimise\n> > the amount of work done during the join search. I cached the hash\n> > equal function in RestrictInfo so I didn't have to check what that was\n> > each time we consider a join. The problem is, that I did a bad job of\n> > taking inspiration from check_hashjoinable() which just looks at the\n> > left type.\n>\n> I'm still confused. AFAICS, the top-level operator of the qual clause has\n> exactly nada to do with the cache keys, as this example makes plain.\n\nYou're right that it does not. The lateral join condition could be\nanything. We just need to figure out the hash function and which\nequality function so that we can properly find any cached tuples when\nwe're probing the hash table. We need the equal function too as we\ncan't just return any old cache tuples that match the same hash value.\n\nMaybe recording the operator is not the best thing to do. Maybe I\nshould have just recorded the regproc's Oid for the equal function.\nThat would save us from calling get_opcode() in ExecInitMemoize().\n\nDavid\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:37:20 +1300",
"msg_from": "David Rowley <dgrowley@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 09:48, David Rowley <dgrowleyml@gmail.com> wrote:\n> Maybe we can cache the left and the right type's hash function and use\n> the correct one in paraminfo_get_equal_hashops().\n\nHere's a patch of what I had in mind for the fix. It's just hot off\nthe press, so really only intended to assist discussion at this stage.\n\nDavid",
"msg_date": "Thu, 30 Sep 2021 10:43:54 +1300",
"msg_from": "David Rowley <dgrowley@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "David Rowley <dgrowley@gmail.com> writes:\n> On Thu, 30 Sept 2021 at 10:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm still confused. AFAICS, the top-level operator of the qual clause has\n>> exactly nada to do with the cache keys, as this example makes plain.\n\n> You're right that it does not. The lateral join condition could be\n> anything.\n\nActually, the more I look at this the more unhappy I get, because\nit's becoming clear that you have made unfounded semantic\nassumptions. The hash functions generally only promise that they\nwill distinguish values that are distinguishable by the associated\nequality operator. We have plenty of data types in which that does\nnot map to bitwise equality ... you need not look further than\nfloat8 for an example. And in turn, that means that there are lots\nof functions/operators that *can* distinguish hash-equal values.\nThe fact that you're willing to treat this example as cacheable\nmeans that memoize will fail on such clauses.\n\nSo I'm now thinking you weren't that far wrong to be looking at\nhashability of the top-level qual operator. What is missing is\nthat you have to restrict candidate cache keys to be the *direct*\narguments of such an operator. Looking any further down in the\nexpression introduces untenable assumptions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 17:54:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "I wrote:\n> So I'm now thinking you weren't that far wrong to be looking at\n> hashability of the top-level qual operator. What is missing is\n> that you have to restrict candidate cache keys to be the *direct*\n> arguments of such an operator. Looking any further down in the\n> expression introduces untenable assumptions.\n\nHmm ... I think that actually, a correct statement of the semantic\nrestriction is\n\n To be eligible for memoization, the inside of a join can use the\n passed-in parameters *only* as direct arguments of hashable equality\n operators.\n\nIn order to exploit RestrictInfo-based caching, you could make the\nfurther restriction that all such equality operators appear at the\ntop level of RestrictInfo clauses. But that's not semantically\nnecessary.\n\nAs an example, assuming p1 and p2 are the path parameters,\n\n\t(p1 = x) xor (p2 = y)\n\nis semantically safe to memoize, despite the upper-level xor\noperator. But the example we started with, with a parameter\nused as an argument of jsonb_exists, is not safe to memoize\nbecause we have no grounds to suppose that two hash-equal values\nwill act the same in jsonb_exists.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 18:20:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 11:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > So I'm now thinking you weren't that far wrong to be looking at\n> > hashability of the top-level qual operator. What is missing is\n> > that you have to restrict candidate cache keys to be the *direct*\n> > arguments of such an operator. Looking any further down in the\n> > expression introduces untenable assumptions.\n>\n> Hmm ... I think that actually, a correct statement of the semantic\n> restriction is\n>\n> To be eligible for memoization, the inside of a join can use the\n> passed-in parameters *only* as direct arguments of hashable equality\n> operators.\n>\n> In order to exploit RestrictInfo-based caching, you could make the\n> further restriction that all such equality operators appear at the\n> top level of RestrictInfo clauses. But that's not semantically\n> necessary.\n>\n> As an example, assuming p1 and p2 are the path parameters,\n>\n> (p1 = x) xor (p2 = y)\n>\n> is semantically safe to memoize, despite the upper-level xor\n> operator. But the example we started with, with a parameter\n> used as an argument of jsonb_exists, is not safe to memoize\n> because we have no grounds to suppose that two hash-equal values\n> will act the same in jsonb_exists.\n\nI'm not really sure if I follow your comment about the top-level qual\noperator. I'm not really sure why that has anything to do with it.\nRemember that we *never* do any hashing of any values from the inner\nside of the join. If we're doing a parameterized nested loop and say\nour parameter has the value of 1, the first time through we don't find\nany cached tuples, so we run the plan from the inner side of the\nnested loop join and cache all the tuples that we get from it. When\nthe parameter changes, we check if the current value of the parameter\nhas any tuples cached. This is what the hashing and equality\ncomparison does. If the new parameter value is 2, then we'll hash that\nand probe the hash table. Since we've only seen value 1 so far, we\nwon't get a cache hit. If at some later point in time we see the\nparameter value of 1 again, we hash that, find something in the hash\nbucket for that value then do an equality test to ensure the values\nare actually the same and not just the same hash bucket or hash value.\n\nAt no point do we do any hashing on the actual cached tuples.\n\nThis allows us to memoize any join expression, not just equality\nexpressions. e.g if the SQL is: SELECT * FROM t1 INNER JOIN t2 on t1.a\n> t2.a; assuming t2 is on the inner side of the nested loop join,\nthen the only thing we hash is the t1.a parameter and the only thing\nwe do an equality comparison on is the current value of t1.a vs some\nprevious value of t1.a that is stored in the hash table. You can see\nhere that if t1.a and t2.a are not the same data type then that's of\nno relevance as we *never* hash or do any equality comparisons on t2.a\nin the memoize code.\n\nThe whole thing just hangs together by the assumption that parameters\nwith the same value will always yield the same tuples. If that's\nsomehow a wrong assumption, then we have a problem.\n\nI'm not sure if this helps explain how it's meant to work, or if I\njust misunderstood you.\n\nDavid\n\n\n",
"msg_date": "Thu, 30 Sep 2021 11:45:13 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
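The probe/miss cycle David describes — hash only the outer parameter, probe, equality-check on a hit, run the inner plan on a miss — can be sketched in Python. This is purely illustrative (the class and `inner_plan` callable are hypothetical stand-ins, not PostgreSQL's actual nodeMemoize.c structures):

```python
# Illustrative sketch of Memoize's probe/miss cycle: only the outer
# parameter is ever hashed or compared; inner-side tuples are just stored.
class Memoize:
    def __init__(self, inner_plan):
        self.inner_plan = inner_plan  # callable: param -> list of tuples
        self.cache = {}               # param value -> cached inner tuples
        self.hits = 0
        self.misses = 0

    def fetch(self, param):
        # Hash-and-probe: the dict lookup hashes `param` and then applies
        # an equality check, mirroring the hash table probe described above.
        if param in self.cache:
            self.hits += 1
            return self.cache[param]
        self.misses += 1
        rows = self.inner_plan(param)  # run the parameterized inner side
        self.cache[param] = rows
        return rows

# A toy inner plan for a join condition like "t1.a > t2.a": note the join
# operator itself never participates in the hashing.
inner = lambda p: [x for x in range(5) if p > x]
m = Memoize(inner)
assert m.fetch(3) == [0, 1, 2]   # miss: runs the inner plan
assert m.fetch(3) == [0, 1, 2]   # hit: same parameter value seen before
assert (m.hits, m.misses) == (1, 1)
```

The sketch also shows why the scheme depends on the assumption in the message above: a cache hit silently reuses rows computed for an earlier, hash-equal parameter value.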
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 30 Sept 2021 at 11:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... I think that actually, a correct statement of the semantic\n>> restriction is\n>> To be eligible for memoization, the inside of a join can use the\n>> passed-in parameters *only* as direct arguments of hashable equality\n>> operators.\n\n> I'm not really sure if I follow your comment about the top-level qual\n> operator. I'm not really sure why that has anything to do with it.\n> Remember that we *never* do any hashing of any values from the inner\n> side of the join. If we're doing a parameterized nested loop and say\n> our parameter has the value of 1, the first time through we don't find\n> any cached tuples, so we run the plan from the inner side of the\n> nested loop join and cache all the tuples that we get from it. When\n> the parameter changes, we check if the current value of the parameter\n> has any tuples cached.\n\nRight, and the point is that if you *do* get a hit, you are assuming\nthat the inner side would return the same values as it returned for\nthe previous hash-equal value. You are doing yourself no good by\nthinking about simple cases like integers. Think about float8,\nand ask yourself whether, if you cached a result for +0, that result\nis still good for -0. In general we can only assume that for applications\nof the hash equality operator itself (or members of its hash opfamily).\nAnything involving a cast to text, for example, would fail on such a case.\n\n> This allows us to memoize any join expression, not just equality\n> expressions.\n\nI am clearly failing to get through to you. Do I need to build\nan example?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Sep 2021 18:54:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 11:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > This allows us to memoize any join expression, not just equality\n> > expressions.\n>\n> I am clearly failing to get through to you. Do I need to build\n> an example?\n\nTaking your -0.0 / +0.0 float example, if I understand correctly, due\nto -0.0 and +0.0 hashing to the same value and being classed as equal,\nwe're really only guaranteed to get the same rows if the join\ncondition uses the float value (in this example) directly.\n\nIf for example there was something like a function we could pass that\nfloat value through that was able to distinguish -0.0 from +0.0, then\nthat could cause issues as the return value of that function could be\nanything and have completely different join partners on the other side\nof the join.\n\nA function something like:\n\ncreate or replace function text_sign(f float) returns text as\n$$\nbegin\n if f::text like '-%' then\n return 'neg';\n else\n return 'pos';\n end if;\nend;\n$$ language plpgsql;\n\nwould be enough to do it, if the join condition was text_sign(t1.f) =\n t2.col and the cache key was t1.f rather than text_sign(t1.f).\n\nOn Thu, 30 Sept 2021 at 10:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I'm now thinking you weren't that far wrong to be looking at\n> hashability of the top-level qual operator. What is missing is\n> that you have to restrict candidate cache keys to be the *direct*\n> arguments of such an operator. Looking any further down in the\n> expression introduces untenable assumptions.\n\nI think I also follow you now with the above. The problem is that if\nthe join operator is able to distinguish something that the equality\noperator and hash function are not then we have the same problem.\nRestricting the join operator to hash equality ensures that the join\ncondition cannot distinguish anything that we cannot distinguish in\nthe cache hash table.\n\nDavid\n\n\n",
"msg_date": "Tue, 5 Oct 2021 20:25:52 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
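The hazard in the `text_sign` example above can be reproduced outside SQL. Python's float equality and hashing treat +0.0 and -0.0 as one value (the property a memoize-style cache relies on), while a textual rendering — the analogue of `text_sign` — still tells them apart. The Python `text_sign` below is an illustrative analogue of the plpgsql function, not PostgreSQL code:

```python
def text_sign(f):
    # Python analogue of the plpgsql text_sign() above: distinguishes
    # values via their textual rendering, not via equality.
    return 'neg' if str(f).startswith('-') else 'pos'

pos, neg = 0.0, -0.0

# Equality and hashing (what a memoize-style cache key uses) see one value...
assert pos == neg
assert hash(pos) == hash(neg)

# ...but a function applied at the join level can still tell them apart,
# so a cache hit on the "wrong" zero would return rows for the wrong sign.
assert text_sign(pos) == 'pos'
assert text_sign(neg) == 'neg'
```

This is exactly the case where a hash-equal parameter value yields different join partners, which is why the cache key must be a direct argument of a hashable equality operator.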
{
"msg_contents": "On Thu, 30 Sept 2021 at 07:55, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> \"\"\"\n> select\n> 75 as c1\n> from\n> public.pagg_tab_ml as ref_0,\n> lateral (select\n> ref_0.a as c5\n> from generate_series(1, 300) as sample_0\n> fetch first 78 rows only\n> ) as subq_0\n> where case when (subq_0.c5 < 2)\n> then cast(null as jsonb)\n> else cast(null as jsonb)\n> end ? ref_0.c\n> \"\"\"\n\nI've attached 2 patches that aim to fix this bug. One for master and\none for pg14. Unfortunately, for pg14, RestrictInfo lacks a field to\nstore the righthand type's hash equality operator. I don't think it's\ngoing to be possible as [1] shows me that there's at least one project\noutside of core that does makeNode(RestrictInfo). The next best thing\nI can think to do for pg14 is just to limit Memoization for\nparameterizations where the RestrictInfo has the same types on the\nleft and right of an OpExpr. For pg14, the above query just does not\nuse Memoize anymore.\n\nIn theory, since this field is just caching the hash equality\noperator, it might be possible to look up the hash equality function\neach time when the left and right types don't match and we need to\nknow the hash equality operator of the right type. That'll probably\nnot be a super common case, but I wouldn't go as far as to say that\nit'll be rare. I'd be a bit worried about the additional planning\ntime if we went that route. The extra lookup would have to be done\nduring the join search, so could happen thousands of times given a bit\nmore than a handful of tables in the join search.\n\nFor master, we can just add a new field to RestrictInfo. The master\nversion of the patch does that.\n\nDoes anyone have any thoughts on the proposed fixes?\n\nDavid\n\n[1] https://codesearch.debian.net/search?q=makeNode%28RestrictInfo%29&literal=1",
"msg_date": "Tue, 26 Oct 2021 19:07:01 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 07:07:01PM +1300, David Rowley wrote:\n> Does anyone have any thoughts on the proposed fixes?\n\nI don't have any thoughts, but I want to be sure it isn't forgotten.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 5 Nov 2021 17:38:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Sat, 6 Nov 2021 at 11:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 07:07:01PM +1300, David Rowley wrote:\n> > Does anyone have any thoughts on the proposed fixes?\n>\n> I don't have any thoughts, but I want to be sure it isn't forgotten.\n\nNot forgotten. I was just hoping to get some feedback.\n\nI've now pushed the fix to restrict v14 to only allow Memoize when the\nleft and right types are the same. For master, since it's possible to\nadd a field to RestrictInfo, I've changed that to cache the left and\nright hash equality operators.\n\nThis does not fix the binary / logical issue mentioned by Tom. I have\nideas about allowing Memoize to operate in a binary equality mode or\nlogical equality mode. I'll need to run in binary mode when there are\nlateral vars or when any join operator is not hashable.\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Nov 2021 14:46:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've now pushed the fix to restrict v14 to only allow Memoize when the\n> left and right types are the same. For master, since it's possible to\n> add a field to RestrictInfo, I've changed that to cache the left and\n> right hash equality operators.\n\nIf you were going to push this fix before 14.1, you should have done it\ndays ago. At this point it's not possible to get a full set of buildfarm\nresults before the wrap. The valgrind and clobber-cache animals, in\nparticular, are unlikely to report back in time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 07 Nov 2021 21:38:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Mon, 8 Nov 2021 at 15:38, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've now pushed the fix to restrict v14 to only allow Memoize when the\n> > left and right types are the same. For master, since it's possible to\n> > add a field to RestrictInfo, I've changed that to cache the left and\n> > right hash equality operators.\n>\n> If you were going to push this fix before 14.1, you should have done it\n> days ago. At this point it's not possible to get a full set of buildfarm\n> results before the wrap. The valgrind and clobber-cache animals, in\n> particular, are unlikely to report back in time.\n\nSorry, I was under the impression that it was ok until you'd done the\nstamp for the release. As far as I can see, that's not done yet.\n\nDo you want me to back out the change I made to 14?\n\nDavid\n\n\n",
"msg_date": "Mon, 8 Nov 2021 15:47:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 30 Sept 2021 at 10:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Actually, the more I look at this the more unhappy I get, because\n> it's becoming clear that you have made unfounded semantic\n> assumptions. The hash functions generally only promise that they\n> will distinguish values that are distinguishable by the associated\n> equality operator. We have plenty of data types in which that does\n> not map to bitwise equality ... you need not look further than\n> float8 for an example.\n\nI think this part might be best solved by allowing Memoize to work in\na binary mode. We already have datum_image_eq() for performing a\nbinary comparison on a Datum. We'll also need to supplement that with\na function that generates a hash value based on the binary value too.\n\nIf we do that and put Memoize in binary mode when join operators are\nnot hashable or when we're doing LATERAL joins, I think it should fix\nthis.\n\nIt might be possible to work a bit harder and allow the logical mode\nfor some LATERAL joins. e.g. something like: SELECT * FROM a, LATERAL\n(SELECT * FROM b WHERE a.a = b.b LIMIT 1) b; could use the logical\nmode (assuming the join operator is hashable), however, we really only\nknow the lateral_vars. We don't really collect their context\ncurrently, or the full Expr that they're contained in. That case gets\nmore complex if the join condition had contained a mix of lateral and\nnon-lateral vars on one side of the qual, e.g WHERE a.a = b.b + a.z.\n\nCertainly if the lateral part of the query was a function call, then\nwe'd be forced into binary mode as we'd have no idea what the function\nis doing with the lateral vars being passed to it.\n\nI've attached my proposed patch.\n\nAn easier way out of this would be to disable Memoize for lateral\njoins completely and only allow it for normal joins when the join\noperators are hashable. I don't want to do this as people are already\nseeing good wins in PG14 with Memoize and lateral joins [1]. I think\nquite a few people would be upset if we removed that ability.\n\nDavid\n\n[1] https://twitter.com/RPorsager/status/1455660236375826436",
"msg_date": "Thu, 11 Nov 2021 18:08:29 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
},
{
"msg_contents": "On Thu, 11 Nov 2021 at 18:08, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 30 Sept 2021 at 10:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Actually, the more I look at this the more unhappy I get, because\n> > it's becoming clear that you have made unfounded semantic\n> > assumptions. The hash functions generally only promise that they\n> > will distinguish values that are distinguishable by the associated\n> > equality operator. We have plenty of data types in which that does\n> > not map to bitwise equality ... you need not look further than\n> > float8 for an example.\n>\n> I think this part might be best solved by allowing Memoize to work in\n> a binary mode. We already have datum_image_eq() for performing a\n> binary comparison on a Datum. We'll also need to supplement that with\n> a function that generates a hash value based on the binary value too.\n\nSince I really don't want to stop Memoize from working with LATERAL\njoined function calls, I see no other way other than to make use of a\nbinary key'd cache for cases where we can't be certain that it's safe\nto make work in non-binary mode.\n\nI've had thoughts about if we should just make it work in binary mode\nall the time, but my thoughts are that that's not exactly a great idea\nsince it could have a negative effect on cache hits due to there being\nthe possibility that for some types, such as case insensitive text,\nthe number of variations in the binary representation might be vast.\nThe reason this could be bad is that the estimated cache hit ratio is\ncalculated by looking at n_distinct, which is obviously not looking\nfor distinctions in the binary representation. So in binary mode, we\nmay get a lower cache hit ratio than we might think we'll get, even\nwith accurate statistics. I'd like to minimise those times by only\nusing binary mode when we can't be certain that logical mode is safe.\n\nThe patch does add new fields to the Memoize plan node type, the\nMemoizeState executor node and also MemoizePath. The new fields do\nfit inside the padding of the existing structs. I've also obviously\nhad to modify the read/write/copy functions for Memoize to add the new\nfield there too. My understanding is that this should be ok since\nthose are only used for parallel query to send plans to the workers\nand to deserialise them on the worker side. There should never be any\nversion mismatches there.\n\nIf anyone wants to chime in about my proposed patch for this, then\nplease do so soon. I'm planning to look at this in my Tuesday morning\n(UTC+13).\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Nov 2021 00:00:06 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: jsonb crash"
}
] |
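The binary (datum_image_eq-style) cache mode proposed at the end of this thread can be sketched as follows: key the cache on the value's byte image instead of its logical value. Assuming float8's image is its IEEE-754 encoding (a simplification of how PostgreSQL stores the datum), this distinguishes hash-equal values such as +0.0 and -0.0:

```python
import struct

def float8_image(f):
    # Byte image of a float8 value, the kind of representation a
    # datum_image_eq-style comparison would look at.
    return struct.pack('<d', f)

# Logical equality says +0.0 and -0.0 are one value...
assert 0.0 == -0.0
# ...but their binary images differ, so a binary-keyed cache keeps
# separate entries and cannot return rows computed for the other zero.
assert float8_image(0.0) != float8_image(-0.0)

cache = {float8_image(0.0): ['rows computed for +0.0']}
assert float8_image(-0.0) not in cache  # no false cache hit
```

The trade-off discussed above is visible here too: binary keying splits logically-equal values into separate cache entries, which lowers the hit ratio relative to what an n_distinct-based estimate predicts.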
[
{
"msg_contents": "Forking this thread in which Thomas implemented syncfs for the startup process\n(61752afb2).\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BSG9jSW3ekwib0cSdC0yD-jReJ21X4bZAmqxoWTLTc2A%40mail.gmail.com\n\nIs there any reason that initdb/pg_basebackup/pg_checksums/pg_rewind shouldn't\nuse syncfs() ?\n\ndo_syncfs() is in src/backend/ so would need to be duplicated^Wimplemented in\ncommon.\n\nThey can't use the GUC, so need to add a cmdline option or look at an\nenvironment variable.",
"msg_date": "Wed, 29 Sep 2021 19:43:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Sep 29, 2021 at 07:43:41PM -0500, Justin Pryzby wrote:\n> Forking this thread in which Thomas implemented syncfs for the startup process\n> (61752afb2).\n> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BSG9jSW3ekwib0cSdC0yD-jReJ21X4bZAmqxoWTLTc2A%40mail.gmail.com\n> \n> Is there any reason that initdb/pg_basebackup/pg_checksums/pg_rewind shouldn't\n> use syncfs() ?\n\nThat makes sense.\n\n> do_syncfs() is in src/backend/ so would need to be duplicated^Wimplemented in\n> common.\n\nThe fd handling in the backend makes things tricky if trying to plug\nin a common interface, so I'd rather do that as this is frontend-only\ncode.\n\n> They can't use the GUC, so need to add an cmdline option or look at an\n> environment variable.\n\nfsync_pgdata() is going to manipulate many inodes anyway, because\nthat's a code path designed to do so. If we know that syncfs() is\njust going to be better, I'd rather just call it by default if\navailable and not add new switches to all the frontend tools in need\nof flushing the data folder, switches that are not documented in your\npatch.\n--\nMichael",
"msg_date": "Thu, 30 Sep 2021 12:49:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
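The per-inode cost that fsync_pgdata() pays — and that a single syncfs() call avoids — can be sketched with a recursive walk. This is a minimal Python illustration of the idea, not PostgreSQL's actual implementation; the function name is made up, and it assumes a POSIX system where directories can be opened read-only and fsync'd:

```python
import os

def fsync_dir_recursive(path):
    # One fsync per inode, conceptually like fsync_pgdata(): this is the
    # per-file cost that a single syncfs() on the filesystem avoids.
    count = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            fd = os.open(os.path.join(dirpath, name), os.O_RDONLY)
            try:
                os.fsync(fd)
                count += 1
            finally:
                os.close(fd)
        # Directories need flushing too, so creates/renames are durable.
        dfd = os.open(dirpath, os.O_RDONLY)
        try:
            os.fsync(dfd)
            count += 1
        finally:
            os.close(dfd)
    return count
```

The returned count scales with the number of inodes in the tree, which is why the sync step dominates on data directories with very many files.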
{
"msg_contents": "On Thu, Sep 30, 2021 at 4:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n> fsync_pgdata() is going to manipulate many inodes anyway, because\n> that's a code path designed to do so. If we know that syncfs() is\n> just going to be better, I'd rather just call it by default if\n> available and not add new switches to all the frontend tools in need\n> of flushing the data folder, switches that are not documented in your\n> patch.\n\nIf we want this it should be an option, because it flushes out data\nother than the pgdata dir, and it doesn't report errors on old\nkernels.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 17:08:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 05:08:24PM +1300, Thomas Munro wrote:\n> If we want this it should be an option, because it flushes out data\n> other than the pgdata dir, and it doesn't report errors on old\n> kernels.\n\nOh, OK, thanks. That's the part about 5.8. The only option\ncontrolling if sync is used now in those binaries is --no-sync.\nShould we use a different design for the option rather than a \n--syncfs? Something like --sync={on,off,syncfs,fsync} could be a\npossibility, for example.\n--\nMichael",
"msg_date": "Thu, 30 Sep 2021 15:56:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 05:08:24PM +1300, Thomas Munro wrote:\n> On Thu, Sep 30, 2021 at 4:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > fsync_pgdata() is going to manipulate many inodes anyway, because\n> > that's a code path designed to do so. If we know that syncfs() is\n> > just going to be better, I'd rather just call it by default if\n> > available and not add new switches to all the frontend tools in need\n> > of flushing the data folder, switches that are not documented in your\n> > patch.\n> \n> If we want this it should be an option, because it flushes out data\n> other than the pgdata dir, and it doesn't report errors on old\n> kernels.\n\nI ran into bad performance of initdb --sync-only shortly after adding it to my\ndb migration script, so added initdb --syncfs.\n\nI found that with sufficiently recent coreutils, I can do what's wanted by calling \n/bin/sync -f /datadir\n\nSince it's not integrated into initdb, it's necessary to include each\ntablespace and wal.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 2 Oct 2021 10:41:54 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 12:49:36PM +0900, Michael Paquier wrote:\n> On Wed, Sep 29, 2021 at 07:43:41PM -0500, Justin Pryzby wrote:\n> > Forking this thread in which Thomas implemented syncfs for the startup process\n> > (61752afb2).\n> > https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BSG9jSW3ekwib0cSdC0yD-jReJ21X4bZAmqxoWTLTc2A%40mail.gmail.com\n> > \n> > Is there any reason that initdb/pg_basebackup/pg_checksums/pg_rewind shouldn't\n> > use syncfs() ?\n> \n> That makes sense.\n> \n> > do_syncfs() is in src/backend/ so would need to be duplicated^Wimplemented in\n> > common.\n> \n> The fd handling in the backend makes things tricky if trying to plug\n> in a common interface, so I'd rather do that as this is frontend-only\n> code.\n> \n> > They can't use the GUC, so need to add an cmdline option or look at an\n> > environment variable.\n> \n> fsync_pgdata() is going to manipulate many inodes anyway, because\n> that's a code path designed to do so. If we know that syncfs() is\n> just going to be better, I'd rather just call it by default if\n> available and not add new switches to all the frontend tools in need\n> of flushing the data folder, switches that are not documented in your\n> patch.\n\nIt is a draft/POC, after all.\n\nThe argument against using syncfs by default is that it could be worse than\nrecursive fsync if a tiny 200MB postgres instance lives on a shared filesystem\nalong with other, larger applications (maybe a larger postgres instance).\n\nThere's also an argument that syncfs might be unreliable in the case of a write\nerror. (But I agreed with Thomas' earlier assessment: that claim carries little\nweight since fsync() itself wasn't reliable for 20some years).\n\nI didn't pursue this patch, as it's easier for me to use /bin/sync -f. Someone\nshould adopt it if interested.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 13 Apr 2022 06:54:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 06:54:12AM -0500, Justin Pryzby wrote:\n> I didn't pursue this patch, as it's easier for me to use /bin/sync -f. Someone\n> should adopt it if interested.\n\nI was about to start a new thread, but I found this one with some good\npreliminary discussion. I came to the same conclusion about introducing a\nnew option instead of using syncfs() by default wherever it is available.\nThe attached patch is still a work-in-progress, but it seems to behave as\nexpected. I began investigating this because I noticed that the\nsync-data-directory step on pg_upgrade takes quite a while when there are\nmany files, and I am looking for ways to reduce the amount of downtime\nrequired for pg_upgrade.\n\nThe attached patch adds a new --sync-method option to the relevant frontend\nutilities, but I am not wedded to that name/approach.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 29 Jul 2023 14:40:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Sat, Jul 29, 2023 at 02:40:10PM -0700, Nathan Bossart wrote:\n> I was about to start a new thread, but I found this one with some good\n> preliminary discussion. I came to the same conclusion about introducing a\n> new option instead of using syncfs() by default wherever it is available.\n> The attached patch is still a work-in-progress, but it seems to behave as\n> expected. I began investigating this because I noticed that the\n> sync-data-directory step on pg_upgrade takes quite a while when there are\n> many files, and I am looking for ways to reduce the amount of downtime\n> required for pg_upgrade.\n> \n> The attached patch adds a new --sync-method option to the relevant frontend\n> utilities, but I am not wedded to that name/approach.\n\nHere is a new version of the patch with documentation updates and a couple\nother small improvements.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 31 Jul 2023 10:51:38 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 10:51:38AM -0700, Nathan Bossart wrote:\n> Here is a new version of the patch with documentation updates and a couple\n> other small improvements.\n\nI just realized I forgot to update the --help output for these utilities.\nI'll do that in the next version of the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 31 Jul 2023 11:39:46 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 11:39:46AM -0700, Nathan Bossart wrote:\n> I just realized I forgot to update the --help output for these utilities.\n> I'll do that in the next version of the patch.\n\nDone in v3. Sorry for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 1 Aug 2023 09:37:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "I ran a couple of tests for pg_upgrade with 100k tables (created using the\nscript here [0]) in order to demonstrate the potential benefits of this\npatch.\n\npg_upgrade --sync-method fsync\n real 5m50.072s\n user 0m10.606s\n sys 0m40.298s\n\npg_upgrade --sync-method syncfs\n real 3m44.096s\n user 0m8.906s\n sys 0m26.398s\n\npg_upgrade --no-sync\n real 3m27.697s\n user 0m9.056s\n sys 0m26.605s\n\n[0] https://postgr.es/m/3612876.1689443232%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 8 Aug 2023 13:06:06 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 08, 2023 at 01:06:06PM -0700, Nathan Bossart wrote:\n> I ran a couple of tests for pg_upgrade with 100k tables (created using the\n> script here [0]) in order to demonstrate the potential benefits of this\n> patch.\n\nThat shows some nice numbers with many files, indeed. How does the\nsize of each file influence the difference in time?\n\n+ else\n+ {\n+ while (errno = 0, (de = readdir(dir)) != NULL)\n+ {\n+ char subpath[MAXPGPATH * 2];\n+\n+ if (strcmp(de->d_name, \".\") == 0 ||\n+ strcmp(de->d_name, \"..\") == 0)\n+ continue;\n\nIt seems to me that there is no need to do that for in-place\ntablespaces. There are relative paths in pg_tblspc/, so they would be\ntaken care of by the syncfs() done on the main data folder.\n\nThis does not really check if the mount points of each tablespace is\ndifferent, as well. For example, imagine that you have two\ntablespaces within the same disk, syncfs() twice. Perhaps, the\ncurrent logic is OK anyway as long as the behavior is optional, but it\nshould be explained in the docs, at least.\n\nI'm finding a bit confusing that fsync_pgdata() is coded in such a way\nthat it does a silent fallback to the cascading syncs through\nwalkdir() when syncfs is specified but not available in the build.\nPerhaps an error is more helpful because one would then know that they\nare trying something that's not around?\n\n+ pg_log_error(\"could not synchronize file system for file \\\"%s\\\": %m\", path);\n+ (void) close(fd);\n+ exit(EXIT_FAILURE);\n\nwalkdir() reports errors and does not consider these fatal. Why the\nearly exit()?\n\nI am a bit concerned about the amount of duplication this patch\nintroduces in the docs. Perhaps this had better be moved into a new\nsection of the docs to explain the tradeoffs, with each tool linking\nto it?\n\nDo we actually need --no-sync at all if --sync-method is around? 
We\ncould have an extra --sync-method=none at option level with --no-sync\nstill around mainly for compatibility? Or perhaps that's just\nover-designing things?\n--\nMichael",
"msg_date": "Wed, 16 Aug 2023 08:10:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Wed, Aug 16, 2023 at 08:10:10AM +0900, Michael Paquier wrote:\n> On Tue, Aug 08, 2023 at 01:06:06PM -0700, Nathan Bossart wrote:\n>> I ran a couple of tests for pg_upgrade with 100k tables (created using the\n>> script here [0]) in order to demonstrate the potential benefits of this\n>> patch.\n> \n> That shows some nice numbers with many files, indeed. How does the\n> size of each file influence the difference in time?\n\nIME the number of files tends to influence the duration much more than the\nsize. I assume this is because most files are already sync'd in these code\npaths that loop through every file.\n\n> + else\n> + {\n> + while (errno = 0, (de = readdir(dir)) != NULL)\n> + {\n> + char subpath[MAXPGPATH * 2];\n> +\n> + if (strcmp(de->d_name, \".\") == 0 ||\n> + strcmp(de->d_name, \"..\") == 0)\n> + continue;\n> \n> It seems to me that there is no need to do that for in-place\n> tablespaces. There are relative paths in pg_tblspc/, so they would be\n> taken care of by the syncfs() done on the main data folder.\n> \n> This does not really check if the mount points of each tablespace is\n> different, as well. For example, imagine that you have two\n> tablespaces within the same disk, syncfs() twice. Perhaps, the\n> current logic is OK anyway as long as the behavior is optional, but it\n> should be explained in the docs, at least.\n\nTrue. But I don't know if checking the mount point of each tablespace is\nworth the complexity. 
In the worst case, we'll call syncfs() on the same\nfile system a few times, which is probably still much faster in most cases.\nFWIW this is what recovery_init_sync_method does today, and I'm not aware\nof any complaints about this behavior.\n\nThe patch does have the following note:\n\n+ On Linux, <literal>syncfs</literal> may be used instead to ask the\n+ operating system to synchronize the whole file systems that contain the\n+ data directory, the WAL files, and each tablespace.\n\nDo you think that is sufficient, or do you think we should really clearly\nexplain that you could end up calling syncfs() on the same file system a\nfew times if your tablespaces are on the same disk? I personally feel\nlike that'd be a bit too verbose for the already lengthy descriptions of\nthis setting.\n\n> I'm finding a bit confusing that fsync_pgdata() is coded in such a way\n> that it does a silent fallback to the cascading syncs through\n> walkdir() when syncfs is specified but not available in the build.\n> Perhaps an error is more helpful because one would then know that they\n> are trying something that's not around?\n\nIf syncfs() is not available, SYNC_METHOD_SYNCFS won't even be defined, and\nparse_sync_method() should fail if \"syncfs\" is specified. Furthermore, the\nrelevant part of fsync_pgdata() won't be compiled in whenever HAVE_SYNCFS\nis not defined.\n \n> + pg_log_error(\"could not synchronize file system for file \\\"%s\\\": %m\", path);\n> + (void) close(fd);\n> + exit(EXIT_FAILURE);\n> \n> walkdir() reports errors and does not consider these fatal. Why the\n> early exit()?\n\nI know it claims to, but fsync_fname() does exit when fsync() fails:\n\n\treturncode = fsync(fd);\n\n\t/*\n\t * Some OSes don't allow us to fsync directories at all, so we can ignore\n\t * those errors. 
Anything else needs to be reported.\n\t */\n\tif (returncode != 0 && !(isdir && (errno == EBADF || errno == EINVAL)))\n\t{\n\t\tpg_log_error(\"could not fsync file \\\"%s\\\": %m\", fname);\n\t\t(void) close(fd);\n\t\texit(EXIT_FAILURE);\n\t}\n\nI suspect that the current code does not treat failures for things like\nopen() as fatal because it's likely due to a lack of permissions on the\nfile, but it does treat failures to fsync() as fatal because it is more\nlikely to indicate that ѕomething is very wrong. I don't know whether this\nreasoning is sound, but I tried to match the current convention in the\nsyncfs() code.\n\n> I am a bit concerned about the amount of duplication this patch\n> introduces in the docs. Perhaps this had better be moved into a new\n> section of the docs to explain the tradeoffs, with each tool linking\n> to it?\n\nYeah, this crossed my mind. Do you know of any existing examples of\noptions with links to a common section? One problem with this approach is\nthat there are small differences in the wording for some of the frontend\nutilities, so it might be difficult to cleanly unite these sections.\n\n> Do we actually need --no-sync at all if --sync-method is around? We\n> could have an extra --sync-method=none at option level with --no-sync\n> still around mainly for compatibility? Or perhaps that's just\n> over-designing things?\n\nI don't have a strong opinion. We could take up deprecating --no-sync in a\nfollow-up thread, though. Like you said, we'll probably need to keep it\naround for backward compatibility, so it might not be worth the trouble.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 16 Aug 2023 08:17:05 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 08:17:05AM -0700, Nathan Bossart wrote:\n> On Wed, Aug 16, 2023 at 08:10:10AM +0900, Michael Paquier wrote:\n>> + pg_log_error(\"could not synchronize file system for file \\\"%s\\\": %m\", path);\n>> + (void) close(fd);\n>> + exit(EXIT_FAILURE);\n>> \n>> walkdir() reports errors and does not consider these fatal. Why the\n>> early exit()?\n> \n> I know it claims to, but fsync_fname() does exit when fsync() fails:\n> \n> \treturncode = fsync(fd);\n> \n> \t/*\n> \t * Some OSes don't allow us to fsync directories at all, so we can ignore\n> \t * those errors. Anything else needs to be reported.\n> \t */\n> \tif (returncode != 0 && !(isdir && (errno == EBADF || errno == EINVAL)))\n> \t{\n> \t\tpg_log_error(\"could not fsync file \\\"%s\\\": %m\", fname);\n> \t\t(void) close(fd);\n> \t\texit(EXIT_FAILURE);\n> \t}\n> \n> I suspect that the current code does not treat failures for things like\n> open() as fatal because it's likely due to a lack of permissions on the\n> file, but it does treat failures to fsync() as fatal because it is more\n> likely to indicate that ѕomething is very wrong. I don't know whether this\n> reasoning is sound, but I tried to match the current convention in the\n> syncfs() code.\n\nAh, it looks like this code used to treat fsync() errors as non-fatal, but\nit was changed in commit 1420617. I still find it a bit strange that some\nerrors that prevent a file from being sync'd are non-fatal while others\n_are_ fatal, but that is probably a topic for another thread.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 16 Aug 2023 08:23:25 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 08:23:25AM -0700, Nathan Bossart wrote:\n> Ah, it looks like this code used to treat fsync() errors as non-fatal, but\n> it was changed in commit 1420617. I still find it a bit strange that some\n> errors that prevent a file from being sync'd are non-fatal while others\n> _are_ fatal, but that is probably a topic for another thread.\n\nRight. That rings a bell.\n--\nMichael",
"msg_date": "Thu, 17 Aug 2023 11:15:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 08:17:05AM -0700, Nathan Bossart wrote:\n> On Wed, Aug 16, 2023 at 08:10:10AM +0900, Michael Paquier wrote:\n>> On Tue, Aug 08, 2023 at 01:06:06PM -0700, Nathan Bossart wrote:\n>> + else\n>> + {\n>> + while (errno = 0, (de = readdir(dir)) != NULL)\n>> + {\n>> + char subpath[MAXPGPATH * 2];\n>> +\n>> + if (strcmp(de->d_name, \".\") == 0 ||\n>> + strcmp(de->d_name, \"..\") == 0)\n>> + continue;\n>> \n>> It seems to me that there is no need to do that for in-place\n>> tablespaces. There are relative paths in pg_tblspc/, so they would be\n>> taken care of by the syncfs() done on the main data folder.\n>> \n>> This does not really check if the mount points of each tablespace is\n>> different, as well. For example, imagine that you have two\n>> tablespaces within the same disk, syncfs() twice. Perhaps, the\n>> current logic is OK anyway as long as the behavior is optional, but it\n>> should be explained in the docs, at least.\n> \n> True. But I don't know if checking the mount point of each tablespace is\n> worth the complexity.\n\nPerhaps worth a note, this would depend on statvfs(), which is not\nthat portable the last time I looked at it (NetBSD, some BSD-ish? And\nof course WIN32).\n\n> In the worst case, we'll call syncfs() on the same\n> file system a few times, which is probably still much faster in most cases.\n> FWIW this is what recovery_init_sync_method does today, and I'm not aware\n> of any complaints about this behavior.\n\nHmm. Okay.\n\n> The patch does have the following note:\n> \n> + On Linux, <literal>syncfs</literal> may be used instead to ask the\n> + operating system to synchronize the whole file systems that contain the\n> + data directory, the WAL files, and each tablespace.\n> \n> Do you think that is sufficient, or do you think we should really clearly\n> explain that you could end up calling syncfs() on the same file system a\n> few times if your tablespaces are on the same disk? 
I personally feel\n> like that'd be a bit too verbose for the already lengthy descriptions of\n> this setting.\n\nIt does not hurt to mention that the code syncfs()-es each tablespace\npath (not in-place tablespaces), ignoring locations that share the\nsame mounting point, IMO. For that, we'd better rely on\nget_dirent_type() like the normal sync path.\n\n>> I'm finding a bit confusing that fsync_pgdata() is coded in such a way\n>> that it does a silent fallback to the cascading syncs through\n>> walkdir() when syncfs is specified but not available in the build.\n>> Perhaps an error is more helpful because one would then know that they\n>> are trying something that's not around?\n> \n> If syncfs() is not available, SYNC_METHOD_SYNCFS won't even be defined, and\n> parse_sync_method() should fail if \"syncfs\" is specified. Furthermore, the\n> relevant part of fsync_pgdata() won't be compiled in whenever HAVE_SYNCFS\n> is not defined.\n\nThat feels structurally inconsistent with what we do with other\noption sets that have library dependencies. For example, look at\ncompression.h and what happens for pg_compress_algorithm. So, it\nseems to me that it would be more friendly to list SYNC_METHOD_SYNCFS\nall the time in SyncMethod even if HAVE_SYNCFS is not around, and at\nleast generate a warning rather than having a platform-dependent set\nof options?\n\nSyncMethod may be a bit too generic as name for the option structure.\nHow about a PGSyncMethod or pg_sync_method?\n\n>> I am a bit concerned about the amount of duplication this patch\n>> introduces in the docs. Perhaps this had better be moved into a new\n>> section of the docs to explain the tradeoffs, with each tool linking\n>> to it?\n> \n> Yeah, this crossed my mind. Do you know of any existing examples of\n> options with links to a common section? 
One problem with this approach is\n> that there are small differences in the wording for some of the frontend\n> utilities, so it might be difficult to cleanly unite these sections.\n\nThe closest thing I can think of is Color Support in section\nAppendixes, that describes something shared across a lot of binaries\n(that would be 6 tools with this patch).\n\n>> Do we actually need --no-sync at all if --sync-method is around? We\n>> could have an extra --sync-method=none at option level with --no-sync\n>> still around mainly for compatibility? Or perhaps that's just\n>> over-designing things?\n> \n> I don't have a strong opinion. We could take up deprecating --no-sync in a\n> follow-up thread, though. Like you said, we'll probably need to keep it\n> around for backward compatibility, so it might not be worth the trouble.\n\nOkay, maybe that's not worth it.\n--\nMichael",
"msg_date": "Thu, 17 Aug 2023 12:50:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 12:50:31PM +0900, Michael Paquier wrote:\n> On Wed, Aug 16, 2023 at 08:17:05AM -0700, Nathan Bossart wrote:\n>> The patch does have the following note:\n>> \n>> + On Linux, <literal>syncfs</literal> may be used instead to ask the\n>> + operating system to synchronize the whole file systems that contain the\n>> + data directory, the WAL files, and each tablespace.\n>> \n>> Do you think that is sufficient, or do you think we should really clearly\n>> explain that you could end up calling syncfs() on the same file system a\n>> few times if your tablespaces are on the same disk? I personally feel\n>> like that'd be a bit too verbose for the already lengthy descriptions of\n>> this setting.\n> \n> It does not hurt to mention that the code syncfs()-es each tablespace\n> path (not in-place tablespaces), ignoring locations that share the\n> same mounting point, IMO. For that, we'd better rely on\n> get_dirent_type() like the normal sync path.\n\nBut it doesn't ignore tablespace locations that share the same mount point.\nIt simply calls syncfs() for each tablespace path, just like\nrecovery_init_sync_method.\n\n>> If syncfs() is not available, SYNC_METHOD_SYNCFS won't even be defined, and\n>> parse_sync_method() should fail if \"syncfs\" is specified. Furthermore, the\n>> relevant part of fsync_pgdata() won't be compiled in whenever HAVE_SYNCFS\n>> is not defined.\n> \n> That feels structurally inconsistent with what we do with other\n> option sets that have library dependencies. For example, look at\n> compression.h and what happens for pg_compress_algorithm. 
So, it\n> seems to me that it would be more friendly to list SYNC_METHOD_SYNCFS\n> all the time in SyncMethod even if HAVE_SYNCFS is not around, and at\n> least generate a warning rather than having a platform-dependent set\n> of options?\n\nDone.\n\n> SyncMethod may be a bit too generic as name for the option structure.\n> How about a PGSyncMethod or pg_sync_method?\n\nIn v4, I renamed this to DataDirSyncMethod and merged it with\nRecoveryInitSyncMethod. I'm not wedded to the name, but that seemed\ngeneric enough for both use-cases. As an aside, we need to be careful to\ndistinguish these options from those for wal_sync_method.\n\n>> Yeah, this crossed my mind. Do you know of any existing examples of\n>> options with links to a common section? One problem with this approach is\n>> that there are small differences in the wording for some of the frontend\n>> utilities, so it might be difficult to cleanly unite these sections.\n> \n> The closest thing I can think of is Color Support in section\n> Appendixes, that describes something shared across a lot of binaries\n> (that would be 6 tools with this patch).\n\nIf I added a \"syncfs() Caveats\" appendix for the common parts of the docs,\nit would only say something like the following:\n\n\tUsing syncfs may be a lot faster than using fsync, because it doesn't\n\tneed to open each file one by one. On the other hand, it may be slower\n\tif a file system is shared by other applications that modify a lot of\n\tfiles, since those files will also be written to disk. Furthermore, on\n\tversions of Linux before 5.8, I/O errors encountered while writing data\n\tto disk may not be reported to the calling program, and relevant error\n\tmessages may appear only in kernel logs.\n\nDoes that seem reasonable? It would reduce the duplication a little bit,\nbut I'm not sure it's really much of an improvement in this case.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 18 Aug 2023 09:01:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 11:50 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Do we actually need --no-sync at all if --sync-method is around? We\n> >> could have an extra --sync-method=none at option level with --no-sync\n> >> still around mainly for compatibility? Or perhaps that's just\n> >> over-designing things?\n> >\n> > I don't have a strong opinion. We could take up deprecating --no-sync in a\n> > follow-up thread, though. Like you said, we'll probably need to keep it\n> > around for backward compatibility, so it might not be worth the trouble.\n>\n> Okay, maybe that's not worth it.\n\nDoesn't seem worth it to me. I think --no-sync is more intuitive than\n--sync-method=none, it's certainly shorter, and it's a pretty\nimportant setting because we use it when running the regression tests.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 16:08:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 04:08:46PM -0400, Robert Haas wrote:\n> Doesn't seem worth it to me. I think --no-sync is more intuitive than\n> --sync-method=none, it's certainly shorter, and it's a pretty\n> important setting because we use it when running the regression tests.\n\nNo arguments against that ;)\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 08:25:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 09:01:11AM -0700, Nathan Bossart wrote:\n> On Thu, Aug 17, 2023 at 12:50:31PM +0900, Michael Paquier wrote:\n>> SyncMethod may be a bit too generic as name for the option structure.\n>> How about a PGSyncMethod or pg_sync_method?\n> \n> In v4, I renamed this to DataDirSyncMethod and merged it with\n> RecoveryInitSyncMethod. I'm not wedded to the name, but that seemed\n> generic enough for both use-cases. As an aside, we need to be careful to\n> distinguish these options from those for wal_sync_method.\n\nOkay.\n\n>>> Yeah, this crossed my mind. Do you know of any existing examples of\n>>> options with links to a common section? One problem with this approach is\n>>> that there are small differences in the wording for some of the frontend\n>>> utilities, so it might be difficult to cleanly unite these sections.\n>> \n>> The closest thing I can think of is Color Support in section\n>> Appendixes, that describes something shared across a lot of binaries\n>> (that would be 6 tools with this patch).\n> \n> If I added a \"syncfs() Caveats\" appendix for the common parts of the docs,\n> it would only say something like the following:\n> \n> \tUsing syncfs may be a lot faster than using fsync, because it doesn't\n> \tneed to open each file one by one. On the other hand, it may be slower\n> \tif a file system is shared by other applications that modify a lot of\n> \tfiles, since those files will also be written to disk. Furthermore, on\n> \tversions of Linux before 5.8, I/O errors encountered while writing data\n> \tto disk may not be reported to the calling program, and relevant error\n> \tmessages may appear only in kernel logs.\n> \n> Does that seem reasonable? It would reduce the duplication a little bit,\n> but I'm not sure it's really much of an improvement in this case.\n\nThis would cut 60% (?) of the documentation added by the patch for\nthese six tools, so that looks like an improvement to me. 
Perhaps\nother may disagree, so more opinions are welcome.\n\n--- a/src/include/storage/fd.h\n+++ b/src/include/storage/fd.h\n@@ -43,15 +43,11 @@\n #ifndef FD_H\n #define FD_H\n \n+#ifndef FRONTEND\n+\n #include <dirent.h>\n #include <fcntl.h>\n \nUgh. So you need this part because pg_rewind's filemap.c includes\nfd.h, and pg_rewind also needs file_utils.h. This is not the fault of\nyour patch, but this does not make the situation better, either.. It\nlooks like we need to think harder about this layer. An improvement\nwould be to split file_utils.c so as its frontend-only code is moved\nto OBJS_FRONTEND in a new file with a new header? It should be OK to\nkeep DataDirSyncMethod in file_utils.h as long as the split is clean.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 08:56:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 08:56:26AM +0900, Michael Paquier wrote:\n> --- a/src/include/storage/fd.h\n> +++ b/src/include/storage/fd.h\n> @@ -43,15 +43,11 @@\n> #ifndef FD_H\n> #define FD_H\n> \n> +#ifndef FRONTEND\n> +\n> #include <dirent.h>\n> #include <fcntl.h>\n> \n> Ugh. So you need this part because pg_rewind's filemap.c includes\n> fd.h, and pg_rewind also needs file_utils.h. This is not the fault of\n> your patch, but this does not make the situation better, either.. It\n> looks like we need to think harder about this layer. An improvement\n> would be to split file_utils.c so as its frontend-only code is moved\n> to OBJS_FRONTEND in a new file with a new header? It should be OK to\n> keep DataDirSyncMethod in file_utils.h as long as the split is clean.\n\nI'm hoping there's a simpler path forward here. pg_rewind only needs the\nfollowing lines from fd.h:\n\n\t/* Filename components */\n\t#define PG_TEMP_FILES_DIR \"pgsql_tmp\"\n\t#define PG_TEMP_FILE_PREFIX \"pgsql_tmp\"\n\nMaybe we could move these to file_utils.h instead. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 21 Aug 2023 18:44:07 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 06:44:07PM -0700, Nathan Bossart wrote:\n> I'm hoping there's a simpler path forward here. pg_rewind only needs the\n> following lines from fd.h:\n> \n> \t/* Filename components */\n> \t#define PG_TEMP_FILES_DIR \"pgsql_tmp\"\n> \t#define PG_TEMP_FILE_PREFIX \"pgsql_tmp\"\n> \n> Maybe we could move these to file_utils.h instead. WDYT?\n\nI guess so.. At the same time, something can be said about\npg_checksums that redeclares PG_TEMP_FILE_PREFIX and PG_TEMP_FILES_DIR\nbecause it does not want to include fd.h and its sync routines.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 10:50:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 10:50:01AM +0900, Michael Paquier wrote:\n> On Mon, Aug 21, 2023 at 06:44:07PM -0700, Nathan Bossart wrote:\n>> I'm hoping there's a simpler path forward here. pg_rewind only needs the\n>> following lines from fd.h:\n>> \n>> \t/* Filename components */\n>> \t#define PG_TEMP_FILES_DIR \"pgsql_tmp\"\n>> \t#define PG_TEMP_FILE_PREFIX \"pgsql_tmp\"\n>> \n>> Maybe we could move these to file_utils.h instead. WDYT?\n> \n> I guess so.. At the same time, something can be said about\n> pg_checksums that redeclares PG_TEMP_FILE_PREFIX and PG_TEMP_FILES_DIR\n> because it does not want to include fd.h and its sync routines.\n\nThis would look something like the attached patch. I think this is nicer.\nWith this patch, we don't have to choose between including fd.h or\nredefining the macros in the frontend code.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 21 Aug 2023 19:06:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 07:06:32PM -0700, Nathan Bossart wrote:\n> This would look something like the attached patch. I think this is nicer.\n> With this patch, we don't have to choose between including fd.h or\n> redefining the macros in the frontend code.\n\nYes, this one is moving the needle in the good direction. +1.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 12:53:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 12:53:53PM +0900, Michael Paquier wrote:\n> On Mon, Aug 21, 2023 at 07:06:32PM -0700, Nathan Bossart wrote:\n>> This would look something like the attached patch. I think this is nicer.\n>> With this patch, we don't have to choose between including fd.h or\n>> redefining the macros in the frontend code.\n> \n> Yes, this one is moving the needle in the good direction. +1.\n\nGreat. Here is a new patch set that includes this change as well as the\nsuggested documentation updates.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 22 Aug 2023 10:11:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 29 Aug 2023 08:45:59 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 08:45:59AM -0700, Nathan Bossart wrote:\n> rebased\n\n0001 looks OK, worth its own, independent, commit.\n\nI understand that I'm perhaps sounding pedantic about fsync_pgdata()..\nBut, after thinking more about it, I would still make this code fail\nhard with an exit(EXIT_FAILURE) to let any C code calling directly\nthis routine with sync_method = DATA_DIR_SYNC_METHOD_SYNCFS know that\nthe build does not allow the use of this option when we don't have\nHAVE_SYNCFS. parse_sync_method() offers some protection, but adding\nthis restriction also in the execution path is more friendly than\nfalling back silently to the default of flushing each file if\nfsync_pgdata() is called with syncfs but the build does not support\nit. At least that's more predictible.\n\nI'm fine with the doc changes.\n--\nMichael",
"msg_date": "Wed, 30 Aug 2023 09:10:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Aug 30, 2023 at 09:10:47AM +0900, Michael Paquier wrote:\n> I understand that I'm perhaps sounding pedantic about fsync_pgdata()..\n> But, after thinking more about it, I would still make this code fail\n> hard with an exit(EXIT_FAILURE) to let any C code calling directly\n> this routine with sync_method = DATA_DIR_SYNC_METHOD_SYNCFS know that\n> the build does not allow the use of this option when we don't have\n> HAVE_SYNCFS. parse_sync_method() offers some protection, but adding\n> this restriction also in the execution path is more friendly than\n> falling back silently to the default of flushing each file if\n> fsync_pgdata() is called with syncfs but the build does not support\n> it. At least that's more predictible.\n\nThat seems fair enough. I did this in v7. I restructured fsync_pgdata()\nand fsync_dir_recurse() so that any new sync methods should cause compiler\nwarnings until they are implemented.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 29 Aug 2023 18:14:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Aug 29, 2023 at 06:14:08PM -0700, Nathan Bossart wrote:\n> That seems fair enough. I did this in v7. I restructured fsync_pgdata()\n> and fsync_dir_recurse() so that any new sync methods should cause compiler\n> warnings until they are implemented.\n\nThat's pretty cool and easier to maintain in the long term.\n\nAfter sleeping on it, there are two things that popped up in my mind\nthat may be worth considering:\n- Should we have some regression tests? We should only need one test\nin one of the binaries to be able to stress the new code paths of\nfile_utils.c with syncfs. The cheapest one may be pg_dump with a\ndump in directory format? Note that we have tests there that depend\non lz4 or gzip existing, which are conditional.\n- Perhaps 0002 should be split into two parts? The first patch could\nintroduce DataDirSyncMethod in file_utils.h with the new routines in\nfile_utils.h (including syncfs support), and the second patch would\nplug the new option to all the binaries. In the first patch, I would\nhardcode DATA_DIR_SYNC_METHOD_FSYNC.\n--\nMichael",
"msg_date": "Thu, 31 Aug 2023 14:30:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 02:30:33PM +0900, Michael Paquier wrote:\n> - Should we have some regression tests? We should only need one test\n> in one of the binaries to be able to stress the new code paths of\n> file_utils.c with syncfs. The cheapest one may be pg_dump with a\n> dump in directory format? Note that we have tests there that depend\n> on lz4 or gzip existing, which are conditional.\n\nI added one for initdb in v8.\n\n> - Perhaps 0002 should be split into two parts? The first patch could\n> introduce DataDirSyncMethod in file_utils.h with the new routines in\n> file_utils.h (including syncfs support), and the second patch would\n> plug the new option to all the binaries. In the first patch, I would\n> hardcode DATA_DIR_SYNC_METHOD_FSYNC.\n\nHa, I was just thinking about this, too. I actually split it into 3\npatches. The first adds DataDirSyncMethod and uses it for\nrecovery_init_sync_method. The second adds syncfs() support in\nfile_utils.c. And the third adds the ability to specify syncfs in the\nfrontend utilities. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 31 Aug 2023 08:48:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, Aug 31, 2023 at 08:48:58AM -0700, Nathan Bossart wrote:\n> On Thu, Aug 31, 2023 at 02:30:33PM +0900, Michael Paquier wrote:\n> > - Should we have some regression tests? We should only need one test\n> > in one of the binaries to be able to stress the new code paths of\n> > file_utils.c with syncfs. The cheapest one may be pg_dump with a\n> > dump in directory format? Note that we have tests there that depend\n> > on lz4 or gzip existing, which are conditional.\n> \n> I added one for initdb in v8.\n\n+my $supports_syncfs = check_pg_config(\"#define HAVE_SYNCFS 1\"); \n\nThat should be OK this way. The extra running time is not really\nvisible, right?\n\n+command_ok([ 'initdb', '-S', $datadir, '--sync-method', 'fsync' ],\n+ 'sync method fsync');\n\nRemoving this one may be fine, actually, because we test the sync\npaths on other places like pg_dump.\n\n> Ha, I was just thinking about this, too. I actually split it into 3\n> patches. The first adds DataDirSyncMethod and uses it for\n> recovery_init_sync_method. The second adds syncfs() support in\n> file_utils.c. And the third adds the ability to specify syncfs in the\n> frontend utilities. WDYT?\n\nThis split is OK by me, so WFM.\n--\nMichael",
"msg_date": "Fri, 1 Sep 2023 10:40:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 10:40:12AM +0900, Michael Paquier wrote:\n> That should be OK this way. The extra running time is not really\n> visible, right?\n\nAFAICT it is negligible. Presumably it could take a little longer if there\nis a lot to sync on the file system, but I don't know if that's worth\nworrying about.\n\n> +command_ok([ 'initdb', '-S', $datadir, '--sync-method', 'fsync' ],\n> + 'sync method fsync');\n> \n> Removing this one may be fine, actually, because we test the sync\n> paths on other places like pg_dump.\n\nDone.\n\n> This split is OK by me, so WFM.\n\nCool.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 31 Aug 2023 19:17:27 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "> +\tif (!user_opts.sync_method)\n> +\t\tuser_opts.sync_method = pg_strdup(\"fsync\");\n\nwhy pstrdup?\n\n> +parse_sync_method(const char *optarg, SyncMethod *sync_method)\n> +{\n> +\tif (strcmp(optarg, \"fsync\") == 0)\n> +\t\t*sync_method = SYNC_METHOD_FSYNC;\n> +#ifdef HAVE_SYNCFS\n> +\telse if (strcmp(optarg, \"syncfs\") == 0)\n> +\t\t*sync_method = SYNC_METHOD_SYNCFS;\n> +#endif\n> +\telse\n> +\t{\n> +\t\tpg_log_error(\"unrecognized sync method: %s\", optarg);\n> +\t\treturn false;\n> +\t}\n\nThis should probably give a distinct error when syncfs is not supported\nthan when it's truely recognized.\n\nThe patch should handle pg_dumpall, too.\n\nNote that /bin/sync doesn't try to de-duplicate, it does just what you\ntell it.\n\n$ strace -e openat,syncfs,fsync sync / / / -f\n...\nopenat(AT_FDCWD, \"/\", O_RDONLY|O_NONBLOCK) = 3\nsyncfs(3) = 0\nopenat(AT_FDCWD, \"/\", O_RDONLY|O_NONBLOCK) = 3\nsyncfs(3) = 0\nopenat(AT_FDCWD, \"/\", O_RDONLY|O_NONBLOCK) = 3\nsyncfs(3) = 0\n+++ exited with 0 +++\n\n\n",
"msg_date": "Fri, 1 Sep 2023 12:58:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Fri, Sep 01, 2023 at 12:58:10PM -0500, Justin Pryzby wrote:\n>> +\tif (!user_opts.sync_method)\n>> +\t\tuser_opts.sync_method = pg_strdup(\"fsync\");\n> \n> why pstrdup?\n\nI believe I was just following the precedent set by some of the other\noptions.\n\n>> +parse_sync_method(const char *optarg, SyncMethod *sync_method)\n>> +{\n>> +\tif (strcmp(optarg, \"fsync\") == 0)\n>> +\t\t*sync_method = SYNC_METHOD_FSYNC;\n>> +#ifdef HAVE_SYNCFS\n>> +\telse if (strcmp(optarg, \"syncfs\") == 0)\n>> +\t\t*sync_method = SYNC_METHOD_SYNCFS;\n>> +#endif\n>> +\telse\n>> +\t{\n>> +\t\tpg_log_error(\"unrecognized sync method: %s\", optarg);\n>> +\t\treturn false;\n>> +\t}\n> \n> This should probably give a distinct error when syncfs is not supported\n> than when it's truely recognized.\n\nLater versions of the patch should have this.\n\n> The patch should handle pg_dumpall, too.\n\nIt looks like pg_dumpall only ever fsyncs a single file, so I don't think\nit is really needed there.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:08:51 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 11:08:51AM -0700, Nathan Bossart wrote:\n> > This should probably give a distinct error when syncfs is not supported\n> > than when it's truely recognized.\n> \n> Later versions of the patch should have this.\n\nOops, right.\n\n> > The patch should handle pg_dumpall, too.\n> \n> It looks like pg_dumpall only ever fsyncs a single file, so I don't think\n> it is really needed there.\n\nWhat about (per git grep no-sync doc) pg_receivewal?\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 1 Sep 2023 13:19:13 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, Sep 01, 2023 at 01:19:13PM -0500, Justin Pryzby wrote:\n> What about (per git grep no-sync doc) pg_receivewal?\n\nI don't think it's applicable there, either. IIUC that option specifies\nwhether to sync the data as it is streamed over.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 1 Sep 2023 11:31:00 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "I've committed 0001 for now. I plan to commit the rest in the next couple\nof days.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 5 Sep 2023 17:08:53 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Tue, Sep 05, 2023 at 05:08:53PM -0700, Nathan Bossart wrote:\n> I've committed 0001 for now. I plan to commit the rest in the next couple\n> of days.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 6 Sep 2023 16:29:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Thu, 7 Sept 2023 at 03:34, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> Committed.\n>\n\nHi! Great job!\n\nBut here is one problem I've encountered during working on some unrelated\nstuff.\nHow we have two different things call the same name – sync_method. One in\nxlog:\nint sync_method = DEFAULT_SYNC_METHOD;\n...and another one in \"bins\":\nstatic DataDirSyncMethod sync_method = DATA_DIR_SYNC_METHOD_FSYNC;\n\nIn current include order, this is not a problem, but imagine you add a\ncouple of new includes,\nfor example:\n--- a/src/include/storage/bufpage.h\n+++ b/src/include/storage/bufpage.h\n@@ -18,6 +18,8 @@\n #include \"storage/block.h\"\n #include \"storage/item.h\"\n #include \"storage/off.h\"\n+#include \"postgres.h\"\n+#include \"utils/rel.h\"\n\nAnd build will be broken, because we how have two different things called\n\"sync_method\" with\ndifferent types:\nIn file included from .../src/bin/pg_rewind/pg_rewind.c:33:\nIn file included from .../src/include/storage/bufpage.h:22:\nIn file included from .../src/include/utils/rel.h:18:\n.../src/include/access/xlog.h:27:24: error: redeclaration of 'sync_method'\nwith a different type: 'int' vs 'DataDirSyncMethod' (aka 'enum\nDataDirSyncMethod')\nextern PGDLLIMPORT int sync_method;\n...\n\nAs a solution, I suggest renaming sync_method in xlog module to\nwal_sync_method. In fact,\nappropriate GUC for this variable, called \"wal_sync_method\" and I see no\nreason not to use\nthe exact same name for a variable in xlog module.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Wed, 20 Sep 2023 15:12:56 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 03:12:56PM +0300, Maxim Orlov wrote:\n> As a solution, I suggest renaming sync_method in xlog module to\n> wal_sync_method. In fact,\n> appropriate GUC for this variable, called \"wal_sync_method\" and I see no\n> reason not to use\n> the exact same name for a variable in xlog module.\n\n+1\n\nI think we should also consider renaming things like SYNC_METHOD_FSYNC to\nWAL_SYNC_METHOD_FSYNC, and sync_method_options to wal_sync_method_options.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 12:07:48 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, 20 Sept 2023 at 22:08, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> I think we should also consider renaming things like SYNC_METHOD_FSYNC to\n> WAL_SYNC_METHOD_FSYNC, and sync_method_options to wal_sync_method_options.\n>\n\nI've already rename sync_method_options in previous patch.\n 34 @@ -171,7 +171,7 @@ static bool check_wal_consistency_checking_deferred\n= false;\n 35 /*\n 36 * GUC support\n 37 */\n 38 -const struct config_enum_entry sync_method_options[] = {\n 39 +const struct config_enum_entry wal_sync_method_options[] = {\n\nAs for SYNC_METHOD_FSYNC rename, PFA patch.\nAlso make enum for WAL sync methods instead of defines.\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Thu, 21 Sep 2023 11:29:48 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On 17.08.23 04:50, Michael Paquier wrote:\n>> Yeah, this crossed my mind. Do you know of any existing examples of\n>> options with links to a common section? One problem with this approach is\n>> that there are small differences in the wording for some of the frontend\n>> utilities, so it might be difficult to cleanly unite these sections.\n> The closest thing I can think of is Color Support in section\n> Appendixes, that describes something shared across a lot of binaries\n> (that would be 6 tools with this patch).\n\nI think it's a bit much to add a whole appendix for that little content.\n\nWe have a collection of platform-specific notes in chapter 19, including \nfile-system-related notes in section 19.2. Maybe it could be put there?\n\n\n\n",
"msg_date": "Wed, 27 Sep 2023 13:56:08 +0100",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Sep 27, 2023 at 01:56:08PM +0100, Peter Eisentraut wrote:\n> I think it's a bit much to add a whole appendix for that little content.\n\nI'm inclined to agree.\n\n> We have a collection of platform-specific notes in chapter 19, including\n> file-system-related notes in section 19.2. Maybe it could be put there?\n\nI will give this a try.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Oct 2023 10:29:07 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "Back to the patch v11. I don’t understand a bit, what we should do next?\nMake a separate thread or put this one on commitfest?\n\n-- \nBest regards,\nMaxim Orlov.\n\nBack to the patch v11. I don’t understand a bit, what we should do next? Make a separate thread or put this one on commitfest?-- Best regards,Maxim Orlov.",
"msg_date": "Fri, 6 Oct 2023 10:50:11 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, Oct 06, 2023 at 10:50:11AM +0300, Maxim Orlov wrote:\n> Back to the patch v11. I don’t understand a bit, what we should do next?\n> Make a separate thread or put this one on commitfest?\n\n From a quick skim, this one looks pretty good to me. Would you mind adding\nit to the commitfest so that it doesn't get lost? I will aim to take a\ncloser look at it next week.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 Oct 2023 14:35:44 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Fri, 6 Oct 2023 at 22:35, Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> From a quick skim, this one looks pretty good to me. Would you mind adding\n> it to the commitfest so that it doesn't get lost? I will aim to take a\n> closer look at it next week.\n>\n\nSounds good, thanks a lot!\n\nhttps://commitfest.postgresql.org/45/4609/\n\n-- \nBest regards,\nMaxim Orlov.\n\nOn Fri, 6 Oct 2023 at 22:35, Nathan Bossart <nathandbossart@gmail.com> wrote:\n From a quick skim, this one looks pretty good to me. Would you mind adding\nit to the commitfest so that it doesn't get lost? I will aim to take a\ncloser look at it next week.Sounds good, thanks a lot!https://commitfest.postgresql.org/45/4609/ -- Best regards,Maxim Orlov.",
"msg_date": "Mon, 9 Oct 2023 13:12:16 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Oct 09, 2023 at 01:12:16PM +0300, Maxim Orlov wrote:\n> On Fri, 6 Oct 2023 at 22:35, Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n>> From a quick skim, this one looks pretty good to me. Would you mind adding\n>> it to the commitfest so that it doesn't get lost? I will aim to take a\n>> closer look at it next week.\n> \n> Sounds good, thanks a lot!\n> \n> https://commitfest.postgresql.org/45/4609/\n\nThanks. I've made a couple of small changes, but otherwise I think this\none is just about ready.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 9 Oct 2023 11:14:39 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Oct 09, 2023 at 11:14:39AM -0500, Nathan Bossart wrote:\n> Thanks. I've made a couple of small changes, but otherwise I think this\n> one is just about ready.\n\nI forgot to rename one thing. Here's a v13 with that fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 9 Oct 2023 14:34:27 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Wed, Oct 04, 2023 at 10:29:07AM -0500, Nathan Bossart wrote:\n> On Wed, Sep 27, 2023 at 01:56:08PM +0100, Peter Eisentraut wrote:\n>> We have a collection of platform-specific notes in chapter 19, including\n>> file-system-related notes in section 19.2. Maybe it could be put there?\n> \n> I will give this a try.\n\nI started on this, but I couldn't shake the feeling that this wasn't the\nright place for these notes. This chapter is about setting up a server,\nand the syncfs() notes do apply to the recovery_init_sync_method\nconfiguration parameter, but it also applies to a number of server/client\napplications.\n\nI've been looking around, and I haven't found a great place to move this\nsection to. IMO some of the other appendices have similar amounts of\ninformation (e.g., Date/Time Support, The Source Code Repository, Color\nSupport), so maybe a dedicated appendix isn't too extreme. Another option\ncould be to introduce a new section for platform-specific notes, but that\nwould just make this section even larger for now.\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 9 Oct 2023 15:48:23 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
},
{
"msg_contents": "On Mon, Oct 09, 2023 at 02:34:27PM -0500, Nathan Bossart wrote:\n> On Mon, Oct 09, 2023 at 11:14:39AM -0500, Nathan Bossart wrote:\n>> Thanks. I've made a couple of small changes, but otherwise I think this\n>> one is just about ready.\n> \n> I forgot to rename one thing. Here's a v13 with that fixed.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Oct 2023 15:23:21 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: should frontend tools use syncfs() ?"
}
] |
[
{
"msg_contents": "Today, I noticed that we have mentioned pg_stat_replication_slots\nunder \"Dynamic Statistics Views\". I think it should be under\n\"Collected Statistics Views\"?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 30 Sep 2021 12:17:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_replication_slots docs"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 3:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Today, I noticed that we have mentioned pg_stat_replication_slots\n> under \"Dynamic Statistics Views\". I think it should be under\n> \"Collected Statistics Views\"?\n\nGood catch!\n\nAgreed and the patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:59:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_replication_slots docs"
}
] |
[
{
"msg_contents": "Previously successfully opened TCP connections can still fail on reads\nwith ETIMEDOUT. This should be considered a connection failure, so that\nthe connection in libpq is marked as CONNECTION_BAD. The reason I got an\nETIMEDOUT was, because I had set a low tcp_user_timeout in the\nconnection string. However, it can probably also happen due to\nkeepalive limits being reached.",
"msg_date": "Thu, 30 Sep 2021 10:00:43 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Previously successfully opened TCP connections can still fail on reads\n> with ETIMEDOUT. This should be considered a connection failure, so that\n> the connection in libpq is marked as CONNECTION_BAD. The reason I got an\n> ETIMEDOUT was, because I had set a low tcp_user_timeout in the\n> connection string. However, it can probably also happen due to\n> keepalive limits being reached.\n\nI'm dubious about the portability of this patch, because we don't\nuse ETIMEDOUT elsewhere. strerror.c thinks it may not exist,\nwhich is probably overly conservative because POSIX has required\nit since SUSv2. The bigger problem is that it's not accounted for in\nthe WSAxxx mapping done in port/win32_port.h and TranslateSocketError.\nThat'd have to be fixed for this to behave reasonably on Windows,\nI think.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:04:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "Attached is a new patch that I think addresses your concerns.\n________________________________\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nSent: Thursday, September 30, 2021 16:04\nTo: Jelte Fennema <Jelte.Fennema@microsoft.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS\n\nJelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Previously successfully opened TCP connections can still fail on reads\n> with ETIMEDOUT. This should be considered a connection failure, so that\n> the connection in libpq is marked as CONNECTION_BAD. The reason I got an\n> ETIMEDOUT was, because I had set a low tcp_user_timeout in the\n> connection string. However, it can probably also happen due to\n> keepalive limits being reached.\n\nI'm dubious about the portability of this patch, because we don't\nuse ETIMEDOUT elsewhere. strerror.c thinks it may not exist,\nwhich is probably overly conservative because POSIX has required\nit since SUSv2. The bigger problem is that it's not accounted for in\nthe WSAxxx mapping done in port/win32_port.h and TranslateSocketError.\nThat'd have to be fixed for this to behave reasonably on Windows,\nI think.\n\n regards, tom lane",
"msg_date": "Thu, 30 Sep 2021 15:00:44 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Attached is a new patch that I think addresses your concerns.\n\nYou missed TranslateSocketError ...\n\nPushed to HEAD only with that fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 14:17:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "Oops sorry, I did make that change locally but apparently didn't update my .patch file after committing, so I uploaded an intermediary one...\nThanks for fixing that.\n\nI saw you added this section to the commit message:\n\n> Perhaps this should be back-patched, but I'm hesitant to do so given\n> the lack of previous complaints, and the hazard that there's a small\n> ABI break on Windows from redefining the symbol. Even if we decide\n> to do that, it'd be prudent to let this bake awhile in HEAD first.\n\nPersonally, I would love to see this backpatched. Since together with a second bug I reported[1] it's causing high query timeouts in Citus even if tcp_user_timeout is set to a low value. I do understand your worry though. Would a patch like the one I attached now be a better fit for a backport?\n\nJelte\n\n[1]: https://www.postgresql.org/message-id/flat/AM5PR83MB017870DE81FC84D5E21E9D1EF7AA9%40AM5PR83MB0178.EURPRD83.prod.outlook.com",
"msg_date": "Fri, 1 Oct 2021 07:14:05 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Personally, I would love to see this backpatched. Since together with a second bug I reported[1] it's causing high query timeouts in Citus even if tcp_user_timeout is set to a low value. I do understand your worry though. Would a patch like the one I attached now be a better fit for a backport?\n\nThe only way that could work as-intended is if c.h includes whatever\nheader provides TCP_USER_TIMEOUT, which does not seem like a great idea.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Oct 2021 07:50:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
},
{
"msg_contents": "I would still love to get a version of this patch backported. And I just thought of an idea to do so without breaking the Windows ABI, by slightly modifying my previous idea. See the attached patch.",
"msg_date": "Tue, 28 Dec 2021 22:19:20 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: Add ETIMEDOUT to ALL_CONNECTION_FAILURE_ERRNOS"
}
] |
[
{
"msg_contents": "Hi All,\n\nWhile working on one of the internal projects I noticed that currently in\nPostgres, we do not allow normal users to alter attributes of the\nreplication user. However we do allow normal users to drop replication\nusers or to even rename it using the alter command. Is that behaviour ok?\nIf yes, can someone please help me understand how and why this is okay.\n\nHere is an example illustrating this behaviour:\n\nsupusr@postgres=# create user repusr with password 'repusr' replication;\nCREATE ROLE\n\nsupusr@postgres=# create user nonsu with password 'nonsu' createrole\ncreatedb;\nCREATE ROLE\n\nsupusr@postgres=# \\c postgres nonsu;\nYou are now connected to database \"postgres\" as user \"nonsu\".\n\nnonsu@postgres=> alter user repusr nocreatedb;\nERROR: 42501: must be superuser to alter replication roles or change\nreplication attribute\n\nnonsu@postgres=> alter user repusr rename to refusr;\nALTER ROLE\n\nnonsu@postgres=> drop user refusr;\nDROP ROLE\n\nnonsu@postgres=> create user repusr2 with password 'repusr2' replication;\nERROR: 42501: must be superuser to create replication users\n\n--\nWith Regards,\nAshutosh Sharma.\n\nHi All,While working on one of the internal projects I noticed that currently in Postgres, we do not allow normal users to alter attributes of the replication user. However we do allow normal users to drop replication users or to even rename it using the alter command. Is that behaviour ok? 
If yes, can someone please help me understand how and why this is okay.Here is an example illustrating this behaviour:supusr@postgres=# create user repusr with password 'repusr' replication;CREATE ROLEsupusr@postgres=# create user nonsu with password 'nonsu' createrole createdb;CREATE ROLEsupusr@postgres=# \\c postgres nonsu;You are now connected to database \"postgres\" as user \"nonsu\".nonsu@postgres=> alter user repusr nocreatedb;ERROR: 42501: must be superuser to alter replication roles or change replication attributenonsu@postgres=> alter user repusr rename to refusr;ALTER ROLEnonsu@postgres=> drop user refusr;DROP ROLEnonsu@postgres=> create user repusr2 with password 'repusr2' replication;ERROR: 42501: must be superuser to create replication users--With Regards,Ashutosh Sharma.",
"msg_date": "Thu, 30 Sep 2021 15:37:02 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "non-superusers are allowed to drop the replication user, but are not\n allowed to alter or even create them, is that ok?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 3:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> While working on one of the internal projects I noticed that currently in Postgres, we do not allow normal users to alter attributes of the replication user. However we do allow normal users to drop replication users or to even rename it using the alter command. Is that behaviour ok? If yes, can someone please help me understand how and why this is okay.\n>\n> Here is an example illustrating this behaviour:\n>\n> supusr@postgres=# create user repusr with password 'repusr' replication;\n> CREATE ROLE\n>\n> supusr@postgres=# create user nonsu with password 'nonsu' createrole createdb;\n> CREATE ROLE\n>\n> supusr@postgres=# \\c postgres nonsu;\n> You are now connected to database \"postgres\" as user \"nonsu\".\n>\n> nonsu@postgres=> alter user repusr nocreatedb;\n> ERROR: 42501: must be superuser to alter replication roles or change replication attribute\n>\n> nonsu@postgres=> alter user repusr rename to refusr;\n> ALTER ROLE\n>\n> nonsu@postgres=> drop user refusr;\n> DROP ROLE\n>\n> nonsu@postgres=> create user repusr2 with password 'repusr2' replication;\n> ERROR: 42501: must be superuser to create replication users\n\nI think having createrole for a non-super allows them to rename/drop a\nuser with a replication role. Because renaming/creating/dropping roles\nis what createrole/nocreaterole is meant for.\n\npostgres=# create user nonsu_nocreterole with createdb;\nCREATE ROLE\npostgres=# set role nonsu_nocreterole;\nSET\npostgres=> alter user repusr rename to refusr;\nERROR: permission denied to rename role\npostgres=> drop user refusr;\nERROR: permission denied to drop role\npostgres=>\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 19:45:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: non-superusers are allowed to drop the replication user, but are\n not allowed to alter or even create them, is that ok?"
},
{
"msg_contents": "\n\n> On Sep 30, 2021, at 3:07 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> \n> While working on one of the internal projects I noticed that currently in Postgres, we do not allow normal users to alter attributes of the replication user. However we do allow normal users to drop replication users or to even rename it using the alter command. Is that behaviour ok? If yes, can someone please help me understand how and why this is okay.\n\nThe definition of CREATEROLE is a bit of a mess. Part of the problem is that roles do not have owners, which makes the permissions to drop roles work differently than for other object types. I have a patch pending [1] for the version 15 development cycle that fixes this and other problems. I'd appreciate feedback on the design and whether it addresses your concerns.\n\n[1] https://commitfest.postgresql.org/34/3223/\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Sep 2021 08:10:06 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-superusers are allowed to drop the replication user, but are\n not allowed to alter or even create them, is that ok?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 8:40 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Sep 30, 2021, at 3:07 AM, Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > While working on one of the internal projects I noticed that currently\n> in Postgres, we do not allow normal users to alter attributes of the\n> replication user. However we do allow normal users to drop replication\n> users or to even rename it using the alter command. Is that behaviour ok?\n> If yes, can someone please help me understand how and why this is okay.\n>\n> The definition of CREATEROLE is a bit of a mess. Part of the problem is\n> that roles do not have owners, which makes the permissions to drop roles\n> work differently than for other object types. I have a patch pending [1]\n> for the version 15 development cycle that fixes this and other problems.\n> I'd appreciate feedback on the design and whether it addresses your\n> concerns.\n>\n> [1] https://commitfest.postgresql.org/34/3223/\n\n\nThanks Mark. I'll take a look at this thread in detail to see if\nit addresses the issue raised here. Although from the first email it seems\nlike the proposal is about allowing normal users to set some of the GUC\nparams that can only be set by the superusers.\n\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Sep 30, 2021 at 8:40 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> On Sep 30, 2021, at 3:07 AM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> \n> While working on one of the internal projects I noticed that currently in Postgres, we do not allow normal users to alter attributes of the replication user. However we do allow normal users to drop replication users or to even rename it using the alter command. Is that behaviour ok? If yes, can someone please help me understand how and why this is okay.\n\nThe definition of CREATEROLE is a bit of a mess. 
Part of the problem is that roles do not have owners, which makes the permissions to drop roles work differently than for other object types. I have a patch pending [1] for the version 15 development cycle that fixes this and other problems. I'd appreciate feedback on the design and whether it addresses your concerns.\n\n[1] https://commitfest.postgresql.org/34/3223/Thanks Mark. I'll take a look at this thread in detail to see if it addresses the issue raised here. Although from the first email it seems like the proposal is about allowing normal users to set some of the GUC params that can only be set by the superusers.With Regards,Ashutosh Sharma.",
"msg_date": "Fri, 1 Oct 2021 09:56:11 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-superusers are allowed to drop the replication user, but are\n not allowed to alter or even create them, is that ok?"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 7:45 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Sep 30, 2021 at 3:37 PM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> >\n> > Hi All,\n> >\n> > While working on one of the internal projects I noticed that currently\n> in Postgres, we do not allow normal users to alter attributes of the\n> replication user. However we do allow normal users to drop replication\n> users or to even rename it using the alter command. Is that behaviour ok?\n> If yes, can someone please help me understand how and why this is okay.\n> >\n> > Here is an example illustrating this behaviour:\n> >\n> > supusr@postgres=# create user repusr with password 'repusr' replication;\n> > CREATE ROLE\n> >\n> > supusr@postgres=# create user nonsu with password 'nonsu' createrole\n> createdb;\n> > CREATE ROLE\n> >\n> > supusr@postgres=# \\c postgres nonsu;\n> > You are now connected to database \"postgres\" as user \"nonsu\".\n> >\n> > nonsu@postgres=> alter user repusr nocreatedb;\n> > ERROR: 42501: must be superuser to alter replication roles or change\n> replication attribute\n> >\n> > nonsu@postgres=> alter user repusr rename to refusr;\n> > ALTER ROLE\n> >\n> > nonsu@postgres=> drop user refusr;\n> > DROP ROLE\n> >\n> > nonsu@postgres=> create user repusr2 with password 'repusr2'\n> replication;\n> > ERROR: 42501: must be superuser to create replication users\n>\n> I think having createrole for a non-super allows them to rename/drop a\n> user with a replication role. 
Because renaming/creating/dropping roles\n> is what createrole/nocreaterole is meant for.\n>\n\nWell, if we go by this theory then the CREATE ROLE command shouldn't have\nfailed, right?\n\n--\nWith Regards,\nAshutosh Sharma.\n\nOn Thu, Sep 30, 2021 at 7:45 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Thu, Sep 30, 2021 at 3:37 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> While working on one of the internal projects I noticed that currently in Postgres, we do not allow normal users to alter attributes of the replication user. However we do allow normal users to drop replication users or to even rename it using the alter command. Is that behaviour ok? If yes, can someone please help me understand how and why this is okay.\n>\n> Here is an example illustrating this behaviour:\n>\n> supusr@postgres=# create user repusr with password 'repusr' replication;\n> CREATE ROLE\n>\n> supusr@postgres=# create user nonsu with password 'nonsu' createrole createdb;\n> CREATE ROLE\n>\n> supusr@postgres=# \\c postgres nonsu;\n> You are now connected to database \"postgres\" as user \"nonsu\".\n>\n> nonsu@postgres=> alter user repusr nocreatedb;\n> ERROR: 42501: must be superuser to alter replication roles or change replication attribute\n>\n> nonsu@postgres=> alter user repusr rename to refusr;\n> ALTER ROLE\n>\n> nonsu@postgres=> drop user refusr;\n> DROP ROLE\n>\n> nonsu@postgres=> create user repusr2 with password 'repusr2' replication;\n> ERROR: 42501: must be superuser to create replication users\n\nI think having createrole for a non-super allows them to rename/drop a\nuser with a replication role. Because renaming/creating/dropping roles\nis what createrole/nocreaterole is meant for.Well, if we go by this theory then the CREATE ROLE command shouldn't have failed, right?--With Regards,Ashutosh Sharma.",
"msg_date": "Fri, 1 Oct 2021 09:58:38 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: non-superusers are allowed to drop the replication user, but are\n not allowed to alter or even create them, is that ok?"
},
{
"msg_contents": "\n\n> On Sep 30, 2021, at 9:26 PM, Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> \n> I'll take a look at this thread in detail to see if it addresses the issue raised here. Although from the first email it seems like the proposal is about allowing normal users to set some of the GUC params that can only be set by the superusers.\n\nRight. The bit that you will be interested in patches 1, 19, 20, and 21:\n\n\nSubject: [PATCH v8 01/21] Add tests of the CREATEROLE attribute.\n\nWhile developing alternate rules for what privileges CREATEROLE has,\nI noticed that none of the changes to how CREATEROLE works triggered\nany regression test failures. This is problematic for two reasons.\nIt means the existing code has insufficient test coverage, and it\nmeans that unintended changes introduced by subsequent patches may\ngo unnoticed. Fix that.\n\n\nSubject: [PATCH v8 19/21] Add owners to roles\n\nAll roles now have owners. By default, roles belong to the role\nthat created them, and initdb-time roles are owned by POSTGRES.\n\nThis is a preparatory patch for changing how CREATEROLE works.\n\n\nSubject: [PATCH v8 20/21] Give role owners control over owned roles\n\nCreate a role ownership hierarchy. The previous commit added owners\nto roles. This goes further, making role ownership transitive. If\nrole A owns role B, and role B owns role C, then role A can act as\nthe owner of role C. Also, roles A and B can perform any action on\nobjects belonging to role C that role C could itself perform.\n\nThis is a preparatory patch for changing how CREATEROLE works.\n\n\nSubject: [PATCH v8 21/21] Restrict power granted via CREATEROLE.\n\nThe CREATEROLE attribute no longer has anything to do with the power\nto alter roles or to grant or revoke role membership, but merely the\nability to create new roles, as its name suggests. 
The ability to\nalter a role is based on role ownership; the ability to grant and\nrevoke role membership is based on having admin privilege on the\nrelevant role or alternatively on role ownership, as owners now\nimplicitly have admin privileges on roles they own.\n\nA role must either be superuser or have the CREATEROLE attribute to\ncreate roles. This is unchanged from the prior behavior. A new\nprinciple is adopted, though, to make CREATEROLE less dangerous: a\nrole may not create new roles with privileges that the creating role\nlacks. This new principle is intended to prevent privilege\nescalation attacks stemming from giving CREATEROLE to a user. This\nis not backwards compatible. The idea is to fix the CREATEROLE\nprivilege to not be pathway to gaining superuser, and no\nnon-breaking change to accomplish that is apparent. \n \nSUPERUSER, REPLICATION, BYPASSRLS, CREATEDB, CREATEROLE and LOGIN\nprivilege can only be given to new roles by creators who have the\nsame privilege. In the case of the CREATEROLE privilege, this is\ntrivially true, as the creator must necessarily have it or they\ncouldn't be creating the role to begin with.\n\nThe INHERIT attribute is not considered a privilege, and since a\nuser who belongs to a role may SET ROLE to that role and do anything\nthat role can do, it isn't clear that treating it as a privilege\nwould stop any privilege escalation attacks.\n\nThe CONNECTION LIMIT and VALID UNTIL attributes are also not\nconsidered privileges, but this design choice is debatable. One\ncould think of the ability to log in during a given window of time,\nor up to a certain number of connections as a privilege, and\nallowing such a restricted role to create a new role with unlimited\nconnections or no expiration as a privilege escalation which escapes\nthe intended restrictions. 
However, it is just as easy to think of\nthese limitations as being used to guard against badly written\nclient programs connecting too many times, or connecting at a time\nof day that is not intended. Since it is unclear which design is\nbetter, this commit is conservative and the handling of these\nattributes is unchanged relative to prior behavior.\n\nSince the grammar of the CREATE ROLE command allows specifying roles\ninto which the new role should be enrolled, and also lists of roles\nwhich become members of the newly created role (as admin or not),\nthe CREATE ROLE command may now fail if the creating role has\ninsufficient privilege on the roles so listed. Such failures were\nnot possible before, since the CREATEROLE privilege was always\nsufficient.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 07:38:52 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: non-superusers are allowed to drop the replication user, but are\n not allowed to alter or even create them, is that ok?"
}
] |
[
{
"msg_contents": "The new connection made by PQcancel does not use the tcp_user_timeout, connect_timeout or any of the keepalive settings that are provided in the connection string. This means that a call to PQcancel can block for a much longer time than intended if there are network issues. This can be especially impactful, because PQcancel is a blocking function an there is no non blocking version of it. \n\nI attached a proposed patch to use the tcp_user_timeout from the connection string when connecting to Postgres in PQcancel. This resolves the issue for me, since this will make connecting timeout after a configurable time. So the other options are not strictly needed. It might still be nice for completeness to support them too though. I didn't do this yet, because I first wanted some feedback and also because implementing connect_timeout would require using non blocking TCP to connect and then use select to have a timeout.",
"msg_date": "Thu, 30 Sep 2021 14:44:45 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "PQcancel does not use tcp_user_timeout, connect_timeout and keepalive\n settings"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 7:45 AM Jelte Fennema <Jelte.Fennema@microsoft.com>\nwrote:\n\n> The new connection made by PQcancel does not use the tcp_user_timeout,\n> connect_timeout or any of the keepalive settings that are provided in the\n> connection string. This means that a call to PQcancel can block for a much\n> longer time than intended if there are network issues. This can be\n> especially impactful, because PQcancel is a blocking function an there is\n> no non blocking version of it.\n>\n> I attached a proposed patch to use the tcp_user_timeout from the\n> connection string when connecting to Postgres in PQcancel. This resolves\n> the issue for me, since this will make connecting timeout after a\n> configurable time. So the other options are not strictly needed. It might\n> still be nice for completeness to support them too though. I didn't do this\n> yet, because I first wanted some feedback and also because implementing\n> connect_timeout would require using non blocking TCP to connect and then\n> use select to have a timeout.\n\n\nHi,\n\n int be_key; /* key of backend --- needed for cancels */\n+ int pgtcp_user_timeout; /* tcp user timeout */\n\nThe other field names are quite short. How about naming the field\ntcp_timeout ?\n\nCheers\n\nOn Thu, Sep 30, 2021 at 7:45 AM Jelte Fennema <Jelte.Fennema@microsoft.com> wrote:The new connection made by PQcancel does not use the tcp_user_timeout, connect_timeout or any of the keepalive settings that are provided in the connection string. This means that a call to PQcancel can block for a much longer time than intended if there are network issues. This can be especially impactful, because PQcancel is a blocking function an there is no non blocking version of it. \n\nI attached a proposed patch to use the tcp_user_timeout from the connection string when connecting to Postgres in PQcancel. This resolves the issue for me, since this will make connecting timeout after a configurable time. 
So the other options are not strictly needed. It might still be nice for completeness to support them too though. I didn't do this yet, because I first wanted some feedback and also because implementing connect_timeout would require using non blocking TCP to connect and then use select to have a timeout.Hi, int be_key; /* key of backend --- needed for cancels */+ int pgtcp_user_timeout; /* tcp user timeout */ The other field names are quite short. How about naming the field tcp_timeout ?Cheers",
"msg_date": "Thu, 30 Sep 2021 07:52:22 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: PQcancel does not use tcp_user_timeout, connect_timeout and\n keepalive settings"
},
{
"msg_contents": "We actually ran into an issue caused by this in production, where a PQcancel connection was open on the client for a 2+ days because the server had restarted at the wrong moment in the cancel handshake. The client was now indefinitely waiting for the server to send an EOF back, and because keepalives were not enabled on this socket it was never closed.\n\nI attached an updated patch which also uses the keepalive settings in PQ. The connect_timeout is a bit harder to get it to work. As far as I can tell it would require something like this. https://stackoverflow.com/a/2597774/2570866\n\n> The other field names are quite short. How about naming the field tcp_timeout ?\n\nI kept the same names as in the pg_conn struct for consistency sake.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 19:56:37 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Ugh forgot to attach the patch. Here it is.\n________________________________\nFrom: Jelte Fennema <Jelte.Fennema@microsoft.com>\nSent: Wednesday, October 6, 2021 21:56\nTo: Zhihong Yu <zyu@yugabyte.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout, connect_timeout and keepalive settings\n\nWe actually ran into an issue caused by this in production, where a PQcancel connection was open on the client for a 2+ days because the server had restarted at the wrong moment in the cancel handshake. The client was now indefinitely waiting for the server to send an EOF back, and because keepalives were not enabled on this socket it was never closed.\n\nI attached an updated patch which also uses the keepalive settings in PQ. The connect_timeout is a bit harder to get it to work. As far as I can tell it would require something like this. https://stackoverflow.com/a/2597774/2570866\n\n> The other field names are quite short. How about naming the field tcp_timeout ?\n\nI kept the same names as in the pg_conn struct for consistency sake.",
"msg_date": "Wed, 6 Oct 2021 19:58:32 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "\n\nOn 2021/10/07 4:58, Jelte Fennema wrote:\n> Ugh forgot to attach the patch. Here it is.\n\nThanks for working on this patch!\n\n\n@@ -4546,10 +4684,21 @@ PQrequestCancel(PGconn *conn)\n \n \t\treturn false;\n \t}\n-\n-\tr = internal_cancel(&conn->raddr, conn->be_pid, conn->be_key,\n\nSince PQrequestCancel() is marked as deprecated, I don't think that\nwe need to add the feature into it.\n\n\n+\t\tif (cancel->pgtcp_user_timeout >= 0) {\n+\t\t\tif (setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,\n+\t\t\t\t\t\t (char *) &cancel->pgtcp_user_timeout,\n+\t\t\t\t\t\t sizeof(cancel->pgtcp_user_timeout)) < 0) {\n+\t\t\t\tgoto cancel_errReturn;\n+\t\t\t}\n+\t\t}\n\nlibpq has already setKeepalivesXXX() functions to do the almost same thing.\nIsn't it better to modify and reuse them instead of adding the almost\nsame code, to simplify the code?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 7 Oct 2021 09:46:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Since PQrequestCancel() is marked as deprecated, I don't think that\n> we need to add the feature into it.\n\nNot absolutely necessary, agreed, but it looks like it's pretty\neasy to make that happen, so why not? I'd suggest dropping the\nseparate implementation and turning PQrequestCancel() into a\nthin wrapper around PQgetCancel, PQcancel, PQfreeCancel.\n\n> libpq has already setKeepalivesXXX() functions to do the almost same thing.\n> Isn't it better to modify and reuse them instead of adding the almost\n> same code, to simplify the code?\n\nI find this patch fairly scary, because it's apparently been coded\nwith little regard for the expectation that PQcancel can be called\nwithin a signal handler.\n\nI see that setsockopt(2) is specified to be async signal safe by\nPOSIX, so at least in principle it is okay to add those calls to\nPQcancel. But you've got to be really careful what else you do in\nthere. You can NOT use appendPQExpBuffer. You can NOT access the\nPGconn (I don't think the Windows part of this even compiles ...\nnope, it doesn't, per the cfbot). I'm not sure that WSAIoctl\nis safe to use in a signal handler, so on the whole I think\nI'd drop the Windows-specific chunk altogether. But in any case,\nI'm very strongly against calling out to other libpq code from here,\nbecause then the signal-safety restrictions start applying to that\nother code too, and that's a recipe for trouble in future.\n\nThe patch could use a little attention to conforming to PG coding\nconventions (comment style, brace style, C99 declarations are all\nwrong --- pgindent would fix much of that, but maybe not in a way\nyou like). The lack of comments about why it's doing what it's doing\nneeds to be rectified, too. Why are these specific options important\nand not any others?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 10 Nov 2021 16:06:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "> I'd suggest dropping the separate implementation and turning\n> PQrequestCancel() into a thin wrapper around PQgetCancel,\n> PQcancel, PQfreeCancel.\n\nFine by me. I didn't want to change behavior of a deprecated\nfunction.\n\n> I find this patch fairly scary, because it's apparently been coded\n> with little regard for the expectation that PQcancel can be called\n> within a signal handler.\n> You can NOT use appendPQExpBuffer. You can NOT access the\n> PGconn (I don't think the Windows part of this even compiles ...\n> nope, it doesn't, per the cfbot).\n\nI guess I was tired or at least inattentive when I added the windows part at the end\nof my coding session. It really was the Linux part that I cared about. For that part\nI definitely took care to make the code signal safe. Which is also why I did not call out to\nany of the existing functions, like setKeepalivesXXX(). I don't think I'm the right person\nto write the windows code for this (I have zero C windows experience). So, if it's not\nrequired for this patch to be accepted I'll happily remove it.\n\n> The patch could use a little attention to conforming to PG coding\n> conventions (comment style, brace style, C99 declarations are all\n> wrong --- pgindent would fix much of that, but maybe not in a way\n> you like).\n\nSure, I'll run pgindent for my next version of the patch.\n\n> The lack of comments about why it's doing what it's doing\n> needs to be rectified, too. Why are these specific options important\n> and not any others?\n\nI'll make sure to add comments before the final version of this patch. This\npatch was more meant as a draft to gauge if this was even the correct way of fixing\nthis problem.\n\nTo be honest I think it would make sense to add a new PQcancel function that is not\nrequired to be signal safe and reuses regular connection setup code. This would make sure\noptions like this are supported automatically in the future. 
Another advantage is that it would\nallow for sending cancel messages in a non-blocking way. So, you would be able to easily\nsend multiple cancels in a concurrent way. It looks to me like PQcancel is mostly designed\nthe way it is to keep it easy for psql to send cancelations. I think many other uses of PQcancel\ndon't require it to be signal safe at all (at least for Citus its usage signal safety is not required).\n\n\n\n\n\n\n\n\n> I'd suggest dropping the separate implementation and turning \n\n\n> PQrequestCancel() into a thin wrapper around PQgetCancel, \n> PQcancel, PQfreeCancel.\n\n\nFine by me. I didn't want to change behavior of a deprecated \nfunction.\n\n> I find this patch fairly scary, because it's apparently been coded\n> with little regard for the expectation that PQcancel can be called\n> within a signal handler.\n> You can NOT use appendPQExpBuffer. You can NOT access the\n> PGconn (I don't think the Windows part of this even compiles ...\n> nope, it doesn't, per the cfbot).\n\n\nI guess I was tired or at least inattentive when I added the windows part at the end\nof my coding session. It really was the Linux part that I cared about. For that part \nI definitely took care to make the code signal safe. Which is also why I did not call out to \nany of the existing functions, like setKeepalivesXXX(). I don't think I'm the right person\nto write the windows code for this (I have zero C windows experience). So, if it's not \nrequired for this patch to be accepted I'll happily remove it.\n\n> The patch could use a little attention to conforming to PG coding\n> conventions (comment style, brace style, C99 declarations are all\n> wrong --- pgindent would fix much of that, but maybe not in a way\n> you like). \n\n\nSure, I'll run pgindent for my next version of the patch.\n\n\n> The lack of comments about why it's doing what it's doing\n> needs to be rectified, too. 
Why are these specific options important\n> and not any others?\n\n\nI'll make sure to add comments before the final version of this patch. This \npatch was more meant as a draft to gauge if this was even the correct way of fixing \nthis problem. \n\nTo be honest I think it would make sense to add a new PQcancel function that is not \nrequired to be signal safe and reuses regular connection setup code. This would make sure\noptions like this are supported automatically in the future. Another advantage is that it would \nallow for sending cancel messages in a non-blocking way. So, you would be able to easily \nsend multiple cancels in a concurrent way. It looks to me like PQcancel is mostly designed \nthe way it is to keep it easy for psql to send cancelations. I think many other uses of PQcancel\ndon't require it to be signal safe at all (at least for Citus its usage signal safety is not required).",
"msg_date": "Wed, 10 Nov 2021 22:38:34 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "I was able to spend some time on this again. I attached two patches to this email:\n\nThe first patch is a cleaned up version of my previous patch. I think I addressed\nall feedback on the previous version in that patch (e.g. removed windows code, \nfixed formatting).\n\nThe second patch is a new one, it implements honouring of the connect_timeout \nconnection option in PQcancel. This patch requires the first patch to also be applied,\nbut since it seemed fairly separate and the code is not trivial I didn't want the first\npatch to be blocked on this.\n\nFinally, I would love it if once these fixes are merged the would also be backpatched to \nprevious versions of libpq. Does that seem possible? As far as I can tell it would be fine, \nsince it doesn't really change any of the public APIs. The only change is that the pg_cancel \nstruct now has a few additional fields. But since that struct is defined in libpq-int.h, so that \nstruct should not be used by users of libpq directly, right?.",
"msg_date": "Tue, 28 Dec 2021 15:49:00 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-28 15:49:00 +0000, Jelte Fennema wrote:\n> The first patch is a cleaned up version of my previous patch. I think I addressed\n> all feedback on the previous version in that patch (e.g. removed windows code, \n> fixed formatting).\n\nTo me it seems a bit problematic to introduce a divergence between windows /\neverything else here. Isn't that just going to lead to other complaints just\nlike this thread, where somebody discovered the hard way that there's platform\ndependent behaviour here?\n\n\n> The second patch is a new one, it implements honouring of the connect_timeout \n> connection option in PQcancel. This patch requires the first patch to also be applied,\n> but since it seemed fairly separate and the code is not trivial I didn't want the first\n> patch to be blocked on this.\n> \n> Finally, I would love it if once these fixes are merged the would also be backpatched to \n> previous versions of libpq. Does that seem possible? As far as I can tell it would be fine, \n> since it doesn't really change any of the public APIs. The only change is that the pg_cancel \n> struct now has a few additional fields. But since that struct is defined in libpq-int.h, so that \n> struct should not be used by users of libpq directly, right?.\n\nI'm not really convinced this is a good patch to backpatch. There does seem to\nbe some potential for subtle breakage - code in signal handlers is notoriously\nfinnicky, it's a rarely exercised code path, etc. It's also not fixing\nsomething that previously worked.\n\n\n> +\t * NOTE: These socket options are currently not set for Windows. 
The\n> +\t * reason is that signal safety in this function is very important, and it\n> +\t * was not clear to if the functions required to set the socket options on\n> +\t * Windows were signal-safe.\n> +\t */\n> +#ifndef WIN32\n> +\tif (!IS_AF_UNIX(cancel->raddr.addr.ss_family))\n> +\t{\n> +#ifdef TCP_USER_TIMEOUT\n> +\t\tif (cancel->pgtcp_user_timeout >= 0)\n> +\t\t{\n> +\t\t\tif (setsockopt(tmpsock, IPPROTO_TCP, TCP_USER_TIMEOUT,\n> +\t\t\t\t\t\t (char *) &cancel->pgtcp_user_timeout,\n> +\t\t\t\t\t\t sizeof(cancel->pgtcp_user_timeout)) < 0)\n> +\t\t\t{\n> +\t\t\t\tstrlcpy(errbuf, \"PQcancel() -- setsockopt(TCP_USER_TIMEOUT) failed: \", errbufsize);\n> +\t\t\t\tgoto cancel_errReturn;\n> +\t\t\t}\n> +\t\t}\n> +#endif\n> +\n> +\t\tif (cancel->keepalives != 0)\n> +\t\t{\n> +\t\t\tint\t\t\ton = 1;\n> +\n> +\t\t\tif (setsockopt(tmpsock,\n> +\t\t\t\t\t\t SOL_SOCKET, SO_KEEPALIVE,\n> +\t\t\t\t\t\t (char *) &on, sizeof(on)) < 0)\n> +\t\t\t{\n> +\t\t\t\tstrlcpy(errbuf, \"PQcancel() -- setsockopt(SO_KEEPALIVE) failed: \", errbufsize);\n> +\t\t\t\tgoto cancel_errReturn;\n> +\t\t\t}\n> +\t\t}\n\nThis is very repetitive - how about introducing a helper function for this?\n\n\n\n> @@ -4467,8 +4601,8 @@ retry3:\n> \n> \tcrp.packetlen = pg_hton32((uint32) sizeof(crp));\n> \tcrp.cp.cancelRequestCode = (MsgType) pg_hton32(CANCEL_REQUEST_CODE);\n> -\tcrp.cp.backendPID = pg_hton32(be_pid);\n> -\tcrp.cp.cancelAuthCode = pg_hton32(be_key);\n> +\tcrp.cp.backendPID = pg_hton32(cancel->be_pid);\n> +\tcrp.cp.cancelAuthCode = pg_hton32(cancel->be_key);\n\n\nOthers might differ, but I'd separate changing the type passed to\ninternal_cancel() into its own commit.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Jan 2022 10:42:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-12-28 15:49:00 +0000, Jelte Fennema wrote:\n>> Finally, I would love it if once these fixes are merged the would also be backpatched to \n>> previous versions of libpq.\n\n> I'm not really convinced this is a good patch to backpatch. There does seem to\n> be some potential for subtle breakage - code in signal handlers is notoriously\n> finnicky, it's a rarely exercised code path, etc. It's also not fixing\n> something that previously worked.\n\nIMO, this is a new feature not a bug fix, and as such it's not backpatch\nmaterial ... especially if we don't have 100.00% confidence in it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Jan 2022 13:54:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Attached are 3 patches that address the feedback from Andres about code duplication \nand splitting up commits. I completely removed internal_cancel now, since it only had \none caller at this point.\n\n> IMO, this is a new feature not a bug fix\n\nIMO this is definitely a bugfix. Nowhere in the libpq docs it stated that the connection \noptions in question do not apply to connections that are opened for cancellations. So \nas a user I definitely expect that any connections that libpq opens would use these options.\nWhich is why I definitely consider it a bug that they are currently not honoured for cancel \nrequests. \n\nHowever, even though I think it's a bugfix, I can understand the being hesitant to \nbackport this. IMHO in that case at least the docs should be updated to explain this \ndiscrepancy. I attached a patch to do so against the docs on the REL_14_STABLE branch.\n\n> To me it seems a bit problematic to introduce a divergence between windows /\n> everything else here. Isn't that just going to lead to other complaints just\n> like this thread, where somebody discovered the hard way that there's platform\n> dependent behaviour here?\n\nOf course, fixing this also for windows would be much better. There's two problems:\n1. I cannot find any clear documentation on which functions are signal safe in Windows \n and which are not. The only reference I can find is this: https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/signal?view=msvc-170 \n However, this says that you should not use any function that generates a system call. \n PQcancel is obviously already violating that when calling \"connect\", so this is not very helpful.\n2. My Windows C experience is non existent, so I don't think I would be the right person to write this code.\n\nIMO blocking this bugfix, because it does not fix it for Windows, would be an example of perfect \nbecoming the enemy of good. 
One thing I could do is add a note to the docs that these options \nare not supported on Windows for cancellation requests (similar to my proposed doc change \nfor PG14 and below).",
"msg_date": "Thu, 6 Jan 2022 15:58:28 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Attached are 3 patches that address the feedback from Andres about code duplication \n> and splitting up commits. I completely removed internal_cancel now, since it only had \n> one caller at this point.\n\nHere's some cleaned-up versions of 0001 and 0002. I have not bothered\nwith 0003 because I for one will not be convinced to commit it. The\nrisk-vs-reward ratio looks far too high on that, and it's not obvious\nwhy 0002 doesn't already solve whatever problem there is.\n\nA couple of notes:\n\n0001 introduces malloc/free into PQrequestCancel, which promotes it\nfrom being \"probably unsafe in a signal handler\" to \"definitely unsafe\nin a signal handler\". Before, maybe you could have gotten away with\nit if you were sure the PGconn object was idle, but now it's no-go for\nsure. I don't have a big problem with that, given that it's been\ndeprecated for decades, but I made the warning text in libpq.sgml a\nlittle stronger.\n\nAs for 0002, I don't see a really good reason why we shouldn't try\nto do it on Windows too. If connect() will work, then it seems\nlikely that setsockopt() and WSAIOCtl() will too. Moreover, note\nthat at least in our own programs, PQcancel doesn't *really* run in a\nsignal handler on Windows: see commentary in src/fe_utils/cancel.c.\n(The fact that we now have test coverage for PQcancel makes me a lot\nmore willing to try this than I might otherwise be. Will be\ninterested to see the cfbot's results.)\n\nAlso, I was somewhat surprised while working on this to realize\nthat PQconnectPoll doesn't call setTCPUserTimeout if keepalives\nare disabled per useKeepalives(). I made PQcancel act the same,\nbut I wonder if that was intentional or a bug. I'd personally\nhave thought that those settings were independent.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 11 Jan 2022 18:27:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "I wrote:\n> (The fact that we now have test coverage for PQcancel makes me a lot\n> more willing to try this than I might otherwise be. Will be\n> interested to see the cfbot's results.)\n\nOn closer look, I'm not sure that psql/t/020_cancel.pl is actually doing\nanything on Windows; the cfbot's test transcript says it ran without\nskipping, but looking at the test itself, it seems like it would skip on\nWindows.\n\nMeanwhile, the warnings build noticed that we need to #ifdef\nout the optional_setsockopt function in some cases. Revised\n0002 attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 11 Jan 2022 19:39:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "... btw, speaking of signal-safe functions: I am dismayed to\nnotice that strerror (and strerror_r) are *not* in POSIX's\nlist of async-signal-safe functions. This is really quite\nunsurprising, considering that they are chartered to return\nlocale-dependent strings. Unless the data has already been\ncollected in the current process, that'd imply reading something\nfrom the locale definition files, allocating memory to hold it,\netc.\n\nSo I'm now thinking this bit in PQcancel is completely unsafe:\n\n strncat(errbuf, SOCK_STRERROR(SOCK_ERRNO, sebuf, sizeof(sebuf)),\n maxlen);\n\nSeems we have to give up on providing any details beyond the\nname of the function that failed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 11 Jan 2022 23:15:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Thanks for all the cleanup and adding of windows support. To me it now looks good to merge.\n\nMeanwhile I've created another patch that adds, a non-blocking version of PQcancel to libpq.\nWhich doesn't have this problem by design, because it simply reuses the normal code for \nconnection establishement. And it also includes tests for PQcancel itself.\n\n",
"msg_date": "Wed, 12 Jan 2022 16:11:26 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> Thanks for all the cleanup and adding of windows support. To me it now looks good to merge.\n\nI was about to commit this when I started to wonder if it actually does\nanything useful. In particular, I read in the Linux tcp(7) man page\n\n TCP_USER_TIMEOUT (since Linux 2.6.37)\n ...\n This option can be set during any state of a TCP connection, but\n is effective only during the synchronized states of a connection\n (ESTABLISHED, FIN-WAIT-1, FIN-WAIT-2, CLOSE-WAIT, CLOSING, and\n LAST-ACK).\n\nISTM that the case we care about is where the server fails to respond\nto the TCP connection request. If it does so, it seems pretty unlikely\nthat it wouldn't then eat the small amount of data we're going to send.\n\nWhile the keepalive options aren't so explicitly documented, I'd bet that\nthey too don't have any effect until the connection is known established.\n\nSo I'm unconvinced that setting these values really has much effect\nto make PQcancel more robust (or more accurately, make it fail faster).\nI would like to know what scenarios you tested to convince you that\nthis is worth doing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 15:39:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "It seems the man page of TCP_USER_TIMEOUT does not align with \nreality then. When I use it on my local machine it is effectively used\nas a connection timeout too. The second command times out after \ntwo seconds:\n\nsudo iptables -A INPUT -p tcp --destination-port 5432 -j DROP\npsql 'host=localhost tcp_user_timeout=2000'\n\nThe keepalive settings only apply once you get to the recv however. And yes, \nit is pretty unlikely for the connection to break right when it is waiting for data.\nBut it has happened for us. And when it happens it is really bad, because\nthe process will be blocked forever. Since it is a blocking call.\n\nAfter investigation when this happened it seemed to be a combination of a few\nthings making this happen: \n1. The way citus uses cancelation requests: A Citus query on the coordinator creates \n multiple connections to a worker and with 2PC for distributed transactions. If one \n connection receives an error it sends a cancel request for all others.\n2. When a machine is under heavy CPU or memory pressure things don't work\n well: \n i. errors can occur pretty frequently, causing lots of cancels to be sent by Citus.\n ii. postmaster can be slow in handling new cancelation requests.\n iii. Our failover system can think the node is down, because health checks are\n failing.\n3. Our failover system effectively cuts the power and the network of the primary \n when it triggers a fail over to the secondary\n\nThis all together can result in a cancel request being interrupted right at that \nwrong moment. And when it happens a distributed query on the Citus \ncoordinator, becomes blocked forever. We've had queries stuck in this state \nfor multiple days. The only way to get out of it at that point is either by restarting\npostgres or manually closing the blocked socket (either with ss or gdb).\n\nJelte\n\n",
"msg_date": "Tue, 18 Jan 2022 00:35:36 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
},
{
"msg_contents": "Jelte Fennema <Jelte.Fennema@microsoft.com> writes:\n> It seems the man page of TCP_USER_TIMEOUT does not align with \n> reality then. When I use it on my local machine it is effectively used\n> as a connection timeout too.\n\nHuh, I should have thought to try that. I confirm this behavior\non RHEL8 (kernel 4.18.0). Not the first mistake I've seen in\nLinux man pages :-(.\n\nDoesn't seem to help on macOS, but AFAICT that platform doesn't\nhave TCP_USER_TIMEOUT, so no surprise there.\n\nAnyway, that removes my objection, so I'll proceed with committing.\nThanks for working on this patch!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:32:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: PQcancel does not use tcp_user_timeout,\n connect_timeout and keepalive settings"
}
]
[
{
"msg_contents": "Hi,\n\nCI showed me a failure in 002_types.pl on windows. I only just now noticed\nthat because the subscription tests aren't run by any of the vcregress.pl\nsteps :(\n\nIt turns out to be dependant on the current timezone. I have just about zero\nunderstanding how timezones work on windows, so I can't really interpret why\nthat causes a problem on windows, but apparently not on linux.\n\nThe CI instance not unreasonably runs with the timezone set to GMT. With that\nthe tests fail. If I set it to PST, they work. For the detailed (way too long)\noutput see [1]. The relevant excerpt:\n\ntzutil /s \"Pacific Standard Time\"\n...\ntimeout -k60s 30m perl src/tools/msvc/vcregress.pl taptest .\\src\\test\\subscription\\ || true\nt/002_types.pl ..................... ok\n..\n\ntzutil /s \"Greenwich Standard Time\"\ntimeout -k60s 30m perl src/tools/msvc/vcregress.pl taptest .\\src\\test\\subscription\\ || true\n..\n# Failed test 'check replicated inserts on subscriber'\n# at t/002_types.pl line 278.\n# got: '1|{1,2,3}\n...\n# 5|[5,51)\n# 1|[\"2014-08-04 00:00:00+02\",infinity)|{\"[1,3)\",\"[10,21)\"}\n# 2|[\"2014-08-02 01:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[2,4)\",\"[20,31)\"}\n# 3|[\"2014-08-01 01:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[3,5)\"}\n# 4|[\"2014-07-31 01:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[4,6)\",NULL,\"[40,51)\"}\n...\n# expected: '1|{1,2,3}\n...\n# 1|[\"2014-08-04 00:00:00+02\",infinity)|{\"[1,3)\",\"[10,21)\"}\n# 2|[\"2014-08-02 00:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[2,4)\",\"[20,31)\"}\n# 3|[\"2014-08-01 00:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[3,5)\"}\n# 4|[\"2014-07-31 00:00:00+02\",\"2014-08-04 00:00:00+02\")|{\"[4,6)\",NULL,\"[40,51)\"}\n...\n\nGreetings,\n\nAndres Freund\n\n[1] https://api.cirrus-ci.com/v1/task/5800120848482304/logs/check_tz_sub.log\n\n\n",
"msg_date": "Thu, 30 Sep 2021 11:36:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It turns out to be dependant on the current timezone. I have just about zero\n> understanding how timezones work on windows, so I can't really interpret why\n> that causes a problem on windows, but apparently not on linux.\n\nWeird. Unless you're using --with-system-tzdata, I wouldn't expect that\ncode to work any differently on Windows.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:12:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "\nOn 9/30/21 2:36 PM, Andres Freund wrote:\n> Hi,\n>\n> CI showed me a failure in 002_types.pl on windows. I only just now noticed\n> that because the subscription tests aren't run by any of the vcregress.pl\n> steps :(\n\n\n\nWe have windows buildfarm animals running the subscription tests, e.g.\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2021-09-29%2019%3A08%3A23&stg=subscription-check>\nand they do it by calling vcregress.pl.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:19:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/30/21 2:36 PM, Andres Freund wrote:\n>> CI showed me a failure in 002_types.pl on windows. I only just now noticed\n>> that because the subscription tests aren't run by any of the vcregress.pl\n>> steps :(\n\n> We have windows buildfarm animals running the subscription tests, e.g.\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2021-09-29%2019%3A08%3A23&stg=subscription-check>\n> and they do it by calling vcregress.pl.\n\nBut are they running with the prevailing zone set to \"Greenwich Standard\nTime\"?\n\nI dug around to see exactly how we handle that, and was somewhat\ngobsmacked to find this mapping in findtimezone.c:\n\n\t\t/* (UTC+00:00) Monrovia, Reykjavik */\n\t\t\"Greenwich Standard Time\", \"Greenwich Daylight Time\",\n\t\t\"Africa/Casablanca\"\n\nAccording to current tzdb,\n\n# Zone\tNAME\t\tSTDOFF\tRULES\tFORMAT\t[UNTIL]\nZone Africa/Casablanca\t-0:30:20 -\tLMT\t1913 Oct 26\n\t\t\t 0:00\tMorocco\t+00/+01\t1984 Mar 16\n\t\t\t 1:00\t-\t+01\t1986\n\t\t\t 0:00\tMorocco\t+00/+01\t2018 Oct 28 3:00\n\t\t\t 1:00\tMorocco\t+01/+00\n\nMorocco has had weird changes-every-year DST rules since 2008, which'd\ngo a long way towards explaining funny behavior with this zone, even\nwithout the \"reverse DST\" since 2018. And sure enough, 002_types.pl\nfalls over with TZ=Africa/Casablanca on my Linux machine, too.\n\nI'm inclined to think we ought to be translating that zone name to\nEurope/London instead. Or maybe we should translate to straight-up UTC?\nBut the option of \"Greenwich Daylight Time\" suggests that Windows thinks\nthis means UK civil time, not UTC.\n\nI wonder if findtimezone.c has any other surprising Windows mappings.\nI've never dug through that list particularly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 15:38:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "I wrote:\n> ... sure enough, 002_types.pl\n> falls over with TZ=Africa/Casablanca on my Linux machine, too.\n\nIndependently of whether Africa/Casablanca is a sane translation of\nthat Windows zone name, it'd be nice if 002_types.pl weren't so\nsensitive to the prevailing zone. I looked into exactly why it's\nfalling over, and the answer seems to be this bit:\n\n\t\t(2, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '2 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[2,3]\", \"[20,30]\"}'),\n\t\t(3, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '3 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[3,4]\"}'),\n\t\t(4, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '4 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[4,5]\", NULL, \"[40,50]\"}'),\n\nThe problem with this is the blithe assumption that \"minus N days\"\nis an immutable computation. It ain't. As bad luck would have it,\nthese intervals all manage to cross a Moroccan DST boundary\n(Ramadan, I assume):\n\nRule\tMorocco\t2014\tonly\t-\tJun\t28\t 3:00\t0\t-\nRule\tMorocco\t2014\tonly\t-\tAug\t 2\t 2:00\t1:00\t-\n\nThus, in GMT or most other zones, we get 24-hour-spaced times of day for\nthese calculations:\n\nregression=# set timezone to 'GMT';\nSET\nregression=# select n, 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz - n * interval '1 day' from generate_series(0,4) n;\n n | ?column? \n---+------------------------\n 0 | 2014-08-03 22:00:00+00\n 1 | 2014-08-02 22:00:00+00\n 2 | 2014-08-01 22:00:00+00\n 3 | 2014-07-31 22:00:00+00\n 4 | 2014-07-30 22:00:00+00\n(5 rows)\n\nbut not so much in Morocco:\n\nregression=# set timezone to 'Africa/Casablanca';\nSET\nregression=# select n, 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz - n * interval '1 day' from generate_series(0,4) n;\n n | ?column? 
\n---+------------------------\n 0 | 2014-08-03 23:00:00+01\n 1 | 2014-08-02 23:00:00+01\n 2 | 2014-08-01 23:00:00+00\n 3 | 2014-07-31 23:00:00+00\n 4 | 2014-07-30 23:00:00+00\n(5 rows)\n\nWhat I'm inclined to do about that is get rid of the totally-irrelevant-\nto-this-test interval subtractions, and just write the desired timestamps\nas constants.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 16:03:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-30 15:19:30 -0400, Andrew Dunstan wrote:\n> On 9/30/21 2:36 PM, Andres Freund wrote:\n> > Hi,\n> >\n> > CI showed me a failure in 002_types.pl on windows. I only just now noticed\n> > that because the subscription tests aren't run by any of the vcregress.pl\n> > steps :(\n\n> We have windows buildfarm animals running the subscription tests, e.g.\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2021-09-29%2019%3A08%3A23&stg=subscription-check>\n> and they do it by calling vcregress.pl.\n\nThe point I was trying to make is that there's no \"target\" in vcregress.pl for\nit. You have to know that you need to call\nsrc/tools/msvc/vcregress.pl taptest src\\test\\subscription\nto run them. Contrasting to recoverycheck or so, which has it's own\nvcregress.pl target.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Sep 2021 13:09:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 8:38 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> But the option of \"Greenwich Daylight Time\" suggests that Windows thinks\n> this means UK civil time, not UTC.\n\nYes, it's been a while but IIRC Windows in the UK uses confusing\nterminology here even in user interfaces, so that in summer it appears\nto be wrong, which is annoying to anyone brought up on Eggert's\nsystem. The CLDR windowsZones.xml file shows this.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 09:24:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It turns out to be dependant on the current timezone. I have just about zero\n> understanding how timezones work on windows, so I can't really interpret why\n> that causes a problem on windows, but apparently not on linux.\n\nAs of 20f8671ef, \"TZ=Africa/Casablanca make check-world\" passes here,\nso your CI should be okay. We still oughta fix the Windows\ntranslation, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Sep 2021 16:31:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-30 16:03:15 -0400, Tom Lane wrote:\n> I wrote:\n> > ... sure enough, 002_types.pl\n> > falls over with TZ=Africa/Casablanca on my Linux machine, too.\n> \n> Independently of whether Africa/Casablanca is a sane translation of\n> that Windows zone name, it'd be nice if 002_types.pl weren't so\n> sensitive to the prevailing zone. I looked into exactly why it's\n> falling over, and the answer seems to be this bit:\n\n> \t\t(2, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '2 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[2,3]\", \"[20,30]\"}'),\n> \t\t(3, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '3 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[3,4]\"}'),\n> \t\t(4, tstzrange('Mon Aug 04 00:00:00 2014 CEST'::timestamptz - interval '4 days', 'Mon Aug 04 00:00:00 2014 CEST'::timestamptz), '{\"[4,5]\", NULL, \"[40,50]\"}'),\n> \n> The problem with this is the blithe assumption that \"minus N days\"\n> is an immutable computation. It ain't. As bad luck would have it,\n> these intervals all manage to cross a Moroccan DST boundary\n> (Ramadan, I assume):\n\nFor a minute I was confused, because of course we should still get the same\nresult on the subscriber as on the publisher. But then I re-re-re-realized\nthat the comparison data is a constant in the test script...\n\n\n> What I'm inclined to do about that is get rid of the totally-irrelevant-\n> to-this-test interval subtractions, and just write the desired timestamps\n> as constants.\n\nSounds like a plan.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Sep 2021 13:31:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-09-30 16:31:33 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > It turns out to be dependant on the current timezone. I have just about zero\n> > understanding how timezones work on windows, so I can't really interpret why\n> > that causes a problem on windows, but apparently not on linux.\n> \n> As of 20f8671ef, \"TZ=Africa/Casablanca make check-world\" passes here,\n> so your CI should be okay. We still oughta fix the Windows\n> translation, though.\n\nIndeed, it just passed (after reverting my timezone workaround):\nhttps://cirrus-ci.com/task/5899963000422400?logs=check#L129\n\nIt still fails in t/026_overwrite_contrecord.pl though. But that's another\nthread.\n\n\nThanks!\n\nAndres\n\n\n",
"msg_date": "Thu, 30 Sep 2021 14:04:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yes, it's been a while but IIRC Windows in the UK uses confusing\n> terminology here even in user interfaces, so that in summer it appears\n> to be wrong, which is annoying to anyone brought up on Eggert's\n> system. The CLDR windowsZones.xml file shows this.\n\nOh, thanks for the pointer to CLDR! I tried re-generating our data\nbased on theirs, and ended up with the attached draft patch.\nMy notes summarizing the changes say:\n\n\nChoose Europe/London for \"Greenwich Standard Time\"\n(CLDR doesn't do this, but all their mappings for it are insane)\n\nAlphabetize a bit better\n\n\nZone name changes:\n\nJerusalem Standard Time -> Israel Standard Time\n\nNumerous Russian zones slightly renamed\n\nShould we preserve the old spellings of the above? It's not clear\nhow long-obsolete the old spellings are.\n\n\nMaybe politically sensitive:\n\nAsia/Hong_Kong -> Asia/Shanghai\n\nI think the latter has way better claim on \"China Standard Time\",\nand CLDR agrees.\n\n\nResolve Links to underlying real zones:\n\nAsia/Kuwait -> Asia/Riyadh\nAsia/Muscat -> Asia/Dubai\nAustralia/Canberra -> Australia/Sydney\nCanada/Atlantic -> America/Halifax\nCanada/Newfoundland -> America/St_Johns\nCanada/Saskatchewan -> America/Regina\nUS/Alaska -> America/Anchorage\nUS/Arizona -> America/Phoenix\nUS/Central -> America/Chicago\nUS/Eastern -> America/New_York\nUS/Hawaii -> Pacific/Honolulu\nUS/Mountain -> America/Denver\nUS/Pacific -> America/Los_Angeles\n\n\nJust plain wrong:\n\nUS/Aleutan (misspelling of US/Aleutian, which is a link anyway)\n\nAmerica/Salvador does not exist; tzdb says\n# There are too many Salvadors elsewhere, so use America/Bahia instead\n# of America/Salvador.\n\nEtc/UTC+12 doesn't exist in tzdb\n\nIndiana (East) is not the regular US/Eastern zone\n\nAsia/Baku -> Asia/Yerevan (Baku is in Azerbaijan, Yerevan is in Armenia)\n\nAsia/Dhaka -> Asia/Almaty (Dhaka has its own zone, and it's in Bangladesh\nnot 
Astana)\n\nEurope/Sarajevo is a link to Europe/Belgrade these days, so use Warsaw\n\nChisinau is in Moldova not Romania\n\nChetumal is in Quintana Roo, which is represented by Cancun not Mexico City\n\nHaiti has its own zone\n\nAmerica/Araguaina seems to just be a mistake; use Sao_Paulo\n\nAmerica/Buenos_Aires for SA Eastern Standard Time is a mistake\n(it has its own zone)\nlikewise America/Caracas for SA Western Standard Time\n\nAfrica/Harare seems to be obsoleted by Africa/Johannesburg\n\nKarachi is in Pakistan, not Tashkent\n\n\nNew Windows zones:\n\n\"South Sudan Standard Time\" -> Africa/Juba\n\n\"West Bank Standard Time\" -> Asia/Hebron\n(CLDR seem to have this replacing Gaza, but I kept that one too)\n\n\"Yukon Standard Time\" -> America/Whitehorse\n\nuncomment \"W. Central Africa Standard Time\" as Africa/Lagos\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 30 Sep 2021 19:47:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n>> Yes, it's been a while but IIRC Windows in the UK uses confusing\n>> terminology here even in user interfaces, so that in summer it appears\n>> to be wrong, which is annoying to anyone brought up on Eggert's\n>> system. The CLDR windowsZones.xml file shows this.\n\nBTW, on closer inspection of CLDR's data, the Windows zone name they\nassociate with Europe/London is \"GMT Standard Time\". \"Greenwich Standard\nTime\" is associated with a bunch of places that happen to lie near the\nprime meridian, but whose timekeeping likely has nothing to do with UK\ncivil time:\n\n<!-- (UTC+00:00) Monrovia, Reykjavik -->\n<mapZone other=\"Greenwich Standard Time\" territory=\"001\" type=\"Atlantic/Reykjavik\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"BF\" type=\"Africa/Ouagadougou\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"CI\" type=\"Africa/Abidjan\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"GH\" type=\"Africa/Accra\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"GL\" type=\"America/Danmarkshavn\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"GM\" type=\"Africa/Banjul\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"GN\" type=\"Africa/Conakry\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"GW\" type=\"Africa/Bissau\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"IS\" type=\"Atlantic/Reykjavik\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"LR\" type=\"Africa/Monrovia\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"ML\" type=\"Africa/Bamako\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"MR\" type=\"Africa/Nouakchott\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"SH\" type=\"Atlantic/St_Helena\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"SL\" type=\"Africa/Freetown\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"SN\" 
type=\"Africa/Dakar\"/>\n<mapZone other=\"Greenwich Standard Time\" territory=\"TG\" type=\"Africa/Lome\"/>\n\nSo arguably, the problem that started this thread was Andres' user\nerror: I doubt he expected \"Greenwich Standard Time\" to mean any\nof these. Still, I think we're better off to map that to London,\nbecause he won't be the only one to make that mistake.\n\nBTW, I find those \"territory\" annotations in the CLDR data to be\nfascinating. If that corresponds to something that we could retrieve\nat runtime, it'd allow far better mapping of Windows zones than we\nare doing now. I have no interest in working on that myself though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Oct 2021 08:55:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "\nOn 9/30/21 3:38 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 9/30/21 2:36 PM, Andres Freund wrote:\n>>> CI showed me a failure in 002_types.pl on windows. I only just now noticed\n>>> that because the subscription tests aren't run by any of the vcregress.pl\n>>> steps :(\n>> We have windows buildfarm animals running the subscription tests, e.g.\n>> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=drongo&dt=2021-09-29%2019%3A08%3A23&stg=subscription-check>\n>> and they do it by calling vcregress.pl.\n> But are they running with the prevailing zone set to \"Greenwich Standard\n> Time\"?\n\n\ndrongo's timezone is set to plain \"UTC\".\n\n\nIt also offers me \"UTC+00:00(Dublin, Edinburgh, Lisbon, London)\" and\n\"UTC+00:00(Monrovia, Reykjavik)\"\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 10:25:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 9/30/21 3:38 PM, Tom Lane wrote:\n>> But are they running with the prevailing zone set to \"Greenwich Standard\n>> Time\"?\n\n> drongo's timezone is set to plain \"UTC\".\n\n> It also offers me \"UTC+00:00(Dublin, Edinburgh, Lisbon, London)\" and\n> \"UTC+00:00(Monrovia, Reykjavik)\"\n\nYeah, the last of those is (was) the problematic one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 01 Oct 2021 10:35:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "I wrote:\n> Oh, thanks for the pointer to CLDR! I tried re-generating our data\n> based on theirs, and ended up with the attached draft patch.\n\nHearing no objections, pushed after another round of review\nand a couple more fixes.\n\nFor the archives' sake, here are the remaining discrepancies\nbetween our mapping and CLDR's entries for \"territory 001\",\nwhich I take to be their recommended defaults:\n\n* Our documented decision to map \"Central America\" to \"CST6\",\non the grounds that most of Central America doesn't actually\nobserve DST nowadays.\n\n* Now-documented decision to map \"Greenwich Standard Time\"\nto Europe/London, not Atlantic/Reykjavik as they have it.\n\n* The miscellaneous deltas shown in the attached diff, which in\nmany cases boil down to \"we chose the first name mentioned for the\nzone, while CLDR did something else\". I felt that our historical\nmappings of these cases weren't wrong enough to justify any\npolitical flak I might take for changing them. OTOH, maybe we\nshould just say \"we follow CLDR\" and be done with it.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 02 Oct 2021 16:42:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "On Sun, Oct 3, 2021 at 9:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * Now-documented decision to map \"Greenwich Standard Time\"\n> to Europe/London, not Atlantic/Reykjavik as they have it.\n\nHmm. It's hard to pick a default from that set of merged zones, but\nthe funny thing about this choice is that Europe/London is the one\nOlson zone that it's sure *not* to be, because then your system would\nbe using that other name, IIUC.\n\n> * The miscellaneous deltas shown in the attached diff, which in\n> many cases boil down to \"we chose the first name mentioned for the\n> zone, while CLDR did something else\". I felt that our historical\n> mappings of these cases weren't wrong enough to justify any\n> political flak I might take for changing them. OTOH, maybe we\n> should just say \"we follow CLDR\" and be done with it.\n\nEyeballing these, three look strange to me in a list of otherwise\ncity-based names: Pacific/Guam (instead of Port Moresby, capital of\nPNG which apparently shares zone rules with the territory of Guam) and\nPacific/Samoa (country name instead of its capital Apia; the city\navoids any potential confusion with American Samoa which is on the\nother side of the date line) and then \"CET\", an abbreviation. But\ndebating individual points of geography and politics like this seems a\nbit silly... I wasn't really aware of this Windows->Olson zone name\nproblem lurking in our tree before, but it sounds to me like switching\nto 100% \"we use CLDR, if you think it's wrong, please file a report at\ncldr.unicode.org\" wouldn't be a bad idea at all!\n\n\n",
"msg_date": "Sun, 3 Oct 2021 10:23:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 1:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, I find those \"territory\" annotations in the CLDR data to be\n> fascinating. If that corresponds to something that we could retrieve\n> at runtime, it'd allow far better mapping of Windows zones than we\n> are doing now. I have no interest in working on that myself though.\n\nI wonder if it could be derived from the modern standards-based locale\nname, which we're not currently using as a default locale but probably\nshould[1]. For single-zone countries you might be able to match\nexactly one zone mapping.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJ%3DXThErgAQRoqfCy1bKPxXVuF0%3D2zDbB%2BSxDs59pv7Fw%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 3 Oct 2021 10:31:41 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Oct 3, 2021 at 9:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Now-documented decision to map \"Greenwich Standard Time\"\n>> to Europe/London, not Atlantic/Reykjavik as they have it.\n\n> Hmm. It's hard to pick a default from that set of merged zones, but\n> the funny thing about this choice is that Europe/London is the one\n> Olson zone that it's sure *not* to be, because then your system would\n> be using that other name, IIUC.\n\nAgreed, this choice is definitely formally wrong. However, the example\nwe started the thread with is that Andres thought \"Greenwich Standard\nTime\" would get him UTC, or at least something a lot less oddball than\nwhat he got.\n\nBut wait a minute ... looking into the tzdb sources, I find that Iceland\nhasn't observed DST since 1968, and tzdb spells their zone abbreviation as\n\"GMT\" since then. That means that Atlantic/Reykjavik is actually a way\nbetter approximation to \"plain GMT\" than Europe/London is. Maybe there\nis some method in CLDR's madness here.\n\n>> * The miscellaneous deltas shown in the attached diff, which in\n>> many cases boil down to \"we chose the first name mentioned for the\n>> zone, while CLDR did something else\". I felt that our historical\n>> mappings of these cases weren't wrong enough to justify any\n>> political flak I might take for changing them. OTOH, maybe we\n>> should just say \"we follow CLDR\" and be done with it.\n\n> Eyeballing these, three look strange to me in a list of otherwise\n> city-based names: Pacific/Guam (instead of Port Moresby, capital of\n> PNG which apparently shares zone rules with the territory of Guam) and\n> Pacific/Samoa (country name instead of its capital Apia; the city\n> avoids any potential confusion with American Samoa which is on the\n> other side of the date line) and then \"CET\", an abbreviation.\n\nOooh. 
Looking closer, I see that the Windows zone is defined as\n\t<!-- (UTC+13:00) Samoa -->\nwhich makes it *definitely* Pacific/Apia ... Pacific/Samoa is a\nlink to Pacific/Pago_Pago which is in American Samoa, at UTC-11.\nSo our mapping was kind of okay up till 2011 when Samoa decided\nthey wanted to be on the other side of the date line, but now\nit's wrong as can be. Ooops.\n\n> But\n> debating individual points of geography and politics like this seems a\n> bit silly... I wasn't really aware of this Windows->Olson zone name\n> problem lurking in our tree before, but it sounds to me like switching\n> to 100% \"we use CLDR, if you think it's wrong, please file a report at\n> cldr.unicode.org\" wouldn't be a bad idea at all!\n\nI'd still defend our exception for Central America: CLDR maps that\nto Guatemala which seems pretty random, even if they haven't observed\nDST there for a few years. For the rest of it, though, \"we follow CLDR\"\nhas definitely got a lot of attraction. The one change that makes me\nnervous is adopting Europe/Berlin for \"W. Europe Standard Time\",\non account of the flak Paul Eggert just got from trying to make a\nsomewhat-similar change :-(. (If you don't read the tz mailing list\nyou may not be aware of that particular tempest in a teapot, but he\ntried to merge a bunch of zones into Europe/Berlin, and there were\na lot of complaints. Some from me.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 18:26:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Hi, \n\nOn October 2, 2021 3:26:35 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>However, the example\n>we started the thread with is that Andres thought \"Greenwich Standard\n>Time\" would get him UTC, or at least something a lot less oddball than\n>what he got.\n\nFWIW, that was just the default on those machines (which in turn seems to be the default of some containers Microsoft distributes), not something I explicitly chose.\n\n- Andres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 02 Oct 2021 15:50:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "On Sun, Oct 3, 2021 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Eyeballing these, three look strange to me in a list of otherwise\n> > city-based names: Pacific/Guam (instead of Port Moresby, capital of\n> > PNG which apparently shares zone rules with the territory of Guam) and\n> > Pacific/Samoa (country name instead of its capital Apia; the city\n> > avoids any potential confusion with American Samoa which is on the\n> > other side of the date line) and then \"CET\", an abbreviation.\n>\n> Oooh. Looking closer, I see that the Windows zone is defined as\n> <!-- (UTC+13:00) Samoa -->\n> which makes it *definitely* Pacific/Apia ... Pacific/Samoa is a\n> link to Pacific/Pago_Pago which is in American Samoa, at UTC-11.\n> So our mapping was kind of okay up till 2011 when Samoa decided\n> they wanted to be on the other side of the date line, but now\n> it's wrong as can be. Ooops.\n\nHah. That's a *terrible* link to have.\n\n> I'd still defend our exception for Central America: CLDR maps that\n> to Guatemala which seems pretty random, even if they haven't observed\n> DST there for a few years. For the rest of it, though, \"we follow CLDR\"\n> has definitely got a lot of attraction. The one change that makes me\n> nervous is adopting Europe/Berlin for \"W. Europe Standard Time\",\n> on account of the flak Paul Eggert just got from trying to make a\n> somewhat-similar change :-(.\n\nIt would be interesting to know if that idea of matching BCP47 locale\nnames to territories could address that. 
Perhaps we should get that\nmodern-locale-name patch first (I think I got stuck on \"let's kill off\nold Windows versions so we can use this\", due to confusing versioning\nand a lack of a guiding policy on our part, but I think I should just\npropose something), and then revisit this?\n\n> (If you don't read the tz mailing list\n> you may not be aware of that particular tempest in a teapot, but he\n> tried to merge a bunch of zones into Europe/Berlin, and there were\n> a lot of complaints. Some from me.)\n\nI don't follow the list but there was a nice summary in LWN: \"A fork\nfor the time-zone database?\". From the peanut gallery, I thought it\nwas a bit of a double standard, considering the rejection of that idea\nof yours about getting rid of longitude-based pre-standard times on\ndata stability grounds, and a lot less justifiable. I hope there\nisn't a fork.\n\n\n",
"msg_date": "Sun, 3 Oct 2021 11:52:56 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On October 2, 2021 3:26:35 PM PDT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> However, the example\n>> we started the thread with is that Andres thought \"Greenwich Standard\n>> Time\" would get him UTC, or at least something a lot less oddball than\n>> what he got.\n\n> FWIW, that was just the default on those machines (which in turn seems to be the default of some containers Microsoft distributes), not something I explicitly chose.\n\nSo *somebody* thought it was an unsurprising default ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 19:45:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sun, Oct 3, 2021 at 11:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd still defend our exception for Central America: CLDR maps that\n>> to Guatemala which seems pretty random, even if they haven't observed\n>> DST there for a few years. For the rest of it, though, \"we follow CLDR\"\n>> has definitely got a lot of attraction.\n\nActually ... digging in the archives, the reason we have a special case\nfor Central America is that there was a user complaint about the previous\nmapping to CST6CDT:\n\nhttps://www.postgresql.org/message-id/flat/1316149023380-4809498.post%40n5.nabble.com\n\nCST6CDT was *way* wrong, because it implies USA DST rules, so the\ncomplaint was well-founded. I wrote in that thread:\n\n> I think we ought to map \"Central America Standard Time\" to plain CST6.\n> (Or we could map to one of America/Costa_Rica, America/Guatemala,\n> America/El_Salvador, etc, but that seems more likely to offend people in\n> the other countries than provide any additional precision.)\n\nHowever, if we can cite CLDR as authority, I see no reason why\nAmerica/Guatemala should be any more offensive than any of the\nother fairly-arbitrary choices CLDR has made. None of those\nzones have observed DST for a decade or more, so at least in\nrecent years it wouldn't make any difference anyway.\n\nSo, I'm now sold on just making all our mappings match CLDR.\nI'll do that in a couple of days if I don't hear objections.\n\n> It would be interesting to know if that idea of matching BCP47 locale\n> names to territories could address that. 
Perhaps we should get that\n> modern-locale-name patch first (I think I got stuck on \"let's kill off\n> old Windows versions so we can use this\", due to confusing versioning\n> and a lack of a guiding policy on our part, but I think I should just\n> propose something), and then revisit this?\n\nThat seems like potentially a nice long-term solution, but it doesn't\nsound likely to be back-patchable. So I'd like to get the existing\ndata in as good shape as we can before we go looking for a replacement\nmechanism.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 20:04:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 002_types.pl fails on some timezones on windows"
}
] |
[
{
"msg_contents": "Hi,\n\nFor me 001_libpq_pipeline.pl doesn't reliably work on windows, because it\ntries to add something to PATH, using unix syntax (vs ; used on windows).\n\n$ENV{PATH} = \"$ENV{TESTDIR}:$ENV{PATH}\";\n\nIf the first two elements in PATH are something needed, this can cause the\ntest to fail... I'm surprised this doesn't cause problems on the buildfarm - a\nplain\n perl src\\tools\\msvc\\vcregress.pl taptest src\\test\\modules\\libpq_pipeline\\\nfails for me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 30 Sep 2021 14:40:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "PATH manipulation in 001_libpq_pipeline.pl fails on windows"
},
{
"msg_contents": "\nOn 9/30/21 5:40 PM, Andres Freund wrote:\n> Hi,\n>\n> For me 001_libpq_pipeline.pl doesn't reliably work on windows, because it\n> tries to add something to PATH, using unix syntax (vs ; used on windows).\n>\n> $ENV{PATH} = \"$ENV{TESTDIR}:$ENV{PATH}\";\n>\n> If the first two elements in PATH are something needed, this can cause the\n> test to fail... I'm surprised this doesn't cause problems on the buildfarm - a\n> plain\n> perl src\\tools\\msvc\\vcregress.pl taptest src\\test\\modules\\libpq_pipeline\\\n> fails for me.\n\n\nNot sure. That's certainly an error.\n\n\nBut why are we mangling the PATH at all? Wouldn't it be better just to\ncall command_ok with \"$ENV{TESTDIR}/libpg_pipeline\" ?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 14:07:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: PATH manipulation in 001_libpq_pipeline.pl fails on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-01 14:07:42 -0400, Andrew Dunstan wrote:\n> But why are we mangling the PATH at all? Wouldn't it be better just to\n> call command_ok with \"$ENV{TESTDIR}/libpg_pipeline\" ?\n\nYea, it probably would. Alvaro, I assume you don't mind if I change that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Oct 2021 13:25:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: PATH manipulation in 001_libpq_pipeline.pl fails on windows"
},
{
"msg_contents": "On 2021-Oct-01, Andres Freund wrote:\n\n> Hi,\n> \n> On 2021-10-01 14:07:42 -0400, Andrew Dunstan wrote:\n> > But why are we mangling the PATH at all? Wouldn't it be better just to\n> > call command_ok with \"$ENV{TESTDIR}/libpg_pipeline\" ?\n> \n> Yea, it probably would. Alvaro, I assume you don't mind if I change that?\n\nHi, no, please go ahead.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n",
"msg_date": "Fri, 1 Oct 2021 17:37:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: PATH manipulation in 001_libpq_pipeline.pl fails on windows"
},
{
"msg_contents": "On 2021-10-01 17:37:11 -0300, Alvaro Herrera wrote:\n> On 2021-Oct-01, Andres Freund wrote:\n> > On 2021-10-01 14:07:42 -0400, Andrew Dunstan wrote:\n> > > But why are we mangling the PATH at all? Wouldn't it be better just to\n> > > call command_ok with \"$ENV{TESTDIR}/libpg_pipeline\" ?\n> > \n> > Yea, it probably would. Alvaro, I assume you don't mind if I change that?\n> \n> Hi, no, please go ahead.\n\nDone.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 16:02:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: PATH manipulation in 001_libpq_pipeline.pl fails on windows"
}
] |
[
{
"msg_contents": "Hello,\n\nIf we set a parameter in the postgresql.conf that the loaded library doesn't\nrecognize at startup, it throws a warning.\nFor example if one sets `plpgsql.no_such_setting` for plpgsql:\n\n```\nWARNING: unrecognized configuration parameter \"plpgsql.no_such_setting\"\n```\n\nWe could also help users get a warning if they set a parameter with the\n`SET`\ncommand. I've seen many cases where users make typos and break things badly,\ncheck the following example:\n\n```\npostgres=# BEGIN;\nBEGIN\npostgres=*# SET plpgsql.no_such_setting = false;\nSET\npostgres=*# -- do critical queries taking into account that\nplpgsql.no_such_setting is false;\npostgres=*# COMMIT;\nCOMMIT\n```\n\nI propose to make the user aware of such mistakes. I also made the patch\nonly\nto warn the user but still correctly `SET` the parameter so that he is the\none\nthat chooses if he wants to continue or `ROLLBACK`. I don't know if this\nlast\npart is correct, but at least it doesn't break any previous implementation.\n\nThis is what I mean:\n\n```\npostgres=# BEGIN;\nBEGIN\npostgres=*# SET plpgsql.no_such_setting = false;\nWARNING: unrecognized configuration parameter \"plpgsql.no_such_setting\"\nDETAIL: \"plpgsql\" is a reserved prefix.\nHINT: If you need to create a custom placeholder use a different prefix.\nSET\npostgres=*# -- choose to continue or not based on the warning\npostgres=*# ROLLBACK or COMMIT\n```\n\nThe patch I'm attaching is registering the prefix for all the loaded\nlibraries,\nand eventually, it uses them to check if any parameter is recognized,just\nas we\ndo at startup.\n\nPlease, let me know what you think.\n\nCheers,\nFlorin Irion",
"msg_date": "Thu, 30 Sep 2021 23:54:04 +0200",
"msg_from": "Florin Irion <irionr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "On 09/30/21 17:54, Florin Irion wrote:\n\n> We could also help users get a warning if they set a parameter with the\n> `SET` command.\n\nThis is funny. For years I have been so confident I knew how this worked\nthat I, obviously, hadn't tried it. :)\n\nMy first setting of a made-up variable gets no warning, as I already expected:\n\npostgres=# set plpgsql.no_such_setting = false;\nSET\n\nThen as soon as I do the first thing in the session involving plpgsql,\nI get the warning for that one:\n\npostgres=# do language plpgsql\npostgres-# 'begin delete from pg_class where false; end';\nWARNING: unrecognized configuration parameter \"plpgsql.no_such_setting\"\nDO\n\n\nBut then, I have always assumed I would get warnings thereafter:\n\npostgres=# set plpgsql.not_this_one_neither = false;\nSET\n\nBut no!\n\nSo I am in favor of patching this.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Thu, 30 Sep 2021 18:26:08 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "On Thu, 2021-09-30 at 18:26 -0400, Chapman Flack wrote:\n> On 09/30/21 17:54, Florin Irion wrote:\n> > We could also help users get a warning if they set a parameter with the\n> > `SET` command.\n> \n> So I am in favor of patching this.\n\n+1 on the idea.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 01 Oct 2021 06:27:41 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "Il giorno ven 1 ott 2021 alle ore 00:26 Chapman Flack <chap@anastigmatix.net>\nha scritto:\n>\n> On 09/30/21 17:54, Florin Irion wrote:\n>\n> > We could also help users get a warning if they set a parameter with the\n> > `SET` command.\n>\n> This is funny. For years I have been so confident I knew how this worked\n> that I, obviously, hadn't tried it. :)\n>\n> My first setting of a made-up variable gets no warning, as I already\nexpected:\n>\n> postgres=# set plpgsql.no_such_setting = false;\n> SET\n>\n> Then as soon as I do the first thing in the session involving plpgsql,\n> I get the warning for that one:\n>\n> postgres=# do language plpgsql\n> postgres-# 'begin delete from pg_class where false; end';\n> WARNING: unrecognized configuration parameter \"plpgsql.no_such_setting\"\n> DO\n>\n\nI choose `plpgsql` in my example because perhaps it is best known to the\nmajority, plpgsql gets loaded when the user first uses it, and doesn't need\nto be preloaded at startup.\nThis proposal will help when we have any extension in the\n`shared_preload_libraries`\nand the check is only made at postgres start.\nHowever, if one already used plpgsql in a session and then it `SET`s an\nunknown parameter\nit will not get any warning as the check is made only when it gets loaded\nthe first time.\n\n```\npostgres=# do language plpgsql\n'begin delete from pg_class where false; end';\nDO\npostgres=# set plpgsql.no_such_setting = false;\nSET\npostgres=# do language plpgsql\n'begin delete from pg_class where false; end';\nDO\n```\n\nWith my patch it will be registered and it will throw a warning also in\nthis case:\n\n```\npostgres=# do language plpgsql\npostgres-# 'begin delete from pg_class where false; end';\nDO\npostgres=# set plpgsql.no_such_setting = false;\nWARNING: unrecognized configuration parameter \"plpgsql.no_such_setting\"\nDETAIL: \"plpgsql\" is a reserved prefix.\nHINT: If you need to create a custom placeholder use a different prefix.\nSET\n```\n\n>\n> 
But then, I have always assumed I would get warnings thereafter:\n>\n> postgres=# set plpgsql.not_this_one_neither = false;\n> SET\n>\n> But no!\n\nExactly.\n\n> So I am in favor of patching this.\n>\n> Regards,\n> -Chap\n\nThanks,\nFlorin Irion",
"msg_date": "Fri, 1 Oct 2021 09:07:01 +0200",
"msg_from": "Florin Irion <irionr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "Hi, \n\nI adjusted a bit the code and configured my mail client to\nsend patch attachments appropriately (hopefully). :) \n\nSo here is my second version.\n\nCheers, \nFlorin Irion",
"msg_date": "Thu, 7 Oct 2021 14:03:02 +0200",
"msg_from": "Florin Irion <irionr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 11:54:04PM +0200, Florin Irion wrote:\n> We could also help users get a warning if they set a parameter with the\n> `SET`\n> command. I've seen many cases where users make typos and break things badly,\n> check the following example:\n> \n> ```\n> postgres=# BEGIN;\n> BEGIN\n> postgres=*# SET plpgsql.no_such_setting = false;\n> SET\n> postgres=*# -- do critical queries taking into account that\n> plpgsql.no_such_setting is false;\n> postgres=*# COMMIT;\n> COMMIT\n> ```\n\nCould you give a more concrete example here? What kind of critical\nwork are you talking about here when using prefixes? Please note that\nI am not against the idea of improving the user experience in this\narea as that's inconsistent, as you say.\n--\nMichael",
"msg_date": "Thu, 21 Oct 2021 15:02:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "Hi,\n\nIl giorno gio 21 ott 2021 alle ore 08:02 Michael Paquier <\nmichael@paquier.xyz> ha scritto:\n>\n> On Thu, Sep 30, 2021 at 11:54:04PM +0200, Florin Irion wrote:\n> > We could also help users get a warning if they set a parameter with the\n> > `SET`\n> > command. I've seen many cases where users make typos and break things\nbadly,\n> > check the following example:\n> >\n> > ```\n> > postgres=# BEGIN;\n> > BEGIN\n> > postgres=*# SET plpgsql.no_such_setting = false;\n> > SET\n> > postgres=*# -- do critical queries taking into account that\n> > plpgsql.no_such_setting is false;\n> > postgres=*# COMMIT;\n> > COMMIT\n> > ```\n>\n> Could you give a more concrete example here?  What kind of critical\n> work are you talking about here when using prefixes?  Please note that\n> I am not against the idea of improving the user experience in this\n> area as that's inconsistent, as you say.\n> --\n> Michael\n\nThank you for the interest.\n\nI used `plpgsql` in my example/test because it's something that many of\nus know already.\n\nHowever, for example, pglogical2\n<https://github.com/2ndQuadrant/pglogical> has the\n`pglogical.conflict_resolution`\nconfiguration parameter that can be set to one of the following:\n\n```\nerror\napply_remote\nkeep_local\nlast_update_wins\nfirst_update_wins\n```\n\nYou can imagine that conflicting queries could have different outcomes\nbased on this setting.\nIMHO there are other settings like this, in other extensions, that can\nbe critical.\n\nIn any case, even setting something that is not critical could still\nbe important for a user, one example, `pg_stat_statements.track`.\n\nCheers,\nFlorin\n\n--\nFlorin Irion\nwww.enterprisedb.com",
"msg_date": "Sat, 23 Oct 2021 01:08:21 +0200",
"msg_from": "Florin Irion <irionr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": "On 07.10.21 14:03, Florin Irion wrote:\n> I adjusted a bit the code and configured my mail client to\n> send patch attachments appropiately(hopefully). :)\n> \n> So here is my second version.\n\nCommitted.\n\nI made two notable changes: I renamed the function, since it looked \nlike EmitWarningsOnPlaceholders() but was doing something not analogous. \n Also, you had in your function\n\n strncmp(varName, var->name, varLen)\n\nprobably copied from EmitWarningsOnPlaceholders(), but unlike there, we \nwant to compare the whole string here, and this would potentially do \nsomething wrong if there were a GUC setting that was a substring of the \nname of another one.\n\n\n\n",
"msg_date": "Wed, 1 Dec 2021 15:22:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
},
{
"msg_contents": ">\n> Committed.\n>\n> I made two notable changes: I renamed the function, since it looked\n> like EmitWarningsOnPlaceholders() but was doing something not analogous.\n\n Also, you had in your function\n>\n> strncmp(varName, var->name, varLen)\n>\n> probably copied from EmitWarningsOnPlaceholders(), but unlike there, we\n> want to compare the whole string here, and this would potentially do\n> something wrong if there were a GUC setting that was a substring of the\n> name of another one.\n>\n\nYeah, it makes sense!\n\nThank you very much!\nThank you for the changes and thank you for committing it!\n\nCheers,\nFlorin\n\n\n-- \n*Florin Irion*\n\n*www.enterprisedb.com <http://www.enterprisedb.com>*\nCel: +39 328 5904901\nTel: +39 0574 054953\nVia Alvise Cadamosto, 47\n59100, Prato, PO\nItalia",
"msg_date": "Wed, 1 Dec 2021 18:47:40 +0100",
"msg_from": "Florin Irion <irionr@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Reserve prefixes for loaded libraries proposal"
}
] |
[
{
"msg_contents": "Does this bother anyone else:\n\nCREATE INDEX uses an amoptions parser specific for the index type and, at least for btree, rejects relation options from the \"toast\" namespace:\n\n+-- Bad reloption for index draws an error\n+CREATE INDEX idx ON test_tbl USING btree (i) WITH (toast.nonsense=insanity);\n+ERROR: unrecognized parameter namespace \"toast\"\n\nNot so for CREATE VIEW, which shares logic with CREATE TABLE:\n\n+-- But not for views, where \"toast\" namespace relopts are ignored\n+CREATE VIEW nonsense_1 WITH (toast.nonsense=insanity, toast.foo=\"bar baz\")\n+ AS SELECT * FROM test_tbl;\n+SELECT relname, reloptions FROM pg_class WHERE relname = 'nonsense_1';\n+ relname | reloptions \n+------------+------------\n+ nonsense_1 | \n+(1 row)\n+\n+-- Well-formed but irrelevant toast options are also silently ignored\n+CREATE VIEW vac_opts_1 WITH (toast.autovacuum_enabled=false)\n+ AS SELECT * FROM test_tbl;\n+SELECT relname, reloptions FROM pg_class WHERE relname = 'vac_opts_1';\n+ relname | reloptions \n+------------+------------\n+ vac_opts_1 | \n+(1 row)\n\nSo far as I can see, this does no harm other than to annoy me. 
It might confuse new users, though, as changing to a MATERIALIZED VIEW makes the toast options relevant, but the user feedback for the command is no different:\n\n+-- But if we upgrade to a materialized view, they are not ignored, but\n+-- they attach to the toast table, not the view, so users might not notice\n+-- the difference\n+CREATE MATERIALIZED VIEW vac_opts_2 WITH (toast.autovacuum_enabled=false)\n+ AS SELECT * FROM test_tbl;\n+SELECT relname, reloptions FROM pg_class WHERE relname = 'vac_opts_2';\n+ relname | reloptions \n+------------+------------\n+ vac_opts_2 | \n+(1 row)\n+\n+-- They can find the difference if they know where to look\n+SELECT rel.relname, toast.relname, toast.reloptions\n+ FROM pg_class rel LEFT JOIN pg_class toast ON rel.reltoastrelid = toast.oid \n+ WHERE rel.relname IN ('nonsense_1', 'vac_opts_1', 'vac_opts_2');\n+ relname | relname | reloptions \n+------------+----------------+----------------------------\n+ nonsense_1 | | \n+ vac_opts_1 | | \n+ vac_opts_2 | pg_toast_19615 | {autovacuum_enabled=false}\n+(3 rows)\n\nThe solution is simple enough: stop using HEAP_RELOPT_NAMESPACES when parsing reloptions for views and instead create a VIEW_RELOPT_NAMESPACES array which does not include \"toast\".\n\nI've already fixed this, mixed into some other work. I'll pull it out as its own patch if there is any interest.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 30 Sep 2021 19:23:44 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "On 2021-Sep-30, Mark Dilger wrote:\n\n> The solution is simple enough: stop using HEAP_RELOPT_NAMESPACES when\n> parsing reloptions for views and instead create a\n> VIEW_RELOPT_NAMESPACES array which does not include \"toast\".\n\nIt seems a reasonable (non-backpatchable) change to me.\n\n> I've already fixed this, mixed into some other work. I'll pull it out\n> as its own patch if there is any interest.\n\nYeah.\n\nI suppose you'll need a new routine that returns the namespace array to\nuse based on relkind.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Nunca confiaré en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Barón Vladimir Harkonnen)\n\n\n",
"msg_date": "Fri, 1 Oct 2021 10:15:27 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "> On Oct 1, 2021, at 6:15 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Sep-30, Mark Dilger wrote:\n> \n>> The solution is simple enough: stop using HEAP_RELOPT_NAMESPACES when\n>> parsing reloptions for views and instead create a\n>> VIEW_RELOPT_NAMESPACES array which does not include \"toast\".\n> \n> It seems a reasonable (non-backpatchable) change to me.\n\nI agree. It's neither important enough to be back-patched nor completely non-breaking. Somebody could be passing bogus reloptions and relying on the parser to ignore them.\n\n>> I've already fixed this, mixed into some other work. I'll pull it out\n>> as its own patch if there is any interest.\n> \n> Yeah.\n> \n> I suppose you'll need a new routine that returns the namespace array to\n> use based on relkind.\n\nThe patch does it this way. The new routine can just return NULL for relkinds that don't accept \"toast\" as an option namespace. We don't need to create the VIEW_RELOPT_NAMESPACES array mentioned upthread.\n\nThe patch changes the docs for index storage option \"fillfactor\". The existing documentation implies that all index methods support this parameter, but in truth built-in methods brin and gin do not, and we should not imply anything about what non-built-in methods do.\n\nThe changes to create_view.sql demonstrate what the patch has fixed.\n\nThe other changes to regression tests provide previously missing test coverage of storage options for which the behavior is unchanged. I prefer to have coverage, but if the committer who picks this up disagrees, those changes could just be ignored. I'd also be happy to remove them and repost if the committer prefers.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 1 Oct 2021 12:34:53 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "> On Oct 1, 2021, at 12:34 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> The patch does it this way. \n\nA rebased patch is attached.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 3 Nov 2021 16:21:38 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "Rebased patch attached:\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 21 Dec 2021 11:23:38 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "On 2021-Dec-21, Mark Dilger wrote:\n\n> Rebased patch attached:\n\nThese tests are boringly repetitive. Can't we have something like a\nnested loop, with AMs on one and reloptions on the other, where each\nreloption is tried on each AM and an exception block to report the\nfailure or success for each case? Maybe have the list of AMs queried\nfrom pg_am with hardcoded additions if needed?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n",
"msg_date": "Fri, 24 Dec 2021 18:48:55 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
},
{
"msg_contents": "The patch is currently not applying.\n\nAnd it looks like there hasn't been any discussion since Alvaro's\ncomments last december. I'm marking the patch Returned with Feedback.\n\nOn Fri, 24 Dec 2021 at 16:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Dec-21, Mark Dilger wrote:\n>\n> > Rebased patch attached:\n>\n> These tests are boringly repetitive. Can't we have something like a\n> nested loop, with AMs on one and reloptions on the other, where each\n> reloption is tried on each AM and an exception block to report the\n> failure or success for each case? Maybe have the list of AMs queried\n> from pg_am with hardcoded additions if needed?\n>\n> --\n> Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n> \"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n>\n>\n\n\n-- \ngreg\n\n\n",
"msg_date": "Thu, 31 Mar 2022 15:11:00 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: minor gripe about lax reloptions parsing for views"
}
] |
[
{
"msg_contents": "Hi,\n\nHere is a patch fixing the subject.\n\nRegards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/",
"msg_date": "Fri, 1 Oct 2021 13:39:40 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Memory leak in pg_hmac_final"
},
{
"msg_contents": "> On 1 Oct 2021, at 12:39, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n\n> Here is a patch fixing the subject.\n\nSeems reasonable on a quick glance, the interim h buffer should be freed (this\nis present since 14). I'll have another look at this in a bit and will take\ncare of it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 14:05:05 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in pg_hmac_final"
},
{
"msg_contents": "On 01.10.2021 15:05, Daniel Gustafsson wrote:\n>> On 1 Oct 2021, at 12:39, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n>> Here is a patch fixing the subject.\n> \n> Seems reasonable on a quick glance, the interim h buffer should be freed (this\n> is present since 14). I'll have another look at this in a bit and will take\n> care of it.\n\nThanks. I found it using the leaks tool on macOS.\n\nWithout the patch:\n\n% MallocStackLogging=1 leaks -quiet -atExit -- psql -d 'dbname=postgres\nuser=alice password=secret' -XAtc 'select 1'\n...\nProcess 91635: 4390 nodes malloced for 252 KB\nProcess 91635: 4103 leaks for 131296 total leaked bytes.\n...\n\n(User alice has a SCRAM-encrypted password.)\n\n\nWith the patch:\n\nProcess 98250: 290 nodes malloced for 124 KB\nProcess 98250: 3 leaks for 96 total leaked bytes.\n\n\nThe remaining leaks are expected and not worth fixing, I guess:\n\nSTACK OF 1 INSTANCE OF 'ROOT LEAK: malloc<32>':\n4 libdyld.dylib 0x7fff68d80cc9 start + 1\n3 psql 0x10938b9f9 main + 2393\nstartup.c:207\n2 psql 0x1093ab5a5 pg_malloc + 21\nfe_memutils.c:49\n1 libsystem_malloc.dylib 0x7fff68f36cf5 malloc + 21\n0 libsystem_malloc.dylib 0x7fff68f36d9e malloc_zone_malloc\n+ 140\n====\n 2 (48 bytes) ROOT LEAK: 0x7ffbb75040d0 [32]\n 1 (16 bytes) 0x7ffbb75040f0 [16] length: 8 \"select 1\"\n\nSTACK OF 1 INSTANCE OF 'ROOT LEAK: malloc<48>':\n5 libdyld.dylib 0x7fff68d80cc9 start + 1\n4 psql 0x10938b8b0 main + 2064\nstartup.c:207\n3 psql 0x1093ab78e pg_strdup + 14\nfe_memutils.c:96\n2 libsystem_c.dylib 0x7fff68e26ce6 strdup + 32\n1 libsystem_malloc.dylib 0x7fff68f36cf5 malloc + 21\n0 libsystem_malloc.dylib 0x7fff68f36d9e malloc_zone_malloc\n+ 140\n====\n 1 (48 bytes) ROOT LEAK: 0x7ffbb75040a0 [48] length: 42\n\"dbname=postgres user=alice password=secret\"\n\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Fri, 1 Oct 2021 15:31:23 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Memory leak in pg_hmac_final"
},
{
"msg_contents": "> On 1 Oct 2021, at 14:31, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n> \n> On 01.10.2021 15:05, Daniel Gustafsson wrote:\n>>> On 1 Oct 2021, at 12:39, Sergey Shinderuk <s.shinderuk@postgrespro.ru> wrote:\n>> \n>>> Here is a patch fixing the subject.\n>> \n>> Seems reasonable on a quick glance, the interim h buffer should be freed (this\n>> is present since 14). I'll have another look at this in a bit and will take\n>> care of it.\n\nPatch pushed to master and 14.\n\n> Thanks. I found it using the leaks tool on macOS.\n\nNice, I hadn't heard of that before but it seems quite neat.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 22:58:07 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in pg_hmac_final"
},
{
"msg_contents": "On Fri, Oct 01, 2021 at 10:58:07PM +0200, Daniel Gustafsson wrote:\n> Nice, I hadn't heard of that before but it seems quite neat.\n\nThanks for the fix, it looks fine. I just saw the thread. Perhaps\nthe commit log should have said that this only impacts non-OpenSSL\nbuilds. Worth noting that in ~13 we used a static buffer for \"h\" in\nthe SCRAM code, as its size was known thanks to SHA-256.\n--\nMichael",
"msg_date": "Sat, 2 Oct 2021 14:25:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Memory leak in pg_hmac_final"
}
] |
[
{
"msg_contents": "During testing of the new Visual Studio 2022 Preview Version 4.1 from Microsoft I also tried PG14.0 on it.\n\nThe x64 version built without error!\n\nEven though this is only a preview version (the real thing is expected soon) it seems appropriate to include the support in the Postgres msvc tools directory.\n\nI followed the guideline of the patch msvc-2019-support-v4.patch for VS2019 support. New patch attached.\n\nThe only thing that will change later in the first non-preview release is the exact version number, which seems to change always on every minor VS upgrade and is not used explicitly:\n\n$self->{VisualStudioVersion} = '17.0.31717.71';\n\nThe patch is not invasive, so it should follow the practice of backpatching it to (most) supported versions.\n\nI have tested the x64 compile and install with the release source code of PG14.0 from 2021-09-30.\n\nDue to my limited development environment I did not do a full run of all tests afterwards.\n\n\nVisual Studio is co-installable alongside an already existing VS version on the same machine (I had VS2019 installed) and is separately selectable as the compile environment.\n\nCompilation time and file sizes are almost identical, but the GUI promises a native 64bit implementation, so it may be appealing to use the new version.\n\nHELP NEEDED:\n\nPlease could somebody test the patch and enter it in the next commitfest?\n(Only my second patch, not much experience with the tool chain :-( )\n\nAnother point is the failure of using VS2019/VS2022 for building the 32bit version, but this has to be discussed in another thread (if the Windows 32bit version is still important to support on newer VS versions)\n\nThanks for looking at it\n\nHans Buschmann",
"msg_date": "Fri, 1 Oct 2021 15:15:59 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "On Fri, 2021-10-01 at 15:15 +0000, Hans Buschmann wrote:\n> During testing of the new Visual Studio 2022 Preview Version 4.1 from Microsoft I also tried PG14.0 on it.\n> The x64 version built without error!.\n> \n> Even when this is only a preview version (the real thing is to expected soon) it seems appropriate to include the support to Postgres msvc tools directory.\n> \n> I followed the guideline of the patch msvc-2019-support-v4.patch for VS2019 support. New patch attached.\n[...]\n> HELP NEEDED:\n> \n> Please could somebody test the patch and enter it to the next commit fest?\n\nThanks for that work; help with Windows is always welcome.\n\nPlease go ahead and add the patch to the commitfest yourself.\nTesting will (hopefully) be done by a reviewer who has access to MSVC 2022.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 04 Oct 2021 12:13:45 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "\nOn 10/4/21 6:13 AM, Laurenz Albe wrote:\n> On Fri, 2021-10-01 at 15:15 +0000, Hans Buschmann wrote:\n>> During testing of the new Visual Studio 2022 Preview Version 4.1 from Microsoft I also tried PG14.0 on it.\n>> The x64 version built without error!.\n>>\n>> Even when this is only a preview version (the real thing is to expected soon) it seems appropriate to include the support to Postgres msvc tools directory.\n>>\n>> I followed the guideline of the patch msvc-2019-support-v4.patch for VS2019 support. New patch attached.\n> [...]\n>> HELP NEEDED:\n>>\n>> Please could somebody test the patch and enter it to the next commit fest?\n> Thanks for that work; help with Windows is always welcome.\n>\n> Please go ahead and add the patch to the commitfest yourself.\n> Testing will (hopefully) be done by a reviewer who has access to MSVC 2022.\n>\n\nI think we'll want to wait for the official release before we add\nsupport for it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 08:21:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "On Mon, Oct 04, 2021 at 08:21:52AM -0400, Andrew Dunstan wrote:\n> I think we'll want to wait for the official release before we add\n> support for it.\n\nAgreed. I am pretty sure that the version strings this patch is using\nare going to change until the release happens.\n--\nMichael",
"msg_date": "Mon, 11 Oct 2021 15:03:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "During October Patchday 2021 the Visual Studio components were upgraded too.\n\nNow VS2022 Preview 5 is out, and Visual Studio 2022 RC is also available for production use (it seems like our RC with respect to features).\n\nIn the course of this process Microsoft announced the general availability of VS2022 for Monday, November 8, see\n\nhttps://devblogs.microsoft.com/visualstudio/join-us-november-8th-for-the-launch-of-visual-studio-2022/\n\nThis date is just some hours after the wrapup for our minor release on November 11.\n\nBarring any objections I suggest applying the patch just before this weekend in November to have the support for Microsoft's developer suite available for the following 3 months. (PS: no one is OBLIGED to use the new version of VS, the interest for PG14 will grow with PG14.1, and this support affects only experienced users self-compiling on Windows.)\n\nI will watch the development in the first week of November and will update the patch to include the latest version number.\n\nIt seems clear that VS2022 will take the 14.30 range as version number (seen from the VC runtime versions installed)\n\nOnly the VisualStudioVersion (17.0.31717.71) will be changed, as on EVERY update of a Visual Studio installation/upgrade.\n\nVS2019 is now on 16.11.5; Postgres never upgrades this number for older versions and always uses the initial number from when support was introduced (here 16.0.28729.10 for VS2019).\n\nSo it seems safe to use the number of a version which can be used for building PG without errors.\n\nThanks\n\nHans Buschmann\n\n________________________________________\nVon: Michael Paquier <michael@paquier.xyz>\nGesendet: Montag, 11. 
Oktober 2021 08:03\nAn: Andrew Dunstan\nCc: Laurenz Albe; Hans Buschmann; pgsql-hackers@postgresql.org\nBetreff: Re: VS2022: Support Visual Studio 2022 on Windows\n\nOn Mon, Oct 04, 2021 at 08:21:52AM -0400, Andrew Dunstan wrote:\n> I think we'll want to wait for the official release before we add\n> support for it.\n\nAgreed. I am pretty sure that the version strings this patch is using\nare going to change until the release happens.\n--\nMichael\n\n\n",
"msg_date": "Wed, 13 Oct 2021 16:44:53 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Hans Buschmann <buschmann@nidsa.net> writes:\n> In the long of this process Microsoft announced the general availability of VS200 for Monday, November 8, see\n> https://devblogs.microsoft.com/visualstudio/join-us-november-8th-for-the-launch-of-visual-studio-2022/\n> This date is just some hours after the wrapup for our minor release on November 11.\n\nUgh, bad timing.\n\n> Barring any objections I suggest to apply the patch just before this weekend in November to have the support for Microsofts Developer Suite for the following 3 months available.\n\nImmediately before a release is the worst possible time to be applying\nnon-critical patches. I think better options are to\n(1) commit now, using the RC release's version as the minimum, or\n(2) wait till just after our release cycle.\n\nOf course, if we only plan to commit to HEAD and not back-patch,\nthis is all moot.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 13 Oct 2021 15:49:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "\nOn 10/13/21 3:49 PM, Tom Lane wrote:\n> Hans Buschmann <buschmann@nidsa.net> writes:\n>> In the long of this process Microsoft announced the general availability of VS200 for Monday, November 8, see\n>> https://devblogs.microsoft.com/visualstudio/join-us-november-8th-for-the-launch-of-visual-studio-2022/\n>> This date is just some hours after the wrapup for our minor release on November 11.\n> Ugh, bad timing.\n>\n>> Barring any objections I suggest to apply the patch just before this weekend in November to have the support for Microsofts Developer Suite for the following 3 months available.\n> Immediately before a release is the worst possible time to be applying\n> non-critical patches. I think better options are to\n> (1) commit now, using the RC release's version as the minimum, or\n> (2) wait till just after our release cycle.\n>\n> Of course, if we only plan to commit to HEAD and not back-patch,\n> this is all moot.\n>\n> \t\t\n\n\nNo, we always try to backpatch these so that we can have buildfarm\nanimals that build all live branches.\n\n\nI really don't see that we need to be in a hurry with this. There is no\nrequirement that we support VS2022 on day one of its release. Three\nmonths really won't matter. Impatience doesn't serve us well here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 13 Oct 2021 16:11:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Now it is three days before the release of VS2022.\n\nI updated to the latest Preview 7 (= RC3) and recompiled PG14 64bit Release without issues.\nThere seem to be few internal differences from previous versions in the tools used for building Postgres.\n\nMy intention with early support is to catch the momentum and signal to every user: \"We support current tools\".\nThe risks seem non-existent.\n\nUpdated the patch to reflect the VisualStudioVersion for Preview 7, which is the version number compiled into the main devenv.exe image.\nThis version number seems to be of no interest elsewhere in the postgres source tree.\n\nI will reflect any updates after the official release on Monday, November 8\n\nHans Buschmann",
"msg_date": "Sat, 6 Nov 2021 09:29:34 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "> On 6 Nov 2021, at 10:29, Hans Buschmann <buschmann@nidsa.net> wrote:\n\n> Updated the patch to reflect the VisualStudioVersion for Preview 7, which is the version number compiled into the main devenv.exe image.\n> This version number seems to be of no interest elsewhere in the postgres source tree.\n\nThis patch fails to apply as it's anchored beneath the root of the source tree,\nplease create the patch from inside the sourcetree such that others (and the CF\npatch tester) can apply it without tweaking:\n\n--- a/postgresql-14.0_orig/doc/src/sgml/install-windows.sgml\n+++ b/postgresql-14.0_vs2022/doc/src/sgml/install-windows.sgml\n\nAlso note that patches should be against Git HEAD unless fixing a bug only\npresent in a backbranch.\n\n> I will reflect any updates after official release on monday, November 8\n\nAny updates on this following the November 8 release?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 17 Nov 2021 15:11:24 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Hello Daniel,\n\nThank you for looking into it.\n\nMy skills with git are still minimal and I am not working on a proper development platform, so sorry for any inconveniences from my side.\n\nWhen I upgraded, Microsoft had jumped directly from Preview 7 to Preview 7.1 of VS2022, skipping the release version 7.0.\n\nI had to install it on a different machine to test it with the final VS2022 version from November 8.\n\nOn both platforms the build of the snapshot from 19.11.2021 is successful but gives the following warnings, which seem unrelated to the proposed patch:\n\nDer Buildvorgang wurde erfolgreich ausgeführt.\n\n\"C:\\pgdev\\postgresql-15devel\\pgsql.sln\" (Standardziel) (1) ->\n\"C:\\pgdev\\postgresql-15devel\\postgres.vcxproj\" (Standardziel) (2) ->\n(ClCompile Ziel) ->\n C:\\pgdev\\postgresql-15devel\\src\\backend\\access\\heap\\pruneheap.c(858,18): warning C4101: \"htup\": Unreferenzierte lokale Variable [C:\\pgdev\\postgresql-15devel\\postgres.vcxproj]\n C:\\pgdev\\postgresql-15devel\\src\\backend\\access\\heap\\pruneheap.c(870,11): warning C4101: \"tolp\": Unreferenzierte lokale Variable [C:\\pgdev\\postgresql-15devel\\postgres.vcxproj]\n\n 2 Warnung(en)\n 0 Fehler\n\n(Meaning 2 unreferenced local variables in pruneheap.c)\n\nThe build produced .vcxproj files with ToolsVersion=\"17.0\", so it recognized the new environment correctly.\n\nI corrected some omissions in _GetVisualStudioVersion in VSObjectFactory.pm.\n\nPlease find attached the corrected patch version v4.\n\nDue to my restricted development environment I would appreciate it if anybody could test the resulting binaries (but MS seems not to have changed much in the C build environment internally).\n\nThanks\n\nHans Buschmann",
"msg_date": "Sat, 20 Nov 2021 17:54:30 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "On Sat, Nov 20, 2021 at 05:54:30PM +0000, Hans Buschmann wrote:\n> My skills with git are minmal yet and I am working on a correct\n> development platform, so sorry for any inconveniances from my side.\n\nNo need to worry here. We all learn all the time. I have been able\nto apply your patch with a \"patch -p2\", which is fine enough. If you\nwant to generate cleaner diffs, you could use a \"git diff\" or a \"git\nformat-patch\". Folks around here rely on those commands heavily when\ngenerating patches.\n\n> On both platforms the build of snapshot from 19.11.2021 is\n> successfull but gives the following warnings which seem not\n> correlated to the proposed patch:\n\nThat's fine by me.\n\n> Der Buildvorgang wurde erfolgreich ausgeführt.\n> \n> \"C:\\pgdev\\postgresql-15devel\\pgsql.sln\" (Standardziel) (1) ->\n> \"C:\\pgdev\\postgresql-15devel\\postgres.vcxproj\" (Standardziel) (2) ->\n> (ClCompile Ziel) ->\n> C:\\pgdev\\postgresql-15devel\\src\\backend\\access\\heap\\pruneheap.c(858,18): warning C4101: \"htup\": Unreferenzierte lokale Variable [C:\\pgdev\\postgresql-15devel\\postgres.vcxproj]\n> C:\\pgdev\\postgresql-15devel\\src\\backend\\access\\heap\\pruneheap.c(870,11): warning C4101: \"tolp\": Unreferenzierte lokale Variable [C:\\pgdev\\postgresql-15devel\\postgres.vcxproj]\n> \n> 2 Warnung(en)\n> 0 Fehler\n> \n> (Meaning 2 unreferenced local variables in pruneheap.c)\n\nThose warnings are known. A commit from Peter G is at the origin of\nthat but nothing has been done about these yet:\nhttps://www.postgresql.org/message-id/YYTTuYykpVXEfnOr@paquier.xyz\n\nSo don't worry about that :)\n\nGlad to see that we should have nothing to do about locales this\ntime. I have not tested, but I think that you are covering all the areas\nthat need a refresh here. Nice work.\n\n+ # The version of nmake bundled in Visual Studio 2022 is greater\n+ # than 14.30 and less than 14.40. 
And the version number is\n+ # actually 17.00.\n+ elsif (\n+ ($visualStudioVersion ge '14.30' && $visualStudioVersion lt '14.40')\n+ || $visualStudioVersion eq '17.00')\n+ {\n+ return new VS2022Solution(@_);\n+ }\nWow, really? MSVC has not yet simplified their version numbering with\nnmake.\n\n+VC2017Project,VC2019Project or VC2022Project from MSBuildProject.pm) to it.\nNit: you should use a space when listing elements in a comma-separated\nlist.\n\n- method for compressing table or WAL data. Binaries and source can be\n+ method for compressing the table data. Binaries and source can be\nDiff unrelated to your patch. \n\nI'll double-check your patch later, but that looks rather good to me.\nWill try to apply and back-patch, and it would be better to check the\nversion numbers assigned in the patch, as well.\n--\nMichael",
"msg_date": "Sun, 21 Nov 2021 10:41:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "On Sun, Nov 21, 2021 at 10:41:17AM +0900, Michael Paquier wrote:\n> I'll double-check your patch later, but that looks rather good to me.\n> Will try to apply and back-patch, and it would be better to check the\n> version numbers assigned in the patch, as well.\n\nI have spent a couple of hours on that today, and applied that down to\n10 so as all branches benefit from that. There was a hidden problem\nin 10 and 11, where we need to be careful to use VC2012Project as base \nin MSBuildProject.pm.\n\nThanks, Hans!\n--\nMichael",
"msg_date": "Wed, 24 Nov 2021 13:11:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Hello Michael,\n\nthanks for your hard work and quick response!\nIt is very convenient to only use VS2022 for Windows from now on...\n\n>Diff unrelated to your patch. \n\nSorry for the copysoft problem from the first version.\n\n>Glad to see that we should have nothing to do about locales this\n>time. I have not tested, but I think that you are covering all the areas\n>that need a refresh here. Nice work.\n\nI think it is almost impossible to overestimate the value of such support from experienced hackers to others starting their journey right now...\n\nI hope I can motivate you (and other experienced hackers) to give me some more support on my real project, arriving anytime soon. It addresses hex_encoding (and more), targeting mostly pg_dump, but it also requires some deeper knowledge of general infrastructure and building (also on Windows). Stay tuned!\n\nPS: Does anybody have good relations with EDB to suggest they target VS2022 as the build environment for the upcoming PG15 release?\n\npostgres=# select version ();\n version\n------------------------------------------------------------\n PostgreSQL 14.1, compiled by Visual C++ build 1931, 64-bit\n(1 row)\n\nThanks!\n\nHans Buschmann\n\n\n",
"msg_date": "Wed, 24 Nov 2021 09:12:24 +0000",
"msg_from": "Hans Buschmann <buschmann@nidsa.net>",
"msg_from_op": true,
"msg_subject": "AW: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Hi\n\nOn Wed, Nov 24, 2021 at 9:12 AM Hans Buschmann <buschmann@nidsa.net> wrote:\n\n> Hello Michael,\n>\n> thanks for your hard work and quick response!\n> It is very convenient to only use VS2022 for Windows from now on...\n>\n> >Diff unrelated to your patch.\n>\n> Sorry for the copysoft problem from the first version.\n>\n> >Glad to see that we should have nothing to do about locales this\n> >time. I have not tested, but I think that you covering all the areas\n> >that need a refresh here. Nice work.\n>\n> I think it is almost impossible to overestimate the value of such support\n> from experienced hackers to others starting their journey right now...\n>\n> I hope I can motivate you (and other experienced hackers) to give me some\n> more support on my real project arriving anytime soon. It addresses\n> hex_encoding (and more) targetting mostly pg_dump, but requires also some\n> deeper knowledge of general infrastructure and building (also on Windows).\n> Stay tuned!\n>\n> PS: Does anybody have good relations to EDB suggesting them to target\n> VS2022 as the build environment for the upcoming PG15 release?\n>\n\nThat would be me...\n\n\n>\n> postgres=# select version ();\n> version\n> ------------------------------------------------------------\n> PostgreSQL 14.1, compiled by Visual C++ build 1931, 64-bit\n> (1 row)\n>\n\nIt's extremely unlikely that we'd shift to such a new version for PG15. We\nbuild many components aside from PostgreSQL, and need to use the same\ntoolchain for all of them (we've had very painful experiences with mix n\nmatch CRT versions in the past) so it's not just PG that needs to support\nVS2022 as far as we're concerned - Perl, Python, TCL, MIT Kerberos,\nOpenSSL, libxml2, libxslt etc. 
are all built with the same toolchain for\nconsistency.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Nov 2021 10:00:19 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 10:00:19AM +0000, Dave Page wrote:\n> It's extremely unlikely that we'd shift to such a new version for PG15. We\n> build many components aside from PostgreSQL, and need to use the same\n> toolchain for all of them (we've had very painful experiences with mix n\n> match CRT versions in the past) so it's not just PG that needs to support\n> VS2022 as far as we're concerned\n\nYes, I can understand that upgrading the base version of VS used is a\nvery difficult exercise. I have been through that on Windows for\nPostgres, as well as for the compilation of all its dependencies.\n\n> - Perl, Python, TCL, MIT Kerberos,\n> OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for\n> consistency.\n\nDave, do you include LZ4 in 14? Just asking, as a matter of\ncuriosity.\n--\nMichael",
"msg_date": "Wed, 24 Nov 2021 20:36:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
},
{
"msg_contents": "Hi\n\nOn Wed, Nov 24, 2021 at 11:36 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Wed, Nov 24, 2021 at 10:00:19AM +0000, Dave Page wrote:\n> > It's extremely unlikely that we'd shift to such a new version for PG15.\n> We\n> > build many components aside from PostgreSQL, and need to use the same\n> > toolchain for all of them (we've had very painful experiences with mix n\n> > match CRT versions in the past) so it's not just PG that needs to support\n> > VS2022 as far as we're concerned\n>\n> Yes, I can understand that upgrading the base version of VS used is a\n> very difficult exercise. I have been through that, on Windows for\n> Postgres.. As well as for the compilation of all its dependencies.\n>\n> > - Perl, Python, TCL, MIT Kerberos,\n> > OpenSSL, libxml2, libxslt etc. are all built with the same toolchain for\n> > consistency.\n>\n> Dave, do you include LZ4 in 14? Just asking, as a matter of\n> curiosity.\n>\n\nYes we do :-)\n\nC:\\Program Files\\PostgreSQL\\14\\bin>pg_config\nBINDIR = C:/PROGRA~1/POSTGR~1/14/bin\nDOCDIR = C:/PROGRA~1/POSTGR~1/14/doc\nHTMLDIR = C:/PROGRA~1/POSTGR~1/14/doc\nINCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include\nPKGINCLUDEDIR = C:/PROGRA~1/POSTGR~1/14/include\nINCLUDEDIR-SERVER = C:/PROGRA~1/POSTGR~1/14/include/server\nLIBDIR = C:/Program Files/PostgreSQL/14/lib\nPKGLIBDIR = C:/Program Files/PostgreSQL/14/lib\nLOCALEDIR = C:/PROGRA~1/POSTGR~1/14/share/locale\nMANDIR = C:/Program Files/PostgreSQL/14/man\nSHAREDIR = C:/PROGRA~1/POSTGR~1/14/share\nSYSCONFDIR = C:/Program Files/PostgreSQL/14/etc\nPGXS = C:/Program Files/PostgreSQL/14/lib/pgxs/src/makefiles/pgxs.mk\nCONFIGURE = --enable-thread-safety --enable-nls --with-ldap\n--with-ssl=openssl --with-uuid --with-libxml --with-libxslt --with-lz4\n--with-icu --with-tcl --with-perl --with-python\nCC = not recorded\nCPPFLAGS = not recorded\nCFLAGS = not recorded\nCFLAGS_SL = not recorded\nLDFLAGS = not recorded\nLDFLAGS_EX = not recorded\nLDFLAGS_SL = not 
recorded\nLIBS = not recorded\nVERSION = PostgreSQL 14.1\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Nov 2021 12:01:27 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": false,
"msg_subject": "Re: VS2022: Support Visual Studio 2022 on Windows"
}
] |
[
{
"msg_contents": "Good day.\n\nI found an opportunity in the Buffer Manager code, in the BufferAlloc\nfunction:\n- When a valid buffer is evicted, BufferAlloc acquires two partition\nlwlocks: one for the partition the evicted block is in and one for the\nnew block's placement.\n\nIt doesn't matter if there is a small number of concurrent replacements.\nBut if there are a lot of concurrent backends replacing buffers, a\ncomplex dependency net quickly arises.\n\nIt can easily be seen with select-only pgbench with scale 100 and\nshared buffers 128MB: scale 100 produces a 1.5GB table, which certainly\ndoesn't fit in shared buffers. This way performance starts to degrade at\n~100 connections. Even with shared buffers 1GB it slowly degrades after\n150 connections.\n\nBut strictly speaking, there is no need to hold both locks\nsimultaneously. The buffer is pinned, so other processes cannot select it\nfor eviction. If the tag is cleared and the buffer removed from the old\npartition, then other processes will not find it. Therefore it is safe to\nrelease the old partition lock before acquiring the new partition lock.\n\nIf another process concurrently inserts the same new buffer, then the old\nbuffer is placed on the buffer manager's freelist.\n\nAdditional optimisation: in case the old buffer is reused, there is no\nneed to put its BufferLookupEnt into dynahash's freelist. That reduces\nlock contention a bit more. To accomplish this, FreeListData.nentries is\nchanged to pg_atomic_u32/pg_atomic_u64 and atomic increment/decrement\nis used.\n\nRemark: there was a bug in `hash_update_hash_key`: nentries were not\nkept in sync if the freelist partitions differ. This bug was never\ntriggered because the single use of `hash_update_hash_key` doesn't move\nan entry between partitions.\n\nHere are some test results.\n\n- pgbench with scale 100 was run with --select-only (since we want\nto test the buffer manager alone). 
It produces 1.5GB table.\n- two shared_buffers values were tested: 128MB and 1GB.\n- second best result were taken among five runs\n\nTest were made in three system configurations:\n- notebook with i7-1165G7 (limited to 2.8GHz to not overheat)\n- Xeon X5675 6 core 2 socket NUMA system (12 cores/24 threads).\n- same Xeon X5675 but restricted to single socket\n (with numactl -m 0 -N 0)\n\nResults for i7-1165G7:\n\n conns | master | patched | master 1G | patched 1G \n--------+------------+------------+------------+------------\n 1 | 29667 | 29079 | 29425 | 29411 \n 2 | 55577 | 55553 | 57974 | 57223 \n 3 | 87393 | 87924 | 87246 | 89210 \n 5 | 136222 | 136879 | 133775 | 133949 \n 7 | 179865 | 176734 | 178297 | 175559 \n 17 | 215953 | 214708 | 222908 | 223651 \n 27 | 211162 | 213014 | 220506 | 219752 \n 53 | 211620 | 218702 | 220906 | 225218 \n 83 | 213488 | 221799 | 219075 | 228096 \n 107 | 212018 | 222110 | 222502 | 227825 \n 139 | 207068 | 220812 | 218191 | 226712 \n 163 | 203716 | 220793 | 213498 | 226493 \n 191 | 199248 | 217486 | 210994 | 221026 \n 211 | 195887 | 217356 | 209601 | 219397 \n 239 | 193133 | 215695 | 209023 | 218773 \n 271 | 190686 | 213668 | 207181 | 219137 \n 307 | 188066 | 214120 | 205392 | 218782 \n 353 | 185449 | 213570 | 202120 | 217786 \n 397 | 182173 | 212168 | 201285 | 216489 \n\nResults for 1 socket X5675\n\n conns | master | patched | master 1G | patched 1G \n--------+------------+------------+------------+------------\n 1 | 16864 | 16584 | 17419 | 17630 \n 2 | 32764 | 32735 | 34593 | 34000 \n 3 | 47258 | 46022 | 49570 | 47432 \n 5 | 64487 | 64929 | 68369 | 68885 \n 7 | 81932 | 82034 | 87543 | 87538 \n 17 | 114502 | 114218 | 127347 | 127448 \n 27 | 116030 | 115758 | 130003 | 128890 \n 53 | 116814 | 117197 | 131142 | 131080 \n 83 | 114438 | 116704 | 130198 | 130985 \n 107 | 113255 | 116910 | 129932 | 131468 \n 139 | 111577 | 116929 | 129012 | 131782 \n 163 | 110477 | 116818 | 128628 | 131697 \n 191 | 109237 | 116672 | 127833 | 131586 \n 211 
| 108248 | 116396 | 127474 | 131650 \n 239 | 107443 | 116237 | 126731 | 131760 \n 271 | 106434 | 115813 | 126009 | 131526 \n 307 | 105077 | 115542 | 125279 | 131421 \n 353 | 104516 | 115277 | 124491 | 131276 \n 397 | 103016 | 114842 | 123624 | 131019 \n\nResults for 2 socket x5675\n\n conns | master | patched | master 1G | patched 1G \n--------+------------+------------+------------+------------\n 1 | 16323 | 16280 | 16959 | 17598 \n 2 | 30510 | 31431 | 33763 | 31690 \n 3 | 45051 | 45834 | 48896 | 47991 \n 5 | 71800 | 73208 | 78077 | 74714 \n 7 | 89792 | 89980 | 95986 | 96662 \n 17 | 178319 | 177979 | 195566 | 196143 \n 27 | 210475 | 205209 | 226966 | 235249 \n 53 | 222857 | 220256 | 252673 | 251041 \n 83 | 219652 | 219938 | 250309 | 250464 \n 107 | 218468 | 219849 | 251312 | 251425 \n 139 | 210486 | 217003 | 250029 | 250695 \n 163 | 204068 | 218424 | 248234 | 252940 \n 191 | 200014 | 218224 | 246622 | 253331 \n 211 | 197608 | 218033 | 245331 | 253055 \n 239 | 195036 | 218398 | 243306 | 253394 \n 271 | 192780 | 217747 | 241406 | 253148 \n 307 | 189490 | 217607 | 239246 | 253373 \n 353 | 186104 | 216697 | 236952 | 253034 \n 397 | 183507 | 216324 | 234764 | 252872 \n\nAs can be seen, patched version degrades much slower than master.\n(Or even doesn't degrade with 1G shared buffer on older processor).\n\nPS.\n\nThere is a room for further improvements:\n- buffer manager's freelist could be partitioned\n- dynahash's freelist could be sized/aligned to CPU cache line\n- in fact, there is no need in dynahash at all. It is better to make\n custom hash-table using BufferDesc as entries. BufferDesc has spare\n space for link to next and hashvalue.\n\nregards,\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Sat, 02 Oct 2021 01:25:57 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 3:26 PM Yura Sokolov <y.sokolov@postgrespro.ru>\nwrote:\n\n> Good day.\n>\n> I found some opportunity in Buffer Manager code in BufferAlloc\n> function:\n> - When valid buffer is evicted, BufferAlloc acquires two partition\n> lwlocks: for partition for evicted block is in and partition for new\n> block placement.\n>\n> It doesn't matter if there is small number of concurrent replacements.\n> But if there are a lot of concurrent backends replacing buffers,\n> complex dependency net quickly arose.\n>\n> It could be easily seen with select-only pgbench with scale 100 and\n> shared buffers 128MB: scale 100 produces 1.5GB tables, and it certainly\n> doesn't fit shared buffers. This way performance starts to degrade at\n> ~100 connections. Even with shared buffers 1GB it slowly degrades after\n> 150 connections.\n>\n> But strictly speaking, there is no need to hold both lock\n> simultaneously. Buffer is pinned so other processes could not select it\n> for eviction. If tag is cleared and buffer removed from old partition\n> then other processes will not find it. Therefore it is safe to release\n> old partition lock before acquiring new partition lock.\n>\n> If other process concurrently inserts same new buffer, then old buffer\n> is placed to bufmanager's freelist.\n>\n> Additional optimisation: in case of old buffer is reused, there is no\n> need to put its BufferLookupEnt into dynahash's freelist. That reduces\n> lock contention a bit more. To acomplish this FreeListData.nentries is\n> changed to pg_atomic_u32/pg_atomic_u64 and atomic increment/decrement\n> is used.\n>\n> Remark: there were bug in the `hash_update_hash_key`: nentries were not\n> kept in sync if freelist partitions differ. 
This bug were never\n> triggered because single use of `hash_update_hash_key` doesn't move\n> entry between partitions.\n>\n> There is some tests results.\n>\n> - pgbench with scale 100 were tested with --select-only (since we want\n> to test buffer manager alone). It produces 1.5GB table.\n> - two shared_buffers values were tested: 128MB and 1GB.\n> - second best result were taken among five runs\n>\n> Test were made in three system configurations:\n> - notebook with i7-1165G7 (limited to 2.8GHz to not overheat)\n> - Xeon X5675 6 core 2 socket NUMA system (12 cores/24 threads).\n> - same Xeon X5675 but restricted to single socket\n> (with numactl -m 0 -N 0)\n>\n> Results for i7-1165G7:\n>\n> conns | master | patched | master 1G | patched 1G\n> --------+------------+------------+------------+------------\n> 1 | 29667 | 29079 | 29425 | 29411\n> 2 | 55577 | 55553 | 57974 | 57223\n> 3 | 87393 | 87924 | 87246 | 89210\n> 5 | 136222 | 136879 | 133775 | 133949\n> 7 | 179865 | 176734 | 178297 | 175559\n> 17 | 215953 | 214708 | 222908 | 223651\n> 27 | 211162 | 213014 | 220506 | 219752\n> 53 | 211620 | 218702 | 220906 | 225218\n> 83 | 213488 | 221799 | 219075 | 228096\n> 107 | 212018 | 222110 | 222502 | 227825\n> 139 | 207068 | 220812 | 218191 | 226712\n> 163 | 203716 | 220793 | 213498 | 226493\n> 191 | 199248 | 217486 | 210994 | 221026\n> 211 | 195887 | 217356 | 209601 | 219397\n> 239 | 193133 | 215695 | 209023 | 218773\n> 271 | 190686 | 213668 | 207181 | 219137\n> 307 | 188066 | 214120 | 205392 | 218782\n> 353 | 185449 | 213570 | 202120 | 217786\n> 397 | 182173 | 212168 | 201285 | 216489\n>\n> Results for 1 socket X5675\n>\n> conns | master | patched | master 1G | patched 1G\n> --------+------------+------------+------------+------------\n> 1 | 16864 | 16584 | 17419 | 17630\n> 2 | 32764 | 32735 | 34593 | 34000\n> 3 | 47258 | 46022 | 49570 | 47432\n> 5 | 64487 | 64929 | 68369 | 68885\n> 7 | 81932 | 82034 | 87543 | 87538\n> 17 | 114502 | 114218 | 127347 | 127448\n> 27 | 
116030 | 115758 | 130003 | 128890\n> 53 | 116814 | 117197 | 131142 | 131080\n> 83 | 114438 | 116704 | 130198 | 130985\n> 107 | 113255 | 116910 | 129932 | 131468\n> 139 | 111577 | 116929 | 129012 | 131782\n> 163 | 110477 | 116818 | 128628 | 131697\n> 191 | 109237 | 116672 | 127833 | 131586\n> 211 | 108248 | 116396 | 127474 | 131650\n> 239 | 107443 | 116237 | 126731 | 131760\n> 271 | 106434 | 115813 | 126009 | 131526\n> 307 | 105077 | 115542 | 125279 | 131421\n> 353 | 104516 | 115277 | 124491 | 131276\n> 397 | 103016 | 114842 | 123624 | 131019\n>\n> Results for 2 socket x5675\n>\n> conns | master | patched | master 1G | patched 1G\n> --------+------------+------------+------------+------------\n> 1 | 16323 | 16280 | 16959 | 17598\n> 2 | 30510 | 31431 | 33763 | 31690\n> 3 | 45051 | 45834 | 48896 | 47991\n> 5 | 71800 | 73208 | 78077 | 74714\n> 7 | 89792 | 89980 | 95986 | 96662\n> 17 | 178319 | 177979 | 195566 | 196143\n> 27 | 210475 | 205209 | 226966 | 235249\n> 53 | 222857 | 220256 | 252673 | 251041\n> 83 | 219652 | 219938 | 250309 | 250464\n> 107 | 218468 | 219849 | 251312 | 251425\n> 139 | 210486 | 217003 | 250029 | 250695\n> 163 | 204068 | 218424 | 248234 | 252940\n> 191 | 200014 | 218224 | 246622 | 253331\n> 211 | 197608 | 218033 | 245331 | 253055\n> 239 | 195036 | 218398 | 243306 | 253394\n> 271 | 192780 | 217747 | 241406 | 253148\n> 307 | 189490 | 217607 | 239246 | 253373\n> 353 | 186104 | 216697 | 236952 | 253034\n> 397 | 183507 | 216324 | 234764 | 252872\n>\n> As can be seen, patched version degrades much slower than master.\n> (Or even doesn't degrade with 1G shared buffer on older processor).\n>\n> PS.\n>\n> There is a room for further improvements:\n> - buffer manager's freelist could be partitioned\n> - dynahash's freelist could be sized/aligned to CPU cache line\n> - in fact, there is no need in dynahash at all. It is better to make\n> custom hash-table using BufferDesc as entries. 
BufferDesc has spare\n> space for link to next and hashvalue.\n>\n> regards,\n> Yura Sokolov\n> y.sokolov@postgrespro.ru\n> funny.falcon@gmail.com\n\n\nHi,\nImprovement is impressive.\n\nFor BufTableFreeDeleted(), since it only has one call, maybe its caller can\ninvoke hash_return_to_freelist() directly.\n\nFor free_list_decrement_nentries():\n\n+ Assert(hctl->freeList[freelist_idx].nentries.value < MAX_NENTRIES);\n\nIs the assertion necessary ? There is similar assertion in\nfree_list_increment_nentries() which would\nmaintain hctl->freeList[freelist_idx].nentries.value <= MAX_NENTRIES.\n\nCheers\n\nOn Fri, Oct 1, 2021 at 3:26 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:Good day.\n\nI found some opportunity in Buffer Manager code in BufferAlloc\nfunction:\n- When valid buffer is evicted, BufferAlloc acquires two partition\nlwlocks: for partition for evicted block is in and partition for new\nblock placement.\n\nIt doesn't matter if there is small number of concurrent replacements.\nBut if there are a lot of concurrent backends replacing buffers,\ncomplex dependency net quickly arose.\n\nIt could be easily seen with select-only pgbench with scale 100 and\nshared buffers 128MB: scale 100 produces 1.5GB tables, and it certainly\ndoesn't fit shared buffers. This way performance starts to degrade at\n~100 connections. Even with shared buffers 1GB it slowly degrades after\n150 connections. \n\nBut strictly speaking, there is no need to hold both lock\nsimultaneously. Buffer is pinned so other processes could not select it\nfor eviction. If tag is cleared and buffer removed from old partition\nthen other processes will not find it. Therefore it is safe to release\nold partition lock before acquiring new partition lock.\n\nIf other process concurrently inserts same new buffer, then old buffer\nis placed to bufmanager's freelist.\n\nAdditional optimisation: in case of old buffer is reused, there is no\nneed to put its BufferLookupEnt into dynahash's freelist. 
That reduces\nlock contention a bit more. To acomplish this FreeListData.nentries is\nchanged to pg_atomic_u32/pg_atomic_u64 and atomic increment/decrement\nis used.\n\nRemark: there were bug in the `hash_update_hash_key`: nentries were not\nkept in sync if freelist partitions differ. This bug were never\ntriggered because single use of `hash_update_hash_key` doesn't move\nentry between partitions.\n\nThere is some tests results.\n\n- pgbench with scale 100 were tested with --select-only (since we want\nto test buffer manager alone). It produces 1.5GB table.\n- two shared_buffers values were tested: 128MB and 1GB.\n- second best result were taken among five runs\n\nTest were made in three system configurations:\n- notebook with i7-1165G7 (limited to 2.8GHz to not overheat)\n- Xeon X5675 6 core 2 socket NUMA system (12 cores/24 threads).\n- same Xeon X5675 but restricted to single socket\n (with numactl -m 0 -N 0)\n\nResults for i7-1165G7:\n\n conns | master | patched | master 1G | patched 1G \n--------+------------+------------+------------+------------\n 1 | 29667 | 29079 | 29425 | 29411 \n 2 | 55577 | 55553 | 57974 | 57223 \n 3 | 87393 | 87924 | 87246 | 89210 \n 5 | 136222 | 136879 | 133775 | 133949 \n 7 | 179865 | 176734 | 178297 | 175559 \n 17 | 215953 | 214708 | 222908 | 223651 \n 27 | 211162 | 213014 | 220506 | 219752 \n 53 | 211620 | 218702 | 220906 | 225218 \n 83 | 213488 | 221799 | 219075 | 228096 \n 107 | 212018 | 222110 | 222502 | 227825 \n 139 | 207068 | 220812 | 218191 | 226712 \n 163 | 203716 | 220793 | 213498 | 226493 \n 191 | 199248 | 217486 | 210994 | 221026 \n 211 | 195887 | 217356 | 209601 | 219397 \n 239 | 193133 | 215695 | 209023 | 218773 \n 271 | 190686 | 213668 | 207181 | 219137 \n 307 | 188066 | 214120 | 205392 | 218782 \n 353 | 185449 | 213570 | 202120 | 217786 \n 397 | 182173 | 212168 | 201285 | 216489 \n\nResults for 1 socket X5675\n\n conns | master | patched | master 1G | patched 1G 
\n--------+------------+------------+------------+------------\n 1 | 16864 | 16584 | 17419 | 17630 \n 2 | 32764 | 32735 | 34593 | 34000 \n 3 | 47258 | 46022 | 49570 | 47432 \n 5 | 64487 | 64929 | 68369 | 68885 \n 7 | 81932 | 82034 | 87543 | 87538 \n 17 | 114502 | 114218 | 127347 | 127448 \n 27 | 116030 | 115758 | 130003 | 128890 \n 53 | 116814 | 117197 | 131142 | 131080 \n 83 | 114438 | 116704 | 130198 | 130985 \n 107 | 113255 | 116910 | 129932 | 131468 \n 139 | 111577 | 116929 | 129012 | 131782 \n 163 | 110477 | 116818 | 128628 | 131697 \n 191 | 109237 | 116672 | 127833 | 131586 \n 211 | 108248 | 116396 | 127474 | 131650 \n 239 | 107443 | 116237 | 126731 | 131760 \n 271 | 106434 | 115813 | 126009 | 131526 \n 307 | 105077 | 115542 | 125279 | 131421 \n 353 | 104516 | 115277 | 124491 | 131276 \n 397 | 103016 | 114842 | 123624 | 131019 \n\nResults for 2 socket x5675\n\n conns | master | patched | master 1G | patched 1G \n--------+------------+------------+------------+------------\n 1 | 16323 | 16280 | 16959 | 17598 \n 2 | 30510 | 31431 | 33763 | 31690 \n 3 | 45051 | 45834 | 48896 | 47991 \n 5 | 71800 | 73208 | 78077 | 74714 \n 7 | 89792 | 89980 | 95986 | 96662 \n 17 | 178319 | 177979 | 195566 | 196143 \n 27 | 210475 | 205209 | 226966 | 235249 \n 53 | 222857 | 220256 | 252673 | 251041 \n 83 | 219652 | 219938 | 250309 | 250464 \n 107 | 218468 | 219849 | 251312 | 251425 \n 139 | 210486 | 217003 | 250029 | 250695 \n 163 | 204068 | 218424 | 248234 | 252940 \n 191 | 200014 | 218224 | 246622 | 253331 \n 211 | 197608 | 218033 | 245331 | 253055 \n 239 | 195036 | 218398 | 243306 | 253394 \n 271 | 192780 | 217747 | 241406 | 253148 \n 307 | 189490 | 217607 | 239246 | 253373 \n 353 | 186104 | 216697 | 236952 | 253034 \n 397 | 183507 | 216324 | 234764 | 252872 \n\nAs can be seen, patched version degrades much slower than master.\n(Or even doesn't degrade with 1G shared buffer on older processor).\n\nPS.\n\nThere is a room for further improvements:\n- buffer manager's freelist 
could be partitioned\n- dynahash's freelist could be sized/aligned to CPU cache line\n- in fact, there is no need in dynahash at all. It is better to make\n custom hash-table using BufferDesc as entries. BufferDesc has spare\n space for link to next and hashvalue.\n\nregards,\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.comHi,Improvement is impressive.For BufTableFreeDeleted(), since it only has one call, maybe its caller can invoke hash_return_to_freelist() directly.For free_list_decrement_nentries():+ Assert(hctl->freeList[freelist_idx].nentries.value < MAX_NENTRIES);Is the assertion necessary ? There is similar assertion in free_list_increment_nentries() which would maintain hctl->freeList[freelist_idx].nentries.value <= MAX_NENTRIES.Cheers",
"msg_date": "Fri, 1 Oct 2021 15:46:26 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пт, 01/10/2021 в 15:46 -0700, Zhihong Yu wrote:\n> \n> \n> On Fri, Oct 1, 2021 at 3:26 PM Yura Sokolov <y.sokolov@postgrespro.ru>\n> wrote:\n> > Good day.\n> > \n> > I found some opportunity in Buffer Manager code in BufferAlloc\n> > function:\n> > - When valid buffer is evicted, BufferAlloc acquires two partition\n> > lwlocks: for partition for evicted block is in and partition for new\n> > block placement.\n> > \n> > It doesn't matter if there is small number of concurrent\n> > replacements.\n> > But if there are a lot of concurrent backends replacing buffers,\n> > complex dependency net quickly arose.\n> > \n> > It could be easily seen with select-only pgbench with scale 100 and\n> > shared buffers 128MB: scale 100 produces 1.5GB tables, and it\n> > certainly\n> > doesn't fit shared buffers. This way performance starts to degrade\n> > at\n> > ~100 connections. Even with shared buffers 1GB it slowly degrades\n> > after\n> > 150 connections. \n> > \n> > But strictly speaking, there is no need to hold both lock\n> > simultaneously. Buffer is pinned so other processes could not select\n> > it\n> > for eviction. If tag is cleared and buffer removed from old\n> > partition\n> > then other processes will not find it. Therefore it is safe to\n> > release\n> > old partition lock before acquiring new partition lock.\n> > \n> > If other process concurrently inserts same new buffer, then old\n> > buffer\n> > is placed to bufmanager's freelist.\n> > \n> > Additional optimisation: in case of old buffer is reused, there is\n> > no\n> > need to put its BufferLookupEnt into dynahash's freelist. That\n> > reduces\n> > lock contention a bit more. To acomplish this FreeListData.nentries\n> > is\n> > changed to pg_atomic_u32/pg_atomic_u64 and atomic\n> > increment/decrement\n> > is used.\n> > \n> > Remark: there were bug in the `hash_update_hash_key`: nentries were\n> > not\n> > kept in sync if freelist partitions differ. 
This bug were never\n> > triggered because single use of `hash_update_hash_key` doesn't move\n> > entry between partitions.\n> > \n> > There is some tests results.\n> > \n> > - pgbench with scale 100 were tested with --select-only (since we\n> > want\n> > to test buffer manager alone). It produces 1.5GB table.\n> > - two shared_buffers values were tested: 128MB and 1GB.\n> > - second best result were taken among five runs\n> > \n> > Test were made in three system configurations:\n> > - notebook with i7-1165G7 (limited to 2.8GHz to not overheat)\n> > - Xeon X5675 6 core 2 socket NUMA system (12 cores/24 threads).\n> > - same Xeon X5675 but restricted to single socket\n> > (with numactl -m 0 -N 0)\n> > \n> > Results for i7-1165G7:\n> > \n> > conns | master | patched | master 1G | patched 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 29667 | 29079 | 29425 | 29411 \n> > 2 | 55577 | 55553 | 57974 | 57223 \n> > 3 | 87393 | 87924 | 87246 | 89210 \n> > 5 | 136222 | 136879 | 133775 | 133949 \n> > 7 | 179865 | 176734 | 178297 | 175559 \n> > 17 | 215953 | 214708 | 222908 | 223651 \n> > 27 | 211162 | 213014 | 220506 | 219752 \n> > 53 | 211620 | 218702 | 220906 | 225218 \n> > 83 | 213488 | 221799 | 219075 | 228096 \n> > 107 | 212018 | 222110 | 222502 | 227825 \n> > 139 | 207068 | 220812 | 218191 | 226712 \n> > 163 | 203716 | 220793 | 213498 | 226493 \n> > 191 | 199248 | 217486 | 210994 | 221026 \n> > 211 | 195887 | 217356 | 209601 | 219397 \n> > 239 | 193133 | 215695 | 209023 | 218773 \n> > 271 | 190686 | 213668 | 207181 | 219137 \n> > 307 | 188066 | 214120 | 205392 | 218782 \n> > 353 | 185449 | 213570 | 202120 | 217786 \n> > 397 | 182173 | 212168 | 201285 | 216489 \n> > \n> > Results for 1 socket X5675\n> > \n> > conns | master | patched | master 1G | patched 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 16864 | 16584 | 17419 | 17630 \n> > 2 | 32764 | 32735 | 34593 | 34000 \n> > 3 | 47258 | 46022 | 49570 | 47432 
\n> > 5 | 64487 | 64929 | 68369 | 68885 \n> > 7 | 81932 | 82034 | 87543 | 87538 \n> > 17 | 114502 | 114218 | 127347 | 127448 \n> > 27 | 116030 | 115758 | 130003 | 128890 \n> > 53 | 116814 | 117197 | 131142 | 131080 \n> > 83 | 114438 | 116704 | 130198 | 130985 \n> > 107 | 113255 | 116910 | 129932 | 131468 \n> > 139 | 111577 | 116929 | 129012 | 131782 \n> > 163 | 110477 | 116818 | 128628 | 131697 \n> > 191 | 109237 | 116672 | 127833 | 131586 \n> > 211 | 108248 | 116396 | 127474 | 131650 \n> > 239 | 107443 | 116237 | 126731 | 131760 \n> > 271 | 106434 | 115813 | 126009 | 131526 \n> > 307 | 105077 | 115542 | 125279 | 131421 \n> > 353 | 104516 | 115277 | 124491 | 131276 \n> > 397 | 103016 | 114842 | 123624 | 131019 \n> > \n> > Results for 2 socket x5675\n> > \n> > conns | master | patched | master 1G | patched 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 16323 | 16280 | 16959 | 17598 \n> > 2 | 30510 | 31431 | 33763 | 31690 \n> > 3 | 45051 | 45834 | 48896 | 47991 \n> > 5 | 71800 | 73208 | 78077 | 74714 \n> > 7 | 89792 | 89980 | 95986 | 96662 \n> > 17 | 178319 | 177979 | 195566 | 196143 \n> > 27 | 210475 | 205209 | 226966 | 235249 \n> > 53 | 222857 | 220256 | 252673 | 251041 \n> > 83 | 219652 | 219938 | 250309 | 250464 \n> > 107 | 218468 | 219849 | 251312 | 251425 \n> > 139 | 210486 | 217003 | 250029 | 250695 \n> > 163 | 204068 | 218424 | 248234 | 252940 \n> > 191 | 200014 | 218224 | 246622 | 253331 \n> > 211 | 197608 | 218033 | 245331 | 253055 \n> > 239 | 195036 | 218398 | 243306 | 253394 \n> > 271 | 192780 | 217747 | 241406 | 253148 \n> > 307 | 189490 | 217607 | 239246 | 253373 \n> > 353 | 186104 | 216697 | 236952 | 253034 \n> > 397 | 183507 | 216324 | 234764 | 252872 \n> > \n> > As can be seen, patched version degrades much slower than master.\n> > (Or even doesn't degrade with 1G shared buffer on older processor).\n> > \n> > PS.\n> > \n> > There is a room for further improvements:\n> > - buffer manager's freelist could be 
partitioned\n> > - dynahash's freelist could be sized/aligned to CPU cache line\n> > - in fact, there is no need in dynahash at all. It is better to make\n> > custom hash-table using BufferDesc as entries. BufferDesc has\n> > spare\n> > space for link to next and hashvalue.\n> > \n> > regards,\n> > Yura Sokolov\n> > y.sokolov@postgrespro.ru\n> > funny.falcon@gmail.com\n> \n> Hi,\n> Improvement is impressive.\n\nThank you!\n\n> For BufTableFreeDeleted(), since it only has one call, maybe its\n> caller can invoke hash_return_to_freelist() directly.\n\nThat would be a dirty break of abstraction. Everywhere else we talk to\nBufTable, and here it would suddenly be hash ... eugh\n\n> For free_list_decrement_nentries():\n> \n> + Assert(hctl->freeList[freelist_idx].nentries.value <\n> MAX_NENTRIES);\n> \n> Is the assertion necessary ? There is similar assertion in\n> free_list_increment_nentries() which would maintain hctl-\n> >freeList[freelist_idx].nentries.value <= MAX_NENTRIES.\n\nThe assertion in free_list_decrement_nentries is absolutely necessary:\nit is a direct translation of Assert(nentries >= 0) from signed types\nto unsigned. (Since there are no signed atomics in pg, I had to convert\nthe signed `long nentries` to an unsigned `pg_atomic_uXX nentries`.)\n\nThe assertion in free_list_increment_nentries is not necessary. But it\ndoesn't hurt either - it is just an Assert that doesn't compile into\nproduction code.\n\n\nregards\n\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\n\n\n",
"msg_date": "Mon, 04 Oct 2021 07:18:56 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
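The signed-to-unsigned assertion translation discussed above can be sketched outside PostgreSQL with C11 atomics. All names here are invented stand-ins (the real patch uses pg_atomic_u32/u64 inside dynahash): decrementing an unsigned counter that is already zero wraps around to a huge value, so the old `Assert(nentries >= 0)` becomes an assertion that the result stayed below a large sentinel.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Sketch only: stand-in for dynahash's per-freelist entry counter. */
#define MAX_NENTRIES (UINT64_MAX / 2)

typedef struct { _Atomic uint64_t nentries; } FreeListCounter;

static void counter_increment(FreeListCounter *c)
{
    atomic_fetch_add(&c->nentries, 1);
}

static uint64_t counter_decrement(FreeListCounter *c)
{
    uint64_t newval = atomic_fetch_sub(&c->nentries, 1) - 1;

    /* Direct translation of Assert(nentries >= 0) to unsigned math:
     * if the counter was already 0, the subtraction wrapped far above
     * the sentinel, so the assertion fires. */
    assert(newval < MAX_NENTRIES);
    return newval;
}
```

The matching assertion on the increment side is optional, as the mail notes: it only documents an upper bound and compiles away in production builds.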
{
"msg_contents": "В Сб, 02/10/2021 в 01:25 +0300, Yura Sokolov пишет:\n> Good day.\n> \n> I found some opportunity in Buffer Manager code in BufferAlloc\n> function:\n> - When valid buffer is evicted, BufferAlloc acquires two partition\n> lwlocks: for partition for evicted block is in and partition for new\n> block placement.\n> \n> It doesn't matter if there is small number of concurrent replacements.\n> But if there are a lot of concurrent backends replacing buffers,\n> complex dependency net quickly arose.\n> \n> It could be easily seen with select-only pgbench with scale 100 and\n> shared buffers 128MB: scale 100 produces 1.5GB tables, and it certainly\n> doesn't fit shared buffers. This way performance starts to degrade at\n> ~100 connections. Even with shared buffers 1GB it slowly degrades after\n> 150 connections. \n> \n> But strictly speaking, there is no need to hold both lock\n> simultaneously. Buffer is pinned so other processes could not select it\n> for eviction. If tag is cleared and buffer removed from old partition\n> then other processes will not find it. Therefore it is safe to release\n> old partition lock before acquiring new partition lock.\n> \n> If other process concurrently inserts same new buffer, then old buffer\n> is placed to bufmanager's freelist.\n> \n> Additional optimisation: in case of old buffer is reused, there is no\n> need to put its BufferLookupEnt into dynahash's freelist. That reduces\n> lock contention a bit more. To acomplish this FreeListData.nentries is\n> changed to pg_atomic_u32/pg_atomic_u64 and atomic increment/decrement\n> is used.\n> \n> Remark: there were bug in the `hash_update_hash_key`: nentries were not\n> kept in sync if freelist partitions differ. This bug were never\n> triggered because single use of `hash_update_hash_key` doesn't move\n> entry between partitions.\n> \n> There is some tests results.\n> \n> - pgbench with scale 100 were tested with --select-only (since we want\n> to test buffer manager alone). 
It produces 1.5GB table.\n> - two shared_buffers values were tested: 128MB and 1GB.\n> - second best result were taken among five runs\n> \n> Test were made in three system configurations:\n> - notebook with i7-1165G7 (limited to 2.8GHz to not overheat)\n> - Xeon X5675 6 core 2 socket NUMA system (12 cores/24 threads).\n> - same Xeon X5675 but restricted to single socket\n> (with numactl -m 0 -N 0)\n> \n> Results for i7-1165G7:\n> \n> conns | master | patched | master 1G | patched 1G \n> --------+------------+------------+------------+------------\n> 1 | 29667 | 29079 | 29425 | 29411 \n> 2 | 55577 | 55553 | 57974 | 57223 \n> 3 | 87393 | 87924 | 87246 | 89210 \n> 5 | 136222 | 136879 | 133775 | 133949 \n> 7 | 179865 | 176734 | 178297 | 175559 \n> 17 | 215953 | 214708 | 222908 | 223651 \n> 27 | 211162 | 213014 | 220506 | 219752 \n> 53 | 211620 | 218702 | 220906 | 225218 \n> 83 | 213488 | 221799 | 219075 | 228096 \n> 107 | 212018 | 222110 | 222502 | 227825 \n> 139 | 207068 | 220812 | 218191 | 226712 \n> 163 | 203716 | 220793 | 213498 | 226493 \n> 191 | 199248 | 217486 | 210994 | 221026 \n> 211 | 195887 | 217356 | 209601 | 219397 \n> 239 | 193133 | 215695 | 209023 | 218773 \n> 271 | 190686 | 213668 | 207181 | 219137 \n> 307 | 188066 | 214120 | 205392 | 218782 \n> 353 | 185449 | 213570 | 202120 | 217786 \n> 397 | 182173 | 212168 | 201285 | 216489 \n> \n> Results for 1 socket X5675\n> \n> conns | master | patched | master 1G | patched 1G \n> --------+------------+------------+------------+------------\n> 1 | 16864 | 16584 | 17419 | 17630 \n> 2 | 32764 | 32735 | 34593 | 34000 \n> 3 | 47258 | 46022 | 49570 | 47432 \n> 5 | 64487 | 64929 | 68369 | 68885 \n> 7 | 81932 | 82034 | 87543 | 87538 \n> 17 | 114502 | 114218 | 127347 | 127448 \n> 27 | 116030 | 115758 | 130003 | 128890 \n> 53 | 116814 | 117197 | 131142 | 131080 \n> 83 | 114438 | 116704 | 130198 | 130985 \n> 107 | 113255 | 116910 | 129932 | 131468 \n> 139 | 111577 | 116929 | 129012 | 131782 \n> 163 | 110477 | 116818 | 
128628 | 131697 \n> 191 | 109237 | 116672 | 127833 | 131586 \n> 211 | 108248 | 116396 | 127474 | 131650 \n> 239 | 107443 | 116237 | 126731 | 131760 \n> 271 | 106434 | 115813 | 126009 | 131526 \n> 307 | 105077 | 115542 | 125279 | 131421 \n> 353 | 104516 | 115277 | 124491 | 131276 \n> 397 | 103016 | 114842 | 123624 | 131019 \n> \n> Results for 2 socket x5675\n> \n> conns | master | patched | master 1G | patched 1G \n> --------+------------+------------+------------+------------\n> 1 | 16323 | 16280 | 16959 | 17598 \n> 2 | 30510 | 31431 | 33763 | 31690 \n> 3 | 45051 | 45834 | 48896 | 47991 \n> 5 | 71800 | 73208 | 78077 | 74714 \n> 7 | 89792 | 89980 | 95986 | 96662 \n> 17 | 178319 | 177979 | 195566 | 196143 \n> 27 | 210475 | 205209 | 226966 | 235249 \n> 53 | 222857 | 220256 | 252673 | 251041 \n> 83 | 219652 | 219938 | 250309 | 250464 \n> 107 | 218468 | 219849 | 251312 | 251425 \n> 139 | 210486 | 217003 | 250029 | 250695 \n> 163 | 204068 | 218424 | 248234 | 252940 \n> 191 | 200014 | 218224 | 246622 | 253331 \n> 211 | 197608 | 218033 | 245331 | 253055 \n> 239 | 195036 | 218398 | 243306 | 253394 \n> 271 | 192780 | 217747 | 241406 | 253148 \n> 307 | 189490 | 217607 | 239246 | 253373 \n> 353 | 186104 | 216697 | 236952 | 253034 \n> 397 | 183507 | 216324 | 234764 | 252872 \n> \n> As can be seen, patched version degrades much slower than master.\n> (Or even doesn't degrade with 1G shared buffer on older processor).\n> \n> PS.\n> \n> There is a room for further improvements:\n> - buffer manager's freelist could be partitioned\n> - dynahash's freelist could be sized/aligned to CPU cache line\n> - in fact, there is no need in dynahash at all. It is better to make\n> custom hash-table using BufferDesc as entries. BufferDesc has spare\n> space for link to next and hashvalue.\n\nHere is fixed version:\n- in first version InvalidateBuffer's BufTableDelete were not paired\n with BufTableFreeDeleted.\n\nregards,\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Tue, 21 Dec 2021 08:23:35 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
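The core locking change described above can be illustrated with a minimal standalone model, with pthread mutexes standing in for partition LWLocks (all types and names here are invented for illustration): remove the entry under the old lock, release that lock, and only then take the new lock, so at most one partition lock is held at any moment.

```c
#include <assert.h>
#include <pthread.h>

/* Toy model: a buffer's tag lives in exactly one of two "partitions".
 * -1 means the slot is empty. */
typedef struct
{
    pthread_mutex_t lock;
    int             tag;
} Partition;

void partition_init(Partition *p)
{
    pthread_mutex_init(&p->lock, NULL);
    p->tag = -1;
}

/* Move a tag between partitions without ever holding both locks. */
void move_tag(Partition *oldp, Partition *newp, int tag)
{
    pthread_mutex_lock(&oldp->lock);
    assert(oldp->tag == tag);
    oldp->tag = -1;                    /* no longer findable under old tag */
    pthread_mutex_unlock(&oldp->lock); /* release before taking new lock */

    pthread_mutex_lock(&newp->lock);   /* only one lock held at a time */
    newp->tag = tag;
    pthread_mutex_unlock(&newp->lock);
}
```

The model omits what makes this safe in the real patch: the mover holds the only pin on the victim buffer, so no other backend can select it for eviction during the unlocked window.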
{
    "msg_contents": "\n\n> On 21 Dec 2021, at 10:23, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> \n> <v1-0001-bufmgr-do-not-acquire-two-partition-lo.patch>\n\nHi Yura!\n\nI've taken a look at the patch. The idea seems reasonable to me: clearing\\evicting the old buffer and placing the new one seem to be different units of work; there is no need to couple both partition locks together. And the claimed performance impact is fascinating! Though I haven't verified it yet.\n\nAt first glance the API change in BufTable does not seem obvious to me. Is void *oldelem actually BufferTag * or maybe BufferLookupEnt *? What if we would like to use or manipulate oldelem in the future?\n\nAnd the name BufTableFreeDeleted() confuses me a bit. You know, in C we usually free(), but in C++ we delete [], and here we do both... Just to be sure.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Sat, 22 Jan 2022 12:56:14 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
    "msg_contents": "At Sat, 22 Jan 2022 12:56:14 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in \n> I've taken a look at the patch. The idea seems reasonable to me:\n> clearing\\evicting the old buffer and placing the new one seem to be\n> different units of work; there is no need to couple both partition\n> locks together. And the claimed performance impact is fascinating!\n> Though I haven't verified it yet.\n\nThe need for having both locks came, it seems to me, from the fact\nthat the function was moving a buffer between two pages, and that there is a\nmoment where buftable holds two entries for one buffer. It seems to\nme this patch is trying to move a victim buffer to the new page via an\n\"unallocated\" state and to keep the buftable from having duplicate\nentries for the same buffer. The outline of the story sounds\nreasonable.\n\n> At first glance the API change in BufTable does not seem obvious to\n> me. Is void *oldelem actually BufferTag * or maybe BufferLookupEnt\n> *? What if we would like to use or manipulate oldelem in the\n> future?\n> \n> And the name BufTableFreeDeleted() confuses me a bit. You know, in C\n> we usually free(), but in C++ we delete [], and here we do\n> both... Just to be sure.\n\nHonestly, I don't like the API change at all, as it allows a\ndynahash to be in a (even if tentatively) broken state, and bufmgr touches\ntoo many dynahash details. Couldn't we get a good extent of the\nbenefit without such invasive changes?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 24 Jan 2022 17:19:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
    "msg_contents": "Hello, Yura.\n\nTest results look promising. But it seems like the naming and the dynahash\nAPI change are a little confusing.\n\n1) I think it is better to split the main part and the atomic nentries\noptimization into separate commits.\n2) Also, it would be nice to fix the hash_update_hash_key bug :)\n3) Do we really need a SIZEOF_LONG check? I think pg_atomic_uint64 is\nfine these days.\n4) Looks like hash_insert_with_hash_nocheck could potentially break\nthe hash table. Is it better to replace it with\nhash_search_with_hash_value with a HASH_ATTACH action?\n5) In such a case, hash_delete_skip_freelist with\nhash_search_with_hash_value with HASH_DETTACH.\n6) And then hash_return_to_freelist -> hash_dispose_dettached_entry?\n\nAnother approach is a new version of hash_update_hash_key with\ncallbacks. Probably it is the most \"correct\" way to keep the hash table's\nimplementation details closed. It should be doable, I think.\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Sun, 30 Jan 2022 20:27:43 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
    "msg_contents": "Hello, Yura.\n\nOne additional moment:\n\n> 1332: Assert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);\n> 1333: CLEAR_BUFFERTAG(buf->tag);\n> 1334: buf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);\n> 1335: UnlockBufHdr(buf, buf_state);\n\nI think there is no point in unlocking the buffer here because it will be\nlocked again a few moments later (and no one is able to find it anyway). Of\ncourse, it should be unlocked in case of a collision.\n\nBTW, I still think it is better to introduce some kind of\nhash_update_hash_key and use it.\n\nIt may look like this:\n\n// should be called with oldPartitionLock acquired\n// newPartitionLock held on return\n// oldPartitionLock and newPartitionLock are not taken at the same time\n// if newKeyPtr is present - existingEntry is removed\nbool hash_update_hash_key_or_remove(\n    HTAB *hashp,\n    void *existingEntry,\n    const void *newKeyPtr,\n    uint32 newHashValue,\n    LWLock *oldPartitionLock,\n    LWLock *newPartitionLock\n);\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Sun, 6 Feb 2022 19:34:54 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
    "msg_contents": "On Sun, 06 Feb 2022 at 19:34 +0300, Michail Nikolaev wrote:\n> Hello, Yura.\n> \n> One additional moment:\n> \n> > 1332: Assert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);\n> > 1333: CLEAR_BUFFERTAG(buf->tag);\n> > 1334: buf_state &= ~(BUF_FLAG_MASK | BUF_USAGECOUNT_MASK);\n> > 1335: UnlockBufHdr(buf, buf_state);\n> \n> I think there is no point in unlocking the buffer here because it will be\n> locked again a few moments later (and no one is able to find it anyway). Of\n> course, it should be unlocked in case of a collision.\n\nUnlockBufHdr actually writes buf_state. Until it is called, the buffer\nis in an intermediate state and it is ... locked.\n\nWe have to write the state with BM_TAG_VALID cleared before we\ncall BufTableDelete and release oldPartitionLock to maintain\nconsistency.\n\nPerhaps it could be cheated, and there is no harm in skipping the state\nwrite at this point. But I'm not confident enough to do it.\n\n> \n> BTW, I still think it is better to introduce some kind of\n> hash_update_hash_key and use it.\n> \n> It may look like this:\n> \n> // should be called with oldPartitionLock acquired\n> // newPartitionLock held on return\n> // oldPartitionLock and newPartitionLock are not taken at the same time\n> // if newKeyPtr is present - existingEntry is removed\n> bool hash_update_hash_key_or_remove(\n>     HTAB *hashp,\n>     void *existingEntry,\n>     const void *newKeyPtr,\n>     uint32 newHashValue,\n>     LWLock *oldPartitionLock,\n>     LWLock *newPartitionLock\n> );\n\nInteresting suggestion, thanks. I'll think about it.\nIt has the downside of bringing LWLock knowledge into dynahash.c.\nBut otherwise it looks smart.\n\n---------\n\nregards,\nYura Sokolov\n\n\n\n",
"msg_date": "Wed, 16 Feb 2022 10:33:18 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
    "msg_contents": "Hello, all.\n\nI thought about patch simplification, and tested a version\nwithout the BufTable and dynahash API changes at all.\n\nIt performs surprisingly well. It is just a bit worse\nthan v1 since there is more contention around dynahash's\nfreelist, but most of the improvement remains.\n\nI'll finish benchmarking and will attach graphs with the\nnext message. The patch is attached here.\n\n------\n\nregards,\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Wed, 16 Feb 2022 10:40:56 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> Hello, all.\n> \n> I thought about patch simplification, and tested version\n> without BufTable and dynahash api change at all.\n> \n> It performs suprisingly well. It is just a bit worse\n> than v1 since there is more contention around dynahash's\n> freelist, but most of improvement remains.\n> \n> I'll finish benchmarking and will attach graphs with\n> next message. Patch is attached here.\n\nThanks for the new patch. The patch as a whole looks fine to me. But\nsome comments needs to be revised.\n\n(existing comments)\n> * To change the association of a valid buffer, we'll need to have\n> * exclusive lock on both the old and new mapping partitions.\n...\n> * Somebody could have pinned or re-dirtied the buffer while we were\n> * doing the I/O and making the new hashtable entry. If so, we can't\n> * recycle this buffer; we must undo everything we've done and start\n> * over with a new victim buffer.\n\nWe no longer take a lock on the new partition and have no new hash\nentry (if others have not yet done) at this point.\n\n\n+\t * Clear out the buffer's tag and flags. We must do this to ensure that\n+\t * linear scans of the buffer array don't think the buffer is valid. We\n\nThe reason we can clear out the tag is it's safe to use the victim\nbuffer at this point. This comment needs to mention that reason.\n\n+\t *\n+\t * Since we are single pinner, there should no be PIN_COUNT_WAITER or\n+\t * IO_IN_PROGRESS (flags that were not cleared in previous code).\n+\t */\n+\tAssert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);\n\nIt seems like to be a test for potential bugs in other functions. As\nthe comment is saying, we are sure that no other processes are pinning\nthe buffer and the existing code doesn't seem to be care about that\ncondition. Is it really needed?\n\n\n+\t/*\n+\t * Try to make a hashtable entry for the buffer under its new tag. 
This\n+\t * could fail because while we were writing someone else allocated another\n\nThe most significant point of this patch is the reason that the victim\nbuffer is protected from stealing until it is set up for the new tag. I\nthink we need an explanation of that protection here.\n\n\n+\t * buffer for the same block we want to read in. Note that we have not yet\n+\t * removed the hashtable entry for the old tag.\n\nSince we have removed the hash table entry for the old tag at this\npoint, the comment is now wrong.\n\n\n+\t\t * the first place. First, give up the buffer we were planning to use\n+\t\t * and put it to free lists.\n..\n+\t\tStrategyFreeBuffer(buf);\n\nThis is one downside of this patch. But it seems to me that the odds\nare low that many buffers are freed in a short time by this logic. By\nthe way, it would be better if the sentence starting with \"First\" had a\nseparate comment section.\n\n\n(existing comment)\n|\t * Okay, it's finally safe to rename the buffer.\n\nWe don't \"rename\" the buffer here. And the safety is already\nestablished at the end of the oldPartitionLock section. So it would\nbe just something like \"Now allocate the victim buffer for the new\ntag\"?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Feb 2022 14:16:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Good day, Kyotaro Horiguchi and hackers.\n\nВ Чт, 17/02/2022 в 14:16 +0900, Kyotaro Horiguchi пишет:\n> At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > Hello, all.\n> > \n> > I thought about patch simplification, and tested version\n> > without BufTable and dynahash api change at all.\n> > \n> > It performs suprisingly well. It is just a bit worse\n> > than v1 since there is more contention around dynahash's\n> > freelist, but most of improvement remains.\n> > \n> > I'll finish benchmarking and will attach graphs with\n> > next message. Patch is attached here.\n> \n> Thanks for the new patch. The patch as a whole looks fine to me. But\n> some comments needs to be revised.\n\nThank you for review and remarks.\n\n> \n> (existing comments)\n> > * To change the association of a valid buffer, we'll need to have\n> > * exclusive lock on both the old and new mapping partitions.\n> ...\n> > * Somebody could have pinned or re-dirtied the buffer while we were\n> > * doing the I/O and making the new hashtable entry. If so, we can't\n> > * recycle this buffer; we must undo everything we've done and start\n> > * over with a new victim buffer.\n> \n> We no longer take a lock on the new partition and have no new hash\n> entry (if others have not yet done) at this point.\n\nfixed\n\n> + * Clear out the buffer's tag and flags. We must do this to ensure that\n> + * linear scans of the buffer array don't think the buffer is valid. We\n> \n> The reason we can clear out the tag is it's safe to use the victim\n> buffer at this point. This comment needs to mention that reason.\n\nTried to describe.\n\n> + *\n> + * Since we are single pinner, there should no be PIN_COUNT_WAITER or\n> + * IO_IN_PROGRESS (flags that were not cleared in previous code).\n> + */\n> + Assert((oldFlags & (BM_PIN_COUNT_WAITER | BM_IO_IN_PROGRESS)) == 0);\n> \n> It seems like to be a test for potential bugs in other functions. 
As\n> the comment is saying, we are sure that no other processes are pinning\n> the buffer and the existing code doesn't seem to be care about that\n> condition. Is it really needed?\n\nOk, I agree this check is excess.\nThese two flags were not cleared in the previous code, and I didn't get\nwhy. Probably, it is just a historical accident.\n\n> \n> + /*\n> + * Try to make a hashtable entry for the buffer under its new tag. This\n> + * could fail because while we were writing someone else allocated another\n> \n> The most significant point of this patch is the reason that the victim\n> buffer is protected from stealing until it is set up for new tag. I\n> think we need an explanation about the protection here.\n\nI don't get what you mean clearly :( . I would appreciate your\nsuggestion for this comment.\n\n> \n> \n> + * buffer for the same block we want to read in. Note that we have not yet\n> + * removed the hashtable entry for the old tag.\n> \n> Since we have removed the hash table entry for the old tag at this\n> point, the comment got wrong.\n\nThanks, changed.\n\n> + * the first place. First, give up the buffer we were planning to use\n> + * and put it to free lists.\n> ..\n> + StrategyFreeBuffer(buf);\n> \n> This is one downside of this patch. But it seems to me that the odds\n> are low that many buffers are freed in a short time by this logic. By\n> the way it would be better if the sentence starts with \"First\" has a\n> separate comment section.\n\nSplitted the comment.\n\n> (existing comment)\n> | * Okay, it's finally safe to rename the buffer.\n> \n> We don't \"rename\" the buffer here. And the safety is already\n> establishsed at the end of the oldPartitionLock section. 
So it would\n> be just something like \"Now allocate the victim buffer for the new\n> tag\"?\n\nChanged to \"Now it is safe to use victim buffer for new tag.\"\n\n\nThere is also tiny code change at block reuse finalization: instead\nof LockBufHdr+UnlockBufHdr I use single atomic_fetch_or protected\nwith WaitBufHdrUnlocked. I've tried to explain its safety. Please,\ncheck it.\n\n\nBenchmarks:\n- base point is 6ce16088bfed97f9.\n- notebook with i7-1165G7 and server with Xeon 8354H (1&2 sockets)\n- pgbench select only scale 100 (1.5GB on disk)\n- two shared_buffers values: 128MB and 1GB.\n- enabled hugepages\n- second best result from five runs\n\nNotebook:\n conns | master | patch_v3 | master 1G | patch_v3 1G \n--------+------------+------------+------------+------------\n 1 | 29508 | 29481 | 31774 | 32305 \n 2 | 57139 | 56694 | 63393 | 62968 \n 3 | 89759 | 90861 | 101873 | 102399 \n 5 | 133491 | 134573 | 145451 | 145009 \n 7 | 156739 | 155832 | 164633 | 164562 \n 17 | 216863 | 216379 | 251923 | 251017 \n 27 | 209532 | 209802 | 244202 | 243709 \n 53 | 212615 | 213552 | 248107 | 250317 \n 83 | 214446 | 218230 | 252414 | 252337 \n 107 | 211276 | 217109 | 252762 | 250328 \n 139 | 208070 | 214265 | 248350 | 249684 \n 163 | 206764 | 214594 | 247369 | 250323 \n 191 | 205478 | 213511 | 244597 | 246877 \n 211 | 200976 | 212976 | 244035 | 245032 \n 239 | 196588 | 211519 | 243897 | 245055 \n 271 | 195813 | 209631 | 237457 | 242771 \n 307 | 192724 | 208074 | 237658 | 241759 \n 353 | 187847 | 207189 | 234548 | 239008 \n 397 | 186942 | 205317 | 230465 | 238782\n\nI don't get why numbers changed from first letter ))\nBut still no slowdown, and measurable gain at 128MB shared\nbuffers.\n\nXeon 1 socket\n\n conns | master | patch_v3 | master 1G | patch_v3 1G \n--------+------------+------------+------------+------------\n 1 | 41975 | 41799 | 52898 | 52715 \n 2 | 77693 | 77531 | 97571 | 98547 \n 3 | 114713 | 114533 | 142709 | 143579 \n 5 | 188898 | 187241 | 239322 | 236682 \n 7 | 
261516 | 260249 | 329119 | 328562 \n 17 | 521821 | 518981 | 672390 | 662987 \n 27 | 555487 | 557019 | 674630 | 675703 \n 53 | 868213 | 897097 | 1190734 | 1202575 \n 83 | 868232 | 881705 | 1164997 | 1157764 \n 107 | 850477 | 855169 | 1140597 | 1128027 \n 139 | 816311 | 826756 | 1101471 | 1096197 \n 163 | 794788 | 805946 | 1078445 | 1071535 \n 191 | 765934 | 783209 | 1059497 | 1039936 \n 211 | 738656 | 786171 | 1083356 | 1049025 \n 239 | 713124 | 837040 | 1104629 | 1125969 \n 271 | 692138 | 847741 | 1094432 | 1131968 \n 307 | 682919 | 847939 | 1086306 | 1124649 \n 353 | 679449 | 844596 | 1071482 | 1125980 \n 397 | 676217 | 833009 | 1058937 | 1113496 \n\nHere is small slowdown at some connection numbers (17,\n107-191).It is reproducible. Probably it is due to one more\natomice write. Perhaps for some other scheduling issues (\nprocesses block less on buffer manager but compete more\non other resources). I could not reliably determine why,\nbecause change is too small, and `perf record` harms\nperformance more at this point.\n\nThis is the reason I've changed finalization to atomic_or\ninstead of Lock+Unlock pair. 
The changed helped a bit, but\ndidn't remove slowdown completely.\n\nXeon 2 socket\n\n conns | m0 | patch_v3 | m0 1G | patch_v3 1G \n--------+------------+------------+------------+------------\n 1 | 44317 | 43747 | 53920 | 53759 \n 2 | 81193 | 79976 | 99138 | 99213 \n 3 | 120755 | 114481 | 148102 | 146494 \n 5 | 190007 | 187384 | 232078 | 229627 \n 7 | 258602 | 256657 | 325545 | 322417 \n 17 | 551814 | 549041 | 692312 | 688204 \n 27 | 787353 | 787916 | 1023509 | 1020995 \n 53 | 973880 | 996019 | 1228274 | 1246128 \n 83 | 1108442 | 1258589 | 1596292 | 1662586 \n 107 | 1072188 | 1317229 | 1542401 | 1684603 \n 139 | 1000446 | 1272759 | 1490757 | 1672507 \n 163 | 967378 | 1224513 | 1461468 | 1660012 \n 191 | 926010 | 1178067 | 1435317 | 1645886 \n 211 | 909919 | 1148862 | 1417437 | 1629487 \n 239 | 895944 | 1108579 | 1393530 | 1616824 \n 271 | 880545 | 1078280 | 1374878 | 1608412 \n 307 | 865560 | 1056988 | 1355164 | 1601066 \n 353 | 857591 | 1033980 | 1330069 | 1586769 \n 397 | 840374 | 1016690 | 1312257 | 1573376 \n\nregards,\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Mon, 21 Feb 2022 11:06:49 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
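The atomic-OR finalization Yura describes (used instead of a LockBufHdr/UnlockBufHdr pair) can be sketched with C11 atomics. The bit constants and names below are invented stand-ins (the real code uses WaitBufHdrUnlocked and pg_atomic_fetch_or_u32), and this sketch glosses over the window between the lock-bit check and the OR that the actual patch must argue is harmless.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Invented stand-ins for buffer-state bits. */
#define BM_LOCKED    (UINT32_C(1) << 31)  /* buffer-header spin-bit */
#define BM_TAG_VALID (UINT32_C(1) << 30)

/* Publish flag bits with one atomic OR instead of a lock/unlock pair.
 * The holder of the header bit does raw writes to the state word on
 * unlock, so we must not OR into a locked header. */
static uint32_t publish_flags(_Atomic uint32_t *state, uint32_t flags)
{
    while (atomic_load(state) & BM_LOCKED)
        ;                                 /* spin until header is unlocked */
    return atomic_fetch_or(state, flags) | flags;
}
```

The attraction is one atomic read-modify-write on the hot path; the cost is exactly the subtle reasoning about concurrent header lockers that the thread's benchmarks try to quantify.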
{
"msg_contents": "On Mon, 21 Feb 2022 at 08:06, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> Good day, Kyotaro Horiguchi and hackers.\n>\n> В Чт, 17/02/2022 в 14:16 +0900, Kyotaro Horiguchi пишет:\n> > At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in\n> > > Hello, all.\n> > >\n> > > I thought about patch simplification, and tested version\n> > > without BufTable and dynahash api change at all.\n> > >\n> > > It performs suprisingly well. It is just a bit worse\n> > > than v1 since there is more contention around dynahash's\n> > > freelist, but most of improvement remains.\n> > >\n> > > I'll finish benchmarking and will attach graphs with\n> > > next message. Patch is attached here.\n> >\n> > Thanks for the new patch. The patch as a whole looks fine to me. But\n> > some comments needs to be revised.\n>\n> Thank you for review and remarks.\n\nv3 gets the buffer partition locking right, well done, great results!\n\nIn v3, the comment at line 1279 still implies we take both locks\ntogether, which is not now the case.\n\nDynahash actions are still possible. You now have the BufTableDelete\nbefore the BufTableInsert, which opens up the possibility I discussed\nhere:\nhttp://postgr.es/m/CANbhV-F0H-8oB_A+m=55hP0e0QRL=RdDDQuSXMTFt6JPrdX+pQ@mail.gmail.com\n(Apologies for raising a similar topic, I hadn't noticed this thread\nbefore; thanks to Horiguchi-san for pointing this out).\n\nv1 had a horrible API (sorry!) where you returned the entry and then\nexplicitly re-used it. 
I think we *should* make changes to dynahash,\nbut not with the API you proposed.\n\nProposal for new BufTable API\nBufTableReuse() - similar to BufTableDelete() but does NOT put entry\nback on freelist, we remember it in a private single item cache in\ndynahash\nBufTableAssign() - similar to BufTableInsert() but can only be\nexecuted directly after BufTableReuse(), fails with ERROR otherwise.\nTakes the entry from single item cache and re-assigns it to new tag\n\nIn dynahash we have two new modes that match the above\nHASH_REUSE - used by BufTableReuse(), similar to HASH_REMOVE, but\nplaces entry on the single item cache, avoiding freelist\nHASH_ASSIGN - used by BufTableAssign(), similar to HASH_ENTER, but\nuses the entry from the single item cache, rather than asking freelist\nThis last call can fail if someone else already inserted the tag, in\nwhich case it adds the single item cache entry back onto freelist\n\nNotice that single item cache is not in shared memory, so on abort we\nshould give it back, so we probably need an extra API call for that\nalso to avoid leaking an entry.\n\nDoing it this way allows us to\n* avoid touching freelists altogether in the common path - we know we\nare about to reassign the entry, so we do remember it - no contention\nfrom other backends, no borrowing etc..\n* avoid sharing the private details outside of the dynahash module\n* allows us to use the same technique elsewhere that we have\npartitioned hash tables\n\nThis approach is cleaner than v1, but should also perform better\nbecause there will be a 1:1 relationship between a buffer and its\ndynahash entry, most of the time.\n\nWith these changes, I think we will be able to *reduce* the number of\nfreelists for partitioned dynahash from 32 to maybe 8, as originally\nspeculated by Robert in 2016:\n https://www.postgresql.org/message-id/CA%2BTgmoZkg-04rcNRURt%3DjAG0Cs5oPyB-qKxH4wqX09e-oXy-nw%40mail.gmail.com\nsince the freelists will be much less contended with the above 
approach\n\nIt would be useful to see performance with a higher number of connections, >400.\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 25 Feb 2022 04:35:49 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
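Simon's proposed stash mechanics can be sketched in miniature. The following is an illustrative toy model, not dynahash code: the entry layout, function names, and the bare singly linked freelist are all invented here, but it shows how a backend-private single-item stash lets a REUSE/ASSIGN pair bypass the shared freelist entirely in the common path:

```c
/* Toy sketch of the single-item "reuse stash" idea: hash_reuse() detaches an
 * entry without touching the freelist, hash_assign() re-tags that same entry.
 * All names are invented for illustration; real dynahash entries live in
 * shared memory and carry hash keys, not plain ints. */
#include <assert.h>
#include <stddef.h>

typedef struct Entry { int tag; struct Entry *next; } Entry;

static Entry  pool[4];
static Entry *freelist = NULL;
static Entry *stash = NULL;     /* backend-private single-item cache */

void init_pool(void)
{
    freelist = NULL;
    for (int i = 0; i < 4; i++) { pool[i].next = freelist; freelist = &pool[i]; }
}

/* HASH_REUSE analogue: park the detached entry in the stash. */
void hash_reuse(Entry *e)
{
    assert(stash == NULL);      /* no unfinished REUSE+ASSIGN pair */
    stash = e;
}

/* HASH_ASSIGN analogue: prefer the stashed entry; fall back to the freelist. */
Entry *hash_assign(int newtag)
{
    Entry *e;
    if (stash != NULL) { e = stash; stash = NULL; }
    else { e = freelist; assert(e != NULL); freelist = e->next; }
    e->tag = newtag;
    return e;
}
```

The key property is visible in the stash-hit path: the evicted entry and the newly assigned entry are the same object, so the freelist lock is never touched.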
{
"msg_contents": "Hi,\n\nOn 2022-02-21 11:06:49 +0300, Yura Sokolov wrote:\n> From 04b07d0627ec65ba3327dc8338d59dbd15c405d8 Mon Sep 17 00:00:00 2001\n> From: Yura Sokolov <y.sokolov@postgrespro.ru>\n> Date: Mon, 21 Feb 2022 08:49:03 +0300\n> Subject: [PATCH v3] [PGPRO-5616] bufmgr: do not acquire two partition locks.\n> \n> Acquiring two partition locks leads to complex dependency chain that hurts\n> at high concurrency level.\n> \n> There is no need to hold both lock simultaneously. Buffer is pinned so\n> other processes could not select it for eviction. If tag is cleared and\n> buffer removed from old partition other processes will not find it.\n> Therefore it is safe to release old partition lock before acquiring\n> new partition lock.\n\nYes, the current design is pretty nonsensical. It leads to really absurd stuff\nlike holding the relation extension lock while we write out old buffer\ncontents etc.\n\n\n\n> +\t * We have pinned buffer and we are single pinner at the moment so there\n> +\t * is no other pinners.\n\nSeems redundant.\n\n\n> We hold buffer header lock and exclusive partition\n> +\t * lock if tag is valid. Given these statements it is safe to clear tag\n> +\t * since no other process can inspect it to the moment.\n> +\t */\n\nCould we share code with InvalidateBuffer here? It's not quite the same code,\nbut nearly the same.\n\n\n> +\t * The usage_count starts out at 1 so that the buffer can survive one\n> +\t * clock-sweep pass.\n> +\t *\n> +\t * We use direct atomic OR instead of Lock+Unlock since no other backend\n> +\t * could be interested in the buffer. But StrategyGetBuffer,\n> +\t * Flush*Buffers, Drop*Buffers are scanning all buffers and locks them to\n> +\t * compare tag, and UnlockBufHdr does raw write to state. So we have to\n> +\t * spin if we found buffer locked.\n\nSo basically the first half of of the paragraph is wrong, because no, we\ncan't?\n\n\n> +\t * Note that we write tag unlocked. 
It is also safe since there is always\n> +\t * check for BM_VALID when tag is compared.\n\n\n\n> \t */\n> \tbuf->tag = newTag;\n> -\tbuf_state &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED |\n> -\t\t\t\t BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT |\n> -\t\t\t\t BUF_USAGECOUNT_MASK);\n> \tif (relpersistence == RELPERSISTENCE_PERMANENT || forkNum == INIT_FORKNUM)\n> -\t\tbuf_state |= BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> +\t\tnew_bits = BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> \telse\n> -\t\tbuf_state |= BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> -\n> -\tUnlockBufHdr(buf, buf_state);\n> +\t\tnew_bits = BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> \n> -\tif (oldPartitionLock != NULL)\n> +\tbuf_state = pg_atomic_fetch_or_u32(&buf->state, new_bits);\n> +\twhile (unlikely(buf_state & BM_LOCKED))\n\nI don't think it's safe to atomic in arbitrary bits. If somebody else has\nlocked the buffer header in this moment, it'll lead to completely bogus\nresults, because unlocking overwrites concurrently written contents (which\nthere shouldn't be any, but here there are)...\n\nAnd or'ing contents in also doesn't make sense because we it doesn't work to\nactually unset any contents?\n\nWhy don't you just use LockBufHdr/UnlockBufHdr?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Feb 2022 00:04:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Fri, 25 Feb 2022 00:04:55 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Why don't you just use LockBufHdr/UnlockBufHdr?\n\nFWIW, v2 looked fine to me in regards to this point.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 25 Feb 2022 18:14:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Hello, Simon.\n\nВ Пт, 25/02/2022 в 04:35 +0000, Simon Riggs пишет:\n> On Mon, 21 Feb 2022 at 08:06, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > Good day, Kyotaro Horiguchi and hackers.\n> > \n> > В Чт, 17/02/2022 в 14:16 +0900, Kyotaro Horiguchi пишет:\n> > > At Wed, 16 Feb 2022 10:40:56 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in\n> > > > Hello, all.\n> > > > \n> > > > I thought about patch simplification, and tested version\n> > > > without BufTable and dynahash api change at all.\n> > > > \n> > > > It performs suprisingly well. It is just a bit worse\n> > > > than v1 since there is more contention around dynahash's\n> > > > freelist, but most of improvement remains.\n> > > > \n> > > > I'll finish benchmarking and will attach graphs with\n> > > > next message. Patch is attached here.\n> > > \n> > > Thanks for the new patch. The patch as a whole looks fine to me. But\n> > > some comments needs to be revised.\n> > \n> > Thank you for review and remarks.\n> \n> v3 gets the buffer partition locking right, well done, great results!\n> \n> In v3, the comment at line 1279 still implies we take both locks\n> together, which is not now the case.\n> \n> Dynahash actions are still possible. You now have the BufTableDelete\n> before the BufTableInsert, which opens up the possibility I discussed\n> here:\n> http://postgr.es/m/CANbhV-F0H-8oB_A+m=55hP0e0QRL=RdDDQuSXMTFt6JPrdX+pQ@mail.gmail.com\n> (Apologies for raising a similar topic, I hadn't noticed this thread\n> before; thanks to Horiguchi-san for pointing this out).\n> \n> v1 had a horrible API (sorry!) where you returned the entry and then\n> explicitly re-used it. 
I think we *should* make changes to dynahash,\n> but not with the API you proposed.\n> \n> Proposal for new BufTable API\n> BufTableReuse() - similar to BufTableDelete() but does NOT put entry\n> back on freelist, we remember it in a private single item cache in\n> dynahash\n> BufTableAssign() - similar to BufTableInsert() but can only be\n> executed directly after BufTableReuse(), fails with ERROR otherwise.\n> Takes the entry from single item cache and re-assigns it to new tag\n> \n> In dynahash we have two new modes that match the above\n> HASH_REUSE - used by BufTableReuse(), similar to HASH_REMOVE, but\n> places entry on the single item cache, avoiding freelist\n> HASH_ASSIGN - used by BufTableAssign(), similar to HASH_ENTER, but\n> uses the entry from the single item cache, rather than asking freelist\n> This last call can fail if someone else already inserted the tag, in\n> which case it adds the single item cache entry back onto freelist\n> \n> Notice that single item cache is not in shared memory, so on abort we\n> should give it back, so we probably need an extra API call for that\n> also to avoid leaking an entry.\n\nWhy there is need for this? Which way backend could be forced to abort\nbetween BufTableReuse and BufTableAssign in this code path? I don't\nsee any CHECK_FOR_INTERRUPTS on the way, but may be I'm missing\nsomething.\n\n> \n> Doing it this way allows us to\n> * avoid touching freelists altogether in the common path - we know we\n> are about to reassign the entry, so we do remember it - no contention\n> from other backends, no borrowing etc..\n> * avoid sharing the private details outside of the dynahash module\n> * allows us to use the same technique elsewhere that we have\n> partitioned hash tables\n> \n> This approach is cleaner than v1, but should also perform better\n> because there will be a 1:1 relationship between a buffer and its\n> dynahash entry, most of the time.\n\nThank you for suggestion. 
Yes, it is much clearer than my initial proposal.\n\nShould I incorporate it to v4 patch? Perhaps, it could be a separate\ncommit in new version.\n\n\n> \n> With these changes, I think we will be able to *reduce* the number of\n> freelists for partitioned dynahash from 32 to maybe 8, as originally\n> speculated by Robert in 2016:\n> https://www.postgresql.org/message-id/CA%2BTgmoZkg-04rcNRURt%3DjAG0Cs5oPyB-qKxH4wqX09e-oXy-nw%40mail.gmail.com\n> since the freelists will be much less contended with the above approach\n> \n> It would be useful to see performance with a higher number of connections, >400.\n> \n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n\n------\n\nregards,\nYura Sokolov\n\n\n\n",
"msg_date": "Fri, 25 Feb 2022 12:24:40 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Fri, 25 Feb 2022 at 09:24, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n\n> > This approach is cleaner than v1, but should also perform better\n> > because there will be a 1:1 relationship between a buffer and its\n> > dynahash entry, most of the time.\n>\n> Thank you for suggestion. Yes, it is much clearer than my initial proposal.\n>\n> Should I incorporate it to v4 patch? Perhaps, it could be a separate\n> commit in new version.\n\nI don't insist that you do that, but since the API changes are a few\nhours work ISTM better to include in one patch for combined perf\ntesting. It would be better to put all changes in this area into PG15\nthan to split it across multiple releases.\n\n> Why there is need for this? Which way backend could be forced to abort\n> between BufTableReuse and BufTableAssign in this code path? I don't\n> see any CHECK_FOR_INTERRUPTS on the way, but may be I'm missing\n> something.\n\nSounds reasonable.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 25 Feb 2022 09:38:36 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Hello, Andres\n\nВ Пт, 25/02/2022 в 00:04 -0800, Andres Freund пишет:\n> Hi,\n> \n> On 2022-02-21 11:06:49 +0300, Yura Sokolov wrote:\n> > From 04b07d0627ec65ba3327dc8338d59dbd15c405d8 Mon Sep 17 00:00:00 2001\n> > From: Yura Sokolov <y.sokolov@postgrespro.ru>\n> > Date: Mon, 21 Feb 2022 08:49:03 +0300\n> > Subject: [PATCH v3] [PGPRO-5616] bufmgr: do not acquire two partition locks.\n> > \n> > Acquiring two partition locks leads to complex dependency chain that hurts\n> > at high concurrency level.\n> > \n> > There is no need to hold both lock simultaneously. Buffer is pinned so\n> > other processes could not select it for eviction. If tag is cleared and\n> > buffer removed from old partition other processes will not find it.\n> > Therefore it is safe to release old partition lock before acquiring\n> > new partition lock.\n> \n> Yes, the current design is pretty nonsensical. It leads to really absurd stuff\n> like holding the relation extension lock while we write out old buffer\n> contents etc.\n> \n> \n> \n> > +\t * We have pinned buffer and we are single pinner at the moment so there\n> > +\t * is no other pinners.\n> \n> Seems redundant.\n> \n> \n> > We hold buffer header lock and exclusive partition\n> > +\t * lock if tag is valid. Given these statements it is safe to clear tag\n> > +\t * since no other process can inspect it to the moment.\n> > +\t */\n> \n> Could we share code with InvalidateBuffer here? It's not quite the same code,\n> but nearly the same.\n> \n> \n> > +\t * The usage_count starts out at 1 so that the buffer can survive one\n> > +\t * clock-sweep pass.\n> > +\t *\n> > +\t * We use direct atomic OR instead of Lock+Unlock since no other backend\n> > +\t * could be interested in the buffer. But StrategyGetBuffer,\n> > +\t * Flush*Buffers, Drop*Buffers are scanning all buffers and locks them to\n> > +\t * compare tag, and UnlockBufHdr does raw write to state. 
So we have to\n> > +\t * spin if we found buffer locked.\n> \n> So basically the first half of of the paragraph is wrong, because no, we\n> can't?\n\nLogically, there are no backends that could be interesting in the buffer.\nPhysically they do LockBufHdr/UnlockBufHdr just to check they are not interesting.\n\n> > +\t * Note that we write tag unlocked. It is also safe since there is always\n> > +\t * check for BM_VALID when tag is compared.\n> \n> \n> > \t */\n> > \tbuf->tag = newTag;\n> > -\tbuf_state &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED |\n> > -\t\t\t\t BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT |\n> > -\t\t\t\t BUF_USAGECOUNT_MASK);\n> > \tif (relpersistence == RELPERSISTENCE_PERMANENT || forkNum == INIT_FORKNUM)\n> > -\t\tbuf_state |= BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > +\t\tnew_bits = BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > \telse\n> > -\t\tbuf_state |= BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > -\n> > -\tUnlockBufHdr(buf, buf_state);\n> > +\t\tnew_bits = BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > \n> > -\tif (oldPartitionLock != NULL)\n> > +\tbuf_state = pg_atomic_fetch_or_u32(&buf->state, new_bits);\n> > +\twhile (unlikely(buf_state & BM_LOCKED))\n> \n> I don't think it's safe to atomic in arbitrary bits. If somebody else has\n> locked the buffer header in this moment, it'll lead to completely bogus\n> results, because unlocking overwrites concurrently written contents (which\n> there shouldn't be any, but here there are)...\n\nThat is why there is safety loop in the case buf->state were locked just\nafter first optimistic atomic_fetch_or. 99.999% times this loop will not\nhave a job. 
But in case another backend did lock buf->state, the loop waits\nuntil it releases the lock and retries atomic_fetch_or.\n\n> And or'ing contents in also doesn't make sense because we it doesn't work to\n> actually unset any contents?\n\nSorry, I didn't understand that sentence :((\n\n> Why don't you just use LockBufHdr/UnlockBufHdr?\n\nThis pair makes two atomic writes to memory. Two writes are heavier than\none write in this version (if the optimistic case succeeds).\n\nBut I thought about using Lock+UnlockBufHdr instead of the safety loop:\n\n    buf_state = pg_atomic_fetch_or_u32(&buf->state, new_bits);\n    if (unlikely(buf_state & BM_LOCKED))\n    {\n        buf_state = LockBufHdr(&buf->state);\n        UnlockBufHdr(&buf->state, buf_state | new_bits);\n    }\n\nI agree the code is cleaner this way. Will do in next version.\n\n-----\n\nregards,\nYura Sokolov\n\n\n\n",
"msg_date": "Fri, 25 Feb 2022 12:51:22 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
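The fast path Yura sketches (one atomic OR when uncontended, falling back to the header lock otherwise) can be modeled with C11 atomics. This is a standalone sketch under stated assumptions: the BM_* constants and the lock/unlock routines are simplified stand-ins for bufmgr's, not PostgreSQL code:

```c
/* Standalone model of the "optimistic atomic OR, fall back to the header
 * lock" pattern discussed above.  Bit values mimic bufmgr but are redefined
 * here; this is an illustrative sketch, not the real implementation. */
#include <stdatomic.h>
#include <stdint.h>

#define BM_LOCKED          (1u << 31)
#define BM_TAG_VALID       (1u << 30)
#define BUF_USAGECOUNT_ONE 1u

static uint32_t lock_hdr(atomic_uint *state)
{
    uint32_t s;
    /* spin until we are the one who set BM_LOCKED */
    while ((s = atomic_fetch_or(state, BM_LOCKED)) & BM_LOCKED)
        ;                       /* real code would use a spin-delay here */
    return s;                   /* state as it was before we locked it */
}

static void unlock_hdr(atomic_uint *state, uint32_t s)
{
    /* release store models "write barrier + unlocked write" */
    atomic_store_explicit(state, s & ~BM_LOCKED, memory_order_release);
}

/* OR in new_bits: one atomic op when uncontended, lock+unlock otherwise. */
void set_bits(atomic_uint *state, uint32_t new_bits)
{
    uint32_t s = atomic_fetch_or(state, new_bits);
    if (s & BM_LOCKED)
    {
        /* someone held the header lock: redo the update under the lock so
         * that their unlocking write cannot clobber our bits */
        s = lock_hdr(state);
        unlock_hdr(state, s | new_bits);
    }
}
```

The uncontended path costs a single locked read-modify-write; the contended path degenerates to the ordinary lock/unlock pair, which is exactly the tradeoff being weighed in the thread.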
{
"msg_contents": "Hi,\n\nOn 2022-02-25 12:51:22 +0300, Yura Sokolov wrote:\n> > > +\t * The usage_count starts out at 1 so that the buffer can survive one\n> > > +\t * clock-sweep pass.\n> > > +\t *\n> > > +\t * We use direct atomic OR instead of Lock+Unlock since no other backend\n> > > +\t * could be interested in the buffer. But StrategyGetBuffer,\n> > > +\t * Flush*Buffers, Drop*Buffers are scanning all buffers and locks them to\n> > > +\t * compare tag, and UnlockBufHdr does raw write to state. So we have to\n> > > +\t * spin if we found buffer locked.\n> > \n> > So basically the first half of of the paragraph is wrong, because no, we\n> > can't?\n> \n> Logically, there are no backends that could be interesting in the buffer.\n> Physically they do LockBufHdr/UnlockBufHdr just to check they are not interesting.\n\nYea, but that's still being interested in the buffer...\n\n\n> > > +\t * Note that we write tag unlocked. It is also safe since there is always\n> > > +\t * check for BM_VALID when tag is compared.\n> > \n> > \n> > > \t */\n> > > \tbuf->tag = newTag;\n> > > -\tbuf_state &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED |\n> > > -\t\t\t\t BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT |\n> > > -\t\t\t\t BUF_USAGECOUNT_MASK);\n> > > \tif (relpersistence == RELPERSISTENCE_PERMANENT || forkNum == INIT_FORKNUM)\n> > > -\t\tbuf_state |= BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > > +\t\tnew_bits = BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > > \telse\n> > > -\t\tbuf_state |= BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > > -\n> > > -\tUnlockBufHdr(buf, buf_state);\n> > > +\t\tnew_bits = BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > > \n> > > -\tif (oldPartitionLock != NULL)\n> > > +\tbuf_state = pg_atomic_fetch_or_u32(&buf->state, new_bits);\n> > > +\twhile (unlikely(buf_state & BM_LOCKED))\n> > \n> > I don't think it's safe to atomic in arbitrary bits. 
If somebody else has\n> > locked the buffer header in this moment, it'll lead to completely bogus\n> > results, because unlocking overwrites concurrently written contents (which\n> > there shouldn't be any, but here there are)...\n> \n> That is why there is safety loop in the case buf->state were locked just\n> after first optimistic atomic_fetch_or. 99.999% times this loop will not\n> have a job. But in case other backend did lock buf->state, loop waits\n> until it releases lock and retry atomic_fetch_or.\n\n> > And or'ing contents in also doesn't make sense because we it doesn't work to\n> > actually unset any contents?\n> \n> Sorry, I didn't understand sentence :((\n\n\nYou're OR'ing multiple bits into buf->state. LockBufHdr() only ORs in\nBM_LOCKED. ORing BM_LOCKED is fine:\nEither the buffer is not already locked, in which case it just sets the\nBM_LOCKED bit, acquiring the lock. Or it doesn't change anything, because\nBM_LOCKED already was set.\n\nBut OR'ing in multiple bits is *not* fine, because it'll actually change the\ncontents of ->state while the buffer header is locked.\n\n\n> > Why don't you just use LockBufHdr/UnlockBufHdr?\n> \n> This pair makes two atomic writes to memory. Two writes are heavier than\n> one write in this version (if optimistic case succeed).\n\nUnlockBufHdr doesn't use a locked atomic op. It uses a write barrier and an\nunlocked write.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 25 Feb 2022 09:01:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
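Andres's lost-update scenario can be replayed deterministically in a single thread: the unlocking backend writes back a snapshot taken before the concurrent OR, silently erasing the OR'd bit. The constants and the replay harness are illustrative, not PostgreSQL code:

```c
/* Single-threaded replay of the lost update described above: backend A holds
 * the header lock, backend B ORs a flag into the locked header, and A's
 * unlocking write (a plain store of its stale snapshot) erases B's bit. */
#include <stdatomic.h>
#include <stdint.h>

#define BM_LOCKED (1u << 31)
#define BM_DIRTY  (1u << 29)

uint32_t replay_lost_update(void)
{
    atomic_uint state = 0;

    /* A: LockBufHdr -- remembers the pre-lock state */
    uint32_t a_snapshot = atomic_fetch_or(&state, BM_LOCKED);

    /* B: ORs BM_DIRTY into a locked header (the pattern under discussion) */
    atomic_fetch_or(&state, BM_DIRTY);

    /* A: UnlockBufHdr -- plain store of its (stale) snapshot, lock cleared.
     * This overwrites B's concurrently written BM_DIRTY. */
    atomic_store(&state, a_snapshot & ~BM_LOCKED);

    return atomic_load(&state);
}
```

This is why OR'ing BM_LOCKED alone is safe (it cannot be clobbered without also releasing the lock) while OR'ing arbitrary bits into a possibly locked header is not.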
{
"msg_contents": "В Пт, 25/02/2022 в 09:01 -0800, Andres Freund пишет:\n> Hi,\n> \n> On 2022-02-25 12:51:22 +0300, Yura Sokolov wrote:\n> > > > +\t * The usage_count starts out at 1 so that the buffer can survive one\n> > > > +\t * clock-sweep pass.\n> > > > +\t *\n> > > > +\t * We use direct atomic OR instead of Lock+Unlock since no other backend\n> > > > +\t * could be interested in the buffer. But StrategyGetBuffer,\n> > > > +\t * Flush*Buffers, Drop*Buffers are scanning all buffers and locks them to\n> > > > +\t * compare tag, and UnlockBufHdr does raw write to state. So we have to\n> > > > +\t * spin if we found buffer locked.\n> > > \n> > > So basically the first half of of the paragraph is wrong, because no, we\n> > > can't?\n> > \n> > Logically, there are no backends that could be interesting in the buffer.\n> > Physically they do LockBufHdr/UnlockBufHdr just to check they are not interesting.\n> \n> Yea, but that's still being interested in the buffer...\n> \n> \n> > > > +\t * Note that we write tag unlocked. 
It is also safe since there is always\n> > > > +\t * check for BM_VALID when tag is compared.\n> > > > \t */\n> > > > \tbuf->tag = newTag;\n> > > > -\tbuf_state &= ~(BM_VALID | BM_DIRTY | BM_JUST_DIRTIED |\n> > > > -\t\t\t\t BM_CHECKPOINT_NEEDED | BM_IO_ERROR | BM_PERMANENT |\n> > > > -\t\t\t\t BUF_USAGECOUNT_MASK);\n> > > > \tif (relpersistence == RELPERSISTENCE_PERMANENT || forkNum == INIT_FORKNUM)\n> > > > -\t\tbuf_state |= BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > > > +\t\tnew_bits = BM_TAG_VALID | BM_PERMANENT | BUF_USAGECOUNT_ONE;\n> > > > \telse\n> > > > -\t\tbuf_state |= BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > > > -\n> > > > -\tUnlockBufHdr(buf, buf_state);\n> > > > +\t\tnew_bits = BM_TAG_VALID | BUF_USAGECOUNT_ONE;\n> > > > \n> > > > -\tif (oldPartitionLock != NULL)\n> > > > +\tbuf_state = pg_atomic_fetch_or_u32(&buf->state, new_bits);\n> > > > +\twhile (unlikely(buf_state & BM_LOCKED))\n> > > \n> > > I don't think it's safe to atomic in arbitrary bits. If somebody else has\n> > > locked the buffer header in this moment, it'll lead to completely bogus\n> > > results, because unlocking overwrites concurrently written contents (which\n> > > there shouldn't be any, but here there are)...\n> > \n> > That is why there is safety loop in the case buf->state were locked just\n> > after first optimistic atomic_fetch_or. 99.999% times this loop will not\n> > have a job. But in case other backend did lock buf->state, loop waits\n> > until it releases lock and retry atomic_fetch_or.\n> > > And or'ing contents in also doesn't make sense because we it doesn't work to\n> > > actually unset any contents?\n> > \n> > Sorry, I didn't understand sentence :((\n> \n> You're OR'ing multiple bits into buf->state. LockBufHdr() only ORs in\n> BM_LOCKED. ORing BM_LOCKED is fine:\n> Either the buffer is not already locked, in which case it just sets the\n> BM_LOCKED bit, acquiring the lock. 
Or it doesn't change anything, because\n> BM_LOCKED already was set.\n> \n> But OR'ing in multiple bits is *not* fine, because it'll actually change the\n> contents of ->state while the buffer header is locked.\n\nFirst, both states are valid: before atomic_or and after.\nSecond, there are no checks of buffer->state while the buffer header is locked.\nAll LockBufHdr users use the result of LockBufHdr. (I just checked that.)\n\n> > > Why don't you just use LockBufHdr/UnlockBufHdr?\n> > \n> > This pair makes two atomic writes to memory. Two writes are heavier than\n> > one write in this version (if optimistic case succeed).\n> \n> UnlockBufHdr doesn't use a locked atomic op. It uses a write barrier and an\n> unlocked write.\n\nA write barrier is not free on any platform.\n\nWell, while I don't see a problem with modifying buffer->state, there is a problem\nwith modifying buffer->tag: I missed that Drop*Buffers doesn't check the BM_TAG_VALID\nflag. Therefore I would either have to add this check to those places, or return to\nthe LockBufHdr+UnlockBufHdr pair.\n\nFor patch simplicity I'll return to the Lock+UnlockBufHdr pair. But it has a measurable\nimpact at low connection numbers on many-socket machines.\n\n> \n> Greetings,\n> \n> Andres Freund\n\n\n\n",
"msg_date": "Mon, 28 Feb 2022 09:01:49 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пт, 25/02/2022 в 09:38 +0000, Simon Riggs пишет:\n> On Fri, 25 Feb 2022 at 09:24, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> \n> > > This approach is cleaner than v1, but should also perform better\n> > > because there will be a 1:1 relationship between a buffer and its\n> > > dynahash entry, most of the time.\n> > \n> > Thank you for suggestion. Yes, it is much clearer than my initial proposal.\n> > \n> > Should I incorporate it to v4 patch? Perhaps, it could be a separate\n> > commit in new version.\n> \n> I don't insist that you do that, but since the API changes are a few\n> hours work ISTM better to include in one patch for combined perf\n> testing. It would be better to put all changes in this area into PG15\n> than to split it across multiple releases.\n> \n> > Why there is need for this? Which way backend could be forced to abort\n> > between BufTableReuse and BufTableAssign in this code path? I don't\n> > see any CHECK_FOR_INTERRUPTS on the way, but may be I'm missing\n> > something.\n> \n> Sounds reasonable.\n\nOk, here is v4.\nIt is with two commits: one for BufferAlloc locking change and other\nfor dynahash's freelist avoiding.\n\nBuffer locking patch is same to v2 with some comment changes. Ie it uses\nLock+UnlockBufHdr \n\nFor dynahash HASH_REUSE and HASH_ASSIGN as suggested.\nHASH_REUSE stores deleted element into per-process static variable.\nHASH_ASSIGN uses this element instead of freelist. If there's no\nsuch stored element, it falls back to HASH_ENTER.\n\nI've implemented Robert Haas's suggestion to count element in freelists\ninstead of nentries:\n\n> One idea is to jigger things so that we maintain a count of the total\n> number of entries that doesn't change except when we allocate, and\n> then for each freelist partition we maintain the number of entries in\n> that freelist partition. 
So then the size of the hash table, instead\n> of being sum(nentries) is totalsize - sum(nfree).\n\nhttps://postgr.es/m/CA%2BTgmoZkg-04rcNRURt%3DjAG0Cs5oPyB-qKxH4wqX09e-oXy-nw%40mail.gmail.com\n\nIt helps to avoid taking the freelist lock just to update counters.\nI did it by replacing \"nentries\" with \"nfree\" and adding\n\"nalloced\" to each freelist. It also makes \"hash_update_hash_key\" valid\nfor keys that migrate between partitions.\n\nI believe there is no need for \"nalloced\" in each freelist; instead a\nsingle such field should be in HASHHDR. Moreover, it seems to me the\n`element_alloc` function need not acquire the freelist partition lock\nsince it is called only during initialization of a shared hash table.\nAm I right?\n\nI didn't go down this path in v4 for simplicity, but can put it into v5\nif approved.\n\nTo be honest, the \"reuse\" patch gives little improvement, but it is still\nmeasurable at some connection counts.\n\nI tried to reduce the freelist partitions to 8, but it has mixed impact.\nMost of the time performance is the same, but sometimes a bit lower. I\ndidn't investigate the reasons. Perhaps they are not related to the buffer\nmanager.\n\nI didn't introduce the new functions BufTableReuse and BufTableAssign\nsince there is a single call to BufTableInsert and two calls to\nBufTableDelete. So I reused these functions, just adding a \"reuse\" flag\nto BufTableDelete. 
\n\nTests simple_select for Xeon 8354H, 128MB and 1G shared buffers\nfor scale 100.\n\n1 socket:\n conns | master | patch_v4 | master 1G | patch_v4 1G \n--------+------------+------------+------------+------------\n 1 | 41975 | 41540 | 52898 | 52213 \n 2 | 77693 | 77908 | 97571 | 98371 \n 3 | 114713 | 115522 | 142709 | 145226 \n 5 | 188898 | 187617 | 239322 | 237269 \n 7 | 261516 | 260006 | 329119 | 329449 \n 17 | 521821 | 519473 | 672390 | 662106 \n 27 | 555487 | 555697 | 674630 | 672736 \n 53 | 868213 | 896539 | 1190734 | 1202505 \n 83 | 868232 | 866029 | 1164997 | 1158719 \n 107 | 850477 | 845685 | 1140597 | 1134502 \n 139 | 816311 | 816808 | 1101471 | 1091258 \n 163 | 794788 | 796517 | 1078445 | 1071568 \n 191 | 765934 | 776185 | 1059497 | 1041944 \n 211 | 738656 | 777365 | 1083356 | 1046422 \n 239 | 713124 | 841337 | 1104629 | 1116668 \n 271 | 692138 | 847803 | 1094432 | 1128971 \n 307 | 682919 | 849239 | 1086306 | 1127051 \n 353 | 679449 | 842125 | 1071482 | 1117471 \n 397 | 676217 | 844015 | 1058937 | 1118628 \n\n2 sockets:\n conns | master | patch_v4 | master 1G | patch_v4 1G \n--------+------------+------------+------------+------------\n 1 | 44317 | 44034 | 53920 | 53583 \n 2 | 81193 | 78621 | 99138 | 97968 \n 3 | 120755 | 115648 | 148102 | 147423 \n 5 | 190007 | 188943 | 232078 | 231029 \n 7 | 258602 | 260649 | 325545 | 318567 \n 17 | 551814 | 552914 | 692312 | 697518 \n 27 | 787353 | 786573 | 1023509 | 1022891 \n 53 | 973880 | 1008534 | 1228274 | 1278194 \n 83 | 1108442 | 1269777 | 1596292 | 1648156 \n 107 | 1072188 | 1339634 | 1542401 | 1664476 \n 139 | 1000446 | 1316372 | 1490757 | 1676127 \n 163 | 967378 | 1257445 | 1461468 | 1655574 \n 191 | 926010 | 1189591 | 1435317 | 1639313 \n 211 | 909919 | 1149905 | 1417437 | 1632764 \n 239 | 895944 | 1115681 | 1393530 | 1616329 \n 271 | 880545 | 1090208 | 1374878 | 1609544 \n 307 | 865560 | 1066798 | 1355164 | 1593769 \n 353 | 857591 | 1046426 | 1330069 | 1584006 \n 397 | 840374 | 1024711 | 1312257 | 1564872 
\n\n--------\n\nregards\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Tue, 01 Mar 2022 10:24:22 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
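The counting scheme adopted here (track total allocations plus per-freelist free counts, and derive the entry count by subtraction) can be sketched as follows. Field names mimic the patch, but the struct is a simplified stand-in without the spinlocks of the real HASHHDR:

```c
/* Sketch of the counting scheme borrowed from Robert Haas's suggestion:
 * keep nalloced and per-freelist nfree, and compute the number of used
 * entries as nalloced - sum(nfree) instead of maintaining shared nentries. */
#include <assert.h>

#define NUM_FREELISTS 32

typedef struct Hdr
{
    long nalloced;                 /* entries ever carved from the pool */
    long nfree[NUM_FREELISTS];     /* free entries per freelist partition */
} Hdr;

void alloc_entries(Hdr *h, int idx, long n)
{
    h->nalloced += n;              /* newly allocated entries start out free */
    h->nfree[idx] += n;
}

void get_entry(Hdr *h, int idx) { h->nfree[idx]--; }
void put_entry(Hdr *h, int idx) { h->nfree[idx]++; }

long num_entries(const Hdr *h)
{
    long nfree = 0;
    for (int i = 0; i < NUM_FREELISTS; i++)
        nfree += h->nfree[i];
    return h->nalloced - nfree;
}
```

Because an entry freed into a different partition's freelist still shows up in the sum, the derived count stays correct even when keys (and their entries) migrate between partitions, which is what makes hash_update_hash_key safe across partitions under this scheme.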
{
"msg_contents": "On Tue, 01/03/2022 at 10:24 +0300, Yura Sokolov wrote:\n> Ok, here is v4.\n\nAnd here is v5.\n\nFirst, there was a compilation error in an Assert in dynahash.c.\nExcuse me for not checking before sending the previous version.\n\nSecond, I added a third commit that reduces the HASHHDR allocation\nsize for non-partitioned dynahash:\n- moved freeList to the last position\n- alloc and memset offsetof(HASHHDR, freeList[1]) for\n  non-partitioned hash tables.\nI didn't benchmark it, but I will be surprised if it\nmatters much in the performance sense.\n\nThird, I put all three commits into a single file so as not to\nconfuse the commitfest application.\n\n \n--------\n\nregards\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Thu, 03 Mar 2022 01:35:57 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
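The allocation-size trick in the third commit relies on the freelist array being the struct's last member, so that a non-partitioned table can allocate only the prefix ending at freeList[0]. A minimal sketch with an invented struct layout (the real HASHHDR has many more fields):

```c
/* Sketch of allocating a truncated header for the non-partitioned case:
 * offsetof(HdrSketch, freeList[1]) covers everything up to and including
 * freeList[0].  Struct layout here is invented for illustration. */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define NUM_FREELISTS 32

typedef struct FreeListData { long nfree; void *freeList; } FreeListData;

typedef struct HdrSketch
{
    long         dsize;
    long         ssize;
    FreeListData freeList[NUM_FREELISTS];   /* must stay the last member */
} HdrSketch;

HdrSketch *alloc_hdr(int partitioned)
{
    size_t sz = partitioned ? sizeof(HdrSketch)
                            : offsetof(HdrSketch, freeList[1]);
    HdrSketch *h = malloc(sz);
    if (h != NULL)
        memset(h, 0, sz);       /* only the allocated prefix is touched */
    return h;
}
```

Code accessing such a header must then never touch freeList[1..NUM_FREELISTS-1] for a non-partitioned table, which the single-freelist code path already guarantees.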
{
"msg_contents": "At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Вт, 01/03/2022 в 10:24 +0300, Yura Sokolov пишет:\n> > Ok, here is v4.\n> \n> And here is v5.\n> \n> First, there was compilation error in Assert in dynahash.c .\n> Excuse me for not checking before sending previous version.\n> \n> Second, I add third commit that reduces HASHHDR allocation\n> size for non-partitioned dynahash:\n> - moved freeList to last position\n> - alloc and memset offset(HASHHDR, freeList[1]) for\n> non-partitioned hash tables.\n> I didn't benchmarked it, but I will be surprised if it\n> matters much in performance sence.\n> \n> Third, I put all three commits into single file to not\n> confuse commitfest application.\n\nThanks! I looked into dynahash part.\n\n struct HASHHDR\n {\n-\t/*\n-\t * The freelist can become a point of contention in high-concurrency hash\n\nWhy did you move around the freeList?\n\n\n-\tlong\t\tnentries;\t\t/* number of entries in associated buckets */\n+\tlong\t\tnfree;\t\t\t/* number of free entries in the list */\n+\tlong\t\tnalloced;\t\t/* number of entries initially allocated for\n\nWhy do we need nfree? HASH_ASSING should do the same thing with\nHASH_REMOVE. Maybe the reason is the code tries to put the detached\nbucket to different free list, but we can just remember the\nfreelist_idx for the detached bucket as we do for hashp. 
I think that\nshould largely reduce the footprint of this patch.\n\n-static void hdefault(HTAB *hashp);\n+static void hdefault(HTAB *hashp, bool partitioned);\n\nThat optimization may work even a bit, but it is not irrelevant to\nthis patch?\n\n+\t\tcase HASH_REUSE:\n+\t\t\tif (currBucket != NULL)\n+\t\t\t{\n+\t\t\t\t/* check there is no unfinished HASH_REUSE+HASH_ASSIGN pair */\n+\t\t\t\tAssert(DynaHashReuse.hashp == NULL);\n+\t\t\t\tAssert(DynaHashReuse.element == NULL);\n\nI think all cases in the switch(action) other than HASH_ASSIGN needs\nthis assertion and no need for checking both, maybe only for element\nwould be enough.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Mar 2022 15:30:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Thanks! I looked into dynahash part.\n> \n> struct HASHHDR\n> {\n> -\t/*\n> -\t * The freelist can become a point of contention in high-concurrency hash\n> \n> Why did you move around the freeList?\n> \n> \n> -\tlong\t\tnentries;\t\t/* number of entries in associated buckets */\n> +\tlong\t\tnfree;\t\t\t/* number of free entries in the list */\n> +\tlong\t\tnalloced;\t\t/* number of entries initially allocated for\n> \n> Why do we need nfree? HASH_ASSING should do the same thing with\n> HASH_REMOVE. Maybe the reason is the code tries to put the detached\n> bucket to different free list, but we can just remember the\n> freelist_idx for the detached bucket as we do for hashp. I think that\n> should largely reduce the footprint of this patch.\n> \n> -static void hdefault(HTAB *hashp);\n> +static void hdefault(HTAB *hashp, bool partitioned);\n> \n> That optimization may work even a bit, but it is not irrelevant to\n> this patch?\n> \n> +\t\tcase HASH_REUSE:\n> +\t\t\tif (currBucket != NULL)\n> +\t\t\t{\n> +\t\t\t\t/* check there is no unfinished HASH_REUSE+HASH_ASSIGN pair */\n> +\t\t\t\tAssert(DynaHashReuse.hashp == NULL);\n> +\t\t\t\tAssert(DynaHashReuse.element == NULL);\n> \n> I think all cases in the switch(action) other than HASH_ASSIGN needs\n> this assertion and no need for checking both, maybe only for element\n> would be enough.\n\nWhile I looked buf_table part, I came up with additional comments.\n\nBufTableInsert(BufferTag *tagPtr, uint32 hashcode, int buf_id)\n{\n\t\thash_search_with_hash_value(SharedBufHash,\n\t\t\t\t\t\t\t\t\tHASH_ASSIGN,\n...\nBufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n\nBufTableDelete considers both reuse and !reuse cases but\nBufTableInsert doesn't and always does HASH_ASSIGN. That looks\nodd. We should use HASH_ENTER here. 
Thus I think it is more\nreasonable that HASH_ENTER uses the stashed entry if it exists and\nis needed, or returns it to the freelist if it exists but is not needed.\n\nWhat do you think about this?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Mar 2022 15:49:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Thanks! I looked into dynahash part.\n\nThen I looked into bufmgr part. It looks fine to me but I have some\ncomments on code comments.\n\n>\t\t * To change the association of a valid buffer, we'll need to have\n>\t\t * exclusive lock on both the old and new mapping partitions.\n>\t\tif (oldFlags & BM_TAG_VALID)\n\nWe don't take lock on the new mapping partition here.\n\n\n+\t * Clear out the buffer's tag and flags. We must do this to ensure that\n+\t * linear scans of the buffer array don't think the buffer is valid. We\n+\t * also reset the usage_count since any recency of use of the old content\n+\t * is no longer relevant.\n+ *\n+\t * We are single pinner, we hold buffer header lock and exclusive\n+\t * partition lock (if tag is valid). Given these statements it is safe to\n+\t * clear tag since no other process can inspect it to the moment.\n\nThis comment is a merger of the comments from InvalidateBuffer and\nBufferAlloc. But I think what we need to explain here is why we\ninvalidate the buffer here despite of we are going to reuse it soon.\nAnd I think we need to state that the old buffer is now safe to use\nfor the new tag here. I'm not sure the statement is really correct\nbut clearing-out actually looks like safer.\n\n> Now it is safe to use victim buffer for new tag. Invalidate the\n> buffer before releasing header lock to ensure that linear scans of\n> the buffer array don't think the buffer is valid. 
It is safe\n> because it is guaranteed that we're the single pinner of the buffer.\n> That pin also prevents the buffer from being stolen by others until\n> we reuse it or return it to freelist.\n\nSo I want to revise the following comment.\n\n-\t * Now it is safe to use victim buffer for new tag.\n+\t * Now reuse victim buffer for new tag.\n>\t * Make sure BM_PERMANENT is set for buffers that must be written at every\n>\t * checkpoint. Unlogged buffers only need to be written at shutdown\n>\t * checkpoints, except for their \"init\" forks, which need to be treated\n>\t * just like permanent relations.\n>\t *\n>\t * The usage_count starts out at 1 so that the buffer can survive one\n>\t * clock-sweep pass.\n\nBut if you think the current comment is fine, I don't insist on the\ncomment changes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 11 Mar 2022 17:21:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пт, 11/03/2022 в 15:30 +0900, Kyotaro Horiguchi пишет:\n> At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Вт, 01/03/2022 в 10:24 +0300, Yura Sokolov пишет:\n> > > Ok, here is v4.\n> > \n> > And here is v5.\n> > \n> > First, there was compilation error in Assert in dynahash.c .\n> > Excuse me for not checking before sending previous version.\n> > \n> > Second, I add third commit that reduces HASHHDR allocation\n> > size for non-partitioned dynahash:\n> > - moved freeList to last position\n> > - alloc and memset offset(HASHHDR, freeList[1]) for\n> > non-partitioned hash tables.\n> > I didn't benchmarked it, but I will be surprised if it\n> > matters much in performance sence.\n> > \n> > Third, I put all three commits into single file to not\n> > confuse commitfest application.\n> \n> Thanks! I looked into dynahash part.\n> \n> struct HASHHDR\n> {\n> - /*\n> - * The freelist can become a point of contention in high-concurrency hash\n> \n> Why did you move around the freeList?\n> \n> \n> - long nentries; /* number of entries in associated buckets */\n> + long nfree; /* number of free entries in the list */\n> + long nalloced; /* number of entries initially allocated for\n> \n> Why do we need nfree? HASH_ASSING should do the same thing with\n> HASH_REMOVE. Maybe the reason is the code tries to put the detached\n> bucket to different free list, but we can just remember the\n> freelist_idx for the detached bucket as we do for hashp. I think that\n> should largely reduce the footprint of this patch.\n\nIf we keep nentries, then we need to fix nentries in both old\n\"freeList\" partition and new one. It is two freeList[partition]->mutex\nlock+unlock pairs.\n\nBut count of free elements doesn't change, so if we change nentries\nto nfree, then no need to fix freeList[partition]->nfree counters,\nno need to lock+unlock. 
\n\n> \n> -static void hdefault(HTAB *hashp);\n> +static void hdefault(HTAB *hashp, bool partitioned);\n> \n> That optimization may work even a bit, but it is not irrelevant to\n> this patch?\n> \n> + case HASH_REUSE:\n> + if (currBucket != NULL)\n> + {\n> + /* check there is no unfinished HASH_REUSE+HASH_ASSIGN pair */\n> + Assert(DynaHashReuse.hashp == NULL);\n> + Assert(DynaHashReuse.element == NULL);\n> \n> I think all cases in the switch(action) other than HASH_ASSIGN needs\n> this assertion and no need for checking both, maybe only for element\n> would be enough.\n\nAgree.\n\n\n\n",
"msg_date": "Fri, 11 Mar 2022 11:30:27 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пт, 11/03/2022 в 15:49 +0900, Kyotaro Horiguchi пишет:\n> At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > Thanks! I looked into dynahash part.\n> > \n> > struct HASHHDR\n> > {\n> > - /*\n> > - * The freelist can become a point of contention in high-concurrency hash\n> > \n> > Why did you move around the freeList?\n> > \n> > \n> > - long nentries; /* number of entries in associated buckets */\n> > + long nfree; /* number of free entries in the list */\n> > + long nalloced; /* number of entries initially allocated for\n> > \n> > Why do we need nfree? HASH_ASSING should do the same thing with\n> > HASH_REMOVE. Maybe the reason is the code tries to put the detached\n> > bucket to different free list, but we can just remember the\n> > freelist_idx for the detached bucket as we do for hashp. I think that\n> > should largely reduce the footprint of this patch.\n> > \n> > -static void hdefault(HTAB *hashp);\n> > +static void hdefault(HTAB *hashp, bool partitioned);\n> > \n> > That optimization may work even a bit, but it is not irrelevant to\n> > this patch?\n\n(forgot to answer in previous letter).\nYes, third commit is very optional. But adding `nalloced` to\n`FreeListData` increases allocation a lot even for usual\nnon-shared non-partitioned dynahashes. 
And this allocation is\nquite huge right now for no meaningful reason.\n\n> > \n> > + case HASH_REUSE:\n> > + if (currBucket != NULL)\n> > + {\n> > + /* check there is no unfinished HASH_REUSE+HASH_ASSIGN pair */\n> > + Assert(DynaHashReuse.hashp == NULL);\n> > + Assert(DynaHashReuse.element == NULL);\n> > \n> > I think all cases in the switch(action) other than HASH_ASSIGN needs\n> > this assertion and no need for checking both, maybe only for element\n> > would be enough.\n> \n> While I looked buf_table part, I came up with additional comments.\n> \n> BufTableInsert(BufferTag *tagPtr, uint32 hashcode, int buf_id)\n> {\n> hash_search_with_hash_value(SharedBufHash,\n> HASH_ASSIGN,\n> ...\n> BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n> \n> BufTableDelete considers both reuse and !reuse cases but\n> BufTableInsert doesn't and always does HASH_ASSIGN. That looks\n> odd. We should use HASH_ENTER here. Thus I think it is more\n> reasonable that HASH_ENTRY uses the stashed entry if exists and\n> needed, or returns it to freelist if exists but not needed.\n> \n> What do you think about this?\n\nWell... I don't like it but I don't mind either.\n\nThe code in the HASH_ENTER and HASH_ASSIGN cases differs a lot.\nOn the other hand, it is probably possible to merge them carefully.\nI'll try.\n\n---------\n\nregards\n\nYura Sokolov\n\n\n\n",
"msg_date": "Fri, 11 Mar 2022 12:34:32 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пт, 11/03/2022 в 17:21 +0900, Kyotaro Horiguchi пишет:\n> At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > > Thanks! I looked into dynahash part.\n> > > \n> > > struct HASHHDR\n> > > {\n> > > - /*\n> > > - * The freelist can become a point of contention in high-concurrency hash\n> > > \n> > > Why did you move around the freeList?\n\nThis way it is possible to allocate just first partition, not all 32 partitions.\n\n> \n> Then I looked into bufmgr part. It looks fine to me but I have some\n> comments on code comments.\n> \n> > * To change the association of a valid buffer, we'll need to have\n> > * exclusive lock on both the old and new mapping partitions.\n> > if (oldFlags & BM_TAG_VALID)\n> \n> We don't take lock on the new mapping partition here.\n\nThx, fixed.\n\n> + * Clear out the buffer's tag and flags. We must do this to ensure that\n> + * linear scans of the buffer array don't think the buffer is valid. We\n> + * also reset the usage_count since any recency of use of the old content\n> + * is no longer relevant.\n> + *\n> + * We are single pinner, we hold buffer header lock and exclusive\n> + * partition lock (if tag is valid). Given these statements it is safe to\n> + * clear tag since no other process can inspect it to the moment.\n> \n> This comment is a merger of the comments from InvalidateBuffer and\n> BufferAlloc. But I think what we need to explain here is why we\n> invalidate the buffer here despite of we are going to reuse it soon.\n> And I think we need to state that the old buffer is now safe to use\n> for the new tag here. I'm not sure the statement is really correct\n> but clearing-out actually looks like safer.\n\nI've tried to reformulate the comment block.\n\n> \n> > Now it is safe to use victim buffer for new tag. 
Invalidate the\n> > buffer before releasing header lock to ensure that linear scans of\n> > the buffer array don't think the buffer is valid. It is safe\n> > because it is guaranteed that we're the single pinner of the buffer.\n> > That pin also prevents the buffer from being stolen by others until\n> > we reuse it or return it to freelist.\n> \n> So I want to revise the following comment.\n> \n> - * Now it is safe to use victim buffer for new tag.\n> + * Now reuse victim buffer for new tag.\n> > * Make sure BM_PERMANENT is set for buffers that must be written at every\n> > * checkpoint. Unlogged buffers only need to be written at shutdown\n> > * checkpoints, except for their \"init\" forks, which need to be treated\n> > * just like permanent relations.\n> > *\n> > * The usage_count starts out at 1 so that the buffer can survive one\n> > * clock-sweep pass.\n> \n> But if you think the current commet is fine, I don't insist on the\n> comment chagnes.\n\nI used your suggestion.\n\nFr, 11/03/22 Yura Sokolov wrote:\n> В Пт, 11/03/2022 в 15:49 +0900, Kyotaro Horiguchi пишет:\n> > BufTableDelete considers both reuse and !reuse cases but\n> > BufTableInsert doesn't and always does HASH_ASSIGN. That looks\n> > odd. We should use HASH_ENTER here. Thus I think it is more\n> > reasonable that HASH_ENTRY uses the stashed entry if exists and\n> > needed, or returns it to freelist if exists but not needed.\n> > \n> > What do you think about this?\n> \n> Well... I don't like it but I don't mind either.\n> \n> Code in HASH_ENTER and HASH_ASSIGN cases differs much.\n> On the other hand, probably it is possible to merge it carefuly.\n> I'll try.\n\nI've merged HASH_ASSIGN into HASH_ENTER.\n\nAs in the previous letter, the three commits are concatenated into one file\nand can be applied with `git am`.\n\n-------\n\nregards\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
"msg_date": "Sun, 13 Mar 2022 13:24:51 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Sun, Mar 13, 2022 at 3:25 AM Yura Sokolov <y.sokolov@postgrespro.ru>\nwrote:\n\n> В Пт, 11/03/2022 в 17:21 +0900, Kyotaro Horiguchi пишет:\n> > At Fri, 11 Mar 2022 15:49:49 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > > > Thanks! I looked into dynahash part.\n> > > >\n> > > > struct HASHHDR\n> > > > {\n> > > > - /*\n> > > > - * The freelist can become a point of contention in\n> high-concurrency hash\n> > > >\n> > > > Why did you move around the freeList?\n>\n> This way it is possible to allocate just first partition, not all 32\n> partitions.\n>\n> >\n> > Then I looked into bufmgr part. It looks fine to me but I have some\n> > comments on code comments.\n> >\n> > > * To change the association of a valid buffer, we'll\n> need to have\n> > > * exclusive lock on both the old and new mapping\n> partitions.\n> > > if (oldFlags & BM_TAG_VALID)\n> >\n> > We don't take lock on the new mapping partition here.\n>\n> Thx, fixed.\n>\n> > + * Clear out the buffer's tag and flags. We must do this to\n> ensure that\n> > + * linear scans of the buffer array don't think the buffer is\n> valid. We\n> > + * also reset the usage_count since any recency of use of the\n> old content\n> > + * is no longer relevant.\n> > + *\n> > + * We are single pinner, we hold buffer header lock and exclusive\n> > + * partition lock (if tag is valid). Given these statements it\n> is safe to\n> > + * clear tag since no other process can inspect it to the moment.\n> >\n> > This comment is a merger of the comments from InvalidateBuffer and\n> > BufferAlloc. But I think what we need to explain here is why we\n> > invalidate the buffer here despite of we are going to reuse it soon.\n> > And I think we need to state that the old buffer is now safe to use\n> > for the new tag here. 
I'm not sure the statement is really correct\n> > but clearing-out actually looks like safer.\n>\n> I've tried to reformulate the comment block.\n>\n> >\n> > > Now it is safe to use victim buffer for new tag. Invalidate the\n> > > buffer before releasing header lock to ensure that linear scans of\n> > > the buffer array don't think the buffer is valid. It is safe\n> > > because it is guaranteed that we're the single pinner of the buffer.\n> > > That pin also prevents the buffer from being stolen by others until\n> > > we reuse it or return it to freelist.\n> >\n> > So I want to revise the following comment.\n> >\n> > - * Now it is safe to use victim buffer for new tag.\n> > + * Now reuse victim buffer for new tag.\n> > > * Make sure BM_PERMANENT is set for buffers that must be\n> written at every\n> > > * checkpoint. Unlogged buffers only need to be written at\n> shutdown\n> > > * checkpoints, except for their \"init\" forks, which need to be\n> treated\n> > > * just like permanent relations.\n> > > *\n> > > * The usage_count starts out at 1 so that the buffer can\n> survive one\n> > > * clock-sweep pass.\n> >\n> > But if you think the current commet is fine, I don't insist on the\n> > comment chagnes.\n>\n> Used suggestion.\n>\n> Fr, 11/03/22 Yura Sokolov wrote:\n> > В Пт, 11/03/2022 в 15:49 +0900, Kyotaro Horiguchi пишет:\n> > > BufTableDelete considers both reuse and !reuse cases but\n> > > BufTableInsert doesn't and always does HASH_ASSIGN. That looks\n> > > odd. We should use HASH_ENTER here. Thus I think it is more\n> > > reasonable that HASH_ENTRY uses the stashed entry if exists and\n> > > needed, or returns it to freelist if exists but not needed.\n> > >\n> > > What do you think about this?\n> >\n> > Well... 
I don't like it but I don't mind either.\n> >\n> > Code in HASH_ENTER and HASH_ASSIGN cases differs much.\n> > On the other hand, probably it is possible to merge it carefuly.\n> > I'll try.\n>\n> I've merged HASH_ASSIGN into HASH_ENTER.\n>\n> As in previous letter, three commits are concatted to one file\n> and could be applied with `git am`.\n>\n> -------\n>\n> regards\n>\n> Yura Sokolov\n> Postgres Professional\n> y.sokolov@postgrespro.ru\n> funny.falcon@gmail.com\n\n\nHi,\nIn the description:\n\nThere is no need to hold both lock simultaneously.\n\nboth lock -> both locks\n\n+ * We also reset the usage_count since any recency of use of the old\n\nrecency of use -> recent use\n\n+BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n\nLater on, there is code:\n\n+ reuse ? HASH_REUSE : HASH_REMOVE,\n\nCan flag (such as HASH_REUSE) be passed to BufTableDelete() instead of bool\n? That way, flag can be used directly in the above place.\n\n+ long nalloced; /* number of entries initially allocated for\n\nnallocated isn't very long. I think it would be better to name the\nfield nallocated 'nallocated'.\n\n+ sum += hashp->hctl->freeList[i].nalloced;\n+ sum -= hashp->hctl->freeList[i].nfree;\n\nI think it would be better to calculate the difference between nalloced and\nnfree first, then add the result to sum (to avoid overflow).\n\nSubject: [PATCH 3/3] reduce memory allocation for non-partitioned dynahash\n\nmemory allocation -> memory allocations\n\nCheers",
"msg_date": "Sun, 13 Mar 2022 07:05:10 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Вс, 13/03/2022 в 07:05 -0700, Zhihong Yu пишет:\n> \n> Hi,\n> In the description:\n> \n> There is no need to hold both lock simultaneously. \n> \n> both lock -> both locks\n\nThanks.\n\n> + * We also reset the usage_count since any recency of use of the old\n> \n> recency of use -> recent use\n\nThanks.\n\n> +BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n> \n> Later on, there is code:\n> \n> + reuse ? HASH_REUSE : HASH_REMOVE,\n> \n> Can flag (such as HASH_REUSE) be passed to BufTableDelete() instead of bool ? That way, flag can be used directly in the above place.\n\nNo.\nBufTable* functions are created to abstract Buffer Table from dynahash.\nPass of HASH_REUSE directly will break abstraction.\n\n> + long nalloced; /* number of entries initially allocated for\n> \n> nallocated isn't very long. I think it would be better to name the field nallocated 'nallocated'.\n\nIt is debatable.\nWhy not num_allocated? allocated_count? number_of_allocations?\nSame points for nfree.\n`nalloced` is recognizable and unambiguous. And there are a lot\nof `*alloced` in the postgresql's source, so this one will not\nbe unusual.\n\nI don't see the need to make it longer.\n\nBut if someone supports your point, I will not mind to changing\nthe name.\n\n> + sum += hashp->hctl->freeList[i].nalloced;\n> + sum -= hashp->hctl->freeList[i].nfree;\n> \n> I think it would be better to calculate the difference between nalloced and nfree first, then add the result to sum (to avoid overflow).\n\nDoesn't really matter much, because calculation must be valid\neven if all nfree==0.\n\nI'd rather debate use of 'long' in dynahash at all: 'long' is\n32bit on 64bit Windows. It is better to use 'Size' here.\n\nBut 'nelements' were 'long', so I didn't change things. I think\nit is place for another patch.\n\n(On the other hand, dynahash with 2**31 elements is at least\n512GB RAM... we doubtfully trigger problem before OOM killer\ncame. 
Does Windows have an OOM killer?)\n\n> Subject: [PATCH 3/3] reduce memory allocation for non-partitioned dynahash\n> \n> memory allocation -> memory allocations\n\nFor each dynahash instance, a single allocation was reduced.\nI think 'memory allocation' is correct.\n\nThe plural would be\n reduce memory allocations for non-partitioned dynahashes\ni.e. both 'allocations' and 'dynahashes'.\nAm I wrong?\n\n\n------\n\nregards\nYura Sokolov",
"msg_date": "Mon, 14 Mar 2022 01:27:47 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Sun, Mar 13, 2022 at 3:27 PM Yura Sokolov <y.sokolov@postgrespro.ru>\nwrote:\n\n> В Вс, 13/03/2022 в 07:05 -0700, Zhihong Yu пишет:\n> >\n> > Hi,\n> > In the description:\n> >\n> > There is no need to hold both lock simultaneously.\n> >\n> > both lock -> both locks\n>\n> Thanks.\n>\n> > + * We also reset the usage_count since any recency of use of the old\n> >\n> > recency of use -> recent use\n>\n> Thanks.\n>\n> > +BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n> >\n> > Later on, there is code:\n> >\n> > + reuse ? HASH_REUSE : HASH_REMOVE,\n> >\n> > Can flag (such as HASH_REUSE) be passed to BufTableDelete() instead of\n> bool ? That way, flag can be used directly in the above place.\n>\n> No.\n> BufTable* functions are created to abstract Buffer Table from dynahash.\n> Pass of HASH_REUSE directly will break abstraction.\n>\n> > + long nalloced; /* number of entries initially allocated\n> for\n> >\n> > nallocated isn't very long. I think it would be better to name the field\n> nallocated 'nallocated'.\n>\n> It is debatable.\n> Why not num_allocated? allocated_count? number_of_allocations?\n> Same points for nfree.\n> `nalloced` is recognizable and unambiguous. And there are a lot\n> of `*alloced` in the postgresql's source, so this one will not\n> be unusual.\n>\n> I don't see the need to make it longer.\n>\n> But if someone supports your point, I will not mind to changing\n> the name.\n>\n> > + sum += hashp->hctl->freeList[i].nalloced;\n> > + sum -= hashp->hctl->freeList[i].nfree;\n> >\n> > I think it would be better to calculate the difference between nalloced\n> and nfree first, then add the result to sum (to avoid overflow).\n>\n> Doesn't really matter much, because calculation must be valid\n> even if all nfree==0.\n>\n> I'd rather debate use of 'long' in dynahash at all: 'long' is\n> 32bit on 64bit Windows. It is better to use 'Size' here.\n>\n> But 'nelements' were 'long', so I didn't change things. 
I think\n> it is place for another patch.\n>\n> (On the other hand, dynahash with 2**31 elements is at least\n> 512GB RAM... we doubtfully trigger problem before OOM killer\n> came. Does Windows have an OOM killer?)\n>\n> > Subject: [PATCH 3/3] reduce memory allocation for non-partitioned\n> dynahash\n> >\n> > memory allocation -> memory allocations\n>\n> For each dynahash instance single allocation were reduced.\n> I think, 'memory allocation' is correct.\n>\n> Plural will be\n> reduce memory allocations for non-partitioned dynahashes\n> ie both 'allocations' and 'dynahashes'.\n> Am I wrong?\n>\n\nHi,\nbq. reduce memory allocation for non-partitioned dynahash\n\nIt seems the following is clearer:\n\nreduce one memory allocation for every non-partitioned dynahash\n\nCheers",
"msg_date": "Sun, 13 Mar 2022 15:40:18 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Fri, 11 Mar 2022 11:30:27 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Пт, 11/03/2022 в 15:30 +0900, Kyotaro Horiguchi пишет:\n> > At Thu, 03 Mar 2022 01:35:57 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > В Вт, 01/03/2022 в 10:24 +0300, Yura Sokolov пишет:\n> > > > Ok, here is v4.\n> > > \n> > > And here is v5.\n> > > \n> > > First, there was compilation error in Assert in dynahash.c .\n> > > Excuse me for not checking before sending previous version.\n> > > \n> > > Second, I add third commit that reduces HASHHDR allocation\n> > > size for non-partitioned dynahash:\n> > > - moved freeList to last position\n> > > - alloc and memset offset(HASHHDR, freeList[1]) for\n> > > non-partitioned hash tables.\n> > > I didn't benchmarked it, but I will be surprised if it\n> > > matters much in performance sence.\n> > > \n> > > Third, I put all three commits into single file to not\n> > > confuse commitfest application.\n> > \n> > Thanks! I looked into dynahash part.\n> > \n> > struct HASHHDR\n> > {\n> > - /*\n> > - * The freelist can become a point of contention in high-concurrency hash\n> > \n> > Why did you move around the freeList?\n> > \n> > \n> > - long nentries; /* number of entries in associated buckets */\n> > + long nfree; /* number of free entries in the list */\n> > + long nalloced; /* number of entries initially allocated for\n> > \n> > Why do we need nfree? HASH_ASSING should do the same thing with\n> > HASH_REMOVE. Maybe the reason is the code tries to put the detached\n> > bucket to different free list, but we can just remember the\n> > freelist_idx for the detached bucket as we do for hashp. I think that\n> > should largely reduce the footprint of this patch.\n> \n> If we keep nentries, then we need to fix nentries in both old\n> \"freeList\" partition and new one. 
It is two freeList[partition]->mutex\n> lock+unlock pairs.\n>\n> But count of free elements doesn't change, so if we change nentries\n> to nfree, then no need to fix freeList[partition]->nfree counters,\n> no need to lock+unlock. \n\nAh, okay. I missed that bucket reuse changes the key in most cases.\n\nBut still I don't think it's good to move entries around partition\nfreelists, for another reason: I'm afraid that the freelists get into\nan imbalanced state. get_hash_entry prefers main shmem allocation to\nother freelists, so that could lead to freelist bloat, or worse\ncontention than the traditional way involving more than two partitions.\n\nI'll examine the possibility to resolve this...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Mar 2022 09:39:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Fri, 11 Mar 2022 12:34:32 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Пт, 11/03/2022 в 15:49 +0900, Kyotaro Horiguchi пишет:\n> > At Fri, 11 Mar 2022 15:30:30 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@g> > BufTableDelete(BufferTag *tagPtr, uint32 hashcode, bool reuse)\n> > \n> > BufTableDelete considers both reuse and !reuse cases but\n> > BufTableInsert doesn't and always does HASH_ASSIGN. That looks\n> > odd. We should use HASH_ENTER here. Thus I think it is more\n> > reasonable that HASH_ENTRY uses the stashed entry if exists and\n> > needed, or returns it to freelist if exists but not needed.\n> > \n> > What do you think about this?\n> \n> Well... I don't like it but I don't mind either.\n> \n> Code in HASH_ENTER and HASH_ASSIGN cases differs much.\n> On the other hand, probably it is possible to merge it carefuly.\n> I'll try.\n\nHonestly, I'm not sure it wins on performance basis. It just came from\ninterface consistency (mmm. a bit different, maybe.. convincibility?).\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Mar 2022 09:44:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Mon, 14 Mar 2022 09:39:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I'll examine the possibility to resolve this...\n\nThe existence of nfree and nalloced confused me, and I found the\nreason.\n\nIn the case where a partition collects many REUSE-ASSIGN-REMOVEed\nelements from other partitions, nfree gets larger than nalloced. This\nis a strange point of the two counters. nalloced is only referred to\nas (sum(nalloced[])). So we don't need nalloced on a per-partition basis,\nand the formula to calculate the number of used elements would be as\nfollows.\n\n sum(nalloced - nfree)\n = <total_nalloced> - sum(nfree)\n\nWe rarely create fresh elements in shared hashes, so I don't think\nthere's additional contention on the <total_nalloced> even if it were\na global atomic.\n\nSo, the remaining issue is the possible imbalance among\npartitions. On second thought, by the current way, if there's a bad\ndeviation in partition usage, a heavily hit partition finally collects\nelements via get_hash_entry(). By the patch's way, a similar thing\nhappens via the REUSE-ASSIGN-REMOVE sequence. But buffers once used\nfor something won't be freed until buffer invalidation, and bulk\nbuffer invalidation won't distribute freed buffers unevenly among\npartitions. So I conclude for now that this is a non-issue.\n\nSo my opinion on the counters is:\n\nI'd like to ask you to remove nalloced from partitions and add a\nglobal atomic for the same use.\n\nNo need to do something for the possible deviation issue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Mar 2022 14:31:12 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> At Mon, 14 Mar 2022 09:39:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> > I'll examine the possibility to resolve this...\n> \n> The existence of nfree and nalloc made me confused and I found the\n> reason.\n> \n> In the case where a parittion collects many REUSE-ASSIGN-REMOVEed\n> elemetns from other paritiotns, nfree gets larger than nalloced. This\n> is a strange point of the two counters. nalloced is only referred to\n> as (sum(nalloced[])). So we don't need nalloced per-partition basis\n> and the formula to calculate the number of used elements would be as\n> follows.\n> \n> sum(nalloced - nfree)\n> = <total_nalloced> - sum(nfree)\n> \n> We rarely create fresh elements in shared hashes so I don't think\n> there's additional contention on the <total_nalloced> even if it were\n> a global atomic.\n> \n> So, the remaining issue is the possible imbalancement among\n> partitions. On second thought, by the current way, if there's a bad\n> deviation in partition-usage, a heavily hit partition finally collects\n> elements via get_hash_entry(). By the patch's way, similar thing\n> happens via the REUSE-ASSIGN-REMOVE sequence. But buffers once used\n> for something won't be freed until buffer invalidation. But bulk\n> buffer invalidation won't deviatedly distribute freed buffers among\n> partitions. So I conclude for now that is a non-issue.\n> \n> So my opinion on the counters is:\n> \n> I'd like to ask you to remove nalloced from partitions then add a\n> global atomic for the same use?\n\nI really believe it should be global. I made it per-partition to\nnot overcomplicate first versions. Glad you tell it.\n\nI thought to protect it with freeList[0].mutex, but probably atomic\nis better idea here. 
But which atomic to choose: uint64 or uint32?\nBased on sizeof(long)?\nOk, I'll do it in the next version.\n\nThe whole get_hash_entry looks strange.\nWouldn't it be better to cycle through partitions and only then go to\nget_hash_entry?\nMaybe there should be a bitmap for non-empty free lists? 32 bits for\n32 partitions. But wouldn't the bitmap become a contention point itself?\n\n> No need to do something for the possible deviation issue.\n\n-------\n\nregards\nYura Sokolov\n\n\n\n",
"msg_date": "Mon, 14 Mar 2022 09:15:11 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > I'd like to ask you to remove nalloced from partitions then add a\n> > global atomic for the same use?\n> \n> I really believe it should be global. I made it per-partition to\n> not overcomplicate first versions. Glad you tell it.\n> \n> I thought to protect it with freeList[0].mutex, but probably atomic\n> is better idea here. But which atomic to chose: uint64 or uint32?\n> Based on sizeof(long)?\n> Ok, I'll do in next version.\n\nCurrent nentries is a long (= int64 on CentOS). And uint32 can support\nroughly 2^32 * 8192 = 32TB shared buffers, which doesn't seem safe\nenough. So it would be uint64.\n\n> Whole get_hash_entry look strange.\n> Doesn't it better to cycle through partitions and only then go to\n> get_hash_entry?\n> May be there should be bitmap for non-empty free lists? 32bit for\n> 32 partitions. But wouldn't bitmap became contention point itself?\n\nThe code puts significance on avoiding contention caused by visiting\nfreelists of other partitions. And perhaps thinks that freelist\nshortage rarely happen.\n\nI tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n128kB shared buffers and I saw that get_hash_entry never takes the\n!element_alloc() path and always allocate a fresh entry, then\nsaturates at 30 new elements allocated at the medium of a 100 seconds\nrun.\n\nThen, I tried the same with the patch, and I am surprized to see that\nthe rise of the number of newly allocated elements didn't stop and\nwent up to 511 elements after the 100 seconds run. So I found that my\nconcern was valid. The change in dynahash actually\ncontinuously/repeatedly causes lack of free list entries. 
I'm not\nsure how much impact is given on performance if we change\nget_hash_entry to prefer other freelists, though.\n\n\nBy the way, there's the following comment in StrategyInitialize.\n\n>\t * Initialize the shared buffer lookup hashtable.\n>\t *\n>\t * Since we can't tolerate running out of lookup table entries, we must be\n>\t * sure to specify an adequate table size here. The maximum steady-state\n>\t * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n>\t * a new entry before deleting the old. In principle this could be\n>\t * happening in each partition concurrently, so we could need as many as\n>\t * NBuffers + NUM_BUFFER_PARTITIONS entries.\n>\t */\n>\tInitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n\n\"but BufferAlloc() tries to insert a new entry before deleting the\nold.\" becomes false with this patch, but we still need that additional room for\nstashed entries. It seems to need a fix.\n\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Mar 2022 17:12:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Mon, 14 Mar 2022 17:12:48 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> Then, I tried the same with the patch, and I am surprized to see that\n> the rise of the number of newly allocated elements didn't stop and\n> went up to 511 elements after the 100 seconds run. So I found that my\n> concern was valid.\n\nWhich means my last decision was very likely wrong.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 14 Mar 2022 17:34:10 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пн, 14/03/2022 в 17:12 +0900, Kyotaro Horiguchi пишет:\n> At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > > I'd like to ask you to remove nalloced from partitions then add a\n> > > global atomic for the same use?\n> > \n> > I really believe it should be global. I made it per-partition to\n> > not overcomplicate first versions. Glad you tell it.\n> > \n> > I thought to protect it with freeList[0].mutex, but probably atomic\n> > is better idea here. But which atomic to chose: uint64 or uint32?\n> > Based on sizeof(long)?\n> > Ok, I'll do in next version.\n> \n> Current nentries is a long (= int64 on CentOS). And uint32 can support\n> roughly 2^32 * 8192 = 32TB shared buffers, which doesn't seem safe\n> enough. So it would be uint64.\n> \n> > Whole get_hash_entry look strange.\n> > Doesn't it better to cycle through partitions and only then go to\n> > get_hash_entry?\n> > May be there should be bitmap for non-empty free lists? 32bit for\n> > 32 partitions. But wouldn't bitmap became contention point itself?\n> \n> The code puts significance on avoiding contention caused by visiting\n> freelists of other partitions. And perhaps thinks that freelist\n> shortage rarely happen.\n> \n> I tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n> 128kB shared buffers and I saw that get_hash_entry never takes the\n> !element_alloc() path and always allocate a fresh entry, then\n> saturates at 30 new elements allocated at the medium of a 100 seconds\n> run.\n> \n> Then, I tried the same with the patch, and I am surprized to see that\n> the rise of the number of newly allocated elements didn't stop and\n> went up to 511 elements after the 100 seconds run. So I found that my\n> concern was valid. The change in dynahash actually\n> continuously/repeatedly causes lack of free list entries. 
I'm not\n> sure how much the impact given on performance if we change\n> get_hash_entry to prefer other freelists, though.\n\nWell, it is quite strange SharedBufHash is not allocated as\nHASH_FIXED_SIZE. Could you check what happens with this flag set?\nI'll try as well.\n\nAnother way to reduce the observed case is to remember freelist_idx for\nthe reused entry. I didn't believe it matters much since entries migrate\nnevertheless, but probably due to some hot buffers there is a tendency to\ncrowd a particular freelist.\n\n> By the way, there's the following comment in StrategyInitalize.\n> \n> > * Initialize the shared buffer lookup hashtable.\n> > *\n> > * Since we can't tolerate running out of lookup table entries, we must be\n> > * sure to specify an adequate table size here. The maximum steady-state\n> > * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n> > * a new entry before deleting the old. In principle this could be\n> > * happening in each partition concurrently, so we could need as many as\n> > * NBuffers + NUM_BUFFER_PARTITIONS entries.\n> > */\n> > InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n> \n> \"but BufferAlloc() tries to insert a new entry before deleting the\n> old.\" gets false by this patch but still need that additional room for\n> stashed entries. It seems like needing a fix.\n> \n> \n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n\n\n\n",
"msg_date": "Mon, 14 Mar 2022 14:57:38 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Пн, 14/03/2022 в 14:57 +0300, Yura Sokolov пишет:\n> В Пн, 14/03/2022 в 17:12 +0900, Kyotaro Horiguchi пишет:\n> > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > > > I'd like to ask you to remove nalloced from partitions then add a\n> > > > global atomic for the same use?\n> > > \n> > > I really believe it should be global. I made it per-partition to\n> > > not overcomplicate first versions. Glad you tell it.\n> > > \n> > > I thought to protect it with freeList[0].mutex, but probably atomic\n> > > is better idea here. But which atomic to chose: uint64 or uint32?\n> > > Based on sizeof(long)?\n> > > Ok, I'll do in next version.\n> > \n> > Current nentries is a long (= int64 on CentOS). And uint32 can support\n> > roughly 2^32 * 8192 = 32TB shared buffers, which doesn't seem safe\n> > enough. So it would be uint64.\n> > \n> > > Whole get_hash_entry look strange.\n> > > Doesn't it better to cycle through partitions and only then go to\n> > > get_hash_entry?\n> > > May be there should be bitmap for non-empty free lists? 32bit for\n> > > 32 partitions. But wouldn't bitmap became contention point itself?\n> > \n> > The code puts significance on avoiding contention caused by visiting\n> > freelists of other partitions. And perhaps thinks that freelist\n> > shortage rarely happen.\n> > \n> > I tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n> > 128kB shared buffers and I saw that get_hash_entry never takes the\n> > !element_alloc() path and always allocate a fresh entry, then\n> > saturates at 30 new elements allocated at the medium of a 100 seconds\n> > run.\n> > \n> > Then, I tried the same with the patch, and I am surprized to see that\n> > the rise of the number of newly allocated elements didn't stop and\n> > went up to 511 elements after the 100 seconds run. So I found that my\n> > concern was valid. 
The change in dynahash actually\n> > continuously/repeatedly causes lack of free list entries. I'm not\n> > sure how much the impact given on performance if we change\n> > get_hash_entry to prefer other freelists, though.\n> \n> Well, it is quite strange SharedBufHash is not allocated as\n> HASH_FIXED_SIZE. Could you check what happens with this flag set?\n> I'll try as well.\n> \n> Other way to reduce observed case is to remember freelist_idx for\n> reused entry. I didn't believe it matters much since entries migrated\n> netherless, but probably due to some hot buffers there are tention to\n> crowd particular freelist.\n\nWell, I did both. Everything looks ok.\n\n> > By the way, there's the following comment in StrategyInitalize.\n> > \n> > > * Initialize the shared buffer lookup hashtable.\n> > > *\n> > > * Since we can't tolerate running out of lookup table entries, we must be\n> > > * sure to specify an adequate table size here. The maximum steady-state\n> > > * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n> > > * a new entry before deleting the old. In principle this could be\n> > > * happening in each partition concurrently, so we could need as many as\n> > > * NBuffers + NUM_BUFFER_PARTITIONS entries.\n> > > */\n> > > InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n> > \n> > \"but BufferAlloc() tries to insert a new entry before deleting the\n> > old.\" gets false by this patch but still need that additional room for\n> > stashed entries. 
It seems like needing a fix.\n\nRemoved whole paragraph because fixed table without extra entries works\njust fine.\n\nI lost access to Xeon 8354H, so returned to old Xeon X5675.\n\n128MB and 1GB shared buffers\npgbench with scale 100\nselect_only benchmark, unix sockets.\n\nNotebook i7-1165G7:\n\n\n conns | master | v8 | master 1G | v8 1G \n--------+------------+------------+------------+------------\n 1 | 29614 | 29285 | 32413 | 32784 \n 2 | 58541 | 60052 | 65851 | 65938 \n 3 | 91126 | 90185 | 101404 | 101956 \n 5 | 135809 | 133670 | 143783 | 143471 \n 7 | 155547 | 153568 | 162566 | 162361 \n 17 | 221794 | 218143 | 250562 | 250136 \n 27 | 213742 | 211226 | 241806 | 242594 \n 53 | 216067 | 214792 | 245868 | 246269 \n 83 | 216610 | 218261 | 246798 | 250515 \n 107 | 216169 | 216656 | 248424 | 250105 \n 139 | 208892 | 215054 | 244630 | 246439 \n 163 | 206988 | 212751 | 244061 | 248051 \n 191 | 203842 | 214764 | 241793 | 245081 \n 211 | 201304 | 213997 | 240863 | 246076 \n 239 | 199313 | 211713 | 239639 | 243586 \n 271 | 196712 | 211849 | 236231 | 243831 \n 307 | 194879 | 209813 | 233811 | 241303 \n 353 | 191279 | 210145 | 230896 | 241039 \n 397 | 188509 | 207480 | 227812 | 240637 \n\nX5675 1 socket:\n\n conns | master | v8 | master 1G | v8 1G \n--------+------------+------------+------------+------------\n 1 | 18590 | 18473 | 19652 | 19051 \n 2 | 34899 | 34799 | 37242 | 37432 \n 3 | 51484 | 51393 | 54750 | 54398 \n 5 | 71037 | 70564 | 76482 | 75985 \n 7 | 87391 | 86937 | 96185 | 95433 \n 17 | 122609 | 123087 | 140578 | 140325 \n 27 | 120051 | 120508 | 136318 | 136343 \n 53 | 116851 | 117601 | 133338 | 133265 \n 83 | 113682 | 116755 | 131841 | 132736 \n 107 | 111925 | 116003 | 130661 | 132386 \n 139 | 109338 | 115011 | 128319 | 131453 \n 163 | 107661 | 114398 | 126684 | 130677 \n 191 | 105000 | 113745 | 124850 | 129909 \n 211 | 103607 | 113347 | 123469 | 129302 \n 239 | 101820 | 112428 | 121752 | 128621 \n 271 | 100060 | 111863 | 119743 | 127624 \n 307 | 98554 | 
111270 | 117650 | 126877 \n 353 | 97530 | 110231 | 115904 | 125351 \n 397 | 96122 | 109471 | 113609 | 124150 \n\nX5675 2 socket:\n\n conns | master | v8 | master 1G | v8 1G \n--------+------------+------------+------------+------------\n 1 | 17815 | 17577 | 19321 | 19187 \n 2 | 34312 | 35655 | 37121 | 36479 \n 3 | 51868 | 52165 | 56048 | 54984 \n 5 | 81704 | 82477 | 90945 | 90109 \n 7 | 107937 | 105411 | 116015 | 115810 \n 17 | 191339 | 190813 | 216899 | 215775 \n 27 | 236541 | 238078 | 278507 | 278073 \n 53 | 230323 | 231709 | 267226 | 267449 \n 83 | 225560 | 227455 | 261996 | 262344 \n 107 | 221317 | 224030 | 259694 | 259553 \n 139 | 206945 | 219005 | 254817 | 256736 \n 163 | 197723 | 220353 | 251631 | 257305 \n 191 | 193243 | 219149 | 246960 | 256528 \n 211 | 189603 | 218545 | 245362 | 255785 \n 239 | 186382 | 217229 | 240006 | 255024 \n 271 | 183141 | 216359 | 236927 | 253069 \n 307 | 179275 | 215218 | 232571 | 252375 \n 353 | 175559 | 213298 | 227244 | 250534 \n 397 | 172916 | 211627 | 223513 | 248919 \n\nStrange thing: both master and patched version has higher\npeak tps at X5676 at medium connections (17 or 27 clients)\nthan in first october version [1]. But lower tps at higher\nconnections number (>= 191 clients).\nI'll try to bisect on master this unfortunate change.\n\nOctober master was 2d44dee0281a1abf and today's is 7e12256b478b895\n\n(There is small possibility that I tested with TCP sockets\nin october and with UNIX sockets today and that gave difference.)\n\n[1] https://postgr.esq/m/1edbb61981fe1d99c3f20e3d56d6c88999f4227c.camel%40postgrespro.ru\n\n-------\n\nregards\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru",
"msg_date": "Tue, 15 Mar 2022 08:07:39 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Thanks for the new version.\n\nAt Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Пн, 14/03/2022 в 14:57 +0300, Yura Sokolov пишет:\n> > В Пн, 14/03/2022 в 17:12 +0900, Kyotaro Horiguchi пишет:\n> > > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > > В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > > I tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n> > > 128kB shared buffers and I saw that get_hash_entry never takes the\n> > > !element_alloc() path and always allocate a fresh entry, then\n> > > saturates at 30 new elements allocated at the medium of a 100 seconds\n> > > run.\n> > > \n> > > Then, I tried the same with the patch, and I am surprized to see that\n> > > the rise of the number of newly allocated elements didn't stop and\n> > > went up to 511 elements after the 100 seconds run. So I found that my\n> > > concern was valid. The change in dynahash actually\n> > > continuously/repeatedly causes lack of free list entries. I'm not\n> > > sure how much the impact given on performance if we change\n> > > get_hash_entry to prefer other freelists, though.\n> > \n> > Well, it is quite strange SharedBufHash is not allocated as\n> > HASH_FIXED_SIZE. Could you check what happens with this flag set?\n> > I'll try as well.\n> > \n> > Other way to reduce observed case is to remember freelist_idx for\n> > reused entry. I didn't believe it matters much since entries migrated\n> > netherless, but probably due to some hot buffers there are tention to\n> > crowd particular freelist.\n> \n> Well, I did both. Everything looks ok.\n\nHmm. v8 returns stashed element with original patition index when the\nelement is *not* reused. But what I saw in the previous test runs is\nthe REUSE->ENTER(reuse)(->REMOVE) case. So the new version looks like\nbehaving the same way (or somehow even worse) with the previous\nversion. 
get_hash_entry continuously suffer lack of freelist\nentry. (FWIW, attached are the test-output diff for both master and\npatched)\n\nmaster finally allocated 31 fresh elements for a 100s run.\n\n> ALLOCED: 31 ;; freshly allocated\n\nv8 finally borrowed 33620 times from another freelist and 0 freshly\nallocated (ah, this version changes that..)\nFinally v8 results in:\n\n> RETURNED: 50806 ;; returned stashed elements\n> BORROWED: 33620 ;; borrowed from another freelist\n> REUSED: 1812664 ;; stashed\n> ASSIGNED: 1762377 ;; reused\n>(ALLOCED: 0) ;; freshly allocated\n\nIt contains a huge degradation by frequent elog's so they cannot be\nnaively relied on, but it should show what is happening sufficiently.\n\n> I lost access to Xeon 8354H, so returned to old Xeon X5675.\n...\n> Strange thing: both master and patched version has higher\n> peak tps at X5676 at medium connections (17 or 27 clients)\n> than in first october version [1]. But lower tps at higher\n> connections number (>= 191 clients).\n> I'll try to bisect on master this unfortunate change.\n\nThe reversing of the preference order between freshly-allocation and\nborrow-from-another-freelist might affect.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/storage/buffer/buf_table.c b/src/backend/storage/buffer/buf_table.c\nindex dc439940fa..ac651b98e6 100644\n--- a/src/backend/storage/buffer/buf_table.c\n+++ b/src/backend/storage/buffer/buf_table.c\n@@ -31,7 +31,7 @@ typedef struct\n \tint\t\t\tid;\t\t\t\t/* Associated buffer ID */\n } BufferLookupEnt;\n \n-static HTAB *SharedBufHash;\n+HTAB *SharedBufHash;\n \n \n /*\ndiff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c\nindex 3babde8d70..294516ef01 100644\n--- a/src/backend/utils/hash/dynahash.c\n+++ b/src/backend/utils/hash/dynahash.c\n@@ -195,6 +195,11 @@ struct HASHHDR\n \tlong\t\tssize;\t\t\t/* segment size --- must be power of 2 */\n \tint\t\t\tsshift;\t\t\t/* segment 
shift = log2(ssize) */\n \tint\t\t\tnelem_alloc;\t/* number of entries to allocate at once */\n+\tint alloc;\n+\tint reuse;\n+\tint borrow;\n+\tint assign;\n+\tint ret;\n \n #ifdef HASH_STATISTICS\n \n@@ -963,6 +968,7 @@ hash_search(HTAB *hashp,\n \t\t\t\t\t\t\t\t\t foundPtr);\n }\n \n+extern HTAB *SharedBufHash;\n void *\n hash_search_with_hash_value(HTAB *hashp,\n \t\t\t\t\t\t\tconst void *keyPtr,\n@@ -1354,6 +1360,8 @@ get_hash_entry(HTAB *hashp, int freelist_idx)\n \t\t\t\t\thctl->freeList[freelist_idx].nentries++;\n \t\t\t\t\tSpinLockRelease(&hctl->freeList[freelist_idx].mutex);\n \n+\t\t\t\t\tif (hashp == SharedBufHash)\n+\t\t\t\t\t\telog(LOG, \"BORROWED: %d\", ++hctl->borrow);\n \t\t\t\t\treturn newElement;\n \t\t\t\t}\n \n@@ -1363,6 +1371,8 @@ get_hash_entry(HTAB *hashp, int freelist_idx)\n \t\t\t/* no elements available to borrow either, so out of memory */\n \t\t\treturn NULL;\n \t\t}\n+\t\telse if (hashp == SharedBufHash)\n+\t\t\telog(LOG, \"ALLOCED: %d\", ++hctl->alloc);\n \t}\n \n \t/* remove entry from freelist, bump nentries */\n\ndiff --git a/src/backend/storage/buffer/buf_table.c b/src/backend/storage/buffer/buf_table.c\nindex 55bb491ad0..029bb89f26 100644\n--- a/src/backend/storage/buffer/buf_table.c\n+++ b/src/backend/storage/buffer/buf_table.c\n@@ -31,7 +31,7 @@ typedef struct\n \tint\t\t\tid;\t\t\t\t/* Associated buffer ID */\n } BufferLookupEnt;\n \n-static HTAB *SharedBufHash;\n+HTAB *SharedBufHash;\n \n \n /*\ndiff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c\nindex 50c0e47643..00159714d1 100644\n--- a/src/backend/utils/hash/dynahash.c\n+++ b/src/backend/utils/hash/dynahash.c\n@@ -199,6 +199,11 @@ struct HASHHDR\n \tint\t\t\tnelem_alloc;\t/* number of entries to allocate at once */\n \tnalloced_t\tnalloced;\t\t/* number of entries allocated */\n \n+\tint alloc;\n+\tint reuse;\n+\tint borrow;\n+\tint assign;\n+\tint ret;\n #ifdef HASH_STATISTICS\n \n \t/*\n@@ -1006,6 +1011,7 @@ hash_search(HTAB *hashp,\n 
\t\t\t\t\t\t\t\t\t foundPtr);\n }\n \n+extern HTAB *SharedBufHash;\n void *\n hash_search_with_hash_value(HTAB *hashp,\n \t\t\t\t\t\t\tconst void *keyPtr,\n@@ -1143,6 +1149,8 @@ hash_search_with_hash_value(HTAB *hashp,\n \t\t\t\tDynaHashReuse.hashp = hashp;\n \t\t\t\tDynaHashReuse.freelist_idx = freelist_idx;\n \n+\t\t\t\tif (hashp == SharedBufHash)\n+\t\t\t\t\telog(LOG, \"REUSED: %d\", ++hctl->reuse);\n \t\t\t\t/* Caller should call HASH_ASSIGN as the very next step. */\n \t\t\t\treturn (void *) ELEMENTKEY(currBucket);\n \t\t\t}\n@@ -1160,6 +1168,9 @@ hash_search_with_hash_value(HTAB *hashp,\n \t\t\t\tif (likely(DynaHashReuse.element == NULL))\n \t\t\t\t\treturn (void *) ELEMENTKEY(currBucket);\n \n+\t\t\t\tif (hashp == SharedBufHash)\n+\t\t\t\t\telog(LOG, \"RETURNED: %d\", ++hctl->ret);\n+\n \t\t\t\tfreelist_idx = DynaHashReuse.freelist_idx;\n \t\t\t\t/* if partitioned, must lock to touch nfree and freeList */\n \t\t\t\tif (IS_PARTITIONED(hctl))\n@@ -1191,6 +1202,13 @@ hash_search_with_hash_value(HTAB *hashp,\n \t\t\t}\n \t\t\telse\n \t\t\t{\n+\t\t\t\tif (hashp == SharedBufHash)\n+\t\t\t\t{\n+\t\t\t\t\thctl->assign++;\n+\t\t\t\t\telog(LOG, \"ASSIGNED: %d (%d)\",\n+\t\t\t\t\t\t hctl->assign, hctl->reuse - hctl->assign);\n+\t\t\t\t}\n+\t\t\t\t\t\n \t\t\t\tcurrBucket = DynaHashReuse.element;\n \t\t\t\tDynaHashReuse.element = NULL;\n \t\t\t\tDynaHashReuse.hashp = NULL;\n@@ -1448,6 +1466,8 @@ get_hash_entry(HTAB *hashp, int freelist_idx)\n \t\t\t\t\thctl->freeList[borrow_from_idx].nfree--;\n \t\t\t\t\tSpinLockRelease(&(hctl->freeList[borrow_from_idx].mutex));\n \n+\t\t\t\t\tif (hashp == SharedBufHash)\n+\t\t\t\t\t\telog(LOG, \"BORROWED: %d\", ++hctl->borrow);\n \t\t\t\t\treturn newElement;\n \t\t\t\t}\n \n@@ -1457,6 +1477,10 @@ get_hash_entry(HTAB *hashp, int freelist_idx)\n \t\t\t/* no elements available to borrow either, so out of memory */\n \t\t\treturn NULL;\n \t\t}\n+\t\telse if (hashp == SharedBufHash)\n+\t\t\telog(LOG, \"ALLOCED: %d\", 
++hctl->alloc);\n+\n+\t\t\t\n \t}\n \n \t/* remove entry from freelist, decrease nfree */",
"msg_date": "Tue, 15 Mar 2022 16:25:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Вт, 15/03/2022 в 16:25 +0900, Kyotaro Horiguchi пишет:\n> Thanks for the new version.\n> \n> At Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Пн, 14/03/2022 в 14:57 +0300, Yura Sokolov пишет:\n> > > В Пн, 14/03/2022 в 17:12 +0900, Kyotaro Horiguchi пишет:\n> > > > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > > > В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > > > I tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n> > > > 128kB shared buffers and I saw that get_hash_entry never takes the\n> > > > !element_alloc() path and always allocate a fresh entry, then\n> > > > saturates at 30 new elements allocated at the medium of a 100 seconds\n> > > > run.\n> > > > \n> > > > Then, I tried the same with the patch, and I am surprized to see that\n> > > > the rise of the number of newly allocated elements didn't stop and\n> > > > went up to 511 elements after the 100 seconds run. So I found that my\n> > > > concern was valid. The change in dynahash actually\n> > > > continuously/repeatedly causes lack of free list entries. I'm not\n> > > > sure how much the impact given on performance if we change\n> > > > get_hash_entry to prefer other freelists, though.\n> > > \n> > > Well, it is quite strange SharedBufHash is not allocated as\n> > > HASH_FIXED_SIZE. Could you check what happens with this flag set?\n> > > I'll try as well.\n> > > \n> > > Other way to reduce observed case is to remember freelist_idx for\n> > > reused entry. I didn't believe it matters much since entries migrated\n> > > netherless, but probably due to some hot buffers there are tention to\n> > > crowd particular freelist.\n> > \n> > Well, I did both. Everything looks ok.\n> \n> Hmm. v8 returns stashed element with original patition index when the\n> element is *not* reused. But what I saw in the previous test runs is\n> the REUSE->ENTER(reuse)(->REMOVE) case. 
So the new version looks like\n> behaving the same way (or somehow even worse) with the previous\n> version.\n\nv8 doesn't differ in the REMOVE case from either master or the\nprevious version. It differs in the RETURNED case only.\nOr I didn't understand what you mean :(\n\n> get_hash_entry continuously suffer lack of freelist\n> entry. (FWIW, attached are the test-output diff for both master and\n> patched)\n> \n> master finally allocated 31 fresh elements for a 100s run.\n> \n> > ALLOCED: 31 ;; freshly allocated\n> \n> v8 finally borrowed 33620 times from another freelist and 0 freshly\n> allocated (ah, this version changes that..)\n> Finally v8 results in:\n> \n> > RETURNED: 50806 ;; returned stashed elements\n> > BORROWED: 33620 ;; borrowed from another freelist\n> > REUSED: 1812664 ;; stashed\n> > ASSIGNED: 1762377 ;; reused\n> >(ALLOCED: 0) ;; freshly allocated\n> \n> It contains a huge degradation by frequent elog's so they cannot be\n> naively relied on, but it should show what is happening sufficiently.\n\nIs there any measurable performance hit because of borrowing?\nLooks like \"borrowed\" happened 1.5% of the time. And it is on 128kB\nshared buffers, which is extremely small. (Or was it 128MB?)\n\nWell, I think some spare entries could reduce borrowing if there is\na need. I'll test on 128MB with spare entries. If there is a benefit,\nI'll return some, but will keep SharedBufHash fixed.\n\nThe master branch does fewer freelist manipulations since it tries to\ninsert first, and if there is a collision it doesn't delete the victim\nbuffer.\n\n> > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> ...\n> > Strange thing: both master and patched version has higher\n> > peak tps at X5676 at medium connections (17 or 27 clients)\n> > than in first october version [1]. 
But lower tps at higher\n> > connections number (>= 191 clients).\n> > I'll try to bisect on master this unfortunate change.\n> \n> The reversing of the preference order between freshly-allocation and\n> borrow-from-another-freelist might affect.\n\n`master` changed its behaviour as well.\nIt is not a problem of the patch at all.\n\n------\n\nregards\nYura.\n\n\n\n",
"msg_date": "Tue, 15 Mar 2022 13:47:17 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Вт, 15/03/2022 в 13:47 +0300, Yura Sokolov пишет:\n> В Вт, 15/03/2022 в 16:25 +0900, Kyotaro Horiguchi пишет:\n> > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > ...\n> > > Strange thing: both master and patched version has higher\n> > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > than in first october version [1]. But lower tps at higher\n> > > connections number (>= 191 clients).\n> > > I'll try to bisect on master this unfortunate change.\n> > \n> > The reversing of the preference order between freshly-allocation and\n> > borrow-from-another-freelist might affect.\n> \n> `master` changed its behaviour as well.\n> It is not problem of the patch at all.\n\nLooks like there is no issue: the old commit 2d44dee0281a1abf\nbehaves similarly to the new one at the moment.\n\nI think something changed in the environment.\nI remember there was maintenance downtime in the autumn.\nPerhaps the kernel was updated or some sysctl tuning changed.\n\n----\n\nregards\nYura.\n\n\n\n",
"msg_date": "Tue, 15 Mar 2022 18:10:08 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Вт, 15/03/2022 в 13:47 +0300, Yura Sokolov пишет:\n> В Вт, 15/03/2022 в 16:25 +0900, Kyotaro Horiguchi пишет:\n> > Thanks for the new version.\n> > \n> > At Tue, 15 Mar 2022 08:07:39 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > В Пн, 14/03/2022 в 14:57 +0300, Yura Sokolov пишет:\n> > > > В Пн, 14/03/2022 в 17:12 +0900, Kyotaro Horiguchi пишет:\n> > > > > At Mon, 14 Mar 2022 09:15:11 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > > > > В Пн, 14/03/2022 в 14:31 +0900, Kyotaro Horiguchi пишет:\n> > > > > I tried pgbench runs with scale 100 (with 10 threads, 10 clients) on\n> > > > > 128kB shared buffers and I saw that get_hash_entry never takes the\n> > > > > !element_alloc() path and always allocate a fresh entry, then\n> > > > > saturates at 30 new elements allocated at the medium of a 100 seconds\n> > > > > run.\n> > > > > \n> > > > > Then, I tried the same with the patch, and I am surprized to see that\n> > > > > the rise of the number of newly allocated elements didn't stop and\n> > > > > went up to 511 elements after the 100 seconds run. So I found that my\n> > > > > concern was valid. The change in dynahash actually\n> > > > > continuously/repeatedly causes lack of free list entries. I'm not\n> > > > > sure how much the impact given on performance if we change\n> > > > > get_hash_entry to prefer other freelists, though.\n> > > > \n> > > > Well, it is quite strange SharedBufHash is not allocated as\n> > > > HASH_FIXED_SIZE. Could you check what happens with this flag set?\n> > > > I'll try as well.\n> > > > \n> > > > Other way to reduce observed case is to remember freelist_idx for\n> > > > reused entry. I didn't believe it matters much since entries migrated\n> > > > netherless, but probably due to some hot buffers there are tention to\n> > > > crowd particular freelist.\n> > > \n> > > Well, I did both. Everything looks ok.\n> > \n> > Hmm. 
v8 returns stashed element with original patition index when the\n> > element is *not* reused. But what I saw in the previous test runs is\n> > the REUSE->ENTER(reuse)(->REMOVE) case. So the new version looks like\n> > behaving the same way (or somehow even worse) with the previous\n> > version.\n> \n> v8 doesn't differ in REMOVE case neither from master nor from\n> previous version. It differs in RETURNED case only.\n> Or I didn't understand what you mean :(\n> \n> > get_hash_entry continuously suffer lack of freelist\n> > entry. (FWIW, attached are the test-output diff for both master and\n> > patched)\n> > \n> > master finally allocated 31 fresh elements for a 100s run.\n> > \n> > > ALLOCED: 31 ;; freshly allocated\n> > \n> > v8 finally borrowed 33620 times from another freelist and 0 freshly\n> > allocated (ah, this version changes that..)\n> > Finally v8 results in:\n> > \n> > > RETURNED: 50806 ;; returned stashed elements\n> > > BORROWED: 33620 ;; borrowed from another freelist\n> > > REUSED: 1812664 ;; stashed\n> > > ASSIGNED: 1762377 ;; reused\n> > > (ALLOCED: 0) ;; freshly allocated\n> > \n> > It contains a huge degradation by frequent elog's so they cannot be\n> > naively relied on, but it should show what is happening sufficiently.\n> \n> Is there any measurable performance hit cause of borrowing?\n> Looks like \"borrowed\" happened in 1.5% of time. And it is on 128kb\n> shared buffers that is extremely small. (Or it was 128MB?)\n> \n> Well, I think some spare entries could reduce borrowing if there is\n> a need. I'll test on 128MB with spare entries. If there is profit,\n> I'll return some, but will keep SharedBufHash fixed.\n\nWell, I added GetMaxBackends spare items, but I don't see a clear\nprofit. It is probably a bit better at 128MB shared buffers and\nprobably a bit worse at 1GB shared buffers (select_only on scale 100).\n\nBut that is on an old Xeon X5675. Probably things will change on more\ncapable hardware. 
I just don't have access at the moment.\n\n> \n> Master branch does less freelist manipulations since it tries to\n> insert first and if there is collision it doesn't delete victim\n> buffer.\n> \n\n-----\n\nregards\nYura\n\n\n\n",
"msg_date": "Wed, 16 Mar 2022 01:13:09 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Вт, 15/03/2022 в 16:25 +0900, Kyotaro Horiguchi пишет:\n> > Hmm. v8 returns stashed element with original patition index when the\n> > element is *not* reused. But what I saw in the previous test runs is\n> > the REUSE->ENTER(reuse)(->REMOVE) case. So the new version looks like\n> > behaving the same way (or somehow even worse) with the previous\n> > version.\n> \n> v8 doesn't differ in REMOVE case neither from master nor from\n> previous version. It differs in RETURNED case only.\n> Or I didn't understand what you mean :(\n\nIn v7, HASH_ENTER returns the element stored in DynaHashReuse using\nthe freelist_idx of the new key. v8 uses that of the old key (at the\ntime of HASH_REUSE). So in the \"REUSE->ENTER(elem exists and\nreturns the stashed)\" case the stashed element is returned to its\noriginal partition. But that is not what I mentioned.\n\nOn the other hand, once the stashed element is reused by HASH_ENTER,\nit gives the same resulting state as the HASH_REMOVE->HASH_ENTER (borrow\nfrom old partition) case. I suspect that the frequent freelist\nstarvation comes from the latter case.\n\n> > get_hash_entry continuously suffer lack of freelist\n> > entry. 
(FWIW, attached are the test-output diff for both master and\n> > patched)\n> > \n> > master finally allocated 31 fresh elements for a 100s run.\n> > \n> > > ALLOCED: 31 ;; freshly allocated\n> > \n> > v8 finally borrowed 33620 times from another freelist and 0 freshly\n> > allocated (ah, this version changes that..)\n> > Finally v8 results in:\n> > \n> > > RETURNED: 50806 ;; returned stashed elements\n> > > BORROWED: 33620 ;; borrowed from another freelist\n> > > REUSED: 1812664 ;; stashed\n> > > ASSIGNED: 1762377 ;; reused\n> > >(ALLOCED: 0) ;; freshly allocated\n\n(I misunderstood that v8 modified get_hash_entry's preference between\nallocation and borrowing.)\n\nI re-ran the same check for v7 and it showed a different result.\n\nRETURNED: 1\nALLOCED: 15\nBORROWED: 0\nREUSED: 505435\nASSIGNED: 505462 (-27) ## the counters are not locked.\n\n> Is there any measurable performance hit cause of borrowing?\n> Looks like \"borrowed\" happened in 1.5% of time. And it is on 128kb\n> shared buffers that is extremely small. (Or it was 128MB?)\n\nIt is intentionally set small to get extremely frequent buffer\nreplacements. The point here was that the patch actually can induce\nfrequent freelist starvation. And like you, I also doubt the\nsignificance of the performance hit from that. I just was not sure.\n\nI re-ran the same for v8 and got a result largely different from the\nprevious trial on the same v8.\n\nRETURNED: 2\nALLOCED: 0\nBORROWED: 435\nREUSED: 495444\nASSIGNED: 495467 (-23)\n\nNow \"BORROWED\" happens 0.8% of REUSED.\n\n> Well, I think some spare entries could reduce borrowing if there is\n> a need. I'll test on 128MB with spare entries. If there is profit,\n> I'll return some, but will keep SharedBufHash fixed.\n\nI don't doubt the benefit of this patch. 
And now I have convinced myself\nthat the downside is negligible compared to the benefit.\n\n> Master branch does less freelist manipulations since it tries to\n> insert first and if there is collision it doesn't delete victim\n> buffer.\n> \n> > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > ...\n> > > Strange thing: both master and patched version has higher\n> > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > than in first october version [1]. But lower tps at higher\n> > > connections number (>= 191 clients).\n> > > I'll try to bisect on master this unfortunate change.\n> > \n> > The reversing of the preference order between freshly-allocation and\n> > borrow-from-another-freelist might affect.\n> \n> `master` changed its behaviour as well.\n> It is not problem of the patch at all.\n\nAgreed. So I think we should go in this direction.\n\nThere are some last comments on v8.\n\n+\t\t\t\t\t\t\t\t HASH_FIXED_SIZE);\n\nAh, now I understand that this prevented allocation of new elements.\nI think this is good to do for SharedBufHash.\n\n\n====\n+\tlong\t\tnfree;\t\t\t/* number of free entries in the list */\n \tHASHELEMENT *freeList;\t\t/* chain of free elements */\n } FreeListData;\n \n+#if SIZEOF_LONG == 4\n+typedef pg_atomic_uint32 nalloced_store_t;\n+typedef uint32 nalloced_value_t;\n+#define nalloced_read(a)\t(long)pg_atomic_read_u32(a)\n+#define nalloced_add(a, v)\tpg_atomic_fetch_add_u32((a), (uint32)(v))\n====\n\nI don't think nalloced needs to be the same width as long. 
So don't we always define the\natomic as 64bit and use the pg_atomic_* functions directly?\n\n\n+\t\tcase HASH_REUSE:\n+\t\t\tif (currBucket != NULL)\n\nDon't we need an assertion on (DynaHashReuse.element == NULL) here?\n\n\n-\tsize = add_size(size, BufTableShmemSize(NBuffers + NUM_BUFFER_PARTITIONS));\n+\t/* size of lookup hash table */\n+\tsize = add_size(size, BufTableShmemSize(NBuffers));\n\nI was not sure that this is safe, but actually I didn't get \"out of\nshared memory\". On second thought, I realized that when a dynahash\nentry is stashed, BufferAlloc is always holding a buffer block, too.\nSo now I'm sure that this is safe.\n\n\nThat's all.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 16 Mar 2022 12:07:47 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Ср, 16/03/2022 в 12:07 +0900, Kyotaro Horiguchi пишет:\n> At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Вт, 15/03/2022 в 16:25 +0900, Kyotaro Horiguchi пишет:\n> > > Hmm. v8 returns stashed element with original patition index when the\n> > > element is *not* reused. But what I saw in the previous test runs is\n> > > the REUSE->ENTER(reuse)(->REMOVE) case. So the new version looks like\n> > > behaving the same way (or somehow even worse) with the previous\n> > > version.\n> > \n> > v8 doesn't differ in REMOVE case neither from master nor from\n> > previous version. It differs in RETURNED case only.\n> > Or I didn't understand what you mean :(\n> \n> In v7, HASH_ENTER returns the element stored in DynaHashReuse using\n> the freelist_idx of the new key. v8 uses that of the old key (at the\n> time of HASH_REUSE). So in the case \"REUSE->ENTER(elem exists and\n> returns the stashed)\" case the stashed element is returned to its\n> original partition. But it is not what I mentioned.\n> \n> On the other hand, once the stahsed element is reused by HASH_ENTER,\n> it gives the same resulting state with HASH_REMOVE->HASH_ENTER(borrow\n> from old partition) case. I suspect that ththat the frequent freelist\n> starvation comes from the latter case.\n\nDoubtful. By probability theory, a single partition is unlikely\nto be too overflowed. Hence the freelists.\n\nBut! With 128kB shared buffers there are just 32 buffers. With 32 entries\nfor 32 freelist partitions, certainly some freelist partition will\nhave 0 entries even if all entries are in freelists. \n\n> > > get_hash_entry continuously suffer lack of freelist\n> > > entry. 
(FWIW, attached are the test-output diff for both master and\n> > > patched)\n> > > \n> > > master finally allocated 31 fresh elements for a 100s run.\n> > > \n> > > > ALLOCED: 31 ;; freshly allocated\n> > > \n> > > v8 finally borrowed 33620 times from another freelist and 0 freshly\n> > > allocated (ah, this version changes that..)\n> > > Finally v8 results in:\n> > > \n> > > > RETURNED: 50806 ;; returned stashed elements\n> > > > BORROWED: 33620 ;; borrowed from another freelist\n> > > > REUSED: 1812664 ;; stashed\n> > > > ASSIGNED: 1762377 ;; reused\n> > > >(ALLOCED: 0) ;; freshly allocated\n> \n> (I misunderstand that v8 modified get_hash_entry's preference between\n> allocation and borrowing.)\n> \n> I re-ran the same check for v7 and it showed different result.\n> \n> RETURNED: 1\n> ALLOCED: 15\n> BORROWED: 0\n> REUSED: 505435\n> ASSIGNED: 505462 (-27) ## the counters are not locked.\n> \n> > Is there any measurable performance hit cause of borrowing?\n> > Looks like \"borrowed\" happened in 1.5% of time. And it is on 128kb\n> > shared buffers that is extremely small. (Or it was 128MB?)\n> \n> It is intentional set small to get extremely frequent buffer\n> replacements. The point here was the patch actually can induce\n> frequent freelist starvation. And as you do, I also doubt the\n> significance of the performance hit by that. Just I was not usre.\n>\n> I re-ran the same for v8 and got a result largely different from the\n> previous trial on the same v8.\n> \n> RETURNED: 2\n> ALLOCED: 0\n> BORROWED: 435\n> REUSED: 495444\n> ASSIGNED: 495467 (-23)\n> \n> Now \"BORROWED\" happens 0.8% of REUSED\n\n0.08% actually :)\n\n> \n> > Well, I think some spare entries could reduce borrowing if there is\n> > a need. I'll test on 128MB with spare entries. If there is profit,\n> > I'll return some, but will keep SharedBufHash fixed.\n> \n> I don't doubt the benefit of this patch. 
And now convinced by myself\n> that the downside is negligible than the benefit.\n> \n> > Master branch does less freelist manipulations since it tries to\n> > insert first and if there is collision it doesn't delete victim\n> > buffer.\n> > \n> > > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > > ...\n> > > > Strange thing: both master and patched version has higher\n> > > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > > than in first october version [1]. But lower tps at higher\n> > > > connections number (>= 191 clients).\n> > > > I'll try to bisect on master this unfortunate change.\n> > > \n> > > The reversing of the preference order between freshly-allocation and\n> > > borrow-from-another-freelist might affect.\n> > \n> > `master` changed its behaviour as well.\n> > It is not problem of the patch at all.\n> \n> Agreed. So I think we should go on this direction.\n\nI've checked. Looks like something had changed on the server, since\nthe old master commit now behaves the same as the new one (and differently\nfrom how it behaved in October).\nI remember maintenance downtime of the server in November/December.\nProbably the kernel was upgraded or some system settings were changed.\n\n> There are some last comments on v8.\n> \n> + HASH_FIXED_SIZE);\n> \n> Ah, now I understand that this prevented allocation of new elements.\n> I think this good to do for SharedBufHash.\n> \n> \n> ====\n> + long nfree; /* number of free entries in the list */\n> HASHELEMENT *freeList; /* chain of free elements */\n> } FreeListData;\n> \n> +#if SIZEOF_LONG == 4\n> +typedef pg_atomic_uint32 nalloced_store_t;\n> +typedef uint32 nalloced_value_t;\n> +#define nalloced_read(a) (long)pg_atomic_read_u32(a)\n> +#define nalloced_add(a, v) pg_atomic_fetch_add_u32((a), (uint32)(v))\n> ====\n> \n> I don't think nalloced needs to be the same width to long. 
For the\n> platforms with 32-bit long, anyway the possible degradation if any by\n> 64-bit atomic there doesn't matter. So don't we always define the\n> atomic as 64bit and use the pg_atomic_* functions directly?\n\nSome 32-bit platforms have no native 64-bit atomics. Then they are\nemulated with locks.\n\nNative atomic read/write is quite cheap. So I don't bother with\nunlocked read/write for the non-partitioned table. (And I don't know of\na platform that has sizeof(long)>4 without also having native 64-bit\natomics.)\n\n(Maybe I'm wrong a bit? element_alloc invokes nalloced_add, which\nis an atomic increment. Could it be expensive enough to be a problem\nin non-shared dynahash instances?)\n\nIf the patch sticks with pg_atomic_uint64 for nalloced, then it has\nto separate read+write for the partitioned (actually shared) and\nnon-partitioned cases.\n\nWell, and for a 32-bit platform long is just enough. Why spend another\n4 bytes per dynahash?\n\nBy the way, there is an unfortunate miss of PG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY\nin port/atomics/arch-arm.h for aarch64. I'll send a patch for it\nin a new thread.\n\n> + case HASH_REUSE:\n> + if (currBucket != NULL)\n> \n> Don't we need an assertion on (DunaHashReuse.element == NULL) here?\n\nThe common assert is higher, on line 1094:\n\n\tAssert(action == HASH_ENTER || DynaHashReuse.element == NULL);\n\nI thought it is more accurate than duplicating it in each switch case.\n\n> - size = add_size(size, BufTableShmemSize(NBuffers + NUM_BUFFER_PARTITIONS));\n> + /* size of lookup hash table */\n> + size = add_size(size, BufTableShmemSize(NBuffers));\n> \n> I was not sure that this is safe, but actually I didn't get \"out of\n> shared memory\". 
On second thought, I realized that when a dynahash\n> entry is stashed, BufferAlloc always holding a buffer block, too.\n> So now I'm sure that this is safe.\n> \n> \n> That's all.\n\nThank you very much for the productive review and discussion.\n\n\n\nregards,\n\nYura Sokolov\nPostgres Professional\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com\n\n\n\n",
"msg_date": "Wed, 16 Mar 2022 14:11:58 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Ср, 16/03/2022 в 12:07 +0900, Kyotaro Horiguchi пишет:\n> > At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > In v7, HASH_ENTER returns the element stored in DynaHashReuse using\n> > the freelist_idx of the new key. v8 uses that of the old key (at the\n> > time of HASH_REUSE). So in the case \"REUSE->ENTER(elem exists and\n> > returns the stashed)\" case the stashed element is returned to its\n> > original partition. But it is not what I mentioned.\n> > \n> > On the other hand, once the stahsed element is reused by HASH_ENTER,\n> > it gives the same resulting state with HASH_REMOVE->HASH_ENTER(borrow\n> > from old partition) case. I suspect that ththat the frequent freelist\n> > starvation comes from the latter case.\n> \n> Doubtfully. Due to probabilty theory, single partition doubdfully\n> will be too overflowed. Therefore, freelist.\n\nYeah. I think so generally.\n\n> But! With 128kb shared buffers there is just 32 buffers. 32 entry for\n> 32 freelist partition - certainly some freelist partition will certainly\n> have 0 entry even if all entries are in freelists. \n\nAnyway, it's an extreme condition and the starvation happens only at a\nnegligible ratio.\n\n> > RETURNED: 2\n> > ALLOCED: 0\n> > BORROWED: 435\n> > REUSED: 495444\n> > ASSIGNED: 495467 (-23)\n> > \n> > Now \"BORROWED\" happens 0.8% of REUSED\n> \n> 0.08% actually :)\n\nMmm. Doesn't matter:p\n\n> > > > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > > > ...\n> > > > > Strange thing: both master and patched version has higher\n> > > > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > > > than in first october version [1]. But lower tps at higher\n> > > > > connections number (>= 191 clients).\n> > > > > I'll try to bisect on master this unfortunate change.\n...\n> I've checked. 
Looks like something had changed on the server, since\n> old master commit behaves now same to new one (and differently to\n> how it behaved in October).\n> I remember maintainance downtime of the server in november/december.\n> Probably, kernel were upgraded or some system settings were changed.\n\nOne thing I have a little concern about is that the numbers show a steady\n1-2% degradation for connection numbers < 17.\n\nI think there are two possible causes of the degradation.\n\n1. The additional branch from consolidating HASH_ASSIGN into HASH_ENTER.\n This might cause degradation for memory-contended use.\n\n2. The nallocs operation might cause degradation on non-shared dynahashes?\n I believe it doesn't, but I'm not sure.\n\n In a simple benchmark with pgbench on a laptop, dynahash\n allocation (including shared and non-shared) happened about 50\n times per second with 10 processes and 200 with 100 processes.\n\n> > I don't think nalloced needs to be the same width to long. For the\n> > platforms with 32-bit long, anyway the possible degradation if any by\n> > 64-bit atomic there doesn't matter. So don't we always define the\n> > atomic as 64bit and use the pg_atomic_* functions directly?\n> \n> Some 32bit platforms has no native 64bit atomics. Then they are\n> emulated with locks.\n> \n> Well, and for 32bit platform long is just enough. Why spend other\n> 4 bytes per each dynahash?\n\nI don't think the additional bytes matter, but emulated atomic\noperations can matter. However I'm not sure which platforms use those\nfallback implementations. 
(x86 seems to have __sync_fetch_and_add()\nsince P4).\n\nMy opinion in the previous mail was that if that level of degradation\ncaused by emulated atomic operations matters, we shouldn't use atomics\nthere at all, since atomic operations are not free on modern platforms\neither.\n\nIn relation to 2 above, if we observe that the degradation disappears\nwhen we (tentatively) use non-atomic operations for nalloced, we should go\nback to the previous per-freelist nalloced.\n\nI don't have access to such muscular machines, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 17 Mar 2022 12:02:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Чт, 17/03/2022 в 12:02 +0900, Kyotaro Horiguchi пишет:\n> At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Ср, 16/03/2022 в 12:07 +0900, Kyotaro Horiguchi пишет:\n> > > At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > In v7, HASH_ENTER returns the element stored in DynaHashReuse using\n> > > the freelist_idx of the new key. v8 uses that of the old key (at the\n> > > time of HASH_REUSE). So in the case \"REUSE->ENTER(elem exists and\n> > > returns the stashed)\" case the stashed element is returned to its\n> > > original partition. But it is not what I mentioned.\n> > > \n> > > On the other hand, once the stahsed element is reused by HASH_ENTER,\n> > > it gives the same resulting state with HASH_REMOVE->HASH_ENTER(borrow\n> > > from old partition) case. I suspect that ththat the frequent freelist\n> > > starvation comes from the latter case.\n> > \n> > Doubtfully. Due to probabilty theory, single partition doubdfully\n> > will be too overflowed. Therefore, freelist.\n> \n> Yeah. I think so generally.\n> \n> > But! With 128kb shared buffers there is just 32 buffers. 32 entry for\n> > 32 freelist partition - certainly some freelist partition will certainly\n> > have 0 entry even if all entries are in freelists. \n> \n> Anyway, it's an extreme condition and the starvation happens only at a\n> neglegible ratio.\n> \n> > > RETURNED: 2\n> > > ALLOCED: 0\n> > > BORROWED: 435\n> > > REUSED: 495444\n> > > ASSIGNED: 495467 (-23)\n> > > \n> > > Now \"BORROWED\" happens 0.8% of REUSED\n> > \n> > 0.08% actually :)\n> \n> Mmm. Doesn't matter:p\n> \n> > > > > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > > > > ...\n> > > > > > Strange thing: both master and patched version has higher\n> > > > > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > > > > than in first october version [1]. 
But lower tps at higher\n> > > > > > connections number (>= 191 clients).\n> > > > > > I'll try to bisect on master this unfortunate change.\n> ...\n> > I've checked. Looks like something had changed on the server, since\n> > old master commit behaves now same to new one (and differently to\n> > how it behaved in October).\n> > I remember maintainance downtime of the server in november/december.\n> > Probably, kernel were upgraded or some system settings were changed.\n> \n> One thing I have a little concern is that numbers shows 1-2% of\n> degradation steadily for connection numbers < 17.\n> \n> I think there are two possible cause of the degradation.\n> \n> 1. Additional branch by consolidating HASH_ASSIGN into HASH_ENTER.\n> This might cause degradation for memory-contended use.\n> \n> 2. nallocs operation might cause degradation on non-shared dynahasyes?\n> I believe doesn't but I'm not sure.\n> \n> On a simple benchmarking with pgbench on a laptop, dynahash\n> allocation (including shared and non-shared) happend about at 50\n> times per second with 10 processes and 200 with 100 processes.\n> \n> > > I don't think nalloced needs to be the same width to long. For the\n> > > platforms with 32-bit long, anyway the possible degradation if any by\n> > > 64-bit atomic there doesn't matter. So don't we always define the\n> > > atomic as 64bit and use the pg_atomic_* functions directly?\n> > \n> > Some 32bit platforms has no native 64bit atomics. Then they are\n> > emulated with locks.\n> > \n> > Well, and for 32bit platform long is just enough. Why spend other\n> > 4 bytes per each dynahash?\n> \n> I don't think additional bytes doesn't matter, but emulated atomic\n> operations can matter. However I'm not sure which platform uses that\n> fallback implementations. 
(x86 seems to have __sync_fetch_and_add()\n> since P4).\n> \n> My opinion in the previous mail is that if that level of degradation\n> caued by emulated atomic operations matters, we shouldn't use atomic\n> there at all since atomic operations on the modern platforms are not\n> also free.\n> \n> In relation to 2 above, if we observe that the degradation disappears\n> by (tentatively) use non-atomic operations for nalloced, we should go\n> back to the previous per-freelist nalloced.\n\nHere is version with nalloced being union of appropriate atomic and\nlong.\n\n------\n\nregards\nYura Sokolov",
"msg_date": "Sun, 20 Mar 2022 12:38:06 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Good day, Kyotaoro-san.\nGood day, hackers.\n\nВ Вс, 20/03/2022 в 12:38 +0300, Yura Sokolov пишет:\n> В Чт, 17/03/2022 в 12:02 +0900, Kyotaro Horiguchi пишет:\n> > At Wed, 16 Mar 2022 14:11:58 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > В Ср, 16/03/2022 в 12:07 +0900, Kyotaro Horiguchi пишет:\n> > > > At Tue, 15 Mar 2022 13:47:17 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > > > In v7, HASH_ENTER returns the element stored in DynaHashReuse using\n> > > > the freelist_idx of the new key. v8 uses that of the old key (at the\n> > > > time of HASH_REUSE). So in the case \"REUSE->ENTER(elem exists and\n> > > > returns the stashed)\" case the stashed element is returned to its\n> > > > original partition. But it is not what I mentioned.\n> > > > \n> > > > On the other hand, once the stahsed element is reused by HASH_ENTER,\n> > > > it gives the same resulting state with HASH_REMOVE->HASH_ENTER(borrow\n> > > > from old partition) case. I suspect that ththat the frequent freelist\n> > > > starvation comes from the latter case.\n> > > \n> > > Doubtfully. Due to probabilty theory, single partition doubdfully\n> > > will be too overflowed. Therefore, freelist.\n> > \n> > Yeah. I think so generally.\n> > \n> > > But! With 128kb shared buffers there is just 32 buffers. 32 entry for\n> > > 32 freelist partition - certainly some freelist partition will certainly\n> > > have 0 entry even if all entries are in freelists. \n> > \n> > Anyway, it's an extreme condition and the starvation happens only at a\n> > neglegible ratio.\n> > \n> > > > RETURNED: 2\n> > > > ALLOCED: 0\n> > > > BORROWED: 435\n> > > > REUSED: 495444\n> > > > ASSIGNED: 495467 (-23)\n> > > > \n> > > > Now \"BORROWED\" happens 0.8% of REUSED\n> > > \n> > > 0.08% actually :)\n> > \n> > Mmm. 
Doesn't matter:p\n> > \n> > > > > > > I lost access to Xeon 8354H, so returned to old Xeon X5675.\n> > > > > > ...\n> > > > > > > Strange thing: both master and patched version has higher\n> > > > > > > peak tps at X5676 at medium connections (17 or 27 clients)\n> > > > > > > than in first october version [1]. But lower tps at higher\n> > > > > > > connections number (>= 191 clients).\n> > > > > > > I'll try to bisect on master this unfortunate change.\n> > ...\n> > > I've checked. Looks like something had changed on the server, since\n> > > old master commit behaves now same to new one (and differently to\n> > > how it behaved in October).\n> > > I remember maintainance downtime of the server in november/december.\n> > > Probably, kernel were upgraded or some system settings were changed.\n> > \n> > One thing I have a little concern is that numbers shows 1-2% of\n> > degradation steadily for connection numbers < 17.\n> > \n> > I think there are two possible cause of the degradation.\n> > \n> > 1. Additional branch by consolidating HASH_ASSIGN into HASH_ENTER.\n> > This might cause degradation for memory-contended use.\n> > \n> > 2. nallocs operation might cause degradation on non-shared dynahasyes?\n> > I believe doesn't but I'm not sure.\n> > \n> > On a simple benchmarking with pgbench on a laptop, dynahash\n> > allocation (including shared and non-shared) happend about at 50\n> > times per second with 10 processes and 200 with 100 processes.\n> > \n> > > > I don't think nalloced needs to be the same width to long. For the\n> > > > platforms with 32-bit long, anyway the possible degradation if any by\n> > > > 64-bit atomic there doesn't matter. So don't we always define the\n> > > > atomic as 64bit and use the pg_atomic_* functions directly?\n> > > \n> > > Some 32bit platforms has no native 64bit atomics. Then they are\n> > > emulated with locks.\n> > > \n> > > Well, and for 32bit platform long is just enough. 
Why spend another\n> > > 4 bytes per each dynahash?\n> > \n> > I don't think the additional bytes matter, but emulated atomic\n> > operations can matter. However I'm not sure which platforms use those\n> > fallback implementations. (x86 seems to have __sync_fetch_and_add()\n> > since P4).\n> > \n> > My opinion in the previous mail is that if that level of degradation\n> > caused by emulated atomic operations matters, we shouldn't use atomics\n> > there at all since atomic operations on modern platforms are not\n> > free either.\n> > \n> > In relation to 2 above, if we observe that the degradation disappears\n> > by (tentatively) using non-atomic operations for nalloced, we should go\n> > back to the previous per-freelist nalloced.\n> \n> Here is a version with nalloced being a union of the appropriate atomic and\n> long.\n> \n\nOk, I got access to a stronger server, did the benchmark, found weird\nthings, and so here is a new version :-)\n\nFirst I found that if the table size is strictly limited to NBuffers and FIXED,\nthen under high concurrency get_hash_entry may not find a free entry\neven though one must be there. It seems that while a process scans free lists, other\nconcurrent processes \"move the entry around\", i.e. one concurrent process\nfetched it from one free list, another process put a new entry in another\nfreelist, and the unfortunate process missed it since it tests the freelists\nonly once.\n\nSecond, I confirm there is a problem with freelist spreading.\nIf I keep the entry's freelist_idx, then one freelist is crowded.\nIf I use the new entry's freelist_idx, then one freelist is emptied\nconstantly.\n\nThird, I found that increased concurrency could harm. When a popular block is\nevicted for some reason, a thundering herd effect occurs: many\nbackends want to read the same block, they evict many other buffers, but\nonly one is inserted. The others go to the freelist. The evicted buffers by themselves\nreduce the cache hit ratio and provoke more work. 
Old version resists\nthis effect by not removing the old buffer before the new entry is successfully\ninserted.\n\nTo fix these issues I made the following changes:\n\n# Concurrency\n\nFirst, I limit concurrency by introducing another lwlock tranche -\nBufferEvict. It is 8 times larger than the BufferMapping tranche (1024 vs\n128).\nIf a backend doesn't find a buffer in the buffer table and wants to introduce\nit, it first calls\n LWLockAcquireOrWait(newEvictPartitionLock, LW_EXCLUSIVE)\nIf the lock was acquired, it proceeds to the eviction and replacement process.\nOtherwise, it waits for the lock to be released and repeats the search.\n\nThis greatly improves performance for > 400 clients in pgbench.\n\nI tried another variant as well:\n- first insert an entry with a dummy buffer index into the buffer table.\n- if such an entry was already there, wait for it to be filled.\n- otherwise find a victim buffer and replace the dummy index with the new one.\nWaiting was done with a shared lock on EvictPartitionLock as well.\nThis variant performed about the same.\n\nLogically I like that variant more, but there is one gotcha: \nFlushBuffer could fail with elog(ERROR). Therefore there is\na need to reliably remove the entry with the dummy index.\nAnd after all, I still need to hold EvictPartitionLock to notify\nwaiters.\n\nI've tried to use ConditionVariable, but its performance was much\nworse.\n\n# Dynahash capacity and freelists.\n\nI returned to the previous buffer table initialization:\n- removed the FIXES_SIZE restriction introduced in the previous version\n- returned `NBuffers + NUM_BUFFER_PARTITIONS`.\nI really think there should be more spare items, since almost always\nentry_alloc is called at least once (on 128MB shared_buffers). But\nlet's keep it as is for now.\n\n`get_hash_entry` was changed to probe NUM_FREELISTS/4 (==8) freelists\nbefore falling back to `entry_alloc`, and probing was changed from\nlinear to quadratic. This greatly reduces the number of calls to\n`entry_alloc`, so more shared memory is left intact. And I didn't notice\na large performance hit from it. 
Probably there is some, but I think it is\nadequate trade-off.\n\n`free_reused_entry` now returns entry to random position. It flattens\nfree entry's spread. Although it is not enough without other changes\n(thundering herd mitigation and probing more lists in get_hash_entry).\n\n# Benchmarks\n\nBenchmarked on two socket Xeon(R) Gold 5220 CPU @2.20GHz\n18 cores per socket + hyper-threading - upto 72 virtual core total.\nturbo-boost disabled\nLinux 5.10.103-1 Debian.\n\npgbench scale 100 simple_select + simple select with 3 keys (sql file\nattached).\n\nshared buffers 128MB & 1GB\nhuge_pages=on\n\n1 socket\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 27882 | 27738 | 32735 | 32439 \n 2 | 54082 | 54336 | 64387 | 63846 \n 3 | 80724 | 81079 | 96387 | 94439 \n 5 | 134404 | 133429 | 160085 | 157399 \n 7 | 185977 | 184502 | 219916 | 217142 \n 17 | 335345 | 338214 | 393112 | 388796 \n 27 | 393686 | 394948 | 447945 | 444915 \n 53 | 572234 | 577092 | 678884 | 676493 \n 83 | 558875 | 561689 | 669212 | 655697 \n 107 | 553054 | 551896 | 654550 | 646010 \n 139 | 541263 | 538354 | 641937 | 633840 \n 163 | 532932 | 531829 | 635127 | 627600 \n 191 | 524647 | 524442 | 626228 | 617347 \n 211 | 521624 | 522197 | 629740 | 613143 \n 239 | 509448 | 554894 | 652353 | 652972 \n 271 | 468190 | 557467 | 647403 | 661348 \n 307 | 454139 | 558694 | 642229 | 657649 \n 353 | 446853 | 554301 | 635991 | 654571 \n 397 | 441909 | 549822 | 625194 | 647973 \n\n1 socket 3 keys\n\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 16677 | 16477 | 22219 | 22030 \n 2 | 32056 | 31874 | 43298 | 43153 \n 3 | 48091 | 47766 | 64877 | 64600 \n 5 | 78999 | 78609 | 105433 | 106101 \n 7 | 108122 | 107529 | 148713 | 145343 \n 17 | 205656 | 209010 | 272676 | 271449 \n 27 | 252015 | 254000 | 323983 | 323499 \n 53 | 317928 | 334493 | 446740 | 449641 \n 83 | 299234 | 
327738 | 437035 | 443113 \n 107 | 290089 | 322025 | 430535 | 431530 \n 139 | 277294 | 314384 | 422076 | 423606 \n 163 | 269029 | 310114 | 416229 | 417412 \n 191 | 257315 | 306530 | 408487 | 416170 \n 211 | 249743 | 304278 | 404766 | 416393 \n 239 | 243333 | 310974 | 397139 | 428167 \n 271 | 236356 | 309215 | 389972 | 427498 \n 307 | 229094 | 307519 | 382444 | 425891 \n 353 | 224385 | 305366 | 375020 | 423284 \n 397 | 218549 | 302577 | 364373 | 420846 \n\n2 sockets\n\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 27287 | 27631 | 32943 | 32493 \n 2 | 52397 | 54011 | 64572 | 63596 \n 3 | 76157 | 80473 | 93363 | 93528 \n 5 | 127075 | 134310 | 153176 | 149984 \n 7 | 177100 | 176939 | 216356 | 211599 \n 17 | 379047 | 383179 | 464249 | 470351 \n 27 | 545219 | 546706 | 664779 | 662488 \n 53 | 728142 | 728123 | 857454 | 869407 \n 83 | 918276 | 957722 | 1215252 | 1203443 \n 107 | 884112 | 971797 | 1206930 | 1234606 \n 139 | 822564 | 970920 | 1167518 | 1233230 \n 163 | 788287 | 968248 | 1130021 | 1229250 \n 191 | 772406 | 959344 | 1097842 | 1218541 \n 211 | 756085 | 955563 | 1077747 | 1209489 \n 239 | 732926 | 948855 | 1050096 | 1200878 \n 271 | 692999 | 941722 | 1017489 | 1194012 \n 307 | 668241 | 920478 | 994420 | 1179507 \n 353 | 642478 | 908645 | 968648 | 1174265 \n 397 | 617673 | 893568 | 950736 | 1173411 \n\n2 sockets 3 keys\n\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 16722 | 16393 | 20340 | 21813 \n 2 | 32057 | 32009 | 39993 | 42959 \n 3 | 46202 | 47678 | 59216 | 64374 \n 5 | 78882 | 72002 | 98054 | 103731 \n 7 | 103398 | 99538 | 135098 | 135828 \n 17 | 205863 | 217781 | 293958 | 299690 \n 27 | 283526 | 290539 | 414968 | 411219 \n 53 | 336717 | 356130 | 460596 | 474563 \n 83 | 307310 | 342125 | 419941 | 469989 \n 107 | 294059 | 333494 | 405706 | 469593 \n 139 | 278453 | 328031 | 390984 | 470553 \n 163 | 270833 
| 326457 | 384747 | 470977 \n 191 | 259591 | 322590 | 376582 | 470335 \n 211 | 263584 | 321263 | 375969 | 469443 \n 239 | 257135 | 316959 | 370108 | 470904 \n 271 | 251107 | 315393 | 365794 | 469517 \n 307 | 246605 | 311585 | 360742 | 467566 \n 353 | 236899 | 308581 | 353464 | 466936 \n 397 | 249036 | 305042 | 344673 | 466842 \n\nI skipped v10 since I used it internally for variant\n\"insert entry with dummy index then search victim\".\n\n\n------\n\nregards\n\nYura Sokolov\ny.sokolov@postgrespro.ru\nfunny.falcon@gmail.com",
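The core idea in the subject line, never holding the old and new buffer-mapping partition locks at the same time, can be contrasted with the old behaviour in a toy model (all names are assumptions for illustration, not actual bufmgr code; it only counts how many mapping locks are held at once):

```c
/* Toy model (assumed names, not PostgreSQL code): contrast the old scheme,
 * which holds the old and new mapping partition locks simultaneously,
 * with the patched scheme, which releases one before taking the other. */
static int held;        /* mapping partition locks currently held */
static int max_held;    /* high-water mark of simultaneously held locks */

static void part_lock(void)   { if (++held > max_held) max_held = held; }
static void part_unlock(void) { --held; }

/* master: lock old and new partitions together, move the mapping, unlock */
static int
old_scheme(void)
{
    held = max_held = 0;
    part_lock();        /* old partition */
    part_lock();        /* new partition: two locks held at once */
    part_unlock();
    part_unlock();
    return max_held;
}

/* patch: delete under the old partition lock, release it, then insert
 * under the new partition lock; at most one mapping lock at a time */
static int
new_scheme(void)
{
    held = max_held = 0;
    part_lock();        /* old partition: delete the old mapping */
    part_unlock();
    part_lock();        /* new partition: insert the new mapping */
    part_unlock();
    return max_held;
}
```

In the old scheme the high-water mark of two simultaneously held partition locks is what allows chains of processes waiting on each other to form; the patched scheme never exceeds one.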
"msg_date": "Wed, 06 Apr 2022 16:17:28 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Hi, Yura.\n\nAt Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> Ok, I got access to a stronger server, did the benchmark, found weird\n> things, and so here is a new version :-)\n\nThanks for the new version and benchmarking.\n\n> First I found that if the table size is strictly limited to NBuffers and FIXED,\n> then under high concurrency get_hash_entry may not find a free entry\n> even though one must be there. It seems that while a process scans free lists, other\n> concurrent processes \"move the entry around\", i.e. one concurrent process\n> fetched it from one free list, another process put a new entry in another\n> freelist, and the unfortunate process missed it since it tests the freelists\n> only once.\n\nStrategyGetBuffer believes that entries don't move across freelists,\nand that was true before this patch.\n\n> Second, I confirm there is a problem with freelist spreading.\n> If I keep the entry's freelist_idx, then one freelist is crowded.\n> If I use the new entry's freelist_idx, then one freelist is emptied\n> constantly.\n\nPerhaps it is what I saw before. I'm not sure about the details of\nhow that happens, though.\n\n> Third, I found that increased concurrency could harm. When a popular block\n> is evicted for some reason, a thundering herd effect occurs:\n> many backends want to read the same block, they evict many other\n> buffers, but only one is inserted. The others go to the freelist. The evicted\n> buffers by themselves reduce the cache hit ratio and provoke more\n> work. Old version resists this effect by not removing the old buffer\n> before the new entry is successfully inserted.\n\nNice finding.\n\n> To fix these issues I made the following changes:\n> \n> # Concurrency\n> \n> First, I limit concurrency by introducing another lwlock tranche -\n> BufferEvict. 
It is 8 times larger than the BufferMapping tranche (1024 vs\n> 128).\n> If a backend doesn't find a buffer in the buffer table and wants to introduce\n> it, it first calls\n> LWLockAcquireOrWait(newEvictPartitionLock, LW_EXCLUSIVE)\n> If the lock was acquired, it proceeds to the eviction and replacement process.\n> Otherwise, it waits for the lock to be released and repeats the search.\n>\n> This greatly improves performance for > 400 clients in pgbench.\n\nSo the performance difference between the existing code and v11 is that the\nlatter has a collision cross section eight times smaller than the\nformer?\n\n+\t * Prevent \"thundering herd\" problem and limit concurrency.\n\nThis is something like pressing the accelerator and brake pedals at the\nsame time. If it improves performance, wouldn't just increasing the number of\nbuffer partitions work?\n\nIt's also not great that follower backends run a busy loop on the\nlock until the top-runner backend inserts the new buffer into the\nbuftable and then releases the newPartitionLock.\n\n> I tried another variant as well:\n> - first insert an entry with a dummy buffer index into the buffer table.\n> - if such an entry was already there, wait for it to be filled.\n> - otherwise find a victim buffer and replace the dummy index with the new one.\n> Waiting was done with a shared lock on EvictPartitionLock as well.\n> This variant performed about the same.\n\nThis one looks better to me. Since a partition can be shared by two or\nmore new buffers, a condition variable seems to work better here...\n\n> Logically I like that variant more, but there is one gotcha: \n> FlushBuffer could fail with elog(ERROR). 
Therefore there is\n> a need to reliably remove the entry with the dummy index.\n\nPerhaps UnlockBuffers can do that.\n\n> And after all, I still need to hold EvictPartitionLock to notify\n> waiters.\n> I've tried to use ConditionVariable, but its performance was much\n> worse.\n\nHow many CVs did you use?\n\n> # Dynahash capacity and freelists.\n> \n> I returned to the previous buffer table initialization:\n> - removed the FIXES_SIZE restriction introduced in the previous version\n\nMmm. I don't see v10 in this list and v9 doesn't contain FIXES_SIZE..\n\n> - returned `NBuffers + NUM_BUFFER_PARTITIONS`.\n> I really think there should be more spare items, since almost always\n> entry_alloc is called at least once (on 128MB shared_buffers). But\n> let's keep it as is for now.\n\nMaybe s/entry_alloc/element_alloc/ ? :p\n\nI see it with shared_buffers=128kB (not MB) and pgbench -i on master.\n\nThe required number of elements is already allocated to freelists at\nhash creation. So the reason for the call is imbalanced use among\nfreelists. Even in that case other freelists hold elements. So we\ndon't need to expand the number of elements.\n\n> `get_hash_entry` was changed to probe NUM_FREELISTS/4 (==8) freelists\n> before falling back to `entry_alloc`, and probing was changed from\n> linear to quadratic. This greatly reduces the number of calls to\n> `entry_alloc`, so more shared memory is left intact. And I didn't notice\n> a large performance hit from it. Probably there is some, but I think it is\n> an adequate trade-off.\n\nI don't think that causes a significant performance hit, but I don't\nunderstand how it improves the freelist hit ratio other than by accident.\nDo you have some reasoning for it?\n\nBy the way, the change in get_hash_entry looks somewhat wrong.\n\nIf I understand it correctly, it visits num_freelists/4 freelists at\nonce, then tries element_alloc. 
If element_alloc() fails (that must\nhappen), it only tries freeList[freelist_idx] and gives up, even\nthough there must be an element in other 3/4 freelists.\n\n> `free_reused_entry` now returns entry to random position. It flattens\n> free entry's spread. Although it is not enough without other changes\n> (thundering herd mitigation and probing more lists in get_hash_entry).\n\nIf \"thudering herd\" means \"many backends rush trying to read-in the\nsame page at once\", isn't it avoided by the change in BufferAlloc?\n\nI feel the random returning method might work. I want to get rid of\nthe randomness here but I don't come up with a better way.\n\nAnyway the code path is used only by buftable so it doesn't harm\ngenerally.\n\n> # Benchmarks\n\n# Thanks for benchmarking!!\n\n> Benchmarked on two socket Xeon(R) Gold 5220 CPU @2.20GHz\n> 18 cores per socket + hyper-threading - upto 72 virtual core total.\n> turbo-boost disabled\n> Linux 5.10.103-1 Debian.\n> \n> pgbench scale 100 simple_select + simple select with 3 keys (sql file\n> attached).\n> \n> shared buffers 128MB & 1GB\n> huge_pages=on\n> \n> 1 socket\n> conns | master | patch-v11 | master 1G | patch-v11 1G \n> --------+------------+------------+------------+------------\n> 1 | 27882 | 27738 | 32735 | 32439 \n> 2 | 54082 | 54336 | 64387 | 63846 \n> 3 | 80724 | 81079 | 96387 | 94439 \n> 5 | 134404 | 133429 | 160085 | 157399 \n> 7 | 185977 | 184502 | 219916 | 217142 \n\nv11+128MB degrades above here..\n\n> 17 | 335345 | 338214 | 393112 | 388796 \n> 27 | 393686 | 394948 | 447945 | 444915 \n> 53 | 572234 | 577092 | 678884 | 676493 \n> 83 | 558875 | 561689 | 669212 | 655697 \n> 107 | 553054 | 551896 | 654550 | 646010 \n> 139 | 541263 | 538354 | 641937 | 633840 \n> 163 | 532932 | 531829 | 635127 | 627600 \n> 191 | 524647 | 524442 | 626228 | 617347 \n> 211 | 521624 | 522197 | 629740 | 613143 \n\nv11+1GB degrades above here..\n\n> 239 | 509448 | 554894 | 652353 | 652972 \n> 271 | 468190 | 557467 | 647403 | 661348 
\n> 307 | 454139 | 558694 | 642229 | 657649 \n> 353 | 446853 | 554301 | 635991 | 654571 \n> 397 | 441909 | 549822 | 625194 | 647973 \n> \n> 1 socket 3 keys\n> \n> conns | master | patch-v11 | master 1G | patch-v11 1G \n> --------+------------+------------+------------+------------\n> 1 | 16677 | 16477 | 22219 | 22030 \n> 2 | 32056 | 31874 | 43298 | 43153 \n> 3 | 48091 | 47766 | 64877 | 64600 \n> 5 | 78999 | 78609 | 105433 | 106101 \n> 7 | 108122 | 107529 | 148713 | 145343 \n\nv11+128MB degrades above here..\n\n> 17 | 205656 | 209010 | 272676 | 271449 \n> 27 | 252015 | 254000 | 323983 | 323499 \n\nv11+1GB degrades above here..\n\n> 53 | 317928 | 334493 | 446740 | 449641 \n> 83 | 299234 | 327738 | 437035 | 443113 \n> 107 | 290089 | 322025 | 430535 | 431530 \n> 139 | 277294 | 314384 | 422076 | 423606 \n> 163 | 269029 | 310114 | 416229 | 417412 \n> 191 | 257315 | 306530 | 408487 | 416170 \n> 211 | 249743 | 304278 | 404766 | 416393 \n> 239 | 243333 | 310974 | 397139 | 428167 \n> 271 | 236356 | 309215 | 389972 | 427498 \n> 307 | 229094 | 307519 | 382444 | 425891 \n> 353 | 224385 | 305366 | 375020 | 423284 \n> 397 | 218549 | 302577 | 364373 | 420846 \n> \n> 2 sockets\n> \n> conns | master | patch-v11 | master 1G | patch-v11 1G \n> --------+------------+------------+------------+------------\n> 1 | 27287 | 27631 | 32943 | 32493 \n> 2 | 52397 | 54011 | 64572 | 63596 \n> 3 | 76157 | 80473 | 93363 | 93528 \n> 5 | 127075 | 134310 | 153176 | 149984 \n> 7 | 177100 | 176939 | 216356 | 211599 \n> 17 | 379047 | 383179 | 464249 | 470351 \n> 27 | 545219 | 546706 | 664779 | 662488 \n> 53 | 728142 | 728123 | 857454 | 869407 \n> 83 | 918276 | 957722 | 1215252 | 1203443 \n\nv11+1GB degrades above here..\n\n> 107 | 884112 | 971797 | 1206930 | 1234606 \n> 139 | 822564 | 970920 | 1167518 | 1233230 \n> 163 | 788287 | 968248 | 1130021 | 1229250 \n> 191 | 772406 | 959344 | 1097842 | 1218541 \n> 211 | 756085 | 955563 | 1077747 | 1209489 \n> 239 | 732926 | 948855 | 1050096 | 1200878 \n> 271 | 
692999 | 941722 | 1017489 | 1194012 \n> 307 | 668241 | 920478 | 994420 | 1179507 \n> 353 | 642478 | 908645 | 968648 | 1174265 \n> 397 | 617673 | 893568 | 950736 | 1173411 \n> \n> 2 sockets 3 keys\n> \n> conns | master | patch-v11 | master 1G | patch-v11 1G \n> --------+------------+------------+------------+------------\n> 1 | 16722 | 16393 | 20340 | 21813 \n> 2 | 32057 | 32009 | 39993 | 42959 \n> 3 | 46202 | 47678 | 59216 | 64374 \n> 5 | 78882 | 72002 | 98054 | 103731 \n> 7 | 103398 | 99538 | 135098 | 135828 \n\nv11+128MB degrades above here..\n\n> 17 | 205863 | 217781 | 293958 | 299690 \n> 27 | 283526 | 290539 | 414968 | 411219 \n> 53 | 336717 | 356130 | 460596 | 474563 \n> 83 | 307310 | 342125 | 419941 | 469989 \n> 107 | 294059 | 333494 | 405706 | 469593 \n> 139 | 278453 | 328031 | 390984 | 470553 \n> 163 | 270833 | 326457 | 384747 | 470977 \n> 191 | 259591 | 322590 | 376582 | 470335 \n> 211 | 263584 | 321263 | 375969 | 469443 \n> 239 | 257135 | 316959 | 370108 | 470904 \n> 271 | 251107 | 315393 | 365794 | 469517 \n> 307 | 246605 | 311585 | 360742 | 467566 \n> 353 | 236899 | 308581 | 353464 | 466936 \n> 397 | 249036 | 305042 | 344673 | 466842 \n> \n> I skipped v10 since I used it internally for variant\n> \"insert entry with dummy index then search victim\".\n\nUp to about 15%(?) of gain is great.\nI'm not sure it is okay that it seems to slow by about 1%..\n\n\nAh, I see.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
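The probing order questioned above (NUM_FREELISTS/4 lists visited with a quadratically growing step before falling back to allocation) can be sketched as follows; this is a simplified model with assumed names, not dynahash's actual get_hash_entry:

```c
#define NUM_FREELISTS 32

/* Sketch (assumed names): return the index of the first non-empty freelist
 * among the NUM_FREELISTS/4 probed lists, stepping quadratically from the
 * starting list, or -1 to signal "fall back to allocating a new element". */
static int
probe_freelists(const int nentries[NUM_FREELISTS], int start_idx)
{
    int idx = start_idx % NUM_FREELISTS;

    for (int i = 0; i < NUM_FREELISTS / 4; i++)
    {
        if (nentries[idx] > 0)
            return idx;
        idx = (idx + i + 1) % NUM_FREELISTS;    /* step grows 1, 2, 3, ... */
    }
    return -1;
}
```

Starting from list 0 the probe offsets are 0, 1, 3, 6, 10, 15, 21 and 28, so nearby starting indexes quickly diverge onto different freelists.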
"msg_date": "Thu, 07 Apr 2022 16:55:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, 07/04/2022 at 16:55 +0900, Kyotaro Horiguchi wrote:\n> Hi, Yura.\n> \n> At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > Ok, I got access to a stronger server, did the benchmark, found weird\n> > things, and so here is a new version :-)\n> \n> Thanks for the new version and benchmarking.\n> \n> > First I found that if the table size is strictly limited to NBuffers and FIXED,\n> > then under high concurrency get_hash_entry may not find a free entry\n> > even though one must be there. It seems that while a process scans free lists, other\n> > concurrent processes \"move the entry around\", i.e. one concurrent process\n> > fetched it from one free list, another process put a new entry in another\n> > freelist, and the unfortunate process missed it since it tests the freelists\n> > only once.\n> \n> StrategyGetBuffer believes that entries don't move across freelists,\n> and that was true before this patch.\n\nStrategyGetBuffer knows nothing about dynahash's freelists.\nIt knows about the buffer manager's freelist, which is not partitioned.\n\n> \n> > Second, I confirm there is a problem with freelist spreading.\n> > If I keep the entry's freelist_idx, then one freelist is crowded.\n> > If I use the new entry's freelist_idx, then one freelist is emptied\n> > constantly.\n> \n> Perhaps it is what I saw before. I'm not sure about the details of\n> how that happens, though.\n> \n> > Third, I found that increased concurrency could harm. When a popular block\n> > is evicted for some reason, a thundering herd effect occurs:\n> > many backends want to read the same block, they evict many other\n> > buffers, but only one is inserted. The others go to the freelist. The evicted\n> > buffers by themselves reduce the cache hit ratio and provoke more\n> > work. 
Old version resists this effect by not removing the old buffer\n> > before the new entry is successfully inserted.\n> \n> Nice finding.\n> \n> > To fix these issues I made the following changes:\n> > \n> > # Concurrency\n> > \n> > First, I limit concurrency by introducing another lwlock tranche -\n> > BufferEvict. It is 8 times larger than the BufferMapping tranche (1024 vs\n> > 128).\n> > If a backend doesn't find a buffer in the buffer table and wants to introduce\n> > it, it first calls\n> > LWLockAcquireOrWait(newEvictPartitionLock, LW_EXCLUSIVE)\n> > If the lock was acquired, it proceeds to the eviction and replacement process.\n> > Otherwise, it waits for the lock to be released and repeats the search.\n> >\n> > This greatly improves performance for > 400 clients in pgbench.\n> \n> So the performance difference between the existing code and v11 is that the\n> latter has a collision cross section eight times smaller than the\n> former?\n\nNo. Acquiring EvictPartitionLock\n1. doesn't block readers, since readers don't acquire EvictPartitionLock\n2. doesn't form a \"tree of lock dependency\", since EvictPartitionLock is\n independent from PartitionLock.\n\nProblem with the existing code:\n1. Process A locks P1 and P2\n2. Process B (p3-old, p1-new) locks P3 and wants to lock P1\n3. Process C (p4-new, p1-old) locks P4 and wants to lock P1\n4. Process D (p5-new, p4-old) locks P5 and wants to lock P4\nAt this moment locks P1, P2, P3, P4 and P5 are all locked and waiting\nfor Process A.\nAnd readers can't read from the same five partitions.\n\nWith the new code:\n1. Process A locks E1 (evict partition) and locks P2,\n then releases P2 and locks P1.\n2. Process B tries to lock E1, waits and retries the search.\n3. Process C locks E4, locks P1, then releases P1 and locks P4\n4. 
Process D locks E5, locks P4, then releases P4 and locks P5\nSo, there is no network of locks.\nProcess A doesn't block Process D at any moment:\n- either A blocks C, but C doesn't block D at this moment\n- or A doesn't block C.\nAnd readers don't see five simultaneously locked partitions that\nall depend on a single Process A.\n\n> + * Prevent \"thundering herd\" problem and limit concurrency.\n> \n> This is something like pressing the accelerator and brake pedals at the\n> same time. If it improves performance, wouldn't just increasing the number of\n> buffer partitions work?\n\nTo be honest: of course a simple increase of NUM_BUFFER_PARTITIONS\ndoes improve the average case.\nBut it is better to cure the problem than to anesthetize it.\nAn increase of\nNUM_BUFFER_PARTITIONS reduces the probability and relative\nweight of the lock network, but doesn't eliminate it.\n\n> It's also not great that follower backends run a busy loop on the\n> lock until the top-runner backend inserts the new buffer into the\n> buftable and then releases the newPartitionLock.\n> \n> > I tried another variant as well:\n> > - first insert an entry with a dummy buffer index into the buffer table.\n> > - if such an entry was already there, wait for it to be filled.\n> > - otherwise find a victim buffer and replace the dummy index with the new one.\n> > Waiting was done with a shared lock on EvictPartitionLock as well.\n> > This variant performed about the same.\n> \n> This one looks better to me. Since a partition can be shared by two or\n> more new buffers, a condition variable seems to work better here...\n> \n> > Logically I like that variant more, but there is one gotcha: \n> > FlushBuffer could fail with elog(ERROR). Therefore there is\n> > a need to reliably remove the entry with the dummy index.\n> \n> Perhaps UnlockBuffers can do that.\n\nThanks for the suggestion. 
I'll try to investigate and retry that variant\nof the patch.\n\n> > And after all, I still need to hold EvictPartitionLock to notify\n> > waiters.\n> > I've tried to use ConditionVariable, but its performance was much\n> > worse.\n> \n> How many CVs did you use?\n\nI've tried both NUM_PARTITION_LOCKS and NUM_PARTITION_LOCKS*8.\nIt doesn't matter.\nLooks like the use of WaitLatch (which uses epoll) and/or triple\nSpinLockAcquire per good case (with two list traversals) is much worse\nthan PGSemaphoreLock (which uses a futex) and a single wait-list action.\n\nAnother possibility is that while ConditionVariable eliminates the thundering\nherd effect, it doesn't limit concurrency enough... but that's just\ntheory.\n\nIn reality, I'd like to try to make BufferLookupEnt->id atomic\nand add an LWLock to BufferLookupEnt. I'll test it, but I doubt it could\nbe merged, since there is no way to initialize dynahash's entries\nreliably.\n\n> > # Dynahash capacity and freelists.\n> > \n> > I returned to the previous buffer table initialization:\n> > - removed the FIXES_SIZE restriction introduced in the previous version\n> \n> Mmm. I don't see v10 in this list and v9 doesn't contain FIXES_SIZE..\n\nv9 contains HASH_FIXED_SIZE - line 815 of the patch, PATCH 3/4 \"fixed BufTable\".\n\n> > - returned `NBuffers + NUM_BUFFER_PARTITIONS`.\n> > I really think there should be more spare items, since almost always\n> > entry_alloc is called at least once (on 128MB shared_buffers). But\n> > let's keep it as is for now.\n> \n> Maybe s/entry_alloc/element_alloc/ ? :p\n\n:p yes\n\n> I see it with shared_buffers=128kB (not MB) and pgbench -i on master.\n> \n> The required number of elements is already allocated to freelists at\n> hash creation. So the reason for the call is imbalanced use among\n> freelists. Even in that case other freelists hold elements. 
So we\n> don't need to expand the number of elements.\n> \n> > `get_hash_entry` was changed to probe NUM_FREELISTS/4 (==8) freelists\n> > before falling back to `entry_alloc`, and probing was changed from\n> > linear to quadratic. This greatly reduces the number of calls to\n> > `entry_alloc`, so more shared memory is left intact. And I didn't notice\n> > a large performance hit from it. Probably there is some, but I think it is\n> > an adequate trade-off.\n> \n> I don't think that causes a significant performance hit, but I don't\n> understand how it improves the freelist hit ratio other than by accident.\n> Do you have some reasoning for it?\n\nSince free_reused_entry returns the entry to a random free list, this\nprobability is quite high. In tests, I see stabilisation.\n\n> By the way, the change in get_hash_entry looks somewhat wrong.\n> \n> If I understand it correctly, it visits num_freelists/4 freelists at\n> once, then tries element_alloc. If element_alloc() fails (that must\n> happen), it only tries freeList[freelist_idx] and gives up, even\n> though there must be an element in the other 3/4 freelists.\n\nNo. If element_alloc fails, it tries all NUM_FREELISTS again.\n- condition: `ntries || !allocFailed`. `allocFailed` becomes true,\n so `ntries` keeps the loop going.\n- `ntries = num_freelists;` regardless of `allocFailed`.\nTherefore, all `NUM_FREELISTS` are retried for a partitioned table.\n\n> \n> > `free_reused_entry` now returns the entry to a random position. It flattens\n> > the spread of free entries. Although it is not enough without the other changes\n> > (thundering herd mitigation and probing more lists in get_hash_entry).\n> \n> If \"thundering herd\" means \"many backends rush trying to read-in the\n> same page at once\", isn't it avoided by the change in BufferAlloc?\n\n\"thundering herd\" reduces the speed of entry migration a lot. But the\n`simple_select` benchmark is too biased: looks like the btree root is\nevicted from time to time. 
So entries are slowly migrated to of from\nfreelist of its partition.\nWithout \"thundering herd\" fix this migration is very fast.\n\n> I feel the random returning method might work. I want to get rid of\n> the randomness here but I don't come up with a better way.\n> \n> Anyway the code path is used only by buftable so it doesn't harm\n> generally.\n> \n> > # Benchmarks\n> \n> # Thanks for benchmarking!!\n> \n> > Benchmarked on two socket Xeon(R) Gold 5220 CPU @2.20GHz\n> > 18 cores per socket + hyper-threading - upto 72 virtual core total.\n> > turbo-boost disabled\n> > Linux 5.10.103-1 Debian.\n> > \n> > pgbench scale 100 simple_select + simple select with 3 keys (sql file\n> > attached).\n> > \n> > shared buffers 128MB & 1GB\n> > huge_pages=on\n> > \n> > 1 socket\n> > conns | master | patch-v11 | master 1G | patch-v11 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 27882 | 27738 | 32735 | 32439 \n> > 2 | 54082 | 54336 | 64387 | 63846 \n> > 3 | 80724 | 81079 | 96387 | 94439 \n> > 5 | 134404 | 133429 | 160085 | 157399 \n> > 7 | 185977 | 184502 | 219916 | 217142 \n> \n> v11+128MB degrades above here..\n\n+ 1GB?\n\n> \n> > 17 | 335345 | 338214 | 393112 | 388796 \n> > 27 | 393686 | 394948 | 447945 | 444915 \n> > 53 | 572234 | 577092 | 678884 | 676493 \n> > 83 | 558875 | 561689 | 669212 | 655697 \n> > 107 | 553054 | 551896 | 654550 | 646010 \n> > 139 | 541263 | 538354 | 641937 | 633840 \n> > 163 | 532932 | 531829 | 635127 | 627600 \n> > 191 | 524647 | 524442 | 626228 | 617347 \n> > 211 | 521624 | 522197 | 629740 | 613143 \n> \n> v11+1GB degrades above here..\n> \n> > 239 | 509448 | 554894 | 652353 | 652972 \n> > 271 | 468190 | 557467 | 647403 | 661348 \n> > 307 | 454139 | 558694 | 642229 | 657649 \n> > 353 | 446853 | 554301 | 635991 | 654571 \n> > 397 | 441909 | 549822 | 625194 | 647973 \n> > \n> > 1 socket 3 keys\n> > \n> > conns | master | patch-v11 | master 1G | patch-v11 1G \n> > 
--------+------------+------------+------------+------------\n> > 1 | 16677 | 16477 | 22219 | 22030 \n> > 2 | 32056 | 31874 | 43298 | 43153 \n> > 3 | 48091 | 47766 | 64877 | 64600 \n> > 5 | 78999 | 78609 | 105433 | 106101 \n> > 7 | 108122 | 107529 | 148713 | 145343 \n> \n> v11+128MB degrades above here..\n> \n> > 17 | 205656 | 209010 | 272676 | 271449 \n> > 27 | 252015 | 254000 | 323983 | 323499 \n> \n> v11+1GB degrades above here..\n> \n> > 53 | 317928 | 334493 | 446740 | 449641 \n> > 83 | 299234 | 327738 | 437035 | 443113 \n> > 107 | 290089 | 322025 | 430535 | 431530 \n> > 139 | 277294 | 314384 | 422076 | 423606 \n> > 163 | 269029 | 310114 | 416229 | 417412 \n> > 191 | 257315 | 306530 | 408487 | 416170 \n> > 211 | 249743 | 304278 | 404766 | 416393 \n> > 239 | 243333 | 310974 | 397139 | 428167 \n> > 271 | 236356 | 309215 | 389972 | 427498 \n> > 307 | 229094 | 307519 | 382444 | 425891 \n> > 353 | 224385 | 305366 | 375020 | 423284 \n> > 397 | 218549 | 302577 | 364373 | 420846 \n> > \n> > 2 sockets\n> > \n> > conns | master | patch-v11 | master 1G | patch-v11 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 27287 | 27631 | 32943 | 32493 \n> > 2 | 52397 | 54011 | 64572 | 63596 \n> > 3 | 76157 | 80473 | 93363 | 93528 \n> > 5 | 127075 | 134310 | 153176 | 149984 \n> > 7 | 177100 | 176939 | 216356 | 211599 \n> > 17 | 379047 | 383179 | 464249 | 470351 \n> > 27 | 545219 | 546706 | 664779 | 662488 \n> > 53 | 728142 | 728123 | 857454 | 869407 \n> > 83 | 918276 | 957722 | 1215252 | 1203443 \n> \n> v11+1GB degrades above here..\n> \n> > 107 | 884112 | 971797 | 1206930 | 1234606 \n> > 139 | 822564 | 970920 | 1167518 | 1233230 \n> > 163 | 788287 | 968248 | 1130021 | 1229250 \n> > 191 | 772406 | 959344 | 1097842 | 1218541 \n> > 211 | 756085 | 955563 | 1077747 | 1209489 \n> > 239 | 732926 | 948855 | 1050096 | 1200878 \n> > 271 | 692999 | 941722 | 1017489 | 1194012 \n> > 307 | 668241 | 920478 | 994420 | 1179507 \n> > 353 | 642478 | 908645 | 968648 | 
1174265 \n> > 397 | 617673 | 893568 | 950736 | 1173411 \n> > \n> > 2 sockets 3 keys\n> > \n> > conns | master | patch-v11 | master 1G | patch-v11 1G \n> > --------+------------+------------+------------+------------\n> > 1 | 16722 | 16393 | 20340 | 21813 \n> > 2 | 32057 | 32009 | 39993 | 42959 \n> > 3 | 46202 | 47678 | 59216 | 64374 \n> > 5 | 78882 | 72002 | 98054 | 103731 \n> > 7 | 103398 | 99538 | 135098 | 135828 \n> \n> v11+128MB degrades above here..\n> \n> > 17 | 205863 | 217781 | 293958 | 299690 \n> > 27 | 283526 | 290539 | 414968 | 411219 \n> > 53 | 336717 | 356130 | 460596 | 474563 \n> > 83 | 307310 | 342125 | 419941 | 469989 \n> > 107 | 294059 | 333494 | 405706 | 469593 \n> > 139 | 278453 | 328031 | 390984 | 470553 \n> > 163 | 270833 | 326457 | 384747 | 470977 \n> > 191 | 259591 | 322590 | 376582 | 470335 \n> > 211 | 263584 | 321263 | 375969 | 469443 \n> > 239 | 257135 | 316959 | 370108 | 470904 \n> > 271 | 251107 | 315393 | 365794 | 469517 \n> > 307 | 246605 | 311585 | 360742 | 467566 \n> > 353 | 236899 | 308581 | 353464 | 466936 \n> > 397 | 249036 | 305042 | 344673 | 466842 \n> > \n> > I skipped v10 since I used it internally for variant\n> > \"insert entry with dummy index then search victim\".\n> \n> Up to about 15%(?) of gain is great.\n\nUp to 35% in \"2 socket 3 key 1GB\" case.\nUp to 44% in \"2 socket 1 key 128MB\" case. \n\n> I'm not sure it is okay that it seems to slow by about 1%..\n\nWell, in fact some degradation is not reproducible.\nSurprisingly, results change a bit from time to time.\nI just didn't rerun whole `master` branch bench again\nafter v11 bench, since each whole test run costs me 1.5 hour.\n\nBut I confirm regression on \"1 socket 1 key 1GB\" test case\nbetween 83 and 211 connections. It were reproducible on\nmore powerful Xeon 8354H, although it were less visible.\n\nOther fluctuations close to 1% are not reliable.\nFor example, sometimes I see degradation or improvement with\n2GB shared buffers (and even more than 1%). 
But 2GB is enough\nfor whole test dataset (scale 100 pgbench is 1.5GB on disk).\nTherefore modified code is not involved in benchmarking at all.\nHow it could be explained?\nThat is why I don't post 2GB benchmark results. (yeah, I'm\ncheating a bit).\n\n> Ah, I see.\n> \n> regards.\n> \n> -- \n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n> \n> \n\n\n\n",
"msg_date": "Thu, 07 Apr 2022 14:14:59 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Thu, 07 Apr 2022 14:14:59 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> В Чт, 07/04/2022 в 16:55 +0900, Kyotaro Horiguchi пишет:\n> > Hi, Yura.\n> > \n> > At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrot\n> > e in \n> > > Ok, I got access to stronger server, did the benchmark, found weird\n> > > things, and so here is new version :-)\n> > \n> > Thanks for the new version and benchmarking.\n> > \n> > > First I found if table size is strictly limited to NBuffers and FIXED,\n> > > then under high concurrency get_hash_entry may not find free entry\n> > > despite it must be there. It seems while process scans free lists, other\n> > > concurrent processes \"moves etry around\", ie one concurrent process\n> > > fetched it from one free list, other process put new entry in other\n> > > freelist, and unfortunate process missed it since it tests freelists\n> > > only once.\n> > \n> > StrategyGetBuffer believes that entries don't move across freelists\n> > and it was true before this patch.\n> \n> StrategyGetBuffer knows nothing about dynahash's freelist.\n> It knows about buffer manager's freelist, which is not partitioned.\n\nYeah, right. I meant get_hash_entry.\n\n> > > To fix this issues I made following:\n> > > \n> > > # Concurrency\n> > > \n> > > First, I limit concurrency by introducing other lwlocks tranche -\n> > > BufferEvict. 
It is 8 times larger than BufferMapping tranche (1024 vs\n> > > 128).\n> > > If backend doesn't find buffer in buffer table and wants to introduce\n> > > it, it first calls\n> > > LWLockAcquireOrWait(newEvictPartitionLock, LW_EXCLUSIVE)\n> > > If lock were acquired, then it goes to eviction and replace process.\n> > > Otherwise, it waits lock to be released and repeats search.\n> > >\n> > > This greately improve performance for > 400 clients in pgbench.\n> > \n> > So the performance difference between the existing code and v11 is the\n> > latter has a collision cross section eight times smaller than the\n> > former?\n> \n> No. Acquiring EvictPartitionLock\n> 1. doesn't block readers, since readers doesn't acquire EvictPartitionLock\n> 2. doesn't form \"tree of lock dependency\" since EvictPartitionLock is\n> independent from PartitionLock.\n> \n> Problem with existing code:\n> 1. Process A locks P1 and P2\n> 2. Process B (p3-old, p1-new) locks P3 and wants to lock P1\n> 3. Process C (p4-new, p1-old) locks P4 and wants to lock P1\n> 4. Process D (p5-new, p4-old) locks P5 and wants to lock P4\n> At this moment locks P1, P2, P3, P4 and P5 are all locked and waiting\n> for Process A.\n> And readers can't read from same five partitions.\n> \n> With new code:\n> 1. Process A locks E1 (evict partition) and locks P2,\n> then releases P2 and locks P1.\n> 2. Process B tries to locks E1, waits and retries search.\n> 3. Process C locks E4, locks P1, then releases P1 and locks P4\n> 4. Process D locks E5, locks P4, then releases P4 and locks P5\n> So, there is no network of locks.\n> Process A doesn't block Process D in any moment:\n> - either A blocks C, but C doesn't block D at this moment\n> - or A doesn't block C.\n> And readers doesn't see simultaneously locked five locks which\n> depends on single Process A.\n\nThanks for the detailed explanation. 
I see that.\n\n> > + * Prevent \"thundering herd\" problem and limit concurrency.\n> > \n> > this is something like pressing accelerator and break pedals at the\n> > same time. If it improves performance, just increasing the number of\n> > buffer partition seems to work?\n> \n> To be honestly: of cause simple increase of NUM_BUFFER_PARTITIONS\n> does improve average case.\n> But it is better to cure problem than anesthetize.\n> Increase of\n> NUM_BUFFER_PARTITIONS reduces probability and relative\n> weight of lock network, but doesn't eliminate.\n\nAgreed.\n\n> > It's also not great that follower backends runs a busy loop on the\n> > lock until the top-runner backend inserts the new buffer to the\n> > buftable then releases the newParititionLock.\n> > \n> > > I tried other variant as well:\n> > > - first insert entry with dummy buffer index into buffer table.\n> > > - if such entry were already here, then wait it to be filled.\n> > > - otherwise find victim buffer and replace dummy index with new one.\n> > > Wait were done with shared lock on EvictPartitionLock as well.\n> > > This variant performed quite same.\n> > \n> > This one looks better to me. Since a partition can be shared by two or\n> > more new-buffers, condition variable seems to work better here...\n> > \n> > > Logically I like that variant more, but there is one gotcha: \n> > > FlushBuffer could fail with elog(ERROR). Therefore then there is\n> > > a need to reliable remove entry with dummy index.\n> > \n> > Perhaps UnlockBuffers can do that.\n> \n> Thanks for suggestion. 
I'll try to investigate and retry this way\n> of patch.\n> \n> > > And after all, I still need to hold EvictPartitionLock to notice\n> > > waiters.\n> > > I've tried to use ConditionalVariable, but its performance were much\n> > > worse.\n> > \n> > How many CVs did you use?\n> \n> I've tried both NUM_PARTITION_LOCKS and NUM_PARTITION_LOCKS*8.\n> It doesn't matter.\n> Looks like use of WaitLatch (which uses epoll) and/or tripple\n> SpinLockAcquire per good case (with two list traversing) is much worse\n> than PgSemaphorLock (which uses futex) and single wait list action.\n\nSure. I unintentionally neglected the overhead of our CV\nimplementation. It cannot be used in such a hot path.\n\n> Other probability is while ConditionVariable eliminates thundering\n> nerd effect, it doesn't limit concurrency enough... but that's just\n> theory.\n> \n> In reality, I'd like to try to make BufferLookupEnt->id to be atomic\n> and add LwLock to BufferLookupEnt. I'll test it, but doubt it could\n> be merged, since there is no way to initialize dynahash's entries\n> reliably.\n\nYeah, that's what came to my mind first (though with a CV rather than an\nLWLock) but I gave up because of the additional size. The size of\nBufferLookupEnt is 24 and sizeof(ConditionVariable) is 12. By the way\nsizeof(LWLock) is 16.. So I think we don't take the per-bufentry\napproach here because of the additional memory usage.\n\n> > I don't think that causes significant performance hit, but I don't\n> > understand how it improves freelist hit ratio other than by accident.\n> > Could you have some reasoning for it?\n> \n> Since free_reused_entry returns entry into random free_list, this\n> probability is quite high. In tests, I see stabilisa\n\nMaybe. Doesn't it improve the efficiency if we prioritize emptied\nfreelist on returning an element? I tried it with an atomic_u32 to\nremember empty freelist. On the uint32, each bit represents a freelist\nindex. I saw it eliminated calls to element_alloc. 
I tried to\nremember a single freelist index in an atomic but there was a case\nwhere two freelists are emptied at once and that led to an element_alloc\ncall.\n\n> > By the way the change of get_hash_entry looks something wrong.\n> > \n> > If I understand it correctly, it visits num_freelists/4 freelists at\n> > once, then tries element_alloc. If element_alloc() fails (that must\n> > happen), it only tries freeList[freelist_idx] and gives up, even\n> > though there must be an element in other 3/4 freelists.\n> \n> No. If element_alloc fails, it tries all NUM_FREELISTS again.\n> - condition: `ntries || !allocFailed`. `!allocFailed` become true,\n> so `ntries` remains.\n> - `ntries = num_freelists;` regardless of `allocFailed`.\n> Therefore, all `NUM_FREELISTS` are retried for partitioned table.\n\nAh, okay. ntries is set to num_freelists after calling element_alloc.\nI think we (I?) need more comments.\n\nBy the way, why is it num_freelists / 4 + 1?\n\n> > > `free_reused_entry` now returns entry to random position. It flattens\n> > > free entry's spread. Although it is not enough without other changes\n> > > (thundering herd mitigation and probing more lists in get_hash_entry).\n> > \n> > If \"thudering herd\" means \"many backends rush trying to read-in the\n> > same page at once\", isn't it avoided by the change in BufferAlloc?\n> \n> \"thundering herd\" reduces speed of entries migration a lot. But\n> `simple_select` benchmark is too biased: looks like btree root is\n> evicted from time to time. So entries are slowly migrated to of from\n> freelist of its partition.\n> Without \"thundering herd\" fix this migration is very fast.\n\nAh, that observation agrees with the seemingly unidirectional migration\nof free entries.\n\nI remember that prioritizing index pages in shared buffers has been\nraised on this list several times..\n\n> > Up to about 15%(?) of gain is great.\n> \n> Up to 35% in \"2 socket 3 key 1GB\" case.\n> Up to 44% in \"2 socket 1 key 128MB\" case. 
\n\nOh, even better!\n\n> > I'm not sure it is okay that it seems to slow by about 1%..\n> \n> Well, in fact some degradation is not reproducible.\n> Surprisingly, results change a bit from time to time.\n\nYeah.\n\n> I just didn't rerun whole `master` branch bench again\n> after v11 bench, since each whole test run costs me 1.5 hour.\n\nThanks for the labor.\n\n> But I confirm regression on \"1 socket 1 key 1GB\" test case\n> between 83 and 211 connections. It were reproducible on\n> more powerful Xeon 8354H, although it were less visible.\n> \n> Other fluctuations close to 1% are not reliable.\n\nI'm glad to hear that. It is not surprising that some fluctuation\nhappens.\n\n> For example, sometimes I see degradation or improvement with\n> 2GB shared buffers (and even more than 1%). But 2GB is enough\n> for whole test dataset (scale 100 pgbench is 1.5GB on disk).\n> Therefore modified code is not involved in benchmarking at all.\n> How it could be explained?\n> That is why I don't post 2GB benchmark results. (yeah, I'm\n> cheating a bit).\n\nIf buffer replacement doesn't happen, theoretically this patch cannot\nbe involved in the fluctuation. I think we can consider it an error.\n\nIt might come from placement of other variables. I have sometimes got\nannoyed by such small but steady change of performance that persists\nuntil I recompiled the whole tree. But, sorry, I don't have a clear\nidea of how such performance shift happens..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 08 Apr 2022 16:46:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
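The atomic_u32 idea Horiguchi describes above (one bit per freelist, set while the list is believed non-empty, so that `free_reused_entry` can steer returned elements toward starved lists) can be sketched outside of dynahash. This is purely an illustrative model: the names `freelist_nonempty`, `mark_empty`, `mark_nonempty`, and `choose_return_list` are invented here, nothing like this exists in the patch, and `__builtin_ctz` is a GCC/Clang builtin.

```c
#include <assert.h>
#include <stdatomic.h>

#define NUM_FREELISTS 32

/* Bit i set => freelist i is believed to contain entries. */
static atomic_uint freelist_nonempty = ~0u;

static void mark_empty(unsigned idx)
{
    atomic_fetch_and(&freelist_nonempty, ~(1u << idx));
}

static void mark_nonempty(unsigned idx)
{
    atomic_fetch_or(&freelist_nonempty, 1u << idx);
}

/*
 * When returning a reused element: prefer a freelist that has been seen
 * empty, falling back to the caller's own list when none is starved.
 */
static unsigned choose_return_list(unsigned own_list)
{
    unsigned empty_bits = ~atomic_load(&freelist_nonempty);

    if (empty_bits == 0)
        return own_list;                          /* nothing starved */
    return (unsigned) __builtin_ctz(empty_bits);  /* lowest empty index */
}
```

As noted in the message, when there are enough spare entries the mask stays "all set" almost all the time, so the atomic rarely changes and adds little contention.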
{
"msg_contents": "В Пт, 08/04/2022 в 16:46 +0900, Kyotaro Horiguchi пишет:\n> At Thu, 07 Apr 2022 14:14:59 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> > В Чт, 07/04/2022 в 16:55 +0900, Kyotaro Horiguchi пишет:\n> > > Hi, Yura.\n> > > \n> > > At Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrot\n> > > e in \n> > > > Ok, I got access to stronger server, did the benchmark, found weird\n> > > > things, and so here is new version :-)\n> > > \n> > > Thanks for the new version and benchmarking.\n> > > \n> > > > First I found if table size is strictly limited to NBuffers and FIXED,\n> > > > then under high concurrency get_hash_entry may not find free entry\n> > > > despite it must be there. It seems while process scans free lists, other\n> > > > concurrent processes \"moves etry around\", ie one concurrent process\n> > > > fetched it from one free list, other process put new entry in other\n> > > > freelist, and unfortunate process missed it since it tests freelists\n> > > > only once.\n> > > \n> > > StrategyGetBuffer believes that entries don't move across freelists\n> > > and it was true before this patch.\n> > \n> > StrategyGetBuffer knows nothing about dynahash's freelist.\n> > It knows about buffer manager's freelist, which is not partitioned.\n> \n> Yeah, right. I meant get_hash_entry.\n\nBut entries doesn't move.\nOne backends takes some entry from one freelist, other backend puts\nother entry to other freelist. \n\n> > > I don't think that causes significant performance hit, but I don't\n> > > understand how it improves freelist hit ratio other than by accident.\n> > > Could you have some reasoning for it?\n> > \n> > Since free_reused_entry returns entry into random free_list, this\n> > probability is quite high. In tests, I see stabilisa\n> \n> Maybe. Doesn't it improve the efficiency if we prioritize emptied\n> freelist on returning an element? I tried it with an atomic_u32 to\n> remember empty freelist. 
On the uin32, each bit represents a freelist\n> index. I saw it eliminated calls to element_alloc. I tried to\n> remember a single freelist index in an atomic but there was a case\n> where two freelists are emptied at once and that lead to element_alloc\n> call.\n\nI thought about a bitmask too.\nBut doesn't it bring back the contention that the many freelists were\nmeant to avoid?\nWell, in case there are enough entries to keep it almost always \"all\nset\", it would be immutable.\n\n> > > By the way the change of get_hash_entry looks something wrong.\n> > > \n> > > If I understand it correctly, it visits num_freelists/4 freelists at\n> > > once, then tries element_alloc. If element_alloc() fails (that must\n> > > happen), it only tries freeList[freelist_idx] and gives up, even\n> > > though there must be an element in other 3/4 freelists.\n> > \n> > No. If element_alloc fails, it tries all NUM_FREELISTS again.\n> > - condition: `ntries || !allocFailed`. `!allocFailed` become true,\n> > so `ntries` remains.\n> > - `ntries = num_freelists;` regardless of `allocFailed`.\n> > Therefore, all `NUM_FREELISTS` are retried for partitioned table.\n> \n> Ah, okay. ntries is set to num_freelists after calling element_alloc.\n> I think we (I?) need more comments.\n> \n> By the way, why it is num_freelists / 4 + 1?\n\nWell, num_freelists could be 1 or 32.\nIf num_freelists is 1 then num_freelists / 4 == 0 - not good :-) \n\n------\n\nregards\n\nYura Sokolov\n\n\n\n",
"msg_date": "Thu, 14 Apr 2022 08:58:33 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
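The retry schedule the two are discussing ("probe `num_freelists / 4 + 1` lists first; after a failed `element_alloc`, re-probe every freelist") can be modeled with plain arrays. This is a simplified standalone sketch, not the patch's code: `freelist_len`, `alloc_allowed`, and `get_entry` are invented stand-ins, and the real `get_hash_entry` walks dynahash freelists under spinlocks.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_FREELISTS 32

/* Toy freelists: just element counts. */
static int  freelist_len[NUM_FREELISTS];
static bool alloc_allowed;   /* stands in for element_alloc() success */

/* Sketch of the retry schedule described for get_hash_entry. */
static bool get_entry(int freelist_idx)
{
    int  ntries = NUM_FREELISTS / 4 + 1;  /* never 0, even for 1 freelist */
    bool allocFailed = false;
    int  idx = freelist_idx;

    while (ntries || !allocFailed)
    {
        if (ntries == 0)
        {
            /* probed lists were all empty: try to allocate a fresh element */
            if (alloc_allowed)
                return true;              /* element_alloc() succeeded */
            allocFailed = true;
            ntries = NUM_FREELISTS;       /* now re-probe every freelist */
            continue;
        }
        if (freelist_len[idx] > 0)
        {
            freelist_len[idx]--;          /* borrowed an entry */
            return true;
        }
        idx = (idx + 1) % NUM_FREELISTS;
        ntries--;
    }
    return false;                         /* genuinely out of entries */
}
```

The `/ 4 + 1` answer above is visible here: with one freelist the first pass still probes one list, and the `ntries || !allocFailed` condition guarantees a full second sweep after the allocation failure.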
{
"msg_contents": "On Wed, Apr 6, 2022 at 9:17 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> I skipped v10 since I used it internally for variant\n> \"insert entry with dummy index then search victim\".\n\nHi,\n\nI think there's a big problem with this patch:\n\n--- a/src/backend/storage/buffer/freelist.c\n+++ b/src/backend/storage/buffer/freelist.c\n@@ -481,10 +481,10 @@ StrategyInitialize(bool init)\n *\n * Since we can't tolerate running out of lookup table entries, we must be\n * sure to specify an adequate table size here. The maximum steady-state\n- * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n- * a new entry before deleting the old. In principle this could be\n- * happening in each partition concurrently, so we could need as many as\n- * NBuffers + NUM_BUFFER_PARTITIONS entries.\n+ * usage is of course NBuffers entries. But due to concurrent\n+ * access to numerous free lists in dynahash we can miss free entry that\n+ * moved between free lists. So it is better to have some spare free entries\n+ * to reduce probability of entry allocations after server start.\n */\n InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n\nWith the existing system, there is a hard cap on the number of hash\ntable entries that we can ever need: one per buffer, plus one per\npartition to cover the \"extra\" entries that are needed while changing\nbuffer tags. With the patch, the number of concurrent buffer tag\nchanges is no longer limited by NUM_BUFFER_PARTITIONS, because you\nrelease the lock on the old buffer partition before acquiring the lock\non the new partition, and therefore there can be any number of\nbackends trying to change buffer tags at the same time. But that\nmeans, as the comment implies, that there's no longer a hard cap on\nhow many hash table entries we might need. I don't think we can just\naccept the risk that the hash table might try to allocate after\nstartup. 
If it tries, it might fail, because all of the extra shared\nmemory that we allocate at startup may already have been consumed, and\nthen somebody's query may randomly error out. That's not OK. It's true\nthat very few users are likely to be affected, because most people\nwon't consume the extra shared memory, and of those who do, most won't\nhammer the system hard enough to cause an error.\n\nHowever, I don't see us deciding that it's OK to ship something that\ncould randomly break just because it won't do so very often.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 09:46:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
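Concretely, the hard cap Robert describes is quite tight: one entry per buffer plus at most one in-flight tag change per mapping partition. A small sketch of the two bounds under discussion — the function names are invented for illustration, NUM_BUFFER_PARTITIONS = 128 is PostgreSQL's value (mentioned upthread as "1024 vs 128"), and the per-backend variant anticipates the MaxBackends idea raised downthread:

```c
#include <assert.h>

#define NUM_BUFFER_PARTITIONS 128  /* PostgreSQL's default partition count */

/* Current bound: one entry per buffer, plus one in-flight buffer-tag
 * change per partition (both partition locks are held simultaneously). */
static long buftable_cap_current(long nbuffers)
{
    return nbuffers + NUM_BUFFER_PARTITIONS;
}

/* If tag changes may overlap arbitrarily (old lock released before the
 * new one is taken), one possible hard bound is one in-flight change per
 * backend instead of one per partition. */
static long buftable_cap_per_backend(long nbuffers, long max_backends)
{
    return nbuffers + max_backends;
}
```

For example, with 8kB blocks, shared_buffers = 128MB gives NBuffers = 16384, so the current scheme reserves 16512 entries; the per-backend bound grows with max_connections rather than staying fixed at 128.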
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> With the existing system, there is a hard cap on the number of hash\n> table entries that we can ever need: one per buffer, plus one per\n> partition to cover the \"extra\" entries that are needed while changing\n> buffer tags. With the patch, the number of concurrent buffer tag\n> changes is no longer limited by NUM_BUFFER_PARTITIONS, because you\n> release the lock on the old buffer partition before acquiring the lock\n> on the new partition, and therefore there can be any number of\n> backends trying to change buffer tags at the same time. But that\n> means, as the comment implies, that there's no longer a hard cap on\n> how many hash table entries we might need.\n\nI agree that \"just hope it doesn't overflow\" is unacceptable.\nBut couldn't you bound the number of extra entries as MaxBackends?\n\nFWIW, I have extremely strong doubts about whether this patch\nis safe at all. This particular problem seems resolvable though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 10:03:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I agree that \"just hope it doesn't overflow\" is unacceptable.\n> But couldn't you bound the number of extra entries as MaxBackends?\n\nYeah, possibly ... as long as it can't happen that an operation still\ncounts against the limit after it's failed due to an error or\nsomething like that.\n\n> FWIW, I have extremely strong doubts about whether this patch\n> is safe at all. This particular problem seems resolvable though.\n\nCan you be any more specific?\n\nThis existing comment is surely in the running for terrible comment of the year:\n\n * To change the association of a valid buffer, we'll need to have\n * exclusive lock on both the old and new mapping partitions.\n\nAnybody with a little bit of C knowledge will have no difficulty\ngleaning from the code which follows that we are in fact acquiring\nboth buffer locks, but whoever wrote this (and I think it was a very\nlong time ago) did not feel it necessary to explain WHY we will need\nto have an exclusive lock on both the old and new mapping partitions,\nor more specifically, why we must hold both of those locks\nsimultaneously. That's unfortunate. It is clear that we need to hold\nboth locks at some point, just because the hash table is partitioned,\nbut it is not clear why we need to hold them both simultaneously.\n\nIt seems to me that whatever hazards exist must come from the fact\nthat the operation is no longer fully atomic. The existing code\nacquires every relevant lock, then does the work, then releases locks.\nErgo, we don't have to worry about concurrency because there basically\ncan't be any. Stuff could be happening at the same time in other\npartitions that are entirely unrelated to what we're doing, but at the\ntime we touch the two partitions we care about, we're the only one\ntouching them. 
Now, if we do as proposed here, we will acquire one\nlock, release it, and then take the other lock, and that means that\nsome operations could overlap that can't overlap today. Whatever gets\nbroken must get broken because of that possible overlapping, because\nin the absence of concurrency, the end state is the same either way.\n\nSo ... how could things get broken by having these operations overlap\neach other? The possibility that we might run out of buffer mapping\nentries is one concern. I guess there's also the question of whether\nthe collision handling is adequate: if we fail due to a collision and\nhandle that by putting the buffer on the free list, is that OK? And\nwhat if we fail midway through and the buffer doesn't end up either on\nthe free list or in the buffer mapping table? I think maybe that's\nimpossible, but I'm not 100% sure that it's impossible, and I'm not\nsure how bad it would be if it did happen. A permanent \"leak\" of a\nbuffer that resulted in it becoming permanently unusable would be bad,\nfor sure. But all of these issues seem relatively possible to avoid\nwith sufficiently good coding. My intuition is that the buffer mapping\ntable size limit is the nastiest of the problems, and if that's\nresolvable then I'm not sure what else could be a hard blocker. I'm\nnot saying there isn't anything, just that I don't know what it might\nbe.\n\nTo put all this another way, suppose that we threw out the way we do\nbuffer allocation today and always allocated from the freelist. If the\nfreelist is found to be empty, the backend wanting a buffer has to do\nsome kind of clock sweep to populate the freelist with >=1 buffers,\nand then try again. 
I don't think that would be performant or fair,\nbecause it would probably happen frequently that a buffer some backend\nhad just added to the free list got stolen by some other backend, but\nI think it would be safe, because we already put buffers on the\nfreelist when relations or databases are dropped, and we allocate from\nthere just fine in that case. So then why isn't this safe? It's\nfunctionally the same thing, except we (usually) skip over the\nintermediate step of putting the buffer on the freelist and taking it\noff again.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 11:02:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
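For reference, the "hold both mapping partitions at once" protocol that the patch relaxes depends on acquiring the two locks in a fixed order, so two backends moving buffers between the same pair of partitions cannot deadlock. The real BufferAlloc() orders the two LWLocks by address; this standalone model is only a sketch of that discipline, using partition indices and invented helper names, with a bool array standing in for the locks.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_PARTITIONS 128

static bool partition_locked[NUM_PARTITIONS];

static void part_lock(int p)
{
    assert(!partition_locked[p]);  /* single-threaded model: no waiting */
    partition_locked[p] = true;
}

static void part_unlock(int p)
{
    assert(partition_locked[p]);
    partition_locked[p] = false;
}

/* Acquire both partitions in index order; one lock if they coincide. */
static void lock_two_partitions(int oldp, int newp)
{
    if (oldp < newp)
    {
        part_lock(oldp);
        part_lock(newp);
    }
    else if (oldp > newp)
    {
        part_lock(newp);
        part_lock(oldp);
    }
    else
        part_lock(oldp);   /* old and new tag hash to the same partition */
}

static void unlock_two_partitions(int oldp, int newp)
{
    part_unlock(oldp);
    if (oldp != newp)
        part_unlock(newp);
}
```

The proposal under discussion replaces this pairwise ordering with release-then-acquire, which is exactly why the operation stops being atomic and the overlap questions above arise.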
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Apr 14, 2022 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> FWIW, I have extremely strong doubts about whether this patch\n>> is safe at all. This particular problem seems resolvable though.\n\n> Can you be any more specific?\n\n> This existing comment is surely in the running for terrible comment of the year:\n\n> * To change the association of a valid buffer, we'll need to have\n> * exclusive lock on both the old and new mapping partitions.\n\nI'm pretty sure that text is mine, and I didn't really think it needed\nany additional explanation, because of exactly this:\n\n> It seems to me that whatever hazards exist must come from the fact\n> that the operation is no longer fully atomic.\n\nIf it's not atomic, then you have to worry about what happens if you\nfail partway through, or somebody else changes relevant state while\nyou aren't holding the lock. Maybe all those cases can be dealt with,\nbut it will be significantly more fragile and more complicated (and\ntherefore slower in isolation) than the current code. Is the gain in\npotential concurrency worth it? I didn't think so at the time, and\nthe graphs upthread aren't doing much to convince me otherwise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Apr 2022 11:27:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, Apr 14, 2022 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If it's not atomic, then you have to worry about what happens if you\n> fail partway through, or somebody else changes relevant state while\n> you aren't holding the lock. Maybe all those cases can be dealt with,\n> but it will be significantly more fragile and more complicated (and\n> therefore slower in isolation) than the current code. Is the gain in\n> potential concurrency worth it? I didn't think so at the time, and\n> the graphs upthread aren't doing much to convince me otherwise.\n\nThose graphs show pretty big improvements. Maybe that's only because\nwhat is being done is not actually safe, but it doesn't look like a\ntrivial effect.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Apr 2022 12:29:53 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Thu, 14 Apr 2022 11:02:33 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> It seems to me that whatever hazards exist must come from the fact\n> that the operation is no longer fully atomic. The existing code\n> acquires every relevant lock, then does the work, then releases locks.\n> Ergo, we don't have to worry about concurrency because there basically\n> can't be any. Stuff could be happening at the same time in other\n> partitions that are entirely unrelated to what we're doing, but at the\n> time we touch the two partitions we care about, we're the only one\n> touching them. Now, if we do as proposed here, we will acquire one\n> lock, release it, and then take the other lock, and that means that\n> some operations could overlap that can't overlap today. Whatever gets\n> broken must get broken because of that possible overlapping, because\n> in the absence of concurrency, the end state is the same either way.\n> \n> So ... how could things get broken by having these operations overlap\n> each other? The possibility that we might run out of buffer mapping\n> entries is one concern. I guess there's also the question of whether\n> the collision handling is adequate: if we fail due to a collision and\n> handle that by putting the buffer on the free list, is that OK? And\n> what if we fail midway through and the buffer doesn't end up either on\n> the free list or in the buffer mapping table? I think maybe that's\n> impossible, but I'm not 100% sure that it's impossible, and I'm not\n> sure how bad it would be if it did happen. A permanent \"leak\" of a\n> buffer that resulted in it becoming permanently unusable would be bad,\n\nThe patch removes buftable entry frist then either inserted again or\nreturned to freelist. I don't understand how it can be in both\nbuftable and freelist.. What kind of trouble do you have in mind for\nexample? Even if some underlying functions issued ERROR, the result\nwouldn't differ from the current code. 
(It seems to me only WARNING or\nPANIC by a quick look). Maybe, to be sure that it works, we need\nto make sure the victim buffer is fully isolated. It is described as\nfollows.\n\n * We are single pinner, we hold buffer header lock and exclusive\n * partition lock (if tag is valid). It means no other process can inspect\n * it at the moment.\n *\n * But we will release partition lock and buffer header lock. We must be\n * sure other backend will not use this buffer until we reuse it for new\n * tag. Therefore, we clear out the buffer's tag and flags and remove it\n * from buffer table. Also buffer remains pinned to ensure\n * StrategyGetBuffer will not try to reuse the buffer concurrently.\n\n\n> for sure. But all of these issues seem relatively possible to avoid\n> with sufficiently good coding. My intuition is that the buffer mapping\n> table size limit is the nastiest of the problems, and if that's\n\nI believe that still no additional entries are required in buftable.\nThe reason for expansion is explained as follows.\n\nAt Wed, 06 Apr 2022 16:17:28 +0300, Yura Sokolov <y.sokolov@postgrespro.ru> wrote in \n> First I found if table size is strictly limited to NBuffers and FIXED,\n> then under high concurrency get_hash_entry may not find free entry\n> despite it must be there. It seems while process scans free lists, other\n\nThe freelist starvation is caused by the almost single-directional\ninter-freelist migration that this patch introduced. So it is not\nneeded if we neglect the slowdown (I'm not sure how much it is..)\ncaused by walking through all freelists. The inter-freelist migration\nwill stop if we pull out the HASH_REUSE feature from dynahash.\n\n
If the\n> freelist is found to be empty, the backend wanting a buffer has to do\n> some kind of clock sweep to populate the freelist with >=1 buffers,\n> and then try again. I don't think that would be performant or fair,\n> because it would probably happen frequently that a buffer some backend\n> had just added to the free list got stolen by some other backend, but\n> I think it would be safe, because we already put buffers on the\n> freelist when relations or databases are dropped, and we allocate from\n> there just fine in that case. So then why isn't this safe? It's\n> functionally the same thing, except we (usually) skip over the\n> intermediate step of putting the buffer on the freelist and taking it\n> off again.\n\nSo, does this get progressed if someone (maybe Yura?) runs a\nbenchmarking with this method?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 15 Apr 2022 17:29:13 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 4:29 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The patch removes buftable entry frist then either inserted again or\n> returned to freelist. I don't understand how it can be in both\n> buftable and freelist.. What kind of trouble do you have in mind for\n> example?\n\nI'm not sure. I'm just thinking about potential dangers. I was more\nworried about it ending up in neither place.\n\n> So, does this get progressed if someone (maybe Yura?) runs a\n> benchmarking with this method?\n\nI think we're talking about theoretical concerns about safety here,\nand you can't resolve that by benchmarking. Tom or others may have a\ndifferent view, but IMHO the issue with this patch isn't that there\nare no performance benefits, but that the patch needs to be fully\nsafe. He and I may disagree on how likely it is that it can be made\nsafe, but it can be a million times faster and if it's not safe it's\nstill dead.\n\nSomething clearly needs to be done to plug the specific problem that I\nmentioned earlier, somehow making it so we never need to grow the hash\ntable at runtime. If anyone can think of other such hazards those also\nneed to be fixed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 18 Apr 2022 09:53:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "At Mon, 18 Apr 2022 09:53:42 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Apr 15, 2022 at 4:29 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The patch removes buftable entry frist then either inserted again or\n> > returned to freelist. I don't understand how it can be in both\n> > buftable and freelist.. What kind of trouble do you have in mind for\n> > example?\n> \n> I'm not sure. I'm just thinking about potential dangers. I was more\n> worried about it ending up in neither place.\n\nI think that is the more likely to happen. But I think that can\nalso happen with the current code if it had exits on the way. And the\npatch does not add a new exit.\n\n> > So, does this get progressed if someone (maybe Yura?) runs a\n> > benchmarking with this method?\n> \n> I think we're talking about theoretical concerns about safety here,\n> and you can't resolve that by benchmarking. Tom or others may have a\n\nYeah.. I didn't mean that benchmarking resolves the concerns. I meant\nthat if benchmarking shows that the safer (or cleaner) way gives\nsufficient gain, we can take that direction.\n\n> different view, but IMHO the issue with this patch isn't that there\n> are no performance benefits, but that the patch needs to be fully\n> safe. He and I may disagree on how likely it is that it can be made\n> safe, but it can be a million times faster and if it's not safe it's\n> still dead.\n\nRight.\n\n> Something clearly needs to be done to plug the specific problem that I\n> mentioned earlier, somehow making it so we never need to grow the hash\n> table at runtime. If anyone can think of other such hazards those also\n> need to be fixed.\n\n- Running out of buffer mapping entries?\n\nIt seems to me related to \"runtime growth of the table mapping hash\ntable\". Does the runtime growth of the hash mean that get_hash_entry\nmay call element_alloc even if the hash is created with a sufficient\nnumber of elements? 
If so, it's not the fault of this patch. We can\nsearch all freelists before asking element_alloc() (maybe) in exchange\nfor some potential temporary degradation. That being said, I\ndon't think it's good that we call element_alloc for shared hashes\nafter creation.\n\n- Is the collision handling, which just returns the victimized\n buffer to the freelist, correct?\n\nPotentially the patch can over-victimize buffers up to\nmax_connections-1. Is this what you are concerned about? A way to\nprevent over-victimization was raised upthread, that is, we insert a\nspecial table mapping entry that signals \"this page is going to be\navailable soon.\" before releasing newPartitionLock. This prevents\nover-victimization.\n\n- Doesn't buffer-leak or duplicate mapping happen?\n\nThis patch does not change the order of the required steps, and\nthere's no exit on the way (if the current code doesn't have one). No two\nprocesses victimize the same buffer since the victimizing steps are\nprotected by oldPartitionLock (and header lock) the same as the current\ncode, and no two processes insert different buffers for the same page\nsince the inserting steps are protected by newPartitionLock. No\nvictimized buffer gets orphaned *if* that doesn't happen with the\ncurrent code. So *I* am at a loss how *I* can make it clear that they\ndon't happen X( (Of course Yura might think differently.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
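The "search all freelists before asking element_alloc()" fallback suggested above can be sketched like this (a toy model with hypothetical names, not the actual dynahash code):

```c
#include <assert.h>

/* Sketch of the suggested fallback: try the caller's own freelist
 * first, then walk every other freelist before concluding that an
 * allocation (element_alloc) is really needed. Hypothetical names. */
#define NUM_FREELISTS 4

static int	toy_freecount[NUM_FREELISTS];

/* Returns the index of the freelist an entry was taken from, or -1 if
 * every freelist is empty (only then would element_alloc be needed). */
static int
toy_get_entry(int my_list)
{
	int			i;

	if (toy_freecount[my_list] > 0)
	{
		toy_freecount[my_list]--;
		return my_list;
	}
	/* own list empty: walk the other freelists before giving up */
	for (i = 0; i < NUM_FREELISTS; i++)
	{
		if (i == my_list)
			continue;
		if (toy_freecount[i] > 0)
		{
			toy_freecount[i]--;	/* "borrow" from another partition */
			return i;
		}
	}
	return -1;
}
```

The cost is the extra walk over all freelists on a miss, which is the "potential temporary degradation" mentioned above.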
"msg_date": "Tue, 19 Apr 2022 10:45:15 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Good day, hackers.\n\nThere are some sentences.\n\nSentence one\n============\n\n> With the existing system, there is a hard cap on the number of hash\n> table entries that we can ever need: one per buffer, plus one per\n> partition to cover the \"extra\" entries that are needed while changing\n> buffer tags.\n\nAs I understand it: the current shared buffers implementation doesn't\nallocate entries after initialization.\n(I experimented on master 6c0f9f60f1.)\n\nOk, then is it safe to elog(FATAL) if shared buffers need to allocate?\nhttps://pastebin.com/x8arkEdX\n\n(all tests were done on a database initialized with `pgbench -i -s 100`)\n\n $ pgbench -c 1 -T 10 -P 1 -S -M prepared postgres\n ....\n pgbench: error: client 0 script 0 aborted in command 1 query 0: FATAL: extend SharedBufHash\n\noops...\n\nHow many entries are allocated after start?\nhttps://pastebin.com/c5z0d5mz\n(shared_buffers = 128MB,\n 40/80ht cores on EPYC 7702 (VM on 64/128ht cores))\n\n $ pid=`ps x | awk '/checkpointer/ && !/awk/ { print $1 }'`\n $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n\n $1 = 16512\n\n $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n ...\n $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n \n $1 = 20439\n \n $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n ...\n $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n \n $1 = 20541\n \nIt stabilizes at 20541\n\nTo be honest, if we add HASH_FIXED_SIZE to SharedBufHash=ShmemInitHash\nthen it works, but with a noticeable performance regression.\n\nMoreover, I didn't notice \"out of shared memory\" starting with 23\nspare items instead of 128 (NUM_BUFFER_PARTITIONS).\n\n\nSentence two:\n=============\n\n> With the patch, the number of concurrent buffer tag\n> changes is no longer limited by NUM_BUFFER_PARTITIONS, because you\n> release the lock on the old buffer partition before acquiring the lock\n> on the new partition, 
and therefore there can be any number of\n> backends trying to change buffer tags at the same time.\n\nLet's check.\nI take the v9 branch:\n- no \"thundering herd\" prevention yet\n- \"get_hash_entry\" is not modified\n- SharedBufHash is HASH_FIXED_SIZE (!!!)\n- no spare items at all, just NBuffers. (!!!)\n\nhttps://www.postgresql.org/message-id/6e6cfb8eea5ccac8e4bc2249fe0614d9f97055ee.camel%40postgrespro.ru\n\nI noticed some \"out of shared memory\" under high connection number\n(> 350) with this version. But I claimed it is because of race\nconditions in \"get_hash_entry\": concurrent backends may take free\nentries from one slot and put them to another.\nExample:\n- backend A checks freeList[30] - it is empty\n- backend B takes an entry from freeList[31]\n- backend C puts an entry to freeList[30]\n- backend A checks freeList[31] - it is empty\n- backend A fails with \"out of shared memory\"\n\nLet's check my claim: set NUM_FREELISTS to 1, therefore there is no\npossible race condition in \"get_hash_entry\".\n....\nNot a single \"out of shared memory\" for 800 clients for 30 seconds.\n\n(well, in fact on this single socket 80 ht-core EPYC I didn't get\n\"out of shared memory\" even with NUM_FREELISTS 32. 
I noticed them\non 2 socket 56 ht-core Xeon Gold).\n\nAt the same time the master branch has to have at least 15 spare items\nwith NUM_FREELISTS 1 to work without \"out of shared memory\" on\n800 clients for 30 seconds.\n\nTherefore the suggested approach reduces the real need for hash entries\n(when there are no races in \"get_hash_entry\").\n\nIf one looks into the code, they see there is no need for a spare item in\nthe suggested code:\n- when a backend calls BufTableInsert it already has a victim buffer.\n The victim buffer either:\n - was uninitialized\n -- therefore wasn't in hash table\n --- therefore there is a free entry for it in the freeList\n - was just cleaned\n -- then there is a free entry stored in DynaHashReuse\n --- then there is no need for a free entry in the freeList.\n\nAnd, not-surprisingly, there is no huge regression from setting\nNUM_FREELISTS to 1 because we usually \n\n\nSentence three:\n===============\n\n(not exact citation)\n- It is not atomic now, therefore fragile.\n\nWell, going from \"theoretical concerns\" to practical, there is a new part\nof the control flow:\n- we clear the buffer (but keep it pinned)\n- delete the buffer from the hash table if it was there, and store it for reuse\n- release old partition lock\n- acquire new partition lock\n- try to insert into the new partition\n- on conflict\n-- return hash entry to some freelist\n-- pin the found buffer\n-- unpin the victim buffer\n-- return the victim to the buffer free list.\n- without conflict\n-- reuse the saved entry if there was one\n\nTo get a problem, one of these actions should fail without taking down the\nwhole cluster. 
Therefore it should raise either elog(ERROR) or elog(FATAL).\nIn any other case the whole cluster will stop.\n\nCould BufTableDelete elog(ERROR|FATAL)?\nNo.\n(there is one elog(ERROR), but with comment \"shouldn't happen\".\nIt really could be changed to PANIC).\n\nCould LWLockRelease elog(ERROR|FATAL)?\nNo.\n(elog(ERROR, \"lock is not held\") could not be triggered since we\ncertainly hold the lock).\n\nCould LWLockAcquire elog(ERROR|FATAL)?\nWell, there is `elog(ERROR, \"too many LWLocks taken\");`\nIt is not possible because we just did LWLockRelease.\n\nCould BufTableInsert elog(ERROR|FATAL)?\nThere is \"out of shared memory\" which is avoidable with get_hash_entry\nmodifications or with HASH_FIXED_SIZE + some spare items.\n\nCould CHECK_FOR_INTERRUPTS raise something?\nNo: there is a single line between LWLockRelease and LWLockAcquire, and\nit doesn't contain CHECK_FOR_INTERRUPTS.\n\nTherefore there is a single fixable case of \"out of shared memory\" (by\nHASH_FIXED_SIZE or improvements to \"get_hash_entry\").\n\n\nMaybe I'm not quite right at some point. I'd be glad to learn.\n\n---------\n\nregards\n\nYura Sokolov\n\n\n\n",
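The freeList[30]/freeList[31] interleaving described in that message can be replayed deterministically in a toy model (hypothetical names; this only models the entry counts, not dynahash itself):

```c
#include <assert.h>

/* Deterministic replay of the interleaving above: backend A concludes
 * "out of shared memory" although a free entry exists the whole time;
 * it merely moved between freelists while A was scanning. */
static int
toy_replay_race(void)
{
	int			freelist30 = 0;	/* freeList[30] starts empty */
	int			freelist31 = 1;	/* freeList[31] holds one entry */
	int			a_saw30;
	int			a_saw31;

	a_saw30 = freelist30;		/* A checks freeList[30]: empty */
	freelist31--;				/* B takes the entry from freeList[31] */
	freelist30++;				/* C returns an entry to freeList[30] */
	a_saw31 = freelist31;		/* A checks freeList[31]: empty */

	/* A fails although one free entry still exists */
	return a_saw30 == 0 && a_saw31 == 0 && freelist30 + freelist31 == 1;
}
```

With NUM_FREELISTS set to 1 the two reads collapse onto a single counter and this schedule cannot make A miss an existing entry, which matches the observed behaviour.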
"msg_date": "Thu, 21 Apr 2022 12:04:46 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 5:04 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> $ pid=`ps x | awk '/checkpointer/ && !/awk/ { print $1 }'`\n> $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n>\n> $1 = 16512\n>\n> $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n> ...\n> $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n>\n> $1 = 20439\n>\n> $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n> ...\n> $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n>\n> $1 = 20541\n>\n> It stabilizes at 20541\n\nHmm. So is the existing comment incorrect? Remember, I was complaining\nabout this change:\n\n--- a/src/backend/storage/buffer/freelist.c\n+++ b/src/backend/storage/buffer/freelist.c\n@@ -481,10 +481,10 @@ StrategyInitialize(bool init)\n *\n * Since we can't tolerate running out of lookup table entries, we must be\n * sure to specify an adequate table size here. The maximum steady-state\n- * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n- * a new entry before deleting the old. In principle this could be\n- * happening in each partition concurrently, so we could need as many as\n- * NBuffers + NUM_BUFFER_PARTITIONS entries.\n+ * usage is of course NBuffers entries. But due to concurrent\n+ * access to numerous free lists in dynahash we can miss free entry that\n+ * moved between free lists. So it is better to have some spare free entries\n+ * to reduce probability of entry allocations after server start.\n */\n InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n\nPre-patch, the comment claims that the maximum number of buffer\nentries that can be simultaneously used is limited to NBuffers +\nNUM_BUFFER_PARTITIONS, and that's why we make the hash table that\nsize. 
The idea is that we normally need no more than 1 entry per buffer,\nbut sometimes we might have 2 entries for the same buffer if we're in\nthe process of changing the buffer tag, because we make the new entry\nbefore removing the old one. To change the buffer tag, we need the\nbuffer mapping lock for the old partition and the new one, but if both\nare the same, we need only one buffer mapping lock. That means that in\nthe worst case, you could have a number of processes equal to\nNUM_BUFFER_PARTITIONS each in the process of changing the buffer tag\nbetween values that both fall into the same partition, and thus each\nusing 2 entries. Then you could have every other buffer in use and\nthus using 1 entry, for a total of NBuffers + NUM_BUFFER_PARTITIONS\nentries. Now I think you're saying we go far beyond that number, and\nwhat I wonder is how that's possible. If the system doesn't work the\nway the comment says it does, maybe we ought to start by talking about\nwhat to do about that.\n\nI am a bit confused by your description of having done \"p\nSharedBufHash->hctl->allocated.value\" because SharedBufHash is of type\nHTAB and HTAB's hctl member is of type HASHHDR, which has no field\ncalled \"allocated\". I thought maybe my analysis here was somehow\nmistaken, so I tried the debugger, which took the same view of it that\nI did:\n\n(lldb) p SharedBufHash->hctl->allocated.value\nerror: <user expression 0>:1:22: no member named 'allocated' in 'HASHHDR'\nSharedBufHash->hctl->allocated.value\n~~~~~~~~~~~~~~~~~~~ ^\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
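The sizing rule in that comment also matches the initial allocated.value of 16512 reported upthread: with the default 8kB page size, shared_buffers = 128MB gives NBuffers = 128 * 1024 / 8 = 16384, and 16384 + NUM_BUFFER_PARTITIONS (128) = 16512. A minimal check, assuming those defaults:

```c
#include <assert.h>

/* The pre-patch sizing rule: NBuffers + NUM_BUFFER_PARTITIONS.
 * Assumes the default 8kB block size and 128 buffer partitions. */
#define NUM_BUFFER_PARTITIONS 128

static int
toy_buftable_size(int shared_buffers_mb, int page_kb)
{
	int			nbuffers = shared_buffers_mb * 1024 / page_kb;

	return nbuffers + NUM_BUFFER_PARTITIONS;
}
```

So the table starts exactly at the comment's worst case, and the later growth to 20541 is what exceeds it.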
"msg_date": "Thu, 21 Apr 2022 16:24:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, 21/04/2022 at 16:24 -0400, Robert Haas wrote:\n> On Thu, Apr 21, 2022 at 5:04 AM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > $ pid=`ps x | awk '/checkpointer/ && !/awk/ { print $1 }'`\n> > $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n> > \n> > $1 = 16512\n> > \n> > $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n> > ...\n> > $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n> > \n> > $1 = 20439\n> > \n> > $ install/bin/pgbench -c 600 -j 800 -T 10 -P 1 -S -M prepared postgres\n> > ...\n> > $ gdb -p $pid -batch -ex 'p SharedBufHash->hctl->allocated.value'\n> > \n> > $1 = 20541\n> > \n> > It stabilizes at 20541\n> \n> Hmm. So is the existing comment incorrect?\n\nIt is correct and incorrect at the same time. Logically it is correct.\nAnd it is correct in practice if HASH_FIXED_SIZE is set for SharedBufHash\n(which is currently not the case). But setting HASH_FIXED_SIZE hurts performance\nwith a low number of spare items.\n\n> Remember, I was complaining\n> about this change:\n> \n> --- a/src/backend/storage/buffer/freelist.c\n> +++ b/src/backend/storage/buffer/freelist.c\n> @@ -481,10 +481,10 @@ StrategyInitialize(bool init)\n> *\n> * Since we can't tolerate running out of lookup table entries, we must be\n> * sure to specify an adequate table size here. The maximum steady-state\n> - * usage is of course NBuffers entries, but BufferAlloc() tries to insert\n> - * a new entry before deleting the old. In principle this could be\n> - * happening in each partition concurrently, so we could need as many as\n> - * NBuffers + NUM_BUFFER_PARTITIONS entries.\n> + * usage is of course NBuffers entries. But due to concurrent\n> + * access to numerous free lists in dynahash we can miss free entry that\n> + * moved between free lists. 
So it is better to have some spare free entries\n> + * to reduce probability of entry allocations after server start.\n> */\n> InitBufTable(NBuffers + NUM_BUFFER_PARTITIONS);\n> \n> Pre-patch, the comment claims that the maximum number of buffer\n> entries that can be simultaneously used is limited to NBuffers +\n> NUM_BUFFER_PARTITIONS, and that's why we make the hash table that\n> size. The idea is that we normally need more than 1 entry per buffer,\n> but sometimes we might have 2 entries for the same buffer if we're in\n> the process of changing the buffer tag, because we make the new entry\n> before removing the old one. To change the buffer tag, we need the\n> buffer mapping lock for the old partition and the new one, but if both\n> are the same, we need only one buffer mapping lock. That means that in\n> the worst case, you could have a number of processes equal to\n> NUM_BUFFER_PARTITIONS each in the process of changing the buffer tag\n> between values that both fall into the same partition, and thus each\n> using 2 entries. Then you could have every other buffer in use and\n> thus using 1 entry, for a total of NBuffers + NUM_BUFFER_PARTITIONS\n> entries. Now I think you're saying we go far beyond that number, and\n> what I wonder is how that's possible. If the system doesn't work the\n> way the comment says it does, maybe we ought to start by talking about\n> what to do about that.\n\nAt the master state:\n- SharedBufHash is not declared as HASH_FIXED_SIZE\n- get_hash_entry falls back to element_alloc too fast (just if it doesn't\n find a free entry in the current freelist partition).\n- get_hash_entry has races.\n- if there is a small number of spare items (and NUM_BUFFER_PARTITIONS is\n a small number) and HASH_FIXED_SIZE is set, it becomes contended and\n therefore slow.\n\nHASH_REUSE solves (for shared buffers) most of these issues. The free list\nbecomes a rare fallback, so HASH_FIXED_SIZE for SharedBufHash doesn't lead\nto a performance hit. 
And with a fair number of spare items, get_hash_entry\nwill find a free entry despite its races.\n\n> I am a bit confused by your description of having done \"p\n> SharedBufHash->hctl->allocated.value\" because SharedBufHash is of type\n> HTAB and HTAB's hctl member is of type HASHHDR, which has no field\n> called \"allocated\".\n\nThe previous letter contains links to the small patches that I used for\nexperiments. The link that adds \"allocated\" is https://pastebin.com/c5z0d5mz\n\n> I thought maybe my analysis here was somehow\n> mistaken, so I tried the debugger, which took the same view of it that\n> I did:\n> \n> (lldb) p SharedBufHash->hctl->allocated.value\n> error: <user expression 0>:1:22: no member named 'allocated' in 'HASHHDR'\n> SharedBufHash->hctl->allocated.value\n> ~~~~~~~~~~~~~~~~~~~ ^\n\n\n-----\n\nregards\n\nYura Sokolov\n\n\n\n",
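The HASH_REUSE mechanism described here can be sketched as a per-backend stash that short-circuits the shared freelist (a toy model with hypothetical names, not the patch's dynahash code):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the HASH_REUSE idea: a deleted entry is parked in a
 * per-backend stash and consumed by the next insert, so the shared
 * freelist becomes a rare fallback. Hypothetical names. */
typedef struct ToyEntry
{
	int			key;
} ToyEntry;

static ToyEntry toy_pool[16];
static int	toy_pool_used;
static ToyEntry *toy_reuse_stash;	/* per-backend DynaHashReuse analogue */
static int	toy_freelist_pops;		/* counts freelist fallbacks */

static ToyEntry *
toy_enter(int key)
{
	ToyEntry   *e;

	if (toy_reuse_stash != NULL)
	{
		e = toy_reuse_stash;		/* HASH_ENTER consumes the stash first */
		toy_reuse_stash = NULL;
	}
	else
	{
		toy_freelist_pops++;		/* rare fallback: shared freelist */
		e = &toy_pool[toy_pool_used++];
	}
	e->key = key;
	return e;
}

static void
toy_delete_reuse(ToyEntry *e)
{
	toy_reuse_stash = e;			/* HASH_REUSE: park instead of freeing */
}
```

A delete followed by an insert touches no shared freelist at all, which is why the races in get_hash_entry stop mattering for the common path.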
"msg_date": "Fri, 22 Apr 2022 01:58:07 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Btw, I've runned tests on EPYC (80 cores).\n\n1 key per select\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 29053 | 28959 | 26715 | 25631 \n 2 | 53714 | 53002 | 55211 | 53699 \n 3 | 69796 | 72100 | 72355 | 71164 \n 5 | 118045 | 112066 | 122182 | 119825 \n 7 | 151933 | 156298 | 162001 | 160834 \n 17 | 344594 | 347809 | 390103 | 386676 \n 27 | 497656 | 527313 | 587806 | 598450 \n 53 | 732524 | 853831 | 906569 | 947050 \n 83 | 823203 | 991415 | 1056884 | 1222530 \n 107 | 812730 | 930175 | 1004765 | 1232307 \n 139 | 781757 | 938718 | 995326 | 1196653 \n 163 | 758991 | 969781 | 990644 | 1143724 \n 191 | 774137 | 977633 | 996763 | 1210899 \n 211 | 771856 | 973361 | 1024798 | 1187824 \n 239 | 756925 | 940808 | 954326 | 1165303 \n 271 | 756220 | 940508 | 970254 | 1198773 \n 307 | 746784 | 941038 | 940369 | 1159446 \n 353 | 710578 | 928296 | 923437 | 1189575 \n 397 | 715352 | 915931 | 911638 | 1180688 \n\n3 keys per select\n\n conns | master | patch-v11 | master 1G | patch-v11 1G \n--------+------------+------------+------------+------------\n 1 | 17448 | 17104 | 18359 | 19077 \n 2 | 30888 | 31650 | 35074 | 35861 \n 3 | 44653 | 43371 | 47814 | 47360 \n 5 | 69632 | 64454 | 76695 | 76208 \n 7 | 96385 | 92526 | 107587 | 107930 \n 17 | 195157 | 205156 | 253440 | 239740 \n 27 | 302343 | 316768 | 386748 | 335148 \n 53 | 334321 | 396359 | 402506 | 486341 \n 83 | 300439 | 374483 | 408694 | 452731 \n 107 | 302768 | 369207 | 390599 | 453817 \n 139 | 294783 | 364885 | 379332 | 459884 \n 163 | 272646 | 344643 | 376629 | 460839 \n 191 | 282307 | 334016 | 363322 | 449928 \n 211 | 275123 | 321337 | 371023 | 445246 \n 239 | 263072 | 341064 | 356720 | 441250 \n 271 | 271506 | 333066 | 373994 | 436481 \n 307 | 261545 | 333489 | 348569 | 466673 \n 353 | 255700 | 331344 | 333792 | 455430 \n 397 | 247745 | 325712 | 326680 | 439245",
"msg_date": "Fri, 22 Apr 2022 11:49:56 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Thu, Apr 21, 2022 at 6:58 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> At the master state:\n> - SharedBufHash is not declared as HASH_FIXED_SIZE\n> - get_hash_entry falls back to element_alloc too fast (just if it doesn't\n> found free entry in current freelist partition).\n> - get_hash_entry has races.\n> - if there are small number of spare items (and NUM_BUFFER_PARTITIONS is\n> small number) and HASH_FIXED_SIZE is set, it becomes contended and\n> therefore slow.\n>\n> HASH_REUSE solves (for shared buffers) most of this issues. Free list\n> became rare fallback, so HASH_FIXED_SIZE for SharedBufHash doesn't lead\n> to performance hit. And with fair number of spare items, get_hash_entry\n> will find free entry despite its races.\n\nHmm, I see. The idea of trying to arrange to reuse entries rather than\npushing them onto a freelist and immediately trying to take them off\nagain is an interesting one, and I kind of like it. But I can't\nimagine that anyone would commit this patch the way you have it. It's\nway too much action at a distance. If any ereport(ERROR,...) could\nhappen between the HASH_REUSE operation and the subsequent HASH_ENTER,\nit would be disastrous, and those things are separated by multiple\nlevels of call stack across different modules, so mistakes would be\neasy to make. If this could be made into something dynahash takes care\nof internally without requiring extensive cooperation with the calling\ncode, I think it would very possibly be accepted.\n\nOne approach would be to have a hash_replace() call that takes two\nconst void * arguments, one to delete and one to insert. Then maybe\nyou propagate that idea upward and have, similarly, a BufTableReplace\noperation that uses that, and then the bufmgr code calls\nBufTableReplace instead of BufTableDelete. Maybe there are other\nbetter ideas out there...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 6 May 2022 10:26:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Fri, 06/05/2022 at 10:26 -0400, Robert Haas wrote:\n> On Thu, Apr 21, 2022 at 6:58 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n> > At the master state:\n> > - SharedBufHash is not declared as HASH_FIXED_SIZE\n> > - get_hash_entry falls back to element_alloc too fast (just if it doesn't\n> > found free entry in current freelist partition).\n> > - get_hash_entry has races.\n> > - if there are small number of spare items (and NUM_BUFFER_PARTITIONS is\n> > small number) and HASH_FIXED_SIZE is set, it becomes contended and\n> > therefore slow.\n> > \n> > HASH_REUSE solves (for shared buffers) most of this issues. Free list\n> > became rare fallback, so HASH_FIXED_SIZE for SharedBufHash doesn't lead\n> > to performance hit. And with fair number of spare items, get_hash_entry\n> > will find free entry despite its races.\n> \n> Hmm, I see. The idea of trying to arrange to reuse entries rather than\n> pushing them onto a freelist and immediately trying to take them off\n> again is an interesting one, and I kind of like it. But I can't\n> imagine that anyone would commit this patch the way you have it. It's\n> way too much action at a distance. If any ereport(ERROR,...) could\n> happen between the HASH_REUSE operation and the subsequent HASH_ENTER,\n> it would be disastrous, and those things are separated by multiple\n> levels of call stack across different modules, so mistakes would be\n> easy to make. If this could be made into something dynahash takes care\n> of internally without requiring extensive cooperation with the calling\n> code, I think it would very possibly be accepted.\n> \n> One approach would be to have a hash_replace() call that takes two\n> const void * arguments, one to delete and one to insert. Then maybe\n> you propagate that idea upward and have, similarly, a BufTableReplace\n> operation that uses that, and then the bufmgr code calls\n> BufTableReplace instead of BufTableDelete. 
Maybe there are other\n> better ideas out there...\n\nNo.\n\nWhile HASH_REUSE is a good addition to the overall performance improvement\nof the patch, it is not required for the major gain.\n\nThe major gain comes from not taking two partition locks simultaneously.\n\nhash_replace would require two locks, so it is not an option.\n\nregards\n\n-----\n\nYura\n\n\n\n",
"msg_date": "Wed, 11 May 2022 01:50:08 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "Good day, hackers.\n\nThis is a continuation of the BufferAlloc saga.\n\nThis time I've tried to implement the following approach:\n- if there's no buffer, insert a placeholder\n- then find a victim\n- if another backend wants to insert the same buffer, it waits on\n ConditionVariable.\n\nThe patch makes a separate ConditionVariable per backend, and the placeholder\ncontains the backend id. So waiters don't suffer from collisions on\nthe partition; they wait exactly for the concrete buffer.\n\nThis patch doesn't contain any dynahash changes since the order of\noperations doesn't change: \"insert then delete\". So there is no way to\n\"reserve\" an entry.\n\nBut it contains changes to ConditionVariable:\n\n- adds ConditionVariableSleepOnce, which doesn't reinsert the process back\n on the CV's proclist.\n This method cannot be used in a loop like ConditionVariableSleep,\n and ConditionVariablePrepareSleep must be called beforehand.\n \n- adds ConditionVariableBroadcastFast - an improvement over the regular\n ConditionVariableBroadcast that wakes processes in batches.\n So CVBroadcastFast doesn't acquire/release the CV's spinlock mutex for\n every proclist entry, but rather for a batch of entries.\n \n I believe it could safely replace ConditionVariableBroadcast. 
Though\n I didn't try yet to replace and check.\n\nTests:\n- tests done on 2 socket Xeon 5220 2.20GHz with turbo bust disabled\n (ie max frequency is 2.20GHz)\n- runs on 1 socket or 2 sockets using numactl\n- pgbench scale 100 - 1.5GB of data\n- shared_buffers : 128MB, 1GB (and 2GB)\n- variations of simple_select with 1 key per query, 3 keys per query\n and 10 keys per query.\n\n1 socket 1 key\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 25670 | 24926 | 29491 | 28858 \n 2 | 50157 | 48894 | 58356 | 57180 \n 3 | 75036 | 72904 | 87152 | 84869 \n 5 | 124479 | 120720 | 143550 | 140799 \n 7 | 168586 | 164277 | 199360 | 195578 \n 17 | 319943 | 314010 | 364963 | 358550 \n 27 | 423617 | 420528 | 491493 | 485139 \n 53 | 491357 | 490994 | 574477 | 571753 \n 83 | 487029 | 486750 | 571057 | 566335 \n 107 | 478429 | 479862 | 565471 | 560115 \n 139 | 467953 | 469981 | 556035 | 551056 \n 163 | 459467 | 463272 | 548976 | 543660 \n 191 | 448420 | 456105 | 540881 | 534556 \n 211 | 440229 | 458712 | 545195 | 535333 \n 239 | 431754 | 471373 | 547111 | 552591 \n 271 | 421767 | 473479 | 544014 | 557910 \n 307 | 408234 | 474285 | 539653 | 556629 \n 353 | 389360 | 472491 | 534719 | 554696 \n 397 | 377063 | 471513 | 527887 | 554383 \n\n1 socket 3 keys\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 15277 | 14917 | 20109 | 19564 \n 2 | 29587 | 28892 | 39430 | 36986 \n 3 | 44204 | 43198 | 58993 | 57196 \n 5 | 71471 | 68703 | 96923 | 92497 \n 7 | 98823 | 97823 | 133173 | 130134 \n 17 | 201351 | 198865 | 258139 | 254702 \n 27 | 254959 | 255503 | 338117 | 339044 \n 53 | 277048 | 291923 | 384300 | 390812 \n 83 | 251486 | 287247 | 376170 | 385302 \n 107 | 232037 | 281922 | 365585 | 380532 \n 139 | 210478 | 276544 | 352430 | 373815 \n 163 | 193875 | 271842 | 341636 | 368034 \n 191 | 179544 | 267033 | 334408 | 362985 \n 211 | 
172837 | 269329 | 330287 | 366478 \n 239 | 162647 | 272046 | 322646 | 371807 \n 271 | 153626 | 271423 | 314017 | 371062 \n 307 | 144122 | 270540 | 305358 | 370462 \n 353 | 129544 | 268239 | 292867 | 368162 \n 397 | 123430 | 267112 | 284394 | 366845 \n \n1 socket 10 keys\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 6824 | 6735 | 10475 | 10220 \n 2 | 13037 | 12628 | 20382 | 19849 \n 3 | 19416 | 19043 | 30369 | 29554 \n 5 | 31756 | 30657 | 49402 | 48614 \n 7 | 42794 | 42179 | 67526 | 65071 \n 17 | 91443 | 89772 | 139630 | 139929 \n 27 | 107751 | 110689 | 165996 | 169955 \n 53 | 97128 | 120621 | 157670 | 184382 \n 83 | 82344 | 117814 | 142380 | 183863 \n 107 | 70764 | 115841 | 134266 | 182426 \n 139 | 57561 | 112528 | 125090 | 180121 \n 163 | 50490 | 110443 | 119932 | 178453 \n 191 | 45143 | 108583 | 114690 | 175899 \n 211 | 42375 | 107604 | 111444 | 174109 \n 239 | 39861 | 106702 | 106253 | 172410 \n 271 | 37398 | 105819 | 102260 | 170792 \n 307 | 35279 | 105355 | 97164 | 168313 \n 353 | 33427 | 103537 | 91629 | 166232 \n 397 | 31778 | 101793 | 87230 | 164381 \n \n2 sockets 1 key\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 24839 | 24386 | 29246 | 28361 \n 2 | 46655 | 45265 | 55942 | 54327 \n 3 | 69278 | 68332 | 83984 | 81608 \n 5 | 115263 | 112746 | 139012 | 135426 \n 7 | 159881 | 155119 | 193846 | 188399 \n 17 | 373808 | 365085 | 456463 | 441603 \n 27 | 503663 | 495443 | 600335 | 584741 \n 53 | 708849 | 744274 | 900923 | 908488 \n 83 | 593053 | 862003 | 985953 | 1038033 \n 107 | 431806 | 875704 | 957115 | 1075172 \n 139 | 328380 | 879890 | 881652 | 1069872 \n 163 | 288339 | 874792 | 824619 | 1064047 \n 191 | 255666 | 870532 | 790583 | 1061124 \n 211 | 241230 | 865975 | 764898 | 1058473 \n 239 | 227344 | 857825 | 732353 | 1049745 \n 271 | 216095 | 848240 | 703729 | 1043182 \n 307 | 206978 | 
833980 | 674711 | 1031533 \n 353 | 198426 | 803830 | 633783 | 1018479 \n 397 | 191617 | 744466 | 599170 | 1006134 \n \n2 sockets 3 keys\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 14688 | 14088 | 18912 | 18905 \n 2 | 26759 | 25925 | 36817 | 35924 \n 3 | 40002 | 38658 | 54765 | 53266 \n 5 | 63479 | 63041 | 90521 | 87496 \n 7 | 88561 | 87101 | 123425 | 121877 \n 17 | 199411 | 196932 | 289555 | 282146 \n 27 | 270121 | 275950 | 386884 | 383019 \n 53 | 202918 | 374848 | 395967 | 501648 \n 83 | 149599 | 363623 | 335815 | 478628 \n 107 | 126501 | 348125 | 311617 | 472473 \n 139 | 106091 | 331350 | 279843 | 466408 \n 163 | 95497 | 321978 | 260884 | 461688 \n 191 | 87427 | 312815 | 241189 | 458252 \n 211 | 82783 | 307261 | 231435 | 454327 \n 239 | 78930 | 299661 | 219655 | 451826 \n 271 | 74081 | 294233 | 211555 | 448412 \n 307 | 71352 | 288133 | 202838 | 446143 \n 353 | 67872 | 279948 | 193354 | 441929 \n 397 | 66178 | 275784 | 185556 | 438330 \n\n2 sockets 10 keys\n\n conns | master 128M | v12 128M | master 1G | v12 1G \n--------+--------------+--------------+--------------+--------------\n 1 | 6200 | 6108 | 10163 | 9563 \n 2 | 11196 | 10871 | 18373 | 17827 \n 3 | 16479 | 16129 | 26807 | 26584 \n 5 | 26750 | 26241 | 44291 | 43409 \n 7 | 36501 | 35433 | 60508 | 59379 \n 17 | 77320 | 77451 | 130413 | 128452 \n 27 | 91833 | 105643 | 147259 | 156833 \n 53 | 57138 | 115793 | 119306 | 150647 \n 83 | 44435 | 108850 | 105454 | 148006 \n 107 | 38031 | 105199 | 95108 | 146162 \n 139 | 31697 | 101096 | 84011 | 143281 \n 163 | 28826 | 98255 | 78411 | 141375 \n 191 | 26223 | 96224 | 74256 | 139646 \n 211 | 24933 | 94815 | 71542 | 137834 \n 239 | 23626 | 92849 | 69289 | 137235 \n 271 | 22664 | 90938 | 66431 | 136080 \n 307 | 21691 | 89358 | 64661 | 133166 \n 353 | 20712 | 88239 | 61619 | 133339 \n 397 | 20374 | 86708 | 58937 | 130684 \n\nWell, as you see, there is some regression on low connection 
numbers,\nand I can't tell where it comes from.\n\nMoreover, it shows up even in the case of 2GB shared buffers - when all data\nfits into the buffer cache and the new code doesn't run at all\n(apart from this incomprehensible regression there's no difference in\nperformance with 2GB shared buffers).\n\nFor example 2GB shared buffers 1 socket 3 keys:\n conns |  master 2G  |   v12 2G  \n--------+--------------+--------------\n      1 |        23491 |        22621 \n      2 |        46436 |        44851 \n      3 |        69265 |        66844 \n      5 |       112432 |       108801 \n      7 |       158859 |       150247 \n     17 |       297600 |       291605 \n     27 |       390041 |       384590 \n     53 |       448384 |       447588 \n     83 |       445582 |       442048 \n    107 |       440544 |       438200 \n    139 |       433893 |       430818 \n    163 |       427436 |       424182 \n    191 |       420854 |       417045 \n    211 |       417228 |       413456 \n\nPerhaps something changes in the memory layout due to the array of CVs, or\nthe compiler lays out/optimizes functions differently. I can't find the\nreason ;-( I would appreciate help on this.\n\n\nregards\n\n---\n\nYura Sokolov",
"msg_date": "Tue, 28 Jun 2022 14:13:06 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
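For context, a connection-count sweep like the tables above can be driven by a small script. This is a hedged sketch only: the thread's actual workload (the "3 keys"/"10 keys" scripts) is not reproduced in this excerpt, so pgbench's built-in select-only mode stands in for it, and the loop merely prints the commands it would run.

```shell
# Sweep the same client counts as the benchmark tables above. The
# commands are only printed here, since the database and the custom
# workload scripts from the thread are not reproduced in this excerpt.
cmds=$(for conns in 1 2 3 5 7 17 27 53 83 107 139 163 191 211 239 271 307 353 397; do
    echo "pgbench -n -S -c $conns -j $conns -T 30 bench"
done)
echo "$cmds"
```

Pairing `-c` and `-j` keeps one pgbench worker thread per client, which matters once the client count exceeds a few dozen.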
{
"msg_contents": "В Вт, 28/06/2022 в 14:13 +0300, Yura Sokolov пишет:\n\n> Tests:\n> - tests done on 2 socket Xeon 5220 2.20GHz with turbo bust disabled\n> (ie max frequency is 2.20GHz)\n\nForgot to mention:\n- this time it was Centos7.9.2009 (Core) with Linux mn10 3.10.0-1160.el7.x86_64\n\nPerhaps older kernel describes poor master's performance on 2 sockets\ncompared to my previous results (when this server had Linux 5.10.103-1 Debian).\n\nOr there is degradation in PostgreSQL's master branch between.\nI'll try to check today.\n\nregards\n\n---\n\nYura Sokolov\n\n\n\n",
"msg_date": "Tue, 28 Jun 2022 14:26:54 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "В Вт, 28/06/2022 в 14:26 +0300, Yura Sokolov пишет:\n> В Вт, 28/06/2022 в 14:13 +0300, Yura Sokolov пишет:\n> \n> > Tests:\n> > - tests done on 2 socket Xeon 5220 2.20GHz with turbo bust disabled\n> > (ie max frequency is 2.20GHz)\n> \n> Forgot to mention:\n> - this time it was Centos7.9.2009 (Core) with Linux mn10 3.10.0-1160.el7.x86_64\n> \n> Perhaps older kernel describes poor master's performance on 2 sockets\n> compared to my previous results (when this server had Linux 5.10.103-1 Debian).\n> \n> Or there is degradation in PostgreSQL's master branch between.\n> I'll try to check today.\n\nNo, old master commit ( 7e12256b47 Sat Mar 12 14:21:40 2022) behaves same.\nSo it is clearly old-kernel issue. Perhaps, futex was much slower than this\ndays.\n\n\n\n",
"msg_date": "Tue, 28 Jun 2022 14:50:46 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Tue, Jun 28, 2022 at 4:50 PM Yura Sokolov <y.sokolov@postgrespro.ru>\nwrote:\n\n> В Вт, 28/06/2022 в 14:26 +0300, Yura Sokolov пишет:\n> > В Вт, 28/06/2022 в 14:13 +0300, Yura Sokolov пишет:\n> >\n> > > Tests:\n> > > - tests done on 2 socket Xeon 5220 2.20GHz with turbo bust disabled\n> > > (ie max frequency is 2.20GHz)\n> >\n> > Forgot to mention:\n> > - this time it was Centos7.9.2009 (Core) with Linux mn10\n> 3.10.0-1160.el7.x86_64\n> >\n> > Perhaps older kernel describes poor master's performance on 2 sockets\n> > compared to my previous results (when this server had Linux 5.10.103-1\n> Debian).\n> >\n> > Or there is degradation in PostgreSQL's master branch between.\n> > I'll try to check today.\n>\n> No, old master commit ( 7e12256b47 Sat Mar 12 14:21:40 2022) behaves same.\n> So it is clearly old-kernel issue. Perhaps, futex was much slower than this\n> days.\n>\n>\n>\n> The patch requires a rebase; please do that.\n\nHunk #1 FAILED at 231.\nHunk #2 succeeded at 409 (offset 82 lines).\n\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/include/storage/buf_internals.h.rej\n\n\n-- \nIbrar Ahmed\n\nOn Tue, Jun 28, 2022 at 4:50 PM Yura Sokolov <y.sokolov@postgrespro.ru> wrote:В Вт, 28/06/2022 в 14:26 +0300, Yura Sokolov пишет:\n> В Вт, 28/06/2022 в 14:13 +0300, Yura Sokolov пишет:\n> \n> > Tests:\n> > - tests done on 2 socket Xeon 5220 2.20GHz with turbo bust disabled\n> > (ie max frequency is 2.20GHz)\n> \n> Forgot to mention:\n> - this time it was Centos7.9.2009 (Core) with Linux mn10 3.10.0-1160.el7.x86_64\n> \n> Perhaps older kernel describes poor master's performance on 2 sockets\n> compared to my previous results (when this server had Linux 5.10.103-1 Debian).\n> \n> Or there is degradation in PostgreSQL's master branch between.\n> I'll try to check today.\n\nNo, old master commit ( 7e12256b47 Sat Mar 12 14:21:40 2022) behaves same.\nSo it is clearly old-kernel issue. 
Perhaps, futex was much slower than this\ndays.\n\n\n\nThe patch requires a rebase; please do that.Hunk #1 FAILED at 231.Hunk #2 succeeded at 409 (offset 82 lines). 1 out of 2 hunks FAILED -- saving rejects to file src/include/storage/buf_internals.h.rej-- Ibrar Ahmed",
"msg_date": "Wed, 7 Sep 2022 12:53:07 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
},
{
"msg_contents": "On Wed, Sep 07, 2022 at 12:53:07PM +0500, Ibrar Ahmed wrote:\n> Hunk #1 FAILED at 231.\n> Hunk #2 succeeded at 409 (offset 82 lines).\n> \n> 1 out of 2 hunks FAILED -- saving rejects to file\n> src/include/storage/buf_internals.h.rej\n\nWith no rebase done since this notice, I have marked this entry as\nRwF.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 16:46:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: BufferAlloc: don't take two simultaneous locks"
}
]
[
{
"msg_contents": "Hi,\n\nFor several development efforts I found it to be incredibly valuable to push\nchanges to a personal repository and see a while later whether tests succeed\non a number of different platforms. This is especially useful for platforms\nthat are quite different from ones own platform, like e.g. windows in my case.\n\nOf course everybody can set this up for themselves. However, doing so well is\na significant effort, particularly if windows is to be supported well. And\ndoubly so if useful things like getting backtraces for crashes is desirable\n([1])\n\nWe do a form of pre-commit CI via cfbot. That is valuable. But it's not really\ncomparable to having CI in tree - one need to post to the list and one cannot\nadjust the dependencies etc installed for the CI runs.\n\n\nNew contributors (and quite a bit of older ones too) IMO expect to be able to\nsee whether their changes work as-is, without sending a patch to the list.\n\n\nAn obvious criticism of the effort to put CI runner infrastructure into core\nis that they are effectively all proprietary technology, and that we should be\nhesistant to depend too much on one of them. I think that's a valid\nconcern. However, once one CI integration is done, a good chunk (but not all!)\nthe work is transferrable to another CI solution, which I do think reduces the\ndependency sufficiently.\n\n\nThe attached patch adds CI using cirrus-ci. The reason for choosing cirrus\nwere that\na) Thomas has ended up using cirrus for cfbot\nb) cirrus provides a comparatively wide variety of operating systems\nc) it allows custom VM images to be used.\nd) it does not require a login to look at\n\nc) is very valuable to be able to test e.g. upcoming linux versions,\npre-installing software on systems that do not support docker (freebsd), and\nbeing faster to boot once the image is more than a trivial size. 
I've created\na number of images for testing of the aio patchset [2]\n\n\nRight now the patch attached\n- runs check-world on FreeBSD, Linux, macOS - all using gcc\n - freebsd, linux use a custom generated image\n - macOS installs missing dependencies at runtime, with some caching\n - all use ccache to make subsequent compilation faster\n- runs all the tests I could find on windows, via vcregress.pl\n- checks for compiler warnings on linux, with both clang and gcc\n\n- captures all logs after a failing run\n- generates backtraces from core files (the output format differs between platforms)\n- allows to limit CI to certain OSs, by adding\n ci-os-only: (freebsd|linux|macos|windows)+ to the commit message\n (useful when fixing a platform dependent problem)\n\nExample output of a\n- successful run: https://cirrus-ci.com/build/4625606928236544\n- github interface for the same: https://github.com/anarazel/postgres/runs/3772435617\n- failed run on windows, with backtrace: https://cirrus-ci.com/task/6640561307254784?logs=cat_dumps#L150\n\nComments? Rotten tomatoes?\n\n\nThere's some polishing we should do before actually adding this to the\ntree. But I wanted to discuss the idea before investing even more time.\n\nOne policy discussion that we'd have to have is who should control the images\nused for CI. Right now that's on my personal google cloud account - which I am\nhappy to do, but medium - long term that'd not be optimal.\n\n\nThanks to Thomas - I based this on hist .cirrus.yml file. And made several\ncontributions later. Also thanks to Andrew, who helped out with some windows\nissues I hit at some point...\n\nGreetings,\n\nAndres Freund\n\n[1] I did get this to work on windows, but it does require a small set of\nchanges to work reliably, I'll start a separate thread about it.\n[2] https://github.com/anarazel/pg-vm-images/",
"msg_date": "Fri, 1 Oct 2021 15:27:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Adding CI to our tree"
},
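The ci-os-only commit-message convention described above can be exercised with plain git. This is an illustrative sketch in a throwaway repository; the commit text, identity, and directory are made up:

```shell
# Demonstrate the "ci-os-only:" commit-message trailer in a throwaway
# repository. The commit message and identity are illustrative only.
demo=$(mktemp -d)
cd "$demo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m "Fix a FreeBSD-specific test failure" \
    -m "ci-os-only: freebsd linux"
# CI can restrict itself to the listed OSs by inspecting the latest
# commit message for the trailer:
git log -1 --format=%B | grep "^ci-os-only:"
```

Pushing such a commit to a Cirrus-enabled fork would then run only the named OS tasks, which is handy when iterating on a platform-specific failure.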
{
"msg_contents": "On Fri, Oct 1, 2021 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> For several development efforts I found it to be incredibly valuable to push\n> changes to a personal repository and see a while later whether tests succeed\n> on a number of different platforms. This is especially useful for platforms\n> that are quite different from ones own platform, like e.g. windows in my case.\n\n> An obvious criticism of the effort to put CI runner infrastructure into core\n> is that they are effectively all proprietary technology, and that we should be\n> hesistant to depend too much on one of them. I think that's a valid\n> concern. However, once one CI integration is done, a good chunk (but not all!)\n> the work is transferrable to another CI solution, which I do think reduces the\n> dependency sufficiently.\n\nI agree with everything you've said, including the nuanced parts about\nthe possible downsides.\n\nWe already know what happens when one of these CI providers stops\nproviding open source projects with free resources, because that's\nexactly what happened with Travis CI: projects that use their\ninfrastructure are mildly inconvenienced. I can't see any real notable\ndownside, as long as we just use the resources that they make\navailable for development work. Clearly these services could never\nreplace the buildfarm.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 1 Oct 2021 16:04:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 01:49:39 +0200, 0010203112132233 wrote:\n> On Sat, 2 Oct 2021 at 00:28, Andres Freund <andres@anarazel.de> wrote:\n> > New contributors (and quite a bit of older ones too) IMO expect to be able to\n> > see whether their changes work as-is, without sending a patch to the list.\n> \n> Have they checked 'make installcheck-world'? I believe that is one of\n> the first action items on the 'So, you want to become a developer?'\n> wiki page, after downloading the sources. Of course, that is limited\n> to only the environment of the user, but that's what they're generally\n> developing for, right?\n\nIf you want to get a change into postgres, it almost always needs to actually\nwork on all operating systems, and always needs to at least not cause build\nfailures on all platforms.\n\n\n\n> Furthermore, after looking it through, I think that Cirrus is an\n> unfortunate choice as a CI platform of preference, as you cannot use\n> it without access to Github (which is problematic for people located\n> in certain localities due to USA regulations).\n\nI agree that it's not optimal that cirrus isn't available on all git hosting\nplatforms. Hence saying that I think it's likely we'd end up adding a few more\nplatforms over time. If we factor the meat of the work into an helper script,\nso that the CI specific bit is just a couple invocation of that script, it's\nnot a lot of overhead to have 2-3 CI platforms.\n\n\n> If we're going to include CI configuration for private use, I'd prefer if it\n> were a CI that can be enjoyed in private without pushing code to a 3rd\n> party.\n\nFWIW, you can use cirrus locally on your machine:\nhttps://github.com/cirruslabs/cirrus-cli\n\nIt'll not be able to run all kinds of tasks though (e.g. 
no windows docker on\na linux host, dealing with the license costs for that presumably would be\nnontrivial).\n\n\n> Lastly, I consider CI configuration similar to IDE configuration: each\n> developer has their own preferred tools which they use, but we don't\n> favour one over the other. We don't include IDE-specific configuration\n> files either, or at least, the policy is against that.\n> \n> So, I greatly appreciate the effort, but I don't think this is\n> something that should be committed into core. Maybe as a dedicated\n> wiki page detailing configurations for CI, similar to the Buildfarm\n> page?\n\nThat doesn't scale - I've actually added CI to all my substantial development\nwork in my private branches, and it's pretty annoying to need to do so every\ntime. And there's many developers who won't go through the effort most of the\ntime.\n\nIt's not like this forces you to use cirrus or anything. For people that don't\nwant to use CI, It'll make cfbot a bit more effective (because people can\nadjust what it tests as appropriate for $patch), but that's it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 1 Oct 2021 17:10:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 1:10 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-10-02 01:49:39 +0200, 0010203112132233 wrote:\n> > Furthermore, after looking it through, I think that Cirrus is an\n> > unfortunate choice as a CI platform of preference, as you cannot use\n> > it without access to Github (which is problematic for people located\n> > in certain localities due to USA regulations).\n>\n> I agree that it's not optimal that cirrus isn't available on all git hosting\n> platforms. Hence saying that I think it's likely we'd end up adding a few more\n> platforms over time. If we factor the meat of the work into an helper script,\n> so that the CI specific bit is just a couple invocation of that script, it's\n> not a lot of overhead to have 2-3 CI platforms.\n\nBTW I think they might be considering supporting other code hosting\nplatforms (at least they ask for feedback on this at\nhttps://cirrus-ci.org/guide/quick-start/ ).\n\n> > Lastly, I consider CI configuration similar to IDE configuration: each\n> > developer has their own preferred tools which they use, but we don't\n> > favour one over the other. We don't include IDE-specific configuration\n> > files either, or at least, the policy is against that.\n\nWe have some files in the tree to help users of Emacs, vim, and even\nmake github format text the way we like.\n\nPersonally, I think that if someone is willing to develop and maintain\nhigh quality CI control files that work for any public\nfree-for-open-source CI system, then we should accept them too. It\ncosts very little to have a few .something.yml files at top level. If\nat any point the file for a given provider is showing signs of being\nunmaintained, we can remove it. 
Personally, I'm willing and able to\nhelp maintain Cirrus control files, not least because it means that\ncfbot will become simpler and will match exactly what you can get in\nyour own github account.\n\nI really like Cirrus because our project has more portability concerns\nthan most, and most other CIs are like \"we got both kinds, country and\nwestern!\". I wanted to add FreeBSD to cfbot, which is something they\nadvertise as a feature, but it looks like at least 3 other OSes we\ntarget would probably work just as well given a suitable image.\n\n\n",
"msg_date": "Sat, 2 Oct 2021 15:41:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> It's not like this forces you to use cirrus or anything. For people that don't\n> want to use CI, It'll make cfbot a bit more effective (because people can\n> adjust what it tests as appropriate for $patch), but that's it.\n\nYeah. I cannot see any reason to object to Andres' 0002 patch: you can\njust ignore those files if you don't want to use cirrus. It does set a\nprecedent that we'd also accept infrastructure for other CI systems,\nbut as long as they're similarly noninvasive, why not? (Maybe there\nneeds to be one more directory level though, ie ci/cirrus/whatever.\nI don't want to end up with one toplevel directory per CI platform.)\n\nI don't know enough about Windows to evaluate 0001, but I'm a little\nworried about it because it looks like it's changing our *production*\nerror handling on that platform.\n\nAs for 0003, wasn't that committed already?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 11:05:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 2 Oct 2021, at 00:27, Andres Freund <andres@anarazel.de> wrote:\n\n> For several development efforts I found it to be incredibly valuable to push\n> changes to a personal repository and see a while later whether tests succeed\n> on a number of different platforms. This is especially useful for platforms\n> that are quite different from ones own platform, like e.g. windows in my case.\n\nSame, and for my case I run several CI jobs to compile/test against different\nOpenSSL versions etc.\n\n> Of course everybody can set this up for themselves. However, doing so well is\n> a significant effort, particularly if windows is to be supported well. And\n> doubly so if useful things like getting backtraces for crashes is desirable\n> ([1])\n\n+1 on adding these, rather than having everyone duplicate the effort. Those\nwho don't want to use them can disregard them.\n\n> Right now the patch attached\n> - runs check-world on FreeBSD, Linux, macOS - all using gcc\n> - freebsd, linux use a custom generated image\n> - macOS installs missing dependencies at runtime, with some caching\n> - all use ccache to make subsequent compilation faster\n> - runs all the tests I could find on windows, via vcregress.pl\n> - checks for compiler warnings on linux, with both clang and gcc\n\nWhy not compiling with OpenSSL on FreeBSD and macOS? 
On FreeBSD all you need\nis --with-ssl=openssl while on macOS you need to point to the headers and libs\nlike:\n\n --with-includes=/usr/local/include:/usr/local/opt/openssl/include --with-libs=/usr/local/libs:/usr/local/opt/openssl/lib\n\nOne thing to note for Cirrus on macOS (I've never seen it anywhere else) is\nthat it intermittently will fail on a too long socketpath:\n\n Unix-domain socket path \"/private/var/folders/wh/z5_y2cv53sg24tzvtw_f_y1m0000gn/T/cirrus-ci-build/src/bin/pg_upgrade/.s.PGSQL.51696\" is too long (maximum 103 bytes)\n\nExporting PGSOCKETDIR can avoid that annoyance.\n\n+ tests_script:\n+ - su postgres -c 'ulimit -c unlimited ; ${TIMEOUT_CMD} make -s ${CHECK} ${CHECKFLAGS} -j8'\nDon't you need PG_TEST_EXTRA=ssl here to ensure the src/test/ssl tests are run?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 2 Oct 2021 20:42:00 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
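Daniel's flags can be collected into a small build wrapper. This is a hedged sketch: the /usr/local/opt/openssl prefix is the typical Homebrew location and may differ per machine, and the script only assembles and prints the configure invocation rather than running it.

```shell
# Assemble the macOS configure flags quoted above. OPENSSL_PREFIX is
# the usual Homebrew path for OpenSSL and is an assumption here.
OPENSSL_PREFIX=/usr/local/opt/openssl

CONFIGURE_FLAGS="--with-ssl=openssl"
CONFIGURE_FLAGS="$CONFIGURE_FLAGS --with-includes=/usr/local/include:$OPENSSL_PREFIX/include"
CONFIGURE_FLAGS="$CONFIGURE_FLAGS --with-libs=/usr/local/lib:$OPENSSL_PREFIX/lib"

# Short socket directory, to stay under the 103-byte Unix-domain
# socket path limit that the Cirrus macOS workers can hit.
PGSOCKETDIR=/tmp
export PGSOCKETDIR

# Print, rather than run, the resulting invocation.
echo "./configure $CONFIGURE_FLAGS"
```

On FreeBSD only `--with-ssl=openssl` is needed, as noted above; setting `PG_TEST_EXTRA="ssl"` additionally opts the test suite into the SSL tests.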
{
"msg_contents": "Hi,\n\nOn 2021-10-02 20:42:00 +0200, Daniel Gustafsson wrote:\n> > On 2 Oct 2021, at 00:27, Andres Freund <andres@anarazel.de> wrote:\n> > Right now the patch attached\n> > - runs check-world on FreeBSD, Linux, macOS - all using gcc\n> > - freebsd, linux use a custom generated image\n> > - macOS installs missing dependencies at runtime, with some caching\n> > - all use ccache to make subsequent compilation faster\n> > - runs all the tests I could find on windows, via vcregress.pl\n> > - checks for compiler warnings on linux, with both clang and gcc\n> \n> Why not compiling with OpenSSL on FreeBSD and macOS? On FreeBSD all you need\n> is --with-ssl=openssl while on macOS you need to point to the headers and libs\n> like:\n>\n> --with-includes=/usr/local/include:/usr/local/opt/openssl/include --with-libs=/usr/local/libs:/usr/local/opt/openssl/lib\n\nYea, there's several things like that, that should be added. The CI files\noriginated from development where breakage around SSL wasn't likely (AIO,\nshared memory stats, procarray scalability etc), so I didn't focussed on that\nangle.\n\nNeeding to get all that stuff right on multiple platforms is one of the\nreasons why I think having this thing in-tree would be good. No need for\neveryone to discover the magic incantations themselves. Even if you e.g. might\nwant to extend them to test multiple SSL versions or such, it's a lot easier\nto do that if the basics are there.\n\n\n> One thing to note for Cirrus on macOS (I've never seen it anywhere else) is\n> that it intermittently will fail on a too long socketpath:\n\nI've seen it somewhere else before. It wasn't even intermittent - it always\nfailed. 
I worked around that by setting CIRRUS_WORKING_DIR: ${HOME}/pgsql/ -\nalso made output including filenames easier to read ;)\n\n\n> + tests_script:\n> + - su postgres -c 'ulimit -c unlimited ; ${TIMEOUT_CMD} make -s ${CHECK} ${CHECKFLAGS} -j8'\n> Don't you need PG_TEST_EXTRA=ssl here to ensure the src/test/ssl tests are run?\n\nProbably. I quickly added that stuff, we'll see how many mistakes I made:\nhttps://cirrus-ci.com/build/5846034501861376\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Oct 2021 12:41:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 12:41:07 -0700, Andres Freund wrote:\n> On 2021-10-02 20:42:00 +0200, Daniel Gustafsson wrote:\n> > + tests_script:\n> > + - su postgres -c 'ulimit -c unlimited ; ${TIMEOUT_CMD} make -s ${CHECK} ${CHECKFLAGS} -j8'\n> > Don't you need PG_TEST_EXTRA=ssl here to ensure the src/test/ssl tests are run?\n> \n> Probably. I quickly added that stuff, we'll see how many mistakes I made:\n> https://cirrus-ci.com/build/5846034501861376\n\nI wonder if we shouldn't stop skipping the ssl / kerberos / ldap (and perhaps\nothers) tests in the makefile, and instead do so in the tap tests\nthemselves. Then one can see them included as the skipped in the tap result\noutput, which seems like it'd make it easier to discover them?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Oct 2021 12:45:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 2 Oct 2021, at 21:45, Andres Freund <andres@anarazel.de> wrote:\n\n> I wonder if we shouldn't stop skipping the ssl / kerberos / ldap (and perhaps\n> others) tests in the makefile, and instead do so in the tap tests\n> themselves. Then one can see them included as the skipped in the tap result\n> output, which seems like it'd make it easier to discover them?\n\nI am definitely in favor of doing that, better to see them skipped rather than\nhaving to remember to opt in. We even do so already to some extent already,\nlike for example the SSL tests:\n\n if ($ENV{with_ssl} ne 'openssl')\n {\n plan skip_all => 'OpenSSL not supported by this build';\n }\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 2 Oct 2021 21:48:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 2 Oct 2021, at 21:41, Andres Freund <andres@anarazel.de> wrote:\n\n>> One thing to note for Cirrus on macOS (I've never seen it anywhere else) is\n>> that it intermittently will fail on a too long socketpath:\n> \n> I've seen it somewhere else before. It wasn't even intermittent - it always\n> failed. I worked around that by setting CIRRUS_WORKING_DIR: ${HOME}/pgsql/ -\n> also made output including filenames easier to read ;)\n\nAha, nice trick! Didn't know about that one but that's easier than setting\nspecific dirs via PG* environment vars.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 2 Oct 2021 21:49:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 11:05:20 -0400, Tom Lane wrote:\n> Yeah. I cannot see any reason to object to Andres' 0002 patch: you can\n> just ignore those files if you don't want to use cirrus. It does set a\n> precedent that we'd also accept infrastructure for other CI systems,\n> but as long as they're similarly noninvasive, why not?\n\nExactly.\n\n\n> (Maybe there needs to be one more directory level though, ie\n> ci/cirrus/whatever. I don't want to end up with one toplevel directory per\n> CI platform.)\n\nGood question - it definitely shouldn't be one toplevel directory per CI\nplatform (although some will require their own hidden toplevel directories,\nlike .github/workflows etc). I'd hope to share a bunch of the infrastructure\nbetween them over time, so perhaps we don't need a deeper hierarchy.\n\n\n> I don't know enough about Windows to evaluate 0001, but I'm a little\n> worried about it because it looks like it's changing our *production*\n> error handling on that platform.\n\nYea. It's clearly not ready as-is - it's the piece that I was planning to\nwrite a separate email about.\n\n\nIt's hard to understand what *precisely* SEM_NOGPFAULTERRORBOX etc do.\n\nWhat I do know is that without the _set_abort_behavior() stuff abort() doesn't\ntrigger windows' \"crash\" paths in at least debugging builds, and that the\nSetErrorMode() and _CrtSetReportMode() changes are necessary to get segfaults\nto reach the crash paths.\n\nThe in-tree behaviour turns out to make debugging on windows a major pain, at\nleast when compiling with msvc. Crashes never trigger core dumps or \"just in\ntime\" debugging (their term for invoking a debugger upon crash), so one has to\nattach to processes before they crash, to have any chance of debugging.\n\nAs far as I can tell this also means that at least for debugging builds,\npgwin32_install_crashdump_handler() is pretty much dead weight -\ncrashDumpHandler() never gets invoked. 
I think it may get invoked for abort()s\nin production builds, but probably not for segfaults.\n\nAnd despite SEM_NOGPFAULTERRORBOX we display those annoying \"popup\" boxes\ntelling us about the crash and giving the option to retry, ignore, something\nsomething. It's all a bit baffling.\n\n\n\n> As for 0003, wasn't that committed already?\n\nNot at the time I was writing the email, but now it is, yes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Oct 2021 12:59:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 20:42:00 +0200, Daniel Gustafsson wrote:\n> Same, and for my case I run several CI jobs to compile/test against different\n> OpenSSL versions etc.\n\nOn that note: Did you do this for windows? If so, I'd rather not figure that\nout myself...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Oct 2021 13:01:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 2 Oct 2021, at 22:01, Andres Freund <andres@anarazel.de> wrote:\n> On 2021-10-02 20:42:00 +0200, Daniel Gustafsson wrote:\n>> Same, and for my case I run several CI jobs to compile/test against different\n>> OpenSSL versions etc.\n> \n> On that note: Did you do this for windows? If so, I'd rather not figure that\n> out myself...\n\nNot with Cirrus, I've been using Appveyor for Windows and they provide 1.0.2 -\n3.0.0 which can easily set in config.pl with for example:\n\n $config->{openssl} = 'C:\\OpenSSL-v111-Win64';\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 2 Oct 2021 22:18:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 10/2/21 11:05 AM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> It's not like this forces you to use cirrus or anything. For people that don't\n>> want to use CI, It'll make cfbot a bit more effective (because people can\n>> adjust what it tests as appropriate for $patch), but that's it.\n> Yeah. I cannot see any reason to object to Andres' 0002 patch: you can\n> just ignore those files if you don't want to use cirrus. \n\n\n\nYeah. I enable cirrus selectively on my github repos, which makes it \nclose to impossible to get an unwanted effect.\n\n\nOne of the things I like about this is that it institutionalizes some\nknowledge that has hitherto been mostly private. I have a lot of this in\na setup I use for spinning up test instances, but this makes a lot of\nthat sort of knowledge more broadly available.\n\n\nI hope it will also encourage people to test more widely, given how easy\nit will make it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 2 Oct 2021 16:41:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I hope it will also encourage people to test more widely, given how easy\n> it will make it.\n\nIf you'd like that, there would need to be some (ahem) documentation\nof how to use it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 16:44:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 16:44:44 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > I hope it will also encourage people to test more widely, given how easy\n> > it will make it.\n>\n> If you'd like that, there would need to be some (ahem) documentation\n> of how to use it.\n\nYea, definitely necessary. Where would we want it to be? ci/README.md? That'd\nbe viewable on the various git hosting platforms. I guess there's an argument\nfor it to be in the sgml docs, but that doesn't seem all that useful in this\ncase.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 2 Oct 2021 14:10:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-10-02 16:44:44 -0400, Tom Lane wrote:\n>> If you'd like that, there would need to be some (ahem) documentation\n>> of how to use it.\n\n> Yea, definitely necessary. Where would we want it to be? ci/README.md? That'd\n> be viewable on the various git hosting platforms. I guess there's an argument\n> for it to be in the sgml docs, but that doesn't seem all that useful in this\n> case.\n\nA README seems plenty good enough to me. Maybe -0.1 for making\nit .md rather than plain text ... plain text is our habit everywhere\nelse AFAIR.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 17:55:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi, \n\nOn October 2, 2021 1:18:38 PM PDT, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 2 Oct 2021, at 22:01, Andres Freund <andres@anarazel.de> wrote:\n>> On 2021-10-02 20:42:00 +0200, Daniel Gustafsson wrote:\n>>> Same, and for my case I run several CI jobs to compile/test against different\n>>> OpenSSL versions etc.\n>> \n>> On that note: Did you do this for windows? If so, I'd rather not figure that\n>> out myself...\n>\n>Not with Cirrus, I've been using Appveyor for Windows and they provide 1.0.2 -\n>3.0.0 which can easily set in config.pl with for example:\n>\n> $config->{openssl} = 'C:\\OpenSSL-v111-Win64';\n\nGot the build part working (although the state of msvc compatible openssl distribution on windows seems a bit scary). However the ssl tests don't fully succeed:\n\nhttps://cirrus-ci.com/task/6264790323560448?logs=ssl#L655\n\n I didn't see code in the bf client code running the test so perhaps that's not too surprising :/\n\nDid you run those tests on windows?\n\nRegards,\n\nAndres\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sat, 02 Oct 2021 21:05:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 21:05:17 -0700, Andres Freund wrote:\n> Got the build part working (although the state of msvc compatible openssl\n> distribution on windows seems a bit scary). However the ssl tests don't\n> fully succeed:\n> \n> https://cirrus-ci.com/task/6264790323560448?logs=ssl#L655\n> \n> I didn't see code in the bf client code running the test so perhaps that's\n> not too surprising :/\n> \n> Did you run those tests on windows?\n\nAs you can see in the test output, every mismatch prints the whole file,\ndespite only intending to show the tail. Which appears to be because the\nwindows portion of 3c5b0685b921 doesn't actually work. The reason for that in\nturn is that afaict the setFilePointer doesn't change the file position in a\nway that affects perl.\n\nConsequently, if I force the !win32 path, the tests pass.\n\nAt first I assumed the cause of this is that while the setFilePointer() modifies the\nstate of the underlying handle, it doesn't actually let perl know about\nthat. Due to buffering etc perl likely has its own bookeeping about the\nposition in the file. There's some pretty clear hints in\nhttps://perldoc.perl.org/functions/seek\n\nBut the problem turns out to be that it's bogus to pass $fh to\nsetFilePointer(). That's a perl handle, not an win32 handle. Fixing that seems\nto make the tests pass.\n\n\nWhy did 3c5b0685b921 choose to use setFilePointer() in the first place? At\nthis point it's a perl filehandle, so we should just use perl seek?\n\n\nLeaving the concrete breakage aside, I'm somewhat unhappy that there's not a\nsingle comment explaining why TestLib.pm is trying to use native windows\nAPIs.\n\nIsn't the code as-is also \"leaking\" an open IO::Handle? There's a\nCloseHandle($fHandle), but nothing is done to $fh. But perhaps there's some\nperl magic cleaning things up? Even if so, loks like just closing $fh will\nclose the handle as well...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 3 Oct 2021 10:18:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "ssl tests fail on windows / slurp_file() offset doesn't work on win"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-03 10:18:31 -0700, Andres Freund wrote:\n> As you can see in the test output, every mismatch prints the whole file,\n> despite only intending to show the tail. Which appears to be because the\n> windows portion of 3c5b0685b921 doesn't actually work. The reason for that in\n> turn is that afaict the setFilePointer doesn't change the file position in a\n> way that affects perl.\n> \n> Consequently, if I force the !win32 path, the tests pass.\n> \n> At first I assumed the cause of this is that while the setFilePointer() modifies the\n> state of the underlying handle, it doesn't actually let perl know about\n> that. Due to buffering etc perl likely has its own bookeeping about the\n> position in the file. There's some pretty clear hints in\n> https://perldoc.perl.org/functions/seek\n> \n> But the problem turns out to be that it's bogus to pass $fh to\n> setFilePointer(). That's a perl handle, not an win32 handle. Fixing that seems\n> to make the tests pass.\n\nIt does (I only let it run to the ssl test, then pushed a newer revision):\nhttps://cirrus-ci.com/task/5345293928497152?logs=ssl#L5\n\n\n> Why did 3c5b0685b921 choose to use setFilePointer() in the first place? At\n> this point it's a perl filehandle, so we should just use perl seek?\n> \n> \n> Leaving the concrete breakage aside, I'm somewhat unhappy that there's not a\n> single comment explaining why TestLib.pm is trying to use native windows\n> APIs.\n> \n> Isn't the code as-is also \"leaking\" an open IO::Handle? There's a\n> CloseHandle($fHandle), but nothing is done to $fh. But perhaps there's some\n> perl magic cleaning things up? Even if so, loks like just closing $fh will\n> close the handle as well...\n\nI think something roughly like the attached might be a good idea. Runs locally\non linux, and hopefully still on windows\n\nhttps://cirrus-ci.com/build/4857291573821440\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 3 Oct 2021 10:30:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ssl tests fail on windows / slurp_file() offset doesn't work on\n win"
},
{
"msg_contents": "> On 3 Oct 2021, at 06:05, Andres Freund <andres@anarazel.de> wrote:\n\n> Did you run those tests on windows?\n\nSorry, failed to mention I only compile it for now, I hadn't reached trying to\nrun the tests yet. I see you started on that in this thread, so thank you for\nthat!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sun, 3 Oct 2021 22:37:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 10/3/21 1:30 PM, Andres Freund wrote:\n>\n>> Why did 3c5b0685b921 choose to use setFilePointer() in the first place? At\n>> this point it's a perl filehandle, so we should just use perl seek?\n>>\n>>\n>> Leaving the concrete breakage aside, I'm somewhat unhappy that there's not a\n>> single comment explaining why TestLib.pm is trying to use native windows\n>> APIs.\n>>\n>> Isn't the code as-is also \"leaking\" an open IO::Handle? There's a\n>> CloseHandle($fHandle), but nothing is done to $fh. But perhaps there's some\n>> perl magic cleaning things up? Even if so, loks like just closing $fh will\n>> close the handle as well...\n> I think something roughly like the attached might be a good idea. Runs locally\n> on linux, and hopefully still on windows\n>\n> https://cirrus-ci.com/build/4857291573821440\n>\n\nLooks sane, thanks.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 11:07:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: ssl tests fail on windows / slurp_file() offset doesn't work on\n win"
},
{
"msg_contents": "On 2021-10-04 11:07:07 -0400, Andrew Dunstan wrote:\n> Looks sane, thanks.\n\nThanks for looking. Pushed to all branches.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 13:49:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ssl tests fail on windows / slurp_file() offset doesn't work on\n win"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> - runs check-world on FreeBSD, Linux, macOS - all using gcc\n\nSmall correction: on macOS and FreeBSD it's using the vendor compiler,\nwhich is some kind of clang.\n\nBTW, on those two OSes there are some messages like this each time a\nsubmake dumps its output to the log:\n\n[03:36:16.591] fcntl(): Bad file descriptor\n\nIt seems worth putting up with these compared to the alternatives of\neither not using -j, not using -Otarget and having the output of\nparallel tests all mashed up and unreadable (that still happen\nsometimes but it's unlikely, because the submakes write() whole output\nchunks at infrequent intervals), or redirecting to a file so you can't\nsee the realtime test output on the main CI page (not so fun, you have\nto wait until it's finished and view it as an 'artifact'). I tried to\nwrite a patch for GNU make to fix that[1], let's see if something\nhappens.\n\n[1] https://savannah.gnu.org/bugs/?52922\n\n\n",
"msg_date": "Wed, 6 Oct 2021 17:01:53 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-06 17:01:53 +1300, Thomas Munro wrote:\n> On Sat, Oct 2, 2021 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > - runs check-world on FreeBSD, Linux, macOS - all using gcc\n>\n> Small correction: on macOS and FreeBSD it's using the vendor compiler,\n> which is some kind of clang.\n\nOh, oops. I guess that's even better ;).\n\n\n> I tried to write a patch for GNU make to fix that[1], let's see if something\n> happens.\n>\n> [1] https://savannah.gnu.org/bugs/?52922\n\nIt'd be nice to get rid of these... They're definitely confusing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Oct 2021 21:54:37 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 02.10.21 00:27, Andres Freund wrote:\n> The attached patch adds CI using cirrus-ci.\n\nI like this in principle. But I don't understand what the docker stuff \nis about. I have used Cirrus CI before, and didn't have to do anything \nabout Docker. This could use some explanation.\n\n\n",
"msg_date": "Sun, 10 Oct 2021 21:48:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-10 21:48:09 +0200, Peter Eisentraut wrote:\n> On 02.10.21 00:27, Andres Freund wrote:\n> > The attached patch adds CI using cirrus-ci.\n> \n> I like this in principle. But I don't understand what the docker stuff is\n> about. I have used Cirrus CI before, and didn't have to do anything about\n> Docker. This could use some explanation.\n\nYou don't *have* to do anything about docker - but especially for windows it\ntakes longer to build without your own container, because we'd need to install\nour dependencies every time. And that turns out to take a while.\n\nRight now the docker containers are built as part of CI (cirrus rebuilds them\nwhen the container definition changes), but that doesn't have to be that way,\nwe could do so independently of cirrus, so that they are usable on other\nplatforms as well - although it's advantageous to use the cirrus containers as\nthe base, as they're cached on the buildhosts.\n\n\nIn principle we could also use docker for the linux tests, but I found that we\ncan get better results using full blown virtual machines. Those I currently\nbuild from a separate repo, as mentioned upthread.\n\n\nThere is a linux docker container, but that currently runs a separate task\nthat compiles with -Werror for gcc, clang with / without asserts. That's a\nseparate task so that compile warnings don't prevent one from seeing whether\ntests worked etc.\n\nOne thing I was thinking of adding to the \"compile warning\" task was to\ncross-compile postgres from linux using mingw - that's a lot faster than\nrunning the windows builds, and it's not too hard to break that accidentally.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 10 Oct 2021 13:22:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, 2 Oct 2021 at 17:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > It's not like this forces you to use cirrus or anything. For people that don't\n> > want to use CI, It'll make cfbot a bit more effective (because people can\n> > adjust what it tests as appropriate for $patch), but that's it.\n\nI don't disagree on that part, but I fail to see what makes the\nsituations of an unused CI config file in the tree and an unused\n`/.idea/` or `/.vs/` specifier in the .gitignore [0][1] distinct\nenough for it to be resolved differently. Both are quality-of-life\nadditions for those that use that tool, while non-users of that tool\ncan ignore those configuration entries.\n\n> Yeah. I cannot see any reason to object to Andres' 0002 patch: you can\n> just ignore those files if you don't want to use cirrus. It does set a\n> precedent that we'd also accept infrastructure for other CI systems,\n> but as long as they're similarly noninvasive, why not? (Maybe there\n> needs to be one more directory level though, ie ci/cirrus/whatever.\n> I don't want to end up with one toplevel directory per CI platform.)\n\nWith the provided arguments I won't object to the addition of these\nconfig files, but I would appreciate it if a clear policy could be\nprovided on the inclusion of configurations for external tools that\nare not expected to be used by all users of the repository, such as\nCI, editors and IDEs.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/OS3PR01MB71593D78DD857C2BBA9FB824F2A69%40OS3PR01MB7159.jpnprd01.prod.outlook.com\n[1] https://www.postgresql.org/message-id/flat/15BFD11D-5D72-46B2-BDB1-2DF4E049C13D%40me.com\n\n\n",
"msg_date": "Thu, 21 Oct 2021 17:55:32 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I don't disagree on that part, but I fail to see what makes the\n> situations of an unused CI config file in the tree and an unused\n> `/.idea/` or `/.vs/` specifier in the .gitignore [0][1] distinct\n> enough for it to be resolved differently. Both are quality-of-life\n> additions for those that use that tool, while non-users of that tool\n> can ignore those configuration entries.\n\nUm ... I don't see a connection at all. One is talking about files\nwe put into the git tree, and one is talking about files that are\n*not* in the tree.\n\nWe do have a policy that files that are created by a supported build\nprocess should be .gitignore'd, so that might lead to more .gitignore\nentries as this idea moves ahead. I'm not on board though with the\nidea of .gitignore'ing anything that anybody anywhere thinks is junk.\nThat's more likely to lead to conflicts and mistakes than anything\nuseful. We expect developers to have personal excludesfile lists\nthat block off editor backup files and other cruft from the tools\nthat they personally use but are not required by the build.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 21 Oct 2021 12:04:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 10/21/21 5:55 PM, Matthias van de Meent wrote:\n> On Sat, 2 Oct 2021 at 17:05, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Andres Freund <andres@anarazel.de> writes:\n>>> It's not like this forces you to use cirrus or anything. For people that don't\n>>> want to use CI, It'll make cfbot a bit more effective (because people can\n>>> adjust what it tests as appropriate for $patch), but that's it.\n> \n> I don't disagree on that part, but I fail to see what makes the\n> situations of an unused CI config file in the tree and an unused\n> `/.idea/` or `/.vs/` specifier in the .gitignore [0][1] distinct\n> enough for it to be resolved differently. Both are quality-of-life\n> additions for those that use that tool, while non-users of that tool\n> can ignore those configuration entries.\n\nThere is a better solution to that. Just add those files to the global \ngitignore on your machine. You will want to ignore those files in all \ngit repositories on your machine anyway. On the other hand the \nconfiguration files for the CI are relevant to just the PostgreSQL repo.\n\nAndreas\n\n\n",
"msg_date": "Thu, 21 Oct 2021 18:35:08 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": ">Just add those files to the global gitignore on your machine\n\nWhile global gitignore is a nice feature, it won't protect users who do not\nknow they need to create a global ignore file.\nAdding explicit excludes for well-known temporary files into PostgreSQL\nsources makes it easier to work with the sources for everybody.\nLess manual configuration is better for having a productive environment.\n\nOn top of that, there is even a use-case for having .idea folder in Git:\n.idea/icon.png and .idea/icon_dark.png files are displayed in JetBrains\nIDEs so the list of projects becomes easier to distinguish.\nAFAIK, a standard icon configuration does not yet exist:\nhttps://github.com/editorconfig/editorconfig/issues/425\n\nHere's are samples:\nhttps://github.com/junit-team/junit5/blob/4ddc786728bc3fbc68d6a35d2eeeb63eb3e85609/.idea/icon.png\n,\nhttps://github.com/gradle/gradle/tree/1be71a9cd8882b08a9f8728d44eac8f65a33fbda/.idea\n\nVladimir\n\n>Just add those files to the global gitignore on your machineWhile global gitignore is a nice feature, it won't protect users who do not know they need to create a global ignore file.Adding explicit excludes for well-known temporary files into PostgreSQL sources makes it easier to work with the sources for everybody.Less manual configuration is better for having a productive environment.On top of that, there is even a use-case for having .idea folder in Git:.idea/icon.png and .idea/icon_dark.png files are displayed in JetBrains IDEs so the list of projects becomes easier to distinguish.AFAIK, a standard icon configuration does not yet exist: https://github.com/editorconfig/editorconfig/issues/425Here's are samples: https://github.com/junit-team/junit5/blob/4ddc786728bc3fbc68d6a35d2eeeb63eb3e85609/.idea/icon.png, https://github.com/gradle/gradle/tree/1be71a9cd8882b08a9f8728d44eac8f65a33fbda/.ideaVladimir",
"msg_date": "Fri, 22 Oct 2021 12:46:38 +0300",
"msg_from": "Vladimir Sitnikov <sitnikov.vladimir@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nAttached is an updated version of the CI patches.\n\nChanges:\n- more optional features are enabled on various platforms, including\n building with openssl on windows\n- added somewhat minimal, README explaining how CI can enabled in a\n repository\n- some cleanup to the windows crash reporting support. I also moved the\n code main.c code changes to after the CI stuff themselves, as it might\n be a bit harder to get into a committable shape (at least for me)\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 31 Oct 2021 22:57:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nI reviewed the first patch in this set\n(v2-0001-ci-Add-CI-for-FreeBSD-Linux-MacOS-and-Windows-uti.patch).\n\nFor the README, I found the instructions very clear. My only concern is\nthat the cirrus-ci UI will change and the instructions on how to enable\ncirrus-ci on a repository will not be accessible in the same way in the\nfuture.\nThat being said, I found your instructions easier to follow than those\non [1].\nPerhaps it is better to wait until it becomes a problem and then, at\nthat point, change the README to guide people to the quickstart link.\n\nI have attached a patch which does a small refactor using a yaml anchor\nand aliases (tried it and it seems to work for me).\n\nA few questions and thoughts:\n\n- do you not need to change the default core resource limits for\n FreeBSD?\n\n- Would you find it valuable to set a few more coredump_filter bits?\n Might be worth setting bits 2 and 3 (see [2])-- in addition to the\n defaults (on Linux -- I don't know what the equivalent is on other\n platforms).\n\n- I found this line a bit confusing, so maybe it is worth a comment\n sysinfo_script:\n - export || true\n\n- For the docker files, I think it is recommended to run \"docker build\"\n only from within the specific build context (not in the top-level\n directory), so I don't think you should need the dockerignore file.\n\n Also, instead of putting the two docker files in the same directory,\n you could put them in dedicated directories (making those directories\n their build context). That way if you change one you don't end up\n rebuilding the other.\n\n- In ci/docker/linux_debian_bullseye, you can make this change:\n - apt-get clean\n + apt-get clean && \\\n + rm -f /var/lib/apt/lists/*\n\n to make that layer smaller.\n\n- Melanie\n\n[1] https://cirrus-ci.org/guide/quick-start/\n[2] https://man7.org/linux/man-pages/man5/core.5.html",
"msg_date": "Fri, 19 Nov 2021 17:17:44 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Nov 20, 2021 at 11:17 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> - do you not need to change the default core resource limits for\n> FreeBSD?\n\nUnfortunately the performance really sucks on that FreeBSD CI system\nif you crank it up, and I haven't had time to figure out why yet :-/\nPossibly something to do with large numbers of files being created and\nunlinked concurrently under -jX on slow storage is going awry...\n\n\n",
"msg_date": "Sat, 20 Nov 2021 11:23:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-11-19 17:17:44 -0500, Melanie Plageman wrote:\n> For the README, I found the instructions very clear. My only concern is\n> that the cirrus-ci UI will change and the instructions on how to enable\n> cirrus-ci on a repository will not be accessible in the same way in the\n> future.\n\nI think we can just adjust things at that point, I'm not too worried about\npast instructions not working.\n\n\n> I have attached a patch which does a small refactor using a yaml anchor\n> and aliases (tried it and it seems to work for me).\n\nOh, neat. Yaml is so weird.\n\n\n\n> - Would you find it valuable to set a few more coredump_filter bits?\n> Might be worth setting bits 2 and 3 (see [2])-- in addition to the\n> defaults (on Linux -- I don't know what the equivalent is on other\n> platforms).\n\nI don't think we need 2/3 - we don't have file backed mappings. In some\nsituations setting bit 6 (shared huge pages) would make sense - but here we\ndon't configure them...\n\n\n> - I found this line a bit confusing, so maybe it is worth a comment\n> sysinfo_script:\n> - export || true\n\nWe can probably just get rid of the ||. It was just so that a missing 'export'\nbuiltin didn't cause execution to abort. But that won't randomly vanish, so\nit's fine.\n\n\n> - For the docker files, I think it is recommended to run \"docker build\"\n> only from within the specific build context (not in the top-level\n> directory), so I don't think you should need the dockerignore file.\n\nWe don't have control over that in this case - it's cirrus invoking docker,\nand it just uses the whole repo as the context.\n\n\n> - In ci/docker/linux_debian_bullseye, you can make this change:\n> - apt-get clean\n> + apt-get clean && \\\n> + rm -f /var/lib/apt/lists/*\n\nMight not matter too much compared to the size of the whole thing, but it\ndefinitely won't hurt...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 19 Nov 2021 15:02:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nAttached is an updated version of the CI patches. An example of a test run\nwith the attached version of this\nhttps://cirrus-ci.com/build/6501998521483264\n\nI again included the commit allowing crash dumps to be collected on windows,\nbut I don't think it can be merged as-is, and should be left for later.\n\n\nChanges since v2:\n- Address review comments\n\n- Build with further optional features enabled. I think just about everything\n reasonable is now enabled on freebsd, linux and macos. There's quite a bit\n more that could be done on windows, but I think it's good enough for now.\n\n- I added cross-compilation to windows from linux, to the \"warnings\"\n task. Occasionally there are build-system issues specific to\n cross-compilation, and the set of warnings are different.\n\n- Docs are now built as part of the 'CompilerWarnings' task.\n\n- I improved the CI README a bit more, in particular I added docs for the\n 'ci-os-only' tag I added to the CI logic, which lets one select which\n operating systems test get to run on.\n\n- Some of the 'Warnings' tasks now build with --enable-dtrace (once with\n optimizations, once without). It's pretty easy to break probes without\n seeing the problem locally.\n\n- Switched to using PG_TEST_USE_UNIX_SOCKETS for windows. Without that I was\n seeing occasional spurious test failures due to the use of PROVE_FLAGS=\n -j10, to make the otherwise serial execution of tests on windows bearable.\n\n- switch macos task to use monterey\n\n- plenty smaller changes / cleanups\n\n\nThere of course is a lot more that can be done [1], but I am pretty happy with\nwhat this covers.\n\n\nI'd like to commit this soon. There's two aspects that perhaps deserve a bit\nmore discussion before doing so though:\n\nOne I explicitly brought up before:\n\nOn 2021-10-01 15:27:52 -0700, Andres Freund wrote:\n> One policy discussion that we'd have to have is who should control the images\n> used for CI. 
Right now that's on my personal google cloud account - which I am\n> happy to do, but medium - long term that'd not be optimal.\n\nThe proposed CI script uses custom images to run linux and freebsd tests. They\nare automatically built every day from the repository https://github.com/anarazel/pg-vm-images/\n\nThese images have all the prerequisites pre-installed. For Linux something\nsimilar can be achieved by using dockerfiles referenced in the .cirrus.yml file,\nbut for FreeBSD that's not available. Installing the necessary dependencies on\nevery run is too time intensive. For linux, the tests run a lot faster in\nfull-blown VMs than in docker, and full VMs allow a lot more control /\ndebugging.\n\nI think this may be OK for now, but I also could see arguments for wanting to\ntransfer the image specifications and the google account to PG properties.\n\n\nThe second attention-worthy point is the choice of a new toplevel ci/\ndirectory as the location for resources referenced by CI. A few other\nprojects also use ci/, but I can also see arguments for moving the contents to\ne.g. src/tools/ci or such?\n\nGreetings,\n\nAndres Freund\n\n\n[1] Some ideas for what could make sense to extend CI to in the future:\n\n- also test with msys / mingw on windows\n\n- provide more optional dependencies for windows build\n\n- Extend the set of compiler warnings - as the compiler version is controlled,\n we could be more aggressive than we can be via configure.ac.\n\n- Add further distributions / platforms. Possibly as \"manual\" tasks - the\n amount of resources one CI user gets is limited, so running tests on all\n platforms all the time would make tests take longer. Interesting things\n could be:\n\n - further linux distributions, particularly long-term supported ones\n\n - Some of the other BSDs. 
There currently are no pre-made images for\n openbsd/netbsd, but it shouldn't be too hard to script building them.\n\n - running some tests on ARM could be interesting, cirrus supports that for\n container based builds now\n\n- run checks like cpluspluscheck as part of the CompilerWarnings task\n\n- add tasks for running tests with tools like asan / ubsan (valgrind will be\n too slow).\n\n- consider enable compile-time debugging options like COPY_PARSE_PLAN_TREES,\n and run-time force_parallel_mode = regress on some platforms. They seem to\n catch a lot of problems during development and are likely affordable enough.",
"msg_date": "Mon, 13 Dec 2021 13:12:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 01:12:23PM -0800, Andres Freund wrote:\n> Hi,\n> \n> Attached is an updated version of the CI patches. An example of a test run\n> with the attached version of this\n> https://cirrus-ci.com/build/6501998521483264\n\nsudo is used exactly twice; maybe it's not needed at all ?\n\n> +task:\n> + name: FreeBSD\n...\n> + sysconfig_script:\n> + - sudo sysctl kern.corefile='/tmp/%N.%P.core'\n\n> +task:\n> + name: macOS\n...\n> + core_install_script:\n> + - sudo chmod 777 /cores\n\ntypos:\nbinararies\ndont't\n\n\n",
"msg_date": "Mon, 13 Dec 2021 16:02:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-13 16:02:50 -0600, Justin Pryzby wrote:\n> On Mon, Dec 13, 2021 at 01:12:23PM -0800, Andres Freund wrote:\n> > Hi,\n> > \n> > Attached is an updated version of the CI patches. An example of a test run\n> > with the attached version of this\n> > https://cirrus-ci.com/build/6501998521483264\n> \n> sudo is used exactly twice; maybe it's not needed at all ?\n\nThe macos one is needed, but the freebsd one indeed isn't.\n\n\n> > +task:\n> > + name: FreeBSD\n> ...\n> > + sysconfig_script:\n> > + - sudo sysctl kern.corefile='/tmp/%N.%P.core'\n> \n> > +task:\n> > + name: macOS\n> ...\n> > + core_install_script:\n> > + - sudo chmod 777 /cores\n> \n> typos:\n> binararies\n> dont't\n\nOops, thanks.\n\n\nTypos and sudo use are fixed in the repo [1].\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/postgres/tree/ci\n\n\n",
"msg_date": "Mon, 13 Dec 2021 14:47:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-12-13 16:02:50 -0600, Justin Pryzby wrote:\n>> sudo is used exactly twice; maybe it's not needed at all ?\n\n> The macos one is needed, but the freebsd one indeed isn't.\n\nI'm with Justin on this one. I would view a script trying to\nmess with /cores as a hostile act. PG cores on macOS tend to\nbe extremely large and can fill up your disk fairly quickly\nif you don't know they're being accumulated. I think it's okay\nto suggest in the documentation that people might want to allow\ncores to be dropped, but the script has NO business trying to\nforce that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 13 Dec 2021 18:14:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-13 18:14:52 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-12-13 16:02:50 -0600, Justin Pryzby wrote:\n> >> sudo is used exactly twice; maybe it's not needed at all ?\n>\n> > The macos one is needed, but the freebsd one indeed isn't.\n>\n> I'm with Justin on this one. I would view a script trying to\n> mess with /cores as a hostile act. PG cores on macOS tend to\n> be extremely large and can fill up your disk fairly quickly\n> if you don't know they're being accumulated. I think it's okay\n> to suggest in the documentation that people might want to allow\n> cores to be dropped, but the script has NO business trying to\n> force that.\n\nI'm not quite following. This is a ephemeral CI instance?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Dec 2021 15:45:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 10:12 AM Andres Freund <andres@anarazel.de> wrote:\n> Attached is an updated version of the CI patches. An example of a test run\n> with the attached version of this\n> https://cirrus-ci.com/build/6501998521483264\n\nI've been pushing various versions of these patches into my own\ndevelopment branches for a while now; they're working very nicely and\nhelping me a lot. This is vastly better than anything I was doing\nbefore, especially on Windows which is a blind spot for most of us.\nIt'll be great to see this committed, and continue improving it\nin-tree. I'd better go and figure out how to fix cfbot when this\nlands...\n\n> I think this may be OK for now, but I also could see arguments for wanting to\n> transfer the image specifications and the google account to PG properties.\n\nNo clue on the GCP account side of it (does pginfra already have\none?), but for the repo I guess it would seem natural to have one on\ngit.postgresql.org infra, mirrored (just like the main repo) to a repo\non project-owned github.com/postgres, from which image building is\ntriggered. Then it could be maintained by the whole PostgreSQL\nproject, patches discussed on -hackers, a bit like pg_bsd_indent.\nPerhaps with some way to trigger test image builds, so that people\nworking on it don't need their own GCP account to do a trial run.\n\n+ # Test that code can be built with gcc/clang without warnings\n\nAs a TODO note, I think we should eventually run a warnings check for\nMSVC too. 
IIUC we only aim to be warning free in assertion builds on\nthat platform, because it has no PG_USED_FOR_ASSERTS_ONLY (I think it\nhas it in C++ but not C?), but that's something.\n\n+ # XXX: Only do this if there have been changes in doc/ since last build\n+ always:\n+ docs_build_script: |\n+ time ./configure \\\n+ --cache gcc.cache CC=\"ccache gcc\"\n+ time make -s -j4 -C doc\n\nAnother TODO note: perhaps we could also make the documentation\nresults a browsable artefact with a short retention time, if that's a\nthing. (I've been confused about how to spell \"artefact\" for some\ntime now, and finally I know why: in the US it has an i; I blame Noah\nWebsta, whose name I have decided to improve.)\n\nI feel like I should apologise in advance for this level of\nnit-picking about English grammar, but:\n\n+2) For not yet merged development work, CI can be enabled for some git hosting\n+ providers. This allows to test patches on a number of platforms before they\n+ are merged (or even submitted).\n\nYou can \"allow testing\" (gerund verb), you can \"allow\ndevelopers/us/one/... to test\" (infinitive, but with a noun phrase to\nsay who's allowed to do the thing), you can \"allow verification of\n...\" (noun phrase), you can \"be allowed to test\" (passive), but you\ncan't \"allow to test\": it's not allowed![1] :-)\n\n+# It might be nicer to switch to the openssl built as part of curl-for-win,\n+# but recent releases only build openssl 3, and that still seems troublesome\n+# on windows,\n\ns/windows,/Windows./\n\n+ name: FreeBSD\n\nFreeBSD is missing --with-llvm. If you add package \"llvm\" to your\nimage builder you'll currently get LLVM 9, then\nLLVM_CONFIG=\"llvm-config\" CXX=\"ccache c++\" CLANG=\"ccache clang\". Or\nwe could opt for something more modern with package llvm13 and program\nnames llvm-config13 and clang13.\n\n> The second attention-worthy point is the choice of a new toplevel ci/\n> directory as the location for resources referencenced by CI. 
A few other\n> projects also use ci/, but I can also see arguments for moving the contents to\n> e.g. src/tools/ci or such?\n\nI'd be +0.75 for moving it to src/tools/ci.\n\n> [1] Some ideas for what could make sense to extend CI to in the future:\n\nTo your list, I'd add:\n\n* 32 bit\n* test coverage report\n* ability to capture complete Window install directory as an artefact\nso a Windows user without a dev environment could try out a proposed\nchange/CF entry/...\n\nI hope we can get ccache working on Windows.\n\n[1] https://english.stackexchange.com/questions/60271/grammatical-complements-for-allow/60285#60285\n\n\n",
"msg_date": "Tue, 14 Dec 2021 16:51:58 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-14 16:51:58 +1300, Thomas Munro wrote:\n> I'd better go and figure out how to fix cfbot when this lands...\n\nI assume it'd be:\n- stop adding the CI stuff\n- adjust links to CI tasks, appveyor wouldn't be used anymore\n- perhaps reference individual tasks from the cfbot page?\n\n\n> > I think this may be OK for now, but I also could see arguments for wanting to\n> > transfer the image specifications and the google account to PG properties.\n> \n> No clue on the GCP account side of it (does pginfra already have\n> one?), but for the repo I guess it would seem natural to have one on\n> git.postgresql.org infra, mirrored (just like the main repo) to a repo\n> on project-owned github.com/postgres, from which image building is\n> triggered. Then it could be maintained by the whole PostgreSQL\n> project, patches discussed on -hackers, a bit like pg_bsd_indent.\n> Perhaps with some way to trigger test image builds, so that people\n> working on it don't need their own GCP account to do a trial run.\n\nI think that's a good medium-term goal, I'd not make it a prerequisite for\nmerging myself.\n\n\n> + # Test that code can be built with gcc/clang without warnings\n> \n> As a TODO note, I think we should eventually run a warnings check for\n> MSVC too. IIUC we only aim to be warning free in assertion builds on\n> that platform, because it has no PG_USED_FOR_ASSERTS_ONLY (I think it\n> has it in C++ but not C?), but that's something.\n\nHm. 
Not entirely sure how to do that without doing a separate windows build,\nwhich is too slow...\n\n\n> + # XXX: Only do this if there have been changes in doc/ since last build\n> + always:\n> + docs_build_script: |\n> + time ./configure \\\n> + --cache gcc.cache CC=\"ccache gcc\"\n> + time make -s -j4 -C doc\n> \n> Another TODO note: perhaps we could also make the documentation\n> results a browsable artefact with a short retention time, if that's a\n> thing.\n\nMight be doable, but I'd guess that the volume of data it'd generate make it\nnot particularly attractive.\n\n\n> I feel like I should apologise in advance for this level of\n> nit-picking about English grammar, but:\n\n:)\n\nWill try to fix.\n\n\n> + name: FreeBSD\n> \n> FreeBSD is missing --with-llvm.\n\nThat was kind of intentional, I guess I should add a comment about it. The CI\nimage for freebsd already starts slower due to its size, and is on the slower\nside compared to anything !windows, so I'm not sure it's worth enabling llvm\nthere? It's probably not bad to have one platform testing without llvm.\n\n\n> > [1] Some ideas for what could make sense to extend CI to in the future:\n> \n> To your list, I'd add:\n> \n> * 32 bit\n\nThat'd be easy.\n\n\n> * test coverage report\n\nIf the output size is reasonable, that should be doable as well.\n\n\n> * ability to capture complete Window install directory as an artefact\n> so a Windows user without a dev environment could try out a proposed\n> change/CF entry/...\n\nI think the size of these artifacts would make this not something to enable by\ndefault. But perhaps a manually triggered task would make sense?\n\n\n> I hope we can get ccache working on Windows.\n\nThey did merge a number of the other required changes for that over the\nweekend. I'll try once they released...\n\nThanks!\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Dec 2021 20:11:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 14 Dec 2021, at 05:11, Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-14 16:51:58 +1300, Thomas Munro wrote:\n>> I'd better go and figure out how to fix cfbot when this lands...\n> \n> I assume it'd be:\n> - stop adding the CI stuff\n> - adjust links to CI tasks, appveyor wouldn't be used anymore\n> - perhaps reference individual tasks from the cfbot page?\n\n+1 on leveraging the CI tasks in the tree in the CFBot. For a patch like the\nlibnss TLS backend one it would be a great help to both developer and reviewer\nto have that codepath actually built and tested as part of the CFBot; being\nable to tweak the CI tasks used in the CFBot per patch would be very helpful.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 14 Dec 2021 10:15:54 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 03:45:23PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2021-12-13 18:14:52 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2021-12-13 16:02:50 -0600, Justin Pryzby wrote:\n> > >> sudo is used exactly twice; maybe it's not needed at all ?\n> >\n> > > The macos one is needed, but the freebsd one indeed isn't.\n> >\n> > I'm with Justin on this one. I would view a script trying to\n> > mess with /cores as a hostile act. PG cores on macOS tend to\n> > be extremely large and can fill up your disk fairly quickly\n> > if you don't know they're being accumulated. I think it's okay\n> > to suggest in the documentation that people might want to allow\n> > cores to be dropped, but the script has NO business trying to\n> > force that.\n> \n> I'm not quite following. This is a ephemeral CI instance?\n\nAs for myself, all I meant is that it's better to write it with zero sudos than\none (for the same reason that it's better to write with one than with two).\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 15 Dec 2021 08:42:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Dec 13, 2021 at 03:45:23PM -0800, Andres Freund wrote:\n>> On 2021-12-13 18:14:52 -0500, Tom Lane wrote:\n>>> I'm with Justin on this one. I would view a script trying to\n>>> mess with /cores as a hostile act.\n\n>> I'm not quite following. This is a ephemeral CI instance?\n\n> As for myself, all I meant is that it's better to write it with zero sudos than\n> one (for the same reason that it's better to write with one than with two).\n\nWhat I'm concerned about is that it's unsafe to run the script in\nany non-throwaway environment. That doesn't seem desirable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Dec 2021 10:21:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 15 Dec 2021, at 16:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> On Mon, Dec 13, 2021 at 03:45:23PM -0800, Andres Freund wrote:\n>>> On 2021-12-13 18:14:52 -0500, Tom Lane wrote:\n>>>> I'm with Justin on this one. I would view a script trying to\n>>>> mess with /cores as a hostile act.\n> \n>>> I'm not quite following. This is a ephemeral CI instance?\n> \n>> As for myself, all I meant is that it's better to write it with zero sudos than\n>> one (for the same reason that it's better to write with one than with two).\n> \n> What I'm concerned about is that it's unsafe to run the script in\n> any non-throwaway environment. That doesn't seem desirable.\n\nI don't think anyone should expect any part of the .cirrus.yml script to be\nsafe or applicable to a local environment, so I think this is something we can\nsolve with documentation.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 15 Dec 2021 16:45:52 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 13.12.21 22:12, Andres Freund wrote:\n> Attached is an updated version of the CI patches. An example of a test run\n> with the attached version of this\n> https://cirrus-ci.com/build/6501998521483264\n\n+ only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' || \n$CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*freebsd.*'\n\nI'm not in favor of this kind of thing. I don't understand how this is \nuseful, other than perhaps for developing *this* patch. I don't think \npeople would like having these tags in the mainline, and if it's for \nlocal use, then people can adjust the file locally.\n\n+ CC=\"ccache cc\" CFLAGS=\"-O0 -ggdb\"'\n\nI don't think using -O0 is the right thing. It will miss some compiler \nwarnings, and it will not thoroughly test the compiler. We should test \nusing the configurations that are close to what users actually use.\n\n+ - su postgres -c 'gmake -s -j3 && gmake -s -j3 -C contrib'\n\nWhy doesn't this use make world (or world-bin, if you prefer).\n\nWhy does this use -j3 if there are two CPUs configured? (Perhaps the \nnumber of CPUs should be put into a variable.)\n\nI don't like that the -s option is used. I would like to see what \ncommands are executed.\n\n+ cpu: 4\n\nWhy does the Linux job use 4 CPUs and the FreeBSD job 2?\n\n+ - export\n\nI don't think that is portable to all shells.\n\n+ - su postgres -c 'time script test.log gmake -s -j2 ${CHECK} \n${CHECKFLAGS}'\n\n+ su postgres -c '\\\n+ ulimit -c unlimited; \\\n+ make -s ${CHECK} ${CHECKFLAGS} -j8 \\\n+ '\n\nNot clear why these are so different. Don't we need the test.log file \nfor Linux? Don't we need the ulimit call for FreeBSD? Why the -j8 \noption even though 4 CPUs have been configured?\n\n+ brew install \\\n+ ccache \\\n+ coreutils \\\n+ icu4c \\\n+ krb5 \\\n+ llvm \\\n+ lz4 \\\n+ make \\\n+ openldap \\\n+ openssl \\\n+ python@3.10 \\\n+ tcl-tk\n\nCurious why coreutils and make are installed? 
The system-supplied tools \nought to work.\n\n+ brew cleanup -s\n\nSeems unnecessary?\n\n+ PKG_CONFIG_PATH=\"/usr/local/opt/krb5/lib/pkgconfig:$PKG_CONFIG_PATH\"\n\nAFAICT, we don't use pkg-config for the krb5 package.\n\n+ - gmake -s -j12 && gmake -s -j12 -C contrib\n\nThese numbers should also be explained or configured somewhere. \nPossibly query the number of CPUs on the instance.\n\n+ PROVE_FLAGS: -j10\n\nWhy only on Windows?\n\n+ # Installation on windows currently only completely works from \nsrc\\tools\\msvc\n\nIf that is so, let's fix that. But see that install.pl contains\n\nif (-e \"src/tools/msvc/buildenv.pl\")\n\netc. it seems to want to be able to be invoked from the top level.\n\n+ - cd src\\tools\\msvc && perl .\\install.pl \n%CIRRUS_WORKING_DIR%\\tmp_install\n\nConfusing mix of forward and backward slashes in the Windows section. I \nthink forward slashes should work everywhere.\n\n+ test_plcheck_script:\n+ - perl src/tools/msvc/vcregress.pl plcheck\n\netc. Couldn't we enhance vcregress.pl to take multiple arguments or take \na \"check-world\" argument or something. Otherwise, this will be tedious \nto keep updated.\n\n+ test_subscriptioncheck_script:\n+ - perl src/tools/msvc/vcregress.pl taptest .\\src\\test\\subscription\\\n\nThis is even worse. I don't want to have to hand-register every new TAP \ntest.\n\n+ always:\n+ gcc_warning_script: |\n+ time ./configure \\\n+ --cache gcc.cache CC=\"ccache gcc\" \\\n+ --enable-dtrace\n\nI don't know why we wouldn't need the full set of options here. It's \nnot like optional code never has compiler warnings.\n\n+ # cross-compile to windows\n+ always:\n+ mingw_cross_warning_script: |\n\nI would welcome a native mingw build with full options and test suite \nrun. 
This cross-compiling stuff is of course interesting, but I'm not \nsure why it is being preferred over a full native run.\n\n--- /dev/null\n+++ b/.dockerignore\n@@ -0,0 +1,7 @@\n+# Ignore everything, except ci/\n\nI wonder whether this would interfere with other uses of docker. I \nsuppose people might have their custom setups for building docker images \nfrom PostgreSQL sources. It seems weird that this file gets this \nprominence, saying that the canonical use of docker inside PostgreSQL \nsources is for Cirrus CI.\n\nIt would be useful if the README explained the use of docker. As I \nmentioned before, it's not immediately clear why docker is used at all \nin this.\n\nThe docker file for Windows contains a lot of hardcoded version numbers. \n This should at least be refactored a little bit so that it is clear \nwhich version numbers should be updated and how. Or better yet, avoid \nthe need to constantly update version numbers. For example, the Python \npatch release changes every few weeks (e.g., 3.10.0 isn't current \nanymore). Also, the way OpenSSL is installed looks a bit fishy. Is \nthis what people actually use in practice? How can we make it match \nactual practice better?\n\n+# So that tests using the \"manually\" started postgres on windows can use\n+# prepared statements\n+max_prepared_transactions = 10\n\nMaybe add that to the pg_ctl invocation in the Windows section instead.\n\n+# Settings that make logs more useful\n+log_autovacuum_min_duration = 0\n+log_checkpoints = true\n+log_connections = true\n+log_disconnections = true\n+log_line_prefix = '%m [%p][%b] %q[%a][%v:%x] '\n+log_lock_waits = true\n\nIf we think these are useful, we should make the test suite drivers set \nthem for all users.\n\n> One I explicitly brought up before:\n> \n> On 2021-10-01 15:27:52 -0700, Andres Freund wrote:\n>> One policy discussion that we'd have to have is who should control the images\n>> used for CI. 
Right now that's on my personal google cloud account - which I am\n>> happy to do, but medium - long term that'd not be optimal.\n> \n> The proposed CI script uses custom images to run linux and freebsd tests. They\n> are automatically built every day from the repository https://github.com/anarazel/pg-vm-images/\n> \n> These images have all the prerequisites pre-installed. For Linux something\n> similar can be achieved by using dockerfiles referenced the .cirrus.yml file,\n> but for FreeBSD that's not available. Installing the necessary dependencies on\n> every run is too time intensive. For linux, the tests run a lot faster in\n> full-blown VMs than in docker, and full VMs allow a lot more control /\n> debugging.\n> \n> I think this may be OK for now, but I also could see arguments for wanting to\n> transfer the image specifications and the google account to PG properties.\n\nFor the above reasons of lack of documentation, I still don't understand \nthe whole docker flow here. Do you mean, the docker files included in \nyour patch are not actually used as part of the CI run; instead you use \nthem to build images manually, which are then pulled in by the test runs?\n\nIf so, apart from the general problem of having this go through some \npersonal account, I also have concerns how this can be kept up to date, \ngiven how often the dependent software changes, as mentioned above.\n\nI think it would be much easier to get this project over the initial \nhump if we skipped the whole docker business and just set the images up \nfrom scratch on each run.\n\n> The second attention-worthy point is the choice of a new toplevel ci/\n> directory as the location for resources referencenced by CI. A few other\n> projects also use ci/, but I can also see arguments for moving the contents to\n> e.g. src/tools/ci or such?\n\nOr src/tools/cirrus/? 
These providers come and go, and before long there \nmight be interest in another one.\n\n> - Extend the set of compiler warnings - as the compiler version is controlled,\n> we could be more aggressive than we can be via configure.ac.\n\nNot sure about that. I don't want this to evolve into some separate \npool of policies that yells at you because of some settings that you \nnever heard of. If we think other warnings are useful, we should \nprovide a way to select them, perhaps optionally, from the usual build \nsystem.\n\n> - consider enable compile-time debugging options like COPY_PARSE_PLAN_TREES,\n> and run-time force_parallel_mode = regress on some platforms. They seem to\n> catch a lot of problems during development and are likely affordable enough.\n\nThat would be useful if we can think of a way to select it optionally.\n\n\n",
"msg_date": "Fri, 17 Dec 2021 12:34:36 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
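The `only_if` lines Peter quotes above are plain regular-expression tests against the commit message. A minimal Python sketch of the intended gating behaviour (the semantics are inferred from the quoted expressions; Cirrus evaluates the real expressions itself, so this is illustrative only):

```python
import re

def task_runs(change_message: str, os_name: str) -> bool:
    """Sketch of the proposed .cirrus.yml only_if gating: a task runs when
    the commit message carries no ci-os-only: tag at all, or when the tag
    names this task's OS. (Assumed semantics, mirroring the quoted regexes.)"""
    has_tag = re.search(r'\nci-os-only:', change_message) is not None
    names_os = re.search(r'\nci-os-only:[^\n]*' + re.escape(os_name),
                         change_message) is not None
    return (not has_tag) or names_os

msg = "Fix SSPI auth\n\nci-os-only: windows"
print(task_runs(msg, "windows"))   # True  - only the Windows task runs
print(task_runs(msg, "freebsd"))   # False - other tasks are skipped
print(task_runs("Plain commit message", "freebsd"))  # True - no tag, all run
```

With this reading, a commit message ending in a `ci-os-only: freebsd` line runs only the FreeBSD task, while a message without the tag runs every task - which is why the mechanism matters for development branches but not for mainline commits.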
{
"msg_contents": "\nOn 12/13/21 16:12, Andres Freund wrote:\n> Hi,\n>\n> Attached is an updated version of the CI patches. An example of a test run\n> with the attached version of this\n> https://cirrus-ci.com/build/6501998521483264\n>\n\nMaye I have missed it, but why are we using ccache here? That seems a\nbit pointless in an ephemeral instance.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 17 Dec 2021 09:08:53 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Maye I have missed it, but why are we using ccache here? That seems a\n> bit pointless in an ephemeral instance.\n\nI believe Munro's cfbot tooling is able to save and re-use ccache\nacross successive instantiations of a build instance. I've not\nlooked at this code, but if it can do that there'd be point to it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Dec 2021 09:36:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-17 12:34:36 +0100, Peter Eisentraut wrote:\n> On 13.12.21 22:12, Andres Freund wrote:\n> > Attached is an updated version of the CI patches. An example of a test run\n> > with the attached version of this\n> > https://cirrus-ci.com/build/6501998521483264\n> \n> + only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' ||\n> $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*freebsd.*'\n> \n> I'm not in favor of this kind of thing. I don't understand how this is\n> useful, other than perhaps for developing *this* patch. I don't think\n> people would like having these tags in the mainline, and if it's for local\n> use, then people can adjust the file locally.\n\nDefinitely not for mainline. But it's extremely useful for development. If you\niteratively try to fix windows, running all the other tests can be slower -\nthere's a concurrency limit in how many tests you can run for free...\n\n\n> + CC=\"ccache cc\" CFLAGS=\"-O0 -ggdb\"'\n> \n> I don't think using -O0 is the right thing. It will miss some compiler\n> warnings, and it will not thoroughly test the compiler. We should test\n> using the configurations that are close to what users actually use.\n\nHm. I personally always end up using -O0 for the actual development tree, and\nit seems a lot of others do as well. Building with -O2 makes backtraces etc\njust less useful.\n\n\n> + - su postgres -c 'gmake -s -j3 && gmake -s -j3 -C contrib'\n> \n> Why doesn't this use make world (or world-bin, if you prefer).\n\nI started working on this well before world-bin existed. And using 'world' as\nthe target builds the docs, which is quite expensive... I happened to actually\nmake the change to world-bin yesterday for the next version to send :)\n\n\n> Why does this use -j3 if there are two CPUs configured? (Perhaps the number\n> of CPUs should be put into a variable.)\n\nI tried a few and that worked best.\n\n\n> I don't like that the -s option is used. 
I would like to see what commands\n> are executed.\n\nI can change it - but it makes it *much* harder to spot compiler warnings.\n\n\n> + cpu: 4\n> \n> Why does the Linux job use 4 CPUs and the FreeBSD job 2?\n\nI'll add a comment about it. Two reasons\n1) the limits on cirrus are lower for linux than freebsd:\n https://cirrus-ci.org/faq/\n2) There's some issues on freebsd where test performance regressess *very*\n substantially with higher concurrency. Thomas and I looked a bunch into it\n without figuring out the details.\n\n\n> + - export\n\n> I don't think that is portable to all shells.\n\nDoesn't really need to be?\n\n\n> + - su postgres -c 'time script test.log gmake -s -j2 ${CHECK}\n> ${CHECKFLAGS}'\n> \n> + su postgres -c '\\\n> + ulimit -c unlimited; \\\n> + make -s ${CHECK} ${CHECKFLAGS} -j8 \\\n> + '\n> \n> Not clear why these are so different. Don't we need the test.log file for\n> Linux?\n\nThere's a comment about the use of script:\n # Use of script is to avoid make complaints about fcntl()\n # https://savannah.gnu.org/bugs/?60774\n\nthat bug is specific to platforms that don't allow locking pipes. Which linux\ndoes allow, but freebsd doesn't.\n\n\n> Don't we need the ulimit call for FreeBSD?\n\nI think the default core limits were different, I will check.\n\n\n> Why the -j8 option even though 4 CPUs have been configured?\n\nThat might have been an accident.\n\n\n> + brew install \\\n> + ccache \\\n> + coreutils \\\n> + icu4c \\\n> + krb5 \\\n> + llvm \\\n> + lz4 \\\n> + make \\\n> + openldap \\\n> + openssl \\\n> + python@3.10 \\\n> + tcl-tk\n> \n> Curious why coreutils and make are installed? The system-supplied tools\n> ought to work.\n\nmake because newer versions of make have -Otarget, which makes concurrent\ncheck-world output at least kind-of readable.\n\n\n> + brew cleanup -s\n> \n> Seems unnecessary?\n\nIt reduces the size of the cached downloads. Not much point in keeping older\nversions of the package around. 
Populating the cache takes time.\n\n\n> + PKG_CONFIG_PATH=\"/usr/local/opt/krb5/lib/pkgconfig:$PKG_CONFIG_PATH\"\n> \n> AFAICT, we don't use pkg-config for the krb5 package.\n\nI now converted this to a loop.\n\n\n> + - gmake -s -j12 && gmake -s -j12 -C contrib\n> \n> These numbers should also be explained or configured somewhere. Possibly\n> query the number of CPUs on the instance.\n\nmacOS instances have a fixed number of cores - 12. Might make sense to query\nit, but not sure what a good portable way there is.\n\n\n> + PROVE_FLAGS: -j10\n> \n> Why only on Windows?\n\nBecause windows doesn't have a way to run tests in parallel in another\nway. prove-level concurrency is the only thing. Whereas other platforms can\nrun tests in parallel via make. Combining both tends to not work very well in\nmy experience.\n\n\n> + # Installation on windows currently only completely works from\n> src\\tools\\msvc\n> \n> If that is so, let's fix that.\n\nI did report the problem - just haven't gotten around to fixing it. Note this\nis also how the buildfarm invokes installation... The problem is that\nInstall.pm includes config.pl from the current directory, IIRC.\n\nAt some point I needed to restrict to dealing with the current state - there's\nplenty other bugs.\n\n\n> + - cd src\\tools\\msvc && perl .\\install.pl\n> %CIRRUS_WORKING_DIR%\\tmp_install\n> \n> Confusing mix of forward and backward slashes in the Windows section. I\n> think forward slashes should work everywhere.\n\nThey would work here I think, but no, they don't work everywhere :(\n\n\n> + test_plcheck_script:\n> + - perl src/tools/msvc/vcregress.pl plcheck\n> \n> etc. Couldn't we enhance vcregress.pl to take multiple arguments or take a\n> \"check-world\" argument or something. Otherwise, this will be tedious to\n> keep updated.\n\n> + test_subscriptioncheck_script:\n> + - perl src/tools/msvc/vcregress.pl taptest .\\src\\test\\subscription\\\n> \n> This is even worse. 
I don't want to have to hand-register every new TAP\n> test.\n\nI strongly agree. There were several tests that the buildfarm on windows\ndidn't ever run before I started working on this. And clearly no windows\ndeveloper is going to manually invoke ~10 test steps.\n\n\n> + always:\n> + gcc_warning_script: |\n> + time ./configure \\\n> + --cache gcc.cache CC=\"ccache gcc\" \\\n> + --enable-dtrace\n> \n> I don't know why we wouldn't need the full set of options here. It's not\n> like optional code never has compiler warnings.\n\nI mostly didn't like the repetition of long argument lists. There's probably a\ndecent way to deal with that.\n\n\n> + # cross-compile to windows\n> + always:\n> + mingw_cross_warning_script: |\n> \n> I would welcome a native mingw build with full options and test suite run.\n> This cross-compiling stuff is of course interesting, but I'm not sure why it\n> is being preferred over a full native run.\n\nI have a new colleague working on scripting the setup of mingw on\nwindows. Besides not being available yet, it's *much* *much* slower to build\npostgres. This is a useful and quicker screening.\n\n\n> --- /dev/null\n> +++ b/.dockerignore\n> @@ -0,0 +1,7 @@\n> +# Ignore everything, except ci/\n> \n> I wonder whether this would interfere with other uses of docker. I suppose\n> people might have their custom setups for building docker images from\n> PostgreSQL sources. It seems weird that this file gets this prominence,\n> saying that the canonical use of docker inside PostgreSQL sources is for\n> Cirrus CI.\n\nI have a hard time seeing uses of docker where this would be a problem. If you\nactually use the whole tree as context you pretty much need to exclude most of\nit, otherwise docker (and other tools) tar the whole tree and send it to the\ndaemon.\n\n\n> It would be useful if the README explained the use of docker. 
As I\n> mentioned before, it's not immediately clear why docker is used at all in\n> this.\n\nIt boils down to this: Windows tests on cirrus are always run via docker\n(presumably because the licensing otherwise is more expensive). And for linux,\nit's considerably quicker to start up a container than a full VM - but the\ntests then run slower than in a full VM.\n\n\n> The docker file for Windows contains a lot of hardcoded version numbers.\n> This should at least be refactored a little bit so that it is clear which\n> version numbers should be updated and how. Or better yet, avoid the need to\n> constantly update version numbers.\n\nI don't really see a great way to avoid them in general. And e.g. with perl\nthe issue is that plperl straight up doesn't work with a newer perl version :(.\n\n\n> Also, the way OpenSSL is installed looks a bit fishy. Is this what people\n> actually use in practice? How can we make it match actual practice better?\n\nI wish I knew. I didn't see any good practice for this anywhere.\n\n\n> +# So that tests using the \"manually\" started postgres on windows can use\n> +# prepared statements\n> +max_prepared_transactions = 10\n> \n> Maybe add that to the pg_ctl invocation in the Windows section instead.\n\nIt doesn't hurt anything else, so I don't really think it's worth going a\nplatform dependent way?\n\n\n> +# Settings that make logs more useful\n> +log_autovacuum_min_duration = 0\n> +log_checkpoints = true\n> +log_connections = true\n> +log_disconnections = true\n> +log_line_prefix = '%m [%p][%b] %q[%a][%v:%x] '\n> +log_lock_waits = true\n> \n> If we think these are useful, we should make the test suite drivers set them\n> for all users.\n\nSome of these are set for some but not some other test drivers. And it's quite\nuseful for out-of-tree development to be able to quickly add options to all\ntests... 
And no test driver helps the windows case that needs the manually\nstarted postgres (the state of windows testing really is dreadful).\n\n\n> For the above reasons of lack of documentation, I still don't understand the\n> whole docker flow here. Do you mean, the docker files included in your\n> patch are not actually used as part of the CI run; instead you use them to\n> build images manually, which are then pulled in by the test runs?\n\nAs submitted this is about VM images, not docker images. Used by linux (for\nthe main test run) and freebsd (cirrus doesn't support anything else). Some\nother platforms could also be supported that way with a bit more work\n(openbsd, netbsd). For some other platforms cirrus only supports docker\ncontainers (windows, linux on arm).\n\nThe docker containers can be built on-demand (that's what the dockerfile:\n... syntax for e.g. windows does). So yes, they currently are used. cirrus\nrebuilds them whenever their content (including referenced files) changes.\n\n\nBut the building of containers happens in every repo enabling the tests. That\ntakes quite a while and uses a lot of space (the additional windows\ninstallation is like ~6GB). So I was working over the last few days on moving\nthe containers to be built the alongside the VM images. Then it looks more\nsimilar across all the platforms.\n\n\n> I think it would be much easier to get this project over the initial hump if\n> we skipped the whole docker business and just set the images up from scratch\n> on each run.\n\nIt's not feasible. The windows stuff takes *way* too long (as in 40min),\nfreebsd takes quite long, linux long. And the amount of traffic it generates\nto install packages over and over again isn't great either. The containers /\nimages are from within google's network, and thus free - the package repos\ndon't have that advantage.\n\nFWIW, homebrew at some point complained about a huge fraction of their cost\nbeing from CI. 
I know debian had issues with that on and off as well. While I\ncouldn't solve the homebrew issue via VM images, I made it at least cache the\nhomebrew downloads between runs.\n\n\nMy current (not yet submitted version) comment about this is:\n\n> Images used for CI\n> ==================\n> \n> To keep CI times tolerable, most platforms use pre-generated images. Some\n> platforms use containers, others use full VMs. Images for both are generated\n> separately from CI runs, otherwise each git repository that is being tested\n> would need to build its own set of containers, which would be wasteful (both\n> in space and time).\n> \n> These images are built from the specifications in github.com/anarazel/pg-vm-images/\n\nI also did the work of redoing the necessary setup and documenting all the\nsteps ([1], although there could be a bit more handholding). It'd now not be\nhard to transfer the image building into different / shared ownership.\n\nI'll expand this section.\n\n\n> > The second attention-worthy point is the choice of a new toplevel ci/\n> > directory as the location for resources referenced by CI. A few other\n> > projects also use ci/, but I can also see arguments for moving the contents to\n> > e.g. src/tools/ci or such?\n> \n> Or src/tools/cirrus/? These providers come and go, and before long there\n> might be interest in another one.\n\nI think quite a bit of the work will be portable between CI providers. I think\nhaving all the different CI things in one directory will make it more obvious\nthan having a bunch of provider names one might or might not know.\n\nI'm imagining that over time we'd put some of the stuff in the .cirrus.yml\nfile into scripts in src/tools/ci/, so they can be used by different CI\nproviders.\n\n\n> > - Extend the set of compiler warnings - as the compiler version is controlled,\n> > we could be more aggressive than we can be via configure.ac.\n> \n> Not sure about that. 
I don't want this to evolve into some separate pool of\n> policies that yells at you because of some settings that you never heard of.\n> If we think other warnings are useful, we should provide a way to select\n> them, perhaps optionally, from the usual build system.\n\nI'd be happy with that as well. I do think that the compiler version\ndependency makes it a bit harder to do this usefully from configure though.\n\n\n> > - consider enabling compile-time debugging options like COPY_PARSE_PLAN_TREES,\n> > and run-time force_parallel_mode = regress on some platforms. They seem to\n> > catch a lot of problems during development and are likely affordable enough.\n> \n> That would be useful if we can think of a way to select it optionally.\n\nWhat kind of optionally are you thinking? Commit message contents? A central\nflag somewhere?\n\nI was thinking it might be worth enabling such options on one of the\nplatforms, so that one gets coverage by default. People remembering\ne.g. COPY_PARSE_PLAN_TREES don't tend to need it...\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/anarazel/pg-vm-images/blob/main/gcp_project_setup.txt\n\n\n",
"msg_date": "Fri, 17 Dec 2021 11:31:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-17 09:36:05 -0500, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > Maybe I have missed it, but why are we using ccache here? That seems a\n> > bit pointless in an ephemeral instance.\n> \n> I believe Munro's cfbot tooling is able to save and re-use ccache\n> across successive instantiations of a build instance. I've not\n> looked at this code, but if it can do that there'd be point to it.\n\nYes, the ccache cache is persisted across runs (see the *_cache and\nupload_cache instructions). It makes a quite substantial difference. One\nreason the windows runs are a lot slower than the others is just that visual\nstudio isn't yet supported by ccache, and that there don't seem to be other\ngood tools.\n\nThe ccache maintainers merged more of the msvc support last weekend! So I have\nquite a bit of hope of being able to use ccache there as well.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Dec 2021 11:34:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 17 Dec 2021, at 20:31, Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-17 12:34:36 +0100, Peter Eisentraut wrote:\n\n>> I don't like that the -s option is used. I would like to see what commands\n>> are executed.\n> \n> I can change it - but it makes it *much* harder to spot compiler warnings.\n\nHaving used Cirrus et al. a fair bit I strongly agree with Andres, working with\nhuge logs in the browser is painful whereas -s makes it usable even on mobile\ndevices.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 17 Dec 2021 21:42:19 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 2021-12-17 11:31:59 -0800, Andres Freund wrote:\n> > Don't we need the ulimit call for FreeBSD?\n> \n> I think the default core limits were different, I will check.\n\nYep, freebsd has -c unlimited by default:\nhttps://cirrus-ci.com/task/6199382239346688?logs=sysinfo#L23\nvs\nhttps://cirrus-ci.com/task/4792007355793408?logs=sysinfo#L32\n\n\n",
"msg_date": "Fri, 17 Dec 2021 13:24:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 12/17/21 14:34, Andres Freund wrote:\n> Hi,\n>\n> On 2021-12-17 09:36:05 -0500, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Maybe I have missed it, but why are we using ccache here? That seems a\n>>> bit pointless in an ephemeral instance.\n>> I believe Munro's cfbot tooling is able to save and re-use ccache\n>> across successive instantiations of a build instance. I've not\n>> looked at this code, but if it can do that there'd be point to it.\n> Yes, the ccache cache is persisted across runs (see the *_cache and\n> upload_cache instructions). It makes a quite substantial difference. One\n> reason the windows runs are a lot slower than the others is just that visual\n> studio isn't yet supported by ccache, and that there don't seem to be other\n> good tools.\n>\n> The ccache maintainers merged more of the msvc support last weekend! So I have\n> quite a bit of hope of being able to use ccache there as well.\n>\n\nOk. I have had to disable ccache for fairywren (msys2) because it caused\nserious instability.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 18 Dec 2021 08:29:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nAttached is v4 of the CI patch.\n\nChanges:\n\n- Move docker image specification out of the patch and generate them together\n with the VM images. The main reason for this is that I got worried about all\n repositories having to recreate the images - they're large.\n\n- Moved the core dump handling for *nix systems into a helper shell script,\n they were a bit long for the .cirrus.yml. And this way the logic can be\n reused for other CI providers\n\n- renamed the task names to include a bit more OS information\n\n- renamed the images to remove -aio- from the name\n\n- deduplicated a few more steps\n\n- Address Thomas' feedback\n\n- Try to address Peter's feedback\n\nRegards,\n\nAndres",
"msg_date": "Mon, 20 Dec 2021 11:21:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-20 11:21:05 -0800, Andres Freund wrote:\n> Attached is v4 of the CI patch.\n\nI'd like to push this - any objections? It's not disruptive to anything but\ncfbot, so we can incrementally improve it further.\n\nI'll try to sync pushing with Thomas, so that he can adjust cfbot to not add\nthe CI changes anymore / adjust the links to the CI status URLs etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Dec 2021 12:17:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "> On 29 Dec 2021, at 21:17, Andres Freund <andres@anarazel.de> wrote:\n> On 2021-12-20 11:21:05 -0800, Andres Freund wrote:\n\n>> Attached is v4 of the CI patch.\n> \n> I'd like to push this - any objections? It's not disruptive to anything but\n> cfbot, so we can incrementally improve it further.\n\nNo objection, I'm +1 on getting this in.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 29 Dec 2021 23:14:09 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nAttached is v5 of the CI patch. Not a lot of changes:\n- a bunch of copy-editing, wrote a commit message etc\n- use ccache for CXX/CLANG in the CompilerWarnings task, I had missed\n that when making the task use all --with-* flags\n- switch linux to use ossp-uuid. I tried to switch macos at first, but\n that doesn't currently seem to work.\n- minor other cleanups\n\nThis time I've only attached the main CI patch, not the one making core\ndumps on windows work. That's not yet committable...\n\nI plan to push this after another cycle of tests passing (and driving\nover the bay bridge...) unless I hear protests.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Thu, 30 Dec 2021 17:46:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "commit message: agithub\n\nthe the the buildfarm.\n=> the\n\naccess too.\n=> to\n\n# Due to that it using concurrency within prove is helpful.\n=> Due to that, it's useful to run prove with multiple jobs.\n\nfurther details src/tools/ci/README\n=> further details , see src/tools/ci/README\n\nscript uses a pseudo-tty, which do support locking.\n=> which does\n\nTo limit unneccessary work only run this once normal linux test succeeded\n=> unnecessary\n=> succeeds\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 30 Dec 2021 20:28:46 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-30 20:28:46 -0600, Justin Pryzby wrote:\n> [ language fixes]\n\nThanks!\n\n> script uses a pseudo-tty, which do support locking.\n> => which does\n\nThis didn't seem right either way - it's pseudo-ttys that don't support\nlocking, so plural seemed appropriate? I changed it to \"script uses\npseudo-ttys, which do\" instead...\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 30 Dec 2021 19:03:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 2021-12-30 17:46:52 -0800, Andres Freund wrote:\n> I plan to push this after another cycle of tests passing (and driving\n> over the bay bridge...) unless I hear protests.\n\nPushed.\n\nMarked CF entry as committed.\n\n\n",
"msg_date": "Thu, 30 Dec 2021 19:17:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 2021-Dec-30, Andres Freund wrote:\n\n> On 2021-12-30 17:46:52 -0800, Andres Freund wrote:\n> > I plan to push this after another cycle of tests passing (and driving\n> > over the bay bridge...) unless I hear protests.\n> \n> Pushed.\n> \n> Marked CF entry as committed.\n\nI tried it using the column filter patch. It worked on the first\nattempt.\n\nThanks!\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 31 Dec 2021 11:14:34 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "I noticed a patch failing in cfbot everywhere except windows:\n\nhttps://commitfest.postgresql.org/36/3476/\n| Invalid relcache when ADD PRIMARY KEY USING INDEX\n\nIt's because vcregress skips tests which have NO_INSTALLCHECK=1.\n\nIs it desirable to enable more module/contrib tests for windows CI ?\n\nThis does a few, but there's a few others which would require the server to\nbe restarted to set shared_preload_libraries for each module.\n\ndiff --git a/.cirrus.yml b/.cirrus.yml\nindex 19b3737fa11..c427b468334 100644\n--- a/.cirrus.yml\n+++ b/.cirrus.yml\n@@ -390,7 +390,7 @@ task:\n - perl src/tools/msvc/vcregress.pl check parallel\n startcreate_script:\n # paths to binaries need backslashes\n- - tmp_install\\bin\\pg_ctl.exe initdb -D tmp_check/db -l tmp_check/initdb.log\n+ - tmp_install\\bin\\pg_ctl.exe initdb -D tmp_check/db -l tmp_check/initdb.log --options=--no-sync\n - echo include '%TEMP_CONFIG%' >> tmp_check/db/postgresql.conf\n - tmp_install\\bin\\pg_ctl.exe start -D tmp_check/db -l tmp_check/postmaster.log\n test_pl_script:\ndiff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile\nindex 9a31e0b8795..14fd847ba7f 100644\n--- a/contrib/test_decoding/Makefile\n+++ b/contrib/test_decoding/Makefile\n@@ -10,7 +10,7 @@ ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \\\n \toldest_xmin snapshot_transfer subxact_without_top concurrent_stream \\\n \ttwophase_snapshot\n \n-REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n+REGRESS_OPTS = --temp-config=$(top_srcdir)/contrib/test_decoding/logical.conf\n ISOLATION_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n \n # Disabled because these tests require \"wal_level=logical\", which\ndiff --git a/src/tools/ci/pg_ci_base.conf b/src/tools/ci/pg_ci_base.conf\nindex d8faa9c26c1..52cdb697a57 100644\n--- a/src/tools/ci/pg_ci_base.conf\n+++ b/src/tools/ci/pg_ci_base.conf\n@@ -12,3 +12,24 @@ log_connections = true\n 
log_disconnections = true\n log_line_prefix = '%m [%p][%b] %q[%a][%v:%x] '\n log_lock_waits = true\n+\n+# test_decoding\n+wal_level = logical\n+max_replication_slots = 4\n+logical_decoding_work_mem = 64kB\n+\n+# commit_ts\n+track_commit_timestamp = on\n+\n+## worker_spi\n+#shared_preload_libraries = worker_spi\n+#worker_spi.database = contrib_regression\n+\n+## pg_stat_statements\n+##shared_preload_libraries=pg_stat_statements\n+\n+## test_rls_hooks\n+#shared_preload_libraries=test_rls_hooks\n+\n+## snapshot_too_old\n+#old_snapshot_threshold = 60min\ndiff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl\nindex 8f3e3fa937b..7e2cc971a42 100644\n--- a/src/tools/msvc/vcregress.pl\n+++ b/src/tools/msvc/vcregress.pl\n@@ -443,6 +443,7 @@ sub plcheck\n sub subdircheck\n {\n \tmy $module = shift;\n+\tmy $obey_installcheck = shift || 1;\n \n \tif ( !-d \"$module/sql\"\n \t\t|| !-d \"$module/expected\"\n@@ -452,7 +453,7 @@ sub subdircheck\n \t}\n \n \tchdir $module;\n-\tmy @tests = fetchTests();\n+\tmy @tests = fetchTests($obey_installcheck);\n \n \t# Leave if no tests are listed in the module.\n \tif (scalar @tests == 0)\n@@ -516,6 +517,14 @@ sub contribcheck\n \t\tmy $status = $? >> 8;\n \t\t$mstat ||= $status;\n \t}\n+\n+\tsubdircheck('test_decoding', -1);\n+\t$mstat ||= $? >> 8;\n+\n+\t# The DB would need to be restarted\n+\t#subdircheck('pg_stat_statements', -1);\n+\t#$mstat ||= $? >> 8;\n+\n \texit $mstat if $mstat;\n \treturn;\n }\n@@ -530,6 +539,19 @@ sub modulescheck\n \t\tmy $status = $? >> 8;\n \t\t$mstat ||= $status;\n \t}\n+\n+\tsubdircheck('commit_ts', -1);\n+\t$mstat ||= $? >> 8;\n+\n+\tsubdircheck('test_rls_hooks', -1);\n+\t$mstat ||= $? >> 8;\n+\n+\t## The DB would need to be restarted\n+\t#subdircheck('worker_spi', -1);\n+\t#$mstat ||= $? 
>> 8;\n+\n+\t# src/test/modules/snapshot_too_old/Makefile\n+\n \texit $mstat if $mstat;\n \treturn;\n }\n@@ -726,6 +748,7 @@ sub fetchTests\n \tmy $m = <$handle>;\n \tclose($handle);\n \tmy $t = \"\";\n+\tmy $obey_installcheck = shift || 1;\n \n \t$m =~ s{\\\\\\r?\\n}{}g;\n \n@@ -733,7 +756,7 @@ sub fetchTests\n \t# so bypass its run by returning an empty set of tests.\n \tif ($m =~ /^\\s*NO_INSTALLCHECK\\s*=\\s*\\S+/m)\n \t{\n-\t\treturn ();\n+\t\treturn () if $obey_installcheck == 1;\n \t}\n \n \tif ($m =~ /^REGRESS\\s*=\\s*(.*)$/gm)\n\n\n",
"msg_date": "Sun, 9 Jan 2022 13:16:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-09 13:16:50 -0600, Justin Pryzby wrote:\n> I noticed a patch failing in cfbot everywhere except windows:\n> \n> https://commitfest.postgresql.org/36/3476/\n> | Invalid relcache when ADD PRIMARY KEY USING INDEX\n> \n> It's because vcregress skips tests which have NO_INSTALLCHECK=1.\n\n> Is it desirable to enable more module/contrib tests for windows CI ?\n\nYes. I think the way we run windows tests is pretty bad - it's not reasonable\nthat each developer needs to figure out 20 magic incantations to run all tests\non windows.\n\n\n> This does a few, but there's a few others which would require the server to\n> be restarted to set shared_preload_libraries for each module.\n> \n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 19b3737fa11..c427b468334 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -390,7 +390,7 @@ task:\n> - perl src/tools/msvc/vcregress.pl check parallel\n> startcreate_script:\n> # paths to binaries need backslashes\n> - - tmp_install\\bin\\pg_ctl.exe initdb -D tmp_check/db -l tmp_check/initdb.log\n> + - tmp_install\\bin\\pg_ctl.exe initdb -D tmp_check/db -l tmp_check/initdb.log --options=--no-sync\n> - echo include '%TEMP_CONFIG%' >> tmp_check/db/postgresql.conf\n> - tmp_install\\bin\\pg_ctl.exe start -D tmp_check/db -l tmp_check/postmaster.log\n> test_pl_script:\n\n> diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile\n> index 9a31e0b8795..14fd847ba7f 100644\n> --- a/contrib/test_decoding/Makefile\n> +++ b/contrib/test_decoding/Makefile\n> @@ -10,7 +10,7 @@ ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \\\n> \toldest_xmin snapshot_transfer subxact_without_top concurrent_stream \\\n> \ttwophase_snapshot\n> \n> -REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n> +REGRESS_OPTS = --temp-config=$(top_srcdir)/contrib/test_decoding/logical.conf\n> ISOLATION_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n>\n\nNot 
sure why these are part of the diff?\n\n\n> diff --git a/src/tools/ci/pg_ci_base.conf b/src/tools/ci/pg_ci_base.conf\n> index d8faa9c26c1..52cdb697a57 100644\n> --- a/src/tools/ci/pg_ci_base.conf\n> +++ b/src/tools/ci/pg_ci_base.conf\n> @@ -12,3 +12,24 @@ log_connections = true\n> log_disconnections = true\n> log_line_prefix = '%m [%p][%b] %q[%a][%v:%x] '\n> log_lock_waits = true\n> +\n> +# test_decoding\n> +wal_level = logical\n> +max_replication_slots = 4\n> +logical_decoding_work_mem = 64kB\n> [ more ]\n\nThis doesn't really seem like a scalable path forward - duplicating\nconfiguration in more places doesn't seem sane. It seems it'd make more sense\nto teach vcregress.pl to run NO_INSTALLCHECK targets properly? ISTM that\nchanging the options passed to pg_regress based on fetchTests() return value\nwouldn't be too hard?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jan 2022 11:57:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-02 12:59:09 -0700, Andres Freund wrote:\n> On 2021-10-02 11:05:20 -0400, Tom Lane wrote:\n> > I don't know enough about Windows to evaluate 0001, but I'm a little\n> > worried about it because it looks like it's changing our *production*\n> > error handling on that platform.\n> \n> Yea. It's clearly not ready as-is - it's the piece that I was planning to\n> write a separate email about.\n\n> \n> It's hard to understand what *precisely* SEM_NOGPFAULTERRORBOX etc do.\n> \n> What I do know is that without the _set_abort_behavior() stuff abort() doesn't\n> trigger windows' \"crash\" paths in at least debugging builds, and that the\n> SetErrorMode() and _CrtSetReportMode() changes are necessary to get segfaults\n> to reach the crash paths.\n> \n> The in-tree behaviour turns out to make debugging on windows a major pain, at\n> least when compiling with msvc. Crashes never trigger core dumps or \"just in\n> time\" debugging (their term for invoking a debugger upon crash), so one has to\n> attach to processes before they crash, to have any chance of debugging.\n> \n> As far as I can tell this also means that at least for debugging builds,\n> pgwin32_install_crashdump_handler() is pretty much dead weight -\n> crashDumpHandler() never gets invoked. I think it may get invoked for abort()s\n> in production builds, but probably not for segfaults.\n> \n> And despite SEM_NOGPFAULTERRORBOX we display those annoying \"popup\" boxes\n> telling us about the crash and giving the option to retry, ignore, something\n> something. It's all a bit baffling.\n\nFWIW, the latest version of this patch (including an explanation why\nSEM_NOGPFAULTERRORBOX isn't useful for our purposes [anymore]) is at (and\nabove)\nhttps://postgr.es/m/20220110005704.es4el6i2nxlxzwof%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 9 Jan 2022 16:59:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sun, Jan 09, 2022 at 11:57:44AM -0800, Andres Freund wrote:\n> On 2022-01-09 13:16:50 -0600, Justin Pryzby wrote:\n> > diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile\n> > index 9a31e0b8795..14fd847ba7f 100644\n> > --- a/contrib/test_decoding/Makefile\n> > +++ b/contrib/test_decoding/Makefile\n> > @@ -10,7 +10,7 @@ ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \\\n> > \toldest_xmin snapshot_transfer subxact_without_top concurrent_stream \\\n> > \ttwophase_snapshot\n> > \n> > -REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n> > +REGRESS_OPTS = --temp-config=$(top_srcdir)/contrib/test_decoding/logical.conf\n> > ISOLATION_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n> \n> Not sure why these are part of the diff?\n\nBecause otherwise vcregress runs pg_regress --temp-config test1 test2 [...]\n..which means test1 gets eaten as the argument to --temp-config\n\n> > diff --git a/src/tools/ci/pg_ci_base.conf b/src/tools/ci/pg_ci_base.conf\n> > index d8faa9c26c1..52cdb697a57 100644\n> > --- a/src/tools/ci/pg_ci_base.conf\n> > +++ b/src/tools/ci/pg_ci_base.conf\n> > @@ -12,3 +12,24 @@ log_connections = true\n> > log_disconnections = true\n> > log_line_prefix = '%m [%p][%b] %q[%a][%v:%x] '\n> > log_lock_waits = true\n> > +\n> > +# test_decoding\n> > +wal_level = logical\n> > +max_replication_slots = 4\n> > +logical_decoding_work_mem = 64kB\n> > [ more ]\n> \n> This doesn't really seem like a scalable path forward - duplicating\n> configuration in more places doesn't seem sane. It seems it'd make more sense\n> to teach vcregress.pl to run NO_INSTALLCHECK targets properly? ISTM that\n> changing the options passed to pg_regress based on fetchTests() return value\n> wouldn't be too hard?\n\nIt needs to run the tests with separate instance. 
Maybe you're suggesting to\nuse --temp-instance.\n\nIt needs to avoid running on the buildfarm, right ?\n\n-- \nJustin",
"msg_date": "Mon, 10 Jan 2022 16:07:48 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-30 17:46:52 -0800, Andres Freund wrote:\n> I plan to push this after another cycle of tests passing (and driving\n> over the bay bridge...) unless I hear protests.\n\nI noticed that it's harder to see compile warnings on msvc in CI than it was\nat an earlier point. There used to be a summary of errors at the end.\n\nThat turns out to be an unintended consequence of the option to reduce msbuild\nverbosity.\n\n\n> + # Use parallelism, disable file tracker, we're never going to rebuild...\n> + MSBFLAGS: -m -verbosity:minimal /p:TrackFileAccess=false\n\nUnless somebody protests quickly, I'm going to add\n \"-consoleLoggerParameters:Summary;ForceNoAlign\"\nto that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Jan 2022 09:55:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-10 16:07:48 -0600, Justin Pryzby wrote:\n> On Sun, Jan 09, 2022 at 11:57:44AM -0800, Andres Freund wrote:\n> > On 2022-01-09 13:16:50 -0600, Justin Pryzby wrote:\n> > > diff --git a/contrib/test_decoding/Makefile b/contrib/test_decoding/Makefile\n> > > index 9a31e0b8795..14fd847ba7f 100644\n> > > --- a/contrib/test_decoding/Makefile\n> > > +++ b/contrib/test_decoding/Makefile\n> > > @@ -10,7 +10,7 @@ ISOLATION = mxact delayed_startup ondisk_startup concurrent_ddl_dml \\\n> > > \toldest_xmin snapshot_transfer subxact_without_top concurrent_stream \\\n> > > \ttwophase_snapshot\n> > >\n> > > -REGRESS_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n> > > +REGRESS_OPTS = --temp-config=$(top_srcdir)/contrib/test_decoding/logical.conf\n> > > ISOLATION_OPTS = --temp-config $(top_srcdir)/contrib/test_decoding/logical.conf\n> >\n> > Not sure why these are part of the diff?\n>\n> Because otherwise vcregress runs pg_regress --temp-config test1 test2 [...]\n> ..which means test1 gets eaten as the argument to --temp-config\n\nAh. I see you changed that globally, good...\n\nI'll probably apply that part and 0002 separately.\n\n\n> > This doesn't really seem like a scalable path forward - duplicating\n> > configuration in more places doesn't seem sane. It seems it'd make more sense\n> > to teach vcregress.pl to run NO_INSTALLCHECK targets properly? ISTM that\n> > changing the options passed to pg_regress based on fetchTests() return value\n> > wouldn't be too hard?\n>\n> It needs to run the tests with separate instance. Maybe you're suggesting to\n> use --temp-instance.\n\nYes.\n\n\n> It needs to avoid running on the buildfarm, right ?\n\nI guess so. 
It currently appears to have its own logic for finding contrib\n(and other) tap tests:\n\n foreach my $testdir (glob(\"$pgsql/contrib/*\"))\n {\n next unless -d \"$testdir/t\";\n my $testname = basename($testdir);\n next unless step_wanted(\"contrib-$testname\");\n print time_str(), \"running contrib test $testname ...\\n\" if $verbose;\n run_tap_test(\"$testdir\", \"contrib-$testname\", undef);\n }\n\nbut does run vcregress contribcheck, modulescheck.\n\n\nAndrew, do you see a better way than what Justin is proposing here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Jan 2022 10:55:27 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 10:55:27AM -0800, Andres Freund wrote:\n> I'll probably apply that part and 0002 separately.\n\nI've hacked on it a bit more now..\n\nQuestion: are data-checksums tested at all ? The only thing I can find is that\nsome buildfarm members *might* exercise it during installcheck.\n\nI added pg_regress --initdb-opts since that seems to be a deficiency.\n\n-- \nJustin",
"msg_date": "Thu, 13 Jan 2022 13:06:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-13 13:06:42 -0600, Justin Pryzby wrote:\n> Question: are data-checksums tested at all ? The only thing I can find is that\n> some buildfarm members *might* exercise it during installcheck.\n\nThere's some coverage via src/bin/pg_basebackup/t/010_pg_basebackup.pl and\nsrc/bin/pg_checksums/t/002_actions.pl - but that's not a whole lot.\n\nMight be worth converting one of the \"additional\" pg_regress runs to use\ndata-checksums? E.g. pg_upgrade's, or the one being introduced in the \"replay\"\ntest?\nhttps://postgr.es/m/CA%2BhUKGK-%2Bmg6RWiDu0JudF6jWeL5%2BgPmi8EKUm1eAzmdbwiE_A%40mail.gmail.com\n\n\n> From b67cd05895c8fa42e13e6743db36412a68292607 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 9 Jan 2022 22:54:32 -0600\n> Subject: [PATCH 2/7] CI: run initdb with --no-sync for windows\n\nApplied this already.\n\n\n\n> From 885becd19f630a69ab1de44cefcdda21ca8146d6 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Tue, 11 Jan 2022 01:30:37 -0600\n> Subject: [PATCH 4/7] cirrus/linux: script test.log..\n> \n> For consistency, and because otherwise errors can be lost (?) or hard to find.\n\n> - make -s ${CHECK} ${CHECKFLAGS} -j${TEST_JOBS}\n> + script --command \"make -s ${CHECK} ${CHECKFLAGS} -j${TEST_JOBS}\" test.log\n> EOF\n\nI'm not following this one? 
all the output is in the CI run already, you can\ndownload it already as well?\n\nThe only reason to use script as a wrapper is that otherwise make on\nfreebsd/macos warns about fcntl failures?\n\n\n> only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*linux.*'\n> @@ -183,7 +178,7 @@ task:\n> mkdir -p ${CCACHE_DIR}\n> chown -R postgres:postgres ${CCACHE_DIR}\n> echo '* - memlock 134217728' > /etc/security/limits.d/postgres.conf\n> - su postgres -c \"ulimit -l -H && ulimit -l -S\"\n> + su postgres -c \"ulimit -l -H && ulimit -l -S\" # XXX\n\nHm?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Jan 2022 11:32:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 1/13/22 13:55, Andres Freund wrote:\n>> It needs to avoid running on the buildfarm, right ?\n> I guess so. It currently appears to have its own logic for finding contrib\n> (and other) tap tests:\n>\n> foreach my $testdir (glob(\"$pgsql/contrib/*\"))\n> {\n> next unless -d \"$testdir/t\";\n> my $testname = basename($testdir);\n> next unless step_wanted(\"contrib-$testname\");\n> print time_str(), \"running contrib test $testname ...\\n\" if $verbose;\n> run_tap_test(\"$testdir\", \"contrib-$testname\", undef);\n> }\n>\n> but does run vcregress contribcheck, modulescheck.\n>\n>\n> Andrew, do you see a better way than what Justin is proposing here?\n>\n\nI can probably adjust to whatever we decide to do. But I think we're\nreally just tinkering at the edges here. What I think we really need is\nthe moral equivalent of `make check-world` in one invocation of\nvcregress.pl.\n\nWhile we're on the subject of vcregress.pl, there's this recent\ndiscussion, which is on my list of things to return to:\n<https://www.postgresql.org/message-id/46c40cc7-db28-b684-379d-43b34daa5ffa%40dunslane.net>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 13 Jan 2022 15:27:40 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-13 15:27:40 -0500, Andrew Dunstan wrote:\n> I can probably adjust to whatever we decide to do. But I think we're\n> really just tinkering at the edges here. What I think we really need is\n> the moral equivalent of `make check-world` in one invocation of\n> vcregress.pl.\n\nI agree strongly that we need that. But I think a good chunk of Justin's\nchanges are actually required to get there?\n\nSpecifically, unless we want lots of duplicated logic in vcregress.pl, we\nneed to make vcregress know how to run NO_INSTALLCHECK tests. The option added\nwas just so the buildfarm doesn't start to run tests multiple times...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Jan 2022 15:34:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 03:34:11PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-01-13 15:27:40 -0500, Andrew Dunstan wrote:\n> > I can probably adjust to whatever we decide to do. But I think we're\n> > really just tinkering at the edges here. What I think we really need is\n> > the moral equivalent of `make check-world` in one invocation of\n> > vcregress.pl.\n> \n> I agree strongly that we need that. But I think a good chunk of Justin's\n> changes are actually required to get there?\n> \n> Specifically, unless we want lots of duplicated logic in vcregress.pl, we\n> need to make vcregress know how to run NO_INSTALLCHECK test. The option added\n> was just so the buildfarm doesn't start to run tests multiple times...\n\nThe main reason I made the INSTALLCHECK runs conditional (they only run if a\nnew option is specified) is because of these comments:\n\n| # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n| # which typical installcheck users do not have (e.g. buildfarm clients).\n| NO_INSTALLCHECK = 1\n\nAlso, I saw that you saw that Thomas discovered/pointed out that a bunch of TAP\ntests aren't being run by CI. I think vcregress should have an \"alltap\"\ntarget that runs everything like glob(\"**/t/\"). CI would use that instead of\nthe existing ssl, auth, subscription, recovery, and bin targets. The buildfarm\ncould switch to that after it's been published.\n\nhttps://www.postgresql.org/message-id/20220114234947.av4kkhuj7netsy5r%40alap3.anarazel.de\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 14 Jan 2022 17:54:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 1/14/22 18:54, Justin Pryzby wrote:\n> On Fri, Jan 14, 2022 at 03:34:11PM -0800, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2022-01-13 15:27:40 -0500, Andrew Dunstan wrote:\n>>> I can probably adjust to whatever we decide to do. But I think we're\n>>> really just tinkering at the edges here. What I think we really need is\n>>> the moral equivalent of `make check-world` in one invocation of\n>>> vcregress.pl.\n>> I agree strongly that we need that. But I think a good chunk of Justin's\n>> changes are actually required to get there?\n>>\n>> Specifically, unless we want lots of duplicated logic in vcregress.pl, we\n>> need to make vcregress know how to run NO_INSTALLCHECK test. The option added\n>> was just so the buildfarm doesn't start to run tests multiple times...\n> The main reason I made the INSTALLCHECK runs conditional (they only run if a\n> new option is specified) is because of these comments:\n>\n> | # Disabled because these tests require \"shared_preload_libraries=pg_stat_statements\",\n> | # which typical installcheck users do not have (e.g. buildfarm clients).\n> | NO_INSTALLCHECK = 1\n>\n> Also, I saw that you saw that Thomas discovered/pointed out that a bunch of TAP\n> tests aren't being run by CI. I think vcregress should have an \"alltap\"\n> target that runs everything like glob(\"**/t/\"). CI would use that instead of\n> the existing ssl, auth, subscription, recovery, and bin targets. The buildfarm\n> could switch to that after it's been published.\n>\n> https://www.postgresql.org/message-id/20220114234947.av4kkhuj7netsy5r%40alap3.anarazel.de\n\n\n\n\nThe buildfarm is moving in the opposite direction, to disaggregate\nsteps. 
There are several reasons for that, including that it makes for\nless log output that you need to churn through to find out what's gone\nwrong in a particular case, and that it makes disabling certain test\nsets via the buildfarm client's `skip-steps' feature possible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 17 Jan 2022 10:25:12 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 10:25:12 -0500, Andrew Dunstan wrote:\n> The buildfarm is moving in the opposite direction, to disaggregate\n> steps.\n\nI'm a bit confused as to where you want changes to vcregress.pl\ngoing. Upthread you argued against adding more granular targets to\nvcregress. But this seems to be arguing against that?\n\n\n> There are several reasons for that, including that it makes for\n> less log output that you need to churn through o find out what's gone\n> wrong in a particular case, and that it makes disabling certain test\n> sets via the buildfarm client's `skip-steps' feature possible.\n\nFWIW, to me this shouldn't require a lot of separate manual test\ninvocations. And continuing to have lots of granular test invocations from the\nbuildfarm client is *bad*, because it requires constantly syncing up the set\nof test targets.\n\nIt also makes the buildfarm far slower than necessary, because it runs a lot\nof stuff serially that it could run in parallel. This is particularly a\nproblem for things like valgrind runs, where individual tests are quite slow -\nbut just throwing more CPUs at it would help a lot.\n\nWe should set things up so that:\n\na) The output of each test can easily be associated with the corresponding set\n of log files.\nb) The list of tests run can be determined generically by looking at the\n filesystems\nc) For each test run, it's easy to see whether it failed or succeeded\n\nAs part of the meson stuff (which has its own test runner), I managed to get a\nbit towards this by changing the log output hierarchy so that each test gets\nits own directory for log files (regress_log_*, initdb.log, postmaster.log,\nregression.diffs, server log files). What's missing is a\ntest.{failed,succeeded} marker or such, to make it easy to report the log\nfiles for just failed tasks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 10:19:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 1:19 PM Andres Freund <andres@anarazel.de> wrote:\n> FWIW, to me this shouldn't require a lot of separate manual test\n> invocations. And continuing to have lots of granular test invocations from the\n> buildfarm client is *bad*, because it requires constantly syncing up the set\n> of test targets.\n\nI have a lot of sympathy with Andrew here, actually. If you just do\n'make check-world' and assume that will cover everything, you get one\ngiant output file. That is not great at all. People who are looking\nthrough buildfarm results do not want to have to look through giant\nlogfiles hunting for the failure; they want to look at the stuff\nthat's just directly relevant to the failure they saw. The current BF\nis actually pretty bad at this. You can click on various things on a\nbuildfarm results page, but it's not very clear where those links are\ntaking you, and the pages at least in my browser (normally Chrome)\nrender so slowly as to make the whole thing almost unusable. I'd like\nto have a thing where the buildfarm shows a list of tests in red or\ngreen and I can click links next to each test to see the various logs\nthat test produced. That's really hard to accomplish if you just run\nall the tests with one invocation - any technique you use to find the\nboundaries between one test's output and the next will prove to be\nunreliable.\n\nBut having said that, I also agree that it sucks to have to keep\nupdating the BF client every time we want to do any kind of\ntest-related changes in the main source tree. One way around that\nwould be to put a file in the main source tree that the build farm\nclient can read to know what to do. Another would be to have the BF\nclient download the latest list of steps from somewhere instead of\nhaving it in the source code, so that it can be updated without\neveryone needing to update their machine. There might well be other\napproaches that are even better. 
But the \"ask Andrew to adjust the BF\nclient and then have everybody install the new version\" approach upon\nwhich we have been relying up until now is not terribly scalable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 Jan 2022 13:50:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have a lot of sympathy with Andrew here, actually. If you just do\n> 'make check-world' and assume that will cover everything, you get one\n> giant output file. That is not great at all.\n\nYeah. I agree with Andrew that we want output that is more modular,\nnot less so. But we do need to find a way to have less knowledge\nhard-wired in the buildfarm client script.\n\n> But having said that, I also agree that it sucks to have to keep\n> updating the BF client every time we want to do any kind of\n> test-related changes in the main source tree. One way around that\n> would be to put a file in the main source tree that the build farm\n> client can read to know what to do. Another would be to have the BF\n> client download the latest list of steps from somewhere instead of\n> having it in the source code, so that it can be updated without\n> everyone needing to update their machine.\n\nThe obvious place for \"somewhere\" is \"the main source tree\", so I\ndoubt your second suggestion is better than your first. But your\nfirst does seem like a plausible way to proceed.\n\nAnother way to think of it, maybe, is to migrate chunks of the\nbuildfarm client script itself into the source tree. I'd rather\nthat developers not need to become experts on the buildfarm client\nto make adjustments to the test process --- but I suspect that\na simple script like \"run make check in these directories\" is\nnot going to be flexible enough for everything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:18:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 13:50:04 -0500, Robert Haas wrote:\n> On Mon, Jan 17, 2022 at 1:19 PM Andres Freund <andres@anarazel.de> wrote:\n> > FWIW, to me this shouldn't require a lot of separate manual test\n> > invocations. And continuing to have lots of granular test invocations from the\n> > buildfarm client is *bad*, because it requires constantly syncing up the set\n> > of test targets.\n> \n> I have a lot of sympathy with Andrew here, actually. If you just do\n> 'make check-world' and assume that will cover everything, you get one\n> giant output file. That is not great at all.\n\nI very much agree with check-world output being near unusable.\n\n\n> That's really hard to accomplish if you just run all the tests with one\n> invocation - any technique you use to find the boundaries between one test's\n> output and the next will prove to be unreliable.\n\nI think it's not actually that hard, with something like I described in the\nemail upthread, with each test going into a prescribed location, and the\non-disk status being inspectable in an automated way. check-world could invoke\na command to summarize the tests at the end in a .NOTPARALLEL, to make the\nlocal case easier.\n\npg_regress/isolation style tests have the \"summary\" output in regression.out,\nbut we remove it on success.\nTap tests have their output in the regress_log_* files, however these are far\nmore verbose than the output from running the tests directly.\n\nI wonder if we should keep regression.out and also keep a copy of the\n\"shorter\" tap test output in a file?\n\n\n> But having said that, I also agree that it sucks to have to keep\n> updating the BF client every time we want to do any kind of\n> test-related changes in the main source tree.\n\nIt also sucks locally. I *hate* having to dig through a long check-world\noutput to find the first failure.\n\nThis subthread is about the windows tests specifically, where it's even worse\n- there's no way to run all tests. 
Nor a list of test targets that's even\nhalfway complete :/. One just has to know that one has to invoke a myriad of\nvcregress.pl taptest path/to/directory\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 11:25:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think it's not actually that hard, with something like I described in the\n> email upthread, with each tests going into a prescribed location, and the\n> on-disk status being inspectable in an automated way. check-world could invoke\n> a command to summarize the tests at the end in a .NOTPARALLEL, to make the\n> local case easier.\n\nThat sounds a bit, um, make-centric. At this point it seems to me\nwe ought to be thinking about how it'd work under meson.\n\n> This subthread is about the windows tests specifically, where it's even worse\n> - there's no way to run all tests.\n\nThat's precisely because the windows build doesn't use make.\nWe shouldn't be thinking about inventing two separate dead-end\nsolutions to this problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Jan 2022 14:30:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-17 14:30:53 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I think it's not actually that hard, with something like I described in the\n> > email upthread, with each tests going into a prescribed location, and the\n> > on-disk status being inspectable in an automated way. check-world could invoke\n> > a command to summarize the tests at the end in a .NOTPARALLEL, to make the\n> > local case easier.\n>\n> That sounds a bit, um, make-centric. At this point it seems to me\n> we ought to be thinking about how it'd work under meson.\n\nSome of this is a lot easier with meson. It has a builtin test runner, which\nunderstands tap (thereby not requiring prove anymore). Those tests can be\nlisted (meson test --list).\n\nDepending on the option this results in a list of all tests with just the\n\"topline\" name of passing tests and error output from failing tests, or all\noutput all the time or ... At the end it prints a summary of test counts and a\nlist of failed tests.\n\nHere's an example (the leading timestamps are from CI, not meson), on windows:\nhttps://api.cirrus-ci.com/v1/task/6009638771490816/logs/check.log\n\nThe test naming isn't necessarily great, but that's my fault.\n\nRunning all the tests with meson takes a good bit less time than running most,\nbut far from all, tests using vcregress.pl:\nhttps://cirrus-ci.com/build/4611852939296768\n\n\n\nmeson test makes it far easier to spot which tests failed, it's consistent\nacross operating systems, allows to skip individual tests, etc.\n\nHowever: It doesn't address the log collection issue in itself. 
For that we'd\nstill need to collect them in a way that's easier to associate with individual\ntests.\n\nIn the meson branch I made it so that each test (including individual tap\nones) has its own log directory, which makes it easier to select all the logs\nfor a failing test etc.\n\n\n> > This subthread is about the windows tests specifically, where it's even worse\n> > - there's no way to run all tests.\n>\n> That's precisely because the windows build doesn't use make.\n> We shouldn't be thinking about inventing two separate dead-end\n> solutions to this problem.\n\nAgreed. I think some improvements, e.g. around making the logs easier to\nassociate with an individual test, are orthogonal to the buildsystem issue.\n\n\nI think it might still be worth adding a stopgap way of running all tap tests on\nwindows though. Having a vcregress.pl function to find all directories with t/\nand run the tests there, shouldn't be a lot of code...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 Jan 2022 12:16:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 1/17/22 13:19, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-17 10:25:12 -0500, Andrew Dunstan wrote:\n>> The buildfarm is moving in the opposite direction, to disaggregate\n>> steps.\n> I'm a bit confused as to where you want changes to vcregress.pl\n> going. Upthread you argued against adding more granular targets to\n> vcregress. But this seems to be arguing against that?\n\n\nOK, let me clear that up. Yes I argued upthread for a 'make check-world'\nequivalent, because it's useful for developers on Windows. But I don't\nreally want to use it in the buildfarm client, where I prefer to run\nfine-grained tests.\n\n\n>\n>\n>> There are several reasons for that, including that it makes for\n>> less log output that you need to churn through to find out what's gone\n>> wrong in a particular case, and that it makes disabling certain test\n>> sets via the buildfarm client's `skip-steps' feature possible.\n> FWIW, to me this shouldn't require a lot of separate manual test\n> invocations. And continuing to have lots of granular test invocations from the\n> buildfarm client is *bad*, because it requires constantly syncing up the set\n> of test targets.\n\n\nWell, the logic we use for TAP tests is we run them for the following if\nthey have a t subdirectory, subject to configuration settings like\nPG_TEST_EXTRA:\n\n src/test/bin/*\n\n contrib/*\n\n src/test/modules/*\n\n src/test/*\n\n\nAs long as any new TAP test gets placed in such a location nothing new is\nrequired in the buildfarm client.\n\n\n>\n> It also makes the buildfarm far slower than necessary, because it runs a lot\n> of stuff serially that it could run in parallel. \n\n\nThat's a legitimate concern. I have made some strides towards gross\nparallelism in the buildfarm by providing for different branches to be\nrun in parallel. This has proven to be fairly successful (i.e. 
I have\nnot run into any complaints, and several of my own animals utilize it).\nI can certainly look at doing something of the sort for an individual\nbranch run.\n\n\n> This is particularly a\n> problem for things like valgrind runs, where individual tests are quite slow -\n> but just throwing more CPUs at it would help a lot.\n\n\nWhen I ran a valgrind animal, `make check` was horribly slow, and it's\nalready parallelized. But it was on a VM and I forget how many CPUs the\nVM had.\n\n\n>\n> We should set things up so that:\n>\n> a) The output of each test can easily be associated with the corresponding set\n> of log files.\n> b) The list of tests run can be determined generically by looking at the\n> filesystems\n> c) For each test run, it's easy to see whether it failed or succeeded\n>\n> As part of the meson stuff (which has its own test runner), I managed to get a\n> bit towards this by changing the log output hierarchy so that each test gets\n> its own directory for log files (regress_log_*, initdb.log, postmaster.log,\n> regression.diffs, server log files). What's missing is a\n> test.{failed,succeeded} marker or such, to make it easy to report the log\n> files for just failed tasks.\n\n\nOne thing I have been working on is a way to split the log output of an\nindividual buildfarm step, hitherto just a text blob, so that you can\neasily navigate to, say, regress_log_0003-foo-step.log without having to\npage through myriads of stuff. It's been on the back burner but I need\nto prioritize it, maybe when the current CF is done.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 17 Jan 2022 16:13:28 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 17.01.22 22:13, Andrew Dunstan wrote:\n> Well, the logic we use for TAP tests is we run them for the following if\n> they have a t subdirectory, subject to configuration settings like\n> PG_TEST_EXTRA:\n> \n> src/test/bin/*\n> contrib/*\n> src/test/modules/*\n> src/test/*\n> \n> As long as any new TAP test gets placed in such a location nothing new is\n> required in the buildfarm client.\n\nBut if I wanted to add TAP tests to libpq, then I'd still be stuck. Why \nnot change the above list to \"anywhere\"?\n\n\n\n",
"msg_date": "Tue, 18 Jan 2022 14:06:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 1/18/22 08:06, Peter Eisentraut wrote:\n> On 17.01.22 22:13, Andrew Dunstan wrote:\n>> Well, the logic we use for TAP tests is we run them for the following if\n>> they have a t subdirectory, subject to configuration settings like\n>> PG_TEST_EXTRA:\n>>\n>> src/test/bin/*\n>> contrib/*\n>> src/test/modules/*\n>> src/test/*\n>>\n>> As long as any new TAP test gets placed in such a location nothing new is\n>> required in the buildfarm client.\n>\n> But if I wanted to add TAP tests to libpq, then I'd still be stuck. \n> Why not change the above list to \"anywhere\"?\n\n\n\nSure, very doable, although we will still need some special logic for\nsrc/test - there are security implications from running the ssl, ldap and\nkerberos tests by default. See its Makefile.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:20:08 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 11:20:08 -0500, Andrew Dunstan wrote:\n> Sure, very doable, although we will still need some special logic for\n> src/test - there are security implication from running the ssl, ldap and\n> kerberos tests by default. See its Makefile.\n\nISTM that we should move the PG_TEST_EXTRA handling into the tests. Then we'd\nnot need to duplicate them in the make / msvc world and we'd see them as\nskipped tests when not enabled.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 09:44:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "\nOn 1/18/22 12:44, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-18 11:20:08 -0500, Andrew Dunstan wrote:\n>> Sure, very doable, although we will still need some special logic for\n>> src/test - there are security implication from running the ssl, ldap and\n>> kerberos tests by default. See its Makefile.\n> ISTM that we should move the PG_TEST_EXTRA handling into the tests. Then we'd\n> not need to duplicate them in the make / msvc world and we'd see them as\n> skipped tests when not enabled.\n>\n\nYeah, good idea. Especially if we can backpatch that. The buildfarm\nclient would also get simpler, so it would be doubleplusgood.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:58:34 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 12:16:19PM -0800, Andres Freund wrote:\n> I think it might still be worth adding stopgap way of running all tap tests on\n> windows though. Having a vcregress.pl function to find all directories with t/\n> and run the tests there, shouldn't be a lot of code...\n\nI started doing that, however it makes CI/windows even slower. I think it'll\nbe necessary to run prove with all the tap tests to parallelize them, rather\nthan looping around directories, many of which have only a single file, and are\nrun serially.\n\nhttps://github.com/justinpryzby/postgres/commits/citest-cirrus\nThis has the link to a recent cirrus result if you'd want to look.\nI suppose I should start a new thread. \n\nThere's a bunch of other stuff for cirrus in there (and bits and pieces of\nother branches).\n\n . cirrus: spell ccache_maxsize\n . cirrus: avoid unnecessary double star **\n . vcregress/ci: test modules/contrib with NO_INSTALLCHECK=1\n . vcregress: style\n . wip: vcsregress: add alltaptests\n . wip: run upgrade tests with data-checksums\n . pg_regress --initdb-opts\n . wip: pg_upgrade: show list of files copied only in vebose mode\n . wip: cirrus: include hints how to install OS packages..\n . wip: cirrus: code coverage\n . cirrus: upload html docs as artifacts\n . wip: cirrus/windows: save compiled build as an artifact\n\n\n",
"msg_date": "Tue, 18 Jan 2022 15:08:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 15:08:47 -0600, Justin Pryzby wrote:\n> On Mon, Jan 17, 2022 at 12:16:19PM -0800, Andres Freund wrote:\n> > I think it might still be worth adding stopgap way of running all tap tests on\n> > windows though. Having a vcregress.pl function to find all directories with t/\n> > and run the tests there, shouldn't be a lot of code...\n> \n> I started doing that, however it makes CI/windows even slower.\n\nTo be expected... Perhaps the caching approach I just posted in [1] would buy\nmost of it back though...\n\n\n> I think it'll be necessary to run prove with all the tap tests to\n> parallelize them, rather than looping around directories, many of which have\n> only a single file, and are run serially.\n\nThat's unfortunately not trivially possible. Quite a few tests currently rely\non being called in a specific directory. We should fix this, but it's not a\ntrivial amount of work.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20220119010034.javla5sakeh2a4fa%40alap3.anarazel.de\n\n\n",
"msg_date": "Tue, 18 Jan 2022 17:16:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "I just found one thing making check-world slower than it ought to be:\nsrc/test/recovery/t/008_fsm_truncation.pl does\n\n$node_primary->append_conf(\n\t'postgresql.conf', qq{\nfsync = on\nwal_log_hints = on\nmax_prepared_transactions = 5\nautovacuum = off\n});\n\nThere is no reason for this script to be overriding Cluster.pm's\nfsync = off setting. This actually causes parallel check-world to\nfail altogether on florican's host, because the initial fsync of\nthe recovered primary takes more than 3 minutes when there's\nconflicting I/O traffic, causing pg_ctl to time out.\n\nThis appears to go back to 917dc7d23 of 2016, so I think it just\npredates our recognition that we should disable fsync in routine\ntests.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 21:50:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 21:50:07 -0500, Tom Lane wrote:\n> I just found one thing making check-world slower than it ought to be:\n> src/test/recovery/t/008_fsm_truncation.pl does\n> \n> $node_primary->append_conf(\n> \t'postgresql.conf', qq{\n> fsync = on\n> wal_log_hints = on\n> max_prepared_transactions = 5\n> autovacuum = off\n> });\n> \n> There is no reason for this script to be overriding Cluster.pm's\n> fsync = off setting.\n> \n> This appears to go back to 917dc7d23 of 2016, so I think it just\n> predates our recognition that we should disable fsync in routine\n> tests.\n\nYea, I noticed this too. I was wondering if there's a conceivable reason to\nactually want fsyncs, but I couldn't come up with one.\n\nOn systems where IO isn't overloaded, the main problems with this test are\nelsewhere: It multiple times waits for VACUUMs that are blocked truncating the\ntable. Which these days takes 5 seconds. Thus the test takes quite a while.\n\nTo me VACUUM_TRUNCATE_LOCK_TIMEOUT = 5s seems awfully long. On a system with a\nlot of tables that's much more than vacuum will take. So this can easily lead\nto using up all autovacuum workers...\n\n\n\n> This actually causes parallel check-world to fail altogether on florican's\n> host, because the initial fsync of the recovered primary takes more than 3\n> minutes when there's conflicting I/O traffic, causing pg_ctl to time out.\n\nUgh.\n\nI noticed a few other sources of \"unnecessary\" fsyncs. The most frequent\nbeing the durable_rename() of backup_manifest in pg_basebackup.c. Manifests are\nsurprisingly large, 135k for a freshly initdb'd cluster.\n\n\nThere's an fsync in walmethods.c:tar_close() that sounds intentional, but I\ndon't really understand the comment:\n\n\t/* Always fsync on close, so the padding gets fsynced */\n\tif (tar_sync(f) < 0)\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 18 Jan 2022 20:16:46 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-18 21:50:07 -0500, Tom Lane wrote:\n>> There is no reason for this script to be overriding Cluster.pm's\n>> fsync = off setting.\n>> This appears to go back to 917dc7d23 of 2016, so I think it just\n>> predates our recognition that we should disable fsync in routine\n>> tests.\n\n> Yea, I noticed this too. I was wondering if there's a conceivable reason to\n> actually want fsyncs, but I couldn't come up with one.\n\nOn the one hand, it feels a little wrong if our test suites never\nreach our fsync calls at all. On the other hand, it's not clear\nwhat is meaningful about testing fsync when your test scenario\ndoesn't include a plug pull.\n\nI'd be okay with having some exercise of the fsync code paths in\na test that is (a) designated for the purpose and (b) arranged\nto not take an excessive amount of time, even under heavy load.\n008_fsm_truncation.pl is neither of those things. It seems\nentirely random that it has fsync = on when we don't test that\nelsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 23:39:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-01-18 21:50:07 -0500, Tom Lane wrote:\n>> This actually causes parallel check-world to fail altogether on florican's\n>> host, because the initial fsync of the recovered primary takes more than 3\n>> minutes when there's conflicting I/O traffic, causing pg_ctl to time out.\n\n> Ugh.\n\nI misspoke there: it's the standby that is performing an fsync'd\ncheckpoint and timing out, during the test's promote-the-standby\nstep.\n\nThis test attempt revealed another problem too: the standby never\nshut down, and thus the calling \"make\" never quit, until I intervened\nmanually. I'm not sure why. I see that Cluster::promote uses\nsystem_or_bail() to run \"pg_ctl promote\" ... could it be that\nBAIL_OUT causes the normal script END hooks to not get run?\nBut it seems like we'd have noticed that long ago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 Jan 2022 23:54:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "I wrote:\n> This test attempt revealed another problem too: the standby never\n> shut down, and thus the calling \"make\" never quit, until I intervened\n> manually. I'm not sure why. I see that Cluster::promote uses\n> system_or_bail() to run \"pg_ctl promote\" ... could it be that\n> BAIL_OUT causes the normal script END hooks to not get run?\n> But it seems like we'd have noticed that long ago.\n\nI failed to reproduce any failure in the promote step, and I now\nthink I was mistaken and it happened during the standby's initial\nstart. I can reproduce that very easily by setting PGCTLTIMEOUT\nto a second or two; with fsync enabled, it takes the standby more\nthan that to reach a consistent state. And the cause of that\nis obvious: Cluster::start thinks that if \"pg_ctl start\" failed,\nthere couldn't possibly be a postmaster running. So it doesn't\nbother to update self->_pid, and then the END hook thinks there\nis nothing to do.\n\nNow, leaving an idle postmaster hanging around isn't a mortal sin,\nsince it'll go away by itself shortly after the next cycle of\ntesting does an \"rm -rf\" on its data directory. But it's ugly,\nand conceivably it could cause problems for later testing on\nmachines with limited shmem or semaphore space.\n\nThe attached simple fix gets rid of this problem. Any objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 19 Jan 2022 15:05:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-19 15:05:44 -0500, Tom Lane wrote:\n> And the cause of that is obvious: Cluster::start thinks that if \"pg_ctl\n> start\" failed, there couldn't possibly be a postmaster running. So it\n> doesn't bother to update self->_pid, and then the END hook thinks there is\n> nothing to do.\n\nAh, right.\n\nI'm doubtful that it's good that we use BAIL_OUT so liberally, because it\nprevents further tests from running (i.e. if 001 bails, 002+ doesn't run),\nwhich doesn't generally seem like the right thing to do after a single test\nfails. But that's really independent of the fix you make here.\n\n\n> The attached simple fix gets rid of this problem. Any objections?\n\nNope, sounds like a plan.\n\n> diff --git a/src/test/perl/PostgreSQL/Test/Cluster.pm b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> index 7af0f8db13..fd0738d12d 100644\n> --- a/src/test/perl/PostgreSQL/Test/Cluster.pm\n> +++ b/src/test/perl/PostgreSQL/Test/Cluster.pm\n> @@ -845,6 +845,11 @@ sub start\n> \t{\n> \t\tprint \"# pg_ctl start failed; logfile:\\n\";\n> \t\tprint PostgreSQL::Test::Utils::slurp_file($self->logfile);\n> +\n> +\t\t# pg_ctl could have timed out, so check to see if there's a pid file;\n> +\t\t# without this, we fail to shut down the new postmaster later.\n> +\t\t$self->_update_pid(-1);\n\nI'd maybe do s/later/in the END block/ or such, so that one knows where to\nlook. Took me a minute to find it again.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 19 Jan 2022 12:29:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I'm doubtful that it's good that we use BAIL_OUT so liberally, because it\n> prevents further tests from running (i.e. if 001 bails, 002+ doesn't run),\n> which doesn't generally seem like the right thing to do after a single test\n> fails. But that's really independent of the fix you make here.\n\nAgreed, that's a separate question. It does seem like \"stop this script\nand move to the next one\" would be better behavior, though.\n\n> I'd maybe do s/later/in the END block/ or such, so that one knows where to\n> look. Took me a minute to find it again.\n\nOK.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Jan 2022 15:43:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-18 20:16:46 -0800, Andres Freund wrote:\n> I noticed a few other sources of \"unnecessary\" fsyncs. The most frequent\n> being the durable_rename() of backup_manifest in pg_basebackup.c. Manifests are\n> surprisingly large, 135k for a freshly initdb'd cluster.\n\nRobert, I assume the fsync for manifests isn't ignoring --no-sync for a\nparticular reason?\n\nThe attached patch adds no-sync handling to the manifest rename, as well as\none case in the directory wal method.\n\n\nIt's a bit painful that we have to have code like\n\n\t\t\tif (dir_data->sync)\n\t\t\t\tr = durable_rename(tmppath, tmppath2);\n\t\t\telse\n\t\t\t{\n\t\t\t\tif (rename(tmppath, tmppath2) != 0)\n\t\t\t\t{\n\t\t\t\t\tpg_log_error(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n\t\t\t\t\t\t\t\t tmppath, tmppath2);\n\t\t\t\t\tr = -1;\n\t\t\t\t}\n\t\t\t}\n\nIt seems like it'd be better to set it up so that durable_rename() could\ndecide internally whether to fsync, or have a wrapper around durable_rename()?\n\n\n> There's an fsync in walmethods.c:tar_close() that sounds intentional, but I\n> don't really understand what the comment:\n> \n> \t/* Always fsync on close, so the padding gets fsynced */\n> \tif (tar_sync(f) < 0)\n\ntar_sync() actually checks for tar_data->sync, so it doesn't do an\nfsync. Arguably the comment is a bit confusing, but ...\n\nGreetings,\n\nAndres Freund",
"msg_date": "Fri, 21 Jan 2022 12:00:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "pg_basebackup fsyncs some files despite --no-sync (was: Adding CI to\n our tree)"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-21 12:00:57 -0800, Andres Freund wrote:\n> The attached patch adds no-sync handling to the manifest rename, as well as\n> one case in the directory wal method.\n\nPushed that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 23 Jan 2022 14:11:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup fsyncs some files despite --no-sync (was: Adding\n CI to our tree)"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 03:08:47PM -0600, Justin Pryzby wrote:\n> On Mon, Jan 17, 2022 at 12:16:19PM -0800, Andres Freund wrote:\n> > I think it might still be worth adding stopgap way of running all tap tests on\n> > windows though. Having a vcregress.pl function to find all directories with t/\n> > and run the tests there, shouldn't be a lot of code...\n> \n> I started doing that, however it makes CI/windows even slower. I think it'll\n> be necessary to run prove with all the tap tests to parallelize them, rather\n> than looping around directories, many of which have only a single file, and are\n> run serially.\n\nFYI: I've rebased these against your cirrus/windows changes.\n\nA recent cirrus result is here; this has other stuff in the branch, so you can\nsee the artifacts with HTML docs and code coverage.\n\nhttps://github.com/justinpryzby/postgres/runs/5046465342\n\n95793675633 cirrus: spell ccache_maxsize\ne5286ede1b4 cirrus: avoid unnecessary double star **\n03f6de4643e cirrus: include hints how to install OS packages..\n39cc2130e76 cirrus/linux: script test.log..\n398b7342349 cirrus/macos: uname -a and export at end of sysinfo\n9d0f03d3450 wip: cirrus: code coverage\nbff64e8b998 cirrus: upload html docs as artifacts\n80f52c3b172 wip: only upload changed docs\n7f3b50f1847 pg_upgrade: show list of files copied only in vebose mode\nba229165fe1 wip: run pg_upgrade tests with data-checksums\n97d7b039b9b vcregress/ci: test modules/contrib with NO_INSTALLCHECK=1\n654b6375401 wip: vcsregress: add alltaptests\n\nBTW, I think the double star added in f3feff825 probably doesn't need to be\ndouble, either:\n\npath: \"crashlog-**.txt\"\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 2 Feb 2022 21:58:28 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-02 21:58:28 -0600, Justin Pryzby wrote:\n> FYI: I've rebased these against your cirrus/windows changes.\n\nDid you put them on a dedicated branch, or only intermixed with other changes?\n\n\n> A recent cirrus result is here; this has other stuff in the branch, so you can\n> see the artifacts with HTML docs and code coverage.\n\nI'm a bit worried about the increased storage and runtime overhead due to the\ndocs changes. We probably can make it a good bit cheaper though.\n\n\n> https://github.com/justinpryzby/postgres/runs/5046465342\n\n\n> 95793675633 cirrus: spell ccache_maxsize\n\nYep, will apply with a bunch of your other changes, if you answer a question\nor two...\n\n\n> e5286ede1b4 cirrus: avoid unnecessary double star **\n\nCan't get excited about this, but whatever.\n\nWhat I am excited about is that some of your other changes showed that we\ndon't need separate *_artifacts for separate directories anymore. That used to\nbe the case, but an array of paths is now supported. Putting log, diffs, and\nregress_log in one directory will be considerably more convenient...\n\n\n> 03f6de4643e cirrus: include hints how to install OS packages..\n\nWhat's the idea behind\n\n#echo 'deb http://deb.debian.org/debian bullseye main' >>/etc/apt/sources.list\n\nThat's already in sources.list, so I'm not sure what this shows?\n\n\nI think it may be a bit cleaner to have the \"install additional packages\"\n\"template step\" be just that, and not merge in other contents into it?\n\n\n> 39cc2130e76 cirrus/linux: script test.log..\n\nI still don't understand what this commit is trying to achieve.\n\n\n> 398b7342349 cirrus/macos: uname -a and export at end of sysinfo\n\nShrug.\n\n\n> 9d0f03d3450 wip: cirrus: code coverage\n\nI don't think it's good to just unconditionally reference the master branch\nhere. It'll do bogus stuff once 15 is branched off. 
It works for cfbot, but\nnot other uses.\n\nPerhaps we could have a cfbot special case (by checking for a repository\nvariable indicating the base branch) and show the incremental changes\nto that branch? Or we could just check which branch has the smallest distance\nin #commits?\n\n\nIf cfbot weren't a thing, I'd just make code coverage / docs generation a\nmanual task (startable by a click in the UI). But that requires permission on\nthe repository...\n\n\nHm. I wonder if cfbot could maintain the code not as branches as such, but as\npull requests? Those include information about what the base branch is ;)\n\n\nIs looking at the .c files in the change really a reliable predictor of where\ncode coverage changes? I'm doubtful. Consider stuff like removing the last\nuser of some infrastructure or such. Or adding the first.\n\n\n> bff64e8b998 cirrus: upload html docs as artifacts\n> 80f52c3b172 wip: only upload changed docs\n\nSimilar to the above.\n\n\n> 7f3b50f1847 pg_upgrade: show list of files copied only in vebose mode\n\nI think that should be discussed on a different thread.\n\n\n> 97d7b039b9b vcregress/ci: test modules/contrib with NO_INSTALLCHECK=1\n\nProbably also worth breaking out into a new thread.\n\n\n> 654b6375401 wip: vcsregress: add alltaptests\n\nI assume this doesn't yet work to a meaningful degree? Last time I checked\nthere were quite a few tests that needed to be invoked in a specific\ndirectory. In the meson branch I worked around that by having a wrapper\naround the invocation of individual tap tests that changes CWD.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 3 Feb 2022 11:57:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Feb 03, 2022 at 11:57:18AM -0800, Andres Freund wrote:\n> On 2022-02-02 21:58:28 -0600, Justin Pryzby wrote:\n> > FYI: I've rebased these against your cirrus/windows changes.\n> \n> Did you put then on a dedicated branch, or only intermixed with other changes?\n\nYes it's intermixed (because I have too many branches, and because in this case\nit's useful to show the doc builds and coverage artifacts).\n\n> > A recent cirrus result is here; this has other stuff in the branch, so you can\n> > see the artifacts with HTML docs and code coverage.\n> \n> I'm a bit worried about the increased storage and runtime overhead due to the\n> docs changes. We probably can make it a good bit cheaper though.\n\nIf you mean overhead of additional disk operations, it shouldn't be an issue,\nsince the git clone uses shared references (not even hardlinks).\n\nIf you meant storage capacity, I'm only uploading the *changed* docs as\nartifacts. The motivation being that it's a lot more convenient to look through\na single .html, and not hundreds.\n\n> What's the idea behind\n> #echo 'deb http://deb.debian.org/debian bullseye main' >>/etc/apt/sources.list\n> That's already in sources.list, so I'm not sure what this shows?\n\nAt one point I thought I needed it - maybe all I needed was \"apt-get update\"..\n\n> > 9d0f03d3450 wip: cirrus: code coverage\n>\n> I don't think it's good to just unconditionally reference the master branch\n> here. It'll do bogus stuff once 15 is branched off. It works for cfbot, but\n> not other uses.\n\nThat's only used for filtering changed files. It uses git diff --cherry-pick\npostgres/master..., which would *try* to avoid showing anything which is also\nin master.\n\n> Is looking at the .c files in the change really a reliable predictor of where\n> code coverage changes? I'm doubtful. Consider stuff like removing the last\n> user of some infrastructure or such. 
Or adding the first.\n\nYou're right that it isn't particularly accurate, but it's a step forward if\nlots of patches use it to check/improve coverage of new code.\n\nIn addition to the HTML generated for changed .c files, it's possible to create\na coverage.gcov output for everything, which could be retrieved to generate\nfull HTML locally. It's ~8MB (or 2MB after gzip).\n\n> > bff64e8b998 cirrus: upload html docs as artifacts\n> > 80f52c3b172 wip: only upload changed docs\n> \n> Similar to the above.\n\nDo you mean it's not reliable? This is looking at which .html have changed\n(not sgml). Surely that's reliable?\n\n> > 654b6375401 wip: vcsregress: add alltaptests\n> \n> I assume this doesn't yet work to a meaningful degree? Last time I checked\n> there were quite a few tests that needed to be invoked in a specific\n> directory.\n\nIt works - tap_check() does chdir(). But it's slow, and maybe should try to\nimplement a check-world target. It currently fails in 027_stream_regress.pl,\nalthough I keep hoping that it had been fixed...\nhttps://cirrus-ci.com/task/6116235950686208\n\n(BTW, I just realized that that commit should also remove the recoverycheck\ncall.)\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 3 Feb 2022 23:04:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
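The changed-file filtering described in the message above (a `git diff --cherry-pick` against postgres/master, restricted to C files, as input for per-file coverage reports) could be sketched in shell roughly as follows. This is a hedged illustration, not the actual CI script: the remote/branch name `postgres/master` and the `--cherry-pick` usage come from the thread, while the `filter_c_files` helper name and the exact diff flags are assumptions.

```shell
# Hypothetical sketch of filtering a branch's changed .c files for coverage
# reporting. Only the pure filter is defined here so it can run without a
# git checkout; the git pipeline it would plug into is shown as a comment.
filter_c_files() {
    # keep only paths ending in .c; grep exits non-zero when nothing
    # matches, so tolerate an empty result
    grep '\.c$' || true
}

# With a git checkout available, the full pipeline might look like:
#   git diff --name-only --cherry-pick postgres/master...HEAD | filter_c_files
```

The filter itself is trivially reusable: feed it any list of paths, one per line, and it emits only the C source files.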
{
"msg_contents": "Hi, \n\nOn February 3, 2022 9:04:04 PM PST, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>On Thu, Feb 03, 2022 at 11:57:18AM -0800, Andres Freund wrote:\n>> On 2022-02-02 21:58:28 -0600, Justin Pryzby wrote:\n>> > FYI: I've rebased these against your cirrus/windows changes.\n>> \n>> What's the idea behind\n>> #echo 'deb http://deb.debian.org/debian bullseye main' >>/etc/apt/sources.list\n>> That's already in sources.list, so I'm not sure what this shows?\n>\n>At one point I thought I needed it - maybe all I needed was \"apt-get update\"..\n\nYes, that's needed. There's no old pre fetched package list, because it'd be outdated anyway, and work *sometimes* for some packages... They're also not small (image size influences job start speed heavily).\n\n\n>> > 9d0f03d3450 wip: cirrus: code coverage\n>>\n>> I don't think it's good to just unconditionally reference the master branch\n>> here. It'll do bogus stuff once 15 is branched off. It works for cfbot, but\n>> not other uses.\n>\n>That's only used for filtering changed files. It uses git diff --cherry-pick\n>postgres/master..., which would *try* to avoid showing anything which is also\n>in master.\n\nThe point is that master is not a relevant point of comparison when a commit in the 14 branch is tested.\n\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 03 Feb 2022 23:36:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-03 23:04:04 -0600, Justin Pryzby wrote:\n> > I assume this doesn't yet work to a meaningful degree? Last time I checked\n> > there were quite a few tests that needed to be invoked in a specific\n> > directory.\n> \n> It works - tap_check() does chdir().\n\nAh, I thought you'd implemented a target that does it all in one prove\ninvocation...\n\n\n> It currently fails in 027_stream_regress.pl, although I keep hoping that it\n> had been fixed...\n\nThat's likely because you're not setting REGRESS_OUTPUTDIR like\nsrc/test/recovery/Makefile and recoverycheck() are doing.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Feb 2022 19:23:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 05:16:26PM -0800, Andres Freund wrote:\n> On 2022-01-18 15:08:47 -0600, Justin Pryzby wrote:\n> > On Mon, Jan 17, 2022 at 12:16:19PM -0800, Andres Freund wrote:\n> > > I think it might still be worth adding stopgap way of running all tap tests on\n> > > windows though. Having a vcregress.pl function to find all directories with t/\n> > > and run the tests there, shouldn't be a lot of code...\n> > \n> > I started doing that, however it makes CI/windows even slower.\n...\n> > I think it'll be necessary to run prove with all the tap tests to\n> > parallelize them, rather than looping around directories, many of which have\n> > only a single file, and are run serially.\n> \n> That's unfortunately not trivially possible. Quite a few tests currently rely\n> on being called in a specific directory. We should fix this, but it's not a\n> trivial amount of work.\n\nOn Sat, Feb 05, 2022 at 07:23:39PM -0800, Andres Freund wrote:\n> On 2022-02-03 23:04:04 -0600, Justin Pryzby wrote:\n> > > I assume this doesn't yet work to a meaningful degree? Last time I checked\n> > > there were quite a few tests that needed to be invoked in a specific\n> > > directory.\n> > \n> > It works - tap_check() does chdir().\n> \n> Ah, I thought you'd implemented a target that does it all in one prove\n> invocation...\n\nI had some success with that, but it doesn't seem to be significantly faster -\nit looks a lot like the tests are not actually running in parallel. 
I tried\nsome variations like passing the list of dirs vs the list of files, and\n--jobs=9 vs -j9, without success.\n\nhttps://cirrus-ci.com/task/5580584675180544\n\nhttps://github.com/justinpryzby/postgres/commit/a865adc5b8c\nfc7b3ea8bce vcregress/ci: test modules/contrib with NO_INSTALLCHECK=1\n03adb043d16 wip: vcsregress: add alltaptests\n63bf0796ffd wip: vcregress: run alltaptests in parallel\n9dc327f6b30 f!wip: vcregress: run alltaptests in a single prove invocation\na865adc5b8c tmp: run tap tests first\n\n> > It currently fails in 027_stream_regress.pl, although I keep hoping that it\n> > had been fixed...\n> \n> That's likely because you're not setting REGRESS_OUTPUTDIR like\n> src/test/recovery/Makefile and recoverycheck() are doing.\n\nYes, thanks.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 12 Feb 2022 16:06:40 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-12 16:06:40 -0600, Justin Pryzby wrote:\n> I had some success with that, but it doesn't seem to be significantly faster -\n> it looks a lot like the tests are not actually running in parallel.\n\nNote that prove unfortunately serializes the test output to be in the order it\nstarted them, even when tests run concurrently. Extremely unhelpful, but ...\n\nIsn't this kind of a good test time? I thought earlier your alltaptests target\ntook a good bit longer?\n\nOne nice bit is that the output is a *lot* easier to read.\n\n\nYou could try improving the total time by having prove remember slow tests and\nuse that file to run the slowest tests first next time. --state slow,save or\nsuch I believe. Of course we'd have to save that state file...\n\nTo verify that tests actually run concurrently you could emit a few\nnotes. Looks like those are output immediately...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Feb 2022 14:26:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-03 11:57:18 -0800, Andres Freund wrote:\n> > 95793675633 cirrus: spell ccache_maxsize\n> \n> Yep, will apply with a bunch of your other changes, if you answer a question\n> or two...\n\nPushed.\n\n\n> > e5286ede1b4 cirrus: avoid unnecessary double star **\n> \n> Can't get excited about this, but whatever.\n> \n> What I am excited about is that some of your other changes showed that we\n> don't need separate *_artifacts for separate directories anymore. That used to\n> be the case, but an array of paths is now supported. Putting log, diffs, and\n> regress_log in one directory will be considerably more convenient...\n\npushed together.\n\n\n> > 398b7342349 cirrus/macos: uname -a and export at end of sysinfo\n> \n> Shrug.\n\nPushed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Feb 2022 16:24:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nAlvaro, adding you because of the \"branch\" discussion in the MERGE thread.\n\n\nOn 2022-02-03 23:04:04 -0600, Justin Pryzby wrote:\n> > I'm a bit worried about the increased storage and runtime overhead due to the\n> > docs changes. We probably can make it a good bit cheaper though.\n>\n> If you mean overhead of additional disk operations, it shouldn't be an issue,\n> since the git clone uses shared references (not even hardlinks).\n\nI was concerned about those and just the increased runtime of the script. Our\nsources are 130MB, leaving the shared .git aside. But maybe it's just fine.\n\nWe probably can just get rid of the separate clone and configure run though?\nBuild the docs, copy the output, do a git co parent docs/, build again?\n\n\nWhat was the reason behind moving the docs stuff from the compiler warnings\ntask to linux? Not that either fits very well... I think it might be worth\nmoving the docs stuff into its own task, using a 1 CPU container (docs build\nisn't parallel anyway). Think that'll be easier to see in the cfbot page. I\nthink it's also good to run it independent of the linux task succeeding - a\ndocs failure seems like a separate enough issue.\n\n\n> > > 9d0f03d3450 wip: cirrus: code coverage\n> >\n> > I don't think it's good to just unconditionally reference the master branch\n> > here. It'll do bogus stuff once 15 is branched off. It works for cfbot, but\n> > not other uses.\n>\n> That's only used for filtering changed files. 
It uses git diff --cherry-pick\n> postgres/master..., which would *try* to avoid showing anything which is also\n> in master.\n\nYou commented in another email on this:\n\nOn 2022-02-11 12:51:50 -0600, Justin Pryzby wrote:\n> Because I put your patch on top of some other branch with the CI coverage (and\n> other stuff).\n>\n> It tries to only show files changed by the branch being checked:\n> https://github.com/justinpryzby/postgres/commit/d668142040031915\n>\n> But it has to figure out where the branch \"starts\". Which I did by looking at\n> \"git diff --cherry-pick origin...\"\n>\n> Andres thinks that does the wrong thing if CI is run manually (not by CFBOT)\n> for patches against backbranches. I'm not sure git diff --cherry-pick is\n> widely known/used, but I think using that relative to master may be good\n> enough.\n\nNote that I'm not concerned about \"manually\" running CI against other branches\n- I'm concerned about the point where 15 branches off and CI will\nautomatically also run against 15. E.g. in the postgres repo\nhttps://cirrus-ci.com/github/postgres/postgres/\n\nI can see a few ways to deal with this:\n1) iterate over release branches and see which has the smallest diff\n2) parse the branch name, if it's a cfbot run, we know it's master, otherwise skip\n3) change cfbot so that it maintains things as pull requests, which have a\n base branch\n\n\n> > Is looking at the .c files in the change really a reliable predictor of where\n> > code coverage changes? I'm doubtful. Consider stuff like removing the last\n> > user of some infrastructure or such. Or adding the first.\n>\n> You're right that it isn't particularly accurate, but it's a step forward if\n> lots of patches use it to check/improve coverge of new code.\n\nMaybe it's good enough... The overhead in test runtime is noticeable (~5.30m\n-> ~7.15m), but probably acceptable. 
Although I also would like to enable\n-fsanitize=alignment and -fsanitize=alignment, which add about 2 minutes as\nwell (-fsanitize=address is a lot more expensive), they both work best on\nlinux.\n\n\n> In addition to the HTML generated for changed .c files, it's possible to create\n> a coverage.gcov output for everything, which could be retrieved to generate\n> full HTML locally. It's ~8MB (or 2MB after gzip).\n\nNot sure that doing it locally adds much over just running tests\nlocally.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 12 Feb 2022 17:20:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
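Option 1 above (pick the release branch with the smallest commit distance) could be sketched in shell like this. It is a hedged sketch only: the branch names and the use of `git rev-list --count` are assumptions for illustration, and the selection logic is kept as a standalone helper so it can be exercised on mocked "count branch" pairs without a repository.

```shell
# Hypothetical sketch: choose the base branch whose commit distance to HEAD
# is smallest. Reads lines of "<commit-count> <branch>" on stdin and prints
# the branch with the smallest count.
nearest_branch() {
    sort -n | head -n 1 | cut -d' ' -f2
}

# With git available, the input pairs could be produced like:
#   for b in master REL_14_STABLE REL_13_STABLE; do
#       echo "$(git rev-list --count "$b"..HEAD) $b"
#   done | nearest_branch
```

Since `sort -n` orders by the leading count, the first line after sorting belongs to the closest branch.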
{
"msg_contents": "On Sat, Feb 12, 2022 at 04:24:20PM -0800, Andres Freund wrote:\n> > > e5286ede1b4 cirrus: avoid unnecessary double star **\n> > \n> > Can't get excited about this, but whatever.\n> > \n> > What I am excited about is that some of your other changes showed that we\n> > don't need separate *_artifacts for separate directories anymore. That used to\n> > be the case, but an array of paths is now supported. Putting log, diffs, and\n> > regress_log in one directory will be considerably more convenient...\n> \n> pushed together.\n\nWhile rebasing, I noticed an error.\n\nYou wrote **/.diffs, but should be **/*.diffs\n\n--- a/.cirrus.yml\n+++ b/.cirrus.yml\n@@ -30,15 +30,11 @@ env:\n # What files to preserve in case tests fail\n on_failure: &on_failure\n log_artifacts:\n- path: \"**/**.log\"\n+ paths:\n+ - \"**/*.log\"\n+ - \"**/.diffs\"\n+ - \"**/regress_log_*\"\n type: text/plain\n- regress_diffs_artifacts:\n- path: \"**/**.diffs\"\n- type: text/plain\n- tap_artifacts:\n- path: \"**/regress_log_*\"\n- type: text/plain\n-\n\n\n",
"msg_date": "Sat, 12 Feb 2022 20:47:04 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
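The mixup reported above is easy to demonstrate: a pattern ending in `/.diffs` only matches a file literally named `.diffs`, while `/*.diffs` matches any name with that suffix. As a stand-in for Cirrus's artifact globs (which behave analogously for the trailing path component), here is a small POSIX-shell illustration using `case` pattern matching; the `matches` helper is purely illustrative.

```shell
# Minimal illustration of ".diffs" vs "*.diffs" pattern matching.
# usage: matches PATTERN NAME; exit status 0 iff NAME matches PATTERN
matches() {
    case "$2" in
        $1) return 0 ;;  # NAME matched the (unquoted) PATTERN
        *)  return 1 ;;
    esac
}
```

For example, `matches '*.diffs' regression.diffs` succeeds, whereas `matches '.diffs' regression.diffs` fails, which is exactly why the original artifact glob collected nothing.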
{
"msg_contents": "On 2022-02-12 20:47:04 -0600, Justin Pryzby wrote:\n> While rebasing, I noticed an error.\n> \n> You wrote **/.diffs, but should be **/*.diffs\n\nEmbarrassing. Thanks for noticing! Pushed the fix...\n\n\n",
"msg_date": "Sat, 12 Feb 2022 19:43:37 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Feb 12, 2022 at 05:20:08PM -0800, Andres Freund wrote:\n> On 2022-02-03 23:04:04 -0600, Justin Pryzby wrote:\n> > > I'm a bit worried about the increased storage and runtime overhead due to the\n> > > docs changes. We probably can make it a good bit cheaper though.\n> >\n> > If you mean overhead of additional disk operations, it shouldn't be an issue,\n> > since the git clone uses shared references (not even hardlinks).\n> \n> I was concerned about those and just the increased runtime of the script. Our\n> sources are 130MB, leaving the shared .git aside. But maybe it's just fine.\n> \n> We probably can just get rid of the separate clone and configure run though?\n> Build the docs, copy the output, do a git co parent docs/, build again?\n\nYes - works great, thanks.\n\n> What was the reason behind moving the docs stuff from the compiler warnings\n> task to linux?\n\nI wanted to build docs even if the linux task fails. To allow CFBOT to link to\nthem, so someone can always review the docs, in HTML (rather than reading SGML\nwith lines prefixed with +).\n\n> Not that either fits very well... I think it might be worth\n> moving the docs stuff into its own task, using a 1 CPU container (docs build\n> isn't parallel anyway). Think that'll be easier to see in the cfbot page. I\n\nYeah. The only drawback is the duplication (including the \"git parent\" stuff).\n\nBTW, docs can be built in parallel, and CI is using BUILD_JOBS: 4.\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\n/usr/bin/xmllint --path . --noout --valid postgres.sgml\n/usr/bin/xsltproc --path . --stringparam pg.version '15devel' stylesheet.xsl postgres.sgml\n/usr/bin/xsltproc --path . 
--stringparam pg.version '15devel' stylesheet-man.xsl postgres.sgml\n\n> 1) iterate over release branches and see which has the smallest diff\n\nMaybe for each branch: do echo `git rev-list or merge-base |wc -l` $branch; done |sort -n |head -1\n\n> > > Is looking at the .c files in the change really a reliable predictor of where\n> > > code coverage changes? I'm doubtful. Consider stuff like removing the last\n> > > user of some infrastructure or such. Or adding the first.\n> >\n> > You're right that it isn't particularly accurate, but it's a step forward if\n> > lots of patches use it to check/improve coverge of new code.\n> \n> Maybe it's good enough... The overhead in test runtime is noticeable (~5.30m\n> -> ~7.15m), but probably acceptable. Although I also would like to enable\n> -fsanitize=alignment and -fsanitize=alignment, which add about 2 minutes as\n> well (-fsanitize=address is a lot more expensive), they both work best on\n> linux.\n\nThere's other things that it'd be nice to enable, but maybe these don't need to\nbe on by default. Maybe just have a list of optional compiler flags (and hints\nwhen they're useful). Like WRITE_READ_PARSE_PLAN_TREES.\n\n> > In addition to the HTML generated for changed .c files, it's possible to create\n> > a coverage.gcov output for everything, which could be retrieved to generate\n> > full HTML locally. It's ~8MB (or 2MB after gzip).\n> \n> Note sure that doing doing it locally adds much over just running tests\n> locally.\n\nYou're right, since one needs to have the patched source files to generate the\nHTML.\n\nOn Thu, Feb 03, 2022 at 11:57:18AM -0800, Andres Freund wrote:\n> I think it may be a bit cleaner to have the \"install additional packages\"\n> \"template step\" be just that, and not merge in other contents into it?\n\nI renamed the \"cores\" task since it consistently makes me think you're doing something\nwith CPU cores. I took it as an opportunity to generalize the task.\n\nThese patches are ready for review. 
I'll plan to mail about TAP stuff\ntomorrow.",
"msg_date": "Sat, 12 Feb 2022 23:19:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-12 23:19:38 -0600, Justin Pryzby wrote:\n> On Sat, Feb 12, 2022 at 05:20:08PM -0800, Andres Freund wrote:\n> > What was the reason behind moving the docs stuff from the compiler warnings\n> > task to linux?\n> \n> I wanted to build docs even if the linux task fails. To allow CFBOT to link to\n> them, so somoene can always review the docs, in HTML (rather than reading SGML\n> with lines prefixed with +).\n\nI'd be ok with running the compiler warnings job without the dependency, if\nthat's the connection.\n\n\n> BTW, docs can be built in parallel, and CI is using BUILD_JOBS: 4.\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n> /usr/bin/xsltproc --path . --stringparam pg.version '15devel' stylesheet.xsl postgres.sgml\n> /usr/bin/xsltproc --path . --stringparam pg.version '15devel' stylesheet-man.xsl postgres.sgml\n\nSure, it just doesn't make a difference:\n\nmake -j48 -C doc/src/sgml/ maintainer-clean && time make -j48 -C doc/src/sgml/\nreal\t0m34.626s\nuser\t0m34.342s\nsys\t0m0.298s\n\nmake -j48 -C doc/src/sgml/ maintainer-clean && time make -C doc/src/sgml/\n\nreal\t0m34.780s\nuser\t0m34.494s\nsys\t0m0.285s\n\n\n\n> > 1) iterate over release branches and see which has the smallest diff\n> \n> Maybe for each branch: do echo `git revlist or merge base |wc -l` $branch; done |sort -n |head -1\n> \n> > > > Is looking at the .c files in the change really a reliable predictor of where\n> > > > code coverage changes? I'm doubtful. Consider stuff like removing the last\n> > > > user of some infrastructure or such. Or adding the first.\n> > >\n> > > You're right that it isn't particularly accurate, but it's a step forward if\n> > > lots of patches use it to check/improve coverge of new code.\n> > \n> > Maybe it's good enough... The overhead in test runtime is noticeable (~5.30m\n> > -> ~7.15m), but probably acceptable. 
Although I also would like to enable\n> > -fsanitize=undefined and -fsanitize=alignment, which add about 2 minutes as\n> > well (-fsanitize=address is a lot more expensive), they both work best on\n> > linux.\n> \n> There's other things that it'd be nice to enable, but maybe these don't need to\n> be on by default. Maybe just have a list of optional compiler flags (and hints\n> when they're useful). Like WRITE_READ_PARSE_PLAN_TREES.\n\nI think it'd be good to enable a reasonable set by default. Particularly for\nnewer contributors stuff like forgetting in/out/readfuncs, or not knowing\nabout some undefined behaviour, is easy. Probably makes sense to use different\nsettings on different tasks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 00:30:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sun, Feb 13, 2022 at 3:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > There's other things that it'd be nice to enable, but maybe these don't need to\n> > be on by default. Maybe just have a list of optional compiler flags (and hints\n> > when they're useful). Like WRITE_READ_PARSE_PLAN_TREES.\n>\n> I think it'd be good to enable a reasonable set by default. Particularly for\n> newer contributors stuff like forgetting in/out/readfuncs, or not knowing\n> about some undefined behaviour, is easy. Probably makes sense to use different\n> settings on different tasks.\n\nThis is exactly why I'm not a huge fan of having ci stuff in the tree.\nIt supposes that there's one right way to do a build, but in reality,\ndifferent people want and indeed need to use different options for all\nkinds of reasons. That's the whole value of having things like\nconfigure and pg_config_manual.h. When we start arguing about whether\nor ci builds should use -DWRITE_READ_PARSE_PLAN_TREES we're inevitably\ninto the realm where no choice is objectively better, and whoever\nyells the loudest will get it the way they want, and then somebody\nelse later will say \"well that's dumb I don't like that\" or even just\n\"well that's not the right thing for testing MY patch,\" at which point\nthe previous mailing list discussion will be cited as \"precedent\" for\nwhat was essentially an arbitrary decision made by 1 or 2 people.\n\nMind you, I'm not trying to hold back the tide. I realize that in-tree\nci stuff is very much in vogue. But I think it would be a very healthy\nthing if we acknowledged that what goes in there is far more arbitrary\nthan most of what we put in the tree.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 13 Feb 2022 11:39:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This is exactly why I'm not a huge fan of having ci stuff in the tree.\n> It supposes that there's one right way to do a build, but in reality,\n> different people want and indeed need to use different options for all\n> kinds of reasons. That's the whole value of having things like\n> configure and pg_config_manual.h. When we start arguing about whether\n> or ci builds should use -DWRITE_READ_PARSE_PLAN_TREES we're inevitably\n> into the realm where no choice is objectively better,\n\nRight. Can we set things up so that it's not too painful to inject\ncustom build options into a CI build? I should think that at the\nvery least one needs to be able to vary the configure switches and\nCPPFLAGS/CFLAGS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Feb 2022 12:13:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-13 12:13:17 -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This is exactly why I'm not a huge fan of having ci stuff in the tree.\n> > It supposes that there's one right way to do a build, but in reality,\n> > different people want and indeed need to use different options for all\n> > kinds of reasons.\n\nSure. But why is that an argument against \"having ci stuff in the tree\"?\n\nAll it does is to make sure that a certain base-level of testing is easy to\nachieve for everyone. I don't like working on windows or mac, but my patches\noften have platform dependent bits. Now it's less likely that I need to\nmanually interact with windows.\n\nI don't think we can (or well should) replace the buildfarm with the CI\nstuff. The buildfarm provides extensive and varied coverage for master/release\nbranches. Which isn't feasible for unmerged development work, including cfbot,\nfrom a resource usage POV alone.\n\n\n> > That's the whole value of having things like\n> > configure and pg_config_manual.h. When we start arguing about whether\n> > or ci builds should use -DWRITE_READ_PARSE_PLAN_TREES we're inevitably\n> > into the realm where no choice is objectively better,\n\n> Right. Can we set things up so that it's not too painful to inject\n> custom build options into a CI build?\n\nWhat kind of injection are you thinking about? A patch author can obviously\njust add options in .cirrus.yml. That's something possible now, that was not\npossible with cfbot applying its own .cirrus.yml\n\nIt'd be nice if there were a way to do it more easily for msvc and configure\nbuilds together, right now it'd require modifying those tasks in different\nways. But that's not really a CI question.\n\n\nI'd like to have things like -fanitize=aligned and\n-DWRITE_READ_PARSE_PLAN_TREES on by default for CI, primarily for cfbot's\nbenefit. 
Most patch authors won't know about using\n-DWRITE_READ_PARSE_PLAN_TREES etc, so they won't even think about enabling\nthem. We're *really* not doing well on the \"timely review\" side of things, so\nwe at least should not waste time on high-latency back-and-forth for easily\nautomatically detectable things.\n\n\n> I should think that at the very least one needs to be able to vary the\n> configure switches and CPPFLAGS/CFLAGS.\n\nDo you mean as part of a patch tested with cfbot, CI running for pushes to\nyour own repository, or ...?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 11:14:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-13 12:13:17 -0500, Tom Lane wrote:\n>> Right. Can we set things up so that it's not too painful to inject\n>> custom build options into a CI build?\n\n> What kind of injection are you thinking about?\n\nThat's exactly what needs to be decided.\n\n> A patch author can obviously\n> just add options in .cirrus.yml. That's something possible now, that was not\n> possible with cfbot applying its own .cirrus.yml\n\nFine, but are committers supposed to keep track of the fact that they\nshouldn't commit that part of a patch? I'd prefer something a bit more\nout-of-band. I don't know this technology well enough to propose a way.\n\n> I'd like to have things like -fanitize=aligned and\n> -DWRITE_READ_PARSE_PLAN_TREES on by default for CI, primarily for cfbot's\n> benefit. Most patch authors won't know about using\n> -DWRITE_READ_PARSE_PLAN_TREES etc, so they won't even think about enabling\n> them. We're *really* not doing well on the \"timely review\" side of things, so\n> we at least should not waste time on high latency back-forth for easily\n> automatically detectable things.\n\nI don't personally have an objection to either of those; maybe Robert\ndoes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 13 Feb 2022 15:09:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-13 15:09:20 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-02-13 12:13:17 -0500, Tom Lane wrote:\n> >> Right. Can we set things up so that it's not too painful to inject\n> >> custom build options into a CI build?\n> \n> > What kind of injection are you thinking about?\n> \n> That's exactly what needs to be decided.\n> [...]\n> I'd prefer something a bit more out-of-band. I don't know this technology\n> well enough to propose a way.\n\nI don't yet understand the precise use case for adjustments well enough to\npropose something. Who would like to adjust something for what purpose? The\n\"original\" author, for a one-off test? A reviewer / committer, to track down a\nhunch?\n\nIf it's about CI runs in in a personal repository, one can set additional\nenvironment vars from the CI management interface. We can make sure they work\n(the ci stuff probably overrides CFLAGS, but COPT should work) and document\nthe way to do so.\n\n\n> > A patch author can obviously\n> > just add options in .cirrus.yml. That's something possible now, that was not\n> > possible with cfbot applying its own .cirrus.yml\n> \n> Fine, but are committers supposed to keep track of the fact that they\n> shouldn't commit that part of a patch?\n\nI'd say it depends on the the specific modification - there's some patches\nwhere it seems to make sense to adjust extend CI as part of it and have it\nmerged. But yea, in other cases committers would have to take them out.\n\n\nFor more on-off stuff one would presumably not want to spam the list the list\nwith a full patchset to trigger a cfbot run, nor wait till cfbot gets round to\nthat patch again. When pushing to a personal repo it's of course easy to just\nhave a commit on-top of what's submitted to play around with compile options.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 12:24:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
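To make the adjustment paths discussed above concrete, this is the shape such a one-off tweak could take in `.cirrus.yml`. The fragment is illustrative only: the chosen flags are examples, and it assumes (as noted above) that the tasks let `COPT` through to the build; PostgreSQL's makefiles append `COPT` to `CFLAGS`, which is why it composes with whatever flags a task already sets.

```yaml
# Illustrative one-off tweak, not meant to be committed: inject extra
# build options via the environment so that both the author's own pushes
# and a cfbot run pick them up.
env:
  COPT: -DWRITE_READ_PARSE_PLAN_TREES -DRANDOMIZE_ALLOCATED_MEMORY
```

For runs in a personal repository, the same variables can instead be set from the CI provider's settings UI, which keeps the committed file untouched.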
{
"msg_contents": "On Sat, Feb 12, 2022 at 04:24:20PM -0800, Andres Freund wrote:\n> > What I am excited about is that some of your other changes showed that we\n> > don't need separate *_artifacts for separate directories anymore. That used to\n> > be the case, but an array of paths is now supported. Putting log, diffs, and\n> > regress_log in one directory will be considerably more convenient...\n> \n> pushed together.\n\nThis change actually complicates things.\n\nBefore, there was log/src/test/recovery/tmp_check/log, with a few files for\n001, a few for 002, a few for 003. This are a lot of output files, but at\nleast they're all related.\n\nNow, there's a single log/tmp_check/log, which has logs for the entire tap\ntests. There's 3 pages of 001*, 2 pages of 002*, 3 pages of 003, etc. \n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 13 Feb 2022 15:02:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-13 15:02:50 -0600, Justin Pryzby wrote:\n> On Sat, Feb 12, 2022 at 04:24:20PM -0800, Andres Freund wrote:\n> > > What I am excited about is that some of your other changes showed that we\n> > > don't need separate *_artifacts for separate directories anymore. That used to\n> > > be the case, but an array of paths is now supported. Putting log, diffs, and\n> > > regress_log in one directory will be considerably more convenient...\n> >\n> > pushed together.\n>\n> This change actually complicates things.\n>\n> Before, there was log/src/test/recovery/tmp_check/log, with a few files for\n> 001, a few for 002, a few for 003. This are a lot of output files, but at\n> least they're all related.\n\n> Now, there's a single log/tmp_check/log, which has logs for the entire tap\n> tests. There's 3 pages of 001*, 2 pages of 002*, 3 pages of 003, etc.\n\nHm? Doesn't look like that to me, and I don't see why it would work that way?\nThis didn't do anything to flatten the directory hierarchy, just combine three\nhierarchies into one?\n\nWhat I see, and what I expect, is that logs end up in\ne.g. log/src/test/recovery/tmp_check/log but that that directory contains\nregress_log_*, as well as *.log? Before one needed to go through the\nhierarchy multiple times to see both regress_log_ (i.e. tap test log) as well\nas 0*.log (i.e. server logs).\n\nA random example: https://cirrus-ci.com/task/5152523873943552 shows the logs\nfor the failure in log/src/bin/pg_upgrade/tmp_check/log\n\n\nIf you're seeing this on windows on one of your test branches, that's much\nmore likely to be caused by the alltaptests stuff, than by the change in\nartifact instruction.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 13:23:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sun, Feb 13, 2022 at 01:23:16PM -0800, Andres Freund wrote:\n> If you're seeing this on windows on one of your test branches, that's much\n> more likely to be caused by the alltaptests stuff, than by the change in\n> artifact instruction.\n\nOh - I suppose you're right. That's an unfortunate consequence of running a\nsingle prove instance without chdir.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 13 Feb 2022 15:31:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Feb 12, 2022 at 02:26:25PM -0800, Andres Freund wrote:\n> On 2022-02-12 16:06:40 -0600, Justin Pryzby wrote:\n> > I had some success with that, but it doesn't seem to be significantly faster -\n> > it looks a lot like the tests are not actually running in parallel.\n\nNote that the total test time is close to the sum of the individual test times.\nBut I think that may be an artifact of how prove is showing/attributing times\nto each test (which, if so, is misleading).\n\n> Note that prove unfortunately serializes the test output to be in the order it\n> started them, even when tests run concurrently. Extremely unhelpful, but ...\n\nAre you sure ? When I run it locally, I see:\nrm -fr src/test/recovery/tmp_check ; time PERL5LIB=`pwd`/src/test/perl TESTDIR=`pwd`/src/test/recovery PATH=`pwd`/tmp_install/usr/local/pgsql/bin:$PATH PG_REGRESS=`pwd`/src/test/regress/pg_regress REGRESS_SHLIB=`pwd`/src/test/regress/regress.so prove --time -j4 --ext '*.pl' `find src -name t`\n...\n[15:34:48] src/bin/scripts/t/101_vacuumdb_all.pl ....................... ok 104 ms ( 0.00 usr 0.00 sys + 2.35 cusr 0.47 csys = 2.82 CPU)\n[15:34:49] src/bin/scripts/t/090_reindexdb.pl .......................... ok 8894 ms ( 0.06 usr 0.01 sys + 14.45 cusr 3.38 csys = 17.90 CPU)\n[15:34:50] src/bin/pg_config/t/001_pg_config.pl ........................ ok 79 ms ( 0.00 usr 0.01 sys + 0.23 cusr 0.04 csys = 0.28 CPU)\n[15:34:50] src/bin/pg_waldump/t/001_basic.pl ........................... ok 35 ms ( 0.00 usr 0.00 sys + 0.26 cusr 0.02 csys = 0.28 CPU)\n[15:34:51] src/bin/pg_test_fsync/t/001_basic.pl ........................ ok 100 ms ( 0.01 usr 0.00 sys + 0.24 cusr 0.04 csys = 0.29 CPU)\n[15:34:51] src/bin/pg_archivecleanup/t/010_pg_archivecleanup.pl ........ ok 177 ms ( 0.02 usr 0.00 sys + 0.26 cusr 0.03 csys = 0.31 CPU)\n[15:34:55] src/bin/scripts/t/100_vacuumdb.pl ........................... 
ok 11267 ms ( 0.12 usr 0.04 sys + 13.47 cusr 3.20 csys = 16.83 CPU)\n[15:34:57] src/bin/scripts/t/102_vacuumdb_stages.pl .................... ok 5802 ms ( 0.06 usr 0.01 sys + 7.70 cusr 1.37 csys = 9.14 CPU)\n...\n\n=> scripts/ stuff, followed by other stuff, followed by more, slower, scripts/ stuff.\n\nBut I never saw that in cirrus.\n\n> Isn't this kind of a good test time? I thought earlier your alltaptests target\n> took a good bit longer?\n\nThe original alltaptests runs in 16m 21s.\nhttps://cirrus-ci.com/task/6679061752709120\n\n2 weeks ago, it was ~14min with your patch to cache initdb.\nhttps://cirrus-ci.com/task/5439320633901056\n\nAs I recall, that didn't seem to improve runtime when combined with my parallel\npatch.\n\n> One nice bit is that the output is a *lot* easier to read.\n> \n> You could try improving the total time by having prove remember slow tests and\n> use that file to run the slowest tests first next time. --state slow,save or\n> such I believe. Of course we'd have to save that state file...\n\nIn a test, this hurt rather than helped (13m 42s).\nhttps://cirrus-ci.com/task/6359167186239488\n\nI'm not surprised - it makes sense to run 10 fast tests at once, but usually\ndoesn't make sense to run 10 slow tests at once (at least a couple of\nwhich are doing something intensive). It was faster (12m16s) to do it\nbackwards (fastest tests first).\nhttps://cirrus-ci.com/task/5745115443494912\n\nBTW, does it make sense to remove test_regress_parallel_script ? The\npg_upgrade run would do the same things, no ? If so, it might make sense to\nrun that first. OTOH, you suggested to run the upgrade tests with checksums\nenabled, which seems like a good idea.\n\nNote that in the attached patches, I changed the msvc \"warnings\" to use \"tee\".\n\nI don't know how to fix the pipeline test in a less hacky way...\n\nYou said the docs build should be a separate task, but then said that it'd be\nokay to remove the dependency. So I did it both ways. 
There's currently some\nduplication between the docs patch and code coverage patch.\n\n-- \nJustin",
"msg_date": "Sun, 13 Feb 2022 15:42:13 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
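The scheduling experiment above boils down to reordering tests by their saved timings: prove's `--state` file remembers per-test results, and in these runs the winning schedule was simply "fastest first". A toy shell sketch of that reordering (the `name seconds` timings format below is invented for illustration; it is not prove's actual state-file format):

```shell
# Given "name seconds" pairs like the prove output quoted above, emit
# test names fastest-first -- the ordering that gave the best total time
# in these experiments.
timings='027_stream_regress 129
001_basic 1
100_vacuumdb 11'
fastest_first=$(printf '%s\n' "$timings" | sort -k2 -n | awk '{print $1}')
printf '%s\n' "$fastest_first"
```

The intuition discussed above: many cheap tests coexist happily on shared CI hardware, while front-loading several resource-hungry tests overcommits it.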
{
"msg_contents": "Hi,\n\nOn 2022-02-13 15:31:20 -0600, Justin Pryzby wrote:\n> Oh - I suppose you're right. That's an unfortunate consequence of running a\n> single prove instance without chdir.\n\nI don't think it's chdir that's relevant (that changes into the source\ndirectory after all). It's the TESTDIR environment variable.\n\nI was thinking that we should make Utils.pm's INIT block responsible for\nfiguring out both the directory a test should run in and the log location,\ninstead having that in vcregress.pl and Makefile.global.in. Mostly because\ndoing it in the latter means we can't start tests with different TESTDIR and\nworking dir at the same time.\n\nIf instead we pass the location of the top-level build and top-level source\ndirectory from vcregress.pl / Makefile.global, the tap test infrastructure can\nfigure out that stuff themselves, on a per-test basis.\n\nFor msvc builds we probably would need to pass in some information that allow\nUtils.pm to set up PATH appropriately. I think that might just require knowing\nthat a) msvc build system is used b) Release vs Debug.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 13:53:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
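The proposal above amounts to computing everything from the test script's own location instead of a single shared TESTDIR. A shell analogue of that logic (the real version would be Perl in Utils.pm's INIT block; the example path is just an illustration):

```shell
# Derive per-test output locations from the script path itself, so that
# concurrently running tests write to disjoint tmp_check directories
# instead of colliding on one TESTDIR.
script="src/bin/scripts/t/100_vacuumdb.pl"
test_dir=$(dirname "$script")            # src/bin/scripts/t
test_name=$(basename "$script" .pl)      # 100_vacuumdb
log_path="$test_dir/tmp_check/log"
test_logfile="$log_path/regress_log_$test_name"
echo "$test_logfile"
```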
{
"msg_contents": "Hi,\n\nOn 2022-02-13 15:42:13 -0600, Justin Pryzby wrote:\n> > Note that prove unfortunately serializes the test output to be in the order it\n> > started them, even when tests run concurrently. Extremely unhelpful, but ...\n> \n> Are you sure ?\n\nSomewhat. I think it's a question of the prove version and some autodetection\nof what type of environment prove is running in (stdin/stdout/stderr). I don't\nremember the details, but at some point I pinpointed the source of the\nserialization, and verified that parallelization makes a significant\ndifference in runtime even without being easily visible :(. But this is all\nvague memory, so I might be wrong.\n\nReminds me that somebody (ugh, me???) should fix the perl > 5.26\nincompatibilities on windows, then we'd also get a newer prove...\n\n\n\n> > One nice bit is that the output is a *lot* easier to read.\n> > \n> > You could try improving the total time by having prove remember slow tests and\n> > use that file to run the slowest tests first next time. --state slow,save or\n> > such I believe. Of course we'd have to save that state file...\n> \n> In a test, this hurt rather than helped (13m 42s).\n> https://cirrus-ci.com/task/6359167186239488\n> \n> I'm not surprised - it makes sense to run 10 fast tests at once, but usually\n> doesn't make sense to run 10 slow tests tests at once (at least a couple of\n> which are doing something intensive). It was faster (12m16s) to do it\n> backwards (fastest tests first).\n> https://cirrus-ci.com/task/5745115443494912\n\nHm.\n\nI know I saw significant reduction in test times locally with meson by\nstarting slow tests earlier, because they're the limiting factor for the\n*overall* test runtime - but I have more resources than on cirrus. 
Even\nlocally on a windows VM, with the current buildsystem, I found that moving 027\nto earlier within recoverycheck reduced the test time.\n\nBut it's possible that with all tests being scheduled concurrently, starting\nthe slow tests early leads to sufficient resource overcommit to be\nproblematic.\n\n\n> BTW, does it make sense to remove test_regress_parallel_script ? The\n> pg_upgrade run would do the same things, no ? If so, it might make sense to\n> run that first. OTOH, you suggested to run the upgrade tests with checksums\n> enabled, which seems like a good idea.\n\nNo, I don't think so. The main regression tests are by far the most important\nthing during normal development. Just relying on main regression test runs\nembedded in other tests, with different output and config of the main\nregression test imo is just confusing.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 13 Feb 2022 14:07:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sun, Feb 13, 2022 at 01:53:19PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-13 15:31:20 -0600, Justin Pryzby wrote:\n> > Oh - I suppose you're right. That's an unfortunate consequence of running a\n> > single prove instance without chdir.\n> \n> I don't think it's chdir that's relevant (that changes into the source\n> directory after all). It's the TESTDIR environment variable.\n> \n> I was thinking that we should make Utils.pm's INIT block responsible for\n> figuring out both the directory a test should run in and the log location,\n> instead having that in vcregress.pl and Makefile.global.in. Mostly because\n> doing it in the latter means we can't start tests with different TESTDIR and\n> working dir at the same time.\n> \n> If instead we pass the location of the top-level build and top-level source\n> directory from vcregress.pl / Makefile.global, the tap test infrastructure can\n> figure out that stuff themselves, on a per-test basis.\n> \n> For msvc builds we probably would need to pass in some information that allow\n> Utils.pm to set up PATH appropriately. I think that might just require knowing\n> that a) msvc build system is used b) Release vs Debug.\n\nI'm totally unsure if this resembles what you're thinking of, and I'm surprised\nI got it working so easily. 
But it gets the tap test output in separate dirs,\nand CI is passing for everyone (windows failed because I injected a \"false\" to\nforce it to upload artifacts).\n\nhttps://github.com/justinpryzby/postgres/runs/5211673291\n\ncommit 899e562102dd7a663cb087cdf88f0f78f8302492\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Tue Feb 15 20:02:36 2022 -0600\n\n wip: set TESTDIR from src/test/perl rather than Makefile/vcregress\n\ndiff --git a/src/Makefile.global.in b/src/Makefile.global.in\nindex 05c54b27def..1e49d8c8c37 100644\n--- a/src/Makefile.global.in\n+++ b/src/Makefile.global.in\n@@ -450,7 +450,7 @@ define prove_installcheck\n rm -rf '$(CURDIR)'/tmp_check\n $(MKDIR_P) '$(CURDIR)'/tmp_check\n cd $(srcdir) && \\\n- TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n+ PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n PGPORT='6$(DEF_PGPORT)' top_builddir='$(CURDIR)/$(top_builddir)' \\\n PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' \\\n $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n@@ -460,7 +460,7 @@ define prove_installcheck\n rm -rf '$(CURDIR)'/tmp_check\n $(MKDIR_P) '$(CURDIR)'/tmp_check\n cd $(srcdir) && \\\n- TESTDIR='$(CURDIR)' PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n+ PATH=\"$(bindir):$(CURDIR):$$PATH\" \\\n PGPORT='6$(DEF_PGPORT)' top_builddir='$(top_builddir)' \\\n PG_REGRESS='$(top_builddir)/src/test/regress/pg_regress' \\\n $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n@@ -471,7 +471,7 @@ define prove_check\n rm -rf '$(CURDIR)'/tmp_check\n $(MKDIR_P) '$(CURDIR)'/tmp_check\n cd $(srcdir) && \\\n- TESTDIR='$(CURDIR)' $(with_temp_install) PGPORT='6$(DEF_PGPORT)' \\\n+ $(with_temp_install) PGPORT='6$(DEF_PGPORT)' \\\n PG_REGRESS='$(CURDIR)/$(top_builddir)/src/test/regress/pg_regress' \\\n $(PROVE) $(PG_PROVE_FLAGS) $(PROVE_FLAGS) $(if $(PROVE_TESTS),$(PROVE_TESTS),t/*.pl)\n endef\ndiff --git a/src/bin/psql/t/010_tab_completion.pl 
b/src/bin/psql/t/010_tab_completion.pl\nindex 005961f34d4..a86dc78a365 100644\n--- a/src/bin/psql/t/010_tab_completion.pl\n+++ b/src/bin/psql/t/010_tab_completion.pl\n@@ -70,7 +70,7 @@ delete $ENV{LS_COLORS};\n # to run in the build directory so that we can use relative paths to\n # access the tmp_check subdirectory; otherwise the output from filename\n # completion tests is too variable.\n-if ($ENV{TESTDIR})\n+if ($ENV{TESTDIR} && 0)\n {\n \tchdir $ENV{TESTDIR} or die \"could not chdir to \\\"$ENV{TESTDIR}\\\": $!\";\n }\ndiff --git a/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl b/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl\nindex facfec5cad4..2a0eca77440 100644\n--- a/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl\n+++ b/src/test/modules/libpq_pipeline/t/001_libpq_pipeline.pl\n@@ -49,9 +49,7 @@ for my $testname (@tests)\n \t\tmy $expected;\n \t\tmy $result;\n \n-\t\t# Hack to allow TESTDIR=. during parallel tap tests\n-\t\tmy $inputdir = \"$ENV{'TESTDIR'}/src/test/modules/libpq_pipeline\";\n-\t\t$inputdir = \"$ENV{'TESTDIR'}\" if ! -e $inputdir;\n+\t\tmy $inputdir = \"$ENV{'TESTDIR'}/tmp_check\";\n \t\t$expected = slurp_file_eval(\"$inputdir/traces/$testname.trace\");\n \t\tnext unless $expected ne \"\";\n \t\t$result = slurp_file_eval($traceout);\ndiff --git a/src/test/perl/PostgreSQL/Test/Utils.pm b/src/test/perl/PostgreSQL/Test/Utils.pm\nindex 57fcb240898..5429de41ed5 100644\n--- a/src/test/perl/PostgreSQL/Test/Utils.pm\n+++ b/src/test/perl/PostgreSQL/Test/Utils.pm\n@@ -184,19 +184,21 @@ INIT\n \t# test may still fail, but it's more likely to report useful facts.\n \t$SIG{PIPE} = 'IGNORE';\n \n-\t# Determine output directories, and create them. The base path is the\n-\t# TESTDIR environment variable, which is normally set by the invoking\n-\t# Makefile.\n-\t$tmp_check = $ENV{TESTDIR} ? 
\"$ENV{TESTDIR}/tmp_check\" : \"tmp_check\";\n+\tmy $test_dir = File::Spec->rel2abs(dirname($0));\n+\tmy $test_name = basename($0);\n+\t$test_name =~ s/\\.[^.]+$//;\n+\n+\t# Determine output directories, and create them.\n+\t# TODO: set TESTDIR and srcdir?\n+\t$tmp_check = \"$test_dir/tmp_check\";\n \t$log_path = \"$tmp_check/log\";\n+\t$ENV{TESTDIR} = $test_dir;\n \n \tmkdir $tmp_check;\n \tmkdir $log_path;\n \n \t# Open the test log file, whose name depends on the test name.\n-\t$test_logfile = basename($0);\n-\t$test_logfile =~ s/\\.[^.]+$//;\n-\t$test_logfile = \"$log_path/regress_log_$test_logfile\";\n+\t$test_logfile = \"$log_path/regress_log_$test_name\";\n \topen my $testlog, '>', $test_logfile\n \t or die \"could not open STDOUT to logfile \\\"$test_logfile\\\": $!\";\n \ndiff --git a/src/tools/msvc/vcregress.pl b/src/tools/msvc/vcregress.pl\nindex fdb6f44eded..d7794c5766a 100644\n--- a/src/tools/msvc/vcregress.pl\n+++ b/src/tools/msvc/vcregress.pl\n@@ -261,10 +261,8 @@ sub tap_check\n \t$ENV{PG_REGRESS} = \"$topdir/$Config/pg_regress/pg_regress\";\n \t$ENV{REGRESS_SHLIB} = \"$topdir/src/test/regress/regress.dll\";\n \n-\t$ENV{TESTDIR} = \"$dir\";\n \tmy $module = basename $dir;\n-\t# add the module build dir as the second element in the PATH\n-\t$ENV{PATH} =~ s!;!;$topdir/$Config/$module;!;\n+\t$ENV{VCREGRESS_MODE} = $Config;\n \n \tprint \"============================================================\\n\";\n \tprint \"Checking @args\\n\";\n\n\n",
"msg_date": "Wed, 16 Feb 2022 00:12:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn February 15, 2022 10:12:36 PM PST, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>On Sun, Feb 13, 2022 at 01:53:19PM -0800, Andres Freund wrote:\n>> Hi,\n>> \n>> On 2022-02-13 15:31:20 -0600, Justin Pryzby wrote:\n>> > Oh - I suppose you're right. That's an unfortunate consequence of running a\n>> > single prove instance without chdir.\n>> \n>> I don't think it's chdir that's relevant (that changes into the source\n>> directory after all). It's the TESTDIR environment variable.\n>> \n>> I was thinking that we should make Utils.pm's INIT block responsible for\n>> figuring out both the directory a test should run in and the log location,\n>> instead having that in vcregress.pl and Makefile.global.in. Mostly because\n>> doing it in the latter means we can't start tests with different TESTDIR and\n>> working dir at the same time.\n>> \n>> If instead we pass the location of the top-level build and top-level source\n>> directory from vcregress.pl / Makefile.global, the tap test infrastructure can\n>> figure out that stuff themselves, on a per-test basis.\n>> \n>> For msvc builds we probably would need to pass in some information that allow\n>> Utils.pm to set up PATH appropriately. I think that might just require knowing\n>> that a) msvc build system is used b) Release vs Debug.\n>\n>I'm totally unsure if this resembles what you're thinking of, and I'm surprised\n>I got it working so easily. But it gets the tap test output in separate dirs,\n>and CI is passing for everyone (windows failed because I injected a \"false\" to\n>force it to upload artifacts).\n>\n>https://github.com/justinpryzby/postgres/runs/5211673291\n\nYes, that's along the lines I was thinking. I only checked it on my phone, so it certainly isn't a careful look...\n\nI think this should be discussed in a separate thread, for visibility.\n\nFWIW, I'd like to additionally add marker files in INIT and remove them in END. And create files signaling success and failure in END. 
That would allow automated selection of log files of failed tests...\n\nAndres\n\nRegards,\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Tue, 15 Feb 2022 22:42:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
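The marker-file idea above can be shown in miniature with shell (file names and layout are invented here; the real thing would live in Utils.pm's INIT/END blocks, as proposed):

```shell
# Each test drops a "running" marker when it starts and replaces it with
# an ok/fail marker at exit. An artifact step can then select only the
# log directories of tests that left a test.fail marker behind.
workdir=$(mktemp -d)
run_test() {
    dir="$workdir/tmp_check/$1"
    shift
    mkdir -p "$dir"
    touch "$dir/test.running"
    if "$@" >"$dir/test.log" 2>&1; then
        mv "$dir/test.running" "$dir/test.ok"
    else
        mv "$dir/test.running" "$dir/test.fail"
    fi
}
run_test 001_passing true
run_test 002_failing false
ls "$workdir"/tmp_check/*/test.fail   # lists only the failed test's dir
```

A lingering `test.running` marker would additionally identify tests that crashed before reaching their END block.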
{
"msg_contents": "On 13.02.22 09:30, Andres Freund wrote:\n>> BTW, docs can be built in parallel, and CI is using BUILD_JOBS: 4.\n>> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n>> /usr/bin/xmllint --path . --noout --valid postgres.sgml\n>> /usr/bin/xsltproc --path . --stringparam pg.version '15devel' stylesheet.xsl postgres.sgml\n>> /usr/bin/xsltproc --path . --stringparam pg.version '15devel' stylesheet-man.xsl postgres.sgml\n> Sure, it just doesn't make a difference:\n> \n> make -j48 -C doc/src/sgml/ maintainer-clean && time make -j48 -C doc/src/sgml/\n> real\t0m34.626s\n> user\t0m34.342s\n> sys\t0m0.298s\n> \n> make -j48 -C doc/src/sgml/ maintainer-clean && time make -C doc/src/sgml/\n> \n> real\t0m34.780s\n> user\t0m34.494s\n> sys\t0m0.285s\n\nNote that the default target in doc/src/sgml/ is \"html\", not \"all\". If \nyou build \"all\", you build \"html\" plus \"man\", which can be run in \nparallel. (It's only two jobs, of course.) If you're more ambitious, \nyou could also run the PDF builds.\n\n\n\n",
"msg_date": "Wed, 16 Feb 2022 13:00:24 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Have you tried to use the yet-to-be-released ccache with MSVC ?\n\nAlso, do you know about msbuild /outputResultsCache ?\nWhen I tried that, it gave a bunch of error.\n\nhttps://cirrus-ci.com/task/5697497241747456\n\n|[16:35:13.605] 1>c:\\cirrus\\pgsql.sln.metaproj : error : MSB4252: Project \"c:\\cirrus\\pgsql.sln\" with global properties [c:\\cirrus\\pgsql.sln]\n|[16:35:13.615] c:\\cirrus\\pgsql.sln.metaproj : error : (TrackFileAccess=false; CLToolExe=clcache.exe) [c:\\cirrus\\pgsql.sln]\n|[16:35:13.615] c:\\cirrus\\pgsql.sln.metaproj : error : is building project \"c:\\cirrus\\initdb.vcxproj\" with global properties [c:\\cirrus\\pgsql.sln]\n|[16:35:13.615] c:\\cirrus\\pgsql.sln.metaproj : error : (TrackFileAccess=false; CLToolExe=clcache.exe; BuildingSolutionFile=true; CurrentSolutionConfigurationContents=<SolutionConfiguration> [c:\\cirrus\\pgsql.sln]\n|[16:35:13.615] c:\\cirrus\\pgsql.sln.metaproj : error : <ProjectConfiguration Project=\"{1BD4D6DB-9B78-4A46-B2A7-04508802E281}\" AbsolutePath=\"c:\\cirrus\\initdb.vcxproj\" BuildProjectInSolution=\"True\">Debug|x64</ProjectConfiguration> [c:\\cirrus\\pgsql.sln]\n|...\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : <ProjectConfiguration Project=\"{7E9336CA-5E94-4D99-9D34-BF65ED440A6F}\" AbsolutePath=\"c:\\cirrus\\euc2004_sjis2004.vcxproj\" BuildProjectInSolution=\"True\">Debug|x64</ProjectConfiguration> [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : </SolutionConfiguration>; SolutionDir=c:\\cirrus\\; SolutionExt=.sln; SolutionFileName=pgsql.sln; SolutionName=pgsql; SolutionPath=c:\\cirrus\\pgsql.sln; Configuration=Debug; Platform=x64) [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : with the (default) target(s) but the build result for the built project is not in the engine cache. 
In isolated builds this could mean one of the following: [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : - the reference was called with a target which is not specified in the ProjectReferenceTargets item in project \"c:\\cirrus\\pgsql.sln\" [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : - the reference was called with global properties that do not match the static graph inferred nodes [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : - the reference was not explicitly specified as a ProjectReference item in project \"c:\\cirrus\\pgsql.sln\" [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] c:\\cirrus\\pgsql.sln.metaproj : error : [c:\\cirrus\\pgsql.sln]\n|[16:35:14.518] \n|[16:35:14.518] 0 Warning(s)\n|[16:35:14.518] 149 Error(s)\n\nDid you ever try to use clcache (or others) ?\n\nWhen I tried, it refused to cache because of our debug settings\n(DebugInformationFormat) - which seem to be enabled even in release mode.\n\nI wonder if that'll be an issue for ccache, too. 
I think that line may need to\nbe conditional on debug mode.\n\nhttps://cirrus-ci.com/task/4808554103177216\n\n|[17:14:28.765] C:\\ProgramData\\chocolatey\\lib\\clcache\\clcache\\clcache.py Expanded commandline '['/c', '/Isrc/include', '/Isrc/include/port/win32', '/Isrc/include/port/win32_msvc', '/Ic:/openssl/1.1/\\\\include', '/Zi', '/nologo', '/W3', '/WX-', '/diagnostics:column', '/Ox', '/D', 'WIN32', '/D', '_WINDOWS', '/D', '__WINDOWS__', '/D', '__WIN32__', '/D', 'WIN32_STACK_RLIMIT=4194304', '/D', '_CRT_SECURE_NO_DEPRECATE', '/D', '_CRT_NONSTDC_NO_DEPRECATE', '/D', 'FRONTEND', '/D', '_MBCS', '/GF', '/Gm-', '/EHsc', '/MD', '/GS', '/fp:precise', '/Zc:wchar_t', '/Zc:forScope', '/Zc:inline', '/Fo.\\\\Release\\\\libpgcommon\\\\', '/Fd.\\\\Release\\\\libpgcommon\\\\libpgcommon.pdb', '/external:W3', '/Gd', '/TC', '/wd4018', '/wd4244', '/wd4273', '/wd4101', '/wd4102', '/wd4090', '/wd4267', '/FC', '/errorReport:queue', '/MP', 'src/common/archive.c', 'src/common/base64.c', 'src/common/checksum_helper.c', 'src/common/config_info.c', 'src/common/controldata_utils.c', 'src/common/cryptohash_openssl.c', 'src/common/d2s.c', 'src/common/encnames.c', 'src/common/exec.c', 'src/common/f2s.c', 'src/common/fe_memutils.c', 'src/common/file_perm.c', 'src/common/file_utils.c', 'src/common/hashfn.c', 'src/common/hmac_openssl.c', 'src/common/ip.c', 'src/common/jsonapi.c', 'src/common/keywords.c', 'src/common/kwlookup.c', 'src/common/link-canary.c', 'src/common/logging.c', 'src/common/md5_common.c', 'src/common/pg_get_line.c', 'src/common/pg_lzcompress.c', 'src/common/pg_prng.c', 'src/common/pgfnames.c', 'src/common/protocol_openssl.c', 'src/common/psprintf.c', 'src/common/relpath.c', 'src/common/restricted_token.c', 'src/common/rmtree.c', 'src/common/saslprep.c', 'src/common/scram-common.c', 'src/common/sprompt.c', 'src/common/string.c', 'src/common/stringinfo.c', 'src/common/unicode_norm.c', 'src/common/username.c', 'src/common/wait_error.c', 'src/common/wchar.c']'\n|[17:14:28.765] 
C:\\ProgramData\\chocolatey\\lib\\clcache\\clcache\\clcache.py Cannot cache invocation as ['/c', '/Isrc/include', '/Isrc/include/port/win32', '/Isrc/include/port/win32_msvc', '/Ic:/openssl/1.1/\\\\include', '/Zi', '/nologo', '/W3', '/WX-', '/diagnostics:column', '/Ox', '/D', 'WIN32', '/D', '_WINDOWS', '/D', '__WINDOWS__', '/D', '__WIN32__', '/D', 'WIN32_STACK_RLIMIT=4194304', '/D', '_CRT_SECURE_NO_DEPRECATE', '/D', '_CRT_NONSTDC_NO_DEPRECATE', '/D', 'FRONTEND', '/D', '_MBCS', '/GF', '/Gm-', '/EHsc', '/MD', '/GS', '/fp:precise', '/Zc:wchar_t', '/Zc:forScope', '/Zc:inline', '/Fo.\\\\Release\\\\libpgcommon\\\\', '/Fd.\\\\Release\\\\libpgcommon\\\\libpgcommon.pdb', '/external:W3', '/Gd', '/TC', '/wd4018', '/wd4244', '/wd4273', '/wd4101', '/wd4102', '/wd4090', '/wd4267', '/FC', '/errorReport:queue', '/MP', 'src/common/archive.c', 'src/common/base64.c', 'src/common/checksum_helper.c', 'src/common/config_info.c', 'src/common/controldata_utils.c', 'src/common/cryptohash_openssl.c', 'src/common/d2s.c', 'src/common/encnames.c', 'src/common/exec.c', 'src/common/f2s.c', 'src/common/fe_memutils.c', 'src/common/file_perm.c', 'src/common/file_utils.c', 'src/common/hashfn.c', 'src/common/hmac_openssl.c', 'src/common/ip.c', 'src/common/jsonapi.c', 'src/common/keywords.c', 'src/common/kwlookup.c', 'src/common/link-canary.c', 'src/common/logging.c', 'src/common/md5_common.c', 'src/common/pg_get_line.c', 'src/common/pg_lzcompress.c', 'src/common/pg_prng.c', 'src/common/pgfnames.c', 'src/common/protocol_openssl.c', 'src/common/psprintf.c', 'src/common/relpath.c', 'src/common/restricted_token.c', 'src/common/rmtree.c', 'src/common/saslprep.c', 'src/common/scram-common.c', 'src/common/sprompt.c', 'src/common/string.c', 'src/common/stringinfo.c', 'src/common/unicode_norm.c', 'src/common/username.c', 'src/common/wait_error.c', 'src/common/wchar.c']: external debug information (/Zi) is not supported\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 20 Feb 2022 13:36:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-20 13:36:55 -0600, Justin Pryzby wrote:\n> Have you tried to use the yet-to-be-released ccache with MSVC ?\n\nYes, it doesn't work, because it requires cl.exe to be used in a specific way\n(only a single input file, specific output file naming). Which would require a\ndecent amount of changes to src/tools/msvc. I think it's more realistic with\nmeson etc.\n\n\n> Also, do you know about msbuild /outputResultsCache ?\n\nI don't think it's really usable for what we need. But it's hard to tell.\n\n\n> Did you ever try to use clcache (or others) ?\n> \n> When I tried, it refused to cache because of our debug settings\n> (DebugInformationFormat) - which seem to be enabled even in release mode.\n\n> I wonder if that'll be an issue for ccache, too. I think that line may need to\n> be conditional on debug mode.\n\nThat's relatively easily solvable by using a different debug format IIRC (/Z7\nor such).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 20 Feb 2022 12:47:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "On Sun, Feb 20, 2022 at 12:47:31PM -0800, Andres Freund wrote:\n> > Did you ever try to use clcache (or others) ?\n> > \n> > When I tried, it refused to cache because of our debug settings\n> > (DebugInformationFormat) - which seem to be enabled even in release mode.\n> \n> > I wonder if that'll be an issue for ccache, too. I think that line may need to\n> > be conditional on debug mode.\n> \n> That's relatively easily solvable by using a different debug format IIRC (/Z7\n> or such).\n\nYes. I got that working for CI by overriding with a value from the environment.\nhttps://cirrus-ci.com/task/6191974075072512\n\nThis is right after rebasing, so it doesn't save anything, but normally cuts\nbuild time to 90sec, which isn't impressive, but it's something.\n\nBTW, I think it's worth compiling the windows build with optimizations (as I\ndid here). At least with all the tap tests, this pays for itself. I suppose\nyou don't want to use a Release build, but optimizations could be enabled by\nan(other) environment variable.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 20 Feb 2022 14:57:33 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "This is the other half of my CI patches, which are unrelated to the TAP ones on\nthe other thread.",
"msg_date": "Fri, 25 Feb 2022 20:51:16 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\n> Subject: [PATCH 2/7] cirrus: upload changed html docs as artifacts\n\nI still think the determination of the base branch needs to be resolved before\nthis can be considered.\n\n\n> Always run doc build; to allow them to be shown in cfbot, they should not be\n> skipped if the linux build fails.\n> \n> This could be done on the client side (cfbot). One advantage of doing it here\n> is that fewer docs are uploaded - many patches won't upload docs at all.\n\nImo this stuff is largely independent from the commit subject....\n\n\n> XXX: if this is run in the same task, the configure flags should probably be\n> consistent ?\n\nWhat do you mean?\n\n\n> Subject: [PATCH 3/7] s!build docs as a separate task..\n\nCould you reorder this to earlier, then we can merge it before resolving the\nbranch issues. And we don't waffle on the CompilerWarnings dependency.\n\n\n> I believe this'll automatically show up as a separate \"column\" on the cfbot\n> page.\n\nYup.\n\n\n> +# Verify docs can be built, and upload changed docs as artifacts\n> +task:\n> + name: HTML docs\n> +\n> + env:\n> + CPUS: 1\n> +\n> + only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*(docs|html).*'\n> +\n> + container:\n> + image: $CONTAINER_REPO/linux_debian_bullseye_ci:latest\n> + cpu: $CPUS\n> +\n\nhow about using something like (the syntax might be slightly off)\n skip: !changesInclude('doc/**')\nto avoid running it for the many pushes where no docs are changed?\n\n\n> + sysinfo_script: |\n> + id\n> + uname -a\n> + cat /proc/cmdline\n> + ulimit -a -H && ulimit -a -S\n> + export\n> +\n> + git remote -v\n> + git branch -a\n> + git remote add postgres https://github.com/postgres/postgres\n> + time git fetch -v postgres master\n> + git log -1 postgres/master\n> + git diff --name-only postgres/master..\n\nHardly \"sysinfo\"?\n\n\n> Subject: [PATCH 4/7] wip: cirrus: code coverage\n> \n> XXX: lcov should be installed in the OS 
image\n\nFWIW, you can open a PR in https://github.com/anarazel/pg-vm-images/\nboth the debian docker and VM have their packages installed\nvia scripts/linux_debian_install_deps.sh\n\n\n> From 226699150e3e224198fc297689add21bece51c4b Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 9 Jan 2022 18:25:02 -0600\n> Subject: [PATCH 5/7] cirrus/vcregress: test modules/contrib with\n> NO_INSTALLCHECK=1\n\nI don't want to commit the vcregress.pl part myself. But if you split off I'm\nhappy to push the --temp-config bits.\n\n\n> From 08933bcd93d4f57ad73ab6df2f1627b93e61b459 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 16 Jan 2022 12:51:13 -0600\n> Subject: [PATCH 6/7] wip: cirrus/windows: save tmp_install as an artifact..\n> \n> ..to allow users to easily download compiled binaries to try a patch.\n> If they don't have a development environment handy or not familiar with\n> compilation.\n> \n> XXX: maybe this should be conditional or commented out ?\n\nYea, I don't want to do this by default, that's a fair bit of data that very\nlikely nobody will ever access. One can make entire tasks triggered manually,\nbut that'd then require building again :/.\n\n\n\n> From a7d2bba6f51d816412fb645b0d4821c36ee5c400 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 20 Feb 2022 15:01:59 -0600\n> Subject: [PATCH 7/7] wip: cirrus/windows: add compiler_warnings_script\n> \n> I'm not sure how to write this test in windows shell; it's also not easy to\n> write it in posix sh, since windows shell is somehow interpretting && and ||...\n\nYou could put the script in src/tools/ci and call it from the script to avoid\nthe quoting issues.\n\nWould be good to add a comment explaining the fileLoggerParameters1 thing and\na warning that compiler_warnings_script should be the last script.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Feb 2022 17:09:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On 2022-02-26 17:09:08 -0800, Andres Freund wrote:\n> You could put the script in src/tools/ci and call it from the script to avoid\n> the quoting issues.\n\nMight also be a good idea for the bulk of the docs / coverage stuff, even if\nthere are no quoting issues.\n\n\n",
"msg_date": "Sat, 26 Feb 2022 17:11:31 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Feb 26, 2022 at 05:09:08PM -0800, Andres Freund wrote:\n> > XXX: if this is run in the same task, the configure flags should probably be\n> > consistent ?\n> \n> What do you mean?\n\nI mean that commit to run CompilerWarnings unconditionally built docs with\ndifferent flags than the other stuff in that task. If it's going to be a\nseparate task, then that doesn't matter.\n\n> > +# Verify docs can be built, and upload changed docs as artifacts\n> > +task:\n> > + name: HTML docs\n> > +\n> > + env:\n> > + CPUS: 1\n> > +\n> > + only_if: $CIRRUS_CHANGE_MESSAGE !=~ '.*\\nci-os-only:.*' || $CIRRUS_CHANGE_MESSAGE =~ '.*\\nci-os-only:[^\\n]*(docs|html).*'\n> > +\n> > + container:\n> > + image: $CONTAINER_REPO/linux_debian_bullseye_ci:latest\n> > + cpu: $CPUS\n> > +\n> \n> how about using something like (the syntax might be slightly off)\n> skip: !changesInclude('doc/**')\n> to avoid running it for the many pushes where no docs are changed?\n\nThis doesn't do the right thing - I just tried.\nhttps://cirrus-ci.org/guide/writing-tasks/#environment-variables\n| changesInclude function can be very useful for skipping some tasks when no changes to sources have been made since the last successful Cirrus CI build.\n\nThat means it will not normally rebuild docs (and then this still requires\nresolving the \"base branch\").\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 26 Feb 2022 20:43:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-26 20:43:52 -0600, Justin Pryzby wrote:\n> This doesn't do the right thing - I just tried.\n> https://cirrus-ci.org/guide/writing-tasks/#environment-variables\n> | changesInclude function can be very useful for skipping some tasks when no changes to sources have been made since the last successful Cirrus CI build.\n\n> That means it will not normally rebuild docs (and then this still requires\n> resolving the \"base branch\").\n\nWhy would we want to rebuild docs if they're the same as in the last build for\nthe same branch? For cfbot purposes each commit is independent from the prior\ncommit, so it should rebuild it every time if the CF entry has changes to the\ndocs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Feb 2022 18:50:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Feb 26, 2022 at 06:50:00PM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-26 20:43:52 -0600, Justin Pryzby wrote:\n> > This doesn't do the right thing - I just tried.\n> > https://cirrus-ci.org/guide/writing-tasks/#environment-variables\n> > | changesInclude function can be very useful for skipping some tasks when no changes to sources have been made since the last successful Cirrus CI build.\n> \n> > That means it will not normally rebuild docs (and then this still requires\n> > resolving the \"base branch\").\n> \n> Why would we want to rebuild docs if they're the same as in the last build for\n> the same branch? For cfbot purposes each commit is independent from the prior\n> commit, so it should rebuild it every time if the CF entry has changes to the\n> docs.\n\nI did git commit --amend --no-edit and repushed to github to trigger a new CI\nrun, and it did this: https://github.com/justinpryzby/postgres/runs/5347878714\n\nThis is in a branch with changes to doc. I wasn't intending it to skip\nbuilding docs on this branch just because the same, changed docs were\npreviously built.\n\nWhy wouldn't the docs be built following the same logic as the rest of the\nsources ? If someone renames or removes an xref target, shouldn't CI fail on\nits next run for a patch which tries to reference it ? It would fail on the\nbuildfarm, and I think one major use for the CI is to minimize the post-push\ncleanup cycles.\n\nAre you sure about cfbot ? AIUI cirrus would see that docs didn't change\nrelative to the previous run for branch: commitfest/NN/MMMM.\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 26 Feb 2022 21:10:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-26 21:10:57 -0600, Justin Pryzby wrote:\n> I did git commit --amend --no-edit and repushed to github to trigger a new CI\n> run, and it did this: https://github.com/justinpryzby/postgres/runs/5347878714\n>\n> This is in a branch with changes to doc. I wasn't intending it to skip\n> building docs on this branch just because the same, changed docs were\n> previously built.\n\nBut why not? If nothing in docs/ changes, there's little point? It'd probably\nbe good to check .cirrus.yml and docs/**, to make sure that .cirrus logic\nchanges rerun as well.\n\n\n> Why wouldn't the docs be built following the same logic as the rest of the\n> sources?\n\nTests have a certain rate of spurious failure, so rerunning them makes\nsense. But what do we gain by rebuilding the docs? And especially, what do we\ngain about uploading the docs e.g. in the postgres/postgres repo?\n\n\n> If someone renames or removes an xref target, shouldn't CI fail on its next\n> run for a patch which tries to reference it ?\n\nWhy wouldn't it?\n\n\n> It would fail on the buildfarm, and I think one major use for the CI is to\n> minimize the post-push cleanup cycles.\n\nI personally see it more as access to a \"mini buildfarm\" before patches are\ncommittable, but that's of course compatible.\n\n\n> Are you sure about cfbot ? AIUI cirrus would see that docs didn't change\n> relative to the previous run for branch: commitfest/NN/MMMM.\n\nNot entirely sure, but it's what I remember observing when amending commits\nin a repo using changesInclude. If I were to build a feature like it I'd look\nat the list of files of\n git diff $(git merge-base last_green new_commit)..new_commit\n\nor such. cfbot doesn't commit incrementally but replaces the prior commit, so\nI suspect it'll always be viewed as new. But who knows, shouldn't be hard to\nfigure out.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 26 Feb 2022 20:08:38 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sat, Feb 26, 2022 at 08:08:38PM -0800, Andres Freund wrote:\n> On 2022-02-26 21:10:57 -0600, Justin Pryzby wrote:\n> > If someone renames or removes an xref target, shouldn't CI fail on its next\n> > run for a patch which tries to reference it ?\n> \n> Why wouldn't it?\n\nI suppose you're right - I was thinking that cirrus was checking whether the\n*patch* had changed any matching files, but it probably checks (as it should)\nwhether \"the sources\" have changed.\n\nHmm, it's behaving strangely...if there's a single argument ('docs/**'), then\nit will skip the docs task if I resubmit it after git commit --amend --no-edit.\nBut with multiple args ('.cirrus.yaml', 'docs/**') it reruns it...\nI tried it as skip: !changesInclude() and by adding it to the existing only_if:\n(.. || ..) && changesInclude().\n\n> > Are you sure about cfbot ? AIUI cirrus would see that docs didn't change\n> > relative to the previous run for branch: commitfest/NN/MMMM.\n> \n> Not entirely sure, but it's what I remember observing when ammending commits\n> in a repo using changesInclues. If I were to build a feature like it I'd look\n> at the list of files of\n> git diff $(git merge-base last_green new_commit)..new_commit\n> \n> or such. cfbot doesn't commit incrementally but replaces the prior commit, so\n> I suspect it'll always be viewn as new. But who knows, shouldn't be hard to\n> figure out.\n\nAnyway...\n\nI still think that if \"Build Docs\" is a separate cirrus task, it should rebuild\ndocs on every CI run, even if they haven't changed, for any patch that touches\ndocs/. It'll be confusing if cfbot shows 5 green circles and 4 of them were\nbuilt 1 day ago, and 1 was built 3 weeks ago. 
Docs are the task that runs\nquickest, so I don't think it's worth doing anything special there (especially\nwithout understanding the behavior of changesInclude()).\n\nAlso, to allow users to view the built HTML docs, cfbot would need to 1) keep\ntrack of previous CI runs; and 2) logic to handle \"skipped\" CI runs, to allow\nshowing artifacts from the previous run. If it's not already done, I think the\nfirst half is a good idea on its own. But the 2nd part doesn't seem desirable.\n\nHowever, I realized that we can filter on cfbot with either of these:\n| $CIRRUS_CHANGE_TITLE =~ '^\\[CF...'\n| git log -1 |grep '^Author: Commitfest Bot <cfbot@cputube.org>'\nIf we can assume that cfbot will continue submitting branches as a single\npatch, this resolves the question of a \"base branch\", for cfbot.\n\n(Actually, I'd prefer if it preserved the original patches as separate commits,\nbut that isn't what it does). \n\nThese patches implement that idea, and make \"code coverage\" and \"HTML diffs\"\nstuff only run for cfbot commits. This still needs another round of testing,\nthough.\n\n-- \nJustin\n\nPS. I've just done this. I'm unsure whether to say that it's wonderful or\nterrible. This would certainly be better if each branch preserved the original\nset of patches.\n\n$ git remote add cfbot https://github.com/postgresql-cfbot/postgresql\n$ git fetch cfbot\n$ git branch -a |grep -c cfbot\n5417",
"msg_date": "Mon, 28 Feb 2022 14:58:02 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Mon, Feb 28, 2022 at 02:58:02PM -0600, Justin Pryzby wrote:\n> I still think that if \"Build Docs\" is a separate cirrus task, it should rebuild\n> docs on every CI run, even if they haven't changed, for any patch that touches\n> docs/. It'll be confusing if cfbot shows 5 green circles and 4 of them were\n> built 1 day ago, and 1 was built 3 weeks ago. Docs are the task that runs\n> quickest, so I don't think it's worth doing anything special there (especially\n> without understanding the behavior of changesInclude()).\n> \n> Also, to allow users to view the built HTML docs, cfbot would need to 1) keep\n> track of previous CI runs; and 2) logic to handle \"skipped\" CI runs, to allow\n> showing artifacts from the previous run. If it's not already done, I think the\n> first half is a good idea on its own. But the 2nd part doesn't seem desirable.\n\nMaybe changesInclude() could work if we use this URL (from cirrus'\ndocumentation), which uses the artifacts from the last successful build.\nhttps://api.cirrus-ci.com/v1/artifact/github/justinpryzby/postgres/Documentation/html_docs/html_docs/00-doc.html?branch=citest-cirrus2\n\nThat requires knowing the file being modified, so we'd have to generate an\nindex of changed files - which I've started doing here.\n\n> However, I realized that we can filter on cfbot with either of these:\n> | $CIRRUS_CHANGE_TITLE =~ '^\\[CF...'\n> | git log -1 |grep '^Author: Commitfest Bot <cfbot@cputube.org>'\n> If we can assume that cfbot will continue submitting branches as a single\n> patch, this resolves the question of a \"base branch\", for cfbot.\n\nI don't know what you think of that idea, but I think I want to amend my\nproposal: show HTML and coverage artifacts for HEAD~1, unless set otherwise by\nan environment var. 
Today, that'd do the right thing for cfbot, and also for\nany 1-patch commits.\n\n> These patches implement that idea, and make \"code coverage\" and \"HTML diffs\"\n> stuff only run for cfbot commits. This still needs another round of testing,\n> though.\n\nThe patch was missing a file due to an issue while rebasing - oops.\n\nBTW (regarding the last patch), I just noticed that -Og optimization can cause\nwarnings with gcc-4.8.5-39.el7.x86_64.\n\nbe-fsstubs.c: In function 'be_lo_export':\nbe-fsstubs.c:522:24: warning: 'fd' may be used uninitialized in this function [-Wmaybe-uninitialized]\n if (CloseTransientFile(fd) != 0)\n ^\ntrigger.c: In function 'ExecCallTriggerFunc':\ntrigger.c:2400:2: warning: 'result' may be used uninitialized in this function [-Wmaybe-uninitialized]\n return (HeapTuple) DatumGetPointer(result);\n ^\nxml.c: In function 'xml_pstrdup_and_free':\nxml.c:1205:2: warning: 'result' may be used uninitialized in this function [-Wmaybe-uninitialized]\n return result;\n\n-- \nJustin",
"msg_date": "Wed, 2 Mar 2022 14:50:58 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Sun, Feb 20, 2022 at 12:47:31PM -0800, Andres Freund wrote:\n> On 2022-02-20 13:36:55 -0600, Justin Pryzby wrote:\n> > Have you tried to use the yet-to-be-released ccache with MSVC ?\n> \n> Yes, it doesn't work, because it requires cl.exe to be used in a specific way\n> (only a single input file, specific output file naming). Which would require a\n> decent amount of changes to src/tools/msvc. I think it's more realistic with\n> meson etc.\n\nDid you get to the point that that causes a problem, or did you just realize\nthat it was a limitation that seems to preclude its use ? If so, could you\nsend the branch/commit you had ?\n\nThe error I'm getting when I try to use ccache involves .rst files, which don't\nexist (and which ccache doesn't know how to find or ignore).\nhttps://cirrus-ci.com/task/5441491957972992\n\nI gather this is the difference between \"compiling with MSVC\" and compiling\nwith a visual studio project.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 22:56:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-03 22:56:15 -0600, Justin Pryzby wrote:\n> On Sun, Feb 20, 2022 at 12:47:31PM -0800, Andres Freund wrote:\n> > On 2022-02-20 13:36:55 -0600, Justin Pryzby wrote:\n> > > Have you tried to use the yet-to-be-released ccache with MSVC ?\n> > \n> > Yes, it doesn't work, because it requires cl.exe to be used in a specific way\n> > (only a single input file, specific output file naming). Which would require a\n> > decent amount of changes to src/tools/msvc. I think it's more realistic with\n> > meson etc.\n> \n> Did you get to the point that that causes a problem, or did you just realize\n> that it was a limitation that seems to preclude its use ?\n\nI tried to use it, but saw that no caching was happening, and debugged\nit. Which yielded that it can't be used due to the way output files are\nspecified (and due to multiple files, but that can be prevented with an\nmsbuild parameter).\n\n\n> If so, could you send the branch/commit you had ?\n\nThis was in a local VM, not cirrus. I ended up making it work with the meson\nbuild, after a bit of fiddling. Although bypassing msbuild (by building with\nninja, using cl.exe) is a larger win...\n\n\n> The error I'm getting when I try to use ccache involves .rst files, which don't\n> exist (and which ccache doesn't know how to find or ignore).\n> https://cirrus-ci.com/task/5441491957972992\n\nccache has code to deal with response files. I suspect the problem here is\nrather that ccache expects the compiler as an argument.\n\n\n> I gather this is the difference between \"compiling with MSVC\" and compiling\n> with a visual studio project.\n\nI doubt it's related to that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Mar 2022 17:30:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 05:30:03PM -0800, Andres Freund wrote:\n> I tried to use it, but saw that no caching was happening, and debugged\n> it. Which yielded that it can't be used due to the way output files are\n> specified (and due to multiple files, but that can be prevented with an\n> msbuild parameter).\n\nCould you give a hint about the msbuild param to avoid processing multiple\nfiles with cl.exe? I'm not able to find it...\n\nI don't know about the issue with output filenames ..\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 6 Mar 2022 10:16:54 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-06 10:16:54 -0600, Justin Pryzby wrote:\n> On Fri, Mar 04, 2022 at 05:30:03PM -0800, Andres Freund wrote:\n> > I tried to use it, but saw that no caching was happening, and debugged\n> > it. Which yielded that it can't be used due to the way output files are\n> > specified (and due to multiple files, but that can be prevented with an\n> > msbuild parameter).\n> \n> Could you give a hint about to the msbuild param to avoid processing multiple\n> files with cl.exe? I'm not able to find it...\n\n/p:UseMultiToolTask=true\n\n\n> I don't know about the issue with output filenames ..\n\nI don't remember the precise details anymore, but it boils down to ccache\nrequiring the output filename to be specified, but we only specify the output\ndirectory. Or very similar.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Mar 2022 11:10:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "On Mon, Mar 07, 2022 at 11:10:54AM -0800, Andres Freund wrote:\n> On 2022-03-06 10:16:54 -0600, Justin Pryzby wrote:\n> > On Fri, Mar 04, 2022 at 05:30:03PM -0800, Andres Freund wrote:\n> > > I tried to use it, but saw that no caching was happening, and debugged\n> > > it. Which yielded that it can't be used due to the way output files are\n> > > specified (and due to multiple files, but that can be prevented with an\n> > > msbuild parameter).\n> > \n> > Could you give a hint about to the msbuild param to avoid processing multiple\n> > files with cl.exe? I'm not able to find it...\n> \n> /p:UseMultiToolTask=true\n\nWow - thanks ;)\n\n> > I don't know about the issue with output filenames ..\n> \n> I don't remember the precise details anymore, but it boils down to ccache\n> requiring the output filename to be specified, but we only specify the output\n> directory. Or very similar.\n\nThere's already a problem report and PR for this.\nI didn't test it, but I hope it'll be fixed in their next minor release.\n\nhttps://github.com/ccache/ccache/issues/1018\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Mar 2022 00:34:47 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree (ccache)"
},
{
"msg_contents": "I'm curious what you think of this patch.\n\nIt makes check-world on freebsd over 30% faster - saving 5min.\n\nApparently gcc -Og was added in gcc 4.8 (c. 2013).\n\nOn Wed, Mar 02, 2022 at 02:50:58PM -0600, Justin Pryzby wrote:\n> From d180953d273c221a30c5e9ad8d74b1b4dfc60bd1 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 27 Feb 2022 15:17:50 -0600\n> Subject: [PATCH 7/7] cirrus: compile with -Og..\n> \n> To improve performance of check-world, and improve debugging, without\n> significantly slower builds (they're cached anyway).\n> \n> This makes freebsd check-world run in 8.5 minutes rather than 15 minutes.\n> ---\n> .cirrus.yml | 12 +++++++-----\n> src/tools/msvc/MSBuildProject.pm | 4 ++--\n> 2 files changed, 9 insertions(+), 7 deletions(-)\n> \n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 6f05d420c85..8b673bf58cf 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -113,7 +113,7 @@ task:\n> \\\n> CC=\"ccache cc\" \\\n> CXX=\"ccache c++\" \\\n> - CFLAGS=\"-O0 -ggdb\"\n> + CFLAGS=\"-Og -ggdb\"\n> EOF\n> build_script: su postgres -c \"gmake -s -j${BUILD_JOBS} world-bin\"\n> upload_caches: ccache\n> @@ -208,8 +208,8 @@ task:\n> CC=\"ccache gcc\" \\\n> CXX=\"ccache g++\" \\\n> CLANG=\"ccache clang\" \\\n> - CFLAGS=\"-O0 -ggdb\" \\\n> - CXXFLAGS=\"-O0 -ggdb\"\n> + CFLAGS=\"-Og -ggdb\" \\\n> + CXXFLAGS=\"-Og -ggdb\"\n> EOF\n> build_script: su postgres -c \"make -s -j${BUILD_JOBS} world-bin\"\n> upload_caches: ccache\n> @@ -329,8 +329,8 @@ task:\n> CC=\"ccache cc\" \\\n> CXX=\"ccache c++\" \\\n> CLANG=\"ccache ${brewpath}/llvm/bin/ccache\" \\\n> - CFLAGS=\"-O0 -ggdb\" \\\n> - CXXFLAGS=\"-O0 -ggdb\" \\\n> + CFLAGS=\"-Og -ggdb\" \\\n> + CXXFLAGS=\"-Og -ggdb\" \\\n> \\\n> LLVM_CONFIG=${brewpath}/llvm/bin/llvm-config \\\n> PYTHON=python3\n> @@ -383,6 +383,8 @@ task:\n> # -fileLoggerParameters1: write to msbuild.warn.log.\n> MSBFLAGS: -m -verbosity:minimal \"-consoleLoggerParameters:Summary;ForceNoAlign\" 
/p:TrackFileAccess=false -nologo -fileLoggerParameters1:warningsonly;logfile=msbuild.warn.log\n> \n> + MSBUILD_OPTIMIZE: MaxSpeed\n> +\n> # If tests hang forever, cirrus eventually times out. In that case log\n> # output etc is not uploaded, making the problem hard to debug. Of course\n> # tests internally should have shorter timeouts, but that's proven to not\n> diff --git a/src/tools/msvc/MSBuildProject.pm b/src/tools/msvc/MSBuildProject.pm\n> index 5e312d232e9..05e0c41eb5c 100644\n> --- a/src/tools/msvc/MSBuildProject.pm\n> +++ b/src/tools/msvc/MSBuildProject.pm\n> @@ -85,7 +85,7 @@ EOF\n> \t\t$f, 'Debug',\n> \t\t{\n> \t\t\tdefs => \"_DEBUG;DEBUG=1\",\n> -\t\t\topt => 'Disabled',\n> +\t\t\topt => $ENV{MSBUILD_OPTIMIZE} || 'Disabled',\n> \t\t\tstrpool => 'false',\n> \t\t\truntime => 'MultiThreadedDebugDLL'\n> \t\t});\n> @@ -94,7 +94,7 @@ EOF\n> \t\t'Release',\n> \t\t{\n> \t\t\tdefs => \"\",\n> -\t\t\topt => 'Full',\n> +\t\t\topt => $ENV{MSBUILD_OPTIMIZE} || 'Full',\n> \t\t\tstrpool => 'true',\n> \t\t\truntime => 'MultiThreadedDLL'\n> \t\t});\n> -- \n> 2.17.1\n> \n\n\n",
"msg_date": "Wed, 9 Mar 2022 11:47:23 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-09 11:47:23 -0600, Justin Pryzby wrote:\n> I'm curious what you think of this patch.\n> \n> It makes check-world on freebsd over 30% faster - saving 5min.\n\nThat's nice! While -Og makes interactive debugging noticeably harder IME, it's\nnot likely to be a large enough difference just for backtraces etc.\n\nI'm far less convinced that using \"MaxSpeed\" for the msvc build is a good\nidea. I've looked at one or two backtraces of optimized msvc builds and\nbacktraces were quite a bit worse - and they're not great to start with. What\nwas the win there?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Mar 2022 10:12:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Wed, Mar 09, 2022 at 10:12:54AM -0800, Andres Freund wrote:\n> On 2022-03-09 11:47:23 -0600, Justin Pryzby wrote:\n> > I'm curious what you think of this patch.\n> > \n> > It makes check-world on freebsd over 30% faster - saving 5min.\n> \n> That's nice! While -Og makes interactive debugging noticeably harder IME, it's\n> not likely to be a large enough difference just for backtraces etc.\n\nYeah. gcc(1) claims that -Og can improve debugging.\n\nI should've mentioned that this seems to mitigate the performance effect of\n--coverage on linux, too.\n\n> I'm far less convinced that using \"MaxSpeed\" for the msvc build is a good\n> idea. I've looked at one or two backtraces of optimized msvc builds and\n> backtraces were quite a bit worse - and they're not great to start with. What\n> was the win there?\n\nDid you compare FULL optimization or \"favor speed/size\" or \"default\"\noptimization ?\n\nIt's worth trading some build time (especially with a compiler cache) for\ntest time (especially with alltaptests). But I didn't check backtraces, and I\ndidn't compare the various optimization options. The argument may not be as\nstrong for windows, since it has no build cache (and it has no -Og). We'd save\na bit more when also running the other tap tests.\n\nCI runs are probably not very consistent, but I've just run\nhttps://cirrus-ci.com/task/5236145167532032\nmaster is the average of 4 patches at the top of cfbot.\n\n / master / patched / change\nsubscription / 197s / 195s / +2s\nrecovery / 234s / 212s / -22s\nbin / 383s / 373s / -10s\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 9 Mar 2022 14:37:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 9:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Wed, Mar 09, 2022 at 10:12:54AM -0800, Andres Freund wrote:\n> > On 2022-03-09 11:47:23 -0600, Justin Pryzby wrote:\n> > > I'm curious what you think of this patch.\n> > >\n> > > It makes check-world on freebsd over 30% faster - saving 5min.\n> >\n> > That's nice! While -Og makes interactive debugging noticeably harder IME, it's\n> > not likely to be a large enough difference just for backtraces etc.\n>\n> Yeah. gcc(1) claims that -Og can improve debugging.\n\nWow, I see the effect on Cirrus -- test_world ran in 8:55 instead of\n12:43 when I tried (terrible absolute times, but fantastic\nimprovement!). Hmm, on my local FreeBSD 13 box I saw 5:07 -> 5:03\nwith this change. My working theory had been that there is something\nbad happening in the I/O stack under concurrency making FreeBSD on\nCirrus/GCP very slow (ie patterns to stall on slow cloud I/O waits,\nsomething I hope to dig into when post-freeze round tuits present\nthemselves), but that doesn't gel with this huge improvement from\nnoodling with optimiser details, and I don't know why I don't see\nsomething similar locally. I'm confused.\n\nJust BTW it's kinda funny that we say -ggdb for macOS and then we use\nlldb to debug cores in cores_backtrace.sh. I suppose it would be more\ncorrect to say -glldb, but doubt it matters much...\n\n\n",
"msg_date": "Thu, 10 Mar 2022 15:43:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-10 15:43:16 +1300, Thomas Munro wrote:\n> Wow, I see the effect on Cirrus -- test_world ran in 8:55 instead of\n> 12:43 when I tried (terrible absolute times, but fantastic\n> improvement!). Hmm, on my local FreeBSD 13 box I saw 5:07 -> 5:03\n> with this change. My working theory had been that there is something\n> bad happening in the I/O stack under concurrency making FreeBSD on\n> Cirrus/GCP very slow (ie patterns to stall on slow cloud I/O waits,\n> something I hope to dig into when post-freeze round tuits present\n> themselves), but that doesn't gel with this huge improvement from\n> noodling with optimiser details, and I don't know why I don't see\n> something similar locally. I'm confused.\n\nThe \"terrible IO wait\" thing was before we reduced the number of CPUs and\nconcurrent jobs. It makes sense to me that with just two CPUs we're CPU bound,\nin which case -Og obviously can make a difference.\n\n\n> Just BTW it's kinda funny that we say -ggdb for macOS and then we use\n> lldb to debug cores in cores_backtrace.sh. I suppose it would be more\n> correct to say -glldb, but doubt it matters much...\n\nYea. I used -ggdb because I didn't know -glldb existed :). And there's also\nthe advantage that -ggdb works both with gcc and clang, whereas -glldb doesn't\nseem to be known to gcc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Mar 2022 19:33:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 4:33 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-10 15:43:16 +1300, Thomas Munro wrote:\n> > I'm confused.\n>\n> The \"terrible IO wait\" thing was before we reduced the number of CPUs and\n> concurrent jobs. It makes sense to me that with just two CPUs we're CPU bound,\n> in which case -Og obviously can make a difference.\n\nOh, duh, yeah, that makes sense when you put it that way and\nconsidering the CPU graph:\n\n-O0: https://cirrus-ci.com/task/4578631912521728\n-Og: https://cirrus-ci.com/task/5239486182326272\n\n\n",
"msg_date": "Thu, 10 Mar 2022 16:54:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-02 14:50:58 -0600, Justin Pryzby wrote:\n> From 883edaa653bcf7f1a2369d8edf46eaaac1ba0ba2 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Mon, 17 Jan 2022 00:53:04 -0600\n> Subject: [PATCH 1/7] cirrus: include hints how to install OS packages..\n> \n> This is useful for patches during development, but once a feature is merged,\n> new libraries should be added to the OS image files, rather than installed\n> during every CI run forever into the future.\n> ---\n> .cirrus.yml | 16 +++++++++++++---\n> 1 file changed, 13 insertions(+), 3 deletions(-)\n> \n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index d10b0a82f9f..1b7c36283e9 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -73,10 +73,11 @@ task:\n> chown -R postgres:postgres .\n> mkdir -p ${CCACHE_DIR}\n> chown -R postgres:postgres ${CCACHE_DIR}\n> - setup_cores_script: |\n> + setup_os_script: |\n> mkdir -m 770 /tmp/cores\n> chown root:postgres /tmp/cores\n> sysctl kern.corefile='/tmp/cores/%N.%P.core'\n> + #pkg install -y ...\n\nWould you mind if I split this into setup_core_files_script and\nsetup_additional_packages_script:?\n\n\n> + # The commit that this branch is rebased on. There's no easy way to find this.\n> + # This does the right thing for cfbot, which always squishes all patches into a single commit.\n> + # And does the right thing for any 1-patch commits.\n> + # Patches series manually submitted to cirrus may benefit from setting this\n> + # to the number of patches in the series (or directly to the commit the series was rebased on).\n> + BASE_COMMIT: HEAD~1\n\nStill think that something using\n git merge-base $CIRRUS_LAST_GREEN_CHANGE HEAD\n\nmight be better. With a bit of error handling for unset\nCIRRUS_LAST_GREEN_CHANGE and for git not seeing enough history for\nCIRRUS_LAST_GREEN_CHANGE.\n\n\n> + apt-get update\n> + apt-get -y install lcov\n\nFWIW, I just added that to the install script used for the container / VM\nimage build. 
So it'll be pre-installed once that completes.\n\nhttps://cirrus-ci.com/build/5818788821073920\n\n\n> From feceea4413b84f478e6a0888cdfab4be1c80767a Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Sun, 20 Feb 2022 15:01:59 -0600\n> Subject: [PATCH 6/7] wip: cirrus/windows: add compiler_warnings_script\n> \n> I'm not sure how to write this test in windows shell; it's also not easy to\n> write it in posix sh, since windows shell is somehow interpretting && and ||...\n\nThat comment isn't accurate anymore now that it's in an external script,\nright?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 10 Mar 2022 12:50:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 12:50:15PM -0800, Andres Freund wrote:\n> > - setup_cores_script: |\n> > + setup_os_script: |\n> > mkdir -m 770 /tmp/cores\n> > chown root:postgres /tmp/cores\n> > sysctl kern.corefile='/tmp/cores/%N.%P.core'\n> > + #pkg install -y ...\n> \n> Would you mind if I split this into setup_core_files_script and\n> setup_additional_packages_script:?\n\nThat's fine. FYI I'm also planning on using choco install --no-progress.\nI could resend my latest patches shortly.\n\n> > Subject: [PATCH 6/7] wip: cirrus/windows: add compiler_warnings_script\n> > \n> > I'm not sure how to write this test in windows shell; it's also not easy to\n> > write it in posix sh, since windows shell is somehow interpretting && and ||...\n> \n> That comment isn't accurate anymore now that it's in an external script,\n> right?\n\nNo, it is accurate. What I mean is that it's also hard to write it as a\n1-liner using posix sh, since the || (and &&) seemed to be interpreted by\ncmd.exe and needed escaping - gross.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 10 Mar 2022 15:00:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "See attached, or at\nhttps://github.com/justinpryzby/postgres/runs/5503079878",
"msg_date": "Thu, 10 Mar 2022 16:06:12 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nPushed 0001, 0002. Only change I made was to add\nDEBIAN_FRONTEND=noninteractive to the apt-get invocations, because some\npackages will fail / complain verbosely if there's no interactive prompt\nduring installation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 18 Mar 2022 15:45:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 03:45:03PM -0700, Andres Freund wrote:\n> Pushed 0001, 0002. Only change I made was to add\n\nThanks - is there any reason not to do the MSVC compiler warnings one, too ?\n\nI see that it'll warn about issues with at least 3 patches (including one of\nyours).\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 22 Mar 2022 23:14:23 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-22 23:14:23 -0500, Justin Pryzby wrote:\n> On Fri, Mar 18, 2022 at 03:45:03PM -0700, Andres Freund wrote:\n> > Pushed 0001, 0002. Only change I made was to add\n> \n> Thanks - is there any reason not to do the MSVC compiler warnings one, too ?\n\nPurely a lack of round tuits. IIRC I thought there was a small aspect that\nstill needed some polishing, but didn't have time to tackle it yet.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 23 Mar 2022 08:54:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 9:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> -Og\n\nAdding this to CXXFLAGS caused a torrent of warnings from g++ about\nLLVM headers, which I also see on my local system for LLVM 11 and LLVM\n14:\n\n[19:47:11.047] /usr/lib/llvm-11/include/llvm/ADT/Twine.h: In member\nfunction ‘llvm::CallInst*\nllvm::IRBuilderBase::CreateCall(llvm::FunctionType*, llvm::Value*,\nllvm::ArrayRef<llvm::Value*>, const llvm::Twine&, llvm::MDNode*)’:\n[19:47:11.047] /usr/lib/llvm-11/include/llvm/ADT/Twine.h:229:16:\nwarning: ‘<anonymous>.llvm::Twine::LHS.llvm::Twine::Child::twine’ may\nbe used uninitialized in this function [-Wmaybe-uninitialized]\n[19:47:11.047] 229 | !LHS.twine->isBinary())\n[19:47:11.047] | ~~~~^~~~~\n\nhttps://cirrus-ci.com/task/5597526098182144?logs=build#L6\n\nNot sure who to complain to about that...\n\n\n",
"msg_date": "Thu, 24 Mar 2022 09:52:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 09:52:39AM +1300, Thomas Munro wrote:\n> On Thu, Mar 10, 2022 at 9:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > -Og\n> \n> Adding this to CXXFLAGS caused a torrent of warnings from g++ about\n> LLVM headers, which I also see on my local system for LLVM 11 and LLVM\n> 14:\n\nYes, I mentioned seeing some other warnings here.\n20220302205058.GJ15744@telsasoft.com\n\nI think warnings were cleaned up with -O0/1/2 but not with -Og.\n\n\n",
"msg_date": "Wed, 23 Mar 2022 16:01:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "Hi,\n\nNow that zstd is used, enable it in CI. I plan to commit this shortly, unless\nsomebody sees a reason not to do so.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 30 Mar 2022 08:50:17 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Adding CI to our tree"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 5:01 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> BTW, on those two OSes there are some messages like this each time a\n> submake dumps its output to the log:\n>\n> [03:36:16.591] fcntl(): Bad file descriptor\n>\n> It seems worth putting up with these compared to the alternatives of\n> either not using -j, not using -Otarget and having the output of\n> parallel tests all mashed up and unreadable (that still happen\n> sometimes but it's unlikely, because the submakes write() whole output\n> chunks at infrequent intervals), or redirecting to a file so you can't\n> see the realtime test output on the main CI page (not so fun, you have\n> to wait until it's finished and view it as an 'artifact'). I tried to\n> write a patch for GNU make to fix that[1], let's see if something\n> happens.\n>\n> [1] https://savannah.gnu.org/bugs/?52922\n\nFor the record, GNU make finally fixed this problem (though Andres\nfound a workaround anyway), but in any case it won't be in the\nrelevant package repos before we switch over to Meson/Ninja :-)\n\n\n",
"msg_date": "Wed, 7 Sep 2022 08:15:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Adding CI to our tree"
}
] |
[
{
"msg_contents": "I got the error message in the subject and was unsure how to continue; I\ndidn't see any hits for the error message on the mailing list, and it was\nhard to determine from the context around the error in the\nMakefile.global.in about the best way to solve the problem.\n\nThis patch amends the error message to give help to the user.\n\nI ran \"make check\" at the top level with this patch enabled and 209 tests\npassed. I also ran \"make check\" in src/test/ssl without TAP enabled and\nverified that I got the new error message. I also verified that compiling\nwith --enable-tap-tests fixes the error in question.\n\nThis patch does not include regression tests.\n\nAnother way to fix this issue could be to put the exact text of the error\nmessage in the documentation or the wiki, with instructions on how to fix\nit - the first thing I did was punch the error message into Google, if a\nmatch for the error message came up with instructions on how to fix it,\nthat would also help.\n\nThis is the first patch that I've submitted to Postgres, I believe that\nI've followed the guidelines on the patch submission page, but please let\nme know if I missed anything.\n\nKevin\n\n--\nKevin Burke\nphone: 925-271-7005 | kevin.burke.dev",
"msg_date": "Fri, 1 Oct 2021 17:09:17 -0700",
"msg_from": "Kevin Burke <kevin@burke.dev>",
"msg_from_op": true,
"msg_subject": "Better context for \"TAP tests not enabled\" error message"
},
{
"msg_contents": "> On 2 Oct 2021, at 02:09, Kevin Burke <kevin@burke.dev> wrote:\n\n> This patch amends the error message to give help to the user.\n\nI think this makes sense, and is in line with Rachels patch [0] a few days ago\nthat I plan on pushing; small error hints which wont get in the way of\nestablished developers, but which can help new developers onboard onto our tree\nand processes around it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n[0] https://postgr.es/m/CADJcwiVL20955HCNzDqz9BEDr6A77pz6-nac5sbZVvhAEMijLg@mail.gmail.com\n\n",
"msg_date": "Sat, 2 Oct 2021 20:15:18 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Better context for \"TAP tests not enabled\" error message"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 2 Oct 2021, at 02:09, Kevin Burke <kevin@burke.dev> wrote:\n>> This patch amends the error message to give help to the user.\n\n> I think this makes sense,\n\n+1. I'd take out the \"Maybe\"; the diagnosis seems pretty certain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 14:19:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Better context for \"TAP tests not enabled\" error message"
},
{
"msg_contents": "Updated patch that removes the \"Maybe\"\n\n\n--\nKevin Burke\nphone: 925-271-7005 | kevin.burke.dev\n\n\nOn Sat, Oct 2, 2021 at 11:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 2 Oct 2021, at 02:09, Kevin Burke <kevin@burke.dev> wrote:\n> >> This patch amends the error message to give help to the user.\n>\n> > I think this makes sense,\n>\n> +1. I'd take out the \"Maybe\"; the diagnosis seems pretty certain.\n>\n> regards, tom lane\n>",
"msg_date": "Sat, 2 Oct 2021 15:38:55 -0700",
"msg_from": "Kevin Burke <kevin@burke.dev>",
"msg_from_op": true,
"msg_subject": "Re: Better context for \"TAP tests not enabled\" error message"
},
{
"msg_contents": "> On 3 Oct 2021, at 00:39, Kevin Burke <kevin@burke.dev> wrote:\n\n> Updated patch that removes the \"Maybe\" \n\nThanks, I’ll take care of this tomorrow along with Rachels patch.\n\n./daniel\n\n",
"msg_date": "Sun, 3 Oct 2021 01:27:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Better context for \"TAP tests not enabled\" error message"
},
{
"msg_contents": "> On 3 Oct 2021, at 01:27, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 3 Oct 2021, at 00:39, Kevin Burke <kevin@burke.dev> wrote:\n> \n>> Updated patch that removes the \"Maybe\" \n> \n> Thanks, I’ll take care of this tomorrow along with Rachels patch.\n\nI was off-by-one on date, but it's been pushed to master now. Thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 11:49:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Better context for \"TAP tests not enabled\" error message"
}
] |
[
{
"msg_contents": "When I click the mail archive link\n(https://www.postgresql.org/message-id/flat/72a0d590d6ba06f242d75c2e641820ec@postgrespro.ru)\nin CF app web page of this entry:\nhttps://commitfest.postgresql.org/34/3194/\n\nI got:\n\nError 503 Backend fetch failed\n\nBackend fetch failed\nGuru Meditation:\n\nXID: 83477623\n\nVarnish cache server\n\nIs there anything wrong with CF app?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sat, 02 Oct 2021 11:33:47 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Problem with CF app?"
},
{
"msg_contents": "> When I click the mail archive link\n> (https://www.postgresql.org/message-id/flat/72a0d590d6ba06f242d75c2e641820ec@postgrespro.ru)\n> in CF app web page of this entry:\n> https://commitfest.postgresql.org/34/3194/\n> \n> I got:\n> \n> Error 503 Backend fetch failed\n> \n> Backend fetch failed\n> Guru Meditation:\n> \n> XID: 83477623\n> \n> Varnish cache server\n> \n> Is there anything wrong with CF app?\n\nI can access the mail archive link now. It looks like a temporary\nfailure.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 03 Oct 2021 07:04:43 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Problem with CF app?"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 17212\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 14.0\nOperating system: Ubuntu 20.04\nDescription: \n\nWhen pg_amcheck runs against a database containing temporary tables:\r\necho \"\r\nCREATE TEMP TABLE t(i int);\r\nCREATE INDEX t_idx ON t(i);\r\nINSERT INTO t VALUES (1);\r\n\r\nSELECT pg_sleep(5);\r\n\" | psql &\r\npg_amcheck --install-missing -a --heapallindexed --parent-check\n--rootdescend --progress || echo \"FAIL\"\r\n\r\nit fails with the following errors:\r\nbtree index \"regression.pg_temp_4.t_idx\":0%)\r\n ERROR: cannot access temporary tables of other sessions\r\n DETAIL: Index \"t_idx\" is associated with temporary relation.\r\nheap table \"regression.pg_temp_4.t\":\r\n ERROR: cannot access temporary tables of other sessions\r\n779/779 relations (100%), 2806/2806 pages (100%)\r\nFAIL\r\n\r\nAlthough you can add --exclude-relation=*.pg_temp*.*, this behaviour differs\nfrom the behaviour of pg_dump and friends, which skip such relations\nsilently.",
"msg_date": "Sat, 02 Oct 2021 11:00:02 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Sat, Oct 2, 2021 at 4:49 AM PG Bug reporting form\n<noreply@postgresql.org> wrote:\n> Although you can add --exclude-relation=*.pg_temp*.*, this behaviour differs\n> from the behaviour of pg_dump and friends, which skip such relations\n> silently.\n\nI agree -- this behavior is a bug.\n\nCan you propose a fix, Mark?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 2 Oct 2021 10:32:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "> On Oct 2, 2021, at 10:32 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Sat, Oct 2, 2021 at 4:49 AM PG Bug reporting form\n> <noreply@postgresql.org> wrote:\n>> Although you can add --exclude-relation=*.pg_temp*.*, this behaviour differs\n>> from the behaviour of pg_dump and friends, which skip such relations\n>> silently.\n> \n> I agree -- this behavior is a bug.\n> \n> Can you propose a fix, Mark?\n\nThe attached patch includes a test case for this, which shows the problems against the current pg_amcheck.c, and a new version of pg_amcheck.c which fixes the bug. Could you review it?\n\nThanks for bringing this to my attention.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Oct 2021 10:04:19 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "> On Oct 3, 2021, at 10:04 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>> On Oct 2, 2021, at 10:32 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n>> \n>> On Sat, Oct 2, 2021 at 4:49 AM PG Bug reporting form\n>> <noreply@postgresql.org> wrote:\n>>> Although you can add --exclude-relation=*.pg_temp*.*, this behaviour differs\n>>> from the behaviour of pg_dump and friends, which skip such relations\n>>> silently.\n>> \n>> I agree -- this behavior is a bug.\n>> \n>> Can you propose a fix, Mark?\n> \n> The attached patch includes a test case for this, which shows the problems against the current pg_amcheck.c, and a new version of pg_amcheck.c which fixes the bug. Could you review it?\n> \n> Thanks for bringing this to my attention.\n\nReposting to pgsql-hackers in preparation for making a commitfest entry.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Oct 2021 15:20:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Hello Mark,\n\n04.10.2021 01:20, Mark Dilger wrote:\n> The attached patch includes a test case for this, which shows the problems against the current pg_amcheck.c, and a new version of pg_amcheck.c which fixes the bug. Could you review it?\n>\n> Thanks for bringing this to my attention.\nThere is another issue, that maybe should be discussed separately (or\nthis thread could be renamed to \"... on checking specific relations\"),\nbut the solution could be similar to that.\npg_amcheck also fails on checking invalid indexes, that could be created\nlegitimately by the CREATE INDEX CONCURRENTLY command.\nFor example, consider the following script:\npsql -c \"CREATE TABLE t(i numeric); INSERT INTO t VALUES\n(generate_series(1, 10000000));\"\npsql -c \"CREATE INDEX CONCURRENTLY t_idx ON t(i);\" &\npg_amcheck -a --install-missing --heapallindexed --rootdescend\n--progress || echo \"FAIL\"\n\npg_amcheck fails with:\nbtree index \"regression.public.t_idx\":\n    ERROR:  cannot check index \"t_idx\"\n    DETAIL:  Index is not valid.\n781/781 relations (100%), 2806/2806 pages (100%)\nFAIL\n\nWhen an index is created without CONCURRENTLY, it runs successfully.\n\nBesides that, it seems that pg_amcheck produces a deadlock in such a case:\n2021-10-04 11:23:38.584 MSK [1451296] ERROR:  deadlock detected\n2021-10-04 11:23:38.584 MSK [1451296] DETAIL:  Process 1451296 waits for\nShareLock on virtual transaction 5/542; blocked by process 1451314.\n    Process 1451314 waits for ShareLock on relation 16385 of database\n16384; blocked by process 1451296.\n    Process 1451296: CREATE INDEX CONCURRENTLY t_idx ON t(i);\n    Process 1451314: SELECT * FROM\n\"pg_catalog\".bt_index_parent_check(index := '16390'::regclass,\nheapallindexed := true, rootdescend := true)\n2021-10-04 11:23:38.584 MSK [1451296] HINT:  See server log for query\ndetails.\n2021-10-04 11:23:38.584 MSK [1451296] STATEMENT:  CREATE INDEX\nCONCURRENTLY t_idx ON t(i);\n\nI think that the deadlock is yet another issue, as invalid indexes could\nappear in other circumstances too.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 4 Oct 2021 12:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 2:00 AM, Alexander Lakhin <exclusion@gmail.com> wrote:\n\nThank you, Alexander, for these bug reports.\n\n> There is another issue, that maybe should be discussed separately (or\n> this thread could be renamed to \"... on checking specific relations\"),\n> but the solution could be similar to that.\n> pg_amcheck also fails on checking invalid indexes, that could be created\n> legitimately by the CREATE INDEX CONCURRENTLY command.\n\nI believe this is a bug in amcheck's btree checking functions. Peter, can you take a look?\n\n> For example, consider the following script:\n> psql -c \"CREATE TABLE t(i numeric); INSERT INTO t VALUES\n> (generate_series(1, 10000000));\"\n> psql -c \"CREATE INDEX CONCURRENTLY t_idx ON t(i);\" &\n> pg_amcheck -a --install-missing --heapallindexed --rootdescend\n> --progress || echo \"FAIL\"\n> \n> pg_amcheck fails with:\n> btree index \"regression.public.t_idx\":\n> ERROR: cannot check index \"t_idx\"\n> DETAIL: Index is not valid.\n> 781/781 relations (100%), 2806/2806 pages (100%)\n> FAIL\n\nYes, I can reproduce this following your steps. (It's always appreciated to have steps to reproduce.)\n\nI can also get this failure without pg_amcheck, going directly to the btree checking code. 
Having already built the table as you prescribe:\n\namcheck % psql -c \"CREATE INDEX CONCURRENTLY t_idx ON t(i);\" & sleep 0.1 && psql -c \"SELECT * FROM pg_catalog.bt_index_parent_check(index := 't_idx'::regclass, heapallindexed := true, rootdescend := true)\" \n[1] 9553\nERROR: deadlock detected\nDETAIL: Process 9555 waits for ShareLock on virtual transaction 5/11; blocked by process 9558.\nProcess 9558 waits for ShareLock on relation 16406 of database 16384; blocked by process 9555.\nHINT: See server log for query details.\nERROR: cannot check index \"t_idx\"\nDETAIL: Index is not valid.\n[1] + exit 1 psql -c \"CREATE INDEX CONCURRENTLY t_idx ON t(i);\"\n\nIf Peter agrees that this is not pg_amcheck specific, then we should start a new thread to avoid confusing the commitfest tickets for these two items.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 08:10:28 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 8:10 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > There is another issue, that maybe should be discussed separately (or\n> > this thread could be renamed to \"... on checking specific relations\"),\n> > but the solution could be similar to that.\n> > pg_amcheck also fails on checking invalid indexes, that could be created\n> > legitimately by the CREATE INDEX CONCURRENTLY command.\n>\n> I believe this is a bug in amcheck's btree checking functions. Peter, can you take a look?\n\nWhy do you say that? verify_nbtree.c will throw an error when called\nwith an invalid index -- which is what we actually see here. Obviously\nthat is the intended behavior, and always has been. This hasn't been a\nproblem before now, probably because the sample verification query in\nthe docs (under bt_index_check()) accounts for this directly.\n\nWhy shouldn't we expect pg_amcheck to do the same thing, at the SQL\nlevel? It's practically the same thing as the temp table issue.\nIndeed, verify_nbtree.c will throw an error on a temp table (at least\nif it's from another session).\n\n> I can also get this failure without pg_amcheck, going directly to the btree checking code. Having already built the table as you prescribe:\n\n> ERROR: deadlock detected\n> DETAIL: Process 9555 waits for ShareLock on virtual transaction 5/11; blocked by process 9558.\n> Process 9558 waits for ShareLock on relation 16406 of database 16384; blocked by process 9555.\n> HINT: See server log for query details.\n> ERROR: cannot check index \"t_idx\"\n> DETAIL: Index is not valid.\n\nI think that the deadlock is just another symptom of the same problem.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 10:58:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 2:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> There is another issue, that maybe should be discussed separately (or\n> this thread could be renamed to \"... on checking specific relations\"),\n> but the solution could be similar to that.\n\nThanks for the report!\n\nI wonder if verify_heapam.c does the right thing with unlogged tables\nwhen verification runs on a standby -- a brief glance at the code\nleaves me with the impression that it's not handled there. Note that\nverify_nbtree.c initially got it wrong. The issue was fixed by bugfix\ncommit 6754fe65. Before then nbtree verification could throw a nasty\nlow-level smgr error, just because we had an unlogged table in hot\nstandby mode.\n\nNote that we deliberately skip indexes when this happens (we don't\nerror out), unlike the temp buffers (actually temp table) case. This\nseems like the right set of behaviors. We really don't want to have to\nthrow an \"invalid object type\" style error just because verification\nruns during recovery. Plus it just seems logical to assume that\nunlogged indexes/tables don't have storage when we're in hot standby\nmode, and so must simply have nothing for us to verify.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 13:37:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 10:58 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Mon, Oct 4, 2021 at 8:10 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> There is another issue, that maybe should be discussed separately (or\n>>> this thread could be renamed to \"... on checking specific relations\"),\n>>> but the solution could be similar to that.\n>>> pg_amcheck also fails on checking invalid indexes, that could be created\n>>> legitimately by the CREATE INDEX CONCURRENTLY command.\n>> \n>> I believe this is a bug in amcheck's btree checking functions. Peter, can you take a look?\n> \n> Why do you say that?\n\nBecause REINDEX CONCURRENTLY and the bt_index_parent_check() function seem to have lock upgrade hazards that are unrelated to pg_amcheck.\n\n> This hasn't been a\n> problem before now, probably because the sample verification query in\n> the docs (under bt_index_check()) accounts for this directly.\n\nIt doesn't say anything about deadlocks, but yes, it mentions errors will be raised unless the caller filters out indexes that are invalid or not ready.\n\n\nOn to pg_amcheck's behavior....\n\nI see no evidence in the OP's complaint that pg_amcheck is misbehaving. It launches a worker to check each relation, prints for the user's benefit any errors those checks raise, and finally returns 0 if they all pass and 2 otherwise. Since not all relations could be checked, 2 is returned. Returning 0 would be misleading, as it implies everything was checked and passed, and it can't honestly say that. The return value 2 does not mean that anything failed. It means that not all checks passed. When a 2 is returned, the user is expected to read the output and decide what, if anything, they want to do about it. In this case, the user might decide to wait until the reindex finishes and check again, or they might decide they don't care.\n\nIt is true that pg_amcheck is calling bt_index_parent_check() on an invalid index, but so what? 
If it chose not to do so, it would still need to print a message about the index being unavailable for checking, and it would still have to return 2. It can't return 0, and it is unhelpful to leave the user in the dark about the fact that not all indexes are in the right state for checking. So it would still print the same error message and still return 2.\n\nI think this bug report is really a feature request. The OP appears to want an option to toggle on/off the printing of such information, perhaps with not printing it as the default. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 15:36:06 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 3:36 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >> I believe this is a bug in amcheck's btree checking functions. Peter, can you take a look?\n> >\n> > Why do you say that?\n>\n> Because REINDEX CONCURRENTLY and the bt_index_parent_check() function seem to have lock upgrade hazards that are unrelated to pg_amcheck.\n\nThe problem with that argument is that the bt_index_parent_check()\nfunction isn't doing anything particularly special, apart from\ndropping the lock. That has been its behavior for many years now.\n\n> On to pg_amcheck's behavior....\n>\n> I see no evidence in the OP's complaint that pg_amcheck is misbehaving. It launches a worker to check each relation, prints for the user's benefit any errors those checks raise, and finally returns 0 if they all pass and 2 otherwise. Since not all relations could be checked, 2 is returned. Returning 0 would be misleading, as it implies everything was checked and passed, and it can't honestly say that. The return value 2 does not mean that anything failed. It means that not all checks passed. When a 2 is returned, the user is expected to read the output and decide what, if anything, they want to do about it. In this case, the user might decide to wait until the reindex finishes and check again, or they might decide they don't care.\n>\n> It is true that pg_amcheck is calling bt_index_parent_check() on an invalid index, but so what? If it chose not to do so, it would still need to print a message about the index being unavailable for checking, and it would still have to return 2.\n\nWhy would it have to print such a message? You seem to be presenting\nthis as if there is some authoritative, precise, relevant definition\nof \"the relations that pg_amcheck sees\". 
But to me the relevant\ndetails look arbitrary at best.\n\n> It can't return 0, and it is unhelpful to leave the user in the dark about the fact that not all indexes are in the right state for checking.\n\nWhy is that unhelpful? More to the point, *why* would this alternative\nbehavior constitute \"leaving the user in the dark\"?\n\nWhat about the case where the pg_class entry isn't visible to our MVCC\nsnapshot? Why is \"skipping\" such a relation not just as unhelpful?\n\n> I think this bug report is really a feature request. The OP appears to want an option to toggle on/off the printing of such information, perhaps with not printing it as the default.\n\nAnd I don't understand why you think that clearly-accidental\nimplementation details (really just bugs) should be treated as\naxiomatic truths about how pg_amcheck must work. Should we now \"fix\"\npg_dump so that it matches pg_amcheck?\n\nAll of the underlying errors are cases that were clearly intended to\ncatch user error -- every single one. But apparently pg_amcheck is\nincapable of error, by definition. Like HAL 9000.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 16:10:05 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 4:10 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> And I don't understand why you think that clearly-accidental\n> implementation details (really just bugs) should be treated as\n> axiomatic truths about how pg_amcheck must work. Should we now \"fix\"\n> pg_dump so that it matches pg_amcheck?\n> \n> All of the underlying errors are cases that were clearly intended to\n> catch user error -- every single one. But apparently pg_amcheck is\n> incapable of error, by definition. Like HAL 9000.\n\nOn the contrary, I got all the way finished writing a patch to have pg_amcheck do as you suggest before it dawned on me to wonder if that was the right way to go. I certainly don't assume pg_amcheck is correct by definition. I already posted a patch for the temporary tables bug upthread having never argued that it was anything other than a bug. I also wrote a patch for verify_heapam to fix the problem with unlogged tables on standbys, and was developing a test for that, when I got your email. I'm not arguing against that being a bug, either. Hopefully, I can get that properly tested and post it before too long.\n\nI am concerned about giving the user the false impression that an index (or table) was checked when it was not. I don't see the logic in\n\n pg_amcheck -i idx1 -i idx2 -i idx3\n\nskipping all three indexes and then reporting success. What if the user launches the pg_amcheck command precisely because they see error messages in the logs during a long running reindex command, and are curious if the index so generated is corrupt. You can't assume the user knows the index is still being reindexed. If the last message logged was some time ago, they might assume the process has finished. So something other than a silent success is needed to let them know what is going on.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 16:28:53 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 4:28 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> pg_amcheck -i idx1 -i idx2 -i idx3\n\nI forgot to mention: There's a continuum between `pg_amcheck -a` which checks everything in all databases of the cluster, and `pg_amcheck -i just_one_index`. There are any number of combinations of object names, schema names, database names, and patterns over the same, which select anything from an empty set to a huge set of things to check. I'm trying to keep the behavior the same for all of those, and that's why I'm trying to avoid having `pg_amcheck -a` silently skip indexes that are unavailable for checking while having `pg_amcheck -i just_one_index` give a report about the index. I wouldn't know where to draw the line between reporting the issue and not, and I doubt whatever line I choose will be intuitive to users.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 16:34:53 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 4:28 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I am concerned about giving the user the false impression that an index (or table) was checked when it was not. I don't see the logic in\n>\n> pg_amcheck -i idx1 -i idx2 -i idx3\n>\n> skipping all three indexes and then reporting success.\n\nThis is the first time that anybody mentioned the -i option on the\nthread. I read your previous remarks as making a very broad statement,\nabout every single issue.\n\nAnyway, the issue with -i doesn't seem like it changes that much. Why\nnot just behave as if there is no such \"visible\" index? That's the\nsame condition, for all practical purposes. If that approach doesn't\nseem good enough, then the error message can be refined to make the\nuser aware of the specific issue.\n\n> What if the user launches the pg_amcheck command precisely because they see error messages in the logs during a long running reindex command, and are curious if the index so generated is corrupt.\n\nI'm guessing that you meant REINDEX CONCURRENTLY.\n\nSince you're talking about the case where it has an error, the whole\nindex build must have failed. So the user would get exactly what\nthey'd expect -- verification of the original index, without any\nhindrance from the new/failed index.\n\n(Thinks for a moment...)\n\nActually, I think that we'd only verify the original index, even\nbefore the error with CONCURRENTLY (though I've not checked that point\nmyself).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 16:45:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 4:45 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I'm guessing that you meant REINDEX CONCURRENTLY.\n\nYes.\n\n> Since you're talking about the case where it has an error\n\nSorry, I realized after hitting <send> that you might take it that way, but I mean the logs generally, not just postgres logs. If somebody runs \"reindex concurrently\" on all tables at midnight every night, and they see one morning in the (non-postgres) logs from the midnight hour weird error messages about their RAID controller, they may well want to check all their indexes to see if any of them were corrupted. This is a totally made-up example, but the idea that a user might want to check their indexes, tables, or both owing to errors of some kind is not far-fetched. \n\n> , the whole\n> index build must have failed. So the user would get exactly what\n> they'd expect -- verification of the original index, without any\n> hindrance from the new/failed index.\n\nRight, in that case, but not if hardware errors corrupted the index, and generated logs, without happening to trip up the reindex concurrently statement itself.\n\n> (Thinks for a moment...)\n> \n> Actually, I think that we'd only verify the original index, even\n> before the error with CONCURRENTLY (though I've not checked that point\n> myself).\n\nTo get back on track, let me say that I'm not taking the position that what pg_amcheck currently does is necessarily correct, but just that I'd like to be careful about what we change, if anything. 
There are three things that seem to irritate people:\n\n1) A non-zero exit code means \"not everything was checked and passed\" rather than \"at least one thing is definitely corrupt\".\n\n2) Skipping of indexes is reported to the user with the word 'ERROR' in the report rather than, say, 'NOTICE'.\n\n3) Deadlocks can occur\n\nI have resisted changing #1 on the theory that `pg_amcheck --all && ./post_all_checks_pass.sh` should only run the post_all_checks_pass.sh if indeed all checks have passed, and I'm interpreting skipping an index check as being contrary to that. But maybe that's wrong of me. I don't know. There is already sloppiness between the time that pg_amcheck resolves which database relations are matched by --all, --table, --index, etc. and the time that all the checks are started, and again between that time and when the last one is complete. Database objects could be created or dropped during those spans of time, in which case --all doesn't have quite so well defined a meaning. But the user running pg_amcheck might also *know* that they aren't running any such DDL, and therefore expect --all to actually result in everything being checked.\n\nI find it strange that I should do anything about #2 in pg_amcheck, since it's the function in verify_nbtree that phrases the situation as an error. But I suppose I can just ignore that and have it print as a notice. I'm genuinely not trying to give you grief here -- I simply don't like that pg_amcheck is adding commentary atop what the checking functions are doing. I see a clean division between what pg_amcheck is doing and what amcheck is doing, and this feels to me to put that on the wrong side of the divide. 
If refusing to check the index because it is not in the requisite state is a notice, then why wouldn't verify_nbtree raise it as one and return early rather than raising an error?\n\nI also find it strange that #3 is being attributed to pg_amcheck's choice of how to call the checking function, because I can't think of any other function where we require the SQL caller to do anything like what you are requiring here in order to prevent deadlocks, and also because the docs for the functions don't say that a deadlock is possible, merely that the function may return with an error. I was totally content to get an error back, since errors are how the verify_nbtree functions communicate everything else, and the handler for those functions is already prepared to deal with the error messages so returned. But it clearly annoys you that pg_amcheck is doing this, so I'll go forward with the patch that I already have written which does otherwise.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 17:32:54 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 5:32 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Sorry, I realized after hitting <send> that you might take it that way, but I mean the logs generally, not just postgres logs. If somebody runs \"reindex concurrently\" on all tables at midnight every night, and they see one morning in the (non-postgres) logs from the midnight hour weird error messages about their RAID controller, they may well want to check all their indexes to see if any of them were corrupted.\n\nI don't see what the point of this example is. Why is the REINDEX\nCONCURRENTLY index special here? Presumably the user is using\npg_amcheck with its -i option in this scenario, since you've scoped it\nthat way. Where did they get that index name from? Presumably it's\njust the original familiar index name? How did an error message that's\nnot from the Postgres logs (or something similar) contain any index\nname?\n\nIf the overnight rebuild completed successfully then we'll verify the\nnewly rebuilt smgr relfilenode for the index. It if failed then we'll\njust verify the original. In other words, if we treat the validity of\nindexes as a \"visibility concern\", everything works out just fine.\n\nIf there is an orphaned index (because of the implementation issue\nwith CONCURRENTLY) then it is *definitely* \"corrupt\" -- but not in any\nsense that pg_amcheck ought to concern itself with. Such an orphaned\nindex can never actually be used by anybody. (We should fix this wart\nin the CONCURRENTLY implementation some day.)\n\n> To get back on track, let me say that I'm not taking the position that what pg_amcheck currently does is necessarily correct, but just that I'd like to be careful about what we change, if anything. 
There are three things that seem to irritate people:\n>\n> 1) A non-zero exit code means \"not everything was checked and passed\" rather than \"at least one thing is definitely corrupt\".\n\nRight.\n\n> 2) Skipping of indexes is reported to the user with the word 'ERROR' in the report rather than, say, 'NOTICE'.\n\nRight -- but it's also the specifics of the error. These are errors\nthat only make sense when there was specific human error. Which is\nclearly not the case at all, except perhaps in the narrow -i case.\n\n> 3) Deadlocks can occur\n\nRight.\n\n> I have resisted changing #1 on the theory that `pg_amcheck --all && ./post_all_checks_pass.sh` should only run the post_all_checks_pass.sh if indeed all checks have passed, and I'm interpreting skipping an index check as being contrary to that.\n\nYou're also interpreting it as \"skipping\". This is a subjective\ninterpretation. Which is fair enough - I can see why you'd put it that\nway. But that's not how I see it. Again, consider that pg_dump cares\nabout the \"indisready\" status of indexes, for a variety of reasons.\n\nNow, the pg_dump example doesn't necessarily mean that pg_amcheck\n*must* do the same thing (though it certainly suggests as much). To me\nthe important point is that we are perfectly entitled to define \"the\nindexes that pg_amcheck can see\" in whatever way seems to make most\nsense overall, based on practical considerations.\n\n> But the user running pg_amcheck might also *know* that they aren't running any such DDL, and therefore expect --all to actually result in everything being checked.\n\nThe user would also have to know precisely how the system catalogs\nwork during DDL. They'd have to know that the pg_class entry might\nbecome visible very early on, rather than at the end, in some cases.\nThey'd know all that, but still be surprised by the current pg_amcheck\nbehavior. 
Which is itself not consistent with pg_dump.\n\n> I find it strange that I should do anything about #2 in pg_amcheck, since it's the function in verify_nbtree that phrases the situation as an error.\n\nI don't find it strange. It does that because it *is* an error. There\nis simply no alternative.\n\nThe solution for amcheck is the same as it has always been: just write\nthe SQL query in a way that avoids it entirely.\n\n> I'm genuinely not trying to give you grief here -- I simply don't like that pg_amcheck is adding commentary atop what the checking functions are doing.\n\npg_amcheck would not be adding commentary if this was addressed in the\nway that I have in mind. It would merely be dealing with the issue in\nthe way that the amcheck docs have recommended, for years. The problem\nhere (as I see it) is that pg_amcheck is already adding commentary, by\nnot just doing that.\n\n> I also find it strange that #3 is being attributed to pg_amcheck's choice of how to call the checking function, because I can't think of any other function where we require the SQL caller to do anything like what you are requiring here in order to prevent deadlocks, and also because the docs for the functions don't say that a deadlock is possible, merely that the function may return with an error.\n\nI will need to study the deadlock issue separately.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 18:19:54 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 4, 2021, at 6:19 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I don't see what the point of this example is.\n\nIt doesn't matter.\n\nI am changing pg_amcheck to filter out indexes as you say. Since the btree check should no longer error in these cases, the issue of pg_amcheck exit(2) sorts itself out without further code changes.\n\nI am changing verify_heapam to skip unlogged tables during recovery. In testing, checking such a table results in a simple notice:\n\n NOTICE: cannot verify unlogged relation \"u_tbl\" during recovery, skipping\n\nWhile testing, I also created an index on the unlogged table and checked that index using bt_index_parent_check, and was surprised that checking it using bt_index_parent_check raises an error:\n\n ERROR: cannot acquire lock mode ShareLock on database objects while recovery is in progress\n HINT: Only RowExclusiveLock or less can be acquired on database objects during recovery.\n\nIt doesn't get as far as btree_index_mainfork_expected(). So I am changing pg_amcheck to filter out indexes when pg_is_in_recovery() is true and relpersistence='u'. Does that sound right to you?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 20:19:02 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 8:19 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I am changing pg_amcheck to filter out indexes as you say. Since the btree check should no longer error in these cases, the issue of pg_amcheck exit(2) sorts itself out without further code changes.\n\nCool.\n\n> I am changing verify_heapam to skip unlogged tables during recovery. In testing, checking such a table results in a simple notice:\n>\n> NOTICE: cannot verify unlogged relation \"u_tbl\" during recovery, skipping\n\nThat makes sense to me.\n\n> While testing, I also created an index on the unlogged table and checked that index using bt_index_parent_check, and was surprised that checking it using bt_index_parent_check raises an error:\n>\n> ERROR: cannot acquire lock mode ShareLock on database objects while recovery is in progress\n> HINT: Only RowExclusiveLock or less can be acquired on database objects during recovery.\n\nCalling bt_index_parent_check() in hot standby mode is kind of asking\nfor it to error-out. It requires a ShareLock on the relation, which is\ninherently not possible during recovery. So I don't feel too badly\nabout letting it just happen.\n\n> So I am changing pg_amcheck to filter out indexes when pg_is_in_recovery() is true and relpersistence='u'. Does that sound right to you?\n\nYes, that all sounds right to me.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 20:28:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 7:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> All of the underlying errors are cases that were clearly intended to\n> catch user error -- every single one. But apparently pg_amcheck is\n> incapable of error, by definition. Like HAL 9000.\n\nAfter some thought, I agree with the idea that pg_amcheck ought to\nskip relations that can't be expected to be valid -- which includes\nboth unlogged relations while in recovery, and also invalid indexes\nleft behind by failed index builds. Otherwise it can only find\nnon-problems, which we don't want to do.\n\nBut this comment seems like mockery to me, and I don't like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:41:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 9:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Oct 4, 2021 at 7:10 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > All of the underlying errors are cases that were clearly intended to\n> > catch user error -- every single one. But apparently pg_amcheck is\n> > incapable of error, by definition. Like HAL 9000.\n\n> But this comment seems like mockery to me, and I don't like that.\n\nIt was certainly not a constructive way of getting my point across.\n\nI apologize to Mark.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Oct 2021 09:58:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 5, 2021, at 9:58 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I apologize to Mark.\n\nI took no offense. Actually, I owe you a thank-you for having put so much effort into debating the behavior with me. I think the patch to fix bug #17212 will be better for it.\n\n(And thanks to Robert for the concern.)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:03:44 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 10:03 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I took no offense. Actually, I owe you a thank-you for having put so much effort into debating the behavior with me. I think the patch to fix bug #17212 will be better for it.\n\nGlad that you think so. And, thanks for working on the issue so promptly.\n\nThis was a question of fundamental definitions. Those are often very tricky.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:22:53 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Hello Mark, Peter, Robert,\n05.10.2021 20:22, Peter Geoghegan пишет:\n> On Tue, Oct 5, 2021 at 10:03 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> I took no offense. Actually, I owe you a thank-you for having put so much effort into debating the behavior with me. I think the patch to fix bug #17212 will be better for it.\n> Glad that you think so. And, thanks for working on the issue so promptly.\n>\n> This was a question of fundamental definitions. Those are often very tricky.\nThanks for the discussion and fixing the issues! (I haven't found the\nlatest fix in the thread yet, but I agree with the approach.)\n\nI think that ideally pg_amcheck should not fail on a live database, that\ndoes not contain corrupted data, and should not affect the database\nusage by other users (as it's \"only a check\").\nSo for example, pg_amcheck should run successfully in parallel with\n`make installcheck` and should not cause any of the tests fail. (There\ncould be nuances with, say, volatile functions called by the index\nexpressions, but in general it could be possible.)\nI tried to run the following script:\n(for i in `seq 100`; do echo \"=== iteration $i ===\" >>pg_amcheck.log;\npg_amcheck -a --install-missing --heapallindexed --rootdescend\n--progress >>pg_amcheck.log 2>&1 || echo \"FAIL\" >>pg_amcheck.log; done) &\nmake installcheck\n\nAnd got several deadlocks again (manifested by some tests failing) and\nalso \"ERROR: could not open relation with OID xxxx\" - that could be\nconsidered as a transition state (it is possible without locking), that\ncause pg_amcheck to report an overall error.\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 6 Oct 2021 09:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 11:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> I think that ideally pg_amcheck should not fail on a live database, that\n> does not contain corrupted data, and should not affect the database\n> usage by other users (as it's \"only a check\").\n\nI agree that that's ideal. As you said, one or two narrow exceptions\nmay need to be made -- cases where there is unavoidable though weird\nambiguity (and not a report of true corruption). Overall the user\nshould never see failure from pg_amcheck unless the database is\ncorrupt, or unless things are defined in a pretty odd way, that\ncreates ambiguity. Ordinary DDL certainly doesn't count as unusual\nhere.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Oct 2021 23:21:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Hi, hackers!\n\nWe've looked through the initial patch, and the exclusion of temporary\ntables from pg_amcheck seems the right thing. It is also not a matter\nanyone here disagrees on, so we propose to commit it alone.\nSupplementary things/features might be left for further discussion, but\nrefusing to check temporary tables is the only option IMO.\n\nThe patch applies cleanly and tests succeed. I'd propose to set it as RFC.\n--\nBest regards,\nPavel Borisov, Maxim Orlov\n\nPostgres Professional: http://postgrespro.com\n",
"msg_date": "Wed, 6 Oct 2021 19:14:21 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 8:14 AM, Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> \n> We've looked through the initial patch and the exclusion of temporary tables from pg_amcheck seems the right thing. Also it is not the matter anyone disagrees here, and we propose to commit it alone.\n\nThanks for reviewing!\n\nI expect to post a new version shortly. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 09:25:49 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 9:25 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> Thanks for reviewing!\n>\n> I expect to post a new version shortly.\n\nNot sure how much it matters, but I have some thoughts on the return\nvalue of pg_amcheck. (I'm mostly going into this now because it seems\nrelated to how we discuss these issues generally.)\n\nA return value of 0 cannot be said to indicate that the database is\nnot corrupt; strictly speaking the verification process doesn't\nactually verify anything. The null hypothesis is that the database\nisn't corrupt. pg_amcheck looks for disconfirmatory evidence (evidence\nof corruption) on a best-effort basis. This seems fundamental.\n\nIf this philosophy of science stuff seems too abstract, then I can be\nmore concrete: pg_amcheck doesn't even attempt to verify indexes that\naren't B-Tree indexes. Clearly we cannot be sure that the database\ncontains no corruption when there happens to be even one such index.\nAnd yet the return value from pg_amcheck is still 0 (barring problems\nelsewhere). I think that it'll always be possible to make *some*\nargument like that, even in a world where pg_amcheck + amcheck are\nvery feature complete. As I said, it seems fundamental.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:16:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 10:16 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> A return value of 0 cannot be said to indicate that the database is\n> not corrupt;\n\nNor can a non-zero value be said to indicate that the database is corrupt.\n\nThese invocations will still return a non-zero exit status:\n\n\tpg_amcheck -D no_privs_database\n\tpg_amcheck --index=\"no_such_index\"\n\tpg_amcheck --table=\"somebody_elses_temp_table\"\n\tpg_amcheck --index=\"somebody_elses_temp_index\"\n\nbut these have been modified to no longer do so:\n\n\tpg_amcheck -D my_database_in_recovery --parent-check\n\tpg_amcheck -D my_database_in_recovery --heapallindexed\n\tpg_amcheck --all\n\nPlease compare to:\n\n\tfind /private || echo \"FAIL\"\n\trm /not/my/file || echo \"FAIL\"\n\nI'm not sure how the idea that pg_amcheck should never give back a failure code unless there is corruption got inserted into this thread, but I'm not on board with that as an invariant statement. The differences in the upcoming version are\n\n1) --all no longer means \"all relations\" but rather \"all checkable relations\"\n2) checking options should be automatically downgraded under circumstances where they cannot be applied\n3) unlogged relations during replication are by definition not corrupt\n\nI think #1 and #3 are unsurprising enough that they need no documentation update. #2 is slightly surprising (at least to me) so I updated the docs for it.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:19:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 10:19 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > A return value of 0 cannot be said to indicate that the database is\n> > not corrupt;\n>\n> Nor can a non-zero value be said to indicate that the database is corrupt.\n\nI never said otherwise. I think it's perfectly fine that there are\nmultiple non-zero return values. It's totally unrelated.\n\n> I'm not sure how the idea that pg_amcheck should never give back a failure code unless there is corruption got inserted into this thread, but I'm not on board with that as an invariant statement.\n\nI agree; I'm also not on board with it as an invariant statement.\n\n> The differences in the upcoming version are\n>\n> 1) --all no longer means \"all relations\" but rather \"all checkable relations\"\n\nClearly pg_amcheck never checked all relations, because it never\nchecked indexes that are not B-Tree indexes. I'm pretty sure that I\ncan poke big holes in almost any positivist statement like that with\nlittle effort.\n\n> 2) checking options should be automatically downgraded under circumstances where they cannot be applied\n> 3) unlogged relations during replication are by definition not corrupt\n>\n> I think #1 and #3 are unsurprising enough that they need no documentation update. #2 is slightly surprising (at least to me) so I updated the docs for it.\n\nTo me #2 sounds like a tautology. It could almost be phrased as\n\"pg_amcheck does not check the things that it cannot check\".\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:39:03 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 10:39 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> The differences in the upcoming version are\n>> \n>> 1) --all no longer means \"all relations\" but rather \"all checkable relations\"\n> \n> Clearly pg_amcheck never checked all relations, because it never\n> checked indexes that are not B-Tree indexes. I'm pretty sure that I\n> can poke big holes in almost any positivist statement like that with\n> little effort.\n\nThere is a distinction here that you are (intentionally?) failing to acknowledge. On the one hand, there are relation types that cannot be checked because no checking functions for them exist. (Hash, gin, gist, etc.) On the other hand, there are relations which could be checked but for the current state of the system, or could be checked in some particular way but for the current state of the system. One of those has to do with code that doesn't exist, and the other has to do with the state of the system. I'm only talking about the second.\n\n> \n>> 2) checking options should be automatically downgraded under circumstances where they cannot be applied\n>> 3) unlogged relations during replication are by definition not corrupt\n>> \n>> I think #1 and #3 are unsurprising enough that they need no documentation update. #2 is slightly surprising (at least to me) so I updated the docs for it.\n> \n> To me #2 sounds like a tautology. It could almost be phrased as\n> \"pg_amcheck does not check the things that it cannot check\".\n\nI totally disagree. It is uncomfortable to me that `pg_amcheck --parent-check` will now silently not perform the parent check that was explicitly requested. That reported an error before, and now it just downgrades the check. This is hardly tautological. I'm only willing to post a patch with that change because I can see a practical argument that somebody might run that as a cron job and they don't want the cron job failing when the database happens to go into recovery. 
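To make that concrete, the kind of cron job I am imagining looks\nsomething like this (the schedule, user, and log path are made up\npurely for illustration):\n\n\t# hypothetical crontab entry: nightly verification; cron mails any\n\t# output, so only a failure produces mail for the operator\n\t0 3 * * * postgres pg_amcheck --all --parent-check >/var/log/pg_amcheck.log 2>&1 || echo \"pg_amcheck failed\"\n\nWith the old behavior, that entry starts generating failure mail as\nsoon as the cluster is in recovery, even though nothing is corrupt.\n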
But again, not at all tautological.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:57:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 10:57 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > Clearly pg_amcheck never checked all relations, because it never\n> > checked indexes that are not B-Tree indexes. I'm pretty sure that I\n> > can poke big holes in almost any positivist statement like that with\n> > little effort.\n>\n> There is a distinction here that you are (intentionally?) failing to acknowledge. On the one hand, there are relation types that cannot be checked because no checking functions for them exist. (Hash, gin, gist, etc.) On the other hand, there are relations which could be check but for the current state of the system, or could be checked in some particular way but for the current state of the system. One of those has to do with code that doesn't exist, and the other has to do with the state of the system. I'm only talking about the second.\n\nI specifically acknowledge and reject that distinction. That's my whole point.\n\nYour words were: '--all no longer means \"all relations\" but rather\n\"all checkable relations\"'. But somehow the original clean definition\nof \"--all\" was made no less clean by not including GiST indexes and so\non from the start. You're asking me to believe that it was really\nimplied all along that \"all checkable relations\" didn't include the\nrelations that obviously weren't checkable. You're probably going to\nhave to keep making post-hoc amendments to your original statement\nlike this.\n\nObviously the gap in functionality from non-standard index AMs is far\nmore important than the totally theoretical issue with failed\nCONCURRENTLY indexes. But even if they were equally important, your\nemphasis on the latter would still be arbitrary.\n\n> I totally disagree. 
It is uncomfortable to me that `pg_amcheck --parent-check` will now silently not perform the parent check that was explicitly requested.\n\nBut the whole definition of \"check that was explicitly requested\"\nrelies on your existing understanding of what pg_amcheck is supposed\nto do. That's not actually essential. I don't see it that way, for\nexample.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 11:20:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 1:57 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > To me #2 sounds like a tautology. It could almost be phrased as\n> > \"pg_amcheck does not check the things that it cannot check\".\n>\n> I totally disagree. It is uncomfortable to me that `pg_amcheck --parent-check` will now silently not perform the parent check that was explicitly requested. That reported an error before, and now it just downgrades the check. This is hardly tautological. I'm only willing to post a patch with that change because I can see a practical argument that somebody might run that as a cron job and they don't want the cron job failing when the database happens to go into recovery. But again, not at all tautological.\n\nYeah, I don't think that's OK. -1 from me on making any such change.\nIf I say pg_amcheck --heapallindexed, I expect it to pass\nheapallindexed = true to bt_index_check(). I don't expect it to make a\ndecision internally about whether I really meant it when I said I wanted\n--heapallindexed checking.\n\nAll of the decisions we're talking about here really have to do with\ndetermining the user's intent. I think that if the user says\npg_amcheck --all, there's a good argument that they don't want us to\ncheck unlogged relations on a standby which will never be valid, or\nfailed index builds which need not be valid. But even that is not\nnecessarily true. If the user typed pg_amcheck -i\nsome_index_that_failed_to_build, there is a pretty strong argument\nthat they want us to check that index and maybe fail, not skip\nchecking that index and report success without doing anything. I think\nit's reasonable to accept that unfortunate deviation from the user's\nintent in order to get the benefit of not failing for silly reasons\nwhen, as will normally be the case, somebody just tries to check the\nentire database, or some subset of tables and their corresponding\nindexes. 
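(For concreteness, the skip list I have in mind is roughly the old\nquery from the amcheck documentation -- quoting from memory, so treat\nthe details as approximate rather than gospel:\n\n\tSELECT c.oid, c.relname\n\tFROM pg_index i\n\tJOIN pg_class c ON i.indexrelid = c.oid\n\tJOIN pg_am am ON c.relam = am.oid\n\tWHERE am.amname = 'btree'\n\t  AND c.relpersistence != 't'    -- no temp relations\n\t  AND i.indisready AND i.indisvalid;\n\ni.e. only btree indexes that are ready, valid, and not temporary.)\n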
In those cases the user pretty clearly only wants to check\nthe valid things. So I agree, with some reservations, that excluding\nunlogged relations while in recovery and invalid indexes is probably\nthe thing which is most likely to give the users what they want.\n\nBut how can we possibly say that a user who specifies --heapallindexed\ndoesn't really mean what they said?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 14:32:03 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 11:32 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> All of the decisions we're talking about here really have to do with\n> determining the user's intent. I think that if the user says\n> pg_amcheck --all, there's a good argument that they don't want us to\n> check unlogged relations on a standby which will never be valid, or\n> failed index builds which need not be valid. But even that is not\n> necessarily true. If the user typed pg_amcheck -i\n> some_index_that_failed_to_build, there is a pretty strong argument\n> that they want us to check that index and maybe fail, not skip\n> checking that index and report success without doing anything. I think\n> it's reasonable to accept that unfortunate deviation from the user's\n> intent in order to get the benefit of not failing for silly reasons\n> when, as will normally be the case, somebody just tries to check the\n> entire database, or some subset of tables and their corresponding\n> indexes. In those cases the user pretty clearly only wants to check\n> the valid things. So I agree, with some reservations, that excluding\n> unlogged relations while in recovery and invalid indexes is probably\n> the thing which is most likely to give the users what they want.\n>\n> But how can we possibly say that a user who specifies --heapallindexed\n> doesn't really mean what they said?\n\nI am pretty sure that I agree with you about all these details. We\nneed to tease them apart some more.\n\n--heapallindexed doesn't complicate things for us at all. It changes\nnothing about the locking considerations. It's just an additive thing,\nsome extra checks with the same basic underlying requirements. Maybe\nyou meant to say --parent-check, not --heapallindexed?\n\n--parent-check does present us with the question of what to do in Hot\nStandby mode, where it will surely fail (because it requires a\nrelation level ShareLock, etc). 
But I actually don't think it's\ncomplicated: we must throw an error, because it's fundamentally not\nsomething that will ever work (with any index). Whether the error\ncomes from pg_amcheck or amcheck proper doesn't seem important to me.\n\nI think it's pretty clear that verify_heapam.c (from amcheck proper)\nshould just follow verify_nbtree.c when directly invoked against an\nunlogged index in Hot Standby. That is, it should assume that the\nrelation has no storage, but still \"verify\" it conceptually. Just show\na NOTICE about it. Assume no storage to verify.\n\nFinally, there is the question of how pg_amcheck (not\namcheck proper) deals with unlogged relations in Hot Standby mode.\nThere are two reasonable options: it can either \"verify\" the indexes\n(actually just show those NOTICE messages), or skip them entirely. I\nlean towards the former option, on the grounds that I don't think it\nshould be special-cased. But I don't feel very strongly about it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 11:55:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 11:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am pretty sure that I agree with you about all these details. We\n> need to tease them apart some more.\n\nI think that what I've said boils down to this:\n\n* pg_amcheck shouldn't attempt to verify temp relations, on the\ngrounds that this is fundamentally not useful, and not something that\ncould ever be sensibly interpreted as \"just doing what the user asked\nfor\".\n\n* pg_amcheck calls to bt_index_check()/bt_index_parent_check() must\nonly be made with \"i.indisready AND i.indisvalid\" indexes, just like\nthe old query from the docs. (Actually, the same query also filters\nout temp relations -- which is why I view this issue as almost\nidentical to the first.)\n\nWhy would the user ask for something that fundamentally doesn't make\nany sense? The argument \"that's just what they asked for\" has it\nbackwards, because *not* asking for it is very difficult, while asking\nfor it (which, remember, fundamentally makes no sense) is very easy.\n\n* --parent-check can and should fail in hot standby mode.\n\nThe argument \"that's just what the user asked for\" works perfectly here.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 12:28:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 2:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> --heapallindexed doesn't complicate things for us at all. It changes\n> nothing about the locking considerations. It's just an additive thing,\n> some extra checks with the same basic underlying requirements. Maybe\n> you meant to say --parent-check, not --heapallindexed?\n\nTo me, it doesn't matter which specific option we're talking about. If\nI tell pg_amcheck to pass a certain flag to the underlying functions,\nthen it should do that. If the behavior needs to be changed, it should\nbe changed in those underlying functions, not in pg_amcheck. If we\nstart putting some of the intelligence into amcheck itself, and some\nof it into pg_amcheck, I think it's going to become confusing and in\nfact I think it's going to become unreliable, at least from the user\npoint of view. People will get confused if they run pg_amcheck and get\nsome result (either pass or fail) and then they do the same thing with\namcheck directly and get a different result.\n\n> --parent-check does present us with the question of what to do in Hot\n> Standby mode, where it will surely fail (because it requires a\n> relation level ShareLock, etc). But I actually don't think it's\n> complicated: we must throw an error, because it's fundamentally not\n> something that will ever work (with any index). Whether the error\n> comes from pg_amcheck or amcheck proper doesn't seem important to me.\n\nThat detail, to me, is actually very important.\n\n> I think it's pretty clear that verify_heapam.c (from amcheck proper)\n> should just follow verify_nbtree.c when directly invoked against an\n> unlogged index in Hot Standby. That is, it should assume that the\n> relation has no storage, but still \"verify\" it conceptually. Just show\n> a NOTICE about it. Assume no storage to verify.\n\nI haven't checked the code, but that sounds right. 
I interpret this to\nmean that the different sub-parts of amcheck don't handle this case in\nways that are consistent with each other, and that seems wrong. We\nshould make them consistent.\n\n> Finally, there is the question of what happens inside pg_amcheck (not\n> amcheck proper) deals with unlogged relations in Hot Standby mode.\n> There are two reasonable options: it can either \"verify\" the indexes\n> (actually just show those NOTICE messages), or skip them entirely. I\n> lean towards the former option, on the grounds that I don't think it\n> should be special-cased. But I don't feel very strongly about it.\n\nI like having it do this:\n\n ereport(NOTICE,\n (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n errmsg(\"cannot verify unlogged index \\\"%s\\\"\nduring recovery, skipping\",\n RelationGetRelationName(rel))));\n\nI think the fewer decisions the command-line tool makes, the better.\nWe should put the policy decisions in amcheck itself.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 15:33:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 12:28 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I think that what I've said boils down to this:\n> \n> * pg_amcheck shouldn't attempt to verify temp relations, on the\n> grounds that this is fundamentally not useful, and not something that\n> could ever be sensibly interpreted as \"just doing what the user asked\n> for\".\n\nRight. I don't think there has been any disagreement on this. There is a bug in pg_amcheck with respect to this issue, and we all agree on that.\n\n> * pg_amcheck calls to bt_index_check()/bt_index_parent_check() must\n> only be made with \"i.indisready AND i.indisvalid\" indexes, just like\n> the old query from the docs. (Actually, the same query also filters\n> out temp relations -- which is why I view this issue as almost\n> identical to the first.)\n> \n> Why would the user ask for something that fundamentally doesn't make\n> any sense?\n\nThe user may not know that the system has changed.\n\nFor example, if I see errors in the logs suggesting corruption in a relation named \"mark\" and run pg_amcheck --relation=mark, I expect that to check the relation. If that relation is a temporary table, I'd like to know that it's not going to be checked, not just have pg_amcheck report that everything is ok.\n\nAs another example, if I change my environment variables to connect to the standby rather than the primary, and forget that I did so, and then run pg_amcheck --relation=unlogged_relation, I'd rather get a complaint that I can't check an unlogged relation on a standby than get nothing. Sure, what I did doesn't make sense, but why should the application paper over that mistake?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 12:36:50 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 12:33 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> To me, it doesn't matter which specific option we're talking about. If\n> I tell pg_amcheck to pass a certain flag to the underlying functions,\n> then it should do that. If the behavior needs to be changed, it should\n> be changed in those underlying functions, not in pg_amcheck.\n\nI agree, with the stipulation that the caller (in this case\npg_amcheck) is required to know certain basic things about the\nrelation in order to get useful behavior. For example, if you use\nbt_index_check() with a GIN index, you're going to get an error. That\nmuch we can all agree on, I'm sure.\n\nWhere I might go further than you or Mark (not sure) is on this: I\nalso think that it's the caller's job to not call the functions with\ntemp relations, or (in the case of the index verification stuff) with\n!indisready or !indisvalid relations. I believe that these ought to\nalso be treated as basic questions about the relation, just like in my\nGIN example. But that's as far as I go here.\n\n> If we\n> start putting some of the intelligence into amcheck itself, and some\n> of it into pg_amcheck, I think it's going to become confusing and in\n> fact I think it's going to become unreliable, at least from the user\n> point of view. People will get confused if they run pg_amcheck and get\n> some result (either pass or fail) and then they do the same thing with\n> pg_amcheck and get a different result.\n\nAgreed on all that.\n\n> > --parent-check does present us with the question of what to do in Hot\n> > Standby mode, where it will surely fail (because it requires a\n> > relation level ShareLock, etc). But I actually don't think it's\n> > complicated: we must throw an error, because it's fundamentally not\n> > something that will ever work (with any index). 
Whether the error\n> > comes from pg_amcheck or amcheck proper doesn't seem important to me.\n>\n> That detail, to me, is actually very important.\n\nI believe that you actually reached the same conclusion, though: we\nshould let it just fail. That makes this question easy.\n\n> > I think it's pretty clear that verify_heapam.c (from amcheck proper)\n> > should just follow verify_nbtree.c when directly invoked against an\n> > unlogged index in Hot Standby. That is, it should assume that the\n> > relation has no storage, but still \"verify\" it conceptually. Just show\n> > a NOTICE about it. Assume no storage to verify.\n>\n> I haven't checked the code, but that sounds right. I interpret this to\n> mean that the different sub-parts of amcheck don't handle this case in\n> ways that are consistent with each other, and that seems wrong. We\n> should make them consistent.\n\nI agree that nbtree and heapam verification ought to agree here. But\nmy point was just that this behavior just makes sense: what we have is\nsomething just like an empty relation.\n\n> > Finally, there is the question of what happens inside pg_amcheck (not\n> > amcheck proper) deals with unlogged relations in Hot Standby mode.\n> > There are two reasonable options: it can either \"verify\" the indexes\n> > (actually just show those NOTICE messages), or skip them entirely. I\n> > lean towards the former option, on the grounds that I don't think it\n> > should be special-cased. 
But I don't feel very strongly about it.\n>\n> I like having it do this:\n>\n> ereport(NOTICE,\n> (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n> errmsg(\"cannot verify unlogged index \\\"%s\\\"\n> during recovery, skipping\",\n> RelationGetRelationName(rel))));\n>\n> I think the fewer decisions the command-line tool makes, the better.\n> We should put the policy decisions in amcheck itself.\n\nWait, so you're arguing that we should change amcheck (both nbtree and\nheapam verification) to simply reject unlogged indexes during\nrecovery?\n\nThat doesn't seem like very friendly or self-consistent behavior. At\nfirst (in hot standby) it fails. As soon as the DB is promoted, we'll\nthen also have no on-disk storage for the same unlogged relation, but\nnow suddenly it's okay, just because of that. I find it far more\nlogical to just assume that there is no relfilenode storage to check\nwhen in hot standby.\n\nThis isn't the same as the --parent-check thing at all, because that's\nabout an implementation restriction of Hot Standby. Whereas this is\nabout the physical index structure itself.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 12:56:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 12:36 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The user may not know that the system has changed.\n>\n> For example, if I see errors in the logs suggesting corruption in a relation named \"mark\" and run pg_amcheck --relation=mark, I expect that to check the relation. If that relation is a temporary table, I'd like to know that it's not going to be checked, not just have pg_amcheck report that everything is ok.\n\nThis is just a detail to me. I agree that it's reasonable to say \"I\ncan't do that specific thing you asked for with the temp relation\",\ninstead of \"no such verifiable relation\" -- but only because it's more\nspecific and user friendly. Providing a slightly friendlier error\nmessage like this does not actually conflict with the idea of\ngenerally treating temp relations as \"not visible to pg_amcheck\".\nDitto for the similar !indisready/!i.indisvalid B-Tree case.\n\n> As another example, if I change my environment variables to connect to the standby rather than the primary, and forget that I did so, and then run pg_amcheck --relation=unlogged_relation, I'd rather get a complaint that I can't check an unlogged relation on a standby than get nothing. Sure, what I did doesn't make sense, but why should the application paper over that mistake?\n\nI think that it shouldn't get an error at all -- this should be\ntreated like an empty relation, per the verify_nbtree.c precedent.\npg_amcheck doesn't need to concern itself with this at all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:11:58 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 3:56 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree, with the stipulation that the caller (in this case\n> pg_amcheck) is required to know certain basic things about the\n> relation in order to get useful behavior. For example, if you use\n> bt_index_check() with a GIN index, you're going to get an error. That\n> much we can all agree on, I'm sure.\n\nYes.\n\n> Where I might go further than you or Mark (not sure) is on this: I\n> also think that it's the caller's job to not call the functions with\n> temp relations, or (in the case of the index verification stuff) with\n> !indisready or !indisvalid relations. I believe that these ought to\n> also be treated as basic questions about the relation, just like in my\n> GIN example. But that's as far as I go here.\n\nI am on board with this, with slight trepidation.\n\n> > > --parent-check does present us with the question of what to do in Hot\n> > > Standby mode, where it will surely fail (because it requires a\n> > > relation level ShareLock, etc). But I actually don't think it's\n> > > complicated: we must throw an error, because it's fundamentally not\n> > > something that will ever work (with any index). Whether the error\n> > > comes from pg_amcheck or amcheck proper doesn't seem important to me.\n> >\n> > That detail, to me, is actually very important.\n>\n> I believe that you actually reached the same conclusion, though: we\n> should let it just fail. That makes this question easy.\n\nGreat.\n\n> > > I think it's pretty clear that verify_heapam.c (from amcheck proper)\n> > > should just follow verify_nbtree.c when directly invoked against an\n> > > unlogged index in Hot Standby. That is, it should assume that the\n> > > relation has no storage, but still \"verify\" it conceptually. Just show\n> > > a NOTICE about it. Assume no storage to verify.\n> >\n> > I haven't checked the code, but that sounds right. 
I interpret this to\n> > mean that the different sub-parts of amcheck don't handle this case in\n> > ways that are consistent with each other, and that seems wrong. We\n> > should make them consistent.\n>\n> I agree that nbtree and heapam verification ought to agree here. But\n> my point was just that this behavior just makes sense: what we have is\n> something just like an empty relation.\n\nI am not confident that this behavior is optimal. It's pretty\narbitrary. It's like saying \"well, you asked me to check that everyone\nin the car was wearing seatbelts, and the car has no seatbelts, so\nwe're good!\"\n\nTo which I respond: maybe. Were we trying to verify that people are\ncomplying with safety regulations as well as may be possible under the\ncircumstances, or that people are actually safe?\n\nThe analogy here is: are we trying to verify that the relations are\nvalid? Or are we just trying to verify that they are as valid as we\ncan expect them to be?\n\nFor me, the deciding point is that verify_nbtree.c was here first, and\nit set a precedent. Unless there is a compelling reason to do\notherwise, we should make later things conform to that precedent.\nWhether that's actually best, I'm not certain. It might be, but I'm\nnot sure that it is.\n\n> > > Finally, there is the question of what happens inside pg_amcheck (not\n> > > amcheck proper) deals with unlogged relations in Hot Standby mode.\n> > > There are two reasonable options: it can either \"verify\" the indexes\n> > > (actually just show those NOTICE messages), or skip them entirely. I\n> > > lean towards the former option, on the grounds that I don't think it\n> > > should be special-cased. 
But I don't feel very strongly about it.\n> >\n> > I like having it do this:\n> >\n> > ereport(NOTICE,\n> > (errcode(ERRCODE_READ_ONLY_SQL_TRANSACTION),\n> > errmsg(\"cannot verify unlogged index \\\"%s\\\"\n> > during recovery, skipping\",\n> > RelationGetRelationName(rel))));\n> >\n> > I think the fewer decisions the command-line tool makes, the better.\n> > We should put the policy decisions in amcheck itself.\n>\n> Wait, so you're arguing that we should change amcheck (both nbtree and\n> heapam verification) to simply reject unlogged indexes during\n> recovery?\n\nNo, that's existing code from btree_index_mainfork_expected. I thought\nyou were saying that verify_heapam.c should adopt the same approach,\nand I was agreeing, not because I think it's necessarily the perfect\napproach, but for consistency.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:15:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 1:15 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Where I might go further than you or Mark (not sure) is on this: I\n> > also think that it's the caller's job to not call the functions with\n> > temp relations, or (in the case of the index verification stuff) with\n> > !indisready or !indisvalid relations. I believe that these ought to\n> > also be treated as basic questions about the relation, just like in my\n> > GIN example. But that's as far as I go here.\n>\n> I am on board with this, with slight trepidation.\n\nIt may not be a great design, or even a good one. My argument is just\nthat it's the least worst design overall.\n\nIt is the most consistent with the general design of the system, for\nreasons that are pretty deeply baked into the system. I'm reminded of\nthe fact that REINDEX CONCURRENTLY's completion became blocked due to\nsimilar trepidations. Understandably so.\n\n> > I agree that nbtree and heapam verification ought to agree here. But\n> > my point was just that this behavior just makes sense: what we have is\n> > something just like an empty relation.\n>\n> I am not confident that this behavior is optimal. It's pretty\n> arbitrary. It's like saying \"well, you asked me to check that everyone\n> in the car was wearing seatbelts, and the car has no seatbelts, so\n> we're good!\"\n\nI prefer to think of it as \"there is nobody in the car, so we're all good!\".\n\n> The analogy here is: are we trying to verify that the relations are\n> valid? Or are we just trying to verify that they are as valid as we\n> can expect them to be?\n\nI think that we do the latter (or something much closer to the latter\nthan to the former). It's actually a very Karl Popper thing. Absence\nof evidence isn't evidence of absence -- period. 
We can get into a\nconversation about degrees of confidence, but that doesn't seem like\nit'll ever affect how we go about designing these things.\n\nA lot of my disagreements around this stuff (especially with Mark)\nseem to stem from this basic understanding of things, in one way or\nanother.\n\n> No, that's existing code from btree_index_mainfork_expected. I thought\n> you were saying that verify_heapam.c should adopt the same approach,\n> and I was agreeing, not because I think it's necessarily the perfect\n> approach, but for consistency.\n\nSorry, I somehow read that code as having an ERROR, not a NOTICE.\n(Even though I wrote the code myself.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:49:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": ">\n>\n> It is the most consistent with the general design of the system, for\n> reasons that are pretty deeply baked into the system. I'm reminded of\n> the fact that REINDEX CONCURRENTLY's completion became blocked due to\n> similar trepidations. Understandably so.\n\n\nI may be mistaken, but I recall the fact that all index builds started during\nsome other (long) index build do not finish with indexes usable for selects\nuntil that long index is built. This may or may not be a source of amcheck\nmisbehavior. Just a note on what could possibly be considered.\n\nBest regards,\nPavel Borisov",
"msg_date": "Thu, 7 Oct 2021 00:27:54 +0300",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 1:49 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> The analogy here is: are we trying to verify that the relations are\n>> valid? Or are we just trying to verify that they are as valid as we\n>> can expect them to be?\n> \n> I think that we do the latter (or something much closer to the latter\n> than to the former). It's actually a very Karl Popper thing. Absence\n> of evidence isn't evidence of absence -- period. We can get into a\n> conversation about degrees of confidence, but that doesn't seem like\n> it'll ever affect how we go about designing these things.\n> \n> A lot of my disagreements around this stuff (especially with Mark)\n> seem to stem from this basic understanding of things, in one way or\n> another.\n\nI think the disagreements are about something else.\n\nTalking about pg_amcheck \"checking\" a database, or \"checking\" a relation, is actually short-hand for saying that pg_amcheck handed off the objects to amcheck's functions. The pg_amcheck client application itself isn't checking anything. This short-hand leads to misunderstandings that make it really hard for me to understand what people mean in this thread. Your comments suggest that I (or pg_amcheck) take some view on whether the database is corrupt, or whether we've proven that it is corrupt, or whether we've proven that it is not corrupt. In truth, all the pg_amcheck frontend client can take a view on is whether it was able to issue all the commands to the backend that it was asked to issue, and whether any of those commands responded with an error.\n\nTalking about pg_amcheck \"failing\" is also confusing. I don't understand what people mean by this. The example towards the top of this thread from Alexander was about pg_amcheck || echo \"fail\", but that suggests that failure is just a question of whether pg_amcheck exited with a non-zero exit code. 
In other parts of the thread, talking about pg_amcheck \"failing\" seems to be used to mean \"pg_amcheck has diagnosed corruption\". This all gets muddled together.\n\nUpthread, I decided to just make the changes to pg_amcheck that you seemed to want, but now I don't know what you want. Can you opine on each of the following? I need to know what they should print, and whether they should return with a non-zero exit status. I genuinely can't post a patch until I know what these are supposed to do, because I need to update the regression tests accordingly: \n\n\npg_amcheck -d db1 -d db2 -d db3 --table=mytable\n\nIn this case, mytable is a regular table on db1, a temporary table on db2, and an unlogged table on db3, and db3 is in recovery.\n\n\npg_amcheck --all --index=\"*accounting*\" --parent-check --table=\"*human_resources*\" --table=\"*peter*\" --relation=\"*alexander*\"\n\nAssume a multitude of databases, some primary, some standby, some indexes logged, some unlogged, some temporary. Some of the human resources tables are unlogged, some not, and they're scattered across different databases, some in recovery, some not. There is exactly one table per database that matches the pattern /*peter*/, but its circumstances are different from one database to the next, and likewise for the pattern /*alexander*/ except that in some databases it matches an index and in others it matches a table.\n\n\nI thought that we were headed toward a decision where (despite my discomfort) pg_amcheck would downgrade options as necessary, but now it sounds like that's not so. So what should it do?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 14:45:49 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 2:45 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> and db3 is in recovery.\n\n<snip>\n\n> they're scattered across different databases, some in recovery, some not.\n\nWhat I mean here is that, since pg_amcheck might run for many hours, and database may start in recovery but then exit recovery, or may be restarted and go into recovery while we're not connected to them, the tool may see differences when processing a pattern against one database at one point in time and the same or different patterns against the same or different databases at some other point in time. We don't get the luxury of assuming that nothing changes out from under us.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 15:03:08 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 2:45 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I think the disagreements are about something else.\n\nInformally speaking, you could say that pg_amcheck and amcheck verify\nrelations. More formally speaking, both amcheck (whether called by\npg_amcheck or some other thing) can only prove the presence of\ncorruption. They cannot prove its absence. (The amcheck docs have\nalways said almost these exact words.)\n\nThis seems to come up a lot because at various points you seem to be\nconcerned about introducing specific imperfections. But it's not like\nyour starting point was ever perfection, or ever could be. I can\nalways describe a scenario in which amcheck misses real corruption --\na scenario which may be very contrived. So the mere fact that some new\ntheoretical possibility of corruption is introduced by some action\ndoes not in itself mean much. We're dealing with that constantly, and\nalways will be.\n\nLet's suppose I was to \"directly fix amcheck + !indisvalid indexes\". I\ndon't even know what that means -- I honestly don't have a clue.\nYou're focussing on one small piece of code in verify_nbtree.c, that\nseems to punt responsibility, but the fact is that there are deeply\nbaked-in reasons why it does so. That's a reflection of how many\nthings about the system work, in general. Attributing blame to any one\nsmall snippet of code (code in verify_nbtree.c, or wherever) just\nisn't helpful.\n\n> In truth, all the pg_amcheck frontend client can take a view on is whether it was able to issue all the commands to the backend that it was asked to issue, and whether any of those commands responded with an error.\n\nAFAICT all that pg_amcheck has to do is follow the amcheck user docs, by\ngeneralizing from the example SQL query for the B-Tree stuff. 
And, it\nshould separately filter non-temp relations for the heap stuff, for\nthe same reasons (exactly the same situation there).\n\n> pg_amcheck -d db1 -d db2 -d db3 --table=mytable\n>\n> In this case, mytable is a regular table on db1, a temporary table on db2, and an unlogged table on db3, and db3 is in recovery.\n\nI don't think that pg_amcheck needs to care about being in recovery,\nat all. I agreed with you about using pg_is_in_recovery() from at one\npoint. That was a mistake on my part.\n\n> I thought that we were headed toward a decision where (despite my discomfort) pg_amcheck would downgrade options as necessary, but now it sounds like that's not so. So what should it do\n\nDowngrade is how you refer to it. I just think of it as making sure\nthat pg_amcheck only asks amcheck to verify relations that are\nbasically capable of being verified (e.g., not a temp table).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 15:20:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 3:20 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> I think the disagreements are about something else.\n> \n> Informally speaking, you could say that pg_amcheck and amcheck verify\n> relations. More formally speaking, both amcheck (whether called by\n> pg_amcheck or some other thing) can only prove the presence of\n> corruption. They cannot prove its absence. (The amcheck docs have\n> always said almost these exact words.)\n\nI totally agree that the amcheck functions cannot prove the absence of corruption.\n\nI prefer not to even use language about proving the presence of corruption when discussing pg_amcheck. I have let that slide upthread as a convenient short-hand, but I think it doesn't help. For pg_amcheck to take any view whatsoever on whether a btree index is corrupt, it would have to introspect the error message that it gets back from bt_index_check(). It doesn't do that, nor do I think that it should. It just prints the contents of the error for the user and records that fact and eventually exits with a non-zero exit code. The error might have been something about the command exiting due to the crash of another backend, or to do with a deadlock against some other process, or whatever, and pg_amcheck has no opinion about whether any of that is to do with corruption or not.\n\n> This seems to come up a lot because at various points you seem to be\n> concerned about introducing specific imperfections. But it's not like\n> your starting point was ever perfection, or ever could be.\n\nFrom the point of view of detecting corruptions, I agree that it never could be. But I'm not talking about that. I'm talking about whether pg_amcheck issues all the commands that it is supposed to issue. If I work for Daddy Warbucks and he gives me 30 classic cars to take to 10 different mechanics, I can do that job perfectly even if the mechanics do less than perfect work. If I leave three cars in the driveway, that's on me. 
Likewise, it's not on pg_amcheck if the checking functions can't do perfect work, but it is on pg_amcheck if it doesn't issue all the expected commands. But later on in this email, it appears we don't have any remaining disagreements about that. Read on....\n\n> I can\n> always describe a scenario in which amcheck misses real corruption --\n> a scenario which may be very contrived. So the mere fact that some new\n> theoretical possibility of corruption is introduced by some action\n> does not in itself mean much. We're dealing with that constantly, and\n> always will be.\n\nI wish we could stop discussing this. I really don't think this ticket has anything to do with how well or how poorly or how completely the amcheck functions work.\n\n> Let's suppose I was to \"directly fix amcheck + !indisvalid indexes\". I\n> don't even know what that means -- I honestly don't have a clue.\n> You're focussing on one small piece of code in verify_nbtree.c, that\n> seems to punt responsibility, but the fact is that there are deeply\n> baked-in reasons why it does so. That's a reflection of how many\n> things about the system work, in general. Attributing blame to any one\n> small snippet of code (code in verify_nbtree.c, or wherever) just\n> isn't helpful.\n\nI think we have agreed that pg_amcheck can filter out invalid indexes. I don't have a problem with that. I admit that I did have a problem with that upthread, but it's been a while since I conceded that point so I'd rather not have to argue it again.\n\n>> In truth, all the pg_amcheck frontend client can take a view on is whether it was able to issue all the commands to the backend that it was asked to issue, and whether any of those commands responded with an error.\n> \n> AFAICT all that pg_amcheck has to do is follow the amcheck user docs, by\n> generalizing from the example SQL query for the B-Tree stuff. 
And, it\n> should separately filter non-temp relations for the heap stuff, for\n> the same reasons (exactly the same situation there).\n\nI think we have agreed on that one, too, without me having ever argued it. I posted a patch to filter out the temporary tables already.\n\n>> pg_amcheck -d db1 -d db2 -d db3 --table=mytable\n>> \n>> In this case, mytable is a regular table on db1, a temporary table on db2, and an unlogged table on db3, and db3 is in recovery.\n> \n> I don't think that pg_amcheck needs to care about being in recovery,\n> at all. I agreed with you about using pg_is_in_recovery() from at one\n> point. That was a mistake on my part.\n\nOk, excellent, that was probably the only thing that had me really hung up. I thought you were still asking for pg_amcheck to filter out the --parent-check option when in recovery, but if you're not asking for that, then I might have enough to go on now.\n\n>> I thought that we were headed toward a decision where (despite my discomfort) pg_amcheck would downgrade options as necessary, but now it sounds like that's not so. So what should it do\n> \n> Downgrade is how you refer to it. I just think of it as making sure\n> that pg_amcheck only asks amcheck to verify relations that are\n> basically capable of being verified (e.g., not a temp table).\n\nI was using \"downgrading\" to mean downgrading from bt_index_parent_check() to bt_index_check() when pg_is_in_recovery() is true, but you've clarified that you're not requesting that downgrade, so I think we've now gotten past the last sticking point about that whole issue.\n\nThere are other sticking points that don't seem to be things you have taken a view on. Specifically, pg_amcheck complains if a relation pattern doesn't match anything, so that\n\npg_amcheck --table=\"*acountng*\"\n\nwill complain if no tables match, giving the user the opportunity to notice that they spelled \"accounting\" wrong. 
If there happens to be a table named \"xyzacountngo\", and that matches, too bad. There isn't any way pg_amcheck can be responsible for that. But if there is a temporary table named \"xyzacountngo\" and that gets skipped because it's a temp table, I don't know what feedback the user should get. That's a thorny user interface question, not a corruption checking question, and I don't think you need to weigh in unless you want to. I'll most likely go with whatever is the simplest to code and/or most similar to what is currently in the tree, because I don't see any knock-down arguments one way or the other.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 15:47:27 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 3:47 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> I totally agree that the amcheck functions cannot prove the absence of corruption.\n>\n> I prefer not to even use language about proving the presence of corruption when discussing pg_amcheck.\n\nI agree that it doesn't usually help. But sometimes it is important.\n\n> > This seems to come up a lot because at various points you seem to be\n> > concerned about introducing specific imperfections. But it's not like\n> > your starting point was ever perfection, or ever could be.\n\n> From the point of view of detecting corruptions, I agree that it never could be. But I'm not talking about that. I'm talking about whether pg_amcheck issues all the commands that it is supposed to issue. If I work for Daddy Warbucks and he gives me 30 classic cars to take to 10 different mechanics, I can do that job perfectly even if the mechanics do less than perfect work. If I leave three cars in the driveway, that's on me. Likewise, it's not on pg_amcheck if the checking functions can't do perfect work, but it is on pg_amcheck if it doesn't issue all the expected commands. But later on in this email, it appears we don't have any remaining disagreements about that. Read on....\n\nWhen you say \"expected commands\", I am entitled to ask: expected by\nwhom, based on what underlying principle? Similarly, when you suggest\nthat amcheck should directly deal with !indisvalid indexes itself, it\nnaturally leads to a tricky discussion of the precise definition of a\nrelation (in particular in the presence of REINDEX CONCURRENTLY), and\nthe limits of what is possible with amcheck. That's just where the\ndiscussion has to go.\n\nYou cannot say that amcheck must (say) \"directly deal with indisvalid\nindexes\", without at least saying why. pg_amcheck works by querying\npg_class, finding relations to verify. 
There is no way that that can\nwork that allows pg_amcheck to completely sidestep these awkward\nquestions -- just like with pg_dump. There is no safe neutral starting\npoint for a program like that.\n\n> > I can\n> > always describe a scenario in which amcheck misses real corruption --\n> > a scenario which may be very contrived. So the mere fact that some new\n> > theoretical possibility of corruption is introduced by some action\n> > does not in itself mean much. We're dealing with that constantly, and\n> > always will be.\n>\n> I wish we could stop discussing this. I really don't think this ticket has anything to do with how well or how poorly or how completely the amcheck functions work.\n\nIt's related to !indisvalid indexes. At one point you were concerned\nabout not having coverage of them in certain scenarios. Which is fine.\nBut the inevitable direction of that conversation is towards\nfundamental definitional questions.\n\nQuite happy to drop all of this now, though.\n\n> Ok, excellent, that was probably the only thing that had me really hung up. I thought you were still asking for pg_amcheck to filter out the --parent-check option when in recovery, but if you're not asking for that, then I might have enough to go on now.\n\nSorry about that. I realized my mistake (not specifically addressing\npg_is_in_recovery()) after I hit \"send\", and should have corrected the\nrecord sooner.\n\n> I was using \"downgrading\" to mean downgrading from bt_index_parent_check() to bt_index_check() when pg_is_in_recovery() is true, but you've clarified that you're not requesting that downgrade, so I think we've now gotten past the last sticking point about that whole issue.\n\nRight. I never meant anything like making a would-be\nbt_index_parent_check() call into a bt_index_check() call, just\nbecause of the state of the system (e.g., it's in recovery). 
That\nseems awful, in fact.\n\n> will complain if no tables match, giving the user the opportunity to notice that they spelled \"accounting\" wrong. If there happens to be a table named \"xyzacountngo\", and that matches, too bad. There isn't any way pg_amcheck can be responsible for that. But if there is a temporary table named \"xyzacountngo\" and that gets skipped because it's a temp table, I don't know what feedback the user should get. That's a thorny user interfaces question, not a corruption checking question, and I don't think you need to weigh in unless you want to. I'll most likely go with whatever is the simplest to code and/or most similar to what is currently in the tree, because I don't see any knock-down arguments one way or the other.\n\nI agree with you that this is a UI thing, since in any case the temp\ntable is pretty much \"not visible to pg_amcheck\". I have no particular\nfeelings about it.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:12:15 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 2:28 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n>> It is the most consistent with the general design of the system, for\n>> reasons that are pretty deeply baked into the system. I'm reminded of\n>> the fact that REINDEX CONCURRENTLY's completion became blocked due to\n>> similar trepidations. Understandably so.\n>\n>\n> I may mistake, but I recall the fact that all indexes builds started during some other (long) index build do not finish with indexes usable for selects until that long index is built. This may and may not be a source of amcheck misbehavior. Just a note what could be possibly considered.\n\nI may have been unclear. I meant that work on the REINDEX CONCURRENTLY\nfeature (several years ago) was very difficult. It seemed to challenge\nwhat \"Postgres relation\" really means.\n\nVarious community members had concerns about the definition at the\ntime. Remember, plain REINDEX actually gets a full AccessExclusiveLock\non the target index relation. This is practically as bad as getting\nthe same lock on the table itself for most users -- which is very\ndisruptive indeed. It's much more disruptive than plain CREATE INDEX\n-- CREATE INDEX generally only blocks write DML. Whereas REINDEX tends\nto block both writes and reads (in practice, barring some narrow cases\nwith prepared statements that are too confusing to users to be worth\ndiscussing). Which is surprising in itself to users. Why should plain\nREINDEX be so different to plain CREATE INDEX?\n\nThe weird (but also helpful) thing about the implementation of REINDEX\nCONCURRENTLY is that we can have *two* pg_class entries for what the\nuser thinks of as one index/relation. 
Having two pg_class entries is\nalso why plain REINDEX had problems that plain CREATE INDEX never had\n-- having only one pg_class entry was actually the true underlying\nproblem, all along.\n\nSometimes we have to make a difficult choice between \"weird rules but\nnice behavior\" (as with REINDEX CONCURRENTLY), and \"nice rules but\nweird behavior\" (as with plain REINDEX).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 6 Oct 2021 18:03:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "> On Oct 6, 2021, at 4:12 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n>> \n>> Ok, excellent, that was probably the only thing that had me really hung up. I thought you were still asking for pg_amcheck to filter out the --parent-check option when in recovery, but if you're not asking for that, then I might have enough to go on now.\n> \n> Sorry about that. I realized my mistake (not specifically addressing\n> pg_is_in_recovery()) after I hit \"send\", and should have corrected the\n> record sooner.\n> \n>> I was using \"downgrading\" to mean downgrading from bt_index_parent_check() to bt_index_check() when pg_is_in_recovery() is true, but you've clarified that you're not requesting that downgrade, so I think we've now gotten past the last sticking point about that whole issue.\n> \n> Right. I never meant anything like making a would-be\n> bt_index_parent_check() call into a bt_index_check() call, just\n> because of the state of the system (e.g., it's in recovery). That\n> seems awful, in fact.\n\nPlease find attached the latest version of the patch which includes the changes we discussed.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 11 Oct 2021 09:53:45 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 9:53 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > Right. I never meant anything like making a would-be\n> > bt_index_parent_check() call into a bt_index_check() call, just\n> > because of the state of the system (e.g., it's in recovery). That\n> > seems awful, in fact.\n>\n> Please find attached the latest version of the patch which includes the changes we discussed.\n\nThis mostly looks good to me. Just one thing occurs to me: I suspect\nthat we don't need to call pg_is_in_recovery() from SQL at all. What's\nwrong with just letting verify_heapam() (the C function from amcheck\nproper) show those notice messages where appropriate?\n\nIn general I don't like the idea of making the behavior of pg_amcheck\nconditioned on the state of the system (e.g., whether we're in\nrecovery) -- we should just let amcheck throw \"invalid option\" type\nerrors when that's the logical outcome (e.g., when --parent-check is\nused on a replica). To me this seems rather different than not\nchecking temporary tables, because that's something that inherently\nwon't work. (Also, I consider the index-is-being-built stuff to be\nvery similar to the temp table stuff -- same basic situation.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 10:10:25 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 10:10 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> This mostly looks good to me. Just one thing occurs to me: I suspect\n> that we don't need to call pg_is_in_recovery() from SQL at all. What's\n> wrong with just letting verify_heapam() (the C function from amcheck\n> proper) show those notice messages where appropriate?\n\nI thought a big part of the debate upthread was over exactly this point, that pg_amcheck should not attempt to check (a) temporary relations, (b) indexes that are invalid or unready, and (c) unlogged relations during recovery.\n\n> In general I don't like the idea of making the behavior of pg_amcheck\n> conditioned on the state of the system (e.g., whether we're in\n> recovery) -- we should just let amcheck throw \"invalid option\" type\n> errors when that's the logical outcome (e.g., when --parent-check is\n> used on a replica). To me this seems rather different than not\n> checking temporary tables, because that's something that inherently\n> won't work. (Also, I consider the index-is-being-built stuff to be\n> very similar to the temp table stuff -- same basic situation.)\n\nI don't like having pg_amcheck parse the error message that comes back from amcheck. If amcheck throws an error, pg_amcheck considers that a failure and ultimately exits with a non-zero status. So, if we're going to have amcheck handle these cases, it will have to be with a NOTICE (or perhaps a WARNING) rather than an error. That's not what happens now, but if you'd rather we fixed this problem that way, I can go do that, or perhaps as the author of the bt_*_check functions, you can do that and I can just do the pg_amcheck changes.\n\nHow shall we proceed?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 10:46:16 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 10:46 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > On Oct 11, 2021, at 10:10 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> > This mostly looks good to me. Just one thing occurs to me: I suspect\n> > that we don't need to call pg_is_in_recovery() from SQL at all. What's\n> > wrong with just letting verify_heapam() (the C function from amcheck\n> > proper) show those notice messages where appropriate?\n>\n> I thought a big part of the debate upthread was over exactly this point, that pg_amcheck should not attempt to check (a) temporary relations, (b) indexes that are invalid or unready, and (c) unlogged relations during recovery.\n\nAgain, I consider (a) and (b) very similar to each other, but very\ndissimilar to (c). Only (a) and (b) are *inherently* not verifiable by\namcheck.\n\nTo me, giving pg_amcheck responsibility for only calling amcheck\nfunctions when (a) and (b) are sane is akin to expecting pg_amcheck to\nonly call bt_index_check() with a B-Tree index. Giving pg_amcheck\nthese responsibilities is not a case of \"pg_amcheck presuming to know\nwhat's best for the user, or too much about amcheck\", because amcheck\nitself pretty clearly expects this from the user (and always has). The\nuser is no worse off for having used pg_amcheck rather than calling\namcheck functions from SQL themselves. pg_amcheck is literally just\nfulfilling basic expectations held by amcheck, that are pretty much\ndocumented as such.\n\nSure, the user might not be happy with --parent-check throwing an\nerror on a replica. But in practice most users won't want to do that\nanyway. Even on a primary it's usually not possible as a practical\nmatter, because the locking implications are *bad* -- it's just too\ndisruptive, for too little extra coverage. And so when --parent-check\nfails on a replica, it really is very likely that the user should just\nnot do that. Which is easy: just remove --parent-check, and try again.\n\nMost scenarios where --parent-check is useful involve the user already\nknowing that there is some corruption. In other words, scenarios where\nalmost nothing could be considered overkill. Presumably this is very\nrare.\n\n> I don't like having pg_amcheck parse the error message that comes back from amcheck.\n\n> How shall we proceed?\n\nWhat's the problem with just having pg_amcheck pass through the notice\nto the user, without it affecting anything else? Why should a simple\nnotice message need to affect its return code, or anything else?\n\nIt's not like I feel very strongly about this question. Ultimately it\nprobably doesn't matter very much -- if pg_amcheck just can't deal\nwith these notice messages for some reason, then I can let it go. But\nwhat's the reason? If there is a good reason, then maybe we should\njust not have the notice messages (so we would just remove the\nexisting one from verify_nbtree.c, while still interpreting the case\nin the same way -- index has no storage to check, and so is trivially\nverified).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:12:07 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 11:12 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> What's the problem with just having pg_amcheck pass through the notice\n> to the user, without it affecting anything else? Why should a simple\n> notice message need to affect its return code, or anything else?\n\nThat's fine by me, but I was under the impression that people wanted the extraneous noise removed. Since pg_amcheck can know the command is going to draw a \"you can't check that right now\" type message, one might argue that it is drawing these notices for no particular benefit. Somebody could quite reasonably complain about this on a hot standby with millions of unlogged relations. Actual ERROR messages might get lost in all the noise.\n\nIt's true that these NOTICEs do not change the return code. I was thinking about the ERRORs we get on failed lock acquisition, but that is unrelated.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:26:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 11:12 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Sure, the user might not be happy with --parent-check throwing an\n> error on a replica. But in practice most users won't want to do that\n> anyway. Even on a primary it's usually not possible as a practical\n> matter, because the locking implications are *bad* -- it's just too\n> disruptive, for too little extra coverage. And so when --parent-check\n> fails on a replica, it really is very likely that the user should just\n> not do that. Which is easy: just remove --parent-check, and try again.\n\nWe should have a warning box about this in the pg_amcheck docs. Users\nshould think carefully about ever using --parent-check, since it alone\ntotally changes the locking requirements (actually --rootdescend will\ndo that too, but only because that option also implies\n--parent-check).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:26:29 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 11:26 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> We should have a warning box about this in the pg_amcheck docs. Users\n> should think carefully about ever using --parent-check, since it alone\n> totally changes the locking requirements (actually --rootdescend will\n> do that too, but only because that option also implies\n> --parent-check).\n\nThe recently submitted patch already contains a short paragraph for each of these, but not a warning box. Should I reformat those as warning boxes? I don't know the current thinking on the appropriateness of that documentation style.\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:29:12 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 11:29 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The recently submitted patch already contains a short paragraph for each of these, but not a warning box. Should I reformat those as warning boxes? I don't know the current thinking on the appropriateness of that documentation style.\n\nI definitely think that it warrants a warning box. This is a huge\npractical difference.\n\nNote that I'm talking about a standard thing, which there are\ncertainly a dozen or more examples of in the docs already. Just grep\nfor \"<warning> </warning>\" tags to see the existing warning boxes.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:33:40 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 11:33 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I definitely think that it warrants a warning box. This is a huge\n> practical difference.\n> \n> Note that I'm talking about a standard thing, which there are\n> certainly a dozen or more examples of in the docs already. Just grep\n> for \"<warning> </warning>\" tags to see the existing warning boxes.\n\nYes, sure, I know they exist. It's just that I have a vague recollection of a discussion on -hackers about whether we should be using them so much.\n\nThe documentation for contrib/amcheck has a paragraph but not a warning box. Should that be changed also?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:37:15 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 11:26 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> That's fine by me, but I was under the impression that people wanted the extraneous noise removed.\n\nA NOTICE message is supposed to be surfaced to clients (but not stored\nin the server log), pretty much by definition.\n\nIt's not unreasonable to argue that I was mistaken to ever think that\nabout this particular message. In fact, I suspect that I was.\n\n> Since pg_amcheck can know the command is going to draw a \"you can't check that right now\" type message, one might argue that it is drawing these notices for no particular benefit.\n\nBut technically it *was* checked. That's how I think of it, at least.\nIf a replica comes out of recovery, and we run pg_amcheck immediately\nafterwards, are we now \"checking it for real\"? I don't think that\ndistinction is meaningful.\n\n> Somebody could quite reasonably complain about this on a hot standby with millions of unlogged relations. Actual ERROR messages might get lost in all the noise.\n\nThat's a good point.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:40:34 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 11:37 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The documentation for contrib/amcheck has a paragraph but not a warning box. Should that be changed also?\n\nMaybe. I think that the pg_amcheck situation is a lot worse, because\nusers could easily interpret --parent-check as an additive thing.\nTotally changing the general locking requirements seems like a POLA\nviolation. Besides, amcheck proper is now very much the low level tool\nthat most users won't ever bother with.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:46:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 11:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> A NOTICE message is supposed to be surfaced to clients (but not stored\n> in the server log), pretty much by definition.\n>\n> It's not unreasonable to argue that I was mistaken to ever think that\n> about this particular message. In fact, I suspect that I was.\n\n> > Somebody could quite reasonably complain about this on a hot standby with millions of unlogged relations. Actual ERROR messages might get lost in all the noise.\n\nHow about this: we can just lower the elevel, from NOTICE to DEBUG1.\nWe'd then be able to keep the message we have today in\nverify_nbtree.c. We'd also add a matching message (and logic) to\nverify_heapam.c, keeping them consistent.\n\nI find your argument about spammy messages convincing. But it's no\nless valid for any other user of amcheck. So we really should just fix\nthat at the amcheck level. That way you can get rid of the call to\npg_is_in_recovery() from the SQL statements in pg_amcheck, while still\nfixing everything that needs to be fixed in pg_amcheck.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:53:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 11:53 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Mon, Oct 11, 2021 at 11:40 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>> A NOTICE message is supposed to be surfaced to clients (but not stored\n>> in the server log), pretty much by definition.\n>> \n>> It's not unreasonable to argue that I was mistaken to ever think that\n>> about this particular message. In fact, I suspect that I was.\n> \n>>> Somebody could quite reasonably complain about this on a hot standby with millions of unlogged relations. Actual ERROR messages might get lost in all the noise.\n> \n> How about this: we can just lower the elevel, from NOTICE to DEBUG1.\n> We'd then be able to keep the message we have today in\n> verify_nbtree.c. We'd also add a matching message (and logic) to\n> verify_heapam.c, keeping them consistent.\n> \n> I find your argument about spammy messages convincing. But it's no\n> less valid for any other user of amcheck. So we really should just fix\n> that at the amcheck level. That way you can get rid of the call to\n> pg_is_in_recovery() from the SQL statements in pg_amcheck, while still\n> fixing everything that needs to be fixed in pg_amcheck.\n\nYour proposal sounds good. Let me try it and get back to you shortly.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 12:25:03 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "> On Oct 11, 2021, at 12:25 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Your proposal sounds good. Let me try it and get back to you shortly.\n\nOk, I went with this suggestion, and also your earlier suggestion to have a <warning> in the pg_amcheck docs about using --parent-check and/or --rootdescend against servers in recovery.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 11 Oct 2021 13:20:26 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 1:20 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Ok, I went with this suggestion, and also your earlier suggestion to have a <warning> in the pg_amcheck docs about using --parent-check and/or --rootdescend against servers in recovery.\n\nMy concern with --parent-check (and with --rootdescend) had little to\ndo with Hot Standby. I suggested using a warning because these options\nalone can pretty much cause bedlam on a production database. At least\nif they're used carelessly. Again, bt_index_parent_check()'s relation\nlevel locks will block all DML, as well as VACUUM. That isn't the case\nwith any of the other pg_amcheck options, including those that call\nbt_index_check(), and including the heapam verification functionality.\n\nIt's also true that --parent-check won't work in Hot Standby mode, of\ncourse. So it couldn't hurt to mention that in passing, at the same\npoint. But that's a secondary point, at best. We don't need to use a\nwarning box because of that.\n\nOverall, your approach looks good to me. Will Robert take care of\ncommitting this, or should I?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 11 Oct 2021 14:33:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 2:33 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Mon, Oct 11, 2021 at 1:20 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> Ok, I went with this suggestion, and also your earlier suggestion to have a <warning> in the pg_amcheck docs about using --parent-check and/or --rootdescend against servers in recovery.\n> \n> My concern with --parent-check (and with --rootdescend) had little to\n> do with Hot Standby. I suggested using a warning because these options\n> alone can pretty much cause bedlam on a production database.\n\nOk, that makes more sense. Would you care to rephrase them? I don't think we need another round of patches posted.\n\n> At least\n> if they're used carelessly. Again, bt_index_parent_check()'s relation\n> level locks will block all DML, as well as VACUUM. That isn't the case\n> with any of the other pg_amcheck options, including those that call\n> bt_index_check(), and including the heapam verification functionality.\n> \n> It's also true that --parent-check won't work in Hot Standby mode, of\n> course. So it couldn't hurt to mention that in passing, at the same\n> point. But that's a secondary point, at best. We don't need to use a\n> warning box because of that.\n> \n> Overall, your approach looks good to me. Will Robert take care of\n> committing this, or should I?\n\nI'd appreciate if you could fix up the <warning> in the docs and do the commit.\n\nThanks!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 14:41:31 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 2:41 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> > Overall, your approach looks good to me. Will Robert take care of\n> > committing this, or should I?\n>\n> I'd appreciate if you could fix up the <warning> in the docs and do the commit.\n\nCool. I pushed just the amcheck changes a moment ago. I attach the\nremaining changes from your v3, with a new draft commit message (no\nreal changes). I didn't push the rest (what remains in the attached\nrevision) just yet because I'm not quite sure about the approach used\nto exclude temp tables.\n\nDo we really need the redundancy between prepare_btree_command(),\nprepare_heap_command(), and compile_relation_list_one_db()? All three\nexclude temp relations, plus you have extra stuff in\nprepare_btree_command(). There is some theoretical value in delaying\nthe index specific stuff until the query actually runs, at least in\ntheory. But it also seems unlikely to make any appreciable difference\nto the overall level of coverage in practice.\n\nWould it be simpler to do it all together, in\ncompile_relation_list_one_db()? Were you concerned about things\nchanging when parallel workers are run? Or something else?\n\nMany thanks\n--\nPeter Geoghegan",
"msg_date": "Mon, 11 Oct 2021 17:37:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 11, 2021, at 5:37 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> Cool. I pushed just the amcheck changes a moment ago. I attach the\n> remaining changes from your v3, with a new draft commit message (no\n> real changes). I didn't push the rest (what remains in the attached\n> revision) just yet because I'm not quite sure about the approach used\n> to exclude temp tables.\n\nThanks for that.\n\n> Do we really need the redundancy between prepare_btree_command(),\n> prepare_heap_command(), and compile_relation_list_one_db()? All three\n> exclude temp relations, plus you have extra stuff in\n> prepare_btree_command(). There is some theoretical value in delaying\n> the index specific stuff until the query actually runs, at least in\n> theory. But it also seems unlikely to make any appreciable difference\n> to the overall level of coverage in practice.\n\nI agree that it is unlikely to make much difference in practice. Another session running reindex concurrently is, I think, the most likely to conflict, but it is just barely imaginable that a relation will be dropped, and its OID reused for something unrelated, by the time the check command gets run. The new object might be temporary where the old object was not. On a properly functioning database, that may be too remote a possibility to be worth worrying about, but on a corrupt database, most bets are off, and I can't really tell you if that's a likely scenario, because it is hard to think about all the different ways corruption might cause a database to behave. On the other hand, the join against pg_class might fail due to unspecified corruption, so my attempt to play it safe may backfire.\n\nI don't feel strongly about this. If you'd like me to remove those checks, I can do so. These are just my thoughts on the subject.\n\n> Would it be simpler to do it all together, in\n> compile_relation_list_one_db()? Were you concerned about things\n> changing when parallel workers are run? Or something else?\n\nYeah, I was contemplating things changing by the time the parallel workers run the command. I don't know how important that is.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 19:22:14 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 7:22 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I agree that it is unlikely to make much difference in practice.\n\n> I don't feel strongly about this. If you'd like me to remove those checks, I can do so. These are just my thoughts on the subject.\n\nOkay. I don't feel strongly about it either.\n\nI just pushed v4, with the additional minor pg_amcheck documentation\nupdates we talked about. No other changes.\n\nThanks again\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Oct 2021 14:09:48 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 13, 2021 at 2:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I just pushed v4, with the additional minor pg_amcheck documentation\n> updates we talked about. No other changes.\n\nAny idea what the problems on drongo are?\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-10-14%2001%3A27%3A19\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Oct 2021 20:48:04 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Any idea what the problems on drongo are?\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-10-14%2001%3A27%3A19\n\nIt says\n\n# pg_ctl start failed; logfile:\n2021-10-14 02:10:33.996 UTC [491848:1] LOG: starting PostgreSQL 14.0, compiled by Visual C++ build 1923, 64-bit\n2021-10-14 02:10:33.999 UTC [491848:2] LOG: could not bind IPv4 address \"127.0.0.1\": Only one usage of each socket address (protocol/network address/port) is normally permitted.\n2021-10-14 02:10:33.999 UTC [491848:3] HINT: Is another postmaster already running on port 54407? If not, wait a few seconds and retry.\n2021-10-14 02:10:33.999 UTC [491848:4] WARNING: could not create listen socket for \"127.0.0.1\"\n2021-10-14 02:10:33.999 UTC [491848:5] FATAL: could not create any TCP/IP sockets\n2021-10-14 02:10:34.000 UTC [491848:6] LOG: database system is shut down\nBail out! pg_ctl start failed\n\nLooks like a transient/phase of the moon issue to me.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 00:15:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Wed, Oct 13, 2021 at 9:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Looks like a transient/phase of the moon issue to me.\n\nYeah, I noticed that drongo is prone to them, though only after I hit send.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 13 Oct 2021 21:18:30 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\nOn 10/14/21 12:15 AM, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> Any idea what the problems on drongo are?\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2021-10-14%2001%3A27%3A19\n> It says\n>\n> # pg_ctl start failed; logfile:\n> 2021-10-14 02:10:33.996 UTC [491848:1] LOG: starting PostgreSQL 14.0, compiled by Visual C++ build 1923, 64-bit\n> 2021-10-14 02:10:33.999 UTC [491848:2] LOG: could not bind IPv4 address \"127.0.0.1\": Only one usage of each socket address (protocol/network address/port) is normally permitted.\n> 2021-10-14 02:10:33.999 UTC [491848:3] HINT: Is another postmaster already running on port 54407? If not, wait a few seconds and retry.\n> 2021-10-14 02:10:33.999 UTC [491848:4] WARNING: could not create listen socket for \"127.0.0.1\"\n> 2021-10-14 02:10:33.999 UTC [491848:5] FATAL: could not create any TCP/IP sockets\n> 2021-10-14 02:10:34.000 UTC [491848:6] LOG: database system is shut down\n> Bail out! pg_ctl start failed\n>\n> Looks like a transient/phase of the moon issue to me.\n>\n> \t\t\t\n\n\n\nBowerbird is having similar issues, so I don't think this is just a\ntransient.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 16:50:51 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 14, 2021, at 1:50 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> Bowerbird is having similar issues, so I don't think this is just a\n> transient.\n\nThe pg_amcheck patch Peter committed for me adds a new test, src/bin/pg_amcheck/t/006_bad_targets.pl, which creates two PostgresNode objects (a primary and a standby) and uses PostgresNode::background_psql(). It doesn't bother to \"finish\" the returned harness, which may be the cause of an installation hanging around long enough to be in the way when another test tries to start.\n\nAssuming this is right, the fix is just one line. Thoughts? \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:06:00 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 10/14/21 12:15 AM, Tom Lane wrote:\n>> Looks like a transient/phase of the moon issue to me.\n\n> Bowerbird is having similar issues, so I don't think this is just a\n> transient.\n\nYeah, I noticed that too today, and poked around a bit. But I don't\nsee what this test is doing differently from other tests that\nbowerbird is running successfully. It's failing while trying to crank\nup a replica using init_from_backup, which has lots of precedent.\n\nI do see that bowerbird is skipping some comparable tests due\nto using \"--skip-steps misc-check\". But it's not skipping,\neg, pg_rewind's 008_min_recovery_point.pl; and the setup steps\nin that sure look just the same. What's different?\n\n(BTW, I wondered if PostgresNode->new's own_host hack could fix this.\nBut AFAICS that's dead code, with no existing test using it. Seems\nlike we should nuke it.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 17:09:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n> The pg_amcheck patch Peter committed for me adds a new test, src/bin/pg_amcheck/t/006_bad_targets.pl, which creates two PostgresNode objects (a primary and a standby) and uses PostgresNode::background_psql(). It doesn't bother to \"finish\" the returned harness, which may be the cause of an installation hanging around long enough to be in the way when another test tries to start.\n\n(a) Isn't that just holding open one connection, not the whole instance?\n\n(b) Wouldn't finish()ing that connection cause the temp tables to be\ndropped, negating the entire point of the test?\n\nTBH, I seriously doubt this test case is worth expending buildfarm\ncycles on forevermore. I'm more than a bit tempted to just drop\nit, rather than also expending developer time figuring out why it's\nnot as portable as it looks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 17:13:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 14, 2021, at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> (a) Isn't that just holding open one connection, not the whole instance?\n\nYes.\n\n> (b) Wouldn't finish()ing that connection cause the temp tables to be\n> dropped, negating the entire point of the test?\n\nThe finish() would have to be the last line of the test.\n\n> TBH, I seriously doubt this test case is worth expending buildfarm\n> cycles on forevermore. I'm more than a bit tempted to just drop\n> it, rather than also expending developer time figuring out why it's\n> not as portable as it looks.\n\nI'm curious if the test is indicating something about the underlying test system. Only one other test in the tree uses background_psql(). I was hoping Andrew would have something to say about whether this is a bug with that function or just user error on my part.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:18:01 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 2:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH, I seriously doubt this test case is worth expending buildfarm\n> cycles on forevermore. I'm more than a bit tempted to just drop\n> it, rather than also expending developer time figuring out why it's\n> not as portable as it looks.\n\nI agree. I can go remove the whole file now, and will.\n\nMark: Any objections?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:21:47 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 14, 2021, at 2:21 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I agree. I can go remove the whole file now, and will.\n> \n> Mark: Any objections?\n\nNone of the \"pride of ownership\" type, but I would like to see something more about the limitations of background_psql(). It's the closest thing we have to being able to run things in parallel from TAP tests. There's no isolationtester equivalent, and PostgresNode doesn't allow you to fork() in tests without hacking PostgresNodes END{} block. So if we don't debug this, we never get any further towards parallel testing from perl. Or do you have a different way forward for that?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:24:33 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 2:24 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> None of the \"pride of ownership\" type, but I would like to see something more about the limitations of background_psql().\n\nI'm not sure what that means for the buildfarm. Are you suggesting\nthat we leave things as-is pending an investigation on affected BF\nanimals, or something else?\n\n> Or do you have a different way forward for that?\n\nI don't know enough about this stuff to be able to comment.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:28:10 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\n\n> On Oct 14, 2021, at 2:28 PM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> I'm not sure what that means for the buildfarm. Are you suggesting\n> that we leave things as-is pending an investigation on affected BF\n> animals, or something else?\n\nI was just waiting a couple minutes to see if Andrew wanted to jump in. Having heard nothing, I guess you can revert it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:31:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Mark Dilger <mark.dilger@enterprisedb.com> writes:\n>> On Oct 14, 2021, at 2:13 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> (b) Wouldn't finish()ing that connection cause the temp tables to be\n>> dropped, negating the entire point of the test?\n\n> The finish() would have to be the last line of the test.\n> ...\n> I'm curious if the test is indicating something about the underlying test system. Only one other test in the tree uses background_psql(). I was hoping Andrew would have something to say about whether this is a bug with that function or just user error on my part.\n\nNeither of these things could explain the problem at hand, AFAICS,\nbecause it's failing to start up the standby.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 17:39:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\nOn 10/14/21 5:09 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 10/14/21 12:15 AM, Tom Lane wrote:\n>>> Looks like a transient/phase of the moon issue to me.\n>> Bowerbird is having similar issues, so I don't think this is just a\n>> transient.\n> Yeah, I noticed that too today, and poked around a bit. But I don't\n> see what this test is doing differently from other tests that\n> bowerbird is running successfully. It's failing while trying to crank\n> up a replica using init_from_backup, which has lots of precedent.\n\n\nYes, that's been puzzling me too. I've just been staring at it again and\nnothing jumps out. But maybe we can investigate that offline if this\ntest is deemed not worth keeping.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 14 Oct 2021 17:40:21 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 2:31 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> I was just waiting a couple minutes to see if Andrew wanted to jump in. Having heard nothing, I guess you can revert it.\n\nOkay. Pushed a commit removing the test case just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 14 Oct 2021 14:52:23 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Yes, that's been puzzling me too. I've just been staring at it again and\n> nothing jumps out. But maybe we can investigate that offline if this\n> test is deemed not worth keeping.\n\nAs Mark says, it'd be interesting to know whether the use of\nbackground_psql is related, because if it is, we'd want to debug that.\n(I don't really see how it could be related, but maybe I just lack\nsufficient imagination today.)\n\nBeyond that, ISTM this is blocking all TAP testing on the Windows\nmachines, which is pretty bad to leave in place for long.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 17:52:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\nOn 10/14/21 5:52 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Yes, that's been puzzling me too. I've just been staring at it again and\n>> nothing jumps out. But maybe we can investigate that offline if this\n>> test is deemed not worth keeping.\n> As Mark says, it'd be interesting to know whether the use of\n> background_psql is related, because if it is, we'd want to debug that.\n> (I don't really see how it could be related, but maybe I just lack\n> sufficient imagination today.)\n\n\n\nYeah. I'm working on getting a cut-down reproducible failure case.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 15 Oct 2021 10:46:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
},
{
"msg_contents": "\nOn 10/15/21 10:46 AM, Andrew Dunstan wrote:\n> On 10/14/21 5:52 PM, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> Yes, that's been puzzling me too. I've just been staring at it again and\n>>> nothing jumps out. But maybe we can investigate that offline if this\n>>> test is deemed not worth keeping.\n>> As Mark says, it'd be interesting to know whether the use of\n>> background_psql is related, because if it is, we'd want to debug that.\n>> (I don't really see how it could be related, but maybe I just lack\n>> sufficient imagination today.)\n>\n>\n> Yeah. I'm working on getting a cut-down reproducible failure case.\n>\n\nI spend a good deal of time poking at this on Friday and Saturday.\n\nIt's quite clear that the use of\n\n my $h = $node->background_psql(...);\n $h->pump_nb;\n\nis the root of the problem.\n\nIf that code is commented out, or even just moved to just after the\nstandby is started and before we check that replication has caught up\n(which should meet the needs of the case where we found this), then the\nproblem goes away.\n\nIPC::Run deals with this setup in a different way on Windows, mainly\nbecause its select() only works on sockets and not other types of file\nhandles.\n\nIt does appear that TestLib::get_free_port() is not sufficiently robust,\nas it should guarantee that the port/address can be bound.\n\nI haven't got further that that, and I have other things I need to be\ndoing, but for now I think we just need to be careful wherever possible\nto try to set up servers before trying to calling start/pump.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Oct 2021 14:07:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: BUG #17212: pg_amcheck fails on checking temporary relations"
}
] |
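Andrew's last message above notes that TestLib::get_free_port() "should guarantee that the port/address can be bound." The real helper is Perl, but the fix he is describing can be sketched in Python (illustration only; `get_free_port` here is a hypothetical stand-in, not the actual TestLib implementation):

```python
import socket

def get_free_port(host="127.0.0.1"):
    """Return a TCP port that is demonstrably bindable on `host`.

    Instead of picking a port number and hoping, ask the kernel for an
    ephemeral port by binding to port 0. This proves the port/address
    pair could be bound at least once, which is the guarantee Andrew
    says the Perl helper currently lacks.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))          # port 0 = let the OS choose
        return s.getsockname()[1]  # the port the OS actually bound
```

Note the remaining race: the port can be taken by another process between this check and the server's own bind, so a robust test harness would still need to retry on startup failure.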
[
{
"msg_contents": "Hi,\n\nI am trying to improve (i.e. have any at all) test coverage of the\nExprContextCallback for a ValuePerCall SRF function handler.\n\nI'm having difficulty coming up with a query that actually doesn't\nrun the SRF to completion.\n\nThe form I've been trying looks like\n\nSELECT *\nFROM\n executeSelectToRecords('SELECT * FROM generate_series(1,1000000)')\n AS (thing int)\n LIMIT 10;\n\nbut even that query calls executeSelectToRecords for all 1000000 rows\nand then shows me ten of them, and the callback isn't tested.\n\nIs there a way to write a simple query that won't run the SRF to\ncompletion?\n\nThanks!\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sat, 2 Oct 2021 19:32:21 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Query that will not run a ValuePerCall SRF to completion?"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> I'm having difficulty coming up with a query that actually doesn't\n> run the SRF to completion.\n\n From memory, nodeFunctionscan always populates the tuplestore immediately.\nI've looked into changing that but not got it done.\n\nIf you write the function in the targetlist, ie\n\n\tselect srf(...) limit N;\n\nI think it will act more like you expect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 02 Oct 2021 19:44:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query that will not run a ValuePerCall SRF to completion?"
}
] |
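Tom's distinction — nodeFunctionscan materializes the whole result up front, while a targetlist SRF is pulled one row at a time — can be sketched with a Python generator analogy (illustration only; the real mechanism is PostgreSQL's C-level SRF protocol, and the names below are made up for this sketch):

```python
from itertools import islice

def make_srf(total, counter):
    # Stand-in for a ValuePerCall SRF: produces one row per call and
    # records how many rows were actually generated.
    def srf():
        for i in range(total):
            counter["rows"] += 1
            yield i
    return srf

# FROM-clause behaviour: nodeFunctionscan fills a tuplestore up front,
# so the function runs to completion before LIMIT trims the result.
from_counter = {"rows": 0}
rows = list(make_srf(1000, from_counter)())[:10]   # materialize, then trim

# Targetlist behaviour (Tom's suggestion): rows are pulled one at a
# time, so LIMIT stops the scan early — which is what lets the
# ExprContextCallback fire before the function is exhausted.
tlist_counter = {"rows": 0}
rows = list(islice(make_srf(1000, tlist_counter)(), 10))
```

In SQL terms: `SELECT * FROM srf(...) LIMIT 10` behaves like the first case (the SRF runs to completion), while `SELECT srf(...) LIMIT 10` behaves like the second, which is why the targetlist form exercises the shutdown callback.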
[
{
"msg_contents": "As I threatened in another thread, I've looked through all of the\noldest commitfest entries to see which ones should maybe be tossed,\non the grounds that they're unlikely to ever get committed so we\nshould stop pushing them forward to the next CF.\n\nAn important note to make here is that we don't have any explicit\nmechanism for saying \"sorry, this patch is perhaps useful but it\nseems that nobody is going to take an interest in it\". Closing\nsuch a patch as \"rejected\" seems harsh, but R-W-F isn't very\nappropriate either if the patch never got any real review.\nPerhaps we should create a new closure state?\n\nI looked at entries that are at least 10 CFs old, as indicated by\nthe handy sort field. That's a pretty small population: 16 items\nout of the 317 listed in the 2021-09 CF. A quick look in recent\nCFs shows that it's very rare that we commit entries older than\n10 CFs.\n\nHere's what I found, along with some commentary about each one.\n\nPatch\t\tAge in CFs\n\nProtect syscache from bloating with negative cache entries\t23\n\tLast substantive discussion 2021-01, currently passing cfbot\n\n\tIt's well known that I've never liked this patch, so I can't\n\tclaim to be unbiased. 
But what I see here is a lot of focus\n\ton specific test scenarios with little concern for the\n\tpossibility that other scenarios will be made worse.\n\tI think we need some new ideas to make progress.\n\tProposed action: RWF\n\nTransactions involving multiple postgres foreign servers\t18\n\tLast substantive discussion 2021-07, currently failing cfbot\n\n\tThis has been worked on fairly recently, but frankly I'm\n\tdubious that we want to integrate a 2PC XM into Postgres.\n\tProposed action: Reject\n\nschema variables, LET command\t18\n\tLast substantive discussion 2021-09, currently passing cfbot\n\n\tSeems to be actively worked on, but is it ever going to get\n\tcommitted?\n\nRemove self join on a unique column\t16\n\tLast substantive discussion 2021-07, currently passing cfbot\n\n\tI'm not exactly sold that this has a good planning-cost-to-\n\tusefulness ratio.\n\tProposed action: RWF\n\nIndex Skip Scan\t16\n\tLast substantive discussion 2021-05, currently passing cfbot\n\n\tSeems possibly useful, but we're not making progress.\n\nstandby recovery fails when re-replaying due to missing directory which was removed in previous replay\t13\n\tLast substantive discussion 2021-09, currently passing cfbot\n\n\tThis is a bug fix, so we shouldn't drop it.\n\nRemove page-read callback from XLogReaderState\t12\n\tLast substantive discussion 2021-04, currently failing cfbot\n\n\tNot sure what to think about this one, but given that it\n\twas pushed and later reverted, I'm suspicious of it.\n\nIncremental Materialized View Maintenance\t12\n\tLast substantive discussion 2021-09, currently passing cfbot\n\n\tSeems to be actively worked on.\n\npg_upgrade fails with non-standard ACL\t12\n\tLast substantive discussion 2021-03, currently passing cfbot\n\n\tThis is a bug fix, so we shouldn't drop it.\n\nFix up partitionwise join on how equi-join conditions between the partition keys are identified\t11\n\tLast substantive discussion 2021-07, currently passing cfbot\n\n\tThis is 
another one where I feel we need new ideas to make\n\tprogress.\n\tProposed action: RWF\n\nA hook for path-removal decision on add_path\t11\n\tLast substantive discussion 2021-03, currently passing cfbot\n\n\tI don't think this is a great idea: a hook there will be\n\tcostly, and it's very unclear how multiple extensions could\n\tinteract correctly.\n\tProposed action: Reject\n\nImplement INSERT SET syntax\t11\n\tLast substantive discussion 2020-03, currently passing cfbot\n\n\tThis one is clearly stalled. I don't think it's necessarily\n\ta bad idea, but we seem not to be very interested.\n\tProposed action: Reject for lack of interest\n\nSQL:2011 application time\t11\n\tLast substantive discussion 2021-10, currently failing cfbot\n\n\tActively worked on, and it's a big feature so long gestation\n\tisn't surprising.\n\nWITH SYSTEM VERSIONING Temporal Tables\t11\n\tLast substantive discussion 2021-09, currently failing cfbot\n\n\tActively worked on, and it's a big feature so long gestation\n\tisn't surprising.\n\npsql - add SHOW_ALL_RESULTS option\t11\n\tLast substantive discussion 2021-09, currently passing cfbot\n\n\tThis got committed and reverted once already. I have to be\n\tsuspicious of whether this is a good design.\n\nSplit StdRdOptions into HeapOptions and ToastOptions\t10\n\tLast substantive discussion 2021-06, currently failing cfbot\n\n\tI think the author has despaired of anyone else taking an\n\tinterest here. Unless somebody intends to take an interest,\n\twe should put this one out of its misery.\n\tProposed action: Reject for lack of interest\n\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Oct 2021 15:14:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Triage on old commitfest entries"
},
{
"msg_contents": "Op 03-10-2021 om 21:14 schreef Tom Lane:\n> As I threatened in another thread, I've looked through all of the\n> oldest commitfest entries to see which ones should maybe be tossed,\n> on the grounds that they're unlikely to ever get committed so we\n> should stop pushing them forward to the next CF.\n> \n> An important note to make here is that we don't have any explicit\n> mechanism for saying \"sorry, this patch is perhaps useful but it\n> seems that nobody is going to take an interest in it\". Closing\n> such a patch as \"rejected\" seems harsh, but R-W-F isn't very\n> appropriate either if the patch never got any real review.\n> Perhaps we should create a new closure state?\n> \n> I looked at entries that are at least 10 CFs old, as indicated by\n> the handy sort field. That's a pretty small population: 16 items\n> out of the 317 listed in the 2021-09 CF. A quick look in recent\n> CFs shows that it's very rare that we commit entries older than\n> 10 CFs.\n> \n> Here's what I found, along with some commentary about each one.\n> \n> Patch\t\tAge in CFs\n\nMay I add one more?\n\nSQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as \n'Age in CFs' but that obviously can't be right)\n\nAlthough I like the patch & new functionality and Andrew Dunstan has \nworked to keep it up-to-date, there seems to be very little further \ndiscussion. I makes me a little worried that the time I put in will end \nup sunk in a dead project.\n\n\nErik Rijkers\n\n> \n> Protect syscache from bloating with negative cache entries\t23\n> \tLast substantive discussion 2021-01, currently passing cfbot\n> \n> \tIt's well known that I've never liked this patch, so I can't\n> \tclaim to be unbiased. 
But what I see here is a lot of focus\n> \ton specific test scenarios with little concern for the\n> \tpossibility that other scenarios will be made worse.\n> \tI think we need some new ideas to make progress.\n> \tProposed action: RWF\n> \n> Transactions involving multiple postgres foreign servers\t18\n> \tLast substantive discussion 2021-07, currently failing cfbot\n> \n> \tThis has been worked on fairly recently, but frankly I'm\n> \tdubious that we want to integrate a 2PC XM into Postgres.\n> \tProposed action: Reject\n> \n> schema variables, LET command\t18\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tSeems to be actively worked on, but is it ever going to get\n> \tcommitted?\n> \n> Remove self join on a unique column\t16\n> \tLast substantive discussion 2021-07, currently passing cfbot\n> \n> \tI'm not exactly sold that this has a good planning-cost-to-\n> \tusefulness ratio.\n> \tProposed action: RWF\n> \n> Index Skip Scan\t16\n> \tLast substantive discussion 2021-05, currently passing cfbot\n> \n> \tSeems possibly useful, but we're not making progress.\n> \n> standby recovery fails when re-replaying due to missing directory which was removed in previous replay\t13\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tThis is a bug fix, so we shouldn't drop it.\n> \n> Remove page-read callback from XLogReaderState\t12\n> \tLast substantive discussion 2021-04, currently failing cfbot\n> \n> \tNot sure what to think about this one, but given that it\n> \twas pushed and later reverted, I'm suspicious of it.\n> \n> Incremental Materialized View Maintenance\t12\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tSeems to be actively worked on.\n> \n> pg_upgrade fails with non-standard ACL\t12\n> \tLast substantive discussion 2021-03, currently passing cfbot\n> \n> \tThis is a bug fix, so we shouldn't drop it.\n> \n> Fix up partitionwise join on how equi-join conditions between the partition 
keys are identified\t11\n> \tLast substantive discussion 2021-07, currently passing cfbot\n> \n> \tThis is another one where I feel we need new ideas to make\n> \tprogress.\n> \tProposed action: RWF\n> \n> A hook for path-removal decision on add_path\t11\n> \tLast substantive discussion 2021-03, currently passing cfbot\n> \n> \tI don't think this is a great idea: a hook there will be\n> \tcostly, and it's very unclear how multiple extensions could\n> \tinteract correctly.\n> \tProposed action: Reject\n> \n> Implement INSERT SET syntax\t11\n> \tLast substantive discussion 2020-03, currently passing cfbot\n> \n> \tThis one is clearly stalled. I don't think it's necessarily\n> \ta bad idea, but we seem not to be very interested.\n> \tProposed action: Reject for lack of interest\n> \n> SQL:2011 application time\t11\n> \tLast substantive discussion 2021-10, currently failing cfbot\n> \n> \tActively worked on, and it's a big feature so long gestation\n> \tisn't surprising.\n> \n> WITH SYSTEM VERSIONING Temporal Tables\t11\n> \tLast substantive discussion 2021-09, currently failing cfbot\n> \n> \tActively worked on, and it's a big feature so long gestation\n> \tisn't surprising.\n> \n> psql - add SHOW_ALL_RESULTS option\t11\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tThis got committed and reverted once already. I have to be\n> \tsuspicious of whether this is a good design.\n> \n> Split StdRdOptions into HeapOptions and ToastOptions\t10\n> \tLast substantive discussion 2021-06, currently failing cfbot\n> \n> \tI think the author has despaired of anyone else taking an\n> \tinterest here. Unless somebody intends to take an interest,\n> \twe should put this one out of its misery.\n> \tProposed action: Reject for lack of interest\n> \n> \n> Thoughts?\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Sun, 3 Oct 2021 21:56:10 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries - JSON_PATH"
},
{
"msg_contents": "Erik Rijkers <er@xs4all.nl> writes:\n> Op 03-10-2021 om 21:14 schreef Tom Lane:\n>> I looked at entries that are at least 10 CFs old, as indicated by\n>> the handy sort field. That's a pretty small population: 16 items\n>> out of the 317 listed in the 2021-09 CF. A quick look in recent\n>> CFs shows that it's very rare that we commit entries older than\n>> 10 CFs.\n\n> May I add one more?\n\n> SQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as \n> 'Age in CFs' but that obviously can't be right)\n\nHm. It's being actively worked on, so I wouldn't have proposed\nkilling it even if its age had been shown correctly. Unless you\nthink it has no hope of ever reaching committability?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Oct 2021 16:16:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Triage on old commitfest entries - JSON_PATH"
},
{
"msg_contents": "On Sun, Oct 3, 2021 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> An important note to make here is that we don't have any explicit\n> mechanism for saying \"sorry, this patch is perhaps useful but it\n> seems that nobody is going to take an interest in it\". Closing\n> such a patch as \"rejected\" seems harsh, but R-W-F isn't very\n> appropriate either if the patch never got any real review.\n> Perhaps we should create a new closure state?\n\nWe don't reject patches, except in very rare cases where the whole\nconcept is wildly unreasonable, or when the patch author decides to\nmark their own patch rejected. In other words, we only reject patches\nwhere the formal status of being rejected hardly matters at all. I\nhave to wonder what the point of the status of \"rejected\" really is.\nAmbiguity about what the best way forward is seems to be the thing\nthat kills patches -- it is seldom mistakes or design problems. They\ncan usually be corrected easily. Sometimes the ambiguity is very\nbroad, other times it's just one aspect of the design (e.g., the\nplanner aspects).\n\nI'd rather go in the opposite direction here: merge \"Rejected\" and\n\"Returned with Feedback\" into a single \"Patch Returned\" category\n(without adding a third category). The odds of a CF entry that gets\nmarked R-W-F eventually being committed is, in general, totally\nunclear, or seems to be. I myself have zero faith that that status\nalone predicts anything, good or bad. I think that under-specifying\nwhy a patch has been returned like this would actually be *more*\ninformative. Less experienced contributors wouldn't have to waste\ntheir time looking for some signal, when in fact there is little more\nthan noise.\n\n> Index Skip Scan 16\n> Last substantive discussion 2021-05, currently passing cfbot\n>\n> Seems possibly useful, but we're not making progress.\n\nThis feature is definitely useful. 
My pet theory is that it hasn't\nmade more progress because it requires expertise in two fairly\ndistinct areas of the system. There is a lot of B-Tree stuff here,\nwhich is clearly my thing. But I know that I personally am much less\nlikely to work on a patch that requires significant changes to the\nplanner. Maybe this is a coordination problem.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 3 Oct 2021 13:18:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sun, Oct 3, 2021 at 12:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Perhaps we should create a new closure state?\n\n> I'd rather go in the opposite direction here: merge \"Rejected\" and\n> \"Returned with Feedback\" into a single \"Patch Returned\" category\n> (without adding a third category).\n\nHm, perhaps. You're right that the classification might be slippery.\nI do feel it's useful to distinguish \"this is a bad idea overall,\nwe don't want to see follow-on patches\" from \"this needs work, please\nsend a follow-on patch when you've done the work\". But maybe more\nthought could get an idea out of the first category and into the\nsecond.\n\n>> Index Skip Scan 16\n>> Last substantive discussion 2021-05, currently passing cfbot\n>> \n>> Seems possibly useful, but we're not making progress.\n\n> This feature is definitely useful. My pet theory is that it hasn't\n> made more progress because it requires expertise in two fairly\n> distinct areas of the system. There is a lot of B-Tree stuff here,\n> which is clearly my thing. But I know that I personally am much less\n> likely to work on a patch that requires significant changes to the\n> planner. Maybe this is a coordination problem.\n\nFair. My concern here is mostly that we not just keep kicking the\ncan down the road. If we see that a patch has been hanging around\nthis long without reaching commit, we should either kill it or\nform a specific plan for how to advance it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 03 Oct 2021 16:30:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On Sun, Oct 3, 2021 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm, perhaps. You're right that the classification might be slippery.\n> I do feel it's useful to distinguish \"this is a bad idea overall,\n> we don't want to see follow-on patches\" from \"this needs work, please\n> send a follow-on patch when you've done the work\". But maybe more\n> thought could get an idea out of the first category and into the\n> second.\n\nI agree in principle, but experience suggests that there is\napproximately zero practical difference.\n\nMy whole approach is to filter aggressively. I can only speak for\nmyself, but I have to imagine that this is what most committers do, in\none way or another. I am focussed on what I can understand with a high\ndegree of confidence, that seems likely to be relatively beneficial to\nusers -- nothing more. So patch authors that receive no feedback from\nme ought to assume that that means absolutely nothing, even in areas\nwhere my input might be expected. I'm not saying that I *never*\nmentally write-off patches without saying anything, but it's rare, and\nwhen it happens it tends to be in the least interesting, most obvious\ncases -- cases where speaking up is clearly unnecessary. I would hate\nto think that less experienced patch authors are taking radio silence\nas a meaningful signal, whether it's from me or from somebody else --\nbecause it's really not like that at all.\n\nMy argument boils down to this: I think that less experienced\ncontributors are better served by a system that plainly admits this\nuncertainty. At the same time I think that old patches need to get\nbumped for the good of all patch authors collectively. We have a hard\ntime bumping patches today because we seem to feel the need to justify\nit, based on facts about the patch. The reality has always been that\nPostgres patches are rejected by default, not accepted by default. We\nshould be clear about this.\n\n> Fair. 
My concern here is mostly that we not just keep kicking the\n> can down the road. If we see that a patch has been hanging around\n> this long without reaching commit, we should either kill it or\n> form a specific plan for how to advance it.\n\nAlso fair.\n\nThe pandemic has made the kind of coordination I refer to harder in\npractice. It's the kind of thing that face to face communication\nreally helps with.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 3 Oct 2021 14:34:00 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "Hi\n\nschema variables, LET command 18\n> Last substantive discussion 2021-09, currently passing cfbot\n>\n> Seems to be actively worked on, but is it ever going to get\n> committed?\n>\n>\nThis patch was originally very dirty with a strange design - something\nbetween command and query. But on second hand, these issues are real and\nthere was a lot of work to have good performance for CALL statements and\nstill CALL statements is limited to using just simple expressions.\n\nIn January of this year I completely rewrote this feature (significant\npart). So the implementation is very new, and I hope it can be better\nincluded in Postgres concepts.\n\nThis feature is interesting mainly for RLS - it allows secure space in\nmemory, and it is available from all environments in Postgres. Second usage\ncan be emulation of package variables. Current emulations are very slow or\nrequire extensions. The schema variables (session variables) can be used\nbadly or well. I migrated one Oracle's application, where it was an hell,\nbut when you do migration, then is not too much possibility for complex\nredesign. I hope so this feature can be nice for users who need to write\nSQL scripts, because it reduce an necessary work for pushing values to\nserver side. It can be used for parametrisation of \"DO\" blocks.\n\nThe current patch is trimmed to implementation not transactional variables,\nwhat I think should be default behaviour (like any other databases do it).\nThis limit is just for reducing of necessity work with maintaining of this\npatch. I have prepared patch with support transactional behaviour too (that\ncan have nice uses cases too). 
But is hard to maintain this part of patch\nto be applicable every week, so I postponed this part of patch.\n\nRegards\n\nPavel",
"msg_date": "Mon, 4 Oct 2021 05:56:59 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "ne 3. 10. 2021 v 22:16 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Erik Rijkers <er@xs4all.nl> writes:\n> > Op 03-10-2021 om 21:14 schreef Tom Lane:\n> >> I looked at entries that are at least 10 CFs old, as indicated by\n> >> the handy sort field. That's a pretty small population: 16 items\n> >> out of the 317 listed in the 2021-09 CF. A quick look in recent\n> >> CFs shows that it's very rare that we commit entries older than\n> >> 10 CFs.\n>\n> > May I add one more?\n>\n> > SQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as\n> > 'Age in CFs' but that obviously can't be right)\n>\n> Hm. It's being actively worked on, so I wouldn't have proposed\n> killing it even if its age had been shown correctly. Unless you\n> think it has no hope of ever reaching committability?\n>\n\nThis is a pretty important feature and a nice patch.\n\nUnfortunately, it is a pretty complex patch - JSON_TABLE is a really\ncomplex function, and this patch does complete implementation. I checked\nthis patch more times, and I think it is good. There is only one problem -\nthe size (there are not any problems in code, or in behaviour) . In MySQL\nor MariaDB, there is a much more simple implementation, that covers maybe\n10% of standard. But it is available, and people can use it. Isn't it\npossible to reduce this patch to some basic functionality, and commit it\nquickly, and later commit step by step all parts.\n\nRegards\n\nPavel\n\n\n\n\n> regards, tom lane\n>\n>\n>\n\nne 3. 10. 2021 v 22:16 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Erik Rijkers <er@xs4all.nl> writes:\n> Op 03-10-2021 om 21:14 schreef Tom Lane:\n>> I looked at entries that are at least 10 CFs old, as indicated by\n>> the handy sort field. That's a pretty small population: 16 items\n>> out of the 317 listed in the 2021-09 CF. 
A quick look in recent\n>> CFs shows that it's very rare that we commit entries older than\n>> 10 CFs.\n\n> May I add one more?\n\n> SQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as \n> 'Age in CFs' but that obviously can't be right)\n\nHm. It's being actively worked on, so I wouldn't have proposed\nkilling it even if its age had been shown correctly. Unless you\nthink it has no hope of ever reaching committability?This is a pretty important feature and a nice patch. Unfortunately, it is a pretty complex patch - JSON_TABLE is a really complex function, and this patch does complete implementation. I checked this patch more times, and I think it is good. There is only one problem - the size (there are not any problems in code, or in behaviour) . In MySQL or MariaDB, there is a much more simple implementation, that covers maybe 10% of standard. But it is available, and people can use it. Isn't it possible to reduce this patch to some basic functionality, and commit it quickly, and later commit step by step all parts.RegardsPavel \n\n regards, tom lane",
"msg_date": "Mon, 4 Oct 2021 06:06:28 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries - JSON_PATH"
},
{
"msg_contents": "\nHello Tom,\n\n> As I threatened in another thread, I've looked through all of the\n> oldest commitfest entries to see which ones should maybe be tossed,\n> on the grounds that they're unlikely to ever get committed so we\n> should stop pushing them forward to the next CF.\n\n\n> psql - add SHOW_ALL_RESULTS option\t11\n> \tLast substantive discussion 2021-09, currently passing cfbot\n>\n> \tThis got committed and reverted once already. I have to be\n> \tsuspicious of whether this is a good design.\n\n> Thoughts?\n\nISTM that the main problem with this patch is that it touches a barely \ntested piece of software, aka \"psql\":-( The second problem is that the \ninitial code is fragile because it handles different modes with pretty \nintricate code.\n\nSo, on the first commit it broke a few untested things, among the many \nuntested things.\n\nThis resulted in more tests being added (sql, tap) so that the relevant \nfeatures are covered, so my point of view is that this patch is currently \na net improvement both from an engineering perspective and for features \nand enabling other features. Also, there is some interest to get it in.\n\nSo I do not think that it deserves to be dropped.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 07:10:30 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On Sun, Oct 03, 2021 at 03:14:58PM -0400, Tom Lane wrote:\n[...]\n> \n> Here's what I found, along with some commentary about each one.\n> \n> Patch\t\tAge in CFs\n> \n> Protect syscache from bloating with negative cache entries\t23\n> \tLast substantive discussion 2021-01, currently passing cfbot\n> \n> \tIt's well known that I've never liked this patch, so I can't\n> \tclaim to be unbiased. But what I see here is a lot of focus\n> \ton specific test scenarios with little concern for the\n> \tpossibility that other scenarios will be made worse.\n> \tI think we need some new ideas to make progress.\n> \tProposed action: RWF\n\nif we RwF this patch we should add the thread to the TODO entry \nit refers to \n\n> \n> Transactions involving multiple postgres foreign servers\t18\n> \tLast substantive discussion 2021-07, currently failing cfbot\n> \n> \tThis has been worked on fairly recently, but frankly I'm\n> \tdubious that we want to integrate a 2PC XM into Postgres.\n> \tProposed action: Reject\n> \n\nMasahiko has marked the patch as RwF already\n\n> schema variables, LET command\t18\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tSeems to be actively worked on, but is it ever going to get\n> \tcommitted?\n> \n\nI had already moved this to Next CF when I read this, but I found this \nsounds useful\n\n> Remove self join on a unique column\t16\n> \tLast substantive discussion 2021-07, currently passing cfbot\n> \n> \tI'm not exactly sold that this has a good planning-cost-to-\n> \tusefulness ratio.\n> \tProposed action: RWF\n> \n\nIt seems there is no proof that this will increase performance in the\nthread.\nDavid you're reviewer on this patch, what your opinion on this is?\n\n> Index Skip Scan\t16\n> \tLast substantive discussion 2021-05, currently passing cfbot\n> \n> \tSeems possibly useful, but we're not making progress.\n> \n\nPeter G mentioned this would be useful. What we need to advance this\none? 
\n\n> standby recovery fails when re-replaying due to missing directory which was removed in previous replay\t13\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tThis is a bug fix, so we shouldn't drop it.\n> \n\nMoved to Next CF\n\n> Remove page-read callback from XLogReaderState\t12\n> \tLast substantive discussion 2021-04, currently failing cfbot\n> \n> \tNot sure what to think about this one, but given that it\n> \twas pushed and later reverted, I'm suspicious of it.\n> \n\nI guess those are enough for a decision: marked as RwF\nIf this is useful a new patch would be sent.\n\n> Incremental Materialized View Maintenance\t12\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tSeems to be actively worked on.\n\nMoved to Next CF\n\n> \n> pg_upgrade fails with non-standard ACL\t12\n> \tLast substantive discussion 2021-03, currently passing cfbot\n> \n> \tThis is a bug fix, so we shouldn't drop it.\n> \n\nMoved to Next CF\n\n> Fix up partitionwise join on how equi-join conditions between the partition keys are identified\t11\n> \tLast substantive discussion 2021-07, currently passing cfbot\n> \n> \tThis is another one where I feel we need new ideas to make\n> \tprogress.\n> \tProposed action: RWF\n\nIt seems there has been no activity since last version of the patch so I\ndon't think RwF is correct. What do we need to advance on this one?\n\n> \n> A hook for path-removal decision on add_path\t11\n> \tLast substantive discussion 2021-03, currently passing cfbot\n> \n> \tI don't think this is a great idea: a hook there will be\n> \tcostly, and it's very unclear how multiple extensions could\n> \tinteract correctly.\n> \tProposed action: Reject\n> \n\nAny other comments on this one?\n\n> Implement INSERT SET syntax\t11\n> \tLast substantive discussion 2020-03, currently passing cfbot\n> \n> \tThis one is clearly stalled. 
I don't think it's necessarily\n> \ta bad idea, but we seem not to be very interested.\n> \tProposed action: Reject for lack of interest\n> \n\nAgain, no activity after last patch. \n\n> SQL:2011 application time\t11\n> \tLast substantive discussion 2021-10, currently failing cfbot\n> \n> \tActively worked on, and it's a big feature so long gestation\n> \tisn't surprising.\n> \n\nMoved to Next CF\n\n> WITH SYSTEM VERSIONING Temporal Tables\t11\n> \tLast substantive discussion 2021-09, currently failing cfbot\n> \n> \tActively worked on, and it's a big feature so long gestation\n> \tisn't surprising.\n> \n\nMoved to Next CF\n\n> psql - add SHOW_ALL_RESULTS option\t11\n> \tLast substantive discussion 2021-09, currently passing cfbot\n> \n> \tThis got committed and reverted once already. I have to be\n> \tsuspicious of whether this is a good design.\n> \n\nNo activity after last patch\n\n> Split StdRdOptions into HeapOptions and ToastOptions\t10\n> \tLast substantive discussion 2021-06, currently failing cfbot\n> \n> \tI think the author has despaired of anyone else taking an\n> \tinterest here. Unless somebody intends to take an interest,\n> \twe should put this one out of its misery.\n> \tProposed action: Reject for lack of interest\n> \n\nThe author of the patch claimed that a rebased version should happen at\nmid-august but it hasn't happened. RwF seems reasonable to me, done\nthat.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 4 Oct 2021 02:12:49 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "\nOn 10/3/21 3:56 PM, Erik Rijkers wrote:\n> Op 03-10-2021 om 21:14 schreef Tom Lane:\n>> As I threatened in another thread, I've looked through all of the\n>> oldest commitfest entries to see which ones should maybe be tossed,\n>> on the grounds that they're unlikely to ever get committed so we\n>> should stop pushing them forward to the next CF.\n>>\n>> An important note to make here is that we don't have any explicit\n>> mechanism for saying \"sorry, this patch is perhaps useful but it\n>> seems that nobody is going to take an interest in it\". Closing\n>> such a patch as \"rejected\" seems harsh, but R-W-F isn't very\n>> appropriate either if the patch never got any real review.\n>> Perhaps we should create a new closure state?\n>>\n>> I looked at entries that are at least 10 CFs old, as indicated by\n>> the handy sort field. That's a pretty small population: 16 items\n>> out of the 317 listed in the 2021-09 CF. A quick look in recent\n>> CFs shows that it's very rare that we commit entries older than\n>> 10 CFs.\n>>\n>> Here's what I found, along with some commentary about each one.\n>>\n>> Patch Age in CFs\n>\n> May I add one more?\n>\n> SQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as\n> 'Age in CFs' but that obviously can't be right)\n>\n> Although I like the patch & new functionality and Andrew Dunstan has\n> worked to keep it up-to-date, there seems to be very little further\n> discussion. I makes me a little worried that the time I put in will\n> end up sunk in a dead project.\n>\n>\n\n\nI'm working on the first piece of it, i.e. the SQL/JSON functions.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 08:19:21 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries - JSON_PATH"
},
{
"msg_contents": "On 10/3/21 16:18, Peter Geoghegan wrote:\n>> Index Skip Scan 16\n>> Last substantive discussion 2021-05, currently passing cfbot\n>>\n>> Seems possibly useful, but we're not making progress.\n> This feature is definitely useful. My pet theory is that it hasn't\n> made more progress because it requires expertise in two fairly\n> distinct areas of the system. There is a lot of B-Tree stuff here,\n> which is clearly my thing. But I know that I personally am much less\n> likely to work on a patch that requires significant changes to the\n> planner. Maybe this is a coordination problem.\n\n\nI still believe that this is an important user-visible improvement.\n\n\nHowever, there has been conflicting feedback on the necessary planner \nchanges leading to doing double work in order to figure the best way \nforward.\n\n\nDmitry and Andy are doing a good job on keeping the patches current, but \nmaybe there needs to be a firm decision from a committer on what the \nplanner changes should look like before these patches can move forward.\n\n\nSo, is RfC the best state for that ?\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 08:34:11 -0400",
"msg_from": "Jesper Pedersen <jpederse@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "Hi,\n\nOn 10/3/21 16:18, Peter Geoghegan wrote:\n>> Index Skip Scan 16\n>> Last substantive discussion 2021-05, currently passing cfbot\n>>\n>> Seems possibly useful, but we're not making progress.\n> This feature is definitely useful. My pet theory is that it hasn't\n> made more progress because it requires expertise in two fairly\n> distinct areas of the system. There is a lot of B-Tree stuff here,\n> which is clearly my thing. But I know that I personally am much less\n> likely to work on a patch that requires significant changes to the\n> planner. Maybe this is a coordination problem.\n>\n\nI still believe that this is an important user-visible improvement.\n\n\nHowever, there has been conflicting feedback on the necessary planner \nchanges leading to doing double work in order to figure the best way \nforward.\n\n\nDmitry and Andy are doing a good job on keeping the patches current, but \nmaybe there needs to be a firm decision from a committer on what the \nplanner changes should look like before these patches can move forward.\n\n\nSo, is RfC the best state for that ?\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 08:36:01 -0400",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "Op 04-10-2021 om 14:19 schreef Andrew Dunstan:\n> \n> On 10/3/21 3:56 PM, Erik Rijkers wrote:\n>> Op 03-10-2021 om 21:14 schreef Tom Lane:\n>>> As I threatened in another thread, I've looked through all of the\n>>> oldest commitfest entries to see which ones should maybe be tossed,\n>>>\n>>> Patch Age in CFs\n>>\n>> May I add one more?\n>>\n>> SQL/JSON: JSON_TABLE started 2018 (the commitfest page shows only 4 as\n>> 'Age in CFs' but that obviously can't be right)\n>>\n>> Although I like the patch & new functionality and Andrew Dunstan has\n>> worked to keep it up-to-date, there seems to be very little further\n>> discussion. I makes me a little worried that the time I put in will\n>> end up sunk in a dead project.\n>>\n>>\n> \n> \n> I'm working on the first piece of it, i.e. the SQL/JSON functions.\n> \n\nThank you. I am glad to hear that.\n\n\n\n> cheers\n> \n> \n> andrew\n> \n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n\n\n",
"msg_date": "Mon, 4 Oct 2021 16:16:43 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries - JSON_PATH"
},
{
"msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Sun, Oct 3, 2021 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Fair. My concern here is mostly that we not just keep kicking the\n> > can down the road. If we see that a patch has been hanging around\n> > this long without reaching commit, we should either kill it or\n> > form a specific plan for how to advance it.\n> \n> Also fair.\n> \n> The pandemic has made the kind of coordination I refer to harder in\n> practice. It's the kind of thing that face to face communication\n> really helps with.\n\nEntirely agree with this. Index skip scan is actually *ridiculously*\nuseful in terms of an improvement, and we need to get the right people\ntogether to work on it and get it implemented. I'd love to see this\ndone for v15, in particular. Who do we need to coordinate getting\ntogether to make it happen? I doubt that I'm alone in wanting to make\nthis happen and I'd be pretty surprised if we weren't able to bring the\nright folks together this fall to make it a reality.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 4 Oct 2021 22:29:17 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> Entirely agree with this. Index skip scan is actually *ridiculously*\n> useful in terms of an improvement, and we need to get the right people\n> together to work on it and get it implemented. I'd love to see this\n> done for v15, in particular. Who do we need to coordinate getting\n> together to make it happen?\n\nIt sounds like Peter is willing to take point on the executor end\nof things (b-tree in particular). If he can explain what a reasonable\ncost model would look like, I'm willing to see about making that happen\nin the planner.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Oct 2021 22:45:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 7:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It sounds like Peter is willing to take point on the executor end\n> of things (b-tree in particular). If he can explain what a reasonable\n> cost model would look like, I'm willing to see about making that happen\n> in the planner.\n\nI would be happy to work with you on this. It's clearly an important project.\n\nHaving you involved with the core planner aspects (as well as general\ndesign questions) significantly derisks everything. That's *very*\nvaluable to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 4 Oct 2021 21:51:56 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On 10/5/21 4:29 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Peter Geoghegan (pg@bowt.ie) wrote:\n>> On Sun, Oct 3, 2021 at 1:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Fair. My concern here is mostly that we not just keep kicking the\n>>> can down the road. If we see that a patch has been hanging around\n>>> this long without reaching commit, we should either kill it or\n>>> form a specific plan for how to advance it.\n>>\n>> Also fair.\n>>\n>> The pandemic has made the kind of coordination I refer to harder in\n>> practice. It's the kind of thing that face to face communication\n>> really helps with.\n> \n> Entirely agree with this. Index skip scan is actually *ridiculously*\n> useful in terms of an improvement, and we need to get the right people\n> together to work on it and get it implemented. I'd love to see this\n> done for v15, in particular. Who do we need to coordinate getting\n> together to make it happen? I doubt that I'm alone in wanting to make\n> this happen and I'd be pretty surprised if we weren't able to bring the\n> right folks together this fall to make it a reality.\nI don't have the skills to work on either side of this, but I would like\nto voice my support in favor of having this feature and I would be happy\nto help test it on a user level (as opposed to reviewing code).\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:57:07 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On Mon, Oct 04, 2021 at 02:12:49AM -0500, Jaime Casanova wrote:\n> On Sun, Oct 03, 2021 at 03:14:58PM -0400, Tom Lane wrote:\n> [...]\n> > \n> > Here's what I found, along with some commentary about each one.\n> > \n> > Patch\t\tAge in CFs\n> > \n> > Protect syscache from bloating with negative cache entries\t23\n> > \tLast substantive discussion 2021-01, currently passing cfbot\n> > \n> > \tIt's well known that I've never liked this patch, so I can't\n> > \tclaim to be unbiased. But what I see here is a lot of focus\n> > \ton specific test scenarios with little concern for the\n> > \tpossibility that other scenarios will be made worse.\n> > \tI think we need some new ideas to make progress.\n> > \tProposed action: RWF\n> \n> if we RwF this patch we should add the thread to the TODO entry \n> it refers to \n> \n\ndone this way\n\n> \n> > Remove self join on a unique column\t16\n> > \tLast substantive discussion 2021-07, currently passing cfbot\n> > \n> > \tI'm not exactly sold that this has a good planning-cost-to-\n> > \tusefulness ratio.\n> > \tProposed action: RWF\n> > \n> \n> It seems there is no proof that this will increase performance in the\n> thread.\n> David you're reviewer on this patch, what your opinion on this is?\n> \n\nThe last action here was a rebased patch.\nSo, I will try to follow on this one and will make some performance an\nfunctional tests. \nBased on that, I will move this to the next CF and put myself as\nreviewer.\nBut of course, I will be happy if some committer/more experienced dev\ncould look at the design/planner bits.\n\n\n> > Index Skip Scan\t16\n> > \tLast substantive discussion 2021-05, currently passing cfbot\n> > \n> > \tSeems possibly useful, but we're not making progress.\n> > \n> \n> Peter G mentioned this would be useful. What we need to advance this\n> one? 
\n> \n\nMoved to next CF based on several comments\n\n> > Fix up partitionwise join on how equi-join conditions between the partition keys are identified\t11\n> > \tLast substantive discussion 2021-07, currently passing cfbot\n> > \n> > \tThis is another one where I feel we need new ideas to make\n> > \tprogress.\n> > \tProposed action: RWF\n> \n> It seems there has been no activity since last version of the patch so I\n> don't think RwF is correct. What do we need to advance on this one?\n> \n\nOk. You're a reviewer in that patch and know the problems that we \nmere mortals are not able to understand.\n\nSo will do as you suggest, and then will write to Richard to send the\nnew version he was talking about in a new entry in the CF\n\n> > \n> > A hook for path-removal decision on add_path\t11\n> > \tLast substantive discussion 2021-03, currently passing cfbot\n> > \n> > \tI don't think this is a great idea: a hook there will be\n> > \tcostly, and it's very unclear how multiple extensions could\n> > \tinteract correctly.\n> > \tProposed action: Reject\n> > \n> \n> Any other comments on this one?\n> \n\nWill do as you suggest\n\n> > Implement INSERT SET syntax\t11\n> > \tLast substantive discussion 2020-03, currently passing cfbot\n> > \n> > \tThis one is clearly stalled. I don't think it's necessarily\n> > \ta bad idea, but we seem not to be very interested.\n> > \tProposed action: Reject for lack of interest\n> > \n> \n> Again, no activity after last patch. \n> \n\nI'm not a fan of not SQL Standard syntax but seems there were some\ninterest on this. \nAnd will follow this one as reviewer.\n\n> \n> > psql - add SHOW_ALL_RESULTS option\t11\n> > \tLast substantive discussion 2021-09, currently passing cfbot\n> > \n> > \tThis got committed and reverted once already. 
I have to be\n> > \tsuspicious of whether this is a good design.\n> > \n> \n> No activity after last patch\n> \n\nMoved to next CF \n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Tue, 5 Oct 2021 11:56:33 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 12:56 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> > > psql - add SHOW_ALL_RESULTS option 11\n> > > Last substantive discussion 2021-09, currently passing cfbot\n> > >\n> > > This got committed and reverted once already. I have to be\n> > > suspicious of whether this is a good design.\n> > >\n> >\n> > No activity after last patch\n> >\n>\n> Moved to next CF\n\nThis seems like the kind of thing we should not do. Patches without\nactivity need to be aggressively booted out of the system. Otherwise\nthey just generate a lot of noise that makes it harder to identify\npatches that should be reviewed and perhaps committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:41:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Triage on old commitfest entries"
}
]
[
{
"msg_contents": "Hi all,\n\nWhile reading the code I realized that the following comment of\nSnapBuildOnDick is obsolete:\n\n/*\n * We store current state of struct SnapBuild on disk in the following manner:\n *\n * struct SnapBuildOnDisk;\n * TransactionId * running.xcnt_space;\n * TransactionId * committed.xcnt; (*not xcnt_space*)\n *\n */\ntypedef struct SnapBuildOnDisk\n\nSince SnapBuild has no longer \"running\" struct, it should be removed.\nPlease find an attached patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 4 Oct 2021 16:53:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Remove an obsolete comment in snapbuild.c"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 1:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> While reading the code I realized that the following comment of\n> SnapBuildOnDick is obsolete:\n>\n> /*\n> * We store current state of struct SnapBuild on disk in the following manner:\n> *\n> * struct SnapBuildOnDisk;\n> * TransactionId * running.xcnt_space;\n> * TransactionId * committed.xcnt; (*not xcnt_space*)\n> *\n> */\n> typedef struct SnapBuildOnDisk\n>\n> Since SnapBuild has no longer \"running\" struct, it should be removed.\n> Please find an attached patch.\n>\n\nLGTM. I'll push this tomorrow unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 14:07:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove an obsolete comment in snapbuild.c"
}
]
[
{
"msg_contents": "Hi,\n\nIt seems like we have macro InvalidTransactionId but InvalidXid is\nused in some of the code comments.Here's a small patch that does\n$subject?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Mon, 4 Oct 2021 13:49:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "replace InvalidXid(a macro that doesn't exist) with\n InvalidTransactionId(a macro that exists) in code comments"
},
{
"msg_contents": "> On 4 Oct 2021, at 10:19, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> It seems like we have macro InvalidTransactionId but InvalidXid is\n> used in some of the code comments.Here's a small patch that does\n> $subject?\n\nWhile I doubt anyone would be confused by these, I do agree it's worth being\nconsistent and use the right terms. Pushed to master, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 10:39:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: replace InvalidXid(a macro that doesn't exist) with\n InvalidTransactionId(a macro that exists) in code comments"
}
]
[
{
"msg_contents": "Hi hackers,\n\nI realized a subtle behavior with ALTER INDEX .. RENAME. It seems like a bug to me, please see the steps below.\n\nTest 1: Rename table via RENAME .. INDEX\n\nCREATE TABLE test_table (a int);\nSELECT 'test_table'::regclass::oid;\n oid\n-------\n34470\n(1 row)\n-- rename table using ALTER INDEX ..\nALTER INDEX test_table RENAME TO test_table_2;\n\n-- see that table is rename\nSELECT 34470::regclass;\n regclass\n--------------\ntest_table_2\n(1 row)\n\n\nTest 2: Rename view via RENAME .. INDEX\nCREATE VIEW test_view AS SELECT * FROM pg_class;\nSELECT 'test_view'::regclass::oid;\n oid\n-------\n34473\n(1 row)\n\nALTER INDEX test_view RENAME TO test_view_2;\nELECT 34473::regclass;\n regclass\n-------------\ntest_view_2\n(1 row)\n\n\nIt seems like an oversight in ExecRenameStmt(), and probably applies to sequences, mat. views and foreign tables as well.\n\nI can reproduce this on both 13.2 and 14.0. Though haven’t checked earlier versions.\n\nThanks,\nOnder\n\n\n\n\n\n\n\n\n\nHi hackers,\n\nI realized a subtle behavior with ALTER INDEX .. RENAME. It seems like a bug to me, please see the steps below.\n \nTest 1: Rename table via RENAME .. INDEX\n\nCREATE TABLE test_table (a int);\nSELECT 'test_table'::regclass::oid;\n oid \n-------\n34470\n(1 row)\n-- rename table using ALTER INDEX ..\nALTER INDEX test_table RENAME TO test_table_2;\n\n-- see that table is rename\nSELECT 34470::regclass;\n regclass \n--------------\ntest_table_2\n(1 row)\n\n\nTest 2: Rename view via RENAME .. INDEX\nCREATE VIEW test_view AS SELECT * FROM pg_class;\nSELECT 'test_view'::regclass::oid;\n oid \n-------\n34473\n(1 row)\n \nALTER INDEX test_view RENAME TO test_view_2;\nELECT 34473::regclass;\n regclass \n-------------\ntest_view_2\n(1 row)\n \n\nIt seems like an oversight in ExecRenameStmt(), and probably applies to sequences, mat. views and foreign tables as well. \n\n \nI can reproduce this on both 13.2 and 14.0. 
Though haven’t checked earlier versions.\n \nThanks,\nOnder",
"msg_date": "Mon, 4 Oct 2021 10:23:23 +0000",
"msg_from": "Onder Kalaci <onderk@microsoft.com>",
"msg_from_op": true,
"msg_subject": "ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "\nI can confirm this bug in git head, and I think it should be fixed.\n\n---------------------------------------------------------------------------\n\nOn Mon, Oct 4, 2021 at 10:23:23AM +0000, Onder Kalaci wrote:\n> Hi hackers,\n> \n> I realized a subtle behavior with ALTER INDEX .. RENAME. It seems like a bug to\n> me, please see the steps below.\n> \n> \n> \n> Test 1: Rename table via RENAME .. INDEX\n> \n> CREATE TABLE test_table (a int);\n> \n> SELECT 'test_table'::regclass::oid;\n> \n> oid \n> \n> -------\n> \n> 34470\n> \n> (1 row)\n> \n> -- rename table using ALTER INDEX ..\n> \n> ALTER INDEX test_table RENAME TO test_table_2;\n> \n> \n> -- see that table is rename\n> \n> SELECT 34470::regclass;\n> \n> regclass \n> \n> --------------\n> \n> test_table_2\n> \n> (1 row)\n> \n> \n> Test 2: Rename view via RENAME .. INDEX\n> CREATE VIEW test_view AS SELECT * FROM pg_class;\n> \n> SELECT 'test_view'::regclass::oid;\n> \n> oid \n> \n> -------\n> \n> 34473\n> \n> (1 row)\n> \n> \n> \n> ALTER INDEX test_view RENAME TO test_view_2;\n> \n> ELECT 34473::regclass;\n> \n> regclass \n> \n> -------------\n> \n> test_view_2\n> \n> (1 row)\n> \n> \n> \n> \n> It seems like an oversight in ExecRenameStmt(), and probably applies to\n> sequences, mat. views and foreign tables as well. \n> \n> \n> \n> I can reproduce this on both 13.2 and 14.0. Though haven’t checked earlier\n> versions.\n> \n> \n> \n> Thanks,\n> \n> Onder\n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:51:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/6/21, 1:52 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\r\n> I can confirm this bug in git head, and I think it should be fixed.\r\n\r\nHere's a patch that ERRORs if the object type and statement type do\r\nnot match. Interestingly, some of the regression tests were relying\r\non this behavior. I considered teaching RenameRelation() how to\r\nhandle such mismatches, but we have to choose the lock level before we\r\nknow the object type, so that might be more trouble than it's worth.\r\n\r\nI'm not too happy with the error message format, but I'm not sure we\r\ncan do much better without listing all the object types or doing some\r\nmore invasive refactoring.\r\n\r\nNathan",
"msg_date": "Wed, 6 Oct 2021 22:35:39 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 10/6/21, 1:52 PM, \"Bruce Momjian\" <bruce@momjian.us> wrote:\n>> I can confirm this bug in git head, and I think it should be fixed.\n\n> Here's a patch that ERRORs if the object type and statement type do\n> not match. Interestingly, some of the regression tests were relying\n> on this behavior.\n\n... as, no doubt, are a lot of applications that this will gratuitously\nbreak. We've long had a policy that ALTER TABLE will work on relations\nthat aren't tables, so long as the requested operation is sensible.\n\nThe situation for \"ALTER some-other-relation-kind\" is a bit more\nconfused, because some cases throw errors and some don't; but I really\ndoubt that tightening things up here will earn you anything but\nbrickbats. I *definitely* don't agree with discarding the policy\nabout ALTER TABLE, especially if it's only done for RENAME.\n\nIn short: no, I do not agree that this is a bug to be fixed. Perhaps\nwe should have done things differently years ago, but it's too late to\nredefine it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Oct 2021 18:43:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/6/21, 3:44 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> \"Bossart, Nathan\" <bossartn@amazon.com> writes:\r\n>> Here's a patch that ERRORs if the object type and statement type do\r\n>> not match. Interestingly, some of the regression tests were relying\r\n>> on this behavior.\r\n>\r\n> ... as, no doubt, are a lot of applications that this will gratuitously\r\n> break. We've long had a policy that ALTER TABLE will work on relations\r\n> that aren't tables, so long as the requested operation is sensible.\r\n\r\nRight.\r\n\r\n> The situation for \"ALTER some-other-relation-kind\" is a bit more\r\n> confused, because some cases throw errors and some don't; but I really\r\n> doubt that tightening things up here will earn you anything but\r\n> brickbats. I *definitely* don't agree with discarding the policy\r\n> about ALTER TABLE, especially if it's only done for RENAME.\r\n\r\nI think we should at least consider adding this check for ALTER INDEX\r\nsince we choose a different lock level in that case.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 6 Oct 2021 22:55:49 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 2021-Oct-06, Bossart, Nathan wrote:\n\n> On 10/6/21, 3:44 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n\n> > The situation for \"ALTER some-other-relation-kind\" is a bit more\n> > confused, because some cases throw errors and some don't; but I really\n> > doubt that tightening things up here will earn you anything but\n> > brickbats. I *definitely* don't agree with discarding the policy\n> > about ALTER TABLE, especially if it's only done for RENAME.\n> \n> I think we should at least consider adding this check for ALTER INDEX\n> since we choose a different lock level in that case.\n\nI agree -- letting ALTER INDEX process relations that aren't indexes is\ndangerous, with its current coding that uses a reduced lock level. But\nmaybe erroring out is not necessary; can we instead loop, locking the\nobject with ShareUpdateExclusive first, assuming it *is* an index, and\nif it isn't then we release and restart using the stronger lock this\ntime?\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nSyntax error: function hell() needs an argument.\nPlease choose what hell you want to involve.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 20:44:16 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/6/21, 4:45 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Oct-06, Bossart, Nathan wrote:\r\n>> I think we should at least consider adding this check for ALTER INDEX\r\n>> since we choose a different lock level in that case.\r\n>\r\n> I agree -- letting ALTER INDEX process relations that aren't indexes is\r\n> dangerous, with its current coding that uses a reduced lock level. But\r\n> maybe erroring out is not necessary; can we instead loop, locking the\r\n> object with ShareUpdateExclusive first, assuming it *is* an index, and\r\n> if it isn't then we release and restart using the stronger lock this\r\n> time?\r\n\r\nGood idea. Patch attached.\r\n\r\nNathan",
"msg_date": "Thu, 7 Oct 2021 00:41:23 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 06:43:25PM -0400, Tom Lane wrote:\n> ... as, no doubt, are a lot of applications that this will gratuitously\n> break. We've long had a policy that ALTER TABLE will work on relations\n> that aren't tables, so long as the requested operation is sensible.\n\nYeah, that was my first thought after seeing this thread. There is a\nrisk in breaking something that was working previously. Perhaps it\nwas just working by accident, but that could be surprising if an\napplication relied on the existing behavior.\n--\nMichael",
"msg_date": "Thu, 7 Oct 2021 11:00:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 2021-Oct-07, Bossart, Nathan wrote:\n\n> Good idea. Patch attached.\n\nYeah, that sounds exactly what I was thinking.\n\nNow, what is the worst that can happen if we rename a table under SUE\nand somebody else is using the table concurrently? Is there any way to\ncause a backend crash or something like that? As far as I can see,\nbecause we grab a fresh catalog snapshot for each query, you can't cause\nanything worse than reading from a different table. I do lack\nimagination for creating attacks, though.\n\nSo my inclination would be to apply this to master only.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n",
"msg_date": "Mon, 18 Oct 2021 20:55:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/18/21, 4:56 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> Now, what is the worst that can happen if we rename a table under SUE\r\n> and somebody else is using the table concurrently? Is there any way to\r\n> cause a backend crash or something like that? As far as I can see,\r\n> because we grab a fresh catalog snapshot for each query, you can't cause\r\n> anything worse than reading from a different table. I do lack\r\n> imagination for creating attacks, though.\r\n\r\nThis message [0] in the thread for lowering the lock level for\r\nrenaming indexes seems to indicate that there may be some risk of\r\ncrashing.\r\n\r\nNathan\r\n\r\n[0] https://postgr.es/m/CA%2BTgmobtmFT5g-0dA%3DvEFFtogjRAuDHcYPw%2BqEdou5dZPnF%3Dpg%40mail.gmail.com\r\n\r\n",
"msg_date": "Tue, 19 Oct 2021 00:19:24 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "I was about to push this when it occurred to me that it seems a bit\npointless to release AEL in order to retry with the lighter lock; once\nwe have AEL, let's just keep it and proceed. So how about the attached?\n\nI'm now thinking that this is to back-patch all the way to 12.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 19 Oct 2021 17:36:04 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/19/21, 1:36 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> I was about to push this when it occurred to me that it seems a bit\r\n> pointless to release AEL in order to retry with the lighter lock; once\r\n> we have AEL, let's just keep it and proceed. So how about the attached?\r\n\r\nI did consider this, but I figured it might be better to keep the lock\r\nlevel consistent for a given object type no matter what the statement\r\ntype is. I don't have a strong opinion about this, though.\r\n\r\n> I'm now thinking that this is to back-patch all the way to 12.\r\n\r\n+1. The patch LGTM. I like the test additions to check the lock\r\nlevel.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 19 Oct 2021 20:44:35 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 2021-Oct-19, Bossart, Nathan wrote:\n\n> I did consider this, but I figured it might be better to keep the lock\n> level consistent for a given object type no matter what the statement\n> type is. I don't have a strong opinion about this, though.\n\nYeah, the problem is that if there is a concurrent process waiting on\nyour lock, we'll release ours and they'll grab theirs, so we'll be\nwaiting on them afterwards, which is worse.\n\nBTW I noticed that the case of partitioned indexes was wrong too. I\nfixed that, added it to the tests, and pushed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n",
"msg_date": "Tue, 19 Oct 2021 19:12:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
},
{
"msg_contents": "On 10/19/21, 3:13 PM, \"Alvaro Herrera\" <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-Oct-19, Bossart, Nathan wrote:\r\n>\r\n>> I did consider this, but I figured it might be better to keep the lock\r\n>> level consistent for a given object type no matter what the statement\r\n>> type is. I don't have a strong opinion about this, though.\r\n>\r\n> Yeah, the problem is that if there is a concurrent process waiting on\r\n> your lock, we'll release ours and they'll grab theirs, so we'll be\r\n> waiting on them afterwards, which is worse.\r\n\r\nMakes sense.\r\n\r\n> BTW I noticed that the case of partitioned indexes was wrong too. I\r\n> fixed that, added it to the tests, and pushed.\r\n\r\nAh, good catch. Thanks!\r\n\r\nNathan\r\n\r\n",
"msg_date": "Tue, 19 Oct 2021 22:25:35 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER INDEX .. RENAME allows to rename tables/views as well"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed a duplicate-word typo in a comment recently, and cooked up\nthe following ripgrep command to find some more.\n\n rg --multiline --pcre2 --type=c '(?<!struct )(?<!union )\\b((?!long\\b|endif\\b|that\\b)\\w+)\\s+(^\\s*[*#]\\s*)?\\b\\1\\b'\n\nPFA a patch with the result of that.\n\n- ilmari",
"msg_date": "Mon, 04 Oct 2021 13:56:23 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Duplicat-word typos in code comments"
},
{
"msg_contents": "> On 4 Oct 2021, at 14:56, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> I noticed a duplicate-word typo in a comments recently, and cooked up\n> the following ripgrep command to find some more.\n\nPushed to master, thanks! I avoided the reflow of the comments though to make\nit the minimal change.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 15:15:29 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Duplicat-word typos in code comments"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 4 Oct 2021, at 14:56, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n>\n>> I noticed a duplicate-word typo in a comments recently, and cooked up\n>> the following ripgrep command to find some more.\n>\n> Pushed to master, thanks!\n\nThanks!\n\n> I avoided the reflow of the comments though to make it the minimal\n> change.\n\nFair enough. I wasn't sure myself whether to do it or not.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 04 Oct 2021 14:30:53 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: Duplicat-word typos in code comments"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I avoided the reflow of the comments though to make it the minimal\n>> change.\n\n> Fair enough. I wasn't sure myself whether to do it or not.\n\nThe next pgindent run will do it anyway (except in comment blocks\nstarting in column 1).\n\nI used to think it was better to go ahead and manually reflow, if you\nuse an editor that makes that easy. That way there are fewer commits\ntouching any one line of code, which is good when trying to review\ncode history. However, now that we've got the ability to make \"git\nblame\" ignore pgindent commits, maybe it's better to leave that sort\nof mechanical cleanup to pgindent, so that the substantive patch is\neasier to review.\n\n(But I'm not sure how well the ignore-these-commits behavior actually\nworks for cases like this.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Oct 2021 09:56:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicat-word typos in code comments"
},
{
"msg_contents": "> On 4 Oct 2021, at 15:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I used to think it was better to go ahead and manually reflow, if you\n> use an editor that makes that easy. That way there are fewer commits\n> touching any one line of code, which is good when trying to review\n> code history. However, now that we've got the ability to make \"git\n> blame\" ignore pgindent commits, maybe it's better to leave that sort\n> of mechanical cleanup to pgindent, so that the substantive patch is\n> easier to review.\n\nYeah, that's precisely why I did it. Since we can skip over pgindent sweeps it\nmakes sense to try and minimize such changes to make code archaeology easier.\nThere are of course cases when the result will be such an eyesore that we'd\nprefer to have it done sooner, but in cases like these where line just got one\nword shorter it seemed an easy choice.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 21:19:50 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Duplicat-word typos in code comments"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 4 Oct 2021, at 15:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I used to think it was better to go ahead and manually reflow, if you\n>> use an editor that makes that easy. That way there are fewer commits\n>> touching any one line of code, which is good when trying to review\n>> code history. However, now that we've got the ability to make \"git\n>> blame\" ignore pgindent commits, maybe it's better to leave that sort\n>> of mechanical cleanup to pgindent, so that the substantive patch is\n>> easier to review.\n\n> Yeah, that's precisely why I did it. Since we can skip over pgindent sweeps it\n> makes sense to try and minimize such changes to make code archaeology easier.\n> There are of course cases when the result will be such an eyesore that we'd\n> prefer to have it done sooner, but in cases like these where line just got one\n> word shorter it seemed an easy choice.\n\nActually though, there's another consideration: if you leave\nnot-correctly-pgindented code laying around, it causes problems\nfor the next hacker who modifies that file and wishes to neaten\nup their own work by pgindenting it. They can either tediously\nreverse out part of the delta, or commit a patch that includes\nentirely-unrelated cosmetic changes, neither of which is\npleasant.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Oct 2021 15:54:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Duplicat-word typos in code comments"
},
{
"msg_contents": "> On 4 Oct 2021, at 21:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 4 Oct 2021, at 15:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I used to think it was better to go ahead and manually reflow, if you\n>>> use an editor that makes that easy. That way there are fewer commits\n>>> touching any one line of code, which is good when trying to review\n>>> code history. However, now that we've got the ability to make \"git\n>>> blame\" ignore pgindent commits, maybe it's better to leave that sort\n>>> of mechanical cleanup to pgindent, so that the substantive patch is\n>>> easier to review.\n> \n>> Yeah, that's precisely why I did it. Since we can skip over pgindent sweeps it\n>> makes sense to try and minimize such changes to make code archaeology easier.\n>> There are of course cases when the result will be such an eyesore that we'd\n>> prefer to have it done sooner, but in cases like these where line just got one\n>> word shorter it seemed an easy choice.\n> \n> Actually though, there's another consideration: if you leave\n> not-correctly-pgindented code laying around, it causes problems\n> for the next hacker who modifies that file and wishes to neaten\n> up their own work by pgindenting it. They can either tediously\n> reverse out part of the delta, or commit a patch that includes\n> entirely-unrelated cosmetic changes, neither of which is\n> pleasant.\n\nRight, this is mainly targeting comments where changing a word on the first\nline in an N line long comment can have the knock-on effect of changing N-1\nlines just due to reflowing. This is analogous to wrapping existing code in a\nnew block, causing a re-indentation to happen, except that for comments it can\nsometimes be Ok to leave (as in this particular case). At the end of the day,\nit's all a case-by-case basis trade-off call.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:53:12 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Duplicat-word typos in code comments"
}
] |
[
{
"msg_contents": "\nAt\n<https://www.postgresql.org/message-id/543620.1629899413%40sss.pgh.pa.us>\nTom noted:\n\n> You have to be very careful these days when applying stale patches to\n> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n> be fooled into dumping an addition into the wrong place. \n\nThis is yet another indication to me that there's probably a good case\nfor breaking func.sgml up into sections. It is by a very large margin\nthe biggest file in our document sources (the next largest is less than\nhalf the number of lines).\n\n\nthoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 4 Oct 2021 10:33:36 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "func.sgml"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Tom noted:\n>> You have to be very careful these days when applying stale patches to\n>> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n>> be fooled into dumping an addition into the wrong place. \n\n> This is yet another indication to me that there's probably a good case\n> for breaking func.sgml up into sections. It is by a very large margin\n> the biggest file in our document sources (the next largest is less than\n> half the number of lines).\n\nWhat are you envisioning ... a file per <sect1>, or something else?\n\nI'm not sure that a split-up would really fix the problem I mentioned;\nbut at least it'd reduce the scope for things to go into *completely*\nthe wrong place.\n\nI think to make things safer for \"patch\", we'd have to give up a lot\nof vertical space around function-table entries. For example,\ninstead of\n\n <row>\n <entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm>\n <primary>num_nonnulls</primary>\n </indexterm>\n <function>num_nonnulls</function> ( <literal>VARIADIC</literal> <type>\"any\"</type> )\n <returnvalue>integer</returnvalue>\n ...\n </para></entry>\n </row>\n\nmaybe\n\n <row><entry role=\"func_table_entry\"><para role=\"func_signature\">\n <indexterm><primary>num_nonnulls</primary></indexterm>\n <function>num_nonnulls</function> ( <literal>VARIADIC</literal> <type>\"any\"</type> )\n <returnvalue>integer</returnvalue>\n ...\n </para></entry></row>\n\nIn this way, there'd be something at least a little bit unique within\nthe first couple of lines of an entry, so that the standard amount of\ncontext in a diff would provide some genuine indication of where a\nnew entry is supposed to go.\n\nThe main problem with this formatting is that I'm not sure that\nanybody's editors' SGML modes would be on board with it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Oct 2021 10:52:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: func.sgml"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> At\n> <https://www.postgresql.org/message-id/543620.1629899413%40sss.pgh.pa.us>\n> Tom noted:\n>\n>> You have to be very careful these days when applying stale patches to\n>> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n>> be fooled into dumping an addition into the wrong place. \n>\n> This is yet another indication to me that there's probably a good case\n> for breaking func.sgml up into sections. It is by a very large margin\n> the biggest file in our document sources (the next largest is less than\n> half the number of lines).\n>\n> thoughts?\n\nIt would make sense to follow a similar pattern to datatype.sgml and\nbreak out the largest sections. I whipped up a quick awk script to get\nan idea of the sizes of the sections in the file:\n\n$ awk '$1 == \"<sect1\" { start = NR; name = $2 }\n $1 == \"</sect1>\" { print NR-start, name }' \\\n func.sgml | sort -rn\n3076 id=\"functions-info\">\n2506 id=\"functions-admin\">\n2463 id=\"functions-json\">\n2352 id=\"functions-matching\">\n2028 id=\"functions-datetime\">\n1672 id=\"functions-string\">\n1466 id=\"functions-math\">\n1263 id=\"functions-geometry\">\n1252 id=\"functions-xml\">\n1220 id=\"functions-aggregate\">\n1165 id=\"functions-formatting\">\n1053 id=\"functions-textsearch\">\n1049 id=\"functions-range\">\n785 id=\"functions-binarystring\">\n625 id=\"functions-comparison\">\n591 id=\"functions-net\">\n552 id=\"functions-array\">\n357 id=\"functions-bitstring\">\n350 id=\"functions-comparisons\">\n348 id=\"functions-subquery\">\n327 id=\"functions-event-triggers\">\n284 id=\"functions-conditional\">\n283 id=\"functions-window\">\n282 id=\"functions-srf\">\n181 id=\"functions-sequence\">\n145 id=\"functions-logical\">\n134 id=\"functions-trigger\">\n120 id=\"functions-enum\">\n84 id=\"functions-statistics\">\n31 id=\"functions-uuid\">\n\nTangentially, running the same on datatype.sgml indicates that the\ndatetime section might do with splitting out:\n\n$ awk '$1 == \"<sect1\" { start = NR; name = $2 }\n $1 == \"</sect1>\" { print NR-start, name }' \\\n datatype.sgml | sort -rn\n1334 id=\"datatype-datetime\">\n701 id=\"datatype-numeric\">\n374 id=\"datatype-net-types\">\n367 id=\"datatype-oid\">\n320 id=\"datatype-geometric\">\n310 id=\"datatype-pseudo\">\n295 id=\"datatype-binary\">\n256 id=\"datatype-character\">\n245 id=\"datatype-textsearch\">\n197 id=\"datatype-xml\">\n160 id=\"datatype-enum\">\n119 id=\"datatype-boolean\">\n81 id=\"datatype-money\">\n74 id=\"datatype-bit\">\n51 id=\"domains\">\n49 id=\"datatype-uuid\">\n30 id=\"datatype-pg-lsn\">\n\nThe existing split-out sections of datatype.sgml are:\n\n$ wc -l json.sgml array.sgml rowtypes.sgml rangetypes.sgml | grep -v total | sort -rn\n 1006 json.sgml\n 797 array.sgml\n 592 rangetypes.sgml\n 540 rowtypes.sgml\n\nThe names are also rather inconsistent and vague, especially \"json\" and\n\"array\". If we split the json section out of func.sgml, we might want to\nrename these datatype-foo.sgml instead of foo(types).sgml, or go the\nwhole hog and create subdirectories and move all the sections into\nseparate files in them, like with reference.sgml.\n\n- ilmari\n\n\n",
"msg_date": "Mon, 04 Oct 2021 16:06:48 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: func.sgml"
},
{
"msg_contents": ">> You have to be very careful these days when applying stale patches to\n>> func.sgml --- there's enough duplicate boilerplate that \"patch' can easily\n>> be fooled into dumping an addition into the wrong place. \n> \n> This is yet another indication to me that there's probably a good case\n> for breaking func.sgml up into sections. It is by a very large margin\n> the biggest file in our document sources (the next largest is less than\n> half the number of lines).\n\nI welcome this for a different reason. I have been involved in a\ntranslation (to Japanese) project for a long time. For this work we are\nusing Github. Translation works are submitted as pull requests. With\nlarge sgml files (not only func.sgml, but config.sgml, catalogs.sgml\nand libpq.sgml), Github's UI cannot handle them correctly. Sometimes\nthey don't show certain lines, which makes the review process\nsignificantly hard. Because of this, we have to split those large\nsgml files into small files, typically 4 to 5 segments for each large\nsgml file.\n\nSplitting those large sgml files in upstream would greatly help us\nbecause we don't need to split the large sgml files.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Tue, 05 Oct 2021 14:40:35 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: func.sgml"
}
] |
[
{
"msg_contents": "Hi,\n\nMy colleague, Alex Kozhemyakin, stumbled upon a bug in DefineRange().\nThe problem is here:\n\n @@ -1707,7 +1707,6 @@ DefineRange(ParseState *pstate, \nCreateRangeStmt *stmt)\n /* Create cast from the range type to its multirange type */\n CastCreate(typoid, multirangeOid, castFuncOid, 'e', 'f', \nDEPENDENCY_INTERNAL);\n\n - pfree(multirangeTypeName);\n pfree(multirangeArrayName);\n\n return address;\n\n\nGiven a query\n\n create type textrange1 as range(subtype=text, \nmultirange_type_name=multirange_of_text, collation=\"C\");\n\nthe string \"multirange_of_text\" in the parse tree is erroneously\npfree'd. The corrupted parse tree is then passed to event triggers.\n\nThere is another branch in DefineRange() that generates a multirange\ntype name which is fine to free.\n\nI wonder what is the proper fix. Just drop pfree() altogether or add\npstrdup() instead? I see that makeMultirangeTypeName() doesn't bother\nfreeing its buf.\n\nHere is a gdb session demonstrating the bug:\n\nBreakpoint 1, ProcessUtilitySlow (pstate=0x5652e80c7730, \npstmt=0x5652e80a6a40, queryString=0x5652e80a5790 \"create type textrange1 \nas range(subtype=text, multirange_type_name=multirange_of_text, \ncollation=\\\"C\\\");\",\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \nqc=0x7ffe835b4be0, dest=<optimized out>) at \n/pgwork/REL_14_STABLE/src/src/backend/tcop/utility.c:1621\n1621 address = DefineRange((CreateRangeStmt *) \nparsetree);\n(gdb) p *(Value *)((TypeName *)((DefElem *)((CreateRangeStmt \n*)parsetree)->params->elements[1].ptr_value)->arg)->names->elements[0].ptr_value\n$1 = {type = T_String, val = {ival = -401972176, str = 0x5652e80a6430 \n\"multirange_of_text\"}}\n(gdb) n\n1900 if (!commandCollected)\n(gdb) p *(Value *)((TypeName *)((DefElem *)((CreateRangeStmt \n*)parsetree)->params->elements[1].ptr_value)->arg)->names->elements[0].ptr_value\n$2 = {type = T_String, val = {ival = -401972176, str = 0x5652e80a6430 \n'\\177' <repeats 32 times>, \"\\020\"}}\n\n\nRegards,\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Mon, 4 Oct 2021 20:09:28 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Bug in DefineRange() with multiranges"
},
{
"msg_contents": "On 04.10.21 19:09, Sergey Shinderuk wrote:\n> I wonder what is the proper fix. Just drop pfree() altogether or add\n> pstrdup() instead? I see that makeMultirangeTypeName() doesn't bother\n> freeing its buf.\n\nI think removing the pfree()s is a correct fix.\n\n\n\n",
"msg_date": "Sun, 10 Oct 2021 19:12:30 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Bug in DefineRange() with multiranges"
},
{
"msg_contents": "On 10.10.2021 20:12, Peter Eisentraut wrote:\n> On 04.10.21 19:09, Sergey Shinderuk wrote:\n>> I wonder what is the proper fix. Just drop pfree() altogether or add\n>> pstrdup() instead? I see that makeMultirangeTypeName() doesn't bother\n>> freeing its buf.\n> \n> I think removing the pfree()s is a correct fix.\n> \n\nThanks, here is a patch.\n\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/",
"msg_date": "Tue, 12 Oct 2021 08:52:29 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Bug in DefineRange() with multiranges"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 08:52:29AM +0300, Sergey Shinderuk wrote:\n> Thanks, here is a patch.\n\nLooks fine seen from here, so I'll apply shortly. I was initially\ntempted to do pstrdup() on the object name returned by\nQualifiedNameGetCreationNamespace(), but just removing the pfree() is\nsimpler. \n\nI got to wonder about similar mistakes from the other callers of\nQualifiedNameGetCreationNamespace(), so I have double-checked but\nnothing looks wrong.\n--\nMichael",
"msg_date": "Wed, 13 Oct 2021 13:21:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Bug in DefineRange() with multiranges"
},
{
"msg_contents": "On 13.10.2021 07:21, Michael Paquier wrote:\n> Looks fine seen from here, so I'll apply shortly.\n\nThank you!\n\n-- \nSergey Shinderuk\t\thttps://postgrespro.com/\n\n\n",
"msg_date": "Wed, 13 Oct 2021 11:35:34 +0300",
"msg_from": "Sergey Shinderuk <s.shinderuk@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Bug in DefineRange() with multiranges"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's the list mentioned in ${SUBJECT}, please can the committers\nmention what they want to do with those?\n\nno committer assigned:\n\n- Bug fix for tab completion of ALTER TABLE\n This seems to have activity, the last patch is from two weeks ago.\n Any intention of committing this soon? Or should I move it to next CF?\n\n- Extending amcheck to check toast size and compression\n Last patch from may-2021, also it seems there is no activity since\n Jul-2021. Peter Geoghegan, are you planning to look at this one?\n\n- Parallel Hash Full Join\n cfbot says it's failing on freebsd on parallel group tests: join_hash,\n brin_bloom, brin_multi, create_table_like, async, misc_functions, \n collate.icu.utf8, dbsize, incremental_sort, tidscan, tsrf, tidrangescan, \n tid, sysviews, misc, alter_operator, alter_generic\n Thomas said he was working on this one, but it sounds we shouldn't\n expect this to be committed in the next days. So I will move this to\n next CF. Thomas, are you alright with that?\n\n- Simplify some RI checks to reduce SPI overhead\n Last patch is from Jul-2021, little activity since then. Peter\n Eisentraut you're marked as reviewer here, do you intend to take the\n patch as the committer?\n\n- enhancing plpgsql API for debugging and tracing\n Last patch is from aug-2021, which was the last activity on this.\n Suggestions?\n\n- Identify missing publications from publisher while create/alter subscription\n Last patch is from Aug-2021, cfbot says it cannot be applied but is\n only a .sgml the one that fails. Patch actually compiles (i tried it\n myself 3 days ago but did no further tests).\n https://www.postgresql.org/message-id/20210928021944.GA18070%40ahch-to\n Suggestions?\n\n- Minimal logical decoding on standbys (take 6)\n It seems it has activity and it's a useful improvement.\n Any one is going to take it?\n\n- Allow providing restore_command as a command line option to pg_rewind\n Last patch is from aug-2021. Comments?\n\n- global temporary table\n This has activity. And seems a good improvement. Comments?\n\n- Fix pg_rewind race condition just after promotion\n Last patch is from mar-2021. Heikki, are you going to take this one?\n\nEtsuro Fujita:\n\n- Fast COPY FROM command for the foreign tables\n Last patch was on Jun-2021, no further activity after that.\n Etsuro-san, are you going to commit this soon?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Mon, 4 Oct 2021 14:08:58 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "RfC entries in CF 2021-09"
},
{
"msg_contents": "Hi Jaime,\n\nOn Tue, Oct 5, 2021 at 4:09 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> - Fast COPY FROM command for the foreign tables\n> Last patch was on Jun-2021, no further activity after that.\n> Etsuro-san, are you going to commit this soon?\n\nUnfortunately, I didn’t have time for this in the September\ncommitfest. I’m planning on working on it in the next commitfest.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 5 Oct 2021 15:24:40 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RfC entries in CF 2021-09"
},
{
"msg_contents": "On Tue, Oct 05, 2021 at 03:24:40PM +0900, Etsuro Fujita wrote:\n> Hi Jaime,\n> \n> On Tue, Oct 5, 2021 at 4:09 AM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> > - Fast COPY FROM command for the foreign tables\n> > Last patch was on Jun-2021, no further activity after that.\n> > Etsuro-san, are you going to commit this soon?\n> \n> Unfortunately, I didn’t have time for this in the September\n> commitfest. I’m planning on working on it in the next commitfest.\n> \n\nThanks. Moving to next CF.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Tue, 5 Oct 2021 09:28:51 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: RfC entries in CF 2021-09"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 12:09 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n\n> - Extending amcheck to check toast size and compression\n> Last patch from may-2021, also it seems there is no activity since\n> Jul-2021. Peter Geoghegan, are you planning to look at this one?\n\nI didn't plan on it, no.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Oct 2021 08:04:43 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: RfC entries in CF 2021-09"
},
{
"msg_contents": "On Mon, 4 Oct 2021 at 15:09, Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n\n> - Extending amcheck to check toast size and compression\n> Last patch from may-2021, also it seems there is no activity since\n> Jul-2021. Peter Geoghegan, are you planning to look at this one?\n\nI'll look at this if nobody minds.\n\n\nOther patches I could maybe look at might be these two:\n\n> - Simplify some RI checks to reduce SPI overhead\n> Last patch is from Jul-2021, little activity since then. Peter\n> Eisentraut you're marked as reviewer here, do you intend to take the\n> patch as the committer?\n>\n> - global temporary table\n> This has activity. And seems a good improvement. Comments?\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 5 Oct 2021 11:11:22 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: RfC entries in CF 2021-09"
},
{
"msg_contents": "On Mon, Oct 04, 2021 at 02:08:58PM -0500, Jaime Casanova wrote:\n> Hi,\n> \n> Here's the list mentioned in ${SUBJECT}, please can the committers\n> mention what they want to do with those?\n> \n\nTo move forward I have moved all RfC entries to next CF \n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:40:47 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: RfC entries in CF 2021-09"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 11:28 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> On Tue, Oct 05, 2021 at 03:24:40PM +0900, Etsuro Fujita wrote:\n> > On Tue, Oct 5, 2021 at 4:09 AM Jaime Casanova\n> > <jcasanov@systemguards.com.ec> wrote:\n> > > - Fast COPY FROM command for the foreign tables\n> > > Last patch was on Jun-2021, no further activity after that.\n> > > Etsuro-san, are you going to commit this soon?\n> >\n> > I’m planning on working on it in the next commitfest.\n>\n> Thanks. Moving to next CF.\n\nOk, thanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Wed, 6 Oct 2021 17:30:54 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: RfC entries in CF 2021-09"
}
]
[
{
"msg_contents": "Hi,\n\nFor https://postgr.es/m/20211001222752.wrz7erzh4cajvgp6@alap3.anarazel.de I\nwas trying to enable plperl on windows. And run into quite a few roadblocks -\nenough that I gave up.\n\n1) plperl doesn't build against a modern-ish perl. The fix for that seems easy\n enough: https://postgr.es/m/20200501134711.08750c5f@antares.wagner.home\n\n2) For some reason src/tools/install.pl doesn't install plperl[u].control,\n plperl[u]--1.0.sql - But apparently the buildfarm doesn't have that issue,\n because drongo successfully ran the plperl tests?\n\n3) When building against strawberry perl 5.32.1.1 I see errors when loading\n plperl\n\n4) When building against strawberry perl 5.30.3.1 I see a crash during\n execution of very simple statements [1]\n\n5) Finally when building against strawberry perl 5.28.2.1, plperl kinda\n works. But there's a lot of regression test failures, many of them\n seemingly around error trapping.\n\n\nI saw that there's also active state perl, but it seems to require clicking\nthrough some terms and conditions for every download that I don't want to\nagree to.\n\nGreetings,\n\nAndres Freund\n\n[1]\nException thrown at 0x000000006FD75DB8 (perl530.dll) in postgres.exe: 0xC0000005: Access violation reading location 0x0000000000000008.\n \tperl530.dll!Perl_mg_get() + 56 bytes\tUnknown\n\tplperl.dll!select_perl_context(bool trusted) Line 667\tC\n \tplperl.dll!plperl_inline_handler(FunctionCallInfoBaseData * fcinfo) Line 1941\tC\n \tplperl.dll!plperlu_inline_handler(FunctionCallInfoBaseData * fcinfo) Line 2064\tC\n \tpostgres.exe!FunctionCall1Coll(FmgrInfo * flinfo, unsigned int collation, unsigned __int64 arg1) Line 1138\tC\n \tpostgres.exe!OidFunctionCall1Coll(unsigned int functionId, unsigned int collation, unsigned __int64 arg1) Line 1417\tC\n \tpostgres.exe!ExecuteDoStmt(ParseState * pstate, DoStmt * stmt, bool atomic) Line 2146\tC\n \tpostgres.exe!standard_ProcessUtility(PlannedStmt * pstmt, const char * queryString, bool 
readOnlyTree, ProcessUtilityContext context, ParamListInfoData * params, QueryEnvironment * queryEnv, _DestReceiver * dest, QueryCompletion * qc) Line 712\tC\n \tpostgres.exe!ProcessUtility(PlannedStmt * pstmt, const char * queryString, bool readOnlyTree, ProcessUtilityContext context, ParamListInfoData * params, QueryEnvironment * queryEnv, _DestReceiver * dest, QueryCompletion * qc) Line 530\tC\n \tpostgres.exe!PortalRunUtility(PortalData * portal, PlannedStmt * pstmt, bool isTopLevel, bool setHoldSnapshot, _DestReceiver * dest, QueryCompletion * qc) Line 1157\tC\n \tpostgres.exe!PortalRunMulti(PortalData * portal, bool isTopLevel, bool setHoldSnapshot, _DestReceiver * dest, _DestReceiver * altdest, QueryCompletion * qc) Line 1306\tC\n \tpostgres.exe!PortalRun(PortalData * portal, long count, bool isTopLevel, bool run_once, _DestReceiver * dest, _DestReceiver * altdest, QueryCompletion * qc) Line 790\tC\n \tpostgres.exe!exec_simple_query(const char * query_string) Line 1222\tC\n \tpostgres.exe!PostgresMain(const char * dbname, const char * username) Line 4499\tC\n \tpostgres.exe!BackendRun(Port * port) Line 4561\tC\n \tpostgres.exe!SubPostmasterMain(int argc, char * * argv) Line 5066\tC\n \tpostgres.exe!main(int argc, char * * argv) Line 190\tC\n \tpostgres.exe!invoke_main() Line 79\tC++\n \tpostgres.exe!__scrt_common_main_seh() Line 288\tC++\n \tpostgres.exe!__scrt_common_main() Line 331\tC++\n \tpostgres.exe!mainCRTStartup(void * __formal) Line 17\tC++\n \tkernel32.dll!BaseThreadInitThunk()\tUnknown\n \tntdll.dll!RtlUserThreadStart()\tUnknown\n\n[2]\n--- C:/Users/anfreund/src/postgres/src/pl/plperl/expected/plperl.out 2021-03-02 00:29:34.416742000 -0800\n+++ C:/Users/anfreund/src/postgres/src/pl/plperl/results/plperl.out 2021-10-04 14:31:45.773612500 -0700\n@@ -660,8 +660,11 @@\n return $result;\n $$ LANGUAGE plperl;\n SELECT perl_spi_prepared_bad(4.35) as \"double precision\";\n-ERROR: type \"does_not_exist\" does not exist at line 2.\n-CONTEXT: PL/Perl 
function \"perl_spi_prepared_bad\"\n+ double precision\n+------------------\n+\n+(1 row)\n+\n -- Test with a row type\n CREATE OR REPLACE FUNCTION perl_spi_prepared() RETURNS INTEGER AS $$\n my $x = spi_prepare('select $1::footype AS a', 'footype');\n@@ -696,37 +699,28 @@\n NOTICE: This is a test\n -- check that restricted operations are rejected in a plperl DO block\n DO $$ system(\"/nonesuch\"); $$ LANGUAGE plperl;\n-ERROR: 'system' trapped by operation mask at line 1.\n-CONTEXT: PL/Perl anonymous code block\n...\n\n--- C:/Users/anfreund/src/postgres/src/pl/plperl/expected/plperl_plperlu.out 2021-03-02 00:29:34.425742300 -0800\n+++ C:/Users/anfreund/src/postgres/src/pl/plperl/results/plperl_plperlu.out 2021-10-04 14:31:48.065612400 -0700\n@@ -10,11 +10,17 @@\n return 1;\n $$ LANGUAGE plperlu; -- compile plperlu code\n SELECT * FROM bar(); -- throws exception normally (running plperl)\n-ERROR: syntax error at or near \"invalid\" at line 4.\n-CONTEXT: PL/Perl function \"bar\"\n+ bar\n+-----\n+\n+(1 row)\n+\n SELECT * FROM foo(); -- used to cause backend crash (after switching to plperlu)\n-ERROR: syntax error at or near \"invalid\" at line 4. at line 2.\n-CONTEXT: PL/Perl function \"foo\"\n+ foo\n+-----\n+ 1\n+(1 row)\n+\n-ERROR: Unable to load Errno.pm into plperl at line 2.\n-BEGIN failed--compilation aborted at line 2.\n+ERROR: didn't get a CODE reference from compiling function \"use_plperl\"\n CONTEXT: compilation of PL/Perl function \"use_plperl\"\n -- make sure our overloaded require op gets restored/set correctly\n select use_plperlu();\n@@ -86,6 +91,5 @@\n AS $$\n use Errno;\n $$;\n-ERROR: Unable to load Errno.pm into plperl at line 2.\n-BEGIN failed--compilation aborted at line 2.\n+ERROR: didn't get a CODE reference from compiling function \"use_plperl\"\n CONTEXT: compilation of PL/Perl function \"use_plperl\"\n\n\n",
"msg_date": "Mon, 4 Oct 2021 14:38:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "plperl on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-04 14:38:16 -0700, Andres Freund wrote:\n> 2) For some reason src/tools/install.pl doesn't install plperl[u].control,\n> plperl[u]--1.0.sql - But apparently the buildfarm doesn't have that issue,\n> because drongo successfully ran the plperl tests?\n\nOh, figured that one out: Install.pm checks the current directory for\nconfig.pl - but my invocation was from the source tree root (which is\nsupported for most things). Because of that it skipped installing plperl, as\nit though it wasn't installed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Oct 2021 15:02:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-04 14:38:16 -0700, Andres Freund wrote:\n> 3) When building against strawberry perl 5.32.1.1 I see errors when loading\n> plperl\n>\n> 4) When building against strawberry perl 5.30.3.1 I see a crash during\n> execution of very simple statements [1]\n>\n> 5) Finally when building against strawberry perl 5.28.2.1, plperl kinda\n> works. But there's a lot of regression test failures, many of them\n> seemingly around error trapping.\n\nHere's a CI run testing various strawberry perl versions on windows. I did\napply Victor's patch to make things at least compile on newer versions of perl.\n\nhttps://cirrus-ci.com/build/6290387791773696\n- 5.32.1.1: fails with \"src/pl/plperl/Util.c: loadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\"\n- 5.30.3.1: crashes in plperl_trusted_init(), see \"cat_dumps\" step for backtrace\n- 5.28.2.1: doesn't crash, but lots of things don't seem to work, particularly\n around error handling (to see regression diff, click on regress_diffs near\n the top, and navigate to src/pl/plperl)\n- 5.24.4.1 and 5.26.3.1: pass\n\nThe 5.32.1.1 issue looks like it might actually a problem in strawberry perl\nperhaps? But the rest not so much.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 4 Oct 2021 17:43:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-04 14:38:16 -0700, Andres Freund wrote:\n> 3) When building against strawberry perl 5.32.1.1 I see errors when loading\n> plperl\n\nThe error is:\nloadable library and perl binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\n\nA bunch of research led me to believe this is because the struct sizes of\nPerlInterpreter differ between perl being compiled and us embedding\nperl.\n\nAfter a lot of headscratching [1], I got a struct layout of both a gcc compiled\n(just a test.c including the relevant headers) and and the msvc compiled\nplperl.dll. And indeed they differ:\n\nmsvc:\n\n +0x42d Iin_utf8_COLLATE_locale : Bool\n +0x42e Iin_utf8_turkic_locale : Bool\n +0x42f Ilocale_utf8ness : [256] Char\n +0x530 Iwarn_locale : Ptr64 sv\n +0x538 Icolors : [6] Ptr64 Char\n +0x568 Ipeepp : Ptr64 void\n..\n +0x1278 IPrivate_Use : Ptr64 sv\n\ngcc:\n/* 0x042d | 0x0001 */ _Bool Iin_utf8_COLLATE_locale;\n/* 0x042e | 0x0001 */ _Bool Iin_utf8_turkic_locale;\n/* 0x0430 | 0x0004 */ int Ilc_numeric_mutex_depth;\n/* 0x0434 | 0x0100 */ char Ilocale_utf8ness[256];\n/* 0x0538 | 0x0008 */ SV *Iwarn_locale;\n/* 0x0540 | 0x0030 */ char *Icolors[6];\n/* 0x0570 | 0x0008 */ peep_t Ipeepp;\n...\n/* 0x1280 | 0x0008 */ SV *IPrivate_Use;\n\nThe gcc version has a Ilc_numeric_mutex_depth that the msvc version\ndoesn't. The relevant part of intrpvar.h:\n\nPERLVAR(I, in_utf8_turkic_locale, bool)\n#if defined(USE_ITHREADS) && ! defined(USE_THREAD_SAFE_LOCALE)\nPERLVARI(I, lc_numeric_mutex_depth, int, 0) /* Emulate general semaphore */\n#endif\nPERLVARA(I, locale_utf8ness, 256, char)\n\nThis conditional piece didn't yet exist in 5.26.n. Which is why that's the\nlast version that actually works.\n\nUSE_ITHREADS is defined in perls' config.h, but USE_THREAD_SAFE_LOCALE is\nderived from some other stuff. 
So that's the culprit.\n\n\nI gotta do something else for a bit, so I'll stop here for now.\n\n\nThe error message about mismatched lib / perl binary could really use a bit\nmore detail. It's pretty darn annoying to figure out right now what it could\nmean.\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] On linux I'd just use pahole to display struct layouts, but on\nwindows... Neither of the windows perl installations comes with debug symbols,\nafaict.\nFor the gcc definition:\n I compiled a test.c with msys ucrt64, including -g3, set a breakpoint on main,\n dumped the struct with \"ptype /ox my_interp\"\nFor the msvc definition:\n connected cdb.exe to a live backend, did a CREATE EXTENSION plperl, cdb\n stopped at exit, then I could dump the type with \"dt plperl!PerlInterpreter\"\n\nI'm sure there's a better way. And of course I'm mainly including this for a\nfuture self that might remember needing to something like this before...\n\n\n",
"msg_date": "Sun, 30 Jan 2022 12:56:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-30 12:56:16 -0800, Andres Freund wrote:\n> The gcc version has a Ilc_numeric_mutex_depth that the msvc version\n> doesn't. The relevant part of intrpvar.h:\n> \n> PERLVAR(I, in_utf8_turkic_locale, bool)\n> #if defined(USE_ITHREADS) && ! defined(USE_THREAD_SAFE_LOCALE)\n> PERLVARI(I, lc_numeric_mutex_depth, int, 0) /* Emulate general semaphore */\n> #endif\n> PERLVARA(I, locale_utf8ness, 256, char)\n> \n> This conditional piece didn't yet exist in 5.26.n. Which is why that's the\n> last version that actually works.\n> \n> USE_ITHREADS is defined in perls' config.h, but USE_THREAD_SAFE_LOCALE is\n> derived from some other stuff. So that's the culprit.\n> \n> \n> I gotta do something else for a bit, so I'll stop here for now.\n\nThe difference originates in this bit in plperl.h:\n\n/* XXX The next few defines are unfortunately duplicated in makedef.pl, and\n * changes here MUST also be made there */\n\n# if ! defined(HAS_SETLOCALE) && defined(HAS_POSIX_2008_LOCALE)\n# define USE_POSIX_2008_LOCALE\n# ifndef USE_THREAD_SAFE_LOCALE\n# define USE_THREAD_SAFE_LOCALE\n# endif\n /* If compiled with\n * -DUSE_THREAD_SAFE_LOCALE, will do so even\n * on unthreaded builds */\n# elif (defined(USE_ITHREADS) || defined(USE_THREAD_SAFE_LOCALE)) \\\n && ( defined(HAS_POSIX_2008_LOCALE) \\\n || (defined(WIN32) && defined(_MSC_VER) && _MSC_VER >= 1400)) \\\n && ! defined(NO_THREAD_SAFE_LOCALE)\n# ifndef USE_THREAD_SAFE_LOCALE\n# define USE_THREAD_SAFE_LOCALE\n# endif\n# ifdef HAS_POSIX_2008_LOCALE\n# define USE_POSIX_2008_LOCALE\n# endif\n# endif\n#endif\n\nSpecifically where USE_THREAD_SAFE_LOCALE is defined for msvc. 
Which explains\nwhy the same perl build ends up with different definitions for\nPerlInterpreter, depending on headers getting compiled with gcc or\nmsvc.\n\nSeems pretty clear that this is something that should be determined at build,\nrather than at #include time?\n\nI tested that just forcing the msvc build to behave the same using\nNO_THREAD_SAFE_LOCALE makes the tests pass. Yay. But it's obviously not a\ngreat solution - I'm not aware of a windows perl distribution that uses msvc,\nbut who knows.\n\n\n> The error message about mismatched lib / perl binary could really use a bit\n> more detail. It's pretty darn annoying to figure out right now what it could\n> mean.\n\nI wonder if we could do something to improve that on our side. This isn't the\nfirst time we've hunted down this kind of mismatch. It'd be much friendlier if\nwe could get an error at build time, rather than runtime.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 Jan 2022 14:16:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "On Sun, Jan 30, 2022 at 02:16:59PM -0800, Andres Freund wrote:\n> Specifically where USE_THREAD_SAFE_LOCALE is defined for msvc. Which explains\n> why the same perl build ends up with different definitions for\n> PerlInterpreter, depending on headers getting compiled with gcc or\n> msvc.\n> \n> Seems pretty clear that this is something that should be determined at build,\n> rather than at #include time?\n\nAgreed.\n\n> I tested that just forcing the msvc build to behave the same using\n> NO_THREAD_SAFE_LOCALE makes the tests pass. Yay. But it's obviously not a\n> great solution - I'm not aware of a windows perl distribution that uses msvc,\n> but who knows.\n\nLast I looked (~2017), EDB distributed an MSVC-built Perl as the designated\nPerl to use with https://www.postgresql.org/download/windows/ plperl.\n\n> > The error message about mismatched lib / perl binary could really use a bit\n> > more detail. It's pretty darn annoying to figure out right now what it could\n> > mean.\n> \n> I wonder if we could do something to improve that on our side. This isn't the\n> first time we've hunted down this kind of mismatch. It'd be much friendlier if\n> we could get an error at build time, rather than runtime.\n\nThe MSVC build system does give a build-time error (\"Perl test fails with or\nwithout ...\") for a Perl ABI mismatch. It would be a simple matter of\nprogramming to have the configure+gmake build system do the same.\n\n\n",
"msg_date": "Sun, 30 Jan 2022 15:14:32 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-30 15:14:32 -0800, Noah Misch wrote:\n> Last I looked (~2017), EDB distributed an MSVC-built Perl as the designated\n> Perl to use with https://www.postgresql.org/download/windows/ plperl.\n\nAh, interesting. I didn't find a perl binary in the archive offered, and I\ndidn't immediately figure out how to extract the files from the installer, so\nI didn't check further.\n\n\n> > > The error message about mismatched lib / perl binary could really use a bit\n> > > more detail. It's pretty darn annoying to figure out right now what it could\n> > > mean.\n> > \n> > I wonder if we could do something to improve that on our side. This isn't the\n> > first time we've hunted down this kind of mismatch. It'd be much friendlier if\n> > we could get an error at build time, rather than runtime.\n> \n> The MSVC build system does give a build-time error (\"Perl test fails with or\n> without ...\") for a Perl ABI mismatch.\n\nHm? I encountered this on an msvc build, building against strawberry perl (and\nthen also against msys ucrt perl, I was trying to exclude a problem in\nstrawberry perl). So perl is gcc built and postgres with msvc. It fails when\ncreating the plperl extension, with\n loadable library and binaries are mismatched (got handshake key 0000000012800080, needed 0000000012900080)\nbut not at build time.\n\nAh, I see. The problem is that the test is only done for 32bit perl. I guess\nthis stuff would need to be extracted in a helper function, so we can use it\nfor different defines without a lot of repetition.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 Jan 2022 15:34:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "\nOn 10/4/21 18:02, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-04 14:38:16 -0700, Andres Freund wrote:\n>> 2) For some reason src/tools/install.pl doesn't install plperl[u].control,\n>> plperl[u]--1.0.sql - But apparently the buildfarm doesn't have that issue,\n>> because drongo successfully ran the plperl tests?\n> Oh, figured that one out: Install.pm checks the current directory for\n> config.pl - but my invocation was from the source tree root (which is\n> supported for most things). Because of that it skipped installing plperl, as\n> it though it wasn't installed.\n\n\nWe should fix that, maybe along these lines?\n\n\niff --git a/src/tools/msvc/Install.pm b/src/tools/msvc/Install.pm\nindex 8de79c618c..75e91f73b3 100644\n--- a/src/tools/msvc/Install.pm\n+++ b/src/tools/msvc/Install.pm\n@@ -59,6 +59,8 @@ sub Install\n our $config = shift;\n unless ($config)\n {\n+ # we expect config.pl and config_default.pl to be here\n+ chdir 'src/tools/msvc' if -d 'src/tools/msvc';\n \n # suppress warning about harmless redeclaration of $config\n no warnings 'misc';\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 31 Jan 2022 10:43:31 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: plperl on windows"
},
{
"msg_contents": "On 2022-01-31 10:43:31 -0500, Andrew Dunstan wrote:\n> We should fix that, maybe along these lines?\n\nWFM.\n\n\n",
"msg_date": "Mon, 31 Jan 2022 10:02:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: plperl on windows"
}
]
[
{
"msg_contents": "Hi hackers,\n\nAndres' recent post¹ about PL/Perl on Windows and linked-to² patch\ncontaining an erroneous version check made me realise that we haven't\nupdated our copy of ppport.h since 2009. Attached is a patch that does\nthat, and applies code changes suggested by running it. I've tested\n`make check-world` with `--with-perl` on both the oldest (5.8.9) and\nnewest (5.34.0) perls I have handy.\n\nI also noticed that PL/Perl itself (via plc_perlboot.pl) requires Perl\n5.8.1, but configure only checks for 5.8 (i.e. 5.8.0). The second patch\nupdates the latter to match.\n\n- ilmari\n[1] https://www.postgresql.org/message-id/20211004213816.t5zgv4ba5zfijqzc%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/20200501134711.08750c5f@antares.wagner.home",
"msg_date": "Tue, 05 Oct 2021 00:40:45 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Andres' recent post¹ about PL/Perl on Windows and linked-to² patch\n> containing an erroneous version check made me realise that we haven't\n> updated our copy of ppport.h since 2009. Attached is a patch that does\n> that, and applies code changes suggested by running it. I've tested\n> `make check-world` with `--with-perl` on both the oldest (5.8.9) and\n> newest (5.34.0) perls I have handy.\n\nI haven't looked at this patch's details, but I can confirm that it\nalso builds and passes regression on prairiedog's 5.8.3 perl.\n\n> I also noticed that PL/Perl itself (via plc_perlboot.pl) requires Perl\n> 5.8.1, but configure only checks for 5.8 (i.e. 5.8.0). The second patch\n> updates the latter to match.\n\nHmm ... Perl 5.8.x is old enough that probably it matters to nobody in\nthe real world, but if we're going to mess with this, is 5.8.1 the right\ncutoff? I wonder about this because I believe prairiedog's perl to be\nthe oldest that we have tested in a good long while, so that we shouldn't\nassert with any confidence that 5.8.1 would actually work. 
The last\ntime I surveyed the buildfarm's perl versions, in 2017, these were the\nonly 5.8.x animals:\n\n Animal | Surveyed build | Configure's version report\n castoroides | 2017-07-27 12:03:05 | configure: using perl 5.8.4\n protosciurus | 2017-07-27 13:24:42 | configure: using perl 5.8.4\n prairiedog | 2017-07-27 22:51:11 | configure: using perl 5.8.6\n aholehole | 2017-07-27 19:31:40 | configure: using perl 5.8.8\n anole | 2017-07-28 00:27:38 | configure: using perl 5.8.8\n arapaima | 2017-07-27 19:30:52 | configure: using perl 5.8.8\n gharial | 2017-07-27 20:26:16 | configure: using perl 5.8.8\n locust | 2017-07-28 00:13:01 | configure: using perl 5.8.8\n narwhal | 2017-03-17 05:00:02 | configure: using perl 5.8.8\n gaur | 2017-07-22 21:02:43 | configure: using perl 5.8.9\n pademelon | 2017-07-22 23:56:59 | configure: using perl 5.8.9\n\nNotice that here, prairiedog is running 5.8.6, which is Apple's\nvendor-installed perl on that stone-age version of macOS.\nShortly after that, I *downgraded* it to 5.8.3. I do not recall\nexactly why I chose that precise perl version, but it seems\npretty likely that the reason was \"I couldn't get anything older\nto build\".\n\nIn short: (a) we're not testing against anything older than 5.8.3\nand (b) it seems quite unlikely that anybody cares about 5.8.x anyway.\nSo if we want to mess with this, maybe we should set the cutoff\nto 5.8.3 not 5.8.1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Oct 2021 23:12:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "> On 5 Oct 2021, at 05:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> In short: (a) we're not testing against anything older than 5.8.3\n> and (b) it seems quite unlikely that anybody cares about 5.8.x anyway.\n> So if we want to mess with this, maybe we should set the cutoff\n> to 5.8.3 not 5.8.1.\n\nNot being able to test against older versions in the builfarm seems like a\npretty compelling reason to set 5.8.3 as the required version.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 09:41:08 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n>> On 5 Oct 2021, at 05:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> In short: (a) we're not testing against anything older than 5.8.3\n>> and (b) it seems quite unlikely that anybody cares about 5.8.x anyway.\n>> So if we want to mess with this, maybe we should set the cutoff\n>> to 5.8.3 not 5.8.1.\n>\n> Not being able to test against older versions in the builfarm seems like a\n> pretty compelling reason to set 5.8.3 as the required version.\n\nLooking at the list of Perl versions shipped with various OSes\n(https://www.cpan.org/ports/binaries.html), bumping the minimum\nrequirement from 5.8.1 to 5.8.3 will affect the following OS versions,\nwhich shipped 5.8.1 or 5.8.2:\n\nAIX: 5.3, 6.1\nFedora: 1 (Yarrow)\nmacOS: 10.3 (Panther)\nRedhat: 2.1\nSlackware: 9.0, 9.1\nOpenSUSE: 8.2\n\nThe only one of these that I can imagine we might possibly care about is\nAIX, but I don't know what versions we claim to support or people\nactually run PostgreSQL on (and want to upgrade to 15). The docs at\nhttps://www.postgresql.org/docs/current/installation-platform-notes.html\njust say that \"AIX versions before about 6.1 […] are not recommended\".\n\nFor reference, 6.1 was released on 2007-11-09 and EOL on 2017-04-30, and\n7.1 was released on 2010-09-10 and is supported until 2023-04-30.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 05 Oct 2021 10:59:02 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Not being able to test against older versions in the builfarm seems like a\n>> pretty compelling reason to set 5.8.3 as the required version.\n\n> Looking at the list of Perl versions shipped with various OSes\n> (https://www.cpan.org/ports/binaries.html), bumping the minimum\n> requirement from 5.8.1 to 5.8.3 will affect the following OS versions,\n> which shipped 5.8.1 or 5.8.2:\n\n> AIX: 5.3, 6.1\n> Fedora: 1 (Yarrow)\n> macOS: 10.3 (Panther)\n> Redhat: 2.1\n> Slackware: 9.0, 9.1\n> OpenSUSE: 8.2\n\n> The only one of these that I can imagine we might possibly care about is\n> AIX, but I don't know what versions we claim to support or people\n> actually run PostgreSQL on (and want to upgrade to 15).\n\nWe do have a couple of buildfarm animals on AIX 7.1, but nothing older.\nThe other systems you mention are surely dead and buried.\n\nInterestingly, although cpan's table says AIX 7.1 shipped with perl\n5.10.1, what's actually on those buildfarm animals is\n\ntgl@gcc111:[/home/tgl]which perl\n/usr/bin/perl\ntgl@gcc111:[/home/tgl]ls -l /usr/bin/perl\nlrwxrwxrwx 1 root system 29 Nov 09 2020 /usr/bin/perl -> /usr/opt/perl5/bin/perl5.28.1\n\nHard to tell if that is a local update or official IBM distribution.\n\n> For reference, 6.1 was released on 2007-11-09 and EOL on 2017-04-30, and\n> 7.1 was released on 2010-09-10 and is supported until 2023-04-30.\n\nSo 6.1 will be five years out of support by the time we release PG 15.\nI'm inclined to just update the docs to say we don't support anything\nolder than 7.1.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Oct 2021 07:54:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Interestingly, although cpan's table says AIX 7.1 shipped with perl\n> 5.10.1, what's actually on those buildfarm animals is\n>\n> tgl@gcc111:[/home/tgl]which perl\n> /usr/bin/perl\n> tgl@gcc111:[/home/tgl]ls -l /usr/bin/perl\n> lrwxrwxrwx 1 root system 29 Nov 09 2020 /usr/bin/perl -> /usr/opt/perl5/bin/perl5.28.1\n>\n> Hard to tell if that is a local update or official IBM distribution.\n\nLooks like they update the Perl version in OS updates and service packs:\nhttps://www.ibm.com/support/pages/aix-perl-updates-and-support-perlrte\n\n>> For reference, 6.1 was released on 2007-11-09 and EOL on 2017-04-30, and\n>> 7.1 was released on 2010-09-10 and is supported until 2023-04-30.\n>\n> So 6.1 will be five years out of support by the time we release PG 15.\n\nAnd PG 14 will be supported until nine years after the 6.1 EOL date.\n\n> I'm inclined to just update the docs to say we don't support anything\n> older than 7.1.\n\nI concur.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 05 Oct 2021 13:05:26 +0100",
"msg_from": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>",
"msg_from_op": true,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Hard to tell if that is a local update or official IBM distribution.\n\n> Looks like they update the Perl version in OS updates and service packs:\n> https://www.ibm.com/support/pages/aix-perl-updates-and-support-perlrte\n\nOh, interesting. So even if someone still had AIX 6.1 in the wild,\nthey'd likely have some newer-than-5.8.x Perl on it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Oct 2021 08:10:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "\nOn 10/4/21 11:12 PM, Tom Lane wrote:\n>\n> In short: (a) we're not testing against anything older than 5.8.3\n> and (b) it seems quite unlikely that anybody cares about 5.8.x anyway.\n> So if we want to mess with this, maybe we should set the cutoff\n> to 5.8.3 not 5.8.1.\n>\n> \t\n\n\nSeems OK. Note that the Msys DTK perl currawong uses to build with is\nancient (5.6.1). That's going to stay as it is until it goes completely\nout of scope in about 13 months. The perl it builds plperl against is\nmuch more modern - 5.16.3.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:17:46 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Seems OK. Note that the Msys DTK perl currawong uses to build with is\n> ancient (5.6.1). That's going to stay as it is until it goes completely\n> out of scope in about 13 months. The perl it builds plperl against is\n> much more modern - 5.16.3.\n\nThat brings up something I was intending to ask you about -- any special\ntips about running the buildfarm script with a different Perl version\nthan is used in the PG build itself? I'm trying to modernize a couple\nof my buildfarm animals to use non-stone-age SSL, but I don't really\nwant to move the goalposts on what they're testing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Oct 2021 10:30:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "\nOn 10/5/21 10:30 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Seems OK. Note that the Msys DTK perl currawong uses to build with is\n>> ancient (5.6.1). That's going to stay as it is until it goes completely\n>> out of scope in about 13 months. The perl it builds plperl against is\n>> much more modern - 5.16.3.\n> That brings up something I was intending to ask you about -- any special\n> tips about running the buildfarm script with a different Perl version\n> than is used in the PG build itself? I'm trying to modernize a couple\n> of my buildfarm animals to use non-stone-age SSL, but I don't really\n> want to move the goalposts on what they're testing.\n>\n> \t\n\n\nMostly if you set the perl you're building against in the path ahead of\nthe perl you running with things just work. A notable exception is TAP\ntests, where you have to set PROVE in the config_env to point to the\nprove script you're going to use.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 18:58:25 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "AFAICS we have consensus on doing these things (in HEAD only):\n\n* update ppport.h to perl 5.34.0\n\n* adjust configure and docs to set 5.8.3 as the minimum perl version\n\n* adjust docs to say we don't test or support AIX below 7.1.\n\nI'll go make these things happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Oct 2021 12:23:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
},
{
"msg_contents": "I wrote:\n> * adjust configure and docs to set 5.8.3 as the minimum perl version\n\nWhen I went to update the docs, I found they already said 5.8.3\nis the minimum. Excavating in the git log led me to this old\ndiscussion:\n\nhttps://www.postgresql.org/message-id/flat/16894.1501392088%40sss.pgh.pa.us#2c7641fa2459e84049301f185d74d429\n\nSo it was intentional at the time to leave configure's check\nas 5.8.0. However, given that the functionality available is\nless than you'd expect, and that we've not tested any such\nconfiguration in several years, I still concur with adjusting\nconfigure to require 5.8.3. Pushed it that way just now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 07 Oct 2021 14:32:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plperl: update ppport.h and fix configure version check"
}
] |
[
{
"msg_contents": "Greetings,\n\nThere's been various discussions about CREATEROLE, EVENT TRIGGERs, and\nother things which hinge around the general idea that we can create a\n'tree' of roles where there's some root and then from that root there's\na set of created roles, or at least roles which have been GRANT'd other\nroles as part of an explicit arrangement.\n\nThe issue with many of these suggestions is that roles, currently, are\nable to 'administer' themselves. That means that such role memberships\naren't suitable for such controls. \n\nTo wit, this happens:\n\nSuperuser:\n\n=# create user u1;\nCREATE ROLE\n=# create user u2;\nCREATE ROLE\n=# grant u2 to u1;\nGRANT ROLE\n\n...\n\nLog in as u2:\n\n=> revoke u2 from u1;\nREVOKE ROLE\n\n...\n\nThis is because we allow 'self administration' of roles, meaning that\nthey can decide what other roles they are a member of. This is\ndocumented as:\n\n\"A role is not considered to hold WITH ADMIN OPTION on itself, but it\nmay grant or revoke membership in itself from a database session where\nthe session user matches the role.\"\n\nat: https://www.postgresql.org/docs/current/sql-grant.html\n\nFurther, we comment this in the code:\n\n * A role can admin itself when it matches the session user and we're\n * outside any security-restricted operation, SECURITY DEFINER or\n * similar context. SQL-standard roles cannot self-admin. However,\n * SQL-standard users are distinct from roles, and they are not\n * grantable like roles: PostgreSQL's role-user duality extends the\n * standard. Checking for a session user match has the effect of\n * letting a role self-admin only when it's conspicuously behaving\n * like a user. 
Note that allowing self-admin under a mere SET ROLE\n * would make WITH ADMIN OPTION largely irrelevant; any member could\n * SET ROLE to issue the otherwise-forbidden command.\n\nin src/backend/utils/adt/acl.c\n\nHere's the thing - having looked back through the standard, it seems\nwe're missing a bit that's included there and that makes a heap of\ndifference. Specifically, the SQL standard basically says that to\nrevoke a privilege, you need to have been able to grant that privilege\nin the first place (as Andrew Dunstan actually also brought up in a\nrecent thread about related CREATEROLE things- \nhttps://www.postgresql.org/message-id/837cc50a-532a-85f5-a231-9d68f2184e52%40dunslane.net\n) and that isn't something we've been considering when it comes to role\n'self administration' thus far, at least as it relates to the particular\nfield of the \"grantor\".\n\nWe can't possibly make things like EVENT TRIGGERs or CREATEROLE work\nwith role trees if a given role can basically just 'opt out' of being\npart of the tree to which they were assigned by the user who created\nthem. Therefore, I suggest we contemplate two changes in this area:\n\n- Allow a user who is able to create roles decide if the role created is\n able to 'self administor' (that is- GRANT their own role to someone\n else) itself.\n\n- Disallow roles from being able to REVOKE role membership that they\n didn't GRANT in the first place.\n\nThis isn't as big a change as it might seem as we already track which\nrole issued a given GRANT. We should probably do a more thorough review\nto see if there's other cases where a given role is able to REVOKE\nrights that have been GRANT'd by some other role on a particular object,\nas it seems like we should probably be consistent in this regard across\neverything and not just for roles. 
That might be a bit of a pain but it\nseems likely to be worth it in the long run and feels like it'd bring us\nmore in-line with the SQL standard too.\n\nSo, thoughts?\n\nThanks!\n\nStephen",
"msg_date": "Mon, 4 Oct 2021 22:57:46 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Role Self-Administration"
},
{
"msg_contents": "On Mon, Oct 04, 2021 at 10:57:46PM -0400, Stephen Frost wrote:\n> \"A role is not considered to hold WITH ADMIN OPTION on itself, but it\n> may grant or revoke membership in itself from a database session where\n> the session user matches the role.\"\n\n> Here's the thing - having looked back through the standard, it seems\n> we're missing a bit that's included there and that makes a heap of\n> difference. Specifically, the SQL standard basically says that to\n> revoke a privilege, you need to have been able to grant that privilege\n> in the first place (as Andrew Dunstan actually also brought up in a\n> recent thread about related CREATEROLE things- \n> https://www.postgresql.org/message-id/837cc50a-532a-85f5-a231-9d68f2184e52%40dunslane.net\n> ) and that isn't something we've been considering when it comes to role\n> 'self administration' thus far, at least as it relates to the particular\n> field of the \"grantor\".\n\nWhich SQL standard clauses are you paraphrasing? (A reference could take the\nform of a spec version number, section number, and rule number. Alternately,\na page number and URL to a PDF would suffice.)\n\n> We can't possibly make things like EVENT TRIGGERs or CREATEROLE work\n> with role trees if a given role can basically just 'opt out' of being\n> part of the tree to which they were assigned by the user who created\n> them. Therefore, I suggest we contemplate two changes in this area:\n\nI suspect we'll regret using the GRANT system to modify behaviors other than\nwhether or not one gets \"permission denied\". Hence, -1 on using role\nmembership to control event trigger firing, whether or not $SUBJECT changes.\n\n> - Allow a user who is able to create roles decide if the role created is\n> able to 'self administor' (that is- GRANT their own role to someone\n> else) itself.\n> \n> - Disallow roles from being able to REVOKE role membership that they\n> didn't GRANT in the first place.\n\nEither of those could be reasonable. 
Does the SQL standard take a position\nrelevant to the decision? A third option is to track each role's creator and\nmake is_admin_of_role() return true for the creator, whether or not the\ncreator remains a member. That would also benefit cases where the creator is\nrolinherit and wants its ambient privileges to shed the privileges of the role\nit's creating.\n\n> We should probably do a more thorough review\n> to see if there's other cases where a given role is able to REVOKE\n> rights that have been GRANT'd by some other role on a particular object,\n> as it seems like we should probably be consistent in this regard across\n> everything and not just for roles. That might be a bit of a pain but it\n> seems likely to be worth it in the long run and feels like it'd bring us\n> more in-line with the SQL standard too.\n\nDoes the SQL standard take a position on whether REVOKE SELECT should work\nthat way?\n\n\n",
"msg_date": "Mon, 4 Oct 2021 21:34:38 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Noah Misch (noah@leadboat.com) wrote:\n> On Mon, Oct 04, 2021 at 10:57:46PM -0400, Stephen Frost wrote:\n> > \"A role is not considered to hold WITH ADMIN OPTION on itself, but it\n> > may grant or revoke membership in itself from a database session where\n> > the session user matches the role.\"\n> \n> > Here's the thing - having looked back through the standard, it seems\n> > we're missing a bit that's included there and that makes a heap of\n> > difference. Specifically, the SQL standard basically says that to\n> > revoke a privilege, you need to have been able to grant that privilege\n> > in the first place (as Andrew Dunstan actually also brought up in a\n> > recent thread about related CREATEROLE things- \n> > https://www.postgresql.org/message-id/837cc50a-532a-85f5-a231-9d68f2184e52%40dunslane.net\n> > ) and that isn't something we've been considering when it comes to role\n> > 'self administration' thus far, at least as it relates to the particular\n> > field of the \"grantor\".\n> \n> Which SQL standard clauses are you paraphrasing? (A reference could take the\n> form of a spec version number, section number, and rule number. Alternately,\n> a page number and URL to a PDF would suffice.)\n\n12.7 <revoke statement>\n\nSpecifically the bit about how a role authorization is said to be\nidentified if it defines the grant of the role revoked to the grantee\n*with grantor A*. Reading it again these many years later, that seems\nto indicate that you need to actually be the grantor or able to be the\ngrantor who performed the original grant in order to revoke it,\nsomething that wasn't done in the original implementation of roles.\n\n> > We can't possibly make things like EVENT TRIGGERs or CREATEROLE work\n> > with role trees if a given role can basically just 'opt out' of being\n> > part of the tree to which they were assigned by the user who created\n> > them. 
Therefore, I suggest we contemplate two changes in this area:\n> \n> I suspect we'll regret using the GRANT system to modify behaviors other than\n> whether or not one gets \"permission denied\". Hence, -1 on using role\n> membership to control event trigger firing, whether or not $SUBJECT changes.\n\nI've not been entirely sure if that's a great idea or not either, but I\ndidn't see any particular holes in Tom's suggestion that we use this as\na way to identify a tree of roles, except for this particular issue that\na role is currently able to 'opt out', which seems like a mistake in the\noriginal role implementation and not an issue with Tom's actual idea to\nuse it in this way.\n\nI do think that getting the role management sorted out with just the\ngeneral concepts of 'tenant' and 'landlord' as discussed in the thread\nwith Mark about changes to CREATEROLE and adding of other predefined\nroles is a necessary first step, and only after we feel like we've\nsolved that should we come back to the idea of using that for other\nthings, such as event trigger firing.\n\n> > - Allow a user who is able to create roles decide if the role created is\n> > able to 'self administor' (that is- GRANT their own role to someone\n> > else) itself.\n> > \n> > - Disallow roles from being able to REVOKE role membership that they\n> > didn't GRANT in the first place.\n> \n> Either of those could be reasonable. Does the SQL standard take a position\n> relevant to the decision? A third option is to track each role's creator and\n> make is_admin_of_role() return true for the creator, whether or not the\n> creator remains a member. That would also benefit cases where the creator is\n> rolinherit and wants its ambient privileges to shed the privileges of the role\n> it's creating.\n\nIt's a bit dense, but my take on the revoke statement description is\nthat the short answer is \"yes, the standard does take a position on\nthis\" at least as it relates to role memberships. 
As for if a role\nwould have the ability to control it for themselves, that seems like a\nnatural extension of the general approach whereby a role can't grant\nthemselves admin role on their own role if they don't already have it,\nbut some other, appropriately privileged role, could.\n\nI don't feel it's necessary to track additional information about who\ncreated a specific role. Simply having, when that role is created,\nthe creator be automatically granted admin rights on the role created\nseems like it'd be sufficient.\n\n> > We should probably do a more thorough review\n> > to see if there's other cases where a given role is able to REVOKE\n> > rights that have been GRANT'd by some other role on a particular object,\n> > as it seems like we should probably be consistent in this regard across\n> > everything and not just for roles. That might be a bit of a pain but it\n> > seems likely to be worth it in the long run and feels like it'd bring us\n> > more in-line with the SQL standard too.\n> \n> Does the SQL standard take a position on whether REVOKE SELECT should work\n> that way?\n\nIn my reading, yes, it's much the same. I invite others to try and read\nthrough it and see if they agree with my conclusions. Again, this is\nreally all on the 'revoke statement' side and isn't really covered on\nthe 'grant' side.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 5 Oct 2021 08:58:57 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 10:57 PM Stephen Frost <sfrost@snowman.net> wrote:\n> - Disallow roles from being able to REVOKE role membership that they\n> didn't GRANT in the first place.\n\nI think that's not quite the right test. For example, if alice and bob\nare superusers and alice grants pg_monitor to doug, bob should be able\nto revoke that grant even though he is not alice.\n\nI think the rule should be: roles shouldn't be able to REVOKE role\nmemberships unless they can become the grantor.\n\nBut I think maybe if it should even be more general than that and\napply to all sorts of grants, rather than just roles and role\nmemberships: roles shouldn't be able to REVOKE any granted permission\nunless they can become the grantor.\n\nFor example, if bob grants SELECT on one of his tables to alice, he\nshould be able to revoke the grant, too. But if the superuser performs\nthe grant, why should bob be able to revoke it? The superuser has\nspoken, and bob shouldn't get to interfere ... unless of course he's\nalso a superuser.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:23:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 5, 2021, at 9:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n>> - Disallow roles from being able to REVOKE role membership that they\n>> didn't GRANT in the first place.\n> \n> I think that's not quite the right test. For example, if alice and bob\n> are superusers and alice grants pg_monitor to doug, bob should be able\n> to revoke that grant even though he is not alice.\n\nAdditionally, role \"alice\" might not exist anymore, which would leave the privilege irrevocable. It's helpful to think in terms of role ownership rather than role creation:\n\nsuperuser\n +---> alice\n +---> charlie\n +---> diane\n +---> bob\n\nIt makes sense that alice can take ownership of diane and drop charlie, but not that bob can do so. Nor should charlie be able to transfer ownership of diane to alice. Nor should charlie be able to drop himself.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 09:38:03 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Tue, Oct 5, 2021 at 12:23 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Oct 4, 2021 at 10:57 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > - Disallow roles from being able to REVOKE role membership that they\n> > didn't GRANT in the first place.\n>\n> I think that's not quite the right test. For example, if alice and bob\n> are superusers and alice grants pg_monitor to doug, bob should be able\n> to revoke that grant even though he is not alice.\n>\n> I think the rule should be: roles shouldn't be able to REVOKE role\n> memberships unless they can become the grantor.\n\n\nYes, role membership still equating to “being” that role still holds with\nthis, even though I didn’t say so explicitly.\n\nBut I think maybe if it should even be more general than that and\n> apply to all sorts of grants, rather than just roles and role\n> memberships: roles shouldn't be able to REVOKE any granted permission\n> unless they can become the grantor.\n\n\nRight, this was covered towards the end of my email, though again evidently\nnot clearly enough, sorry about that.\n\nFor example, if bob grants SELECT on one of his tables to alice, he\n> should be able to revoke the grant, too. But if the superuser performs\n> the grant, why should bob be able to revoke it? The superuser has\n> spoken, and bob shouldn't get to interfere ... unless of course he's\n> also a superuser.\n\n\nMostly agreed except I’d exclude the explicit “superuser” flag bit and just\nsay if r1 granted the right, r2 shouldn’t be the one who is allowed to\nrevoke it until r2 happens to also be a member of r1.\n\nThanks,\n\nStephen\n\n>\n\n",
"msg_date": "Tue, 5 Oct 2021 13:08:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 12:38 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> Additionally, role \"alice\" might not exist anymore, which would leave the privilege irrevocable.\n\nI thought that surely this couldn't be right, but apparently we have\nabsolutely no problem with leaving the \"grantor\" column in pg_authid\nas a dangling reference to a pg_authid role that no longer exists:\n\nrhaas=# select * from pg_auth_members where grantor not in (select oid\nfrom pg_authid);\n roleid | member | grantor | admin_option\n--------+--------+---------+--------------\n 3373 | 16412 | 16410 | f\n(1 row)\n\nYikes. We'd certainly have to do something about that if we want to\nuse the grantor field for anything security-sensitive, since otherwise\nhilarity would ensue if that OID got recycled for a new role at any\nlater point in time.\n\nThis seems weirdly inconsistent with what we do in other cases:\n\nrhaas=# create table foo (a int, b text);\nCREATE TABLE\nrhaas=# grant select on table foo to alice with grant option;\nGRANT\nrhaas=# \\c rhaas alice\nYou are now connected to database \"rhaas\" as user \"alice\".\nrhaas=> grant select on table foo to bob;\nGRANT\nrhaas=> \\c - rhaas\nYou are now connected to database \"rhaas\" as user \"rhaas\".\nrhaas=# drop role alice;\nERROR: role \"alice\" cannot be dropped because some objects depend on it\nDETAIL: privileges for table foo\nrhaas=#\n\nHere, because the ACL on table foo records alice as a grantor, alice\ncannot be dropped. But when alice is the grantor of a role, the same\nrule doesn't apply. I think the behavior shown in this example, where\nalice can't be dropped, is the right behavior, and the behavior for\nroles is just plain broken.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 13:09:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Tue, Oct 5, 2021 at 12:38 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Oct 5, 2021, at 9:23 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> >> - Disallow roles from being able to REVOKE role membership that they\n> >> didn't GRANT in the first place.\n> >\n> > I think that's not quite the right test. For example, if alice and bob\n> > are superusers and alice grants pg_monitor to doug, bob should be able\n> > to revoke that grant even though he is not alice.\n>\n> Additionally, role \"alice\" might not exist anymore, which would leave the\n> privilege irrevocable.\n\n\nDo we actually allow that case to happen today..? I didn’t think we did\nand instead there’s a dependency from the grant on to the Alice role. If\nthat doesn’t exist today then I would think we’d need that and therefore\nthis concern isn’t an issue.\n\n\nIt's helpful to think in terms of role ownership rather than role creation:\n>\n> superuser\n> +---> alice\n> +---> charlie\n> +---> diane\n> +---> bob\n>\n> It makes sense that alice can take ownership of diane and drop charlie,\n> but not that bob can do so. Nor should charlie be able to transfer\n> ownership of diane to alice. Nor should charlie be able to drop himself.\n\n\nI dislike moving away from the ADMIN OPTION when it comes to roles as it\nputs us outside of the SQL standard. Having the ADMIN OPTION for a role\nseems, at least to me, to basically mean the things you’re suggesting\n“ownership” to mean- so why have two different things, especially when one\ndoesn’t exist as a concept in the standard..?\n\nI agree that Charlie shouldn’t be able to drop themselves in general, but I\ndon’t think we need an “ownership” concept for that. We also prevent loops\nalready which I think is called for in the standard already (would need to\ngo reread and make sure though) which already prevents Charlie from\ngranting Diane to Alice. 
What does the “ownership” concept actually buy us\nthen?\n\nThanks,\n\nStephen\n\n>\n\n",
"msg_date": "Tue, 5 Oct 2021 13:14:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Tue, Oct 5, 2021 at 13:09 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Oct 5, 2021 at 12:38 PM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n> > Additionally, role \"alice\" might not exist anymore, which would leave\n> the privilege irrevocable.\n>\n> I thought that surely this couldn't be right, but apparently we have\n> absolutely no problem with leaving the \"grantor\" column in pg_authid\n> as a dangling reference to a pg_authid role that no longer exists:\n\n\n> rhaas=# select * from pg_auth_members where grantor not in (select oid\n> from pg_authid);\n> roleid | member | grantor | admin_option\n> --------+--------+---------+--------------\n> 3373 | 16412 | 16410 | f\n> (1 row)\n>\n> Yikes. We'd certainly have to do something about that if we want to\n> use the grantor field for anything security-sensitive, since otherwise\n> hilarity would ensue if that OID got recycled for a new role at any\n> later point in time.\n\n\nYeah, ew. We should just fix this.\n\nThis seems weirdly inconsistent with what we do in other cases:\n>\n> rhaas=# create table foo (a int, b text);\n> CREATE TABLE\n> rhaas=# grant select on table foo to alice with grant option;\n> GRANT\n> rhaas=# \\c rhaas alice\n> You are now connected to database \"rhaas\" as user \"alice\".\n> rhaas=> grant select on table foo to bob;\n> GRANT\n> rhaas=> \\c - rhaas\n> You are now connected to database \"rhaas\" as user \"rhaas\".\n> rhaas=# drop role alice;\n> ERROR: role \"alice\" cannot be dropped because some objects depend on it\n> DETAIL: privileges for table foo\n> rhaas=#\n>\n> Here, because the ACL on table foo records alice as a grantor, alice\n> cannot be dropped. But when alice is the grantor of a role, the same\n> rule doesn't apply. 
I think the behavior shown in this example, where\n> alice can't be dropped, is the right behavior, and the behavior for\n> roles is just plain broken.\n\n\nAgreed.\n\nThanks,\n\nStephen\n\n>\n\n",
"msg_date": "Tue, 5 Oct 2021 13:15:40 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 5, 2021, at 10:14 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> What does the “ownership” concept actually buy us then?\n\nDROP ... CASCADE.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 10:17:04 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Tue, Oct 5, 2021 at 13:17 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Oct 5, 2021, at 10:14 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > What does the “ownership” concept actually buy us then?\n>\n> DROP ... CASCADE\n\n\nI’m not convinced that we need to invent the concept of ownership in order\nto find a sensible way to make this work- though it would be helpful to\nfirst get everyone’s idea of just what *would* this command do if run on a\nrole who “owns” or has “admin rights” of another role?\n\nThanks,\n\nStephen",
"msg_date": "Tue, 5 Oct 2021 13:20:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 5, 2021, at 10:20 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Greetings,\n> \n> On Tue, Oct 5, 2021 at 13:17 Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > On Oct 5, 2021, at 10:14 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > \n> > What does the “ownership” concept actually buy us then?\n> \n> DROP ... CASCADE\n> \n> I’m not convinced that we need to invent the concept of ownership in order to find a sensible way to make this work- though it would be helpful to first get everyone’s idea of just what *would* this command do if run on a role who “owns” or has “admin rights” of another role?\n\nOk, I'll start. Here is how I envision it:\n\nIf roles have owners, then DROP ROLE bob CASCADE drops bob, bob's objects, roles owned by bob, their objects and any roles they own, recursively. Roles which bob merely has admin rights on are unaffected, excepting that they are administered by one fewer roles once bob is gone. \n\nThis design allows you to delegate to a new role some task, and you don't have to worry what network of other roles and objects they create, because in the end you just drop the one role cascade and all that other stuff is guaranteed to be cleaned up without any leaks.\n\nIf roles do not have owners, then DROP ROLE bob CASCADE drops role bob plus all objects that bob owns. It doesn't cascade to other roles because the concept of \"roles that bob owns\" doesn't exist. If bob created other roles, those will be left around. Objects that bob created and then transferred to these other roles are also left around.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:41:37 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 3:41 PM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> If roles have owners, then DROP ROLE bob CASCADE drops bob, bob's objects, roles owned by bob, their objects and any roles they own, recursively. Roles which bob merely has admin rights on are unaffected, excepting that they are administered by one fewer roles once bob is gone.\n>\n> This design allows you to delegate to a new role some task, and you don't have to worry what network of other roles and objects they create, because in the end you just drop the one role cascade and all that other stuff is guaranteed to be cleaned up without any leaks.\n>\n> If roles do not have owners, then DROP ROLE bob CASCADE drops role bob plus all objects that bob owns. It doesn't cascade to other roles because the concept of \"roles that bob owns\" doesn't exist. If bob created other roles, those will be left around. Objects that bob created and then transferred to these other roles are also left around.\n\nI'm not sure that I'm totally on board with the role ownership\nconcept, but I do think it has some potential advantages. For\ninstance, suppose the dba creates a bunch of \"master tenant\" roles\nwhich correspond to particular customers: say, amazon, google, and\nfacebook. Now each of those master tenant roles creates roles under it\nwhich represent the various people or applications from those\ncompanies that will be accessing the server: e.g. sbrin and lpage.\nNow, if Google runs out of money and stops paying the hosting bill, we\ncan just \"DROP ROLE google CASCADE\" and sbrin and lpage get nuked too.\nThat's kind of cool. What happens if we don't have that? Then we'll\nneed to work harder to make sure all traces of Google are expunged\nfrom the system.\n\nIn fact, how do we do that, exactly? 
In this particular instance it\nshould be straightforward: if we see that google can administer sbrin\nand lpage and nobody else can, then it probably follows that those\nroles should be nuked when the google role is nuked. But what if we\nhave separate users apple and nextstep either of whom can administer\nthe role sjobs? If nextstep goes away, we had better not remove sjobs\nbecause he's still able to be administered by apple, but if apple also\ngoes away, then we'll want to remove sjobs then. That's doable, but\ncomplicated, and all the logic that figures this out now lives outside\nthe database. With role ownership, we can enforce that the roles form\na tree, and subtrees can be easily lopped off without the user needing\nto do anything complicated.\n\nWithout role ownership, we've just got a directed graph of who can\nadminister who, and it need not be connected or acyclic. Now we may be\nable to cope with that, or we may be able to set things up so that\nusers can cope with that using logic external to the database without\nanything getting too complicated. But I certainly see the appeal of a\nsystem where the lines of control form a DAG rather than a general\ndirected graph. It seems to make it a whole lot easier to reason about\nwhat operations should and should not be permitted and how the whole\nthing should actually work. It's a fairly big change from the status\nquo, though, and maybe it has disadvantages that make it a suboptimal\nchoice.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 11:38:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 5, 2021, at 10:20 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > On Tue, Oct 5, 2021 at 13:17 Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > > On Oct 5, 2021, at 10:14 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > > \n> > > What does the “ownership” concept actually buy us then?\n> > \n> > DROP ... CASCADE\n> > \n> > I’m not convinced that we need to invent the concept of ownership in order to find a sensible way to make this work- though it would be helpful to first get everyone’s idea of just what *would* this command do if run on a role who “owns” or has “admin rights” of another role?\n> \n> Ok, I'll start. Here is how I envision it:\n> \n> If roles have owners, then DROP ROLE bob CASCADE drops bob, bob's objects, roles owned by bob, their objects and any roles they own, recursively. Roles which bob merely has admin rights on are unaffected, excepting that they are administered by one fewer roles once bob is gone. \n> \n> This design allows you to delegate to a new role some task, and you don't have to worry what network of other roles and objects they create, because in the end you just drop the one role cascade and all that other stuff is guaranteed to be cleaned up without any leaks.\n> \n> If roles do not have owners, then DROP ROLE bob CASCADE drops role bob plus all objects that bob owns. It doesn't cascade to other roles because the concept of \"roles that bob owns\" doesn't exist. If bob created other roles, those will be left around. Objects that bob created and then transferred to these other roles are also left around.\n\nI can see how what you describe as the behavior you'd like to see of\nDROP ROLE ... CASCADE could be useful... However, at least in the\nlatest version of the standard that I'm looking at, when a\nDROP ROLE ... 
CASCADE is executed, what happens for all authorization\nidentifiers is:\n\nREVOKE R FROM A DB\n\nWhere R is the role being dropped and A is the authorization identifier.\n\nIn other words, the SQL committee seems to disagree with you when it\ncomes to what CASCADE on DROP ROLE means (though I can't say I'm too\nsurprised- generally speaking, CASCADE is about getting rid of the\ndependency so the system stays consistent, not as a method of object\nmanagement...).\n\nI'm not against having something that would do what you want, but it\nseems like we'd have to at least call it something else and maybe we\nshould worry about that later, once we've addressed the bigger issue of\nmaking the system handle GRANTORs correctly.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 12:01:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 9:01 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I can see how what you describe as the behavior you'd like to see of\n> DROP ROLE ... CASCADE could be useful... However, at least in the\n> latest version of the standard that I'm looking at, when a\n> DROP ROLE ... CASCADE is executed, what happens for all authorization\n> identifiers is:\n> \n> REVOKE R FROM A DB\n> \n> Where R is the role being dropped and A is the authorization identifier.\n\nI'm not proposing that all roles with membership in bob be dropped when role bob is dropped. I'm proposing that all roles *owned by* role bob also be dropped. Postgres doesn't currently have a concept of roles owning other roles, but I'm proposing that we add such a concept. Of course, any role with membership in role bob would no longer have that membership, and any role managed by bob would no longer be managed by bob. The CASCADE would not drop those other roles merely due to membership or management relationships.\n\n> In other words, the SQL committee seems to disagree with you when it\n> comes to what CASCADE on DROP ROLE means (though I can't say I'm too\n> surprised- generally speaking, CASCADE is about getting rid of the\n> dependency so the system stays consistent, not as a method of object\n> management...).\n\nI'm not sure I understand how what they are saying disagrees with what I am saying, unless they are saying that REVOKE R FROM A DB is the one and only thing that DROP ROLE .. CASCADE can do. If they are excluding that it do anything else, then yes, that would be an incompatibility.\n\nAs far as keeping the system consistent, I think that's what this does. 
As soon as a role is defined as owning other stuff, then dropping the role cascade means dropping the other stuff.\n\nCould you elaborate more on the difference between object management and consistency as it applies to this issue?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 09:20:37 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 6, 2021, at 9:01 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I can see how what you describe as the behavior you'd like to see of\n> > DROP ROLE ... CASCADE could be useful... However, at least in the\n> > latest version of the standard that I'm looking at, when a\n> > DROP ROLE ... CASCADE is executed, what happens for all authorization\n> > identifiers is:\n> > \n> > REVOKE R FROM A DB\n> > \n> > Where R is the role being dropped and A is the authorization identifier.\n> \n> I'm not proposing that all roles with membership in bob be dropped when role bob is dropped. I'm proposing that all roles *owned by* role bob also be dropped. Postgres doesn't currently have a concept of roles owning other roles, but I'm proposing that we add such a concept. Of course, any role with membership in role bob would no longer have that membership, and any role managed by bob would no longer be managed by bob. The CASCADE would not drop those other roles merely due to membership or management relationships.\n\nI get all of that ... but you're also talking about changing the\nbehavior of something which is defined pretty clearly in the standard to\nbe something that's very different from what the standard says.\n\n> > In other words, the SQL committee seems to disagree with you when it\n> > comes to what CASCADE on DROP ROLE means (though I can't say I'm too\n> > surprised- generally speaking, CASCADE is about getting rid of the\n> > dependency so the system stays consistent, not as a method of object\n> > management...).\n> \n> I'm not sure I understand how what they are saying disagrees with what I am saying, unless they are saying that REVOKE R FROM A DB is the one and only thing that DROP ROLE .. CASCADE can do. If they are excluding that it do anything else, then yes, that would be an incompatibility.\n\nThat is exactly what DROP ROLE ... 
CASCADE is defined in the standard to\ndo. That definition covers not just permissions on objects but also\npermissions on roles. To take that and turn it into a DROP ROLE for\nroles looks like a *very* clear and serious deviation from the standard.\n\nIf we were to go down this road, I'd suggest we have some *other* syntax\nthat isn't defined by the standard to do something else. eg:\n\nDROP ROLES OWNED BY R;\n\nor something along those lines. I'm not saying that your idea is\nwithout merit or that it wouldn't be useful, I'm just trying to make it\nclear that the standard already says what DROP ROLE .. CASCADE means and\nwe should be loath to deviate very far from that.\n\n> As far as keeping the system consistent, I think that's what this does. As soon as a role is defined as owning other stuff, then dropping the role cascade means dropping the other stuff.\n> \n> Could you elaborate more on the difference between object management and consistency as it applies to this issue?\n\nConsistency is not having dangling pointers around to things which no\nlonger exist- FK reference kind of things. Object management is about\nactual *removal* of full blown objects like roles, tables, etc. DROP\nTABLE ... CASCADE doesn't drop tables which have an FK dependency on\nthe dropped table, the FK is just removed.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 13:20:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 10:20 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Consistency is not having dangling pointers around to things which no\n> longer exist- FK reference kind of things. Object management is about\n> actual *removal* of full blown objects like roles, tables, etc. DROP\n> TABLE ... CASCADE doesn't drop tables which have an FK dependency on\n> the dropped table, the FK is just removed.\n\nRight, but DROP SCHEMA ... CASCADE does remove the tables within, no? I would see alice being a member of role bob as being analogous to the foreign key example, and charlie being owned by bob as being more like the table within a schema.\n\nI'm fine with using a different syntax for this if what I'm proposing violates the spec. I'm just trying to wrap my head around how to interpret the spec (of which I have no copy, mind you.) I'm trying to distinguish between statements like X SHALL DO Y and X SHALL DO NOTHING BUT Y. I don't know if the spec contains a concept of roles owning other roles, and if not, does it forbid that concept? I should think that if that concept is a postgres extension not present in the spec, then we can make it do anything we want.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:28:35 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 6, 2021, at 10:20 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > \n> > Consistency is not having dangling pointers around to things which no\n> > longer exist- FK reference kind of things. Object management is about\n> > actual *removal* of full blown objects like roles, tables, etc. DROP\n> > TABLE ... CASCADE doesn't drop tables which have an FK dependency on\n> > the dropped table, the FK is just removed.\n> \n> Right, but DROP SCHEMA ... CASCADE does remove the tables within, no? I would see alice being a member of role bob as being analogous to the foreign key example, and charlie being owned by bob as being more like the table within a schema.\n\nObjects aren't able to live outside of a schema, so it doesn't seem to\nbe quite the same case there. Further, DROP SCHEMA is defined in the\nstandard as saying:\n\nDROP (TABLE, VIEW, DOMAIN, etc) T CASCADE\n\n> I'm fine with using a different syntax for this if what I'm proposing violates the spec. I'm just trying to wrap my head around how to interpret the spec (of which I have no copy, mind you.) I'm trying to distinguish between statements like X SHALL DO Y and X SHALL DO NOTHING BUT Y. I don't know if the spec contains a concept of roles owning other roles, and if not, does it forbid that concept? I should think that if that concept is a postgres extension not present in the spec, then we can make it do anything we want.\n\nI do think what you're suggesting is pretty clearly not what the SQL\ncommittee imagined DROP ROLE ... CASCADE to do. After all, it says\n\"REVOKE R FROM A DB\", not \"DROP ROLE A CASCADE\". 
Unfortunately, more\nrecent versions of the spec don't seem to be available very easily and\nthe older draft that I've seen around doesn't have CASCADE on DROP ROLE.\nWorking with roles, which are defined in the spec, it seems pretty\nimportant to have access to the spec though to see these things.\n\nAs far as I can tell, no, there isn't a concept of role 'ownership' in\nthe spec. If there was then perhaps things would be different ... but\nthat's not the case. I disagree quite strongly that adding such an\nextension would allow us to seriously deviate from what the spec says\nshould happen regarding DROP ROLE ... CASCADE though. If that argument\nheld water, we could ignore what the spec says about just about anything\nbecause PG has features that aren't in the spec.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 14:09:19 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 11:09 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> After all, it says\n> \"REVOKE R FROM A DB\", not \"DROP ROLE A CASCADE\". \n\nWait, are you arguing what DROP ROLE A CASCADE should do based on what the spec says REVOKE R FROM A DB should do? If so, I'd say that's irrelevant. I'm not proposing to change what REVOKE does. If not, could you clarify? Did I misunderstand?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 11:29:48 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 6, 2021, at 11:09 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > After all, it says\n> > \"REVOKE R FROM A DB\", not \"DROP ROLE A CASCADE\". \n> \n> Wait, are you arguing what DROP ROLE A CASCADE should do based on what the spec says REVOKE R FROM A DB should do? If so, I'd say that's irrelevant. I'm not proposing to change what REVOKE does. If not, could you clarify? Did I misunderstand?\n\nNo, that's not what I'm saying.\n\nIn the spec, under <drop role statement>, there is a 'General Rules'\nsection (as there is with most statements) and in that section it says\nthat for every authorization identifier (that is, some privilege, be it\na GRANT of SELECT rights on an object, or GRANT of role membership in\nsome role) which references the role being dropped, the command:\n\nREVOKE R FROM A DB\n\nis effectively executed (without further access rule checking).\n\nWhat I'm saying above is that the command explicitly listed there\n*isn't* 'DROP ROLE A DB', even though that is something which the spec\n*could* have done, had they wished to. Given that they didn't, it seems\nvery clear that making such a change would very much be a deviation and\nviolation of the spec. 
That we invented some behind-the-scenes concept\nof role ownership where we track who actually created what role and then\nuse that info to transform a REVOKE into a DROP doesn't make such a\ntransformation OK.\n\nConsider that with what you're proposing, a user could execute the\nfollowing series of entirely SQL-spec compliant statements, and get\nvery different results depending on if we have this 'ownership' concept\nor not:\n\nSET ROLE postgres;\nCREATE ROLE r1;\n\nSET ROLE r1;\nCREATE ROLE r2;\n\nSET ROLE postgres;\nDROP ROLE r1 CASCADE;\n\nWith what you're suggesting, the end result would be that r2 no longer\nexists, whereas with the spec-defined behavior, r2 *would* still exist.\n\nIf that doesn't make it clear enough then I'm afraid you'll just need to\neither acquire a copy of the spec and point out what I'm\nmisunderstanding in it (or get someone else who has access to it to), or\naccept that we need to use some other syntax for this capability. I\ndon't think it's unreasonable to have different syntax for this,\nparticularly as it's a concept that doesn't even exist in the standard\n(as far as I can tell, anyway). Adopting SQL defined syntax to use with\na concept that the standard doesn't have sure seems like a violation of\nthe POLA.\n\nIf you feel really strongly that this must be part of DROP ROLE then\nmaybe we could do something like:\n\nDROP ROLE r1 CASCADE OWNED ROLES;\n\nor come up with something else, but just changing what DROP ROLE ..\nCASCADE is defined by the spec to do isn't the right approach, imv.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 14:48:10 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 2:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> What I'm saying above is that the command explicitly listed there\n> *isn't* 'DROP ROLE A DB', even though that is something which the spec\n> *could* have done, had they wished to. Given that they didn't, it seems\n> very clear that making such a change would very much be a deviation and\n> violation of the spec. That we invented some behind-the-scenes concept\n> of role ownership where we track who actually created what role and then\n> use that info to transform a REVOKE into a DROP doesn't make such a\n> transformation OK.\n\nIf PostgreSQL implements extensions to the SQL specification, then we\nget to decide how those features interact with the features that are\nspecified.\n\nFor example, I presume the spec doesn't say that you can drop a\nfunction by dropping the extension that contains it, but that's just\nbecause extensions as we have them in PostgreSQL are not part of the\nSQL standard. It would be silly to have rejected that feature on those\ngrounds, because nobody is forced to use extensions, and if you don't,\nthen they do not cause any deviation from spec-mandated behavior.\n\nIn the same way, nobody would be forced to make a role own another\nrole, and if you don't, then you'll never notice any deviation from\nspec-mandated behavior on account of that feature.\n\nIf the SQL specification says that roles can own other roles, but that\nDROP has to have some special behavior in regards to that feature,\nthen we should probably try to do what the spec says. But if the spec\ndoesn't think that the concept of roles owning other roles even\nexists, but we choose to invent such a concept, then I think we can\nmake it work however we like without worrying about\nspec-compatibility. We've already invented lots of other things like\nthat, and the project is the better for it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 15:15:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Oct 6, 2021 at 2:48 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > What I'm saying above is that the command explicitly listed there\n> > *isn't* 'DROP ROLE A DB', even though that is something which the spec\n> > *could* have done, had they wished to. Given that they didn't, it seems\n> > very clear that making such a change would very much be a deviation and\n> > violation of the spec. That we invented some behind-the-scenes concept\n> > of role ownership where we track who actually created what role and then\n> > use that info to transform a REVOKE into a DROP doesn't make such a\n> > transformation OK.\n> \n> If PostgreSQL implements extensions to the SQL specification, then we\n> get to decide how those features interact with the features that are\n> specified.\n\nDoes that mean that we also get to change what a specific set of\ncommands, which are all well-defined in the standard, do even when that\ngoes against what an SQL compliant implementation would do? I really\ndon't think so. If this was *new* syntax to go along with some new\nfeature or extension in PG, sure, we can define what that syntax does\nbecause the standard doesn't. In this case we're talking entirely about\nobjects and statements which the standard does define.\n\n> For example, I presume the spec doesn't say that you can drop a\n> function by dropping the extension that contains it, but that's just\n> because extensions as we have them in PostgreSQL are not part of the\n> SQL standard. 
It would be silly to have rejected that feature on those\n> grounds, because nobody is forced to use extensions, and if you don't,\n> then they do not cause any deviation from spec-mandated behavior.\n\nThe prior example that I used didn't include *any* non-SQL standard\nstatements, so I don't view this argument as applicable.\n\n> In the same way, nobody would be forced to make a role own another\n> role, and if you don't, then you'll never notice any deviation from\n> spec-mandated behavior on account of that feature.\n\nSo you're suggesting that roles created by other roles wouldn't\n*automatically* be owned by the creating role and that, instead, someone\nwould have to explicitly say something like:\n\nALTER ROLE x OWNED BY y;\n\nafter the role is created, and only then would a DROP ROLE y CASCADE;\nturn into DROP ROLE x CASCADE; DROP ROLE y CASCADE; and that, absent\nthat happening, a DROP ROLE y CASCADE; would do what the standard says,\nand not actually DROP all the associated objects but only run the REVOKE\nstatements?\n\nI'll accept that, in such a case, we could argue that we're no longer\nfollowing the spec because the user has started to use some PG extension\nto the spec, but, I've got a really hard time seeing how such a massive\ndifference in what DROP ROLE x CASCADE; does would be acceptable or at\nall reasonable.\n\nOne could lead to hundreds of tables being dropped out of the database\nand a massive outage while the other would just mean some role\nmemberships get cleaned up as part of a role being dropped. Having one\ncommand that does two vastly different things like that is a massive,\nloaded, foot-pointed gun.\n\n> If the SQL specification says that roles can own other roles, but that\n> DROP has to have some special behavior in regards to that feature,\n> then we should probably try to do what the spec says. 
But if the spec\n> doesn't think that the concept of roles owning other roles even\n> exists, but we choose to invent such a concept, then I think we can\n> make it work however we like without worrying about\n> spec-compatibility. We've already invented lots of other things like\n> that, and the project is the better for it.\n\nThe SQL spec doesn't say that roles can own other roles. I don't think\nthat means we get to rewrite what DROP ROLE ... CASCADE does. Extend\nDROP ROLE with other parameters which are relevant to our extension of\nthe spec? Sure, perhaps, but not entirely redefine what the base\ncommand does to be different from what the SQL spec says it does.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 15:29:48 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 11:48 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> In the spec, under <drop role statement>, there is a 'General Rules'\n> section (as there is with most statements) and in that section it says\n> that for every authorization identifier (that is, some privilege, be it\n> a GRANT of SELECT rights on an object, or GRANT of role membership in\n> some role) which references the role being dropped, the command:\n> \n> REVOKE R FROM A DB\n> \n> is effectively executed (without further access rule checking).\n\nI think you are saying that \"DROP ROLE bob\" implies revoking \"bob\" from anybody who has membership in role bob. I agree with that entirely, and my proposal does not change that. (Roles owned by \"bob\" are not typically members of role \"bob\" to begin with.)\n\n> What I'm saying above is that the command explicitly listed there\n> *isn't* 'DROP ROLE A DB', even though that is something which the spec\n> *could* have done, had they wished to.\n\nClearly the spec could have said that \"DROP ROLE bob\" implies \"and drop all roles which are members of bob\" and did not. I fully agree with that decision, and I'm not trying to change it one iota.\n\n> Given that they didn't, it seems\n> very clear that making such a change would very much be a deviation and\n> violation of the spec. \n\nSure, and I'm not proposing any such change.\n\n> That we invented some behind-the-scenes concept\n> of role ownership where we track who actually created what role and then\n> use that info to transform a REVOKE into a DROP doesn't make such a\n> transformation OK.\n\nI think I understand why you say this. You seem to be conflating the idea of having privileges on role \"bob\" with being owned by role \"bob\". That's not the case. 
Maybe you are not conflating them, but I can't interpret what you are saying otherwise.\n\n> Consider that with what you're proposing, a user could execute the\n> following series of entirely SQL-spec compliant statements, and get\n> very different results depending on if we have this 'ownership' concept\n> or not:\n> \n> SET ROLE postgres;\n> CREATE ROLE r1;\n> \n> SET ROLE r1;\n> CREATE ROLE r2;\n> \n> SET ROLE postgres;\n> DROP ROLE r1 CASCADE;\n> \n> With what you're suggesting, the end result would be that r2 no longer\n> exists, whereas with the spec-defined behvaior, r2 *would* still exist.\n\nIf you try this on postgres 14, you get a syntax error because CASCADE is not supported in the grammar for DROP ROLE:\n\nmark.dilger=# drop role bob cascade;\nERROR: syntax error at or near \"cascade\"\n\nI don't know if those statements are \"entirely SQL-spec compliant\" because I have yet to find a reference to the spec saying what DROP ROLE ... CASCADE is supposed to do. I found some Vertica docs that say what Vertica does. I found some Enterprise DB docs about what Advanced Server does (or course, since I work here.) I don't see much else.\n\nYou have quoted me parts of the spec about what REVOKE is supposed to do, and I have responded about why I don't see the connection to DROP ROLE...CASCADE.\n\nAre there any other references to either the spec or how other common databases handle this?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:01:26 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 3:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Does that mean that we also get to change what a specific set of\n> commands, which are all well-defined in the standard, do even when that\n> goes against what an SQL compliant implementation would do? I really\n> don't think so. If this was *new* syntax to go along with some new\n> feature or extension in PG, sure, we can define what that syntax does\n> because the standard doesn't. In this case we're talking entirely about\n> objects and statements which the standard does define.\n\nWell, I think what we're talking about is saying something like:\n\nCREATE USER mybigcustomer CREATEROLE;\n\nAnd then having the mybigcustomer role be able to create other roles,\nwhich would be automatically dropped if I later said:\n\nDROP USER mybigcustomer CASCADE;\n\nSince AFAIK CREATEROLE is not in the specification, I think we're\nperfectly free to say that it alters the behavior of the subsequent\nDROP USER command in any way that we judge reasonable. I agree that we\nneed to have SQL-standard syntax do SQL-standard things, but it\ndoesn't have to be the case that the whole command goes unmentioned by\nthe specification. Options that we add to CREATE USER or CREATE TABLE\nor any other command can modify the behavior of those objects, and the\nspec has nothing to say about it.\n\nNow that doesn't intrinsically mean that it's a good idea. I think\nwhat I hear you saying is that you find it pretty terrifying that\n\"DROP USER mybigcustomer CASCADE;\" could blow away a lot of users and\na lot of tables and that could be scary. And I agree, but that's a\ndesign question, not a spec question. Today, there is not, in\nPostgreSQL, a DROP USER .. CASCADE variant. If there are objects that\ndepend on the user, DROP USER fails. So we could for example decide\nthat DROP USER .. CASCADE will cascade to other users, but not to\nregular objects. 
Or maybe that's too inconsistent, and we should do\nsomething like DROP ROLES OWNED BY [role]. Or maybe having both DROP\nOWNED BY and DROP ROLES OWNED BY is too weird, and the existing DROP\nOWNED BY [role] command should also cascade to roles. Those kinds of\nthings seem worth discussing to me, to come up with the behavior that\nwill work best for people. But I do disagree with the idea that we're\nnot free to innovate here. We make up new SQL syntax and new\nconfiguration variables and all kinds of new things all the time, and\nI don't think this is any different.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:27:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Oct 6, 2021 at 16:01 Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Oct 6, 2021, at 11:48 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > In the spec, under <drop role statement>, there is a 'General Rules'\n> > section (as there is with most statements) and in that section it says\n> > that for every authorization identifier (that is, some privilege, be it\n> > a GRANT of SELECT rights on an object, or GRANT of role membership in\n> > some role) which references the role being dropped, the command:\n> >\n> > REVOKE R FROM A DB\n> >\n> > is effectively executed (without further access rule checking).\n>\n> I think you are saying that \"DROP ROLE bob\" implies revoking \"bob\" from\n> anybody who has membership in role bob. I agree with that entirely, and my\n> proposal does not change that. (Roles owned by \"bob\" are not typically\n> members of role \"bob\" to begin with.)\n\n\nYes and no…. Specifically the spec says that “DROP ROLE bob CASCADE”\nimplies revoking memberships that bob is in. The other drop behavior is\n“RESTRICT”, which, as you might expect, implies throwing an error instead.\n\n> What I'm saying above is that the command explicitly listed there\n> > *isn't* 'DROP ROLE A DB', even though that is something which the spec\n> > *could* have done, had they wished to.\n>\n> Clearly the spec could have said that \"DROP ROLE bob\" implies \"and drop\n> all roles which are members of bob\" and did not. I fullly agree with that\n> decision, and I'm not trying to change it one iota.\n\n\nI’m not talking about what the spec says for just “DROP ROLE bob”, but\nrather what the spec says for “DROP ROLE bob CASCADE”. 
The latest versions\nadd the drop behavior syntax to the end of DROP ROLE and it can be either\nCASCADE or RESTRICT, and if it’s CASCADE then the rule is to run the\nREVOKEs that I’ve been talking about.\n\n> Given that they didn't, it seems\n> very clear that making such a change would very much be a deviation and\n> violation of the spec.\n>\n> Sure, and I'm not proposing any such change.\n\n\nBut.. you are, because what I’ve been talking about has specifically been\nthe spec-defined “CASCADE” case, not bare DROP ROLE.\n\n> That we invented some behind-the-scenes concept\n> of role ownership where we track who actually created what role and then\n> use that info to transform a REVOKE into a DROP doesn't make such a\n> transformation OK.\n>\n> I think I understand why you say this. You seem to be conflating the idea\n> of having privileges on role \"bob\" to being owned by role \"bob\". That's\n> not the case. 
the spec saying what DROP ROLE ...\n> CASCADE is supposed to do. I found some Vertica docs that say what Vertica\n> does. I found some Enterprise DB docs about what Advanced Server does (or\n> course, since I work here.) I don't see much else.\n\n\nThey’re valid commands in the version I’m looking at, though I think\nactually that this is a pre-release as apparently 2016 is the latest when I\nthought there was something more recent. I’m not sure if the 2016 version\nincluded the CASCADE option for DROP ROLE or not. Even if it’s only a\npreview, sure looks like this is the direction they’re going in and it\nseems consistent, at least to me, with other things they’ve done in this\narea…\n\nYou have quoted me parts of the spec about what REVOKE is supposed to do,\n> and I have responded about why I don't see the connection to DROP\n> ROLE...CASCADE.\n\n\nThe bits from REVOKE that I quoted were only at the very start of this\nthread…. This entire sub thread has only been about the DROP ROLE\nstatement..\n\nAre there any other references to either the spec or how other common\n> databases handle this?\n\n\nTrying to get some more insight into the version of the spec I’m looking at\nand if I can figure out a way that you’d be able to see what I’m talking\nabout directly.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 16:42:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Oct 6, 2021 at 16:28 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Oct 6, 2021 at 3:29 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Does that mean that we also get to change what a specific set of\n> > commands, which are all well-defined in the standard, do even when that\n> > goes against what an SQL compliant implementation would do? I really\n> > don't think so. If this was *new* syntax to go along with some new\n> > feature or extension in PG, sure, we can define what that syntax does\n> > because the standard doesn't. In this case we're talking entirely about\n> > objects and statements which the standard does define.\n>\n> Well, I think what we're talking about is saying something like:\n>\n> CREATE USER mybigcustomer CREATEROLE;\n>\n> And then having the mybigcustomer role be able to create other roles,\n> which would be automatically dropped if I later said:\n>\n> DROP USER mybigcustomer CASCADE;\n>\n> Since AFAIK CREATEROLE is not in the specification, I think we're\n> perfectly free to say that it alters the behavior of the subsequent\n> DROP USER command in any way that we judge reasonable. I agree that we\n> need to have SQL-standard syntax do SQL-standard things, but it\n> doesn't have to be the case that the whole command goes unmentioned by\n> the specification. Options that we add to CREATE USER or CREATE TABLE\n> or any other command can modify the behavior of those objects, and the\n> spec has nothing to say about it.\n>\n> Now that doesn't intrinsically mean that it's a good idea. I think\n> what I hear you saying is that you find it pretty terrifying that\n> \"DROP USER mybigcustomer CASCADE;\" could blow away a lot of users and\n> a lot of tables and that could be scary. And I agree, but that's a\n> design question, not a spec question. Today, there is not, in\n> PostgreSQL, a DROP USER .. CASCADE variant. If there are objects that\n> depend on the user, DROP USER fails. 
So we could for example decide\n> that DROP USER .. CASCADE will cascade to other users, but not to\n> regular objects. Or maybe that's too inconsistent, and we should do\n> something like DROP ROLES OWNED BY [role]. Or maybe having both DROP\n> OWNED BY and DROP ROLES OWNED BY is too weird, and the existing DROP\n> OWNED BY [role] command should also cascade to roles. Those kinds of\n> things seem worth discussing to me, to come up with the behavior that\n> will work best for people. But I do disagree with the idea that we're\n> not free to innovate here. We make up new SQL syntax and new\n> configuration variables and all kinds of new things all the time, and\n> I don't think this is any different.\n\n\nThis specific syntax, including the CASCADE bit, has, at minimum, at least\nbeen contemplated by the SQL folks sufficiently to be described in one\nspecific way. I don’t have a copy of 2016 handy, unfortunately, and so I’m\nnot sure if it’s described that way in a “stable” version of the standard\nor not (it isn’t defined in the 2006 draft I’ve seen), but ultimately I\ndon’t think we are really talking about entirely net-new syntax here…\n\nIf we were, that would be different and perhaps we would just be guessing\nat what the standard might do in the future, but I don’t think it’s an\nopen-ended question at this point..\n\n(Even if it was, I have to say that the direction that they’re going in\ncertainly seems consistent to me, anyway, with what’s been done in the past\nand I think it’d be bad of us to go in a different direction from that\nsince it’d be difficult for us to change it later when the new spec comes\nout and contradicts what we decided to do..)\n\nThanks,\n\nStephen",
"msg_date": "Wed, 6 Oct 2021 16:48:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 6, 2021, at 1:48 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> This specific syntax, including the CASCADE bit, has, at minimum, at least been contemplate by the SQL folks sufficiently to be described in one specific way. I don’t have a copy of 2016 handy, unfortunately, and so I’m not sure if it’s described that way in a “stable” version of the standard or not (it isn’t defined in the 2006 draft I’ve seen), but ultimately I don’t think we are really talking about entirely net-new syntax here…\n> \n> If we were, that would be different and perhaps we would just be guessing at what the standard might do in the future, but I don’t think it’s an open ended question at this point..\n> \n> (Even if it was, I have to say that the direction that they’re going in certainly seems consistent to me, anyway, with what’s been done in the past and I think it’d be bad of us to go in a different direction from that since it’d be difficult for us to change it later when the new spec comes out and contradicts what we decided to do..)\n\nAssuming no concept of role ownership exists, but that DROP ROLE bob CASCADE is implemented in a spec compliant way, if there is a role \"bob\" who owns various objects, what happens when DROP ROLE bob CASCADE is performed? Do bob's objects get dropped, do they get orphaned, or do they get assigned to some other owner? I would expect that they get dropped, but I'd like to know what the spec says about it before going any further with this discussion. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:01:56 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On 10/6/21 8:48 PM, Stephen Frost wrote:\n> Consider that with what you're proposing, a user could execute the\n> following series of entirely SQL-spec compliant statements, and get\n> very different results depending on if we have this 'ownership' concept\n> or not:\n> \n> SET ROLE postgres;\n> CREATE ROLE r1;\n> \n> SET ROLE r1;\n> CREATE ROLE r2;\n> \n> SET ROLE postgres;\n> DROP ROLE r1 CASCADE;\n> \n> With what you're suggesting, the end result would be that r2 no longer\n> exists, whereas with the spec-defined behvaior, r2 *would* still exist.\n\nThe way I read the spec, r2 would be destroyed along with its objects.\n\n12.7 GR 30.b.i says to destroy all abandoned role authorization\ndescriptors, and r2 matches that according to my reading of 12.7 GR 7.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 7 Oct 2021 11:06:06 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Vik Fearing (vik@postgresfriends.org) wrote:\n> On 10/6/21 8:48 PM, Stephen Frost wrote:\n> > Consider that with what you're proposing, a user could execute the\n> > following series of entirely SQL-spec compliant statements, and get\n> > very different results depending on if we have this 'ownership' concept\n> > or not:\n> > \n> > SET ROLE postgres;\n> > CREATE ROLE r1;\n> > \n> > SET ROLE r1;\n> > CREATE ROLE r2;\n> > \n> > SET ROLE postgres;\n> > DROP ROLE r1 CASCADE;\n> > \n> > With what you're suggesting, the end result would be that r2 no longer\n> > exists, whereas with the spec-defined behvaior, r2 *would* still exist.\n> \n> The way I read the spec, r2 would be destroyed along with its objects.\n> \n> 12.7 GR 30.b.i says to destroy all abandoned role authorization\n> descriptors, and r2 matches that according to my reading of 12.7 GR 7.\n\n12.7 refers to the \"revoke statement\", just so folks are able to follow.\n\nI concur that 30.b.1 says that.\n\nWhat I disagree with, however, is that a 'role authorization descriptor'\nequates to a 'role'.\n\n12.6 is 'drop role statement' and it's \"Function\" is \"Destroy a role\"\n\n12.7 is 'revoke statement' and it's \"Function\" is \"Destroy privileges\nand role authorizations\".\n\nIn other words, my reading is that a \"role authorization descriptor\" is\nthe equivilant of a row in pg_auth_members, not one in pg_authid. This\nis further substantiated in Framework, 4.4.6 Roles, which makes a clear\ndistinction between \"role\" and \"role authorization\".\n\nI certainly don't think that \"REVOKE R FROM A;\" should be going around\ndropping roles, yet your reading would imply that it should be.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 10:21:45 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 6, 2021, at 1:48 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> > This specific syntax, including the CASCADE bit, has, at minimum, at least been contemplate by the SQL folks sufficiently to be described in one specific way. I don’t have a copy of 2016 handy, unfortunately, and so I’m not sure if it’s described that way in a “stable” version of the standard or not (it isn’t defined in the 2006 draft I’ve seen), but ultimately I don’t think we are really talking about entirely net-new syntax here…\n> > \n> > If we were, that would be different and perhaps we would just be guessing at what the standard might do in the future, but I don’t think it’s an open ended question at this point..\n> > \n> > (Even if it was, I have to say that the direction that they’re going in certainly seems consistent to me, anyway, with what’s been done in the past and I think it’d be bad of us to go in a different direction from that since it’d be difficult for us to change it later when the new spec comes out and contradicts what we decided to do..)\n> \n> Assuming no concept of role ownership exists, but that DROP ROLE bob CASCADE is implemented in a spec compliant way, if there is a role \"bob\" who owns various objects, what happens when DROP ROLE bob CASCADE is performed? Do bob's objects get dropped, do they get orphaned, or do they get assigned to some other owner? I would expect that they get dropped, but I'd like to know what the spec says about it before going any further with this discussion. \n\nWhile the spec does talk about roles and how they can own objects, such\nas schemas, the 'drop role statement' doesn't appear to say anything\nabout what happens to the objects which that role owns (in any case\nof CASCADE, RESTRICT, or no drop behavior, is specified).\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 10:43:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 7:43 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Assuming no concept of role ownership exists, but that DROP ROLE bob CASCADE is implemented in a spec compliant way, if there is a role \"bob\" who owns various objects, what happens when DROP ROLE bob CASCADE is performed? Do bob's objects get dropped, do they get orphaned, or do they get assigned to some other owner? I would expect that they get dropped, but I'd like to know what the spec says about it before going any further with this discussion. \n> \n> While the spec does talk about roles and how they can own objects, such\n> as schemas, the 'drop role statement' doesn't appear to say anything\n> about what happens to the objects which that role owns (in any case\n> of CASCADE, RESTRICT, or no drop behavior, is specified).\n\nHmmph. I think it would be strange if all of the following were true:\n\n1) DROP ROLE bob CASCADE drops all objects owned by bob\n2) Roles can own other roles\n3) DROP ROLE bob CASCADE never cascades to other roles\n\nI'm assuming you see the inconsistency in that set of rules. So, one of them must be wrong. You've just replied that the spec is mute on the subject of #1. Is there any support in the spec for claiming that #2 is wrong?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 07:48:38 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 7, 2021, at 7:43 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >> Assuming no concept of role ownership exists, but that DROP ROLE bob CASCADE is implemented in a spec compliant way, if there is a role \"bob\" who owns various objects, what happens when DROP ROLE bob CASCADE is performed? Do bob's objects get dropped, do they get orphaned, or do they get assigned to some other owner? I would expect that they get dropped, but I'd like to know what the spec says about it before going any further with this discussion. \n> > \n> > While the spec does talk about roles and how they can own objects, such\n> > as schemas, the 'drop role statement' doesn't appear to say anything\n> > about what happens to the objects which that role owns (in any case\n> > of CASCADE, RESTRICT, or no drop behavior, is specified).\n> \n> Hmmph. I think it would be strange if all of the following were true:\n> \n> 1) DROP ROLE bob CASCADE drops all objects owned by bob\n> 2) Roles can own other roles\n> 3) DROP ROLE bob CASCADE never cascades to other roles\n> \n> I'm assuming you see the inconsistency in that set of rules. So, one of them must be wrong. You've just replied that the spec is mute on the subject of #1. Is there any support in the spec for claiming that #2 is wrong?\n\nPretty sure I mentioned this before, but the spec doesn't seem to really\nsay anything about roles owning other roles, so #2 isn't part of the\nspec. 
#1 also isn't supported by the spec from what I can see.\n\nWhen the statement is:\n\nDROP ROLE bob;\n\nor\n\nDROP ROLE bob RESTRICT;\n\nthen the command \"REVOKE bob FROM A RESTRICT;\" is supposed to be run BUT\nis supposed to throw an exception if there are \"any dependencies on the\nrole.\"\n\nIf the statement is:\n\nDROP ROLE bob CASCADE;\n\nthen the command \"REVOKE bob FROM A CASCADE;\" is run and shouldn't throw\nan exception.\n\nI don't think the spec supports any of the three rules you list.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 12:05:19 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 9:05 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Hmmph. I think it would be strange if all of the following were true:\n>> \n>> 1) DROP ROLE bob CASCADE drops all objects owned by bob\n>> 2) Roles can own other roles\n>> 3) DROP ROLE bob CASCADE never cascades to other roles\n>> \n>> I'm assuming you see the inconsistency in that set of rules. So, one of them must be wrong. You've just replied that the spec is mute on the subject of #1. Is there any support in the spec for claiming that #2 is wrong?\n> \n> Pretty sure I mentioned this before, but the spec doesn't seem to really\n> say anything about roles owning other roles, so #2 isn't part of the\n> spec.\n\nRegulations and specifications are usually thought about as either \"permissive\" or \"prohibitory\". Permissive rules allow anything that isn't expressly prohibited. Prohibitive rules prohibit anything that isn't explicitly permitted. I'm taking the SQL spec to be a permissive set of rules. \n\nI'm reasonable enough to concede that even if something is not explicitly prohibited, it is still effectively prohibited if it cannot be done without also doing some other thing that is prohibited. \n\nFrom your statements, I take it that #2 is allowed, at least if it doesn't necessarily lead to some other violation. 
So tentatively, I conclude that roles may own other roles.\n\n> #1 also isn't supported by the spec from what I can see.\n\nFrom that, I tentatively conclude that #1 is allowed, though I am aware that you may argue that it necessarily violates this next thing...\n\n> When the statement is:\n> \n> DROP ROLE bob;\n> \n> or\n> \n> DROP ROLE bob RESTRICT;\n> \n> then the command \"REVOKE bob FROM A RESTRICT;\" is supposed to be run BUT\n> is supposed to throw an exception if there are \"any dependencies on the\n> role.\"\n\nYeah, I don't think my proposal violates this.\n\n> If the statement is:\n> \n> DROP ROLE bob CASCADE;\n> \n> then the command \"REVOKE bob FROM A CASCADE;\" is run and shouldn't throw\n> an exception.\n\nRight, and this will be run. It's just that other stuff, like dropping owned objects, will also be run. I'm not seeing a prohibition here, just a mandate, and the proposal fulfills the mandate.\n\n> I don't think the spec supports any of the three rules you list.\n\nAnd I'm not seeing that it prohibits any of them.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 09:46:39 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On 10/7/21 4:21 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Vik Fearing (vik@postgresfriends.org) wrote:\n>> On 10/6/21 8:48 PM, Stephen Frost wrote:\n>>> Consider that with what you're proposing, a user could execute the\n>>> following series of entirely SQL-spec compliant statements, and get\n>>> very different results depending on if we have this 'ownership' concept\n>>> or not:\n>>>\n>>> SET ROLE postgres;\n>>> CREATE ROLE r1;\n>>>\n>>> SET ROLE r1;\n>>> CREATE ROLE r2;\n>>>\n>>> SET ROLE postgres;\n>>> DROP ROLE r1 CASCADE;\n>>>\n>>> With what you're suggesting, the end result would be that r2 no longer\n>>> exists, whereas with the spec-defined behvaior, r2 *would* still exist.\n>>\n>> The way I read the spec, r2 would be destroyed along with its objects.\n>>\n>> 12.7 GR 30.b.i says to destroy all abandoned role authorization\n>> descriptors, and r2 matches that according to my reading of 12.7 GR 7.\n> \n> 12.7 refers to the \"revoke statement\", just so folks are able to follow.\n> \n> I concur that 30.b.1 says that.\n> \n> What I disagree with, however, is that a 'role authorization descriptor'\n> equates to a 'role'.\n\nOkay.\n\n> 12.6 is 'drop role statement' and it's \"Function\" is \"Destroy a role\"\n> \n> 12.7 is 'revoke statement' and it's \"Function\" is \"Destroy privileges\n> and role authorizations\".\n> \n> In other words, my reading is that a \"role authorization descriptor\" is\n> the equivilant of a row in pg_auth_members, not one in pg_authid. 
This\n> is further substantiated in Framework, 4.4.6 Roles, which makes a clear\n> distinction between \"role\" and \"role authorization\".\n\nI was looking for this distinction in Foundation and didn't think to\nlook in Framework (I wish this thing would be just one huge document),\nso thanks for pointing me to that.\n\nI think I got confused by 12.4 <role definition> putting in the General\nRules that a role authorization descriptor is created, but putting that\na role descriptor is created in the *Syntax Rules*. And that is in fact\nthe *only* place \"role descriptor\" appears in Foundation.\n\n> I certainly don't think that \"REVOKE R FROM A;\" should be going around\n> dropping roles, yet your reading would imply that it should be.\n\nI can agree with you now, but it's certainly not the easiest thing to\ninterpret.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 7 Oct 2021 18:52:09 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "On Thu, Oct 7, 2021 at 12:52 PM Vik Fearing <vik@postgresfriends.org> wrote:\n> I can agree with you now, but it's certainly not the easiest thing to\n> interpret.\n\nThat's putting it mildly.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Oct 2021 12:54:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 7, 2021, at 9:05 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> > I don't think the spec supports any of the three rules you list.\n> \n> And I'm not seeing that it prohibits any of them.\n\nI don't agree that we can decide to have random statements which are\ndefined explicitly in the standard to do X end up doing X+Y, simply\nbecause the standard didn't explicitly say \"you can't have Y happen when\nX does\".\n\nI hate to think what the standard would look like if it was required\nthat every possible thing that could happen when a statement is run had\nto be explicitly listed as \"don't have this happen when this command\nruns\" except for the few things that the standard defines the statement\nto do.\n\nThe argument being presented here would allow us to have INSERTs perform\nCREATE ROLEs, or have DELETEs also TRUNCATE other tables that aren't\neven mentioned in the command, and still claim to be in compliance with\nthe standard.\n\nExtending the language with new syntax and then deciding how that new\nsyntax works is one thing, but taking existing, defined, syntax and\nmaking it do something other than what the standard is saying does, imv\nanyway, go against the standard. Sure, we've gone against the standard\nat times for good reasons, but I don't agree that this is anywhere close\nto a reasonable case for that.\n\nLet's just invent some new syntax for what you're looking for here that\nworks the way you want and doesn't have this issue. As I said before, I\nagree with the general usefulness of this idea, and I can even generally\nget behind the idea of role ownership to allow us to do that, but we\ncan't make 'DROP ROLE bob CASCADE;' do it, it needs to be something\nmore, like 'DROP ROLE bob CASCADE OBJECTS;' or such.\n\nI really don't understand why there's so much push back to go in that\ndirection. 
Why must 'DROP ROLE bob CASCADE;' drop all of bob's objects\nand roles \"owned\" by bob?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 13:23:56 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 10:23 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> And I'm not seeing that it prohibits any of them.\n> \n> I don't agree that we can decide to have random statements which are\n> defined explicitly in the standard to do X end up doing X+Y, simply\n> because the standard didn't explicitly say \"you can't have Y happen when\n> X does\".\n\nI agree that a clean design is important, and I wouldn't want to do this if I didn't think it was the cleanest way to go. But I am mindful of the problem you raised upthread about the spec going in some other direction, and ultimately prohibiting what I've proposed, after we've already gone and done it. I'm not as interested in what a bunch of philosophers writing a spec think, but if all the other major SQL databases go that direction and we're off in a different direction, I can certainly see the problems that would entail both for community Postgres and for my employer.\n\n> I hate to think what the standard would look like if it was required\n> that every possible thing that could happen when a statement is run had\n> to be explicitly listed as \"don't have this happen when this command\n> runs\" except for the few things that the standard defines the statement\n> to do.\n> \n> The argument being presented here would allow us to have INSERTs perform\n> CREATE ROLEs, or have DELETEs also TRUNCATE other tables that aren't\n> even mentioned in the command, and still claim to be in compliance with\n> the standard.\n\nI don't mean to be flippant, but we do allow both of those things to be done with triggers. It's not the same as if we did them automatically, but there seems to be some wiggle room concerning what a system can do.\n\n> Extending the language with new syntax and then deciding how that new\n> syntax works is one thing, but taking existing, defined, syntax and\n> making it do something other than what the standard is saying does, imv\n> anyway, go against the standard. 
Sure, we've gone against the standard\n> at times for good reasons, but I don't agree that this is anywhere close\n> to a reasonable case for that.\n> \n> Let's just invent some new syntax for what you're looking for here that\n> works the way you want and doesn't have this issue. As I said before, I\n> agree with the general usefulness of this idea, and I can even generally\n> get behind the idea of role ownership to allow us to do that, but we\n> can't make 'DROP ROLE bob CASCADE;' do it, it needs to be something\n> more, like 'DROP ROLE bob CASCADE OBJECTS;' or such.\n> \n> I really don't understand why there's so much push back to go in that\n> direction. Why must 'DROP ROLE bob CASCADE;' drop all of bob's objects\n> and roles \"owned\" by bob?\n\nBecause we've already decided how object ownership works. I didn't write any code to have roles get dropped when their owners get dropped. I just put ownership into the system and this is how it naturally works. So you are advocating that DROP...CASCADE works one way for every object type save one. I think that's an incredibly unclean design. Having DROP...CASCADE work the same way for all ownership relations for all object types without exception makes so much more sense to me.\n\nWhat if we go with what you are saying, the spec never resolves in the direction you are predicting, and all the other database vendors go the way I'm proposing, and we're the only ones with this ugly wart that you have to use a different syntax for roles than for everything else? We'll be supporting that ugly wart for years and years to come, and look ridiculous, and rightly so. I don't want to invent an ugly wart unless I'm completely forced to do so.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 10:47:07 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 7, 2021, at 10:23 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >> And I'm not seeing that it prohibits any of them.\n> > \n> > I don't agree that we can decide to have random statements which are\n> > defined explicitly in the standard to do X end up doing X+Y, simply\n> > because the standard didn't explicitly say \"you can't have Y happen when\n> > X does\".\n> \n> I agree that a clean design is important, and I wouldn't want to do this if I didn't think it was the cleanest way to go. But I am mindful of the problem you raised upthread about the spec going in some other direction, and ultimately prohibiting what I've proposed, after we've already gone and done it. I'm not as interested in what a bunch of philosophers writing a spec think, but if all the other major SQL databases go that direction and we're off in a different direction, I can certainly see the problems that would entail both for community Postgres and for my employer.\n\nIf we can agree that the proposed spec is, in fact, prohibiting what\nyou've proposed without it having to explicitly spell that out, then\nthat's progress.\n\n> > I hate to think what the standard would look like if it was required\n> > that every possible thing that could happen when a statement is run had\n> > to be explicitly listed as \"don't have this happen when this command\n> > runs\" except for the few things that the standard defines the statement\n> > to do.\n> > \n> > The argument being presented here would allow us to have INSERTs perform\n> > CREATE ROLEs, or have DELETEs also TRUNCATE other tables that aren't\n> > even mentioned in the command, and still claim to be in compliance with\n> > the standard.\n> \n> I don't mean to be flippant, but we do allow both of those things to be done with triggers. 
It's not the same as if we did them automatically, but there seems to be some wiggle room concerning what a system can do.\n\n... triggers are defined in the standard. This isn't a trigger. If\nyou'd like to be able to create an EVENT TRIGGER on DROP ROLE to do\nwhatever you want, I wouldn't have any issue with that.\n\n> > Extending the language with new syntax and then deciding how that new\n> > syntax works is one thing, but taking existing, defined, syntax and\n> > making it do something other than what the standard is saying does, imv\n> > anyway, go against the standard. Sure, we've gone against the standard\n> > at times for good reasons, but I don't agree that this is anywhere close\n> > to a reasonable case for that.\n> > \n> > Let's just invent some new syntax for what you're looking for here that\n> > works the way you want and doesn't have this issue. As I said before, I\n> > agree with the general usefulness of this idea, and I can even generally\n> > get behind the idea of role ownership to allow us to do that, but we\n> > can't make 'DROP ROLE bob CASCADE;' do it, it needs to be something\n> > more, like 'DROP ROLE bob CASCADE OBJECTS;' or such.\n> > \n> > I really don't understand why there's so much push back to go in that\n> > direction. Why must 'DROP ROLE bob CASCADE;' drop all of bob's objects\n> > and roles \"owned\" by bob?\n> \n> Because we've already decided how object ownership works. I didn't write any code to have roles get dropped when their owners get dropped. I just put ownership into the system and this is how it naturally works. So you are advocating that DROP...CASCADE works one way for every object type save one. I think that's an incredibly unclean design. Having DROP...CASCADE work the same way for all ownership relations for all object types without exception makes so much more sense to me.\n\nWe've decided how object ownership works related to DROP ROLE ...\nCASCADE..? I don't follow how that is the case. 
What we *do* have is\ndependency handling, but that isn't the same as ownership.\n\nFurther, DROP SCHEMA ... CASCADE is also defined in the standard and\nexplicitly says that it cascades down with DROP TABLE for tables, et al.\nThat you don't like that the standard says one thing for\nDROP SCHEMA ... CASCADE; and something else for DROP ROLE ... CASCADE;\nis laudable but doesn't change the fact that that's the case, at least\ntoday.\n\n> What if we go with what you are saying, the spec never resolves in the direction you are predicting, and all the other database vendors go the way I'm proposing, and we're the only ones with this ugly wart that you have to use a different syntax for roles than for everything else? We'll be supporting that ugly wart for years and years to come, and look ridiculous, and rightly so. I don't want to invent an ugly wart unless I'm completely forced to do so.\n\nI can't predict the future any better than the next person, I'm afraid,\nso I don't have any particular insight into when this might become\nfinal. If we want to avoid any risk here of conflicting with what the\nstandard might do in this area then the best way to do that would be to\nsimply not implement anything for the exact 'DROP ROLE bob CASCADE;'\nsyntax and instead come up with something else, at least initially.\nThat way, whenever the standard comes out which has something definitive\nto say about how 'DROP ROLE bob CASCADE;' should work, we can implement\nwhatever it is that they decided upon and hope that other databases do\ntoo.\n\nI find it very unlikely that the standard will come out any time soon\nwith a concept of role ownership though, making it very unlikely that a\ndifferent decision will be made regarding how DROP ROLE ... CASCADE;\nworks. That said, the way to avoid such a possibility is to use some\nother syntax, which is what I've been advocating for since the start.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 14:30:12 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 11:30 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n>> Because we've already decided how object ownership works. I didn't write any code to have roles get dropped when their owners get dropped. I just put ownership into the system and this is how it naturally works. So you are advocating that DROP...CASCADE works one way for every object type save one. I think that's an incredibly unclean design. Having DROP...CASCADE work the same way for all ownership relations for all object types without exception makes so much more sense to me.\n> \n> We've decided how object ownership works related to DROP ROLE ...\n> CASCADE..? I don't follow how that is the case. What we *do* have is\n> dependency handling, but that isn't the same as ownership.\n\nWe have a concept of objects being owned, and we prohibit the owner being NULL. You've already said upthread that DROP ROLE bob CASCADE must revoke \"bob\" from other roles, must remove \"bob\", and must not fail. How do you handle this?\n\n CREATE ROLE bob;\n GRANT CREATE ON DATABASE regression TO bob;\n SET SESSION AUTHORIZATION bob;\n CREATE SCHEMA bobs_schema;\n RESET SESSION AUTHORIZATION;\n DROP ROLE bob CASCADE;\n\nYou can't have bobs_schema have a null owner, nor can you refuse to drop bob. Do you just decide that the role dropping \"bob\" automatically becomes the new owner of bobs_schema? Do you assign it to the database owner? What do you do? And whatever you say we should do, how is that more spec compliant than what I propose we do? I would expect the argument against X performing X+Y would cut against anything you suggest as much as it cuts against what I suggest.\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 12:12:27 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 7, 2021, at 11:30 AM, Stephen Frost <sfrost@snowman.net> wrote:\n> >> Because we've already decided how object ownership works. I didn't write any code to have roles get dropped when their owners get dropped. I just put ownership into the system and this is how it naturally works. So you are advocating that DROP...CASCADE works one way for every object type save one. I think that's an incredibly unclean design. Having DROP...CASCADE work the same way for all ownership relations for all object types without exception makes so much more sense to me.\n> > \n> > We've decided how object ownership works related to DROP ROLE ...\n> > CASCADE..? I don't follow how that is the case. What we *do* have is\n> > dependency handling, but that isn't the same as ownership.\n> \n> We have a concept of objects being owned, and we prohibit the owner being NULL. You've already said upthread that DROP ROLE bob CASCADE must revoke \"bob\" from other roles, must remove \"bob\", and must not fail. How do you handle this?\n\nUh, I didn't say it 'must not fail'.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 15:19:02 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 12:19 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> Uh, I didn't say it 'must not fail'.\n\nAh-hah, right, I misremembered. You were quoting the spec at me, and I went to read a copy of the spec as a consequence, and saw something like that there. Let me see if I can find that again. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 12:31:56 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 12:31 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Let me see if I can find that again.\n\n\n12.6 <drop role statement>\n\n<drop role statement> ::=\nDROP ROLE <role name>\n\nSyntax Rules\n1) Let R be the role identified by the specified <role name>.\n\nGeneral Rules\n1) Let A be any <authorization identifier> identified by a role authorization descriptor as having been granted\nto R.\n2) The following <revoke role statement> is effectively executed without further Access Rule checking:\nREVOKE R FROM A\n3) The descriptor of R is destroyed.\n\n\nSo DROP ROLE bob is expected to execute the revoke command. Let's see what that says....\n\n<revoke role statement> ::=\nREVOKE [ ADMIN OPTION FOR ] <role revoked> [ { <comma> <role revoked> }... ]\nFROM <grantee> [ { <comma> <grantee> }... ]\n[ GRANTED BY <grantor> ]\n<drop behavior>\n\n31) If RESTRICT is specified, and there exists an abandoned privilege descriptor, abandoned view,\nabandoned table constraint, abandoned assertion, abandoned domain constraint, lost domain, lost column,\nlost schema, or a descriptor that includes an impacted data type descriptor, impacted collation, impacted\ncharacter set, abandoned user-defined type, or abandoned routine descriptor, then an exception condition\nis raised: dependent privilege descriptors still exist.\n33) Case:\na) If the <revoke statement> is a <revoke privilege statement>, then\n\t\t... SNIP ...\nb) If the <revoke statement> is a <revoke role statement>, then:\ni) If CASCADE is specified, then all abandoned role authorization descriptors are destroyed.\nii) All abandoned privilege descriptors are destroyed.\n34) For every abandoned view descriptor V, let S1.VN be the <table name> of V. The following <drop view\nstatement> is effectively executed without further Access Rule checking:\nDROP VIEW S1.VN CASCADE\n35) For every abandoned table descriptor T, let S1.TN be the <table name> of T. 
The following <drop table\nstatement> is effectively executed without further Access Rule checking:\nDROP TABLE S1.TN CASCADE\n\n\n\nThe way I read that, DROP ROLE implies REVOKE ROLE, and I'm inferring that DROP ROLE CASCADE would therefore imply REVOKE ROLE CASCADE. Then interpreting 31's description of how REVOKE ROLE RESTRICT works under the principle Expressio Unius Est Exclusio Alterius I conclude that REVOKE ROLE CASCADE must not raise an exception. That leads me to the conclusion that DROP ROLE CASCADE must not raise an exception.\n\nSorry for misremembering this as something you said.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 7 Oct 2021 13:50:27 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "Greetings,\n\n* Mark Dilger (mark.dilger@enterprisedb.com) wrote:\n> > On Oct 7, 2021, at 12:31 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > \n> > Let me see if I can find that again.\n> \n> 12.6 <drop role statement>\n> \n> <drop role statement> ::=\n> DROP ROLE <role name>\n> \n> Syntax Rules\n> 1) Let R be the role identified by the specified <role name>.\n> \n> General Rules\n> 1) Let A be any <authorization identifier> identified by a role authorization descriptor as having been granted\n> to R.\n> 2) The following <revoke role statement> is effectively executed without further Access Rule checking:\n> REVOKE R FROM A\n> 3) The descriptor of R is destroyed.\n> \n> \n> So DROP ROLE bob is expected to execute the revoke command. Let's see what that says....\n> \n> <revoke role statement> ::=\n> REVOKE [ ADMIN OPTION FOR ] <role revoked> [ { <comma> <role revoked> }... ]\n> FROM <grantee> [ { <comma> <grantee> }... ]\n> [ GRANTED BY <grantor> ]\n> <drop behavior>\n> \n> 31) If RESTRICT is specified, and there exists an abandoned privilege descriptor, abandoned view,\n> abandoned table constraint, abandoned assertion, abandoned domain constraint, lost domain, lost column,\n> lost schema, or a descriptor that includes an impacted data type descriptor, impacted collation, impacted\n> character set, abandoned user-defined type, or abandoned routine descriptor, then an exception condition\n> is raised: dependent privilege descriptors still exist.\n> 33) Case:\n> a) If the <revoke statement> is a <revoke privilege statement>, then\n> \t\t... SNIP ...\n> b) If the <revoke statement> is a <revoke role statement>, then:\n> i) If CASCADE is specified, then all abandoned role authorization descriptors are destroyed.\n> ii) All abandoned privilege descriptors are destroyed.\n> 34) For every abandoned view descriptor V, let S1.VN be the <table name> of V. 
The following <drop view\n> statement> is effectively executed without further Access Rule checking:\n> DROP VIEW S1.VN CASCADE\n> 35) For every abandoned table descriptor T, let S1.TN be the <table name> of T. The following <drop table\n> statement> is effectively executed without further Access Rule checking:\n> DROP TABLE S1.TN CASCADE\n> \n> The way I read that, DROP ROLE implies REVOKE ROLE, and I'm inferring that DROP ROLE CASCADE would therefore imply REVOKE ROLE CASCADE. Then interpreting 31's description of how REVOKE ROLE RESTRICT works under the principle Expressio Unius Est Exclusio Alterius I conclude that REVOKE ROLE CASCADE must not raise an exception. That leads me to the conclusion that DROP ROLE CASCADE must not raise an exception.\n\nI don't actually think REVOKE ROLE CASCADE must not fail, nor do I see\nthat as explicit in anything you quote above.\n\nWhat also is missing from the quotes above is what actually defines an\nabandoned object. If you read back through how the spec explains when\nan object is considered to be 'abandoned', it's more complicated. The\ngist of it, however, is that if the role loses access rights to a type,\nfor example, and that type is used in a table, then a cascade does\nremove that table (and various permutations of that for other object\ntypes). There isn't any equivalent for roles and it isn't really about\n'ownership' but about USAGE rights. 
In some cases (such as that of a\nVIEW), while we don't explicitly perform the DROP that the spec calls\nfor, we check the privileges at VIEW access time, making the view not\nusable if the owner of the view no longer has access to the underlying\ntables.\n\nI do appreciate that this illustrates that you can end up with things\nbeing DROP'd, if you explicitly follow the spec, due to a REVOKE\nCASCADE statement, something which I had argued seemed rather dangerous\nand counter-intuitive (and still do) but that case isn't quite the same\nand is something we've also already deviated from- in the direction of\navoiding having objects get DROP'd in such cases.\n\n> Sorry for misremembering this as something you said.\n\nNo worries.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 7 Oct 2021 22:44:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": true,
"msg_subject": "Re: Role Self-Administration"
},
{
"msg_contents": "\n\n> On Oct 7, 2021, at 7:44 PM, Stephen Frost <sfrost@snowman.net> wrote:\n> \n> I don't actually think REVOKE ROLE CASCADE must not fail, nor do I see\n> that as explicit in anything you quote above.\n\nI don't see that myself, but I thought that you would, given your other statements about how we shouldn't take a spec requirement to do X and turn it into doing X+Y, because the user wouldn't be expecting Y. So I thought that if DROP ROLE bob was defined in the spec to basically just do REVOKE bob FROM EVERYBODY, and if the CASCADE version of that wasn't supposed to fail, then you'd say that DROP ROLE bob CASCADE wasn't supposed to fail either. (Failing is the unexpected action Y that I expected your rule to prohibit.)\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 8 Oct 2021 15:30:47 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Role Self-Administration"
}
]
[
{
"msg_contents": "Hi,\n\nLog output takes between several seconds and a few tens of seconds when using \n‘SELECT pg_log_backend_memory_contexts(1234)’ with the PID of the ‘autovacuum \nlauncher’.\nI made a patch for this problem.\n\nregards,\nKoyu Tanigawa",
"msg_date": "Tue, 05 Oct 2021 18:20:11 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 2:50 PM bt21tanigaway\n<bt21tanigaway@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> Log output takes time between several seconds to a few tens when using\n> ‘SELECT pg_log_backend_memory_contexts(1234)’ with PID of ‘autovacuum\n> launcher’.\n> I made a patch for this problem.\n\nThanks for the patch. Do we also need to do the change in\nHandleMainLoopInterrupts, HandleCheckpointerInterrupts,\nHandlePgArchInterrupts, HandleWalWriterInterrupts as we don't call\nCHECK_FOR_INTERRUPTS() there? And there are also other processes:\npgstat process/statistics collector, syslogger, walreceiver,\nwalsender, background workers, parallel workers and so on. I think we\nneed to change in all the processes where we don't call\nCHECK_FOR_INTERRUPTS() in their main loops.\n\nBefore doing that, we need to be sure of whether we allow only the\nuser sessions/backend process's memory contexts with\npg_log_backend_memory_contexts or any process that is forked by\npostmaster i.e. auxiliary process? The function\npg_log_backend_memory_contexts supports the processes that return a\npgproc structure from this function BackendPidGetProc, it doesn't\nattempt to get pgproc structure from AuxiliaryPidGetProc.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 5 Oct 2021 16:05:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n> On Tue, Oct 5, 2021 at 2:50 PM bt21tanigaway\n> <bt21tanigaway@oss.nttdata.com> wrote:\n>> Log output takes time between several seconds to a few tens when using\n>> ‘SELECT pg_log_backend_memory_contexts(1234)’ with PID of ‘autovacuum\n>> launcher’.\n>> I made a patch for this problem.\n\n> Thanks for the patch. Do we also need to do the change in\n> HandleMainLoopInterrupts, HandleCheckpointerInterrupts,\n> HandlePgArchInterrupts, HandleWalWriterInterrupts as we don't call\n> CHECK_FOR_INTERRUPTS() there?\n\nIt's not real clear to me why we need to care about this in those\nprocesses' idle loops. Their memory consumption is unlikely to be\nvery interesting in that state, nor could it change before they\nwake up.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Oct 2021 08:27:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 8:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's not real clear to me why we need to care about this in those\n> processes' idle loops. Their memory consumption is unlikely to be\n> very interesting in that state, nor could it change before they\n> wake up.\n\nPerhaps that's so, but it doesn't seem like a good reason not to make\nthem more responsive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:16:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On Tue, Oct 05, 2021 at 12:16:06PM -0400, Robert Haas wrote:\n> Perhaps that's so, but it doesn't seem like a good reason not to make\n> them more responsive.\n\nYeah, that's still some information that the user asked for. Looking\nat the things that have a PGPROC entry, should we worry about the main\nloop of the logical replication launcher?\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 08:40:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 5:10 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Oct 05, 2021 at 12:16:06PM -0400, Robert Haas wrote:\n> > Perhaps that's so, but it doesn't seem like a good reason not to make\n> > them more responsive.\n>\n> Yeah, that's still some information that the user asked for. Looking\n> at the things that have a PGPROC entry, should we worry about the main\n> loop of the logical replication launcher?\n\nIMHO, we can support all the processes which return a PGPROC entry by\nboth BackendPidGetProc and AuxiliaryPidGetProc where the\nAuxiliaryPidGetProc would cover the following processes. I'm not sure\none is interested in the memory context info of auxiliary processes.\n\n/*\n * We set aside some extra PGPROC structures for auxiliary processes,\n * ie things that aren't full-fledged backends but need shmem access.\n *\n * Background writer, checkpointer, WAL writer and archiver run during normal\n * operation. Startup process and WAL receiver also consume 2 slots, but WAL\n * writer is launched only after startup has exited, so we only need 5 slots.\n */\n#define NUM_AUXILIARY_PROCS 5\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 07:43:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "Thanks for your review.\n\n>> Thanks for the patch. Do we also need to do the change in\n>> HandleMainLoopInterrupts, HandleCheckpointerInterrupts,\n>> HandlePgArchInterrupts, HandleWalWriterInterrupts as we don't call\n>> CHECK_FOR_INTERRUPTS() there?\n\n> Yeah, that's still some information that the user asked for. Looking\n> at the things that have a PGPROC entry, should we worry about the main\n> loop of the logical replication launcher?\n\n・Now, the target of “pg_log_backend_memory_contexts()” is “autovacuum \nlauncher” and “logical replication launcher”. I observed that the delay \noccurred only in “autovacuum launcher” not in “logical replication \nlauncher”.\n・”autovacuum launcher” used “HandleAutoVacLauncherInterrupts()” ( not \nincluding “ProcessLogMemoryContextInterrupt()” ) instead of \n“ProcessInterrupts() ( including “ProcessLogMemoryContextInterrupt()” ). \n“ProcessLogMemoryContextInterrupt()” will not be executed until the next \n“ProcessInterrupts()” is executed. So, I added \n“ProcessLogMemoryContextInterrupt()”.\n・”logical replication launcher” uses only “ProcessInterrupts()”. So, We \ndon’t have to fix it.\n\n> IMHO, we can support all the processes which return a PGPROC entry by\n> both BackendPidGetProc and AuxiliaryPidGetProc where the\n> AuxiliaryPidGetProc would cover the following processes. I'm not sure\n> one is interested in the memory context info of auxiliary processes.\n\n・The purpose of this patch is to solve the delay problem, so I would \nlike another patch to deal with “ BackendPidGetProc” and \n“AuxiliaryPidGetProc”.\n\nRegards,\nKoyu Tanigawa\n\n\n",
"msg_date": "Wed, 06 Oct 2021 17:14:51 +0900",
"msg_from": "bt21tanigaway <bt21tanigaway@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "\n\nOn 2021/10/06 17:14, bt21tanigaway wrote:\n> Thanks for your review.\n> \n>>> Thanks for the patch. Do we also need to do the change in\n>>> HandleMainLoopInterrupts, HandleCheckpointerInterrupts,\n>>> HandlePgArchInterrupts, HandleWalWriterInterrupts as we don't call\n>>> CHECK_FOR_INTERRUPTS() there?\n> \n>> Yeah, that's still some information that the user asked for. Looking\n>> at the things that have a PGPROC entry, should we worry about the main\n>> loop of the logical replication launcher?\n> \n> ・Now, the target of “pg_log_backend_memory_contexts()” is “autovacuum launcher” and “logical replication launcher”. I observed that the delay occurred only in “autovacuum launcher” not in “logical replication launcher”.\n> ・”autovacuum launcher” used “HandleAutoVacLauncherInterrupts()” ( not including “ProcessLogMemoryContextInterrupt()” ) instead of “ProcessInterrupts() ( including “ProcessLogMemoryContextInterrupt()” ). “ProcessLogMemoryContextInterrupt()” will not be executed until the next “ProcessInterrupts()” is executed. So, I added “ProcessLogMemoryContextInterrupt()”.\n> ・”logical replication launcher” uses only “ProcessInterrupts()”. So, We don’t have to fix it.\n\nYes.\n\n\n>> IMHO, we can support all the processes which return a PGPROC entry by\n>> both BackendPidGetProc and AuxiliaryPidGetProc where the\n>> AuxiliaryPidGetProc would cover the following processes. I'm not sure\n>> one is interested in the memory context info of auxiliary processes.\n\nI like this idea because it seems helpful at least for debug purpose.\n\n\n> ・The purpose of this patch is to solve the delay problem, so I would like another patch to deal with “ BackendPidGetProc” and “AuxiliaryPidGetProc”.\n\n+1 to improve those things separately.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Sat, 9 Oct 2021 00:28:57 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On Fri, Oct 8, 2021 at 8:58 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>> Thanks for the patch. Do we also need to do the change in\n> >>> HandleMainLoopInterrupts, HandleCheckpointerInterrupts,\n> >>> HandlePgArchInterrupts, HandleWalWriterInterrupts as we don't call\n> >>> CHECK_FOR_INTERRUPTS() there?\n> >\n> >> Yeah, that's still some information that the user asked for. Looking\n> >> at the things that have a PGPROC entry, should we worry about the main\n> >> loop of the logical replication launcher?\n> >\n> > ・Now, the target of “pg_log_backend_memory_contexts()” is “autovacuum launcher” and “logical replication launcher”. I observed that the delay occurred only in “autovacuum launcher” not in “logical replication launcher”.\n> > ・”autovacuum launcher” used “HandleAutoVacLauncherInterrupts()” ( not including “ProcessLogMemoryContextInterrupt()” ) instead of “ProcessInterrupts() ( including “ProcessLogMemoryContextInterrupt()” ). “ProcessLogMemoryContextInterrupt()” will not be executed until the next “ProcessInterrupts()” is executed. So, I added “ProcessLogMemoryContextInterrupt()”.\n> > ・”logical replication launcher” uses only “ProcessInterrupts()”. So, We don’t have to fix it.\n>\n> Yes.\n\n+1 to keep this thread for fixing the pg_log_backend_memory_contexts()\nissue for the autovacuum launcher. And the patch\n\"fix_log_output_delay\" looks good to me. I think we can add a CF\nentry.\n\n> >> IMHO, we can support all the processes which return a PGPROC entry by\n> >> both BackendPidGetProc and AuxiliaryPidGetProc where the\n> >> AuxiliaryPidGetProc would cover the following processes. 
I'm not sure\n> >> one is interested in the memory context info of auxiliary processes.\n>\n> I like this idea because it seems helpful at least for debug purpose.\n>\n>\n> > ・The purpose of this patch is to solve the delay problem, so I would like another patch to deal with “ BackendPidGetProc” and “AuxiliaryPidGetProc”.\n>\n> +1 to improve those things separately.\n\nI started a separate thread [1], and I have a couple of open points\nthere. Please feel free to provide your thoughts in [1].\n\n[1] https://www.postgresql.org/message-id/flat/CALj2ACU1nBzpacOK2q%3Da65S_4%2BOaz_rLTsU1Ri0gf7YUmnmhfQ%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 9 Oct 2021 18:56:06 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "Thanks for the patch!\n\nIt might be self-evident, but since there are comments on other process \nhandlings in HandleAutoVacLauncherInterrupts like below, how about \nadding a comment for the consistency?\n\n /* Process barrier events */\n if (ProcSignalBarrierPending)\n ProcessProcSignalBarrier();\n\n /* Process sinval catchup interrupts that happened while sleeping \n*/\n ProcessCatchupInterrupt();\n\n\nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 11 Oct 2021 14:28:19 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "On 2021/10/11 14:28, torikoshia wrote:\n> Thanks for the patch!\n> \n> It might be self-evident, but since there are comments on other process handlings in HandleAutoVacLauncherInterrupts like below, how about adding a comment for the consistency?\n\n+1\n\nI applied such cosmetic changes to the patch. Patch attached.\nBarring any objection, I will commit it and back-port to v14.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 11 Oct 2021 14:40:50 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
},
{
"msg_contents": "\n\nOn 2021/10/11 14:40, Fujii Masao wrote:\n> \n> \n> On 2021/10/11 14:28, torikoshia wrote:\n>> Thanks for the patch!\n>>\n>> It might be self-evident, but since there are comments on other process handlings in HandleAutoVacLauncherInterrupts like below, how about adding a comment for the consistency?\n> \n> +1\n> \n> I applied such cosmetic changes to the patch. Patch attached.\n> Barring any objection, I will commit it and back-port to v14.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 12 Oct 2021 09:53:07 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix pg_log_backend_memory_contexts() 's delay"
}
] |
[
{
"msg_contents": "This is an early patch to use as a discussion support, only tested on CentOS 8 on master with GDAL 3.2, to add the generation of gdal*-java RPMs.This is a follow-up to the accepted PR sent to fedora to re-enable gdal-java RPMs generation for EPEL 8 (https://src.fedoraproject.org/rpms/gdal/pull-request/18).GDAL-Java RPMs support is a technical need for my GeoServer, and I'm already using postgres' GDAL RPMs there, so being able to install a single GDAL version in the environment would be a plus.",
"msg_date": "Tue, 5 Oct 2021 17:41:39 +0200 (CEST)",
"msg_from": "Guillaume FOREAU <guillaumeforeau@orange.fr>",
"msg_from_op": true,
"msg_subject": "Add -java RPM generation to GDAL in pgrpms - p1"
}
] |
[
{
"msg_contents": "Hi,\n\n From everything I've seen, the PostgreSQL style seems to be to include\nthe * in a typedef for a function type to which pointers will be held:\n\ntypedef void (*Furbinator)(char *furbee);\n\nstruct Methods\n{\n Furbinator furbinate;\n};\n\n\nAn alternative I've sometimes used elsewhere is to typedef the function\ntype itself, and use the * when declaring a pointer to it:\n\ntypedef void Furbinator(char *furbee);\n\nstruct Methods\n{\n Furbinator *furbinate;\n};\n\n\nWhat I like about that form is it allows reusing the typedef to prototype\nany implementing function:\n\nstatic Furbinator _furbinator0;\nstatic void _furbinator0(char *furbee)\n{\n}\n\nIt doesn't completely eliminate repeating myself, because the function\ndefinition still has to be spelled out. But it's a bit less repetitive,\nand makes it visibly explicit that this function is to be a Furbinator,\nand if I get the repeating-myself part wrong, the compiler catches it\nright on the spot, not only when I try to assign it later to some\n*Furbinator-typed field.\n\nUse of the thing doesn't look any different, thanks to the equivalence\nof a function name and its address:\n\n methods.furbinate = _furbinator0;\n\nNaturally, I'm not proposing any change of existing usages, nor would\nI presume to ever submit a patch using the different style.\nIf anything, maybe I'd consider adding some new code in this style\nin PL/Java, which as an out-of-tree extension maybe isn't bound by\nevery jot and tittle of PG style, but generally has followed\nthe documented coding conventions. They seem to be silent on this\none point.\n\nSo what I'm curious about is: is there a story to how PG settled on\nthe style it uses? Is the typedef-the-function-itself style considered\nobjectionable? For any reason other than being different? 
If there were\ncompilers at one time that didn't like it, are there still any?\nAny that matter?\n\nI've found two outside references taking different positions.\n\nThe Ghostscript project has coding guidelines [0] recommending against,\nsaying \"Many compilers don't handle this correctly -- they will give\nerrors, or do the wrong thing, ...\". I can't easily tell what year\nthat guideline was written. Ghostscript goes back a long way.\n\nThe SquareSpace OpenSync coding standard [1] describes both styles\n(p. 34) and the benefits of the typedef-the-function-itself style\n(p. 35), without seeming to quite take any final position between them.\n\nRegards,\n-Chap\n\n\n\n[0] https://www.ghostscript.com/doc/9.50/C-style.htm\n[1] https://www.opensync.io/s/EDE-020-041-501_OpenSync_Coding_Standard.pdf\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:59:38 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "style for typedef of function that will be pointed to"
},
{
"msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> From everything I've seen, the PostgreSQL style seems to be to include\n> the * in a typedef for a function type to which pointers will be held:\n> typedef void (*Furbinator)(char *furbee);\n\nYup.\n\n> An alternative I've sometimes used elsewhere is to typedef the function\n> type itself, and use the * when declaring a pointer to it:\n> typedef void Furbinator(char *furbee);\n\nIs that legal C? I doubt that it was before C99 or so. As noted\nin the Ghostscript docs you came across, it certainly wouldn't have\nbeen portable back in the day.\n\n> So what I'm curious about is: is there a story to how PG settled on\n> the style it uses?\n\nSee above.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Oct 2021 13:47:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: style for typedef of function that will be pointed to"
},
{
"msg_contents": "On 10/05/21 13:47, Tom Lane wrote:\n>> An alternative I've sometimes used elsewhere is to typedef the function\n>> type itself, and use the * when declaring a pointer to it:\n>> typedef void Furbinator(char *furbee);\n> \n> Is that legal C? I doubt that it was before C99 or so. As noted\n> in the Ghostscript docs you came across, it certainly wouldn't have\n> been portable back in the day.\n\n\nIt compiles silently for me with gcc --std=c89 -Wpedantic\n\nI think that's the oldest standard I can ask gcc about. Per the manpage,\n'c89' is ISO C90 without its amendment 1, and without any gnuisms.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Tue, 5 Oct 2021 14:00:22 -0400",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Re: style for typedef of function that will be pointed to"
},
{
"msg_contents": "On 10/05/21 14:00, Chapman Flack wrote:\n> On 10/05/21 13:47, Tom Lane wrote:\n>>> An alternative I've sometimes used elsewhere is to typedef the function\n>>> type itself, and use the * when declaring a pointer to it:\n>>> typedef void Furbinator(char *furbee);\n>>\n>> Is that legal C? I doubt that it was before C99 or so. As noted\n>> in the Ghostscript docs you came across, it certainly wouldn't have\n>> been portable back in the day.\n> \n> It compiles silently for me with gcc --std=c89 -Wpedantic\n> \n> I think that's the oldest standard I can ask gcc about. Per the manpage,\n> 'c89' is ISO C90 without its amendment 1, and without any gnuisms.\n\nThere are some places in the tree where AssertVariableIsOfType is being\ncleverly used to achieve the same thing:\n\nvoid\n_PG_output_plugin_init(OutputPluginCallbacks *cb)\n{\n AssertVariableIsOfType(&_PG_output_plugin_init, LogicalOutputPluginInit);\n\n\nvoid\n_PG_archive_module_init(ArchiveModuleCallbacks *cb)\n{\n AssertVariableIsOfType(&_PG_archive_module_init, ArchiveModuleInit);\n\n\nWhile clever, doesn't it seem like a strained way to avoid just saying:\n\ntypedef void ArchiveModuleInit(ArchiveModuleCallbacks *cb);\n\n\nArchiveModuleInit _PG_archive_module_init;\n\nvoid\n_PG_archive_module_init(ArchiveModuleCallbacks *cb)\n{\n\n\nif indeed compilers C90 and later are happy with the straight typedef?\n\nNot that one would go changing existing declarations. But perhaps it\ncould be on the table for new ones?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Mon, 7 Feb 2022 16:58:21 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": true,
"msg_subject": "Re: style for typedef of function that will be pointed to"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working one of the internal features, I figured out that we\ndon't have subscription TAP tests option for \"vcregress\" tool for msvc\nbuilds. Is there any specific reason that we didn't add \"vcregress\nsubscriptioncheck\" option similar to \"vcregress recoverycheck\"? It\nlooks like one can run with \"vcregress taptest\" option and PROVE_FLAGS\nenvironment variable(I haven't tried it myself), but having\nsubscriptioncheck makes life easier.\n\nHere's a small patch for that. Thoughts?\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 5 Oct 2021 22:55:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "can we add subscription TAP test option \"vcregress subscriptioncheck\"\n for MSVC builds?"
},
{
"msg_contents": "\nOn 10/5/21 1:25 PM, Bharath Rupireddy wrote:\n> Hi,\n>\n> While working one of the internal features, I figured out that we\n> don't have subscription TAP tests option for \"vcregress\" tool for msvc\n> builds. Is there any specific reason that we didn't add \"vcregress\n> subscriptioncheck\" option similar to \"vcregress recoverycheck\"? It\n> looks like one can run with \"vcregress taptest\" option and PROVE_FLAGS\n> environment variable(I haven't tried it myself), but having\n> subscriptioncheck makes life easier.\n>\n> Here's a small patch for that. Thoughts?\n>\n\n\nI would actually prefer to reduce the number of special things in\nvcregress.pl rather than add more. We should be able to add a new set of\nTAP tests somewhere without having to do anything to vcregress.pl.\nThat's more or less why I added the taptest option in the first place.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 16:03:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-05 16:03:53 -0400, Andrew Dunstan wrote:\n> I would actually prefer to reduce the number of special things in\n> vcregress.pl rather than add more. We should be able to add a new set of\n> TAP tests somewhere without having to do anything to vcregress.pl.\n> That's more or less why I added the taptest option in the first place.\n\nMy problem with that is that that means there's no convenient way to discover\nwhat one needs to do to run all tests. Perhaps we could have one all-taptest\ntarget or such?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Oct 2021 13:38:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "\nOn 10/5/21 4:38 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-05 16:03:53 -0400, Andrew Dunstan wrote:\n>> I would actually prefer to reduce the number of special things in\n>> vcregress.pl rather than add more. We should be able to add a new set of\n>> TAP tests somewhere without having to do anything to vcregress.pl.\n>> That's more or less why I added the taptest option in the first place.\n> My problem with that is that that means there's no convenient way to discover\n> what one needs to do to run all tests. Perhaps we could have one all-taptest\n> target or such?\n>\n\nYeah. That's a much better proposal.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 5 Oct 2021 18:18:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Tue, Oct 05, 2021 at 06:18:47PM -0400, Andrew Dunstan wrote:\n> On 10/5/21 4:38 PM, Andres Freund wrote:\n>> My problem with that is that that means there's no convenient way to discover\n>> what one needs to do to run all tests. Perhaps we could have one all-taptest\n>> target or such?\n>>\n> \n> Yeah. That's a much better proposal.\n\n+1. It is so easy to forget one or more targets when running this\nscript.\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 08:43:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On 2021-Oct-06, Michael Paquier wrote:\n\n> On Tue, Oct 05, 2021 at 06:18:47PM -0400, Andrew Dunstan wrote:\n> > On 10/5/21 4:38 PM, Andres Freund wrote:\n> >> My problem with that is that that means there's no convenient way to discover\n> >> what one needs to do to run all tests. Perhaps we could have one all-taptest\n> >> target or such?\n> > \n> > Yeah. That's a much better proposal.\n\nSo how about \"vcregress check-world\"?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Tue, 5 Oct 2021 22:23:08 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 6:53 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-06, Michael Paquier wrote:\n>\n> > On Tue, Oct 05, 2021 at 06:18:47PM -0400, Andrew Dunstan wrote:\n> > > On 10/5/21 4:38 PM, Andres Freund wrote:\n> > >> My problem with that is that that means there's no convenient way to discover\n> > >> what one needs to do to run all tests. Perhaps we could have one all-taptest\n> > >> target or such?\n> > >\n> > > Yeah. That's a much better proposal.\n>\n> So how about \"vcregress check-world\"?\n\nI was thinking of the same. +1 for \"vcregress check-world\" which is\nmore in sync with it's peer \"make check-world\" instead of \"vcregress\nall-taptest\". I'm not sure whether we can also have \"vcregress\ninstallcheck-world\" as well.\n\nHaving said that, with these new options, are we going to have only below?\n\nvcregress check\nvcregress installcheck\nvcregress check-world\nvcregress installcheck-world (?)\n\nAnd remove others?\n\nvcregress plcheck\nvcregress contribcheck\nvcregress modulescheck\nvcregress ecpgcheck\nvcregress isolationcheck\nvcregress bincheck\nvcregress recoverycheck\nvcregress upgradecheck\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 07:19:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 07:19:04AM +0530, Bharath Rupireddy wrote:\n> I was thinking of the same. +1 for \"vcregress check-world\" which is\n> more in sync with it's peer \"make check-world\" instead of \"vcregress\n> all-taptest\". I'm not sure whether we can also have \"vcregress\n> installcheck-world\" as well.\n\ncheck-world, if it spins new instances for each contrib/ test, would\nbe infinitely slower than installcheck-world. I recall that's why\nAndrew has been doing an installcheck for contribcheck to minimize the\nload. If you run the TAP tests, perhaps you don't really care anyway,\nbut I think that there is a case for making this set of targets run as\nfast as we can, if we can, when TAP is disabled.\n\n> Having said that, with these new options, are we going to have only below?\n> \n> vcregress check\n> vcregress installcheck\n> vcregress check-world\n> vcregress installcheck-world (?)\n> \n> And remove others?\n\nMy take is that there is value for both installcheck-world and\ncheck-world, depending on if we want to test on an installed instance\nor not. For CIs, check-world makes things simpler perhaps?\n\n> vcregress plcheck\n> vcregress contribcheck\n> vcregress modulescheck\n> vcregress ecpgcheck\n> vcregress isolationcheck\n> vcregress bincheck\n> vcregress recoverycheck\n> vcregress upgradecheck\n\nI don't really see why we should do that, the code paths are the same\nand the sub-routines would still be around, but don't cost much in\nmaintenance.\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 11:22:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> My take is that there is value for both installcheck-world and\n> check-world, depending on if we want to test on an installed instance\n> or not. For CIs, check-world makes things simpler perhaps?\n>\n> > vcregress plcheck\n> > vcregress contribcheck\n> > vcregress modulescheck\n> > vcregress ecpgcheck\n> > vcregress isolationcheck\n> > vcregress bincheck\n> > vcregress recoverycheck\n> > vcregress upgradecheck\n>\n> I don't really see why we should do that, the code paths are the same\n> and the sub-routines would still be around, but don't cost much in\n> maintenance.\n\nYeah, they can also be useful if someone wants to run tests\nselectively. I'm just thinking that the \"vcregress subscriptioncheck\"\nas proposed in my first email in this thread will also be useful (?)\nalong with the \"vcregress check-world\" and \"vcregress\ninstallcheck-world\". Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 08:49:43 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Oct 06, 2021 at 07:19:04AM +0530, Bharath Rupireddy wrote:\n> > I was thinking of the same. +1 for \"vcregress check-world\" which is\n> > more in sync with it's peer \"make check-world\" instead of \"vcregress\n> > all-taptest\". I'm not sure whether we can also have \"vcregress\n> > installcheck-world\" as well.\n>\n> check-world, if it spins new instances for each contrib/ test, would\n> be infinitely slower than installcheck-world. I recall that's why\n> Andrew has been doing an installcheck for contribcheck to minimize the\n> load. If you run the TAP tests, perhaps you don't really care anyway,\n> but I think that there is a case for making this set of targets run as\n> fast as we can, if we can, when TAP is disabled.\n\nOut of all the regression tests available with vcregress command\ntoday, the tests shown at [1] require an already running postgres\ninstance (much like the installcheck). Should we change these for\n\"vcregress checkworld\" so that they spin up tmp instances and run? I\ndon't want to go in this direction and change the code a lot.\n\nTo be simple, we could just have \"vcregress installcheckworld\" which\nrequires users to spin up an instance so that the tests shown at [1]\nwould run along with other tests [2] that spins up their own instance.\nThe problem with this approach is that user might setup a different\nGUC value in the instance that he/she spins up expecting it to be\neffective for the tests at [2] as well. I'm not sure if anyone would\ndo that. We can ignore \"vcregress checkworld\" but mention why we don't\ndo this in the documentation \"something like it makes tests slower as\nit spinus up lot of temporary pg instances\".\n\nAnother idea, simplest of all, is that just have \"vcregress\nsubscriptioncheck\" as proposed in this first mail and not have\n\"\"vcregress checkworld\" or \"vcregress installcheckworld\". 
With this\nnew option and the existing options of vcregress tool, it sort of\ncovers all the tests - core, TAP, contrib, bin, isolation, modules,\nupgrade, recovery etc.\n\nThoughts?\n\n[1]\nvcregress installcheck\nvcregress plcheck\nvcregress contribcheck\nvcregress modulescheck\nvcregress isolationcheck\n\n[2]\nvcregress check\nvcregress ecpgcheck\nvcregress bincheck\nvcregress recoverycheck\nvcregress upgradecheck\nvcregress subscriptioncheck\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 6 Oct 2021 16:31:33 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 4:31 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Oct 6, 2021 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Oct 06, 2021 at 07:19:04AM +0530, Bharath Rupireddy wrote:\n> > > I was thinking of the same. +1 for \"vcregress check-world\" which is\n> > > more in sync with it's peer \"make check-world\" instead of \"vcregress\n> > > all-taptest\". I'm not sure whether we can also have \"vcregress\n> > > installcheck-world\" as well.\n> >\n> > check-world, if it spins new instances for each contrib/ test, would\n> > be infinitely slower than installcheck-world. I recall that's why\n> > Andrew has been doing an installcheck for contribcheck to minimize the\n> > load. If you run the TAP tests, perhaps you don't really care anyway,\n> > but I think that there is a case for making this set of targets run as\n> > fast as we can, if we can, when TAP is disabled.\n>\n> Out of all the regression tests available with vcregress command\n> today, the tests shown at [1] require an already running postgres\n> instance (much like the installcheck). Should we change these for\n> \"vcregress checkworld\" so that they spin up tmp instances and run? I\n> don't want to go in this direction and change the code a lot.\n>\n> To be simple, we could just have \"vcregress installcheckworld\" which\n> requires users to spin up an instance so that the tests shown at [1]\n> would run along with other tests [2] that spins up their own instance.\n> The problem with this approach is that user might setup a different\n> GUC value in the instance that he/she spins up expecting it to be\n> effective for the tests at [2] as well. I'm not sure if anyone would\n> do that. 
We can ignore \"vcregress checkworld\" but mention why we don't\n> do this in the documentation \"something like it makes tests slower as\n> it spinus up lot of temporary pg instances\".\n>\n> Another idea, simplest of all, is that just have \"vcregress\n> subscriptioncheck\" as proposed in this first mail and not have\n> \"\"vcregress checkworld\" or \"vcregress installcheckworld\". With this\n> new option and the existing options of vcregress tool, it sort of\n> covers all the tests - core, TAP, contrib, bin, isolation, modules,\n> upgrade, recovery etc.\n>\n> Thoughts?\n>\n> [1]\n> vcregress installcheck\n> vcregress plcheck\n> vcregress contribcheck\n> vcregress modulescheck\n> vcregress isolationcheck\n>\n> [2]\n> vcregress check\n> vcregress ecpgcheck\n> vcregress bincheck\n> vcregress recoverycheck\n> vcregress upgradecheck\n> vcregress subscriptioncheck\n\nThe problems with having \"vcregress checkworld\" are: 1) required code\nmodifications are greater, as the available \"vcregress\" test functions,\nwhich require a pre-started pg instance, can't be used directly. 2) it\nlooks like spinning up a separate postgres instance for all module\ntests takes time on Windows, which might make the test time longer. If\nwe were to have \"vcregress installcheckworld\", we might have to add\nnew code for converting some of the existing functions to not use a\npre-started pg instance.\n\nIMHO, we can just have \"vcregress subscriptioncheck\" and let users\ndecide which tests they want to run.\n\nI would like to hear more opinions on this.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 16 Oct 2021 16:51:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "\nOn 10/16/21 7:21 AM, Bharath Rupireddy wrote:\n> On Wed, Oct 6, 2021 at 4:31 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> On Wed, Oct 6, 2021 at 7:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>> On Wed, Oct 06, 2021 at 07:19:04AM +0530, Bharath Rupireddy wrote:\n>>>> I was thinking of the same. +1 for \"vcregress check-world\" which is\n>>>> more in sync with it's peer \"make check-world\" instead of \"vcregress\n>>>> all-taptest\". I'm not sure whether we can also have \"vcregress\n>>>> installcheck-world\" as well.\n>>> check-world, if it spins new instances for each contrib/ test, would\n>>> be infinitely slower than installcheck-world. I recall that's why\n>>> Andrew has been doing an installcheck for contribcheck to minimize the\n>>> load. If you run the TAP tests, perhaps you don't really care anyway,\n>>> but I think that there is a case for making this set of targets run as\n>>> fast as we can, if we can, when TAP is disabled.\n>> Out of all the regression tests available with vcregress command\n>> today, the tests shown at [1] require an already running postgres\n>> instance (much like the installcheck). Should we change these for\n>> \"vcregress checkworld\" so that they spin up tmp instances and run? I\n>> don't want to go in this direction and change the code a lot.\n>>\n>> To be simple, we could just have \"vcregress installcheckworld\" which\n>> requires users to spin up an instance so that the tests shown at [1]\n>> would run along with other tests [2] that spins up their own instance.\n>> The problem with this approach is that user might setup a different\n>> GUC value in the instance that he/she spins up expecting it to be\n>> effective for the tests at [2] as well. I'm not sure if anyone would\n>> do that. 
We can ignore \"vcregress checkworld\" but mention why we don't\n>> do this in the documentation \"something like it makes tests slower as\n>> it spinus up lot of temporary pg instances\".\n>>\n>> Another idea, simplest of all, is that just have \"vcregress\n>> subscriptioncheck\" as proposed in this first mail and not have\n>> \"\"vcregress checkworld\" or \"vcregress installcheckworld\". With this\n>> new option and the existing options of vcregress tool, it sort of\n>> covers all the tests - core, TAP, contrib, bin, isolation, modules,\n>> upgrade, recovery etc.\n>>\n>> Thoughts?\n>>\n>> [1]\n>> vcregress installcheck\n>> vcregress plcheck\n>> vcregress contribcheck\n>> vcregress modulescheck\n>> vcregress isolationcheck\n>>\n>> [2]\n>> vcregress check\n>> vcregress ecpgcheck\n>> vcregress bincheck\n>> vcregress recoverycheck\n>> vcregress upgradecheck\n>> vcregress subscriptioncheck\n> The problems with having \"vcregress checkworld\" are: 1) required code\n> modifications are more as the available \"vcregress\" test functions,\n> which required pre-started pg instance, can't be used directly. 2) it\n> looks like spinning up a separate postgres instance for all module\n> tests takes time on Windows which might make the test time longer. If\n> we were to have \"vcregress installcheckworld\", we might have to add\n> new code for converting some of the existing functions to not use a\n> pre-started pg instance.\n>\n> IMHO, we can just have \"vcregress subscriptioncheck\" and let users\n> decide which tests they want to run.\n>\n> I would like to hear more opinions on this.\n>\n\nMy opinion hasn't changed. There is already a way to spell this and I'm\nopposed to adding more such specific tests to vcregress.pl. Every such\naddition we make increases the maintenance burden.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 16 Oct 2021 09:05:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Sat, Oct 16, 2021 at 6:35 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> > The problems with having \"vcregress checkworld\" are: 1) required code\n> > modifications are more as the available \"vcregress\" test functions,\n> > which required pre-started pg instance, can't be used directly. 2) it\n> > looks like spinning up a separate postgres instance for all module\n> > tests takes time on Windows which might make the test time longer. If\n> > we were to have \"vcregress installcheckworld\", we might have to add\n> > new code for converting some of the existing functions to not use a\n> > pre-started pg instance.\n> >\n> > IMHO, we can just have \"vcregress subscriptioncheck\" and let users\n> > decide which tests they want to run.\n> >\n> > I would like to hear more opinions on this.\n> >\n>\n> My opinion hasn't changed. There is already a way to spell this and I'm\n> opposed to adding more such specific tests to vcregress.pl. Every such\n> addition we make increases the maintenance burden.\n\nThanks for your opinion. IIUC, the subscription tests can be run by\nsetting the environment variables PROVE_FLAGS and PROVE_TESTS and using\nthe \"vcregress taptest\" command, right? I failed to set the environment\nvariables appropriately and couldn't run. Can you please let me know\nthe right way to run the test?\n\nIf any test can be run with a set of environment flags and the \"vcregress\ntaptest\" command, then in the first place, it doesn't make sense to\nhave recoverycheck, upgradecheck and so on. Another thing is that the\nlist of \"vcregress\" commands covers almost all the tests - core, tap,\nbin, isolation, contrib - except the subscription tests. If we add\n\"vcregress subscriptioncheck\", the list of tests that can be run with\nthe \"vcregress\" command will be complete without having to depend on\nthe environment variables.\n\nIMHO, we can have \"vcregress subscriptioncheck\" to make it complete\nand easier for the user to run those tests. 
However, let's hear what\nother hackers have to say about this.\n\nAnother thing I noticed is that we don't have any mention of\n\"vcregress taptest\" command in the docs [1], if I read the docs\ncorrectly. How about we have it along with a sample example on how to\nrun a specific TAP tests with it in the docs?\n\n[1] - https://www.postgresql.org/docs/current/install-windows-full.html\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 18 Oct 2021 11:11:58 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "\nOn 10/18/21 1:41 AM, Bharath Rupireddy wrote:\n> On Sat, Oct 16, 2021 at 6:35 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> The problems with having \"vcregress checkworld\" are: 1) required code\n>>> modifications are more as the available \"vcregress\" test functions,\n>>> which required pre-started pg instance, can't be used directly. 2) it\n>>> looks like spinning up a separate postgres instance for all module\n>>> tests takes time on Windows which might make the test time longer. If\n>>> we were to have \"vcregress installcheckworld\", we might have to add\n>>> new code for converting some of the existing functions to not use a\n>>> pre-started pg instance.\n>>>\n>>> IMHO, we can just have \"vcregress subscriptioncheck\" and let users\n>>> decide which tests they want to run.\n>>>\n>>> I would like to hear more opinions on this.\n>>>\n>> My opinion hasn't changed. There is already a way to spell this and I'm\n>> opposed to adding more such specific tests to vcregress.pl. Every such\n>> addition we make increases the maintenance burden.\n> Thanks for your opinion. IIUC, the subscription tests can be run with\n> setting environment variables PROVE_FLAGS, PROVE_TESTS and the\n> \"vcregress taptest\" command right? I failed to set the environment\n> variables appropriately and couldn't run. Can you please let me know\n> the right way to run the test?\n\n\n\nNo extra environment flags should be required for MSVC.\n\n\nThis should suffice:\n\n\n vcregress taptest src/test/subscription\n\n\nIf you want to set PROVE_FLAGS the simplest thing is just to set it in\nthe environment before the above invocation\n\n\n>\n> If any test can be run with a set of environment flags and \"vcregress\n> taptest\" command, then in the first place, it doesn't make sense to\n> have recoverycheck, upgragecheck and so on. 
Another thing is that the\n> list of \"vcregress\" commands cover almost all the tests core, tap,\n> bin, isolation, contrib tests except, subscription tests. If we add\n> \"vcregress subscrtptioncheck\", the list of tests that can be run with\n> the \"vcregress\" command will be complete without having to depend on\n> the environment variables.\n\n\nThe reason we have some of those other tests is because we didn't start\nwith having a generic taptest command in vcregress.pl. So they are\nsimply legacy code. But that is no reason for adding to them.\n\n\n>\n> IMHO, we can have \"vcregress subscriptioncheck\" to make it complete\n> and easier for the user to run those tests. However, let's hear what\n> other hackers have to say about this.\n\n\n\nI really fail to see how the invocation above is in any sense\nsignificantly more complicated.\n\n\n>\n> Another thing I noticed is that we don't have any mention of\n> \"vcregress taptest\" command in the docs [1], if I read the docs\n> correctly. How about we have it along with a sample example on how to\n> run a specific TAP tests with it in the docs?\n>\n> [1] - https://www.postgresql.org/docs/current/install-windows-full.html\n>\n\n\nYes, that's probably something that should be remedied.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Oct 2021 08:49:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 10/18/21 1:41 AM, Bharath Rupireddy wrote:\n>> Another thing I noticed is that we don't have any mention of\n>> \"vcregress taptest\" command in the docs [1], if I read the docs\n>> correctly. How about we have it along with a sample example on how to\n>> run a specific TAP tests with it in the docs?\n>> \n>> [1] - https://www.postgresql.org/docs/current/install-windows-full.html\n\n> Yes, that's probably something that should be remedied.\n\nWhy would that belong in the installation instructions?\nRunning the TAP tests is documented at\n\nhttps://www.postgresql.org/docs/devel/regress-tap.html\n\nand if we need some Windows-specific instructions, ISTM that's\nwhere to add them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Oct 2021 09:37:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "\nOn 10/18/21 9:37 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 10/18/21 1:41 AM, Bharath Rupireddy wrote:\n>>> Another thing I noticed is that we don't have any mention of\n>>> \"vcregress taptest\" command in the docs [1], if I read the docs\n>>> correctly. How about we have it along with a sample example on how to\n>>> run a specific TAP tests with it in the docs?\n>>>\n>>> [1] - https://www.postgresql.org/docs/current/install-windows-full.html\n>> Yes, that's probably something that should be remedied.\n> Why would that belong in the installation instructions?\n> Running the TAP tests is documented at\n>\n> https://www.postgresql.org/docs/devel/regress-tap.html\n>\n> and if we need some Windows-specific instructions, ISTM that's\n> where to add them.\n\n\n\nWell, see\n<https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.5.8.12>\n\n\nMaybe we should move that section.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Oct 2021 09:56:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 09:56:41AM -0400, Andrew Dunstan wrote:\n> Well, see\n> <https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.5.8.12>\n> \n> Maybe we should move that section.\n\nAs this is the part of the docs where we document the builds, it\nlooks indeed a bit confusing to have all the requirements for the\nTAP tests there. The section \"Regression Tests\" cannot be used for\nthe case of VS, and the section for TAP is independent of that so we\ncould use platform-dependent sub-sections.\n\nCould it be better to move all the contents of \"Running the Regression\nTests\" from the Windows installation page to the section of\n\"Regression Tests\" instead? That would mean spreading the knowledge\nof vcregress.pl to more than one place, though.\n--\nMichael",
"msg_date": "Tue, 19 Oct 2021 09:46:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 6:19 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > Thanks for your opinion. IIUC, the subscription tests can be run with\n> > setting environment variables PROVE_FLAGS, PROVE_TESTS and the\n> > \"vcregress taptest\" command right? I failed to set the environment\n> > variables appropriately and couldn't run. Can you please let me know\n> > the right way to run the test?\n>\n> No extra environment flags should be required for MSVC.\n>\n> This should suffice:\n>\n> vcregress taptest src/test/subscription\n\nWow! This is simpler than I imagined.\n\n> If you want to set PROVE_FLAGS the simplest thing is just to set it in\n> the environment before the above invocation\n\nOkay.\n\n> > If any test can be run with a set of environment flags and \"vcregress\n> > taptest\" command, then in the first place, it doesn't make sense to\n> > have recoverycheck, upgragecheck and so on. Another thing is that the\n> > list of \"vcregress\" commands cover almost all the tests core, tap,\n> > bin, isolation, contrib tests except, subscription tests. If we add\n> > \"vcregress subscrtptioncheck\", the list of tests that can be run with\n> > the \"vcregress\" command will be complete without having to depend on\n> > the environment variables.\n>\n> The reason we have some of those other tests is because we didn't start\n> with having a generic taptest command in vcregress.pl. So they are\n> simply legacy code. But that is no reason for adding to them.\n\nI get it, thanks.\n\n> > IMHO, we can have \"vcregress subscriptioncheck\" to make it complete\n> > and easier for the user to run those tests. 
However, let's hear what\n> > other hackers have to say about this.\n>\n> I really fail to see how the invocation above is in any sense\n> significantly more complicated.\n\nYes, the command \"vcregress taptest src/test/subscription\" is simple enough.\n\n> > Another thing I noticed is that we don't have any mention of\n> > \"vcregress taptest\" command in the docs [1], if I read the docs\n> > correctly. How about we have it along with a sample example on how to\n> > run a specific TAP tests with it in the docs?\n> >\n> > [1] - https://www.postgresql.org/docs/current/install-windows-full.html\n>\n> Yes, that's probably something that should be remedied.\n\nYes, I will prepare a patch for it.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 19 Oct 2021 11:48:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 6:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 18, 2021 at 09:56:41AM -0400, Andrew Dunstan wrote:\n> > Well, see\n> > <https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.5.8.12>\n> >\n> > Maybe we should move that section.\n>\n> As this is the part of the docs where we document the builds, it\n> looks indeed a bit confusing to have all the requirements for the\n> TAP tests there. The section \"Regression Tests\" cannot be used for\n> the case of VS, and the section for TAP is independent of that so we\n> could use platform-dependent sub-sections.\n>\n> Could it be better to move all the contents of \"Running the Regression\n> Tests\" from the Windows installation page to the section of\n> \"Regression Tests\" instead? That would mean spreading the knowledge\n> of vcregress.pl to more than one place, though.\n\nIMO, it is better to add a note in the \"Running the Tests\" section at\n[1] and a link to the windows specific section at [2]. This will keep\nall the windows specific things at one place without any duplication\nof vcregress.pl knowledge. Thoughts? If okay, I will send a patch.\n\n[1] https://www.postgresql.org/docs/devel/regress-run.html\n[2] https://www.postgresql.org/docs/devel/install-windows-full.html#id-1.6.5.8.12\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Tue, 19 Oct 2021 11:49:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 11:49 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Oct 19, 2021 at 6:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Oct 18, 2021 at 09:56:41AM -0400, Andrew Dunstan wrote:\n> > > Well, see\n> > > <https://www.postgresql.org/docs/current/install-windows-full.html#id-1.6.5.8.12>\n> > >\n> > > Maybe we should move that section.\n> >\n> > As this is the part of the docs where we document the builds, it\n> > looks indeed a bit confusing to have all the requirements for the\n> > TAP tests there. The section \"Regression Tests\" cannot be used for\n> > the case of VS, and the section for TAP is independent of that so we\n> > could use platform-dependent sub-sections.\n> >\n> > Could it be better to move all the contents of \"Running the Regression\n> > Tests\" from the Windows installation page to the section of\n> > \"Regression Tests\" instead? That would mean spreading the knowledge\n> > of vcregress.pl to more than one place, though.\n>\n> IMO, it is better to add a note in the \"Running the Tests\" section at\n> [1] and a link to the windows specific section at [2]. This will keep\n> all the windows specific things at one place without any duplication\n> of vcregress.pl knowledge. Thoughts? If okay, I will send a patch.\n>\n> [1] https://www.postgresql.org/docs/devel/regress-run.html\n> [2] https://www.postgresql.org/docs/devel/install-windows-full.html#id-1.6.5.8.12\n\nHere's the documentation v1 patch that I've come up with. Please review it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 21 Oct 2021 10:51:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 7:21 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n>\n> Here's the documentation v1 patch that I've come up with. Please review it.\n>\n> There's a typo:\n+ To run an arbitrary TAP test set, run <command>vcregress\ntaptest</command>\n+ comamnd. For example, use the following command for running subcription\nTAP\n+ tests:\ns/comamnd/command/\n\nBut also the wording, I like better what vcregress prints as help, so\nsomething like:\n+ You can use <command>vcregress taptest TEST_DIR</command> to run an\n+ arbitrary TAP test set, where TEST_DIR is a required argument pointing\nto\n+ the directory where the tests reside. For example, use the following\n+ command for running the subcription TAP tests:\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Mon, 13 Dec 2021 00:38:53 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 5:09 AM Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Thu, Oct 21, 2021 at 7:21 AM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>>\n>> Here's the documentation v1 patch that I've come up with. Please review it.\n>>\n> There's a typo:\n> + To run an arbitrary TAP test set, run <command>vcregress taptest</command>\n> + comamnd. For example, use the following command for running subcription TAP\n> + tests:\n> s/comamnd/command/\n>\n> But also the wording, I like better what vcregress prints as help, so something like:\n> + You can use <command>vcregress taptest TEST_DIR</command> to run an\n> + arbitrary TAP test set, where TEST_DIR is a required argument pointing to\n> + the directory where the tests reside. For example, use the following\n> + command for running the subcription TAP tests:\n\nThanks for the comments. Above looks good to me, changed that way, PSA v2.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 10 Feb 2022 22:21:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Thu, Feb 10, 2022 at 10:21:08PM +0530, Bharath Rupireddy wrote:\n> Thanks for the comments. Above looks good to me, changed that way, PSA v2.\n\nI spy a typo: subcription\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 10 Feb 2022 12:59:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
},
{
"msg_contents": "On Fri, Feb 11, 2022 at 12:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Feb 10, 2022 at 10:21:08PM +0530, Bharath Rupireddy wrote:\n> > Thanks for the comments. Above looks good to me, changed that way, PSA v2.\n>\n> I spy a typo: subcription\n\nThanks. Corrected in v3 attached.\n\nThe CF entry https://commitfest.postgresql.org/36/3354/ was closed\nwith \"Returned with Feedback\". I'm not sure why. If the patch is still\nof interest, I will add a new one for tracking.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Fri, 11 Feb 2022 09:10:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: can we add subscription TAP test option \"vcregress\n subscriptioncheck\" for MSVC builds?"
}
]
[
{
"msg_contents": "Hi,\n\nAs threatened in [1]... For CI, originally in the AIO project but now more\ngenerally, I wanted to get windows backtraces as part of CI. I also was\nconfused why visual studio's \"just in time debugging\" (i.e. a window popping\nup offering to debug a process when it crashes) didn't work with postgres.\n\nMy first attempt was to try to use the existing crashdump stuff in\npgwin32_install_crashdump_handler(). That's not really quite what I want,\nbecause it only handles postmaster rather than any binary, but I thought it'd\nbe a good start. But outside of toy situations it didn't work for me.\n\nA bunch of debugging later I figured out that the reason neither the\nSetUnhandledExceptionFilter() nor JIT debugging works is that the\nSEM_NOGPFAULTERRORBOX in the\n SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);\nwe do in startup_hacks() prevents the paths dealing with crashes from being\nreached.\n\nThe SEM_NOGPFAULTERRORBOX hails from:\n\ncommit 27bff7502f04ee01237ed3f5a997748ae43d3a81\nAuthor: Bruce Momjian <bruce@momjian.us>\nDate: 2006-06-12 16:17:20 +0000\n\n Prevent Win32 from displaying a popup box on backend crash. Instead let\n the postmaster deal with it.\n\n Magnus Hagander\n\n\nI actually see error popups despite SEM_NOGPFAULTERRORBOX, at least for paths\nreaching abort() (and thus our assertions).\n\nThe reason for abort() error boxes not being suppressed appears to be that in\ndebug mode a separate facility is responsible for that: [2], [3]\n\n\"The default behavior is to print the message. _CALL_REPORTFAULT, if set,\nspecifies that a Watson crash dump is generated and reported when abort is\ncalled. By default, crash dump reporting is enabled in non-DEBUG builds.\"\n\nWe apparently need _set_abort_behavior(_CALL_REPORTFAULT) to have abort()\nbehave the same between debug and release builds. [4]\n\n\nTo prevent the error popups we appear to at least need to call\n_CrtSetReportMode(). 
The docs say:\n\n If you do not call _CrtSetReportMode to define the output destination of\n messages, then the following defaults are in effect:\n\n Assertion failures and errors are directed to a debug message window.\n\nWe can configure it so that that stuff goes to stderr, by calling\n _CrtSetReportMode(_CRT_ASSERT, _CRTDBG_MODE_FILE | _CRTDBG_MODE_DEBUG);\n _CrtSetReportFile(_CRT_ASSERT, _CRTDBG_FILE_STDERR);\n(and the same for _CRT_ERROR and perhaps _CRT_WARNING)\nwhich removes the default _CRTDBG_MODE_WNDW.\n\nIt's possible that we'd need to do more than this, but this was sufficient to\nget crash reports for segfaults and abort() in both assert and release builds,\nwithout seeing an error popup.\n\n\nTo actually get the crash reports I ended up doing the following on the OS\nlevel [5]:\n\n Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name 'Debugger' -Value '\\\"C:\\Windows Kits\\10\\Debuggers\\x64\\cdb.exe\\\" -p %ld -e %ld -g -kqm -c \\\".lines -e; .symfix+ ;.logappend c:\\cirrus\\crashlog.txt ; !peb; ~*kP ; .logclose ; q \\\"' ; `\n New-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name 'Auto' -Value 1 -PropertyType DWord ; `\n Get-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name Debugger; `\n\nThis requires 'cdb' to be present, which is included in the Windows 10 SDK (or\nother OS versions, it doesn't appear to have changed much). Whenever there's\nan unhandled crash, cdb.exe is invoked with the parameters above, which\nappends the crash report to crashlog.txt.\n\nAlternatively we can generate \"minidumps\" [6], but that doesn't appear to be more\nhelpful for CI purposes at least - all we'd do is to create a backtrace using\nthe same tool. But it might be helpful for local development, to e.g. analyze\ncrashes in more detail.\n\nThe above ends up dumping all crashes into a single file, but that can\nprobably be improved. 
But cdb is so gnarly that I wanted to stop looking once\nI got this far...\n\n\nAndrew, I wonder if something like this could make sense for windows BF animals?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20211001222752.wrz7erzh4cajvgp6%40alap3.anarazel.de\n[2] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/crtsetreportmode?view=msvc-160\n[3] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/set-abort-behavior?view=msvc-160\n[4] If anybody can explain to me what the two different parameters to\n _set_abort_behavior() do, I'd be all ears\n[5] https://docs.microsoft.com/en-us/windows/win32/debug/configuring-automatic-debugging\n[6] https://docs.microsoft.com/en-us/windows/win32/wer/wer-settings\n\n\n",
"msg_date": "Tue, 5 Oct 2021 12:30:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Windows crash / abort handling"
},
{
"msg_contents": "On Wed, 6 Oct 2021, 03:30 Andres Freund, <andres@anarazel.de> wrote:\n\n> Hi,\n>\n>\n> My first attempt was to try to use the existing crashdump stuff in\n> pgwin32_install_crashdump_handler(). That's not really quite what I want,\n> because it only handles postmaster rather than any binary, but I thought\n> it'd\n> be a good start. But outside of toy situations it didn't work for me.\n>\n\nOdd. It usually has for me, and definitely not limited to the postmaster.\nBut it will fall down for OOM, smashed stack, and other states where\nin-process self-debugging is likely to fail.\n\n>\n> A bunch of debugging later I figured out that the reason neither the\n> SetUnhandledExceptionFilter() nor JIT debugging works is that the\n> SEM_NOGPFAULTERRORBOX in the\n> SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX);\n> we do in startup_hacks() prevents the paths dealing with crashes from being\n> reached.\n>\n\nRight.\n\nI patch this out when working on windows because it's a real pain.\n\nI keep meaning to propose that we remove this functionality entirely. It's\nobsolete. It was introduced back in the days where DrWatson.exe \"windows\nerror reporting\") used to launch an interactive prompt asking the user what\nto do when a process crashed. This would block the crashed process from\nexiting, making everything grind to a halt until the user interacted with\nthe\nUI. Even for a service process.\n\nNot fun on a headless or remote server.\n\nThese days Windows handles all this a lot more sensibly, and blocking crash\nreporting is quite obsolete and unhelpful.\n\nI'd like to just remove it.\n\nIf we can't do that I'd like to at least make it optional.\n\nAlternatively we can generate \"minidumps\" [6], but that doesn't appear to\n> be more\n> helpful for CI purposes at least - all we'd do is to create a backtrace\n> using\n> the same tool. 
But it might be helpful for local development, to e.g.\n> analyze\ncrashes in more detail.\n>\n\nThey're immensely helpful when a bt isn't enough, but BTs are certainly the\nfirst step for CI use.",
"msg_date": "Wed, 6 Oct 2021 14:11:51 +0800",
"msg_from": "Craig Ringer <craig.ringer@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-06 14:11:51 +0800, Craig Ringer wrote:\n> On Wed, 6 Oct 2021, 03:30 Andres Freund, <andres@anarazel.de> wrote:\n>\n> > Hi,\n> >\n> >\n> > My first attempt was to try to use the existing crashdump stuff in\n> > pgwin32_install_crashdump_handler(). That's not really quite what I want,\n> > because it only handles postmaster rather than any binary, but I thought\n> > it'd\n> > be a good start. But outside of toy situations it didn't work for me.\n> >\n>\n> Odd. It usually has for me, and definitely not limited to the postmaster.\n> But it will fall down for OOM, smashed stack, and other states where\n> in-process self-debugging is likely to fail.\n\nI think it's a question of running debug vs optimized builds. At least in\ndebug builds it doesn't appear to work because the debug c runtime abort\npreempts it.\n\n\n> I patch this out when working on windows because it's a real pain.\n>\n> I keep meaning to propose that we remove this functionality entirely. It's\n> obsolete. It was introduced back in the days where DrWatson.exe \"windows\n> error reporting\") used to launch an interactive prompt asking the user what\n> to do when a process crashed. This would block the crashed process from\n> exiting, making everything grind to a halt until the user interacted with\n> the\n> UI. Even for a service process.\n\n> Not fun on a headless or remote server.\n\nYea, the way we do it right now is definitely not helpful. 
Especially because\nit doesn't actually prevent the \"hang\" issue - the CRT boxes at least cause\nprecisely such stalls.\n\nWe've had a few CI hangs due to such errors.\n\n\n> These days Windows handles all this a lot more sensibly, and blocking crash\n> reporting is quite obsolete and unhelpful.\n\n From what I've seen it didn't actually get particularly sensible, just\ndifferent and more complicated.\n\n From what I've seen one needs at least:\n- _set_abort_behavior(_CALL_REPORTFAULT | _WRITE_ABORT_MSG)\n- _set_error_mode(_OUT_TO_STDERR)\n- _CrtSetReportMode(_CRT_ASSERT/ERROR, _CRTDBG_MODE_FILE | _CRTDBG_MODE_DEBUG)\n- SetErrorMode(SEM_FAILCRITICALERRORS)\n\nThere are many things this hocuspocus can be called, but sensible isn't among my\nword choices for it.\n\n\n> I'd like to just remove it.\n\nI think we need to remove the SEM_NOGPFAULTERRORBOX, but I don't think we can\nremove the SEM_FAILCRITICALERRORS, and I think we need the rest of the above\nto prevent error boxes from happening.\n\n\nI think we ought to actually apply these to all binaries, not just\npostgres. One CI hang was due to psql asserting. But there's currently no easy\nhook point for all binaries afaict. If we were to introduce something it\nshould probably be in pgport? But I'd defer that to a later step.\n\n\nI've attached a patch implementing these changes.\n\nI also made one run with that intentionally fail (with an Assert(false)), and\nwith the changes the debugger is invoked and creates a backtrace etc:\nhttps://cirrus-ci.com/task/5447300246929408\n(click on crashlog-> at the top)\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sun, 9 Jan 2022 16:57:04 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "\nOn 10/5/21 15:30, Andres Freund wrote\n>\n> To actually get the crash reports I ended up doing the following on the OS\n> level [5]:\n>\n> Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name 'Debugger' -Value '\\\"C:\\Windows Kits\\10\\Debuggers\\x64\\cdb.exe\\\" -p %ld -e %ld -g -kqm -c \\\".lines -e; .symfix+ ;.logappend c:\\cirrus\\crashlog.txt ; !peb; ~*kP ; .logclose ; q \\\"' ; `\n> New-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name 'Auto' -Value 1 -PropertyType DWord ; `\n> Get-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\AeDebug' -Name Debugger; `\n>\n> This requires 'cdb' to be present, which is included in the Windows 10 SDK (or\n> other OS versions, it doesn't appear to have changed much). Whenever there's\n> an unhandled crash, cdb.exe is invoked with the parameters above, which\n> appends the crash report to crashlog.txt.\n>\n> Alternatively we can generate \"minidumps\" [6], but that doesn't appear to be more\n> helpful for CI purposes at least - all we'd do is to create a backtrace using\n> the same tool. But it might be helpful for local development, to e.g. analyze\n> crashes in more detail.\n>\n> The above ends up dumping all crashes into a single file, but that can\n> probably be improved. But cdb is so gnarly that I wanted to stop looking once\n> I got this far...\n>\n>\n> Andrew, I wonder if something like this could make sense for windows BF animals?\n>\n\n\nVery possibly. I wonder how well it will work on machines where I have\nmore than one animal .e.g. lorikeet (cygwin) jacana (msys) and bowerbird\n(MSVC) are all on the same machine. Likewise drongo (MSVC) and fairywren\n(msys2).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 10 Jan 2022 10:57:00 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-10 10:57:00 -0500, Andrew Dunstan wrote:\n> On 10/5/21 15:30, Andres Freund wrote\n> > The above ends up dumping all crashes into a single file, but that can\n> > probably be improved. But cdb is so gnarly that I wanted to stop looking once\n> > I got this far...\n\nFWIW, I figured out how to put the dumps into separate files by now...\n\n\n> > Andrew, I wonder if something like this could make sense for windows BF animals?\n\n> Very possibly. I wonder how well it will work on machines where I have\n> more than one animal .e.g. lorikeet (cygwin) jacana (msys) and bowerbird\n> (MSVC) are all on the same machine. Likewise drongo (MSVC) and fairywren\n> (msys2).\n\nHm. I can see a few ways to deal with it. Are they running concurrently?\nIf not then it's easy enough to deal with.\n\nIt'd be a bit of a fight with cdb's awfully documented and quirky\nscripting [1], but the best solution would probably be to just use an\nenvironment variable from the target process to determine the dump\nlocation. Then each buildfarm config could set a BF_BACKTRACE_LOCATION\nvariable or such...\n\n[1] So there's !envvar. But that yields a string like\nBF_BACKTRACE_LOCATION = value of environment variable when set to an\nalias. And I haven't found an easy way to get rid of the \"variablename\n= \". There is .foreach /pS [2] which could be used to skip over the\nvarname =, but that then splits on all whitespaces. Gah.\n\n[2] https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-foreach\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Jan 2022 23:51:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "\nOn 1/11/22 02:51, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-10 10:57:00 -0500, Andrew Dunstan wrote:\n>> On 10/5/21 15:30, Andres Freund wrote\n>>> The above ends up dumping all crashes into a single file, but that can\n>>> probably be improved. But cdb is so gnarly that I wanted to stop looking once\n>>> I got this far...\n> FWIW, I figured out how to put the dumps into separate files by now...\n>\n>\n>>> Andrew, I wonder if something like this could make sense for windows BF animals?\n>> Very possibly. I wonder how well it will work on machines where I have\n>> more than one animal .e.g. lorikeet (cygwin) jacana (msys) and bowerbird\n>> (MSVC) are all on the same machine. Likewise drongo (MSVC) and fairywren\n>> (msys2).\n> Hm. I can see a few ways to deal with it. Are they running concurrently?\n> If not then it's easy enough to deal with.\n>\n> It'd be a bit of a fight with cdb's awfully documented and quirky\n> scripting [1], but the best solution would probably be to just use an\n> environment variable from the target process to determine the dump\n> location. Then each buildfarm config could set a BF_BACKTRACE_LOCATION\n> variable or such...\n>\n> [1] So there's !envvar. But that yields a string like\n> BF_BACKTRACE_LOCATION = value of environment variable when set to an\n> alias. And I haven't found an easy way to get rid of the \"variablename\n> = \". There is .foreach /pS [2] which could be used to skip over the\n> varname =, but that then splits on all whitespaces. Gah.\n>\n> [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-foreach\n>\n\nUgly as heck. I normally don't use locations with spaces in them. Let's\nassume we don't have to deal with that issue at least.\n\nBut in theory these animals could be running in parallel, and in theory\neach animal could have more than one branch being run concurrently. In\nfact locking them against each other can be difficult/impossible. 
From\nexperience, three different perls might not agree on how file locking\nworks ... In the case of fairywren/drongo I have had to set things up so\nthat there is a single batch file that runs one job after the other.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 11 Jan 2022 12:01:42 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-11 12:01:42 -0500, Andrew Dunstan wrote:\n> On 1/11/22 02:51, Andres Freund wrote:\n> > It'd be a bit of a fight with cdb's awfully documented and quirky\n> > scripting [1], but the best solution would probably be to just use an\n> > environment variable from the target process to determine the dump\n> > location. Then each buildfarm config could set a BF_BACKTRACE_LOCATION\n> > variable or such...\n> >\n> > [1] So there's !envvar. But that yields a string like\n> > BF_BACKTRACE_LOCATION = value of environment variable when set to an\n> > alias. And I haven't found an easy way to get rid of the \"variablename\n> > = \". There is .foreach /pS [2] which could be used to skip over the\n> > varname =, but that then splits on all whitespaces. Gah.\n> >\n> > [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-foreach\n> >\n> \n> Ugly as heck.\n\nIndeed. I think I figured it out:\n\n0:000> !envvar frak\n frak = C:\\Program Files\\Application Verifier\\\n0:000> ad /q path; .foreach /pS 2 (component {!envvar frak}){ .if (${/d:path}) {aS ${/v:path} ${/f:path} ${component}} .else {aS ${/v:path} ${component}}}; .block {.echo ${path}}\nC:\\Program Files\\Application Verifier\\\n\nI mean, no explanation needed, right?\n\n\n> But in theory these animals could be running in parallel, and in theory\n> each animal could have more than one branch being run concurrently. In\n> fact locking them against each other can be difficult/impossible.\n\nThe environment variable solution directing dumps for each animal / branch to\ndifferent directories should take care of that, right?\n\n\nDo you have a preference where a script file implementing the necessary\ncdb.exe commands would reside? It's getting too long to comfortably implement\nit inside the registry setting itself... That script would be used by all\nbranches, rather than be branch specific.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 11 Jan 2022 13:13:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "\nOn 1/11/22 16:13, Andres Freund wrote:\n> Hi,\n>\n> On 2022-01-11 12:01:42 -0500, Andrew Dunstan wrote:\n>> On 1/11/22 02:51, Andres Freund wrote:\n>>> It'd be a bit of a fight with cdb's awfully documented and quirky\n>>> scripting [1], but the best solution would probably be to just use an\n>>> environment variable from the target process to determine the dump\n>>> location. Then each buildfarm config could set a BF_BACKTRACE_LOCATION\n>>> variable or such...\n>>>\n>>> [1] So there's !envvar. But that yields a string like\n>>> BF_BACKTRACE_LOCATION = value of environment variable when set to an\n>>> alias. And I haven't found an easy way to get rid of the \"variablename\n>>> = \". There is .foreach /pS [2] which could be used to skip over the\n>>> varname =, but that then splits on all whitespaces. Gah.\n>>>\n>>> [2] https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/-foreach\n>>>\n>> Ugly as heck.\n> Indeed. I think I figured it out:\n>\n> 0:000> !envvar frak\n> frak = C:\\Program Files\\Application Verifier\\\n> 0:000> ad /q path; .foreach /pS 2 (component {!envvar frak}){ .if (${/d:path}) {aS ${/v:path} ${/f:path} ${component}} .else {aS ${/v:path} ${component}}}; .block {.echo ${path}}\n> C:\\Program Files\\Application Verifier\\\n>\n> I mean, no explanation needed, right?\n>\n>\n>> But in theory these animals could be running in parallel, and in theory\n>> each animal could have more than one branch being run concurrently. In\n>> fact locking them against each other can be difficult/impossible.\n> The environment variable solution directing dumps for each animal / branch to\n> different directories should take care of that, right?\n>\n>\n> Do you have a preference where a script file implementing the necessary\n> cdb.exe commands would reside? It's getting too long to comfortably implement\n> it inside the registry setting itself... 
That script would be used by all\n> branches, rather than be branch specific.\n\n\n\nWell, the buildfarm code sets up a file for gdb in the branch's pgsql.build:\n\n\n my $cmdfile = \"./gdbcmd\";\n my $handle;\n open($handle, '>', $cmdfile) || die \"opening $cmdfile: $!\";\n print $handle \"bt\\n\";\n print $handle 'p $_siginfo', \"\\n\";\n close($handle);\n\n my @trace = (\"\\n\\n\");\n\n foreach my $core (@cores)\n {\n my @onetrace =\n run_log(\"gdb -x $cmdfile --batch $bindir/postgres $core\");\n push(@trace,\n \"$log_file_marker stack trace: $core $log_file_marker\\n\",\n @onetrace);\n }\n\n\nSo by analogy we could do something similar for the Windows debugger. Or\nwe could pick up a file from the repo if we wanted to commit it in\nsrc/tools somewhere.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 11 Jan 2022 16:35:41 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-09 16:57:04 -0800, Andres Freund wrote:\n> I've attached a patch implementing these changes.\n\nUnless somebody is planning to look at this soon, I'm planning to push it to\nmaster. It's too annoying to have these hangs and not see backtraces.\n\n\nWe're going to have to do this in all binaries at some point, but that's\na considerably larger patch...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 29 Jan 2022 13:02:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "On Sun, Jan 30, 2022 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-09 16:57:04 -0800, Andres Freund wrote:\n> > I've attached a patch implementing these changes.\n>\n> Unless somebody is planning to look at this soon, I'm planning to push it to\n> master. It's too annoying to have these hangs and not see backtraces.\n\n+1, I don't know enough about Windows development to have an opinion\non the approach but we've got to try *something*, these hangs are\nterrible.\n\n\n",
"msg_date": "Wed, 2 Feb 2022 11:24:19 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-02 11:24:19 +1300, Thomas Munro wrote:\n> On Sun, Jan 30, 2022 at 10:02 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-01-09 16:57:04 -0800, Andres Freund wrote:\n> > > I've attached a patch implementing these changes.\n> >\n> > Unless somebody is planning to look at this soon, I'm planning to push it to\n> > master. It's too annoying to have these hangs and not see backtraces.\n> \n> +1, I don't know enough about Windows development to have an opinion\n> on the approach but we've got to try *something*, these hangs are\n> terrible.\n\nI've pushed the patch this thread is about now. Lets see what the buildfarm\nsays. I only could one windows version. Separately I've also pushed a patch\nto run the windows tests under a timeout. I hope in combination these patches\naddress the hangs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 2 Feb 2022 18:38:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Windows crash / abort handling"
},
{
"msg_contents": "On Thu, 3 Feb 2022 at 15:38, Andres Freund <andres@anarazel.de> wrote:\n> I've pushed the patch this thread is about now. Lets see what the buildfarm\n> says. I only could one windows version. Separately I've also pushed a patch\n> to run the windows tests under a timeout. I hope in combination these patches\n> address the hangs.\n\nI tried this out today on a windows machine I have here. I added some\ncode to trigger an Assert failure and the options of attaching a\ndebugger work well. Tested both from running the regression tests and\nrunning a query manually with psql.\n\nTested on Windows 8.1 with VS2017.\n\nThanks for working on this.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Feb 2022 16:21:12 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows crash / abort handling"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile reviewing the code of elog.c to plug in JSON as a file-based log\ndestination, I have found what looks like a bug in\nsend_message_to_server_log(). If LOG_DESTINATION_CSVLOG is enabled,\nwe would do the following to make sure that the log entry is not\nmissed:\n- If redirection_done or in the logging collector, call write_csvlog()\nto write the CSV entry using the piped protocol or write directly if\nthe logging collector does the call.\n- If the log redirection is not available yet, we'd just call\nwrite_console() to redirect the message to stderr, which would be done\nif it was not done in the code block for stderr before handling CSV to\navoid duplicates. This uses a condition that matches the one based on\nLog_destination and whereToSendOutput.\n\nNow, in the stderr code path, we would actually do more than that:\n- write_pipe_chunks() for a non-syslogger process if redirection is\ndone.\n- If there is no redirection, redirect to eventlog when running as a\nservice on WIN32, or simply stderr with write_console().\n\nSo at the end, if one enables only csvlog, we would not capture any\nlogs if the redirection is not ready yet on WIN32 when running as a\nservice, meaning that we could lose some precious information if there\nis for example a startup failure.\n\nThis choice comes from fd801f4 in 2007, that introduced csvlog as\na log_destination.\n\nI think that there is a good argument for back-patching a fix, but I\ndon't recall seeing anybody complaining about that and I just need\nthat for the business with JSON. I have thought about various ways to\nfix that, and finished with a solution where we handle csvlog first,\nand fallback to stderr after so as there is only one code path for\nstderr, as of the attached. This reduces a bit the confusion around\nthe handling of the stderr data that gets free()'d in more code paths\nthan really needed.\n\nThoughts or objections?\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 14:10:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Lost logs with csvlog redirected to stderr under WIN32 service"
},
{
"msg_contents": "On 10/6/21 12:10 AM, Michael Paquier wrote:\n> I have thought about various ways to\n> fix that, and finished with a solution where we handle csvlog first,\n> and fallback to stderr after so as there is only one code path for\n> stderr, as of the attached. This reduces a bit the confusion around\n> the handling of the stderr data that gets free()'d in more code paths\n> than really needed.\n\nI don't have a windows machine to test, but this refactor looks good to me.\n\n> +\t/* Write to CSV log, if enabled */\n> +\tif ((Log_destination & LOG_DESTINATION_CSVLOG) != 0)\n\nThis was originally \"if (Log_destination & LOG_DESTINATION_CSVLOG)\" and\nother conditions nearby still lack the \"!= 0\". Whatever the preferred\nstyle, the lines touched by this patch should probably do this consistently.\n\n-- Chris\n\n\n",
"msg_date": "Wed, 6 Oct 2021 21:33:24 -0500",
"msg_from": "Chris Bandy <bandy.chris@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Lost logs with csvlog redirected to stderr under WIN32 service"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 09:33:24PM -0500, Chris Bandy wrote:\n> I don't have a windows machine to test, but this refactor looks good to me.\n\nThanks for the review! I did test this on Windows, only MSVC builds.\n\n>> +\t/* Write to CSV log, if enabled */\n>> +\tif ((Log_destination & LOG_DESTINATION_CSVLOG) != 0)\n> \n> This was originally \"if (Log_destination & LOG_DESTINATION_CSVLOG)\" and\n> other conditions nearby still lack the \"!= 0\". Whatever the preferred\n> style, the lines touched by this patch should probably do this consistently.\n\nYeah. It looks like using a boolean expression here is easier for my\nbrain.\n--\nMichael",
"msg_date": "Thu, 7 Oct 2021 13:26:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Lost logs with csvlog redirected to stderr under WIN32 service"
},
{
"msg_contents": "On Thu, Oct 07, 2021 at 01:26:46PM +0900, Michael Paquier wrote:\n> On Wed, Oct 06, 2021 at 09:33:24PM -0500, Chris Bandy wrote:\n>>> +\t/* Write to CSV log, if enabled */\n>>> +\tif ((Log_destination & LOG_DESTINATION_CSVLOG) != 0)\n>> \n>> This was originally \"if (Log_destination & LOG_DESTINATION_CSVLOG)\" and\n>> other conditions nearby still lack the \"!= 0\". Whatever the preferred\n>> style, the lines touched by this patch should probably do this consistently.\n> \n> Yeah. It looks like using a boolean expression here is easier for my\n> brain.\n\nI have played with this patch more this morning, adjusted this part,\nand applied it as of 8b76f89.\n--\nMichael",
"msg_date": "Fri, 8 Oct 2021 11:57:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Lost logs with csvlog redirected to stderr under WIN32 service"
}
] |
[
{
"msg_contents": "Hi everyone,\n\nAt this time I'm looking to close the CF.\n\nSo, I will write and mark as RwF the patches currently failing on the\ncfbot and move to the next CF all other patches.\n\nRwF:\n- ALTER SYSTEM READ { ONLY | WRITE }\n- GUC to disable cancellation of awaiting for synchronous replication\n- Introduce ProcessInterrupts_hook for C extensions\n- Make message at end-of-recovery less scary\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 6 Oct 2021 00:21:02 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "wrapping CF 2021-09"
},
{
"msg_contents": "On Tue, Oct 5, 2021 at 10:21 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n> At this time I'm looking to close the CF.\n\nThanks, Jaime.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 5 Oct 2021 23:41:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: wrapping CF 2021-09"
},
{
"msg_contents": "\n\n> On 6 Oct 2021, at 08:41, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Tue, Oct 5, 2021 at 10:21 PM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n>> At this time I'm looking to close the CF.\n> \n> Thanks, Jaime.\n\nIndeed, thanks for all your work this CF!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:55:53 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: wrapping CF 2021-09"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 12:21:02AM -0500, Jaime Casanova wrote:\n> Hi everyone,\n> \n> At this time I'm looking to close the CF.\n> \n\nClosed\n\n> So, I will write and mark as RwF the patches currently failing on the\n> cfbot and move to the next CF all other patches.\n> \n> RwF:\n> - ALTER SYSTEM READ { ONLY | WRITE }\n\nThis one I didn't close it because a new patch was received just a few\nhours ago\n\n> - GUC to disable cancellation of awaiting for synchronous replication\n> - Introduce ProcessInterrupts_hook for C extensions\n> - Make message at end-of-recovery less scary\n> \n\nIt seems this one I didn't close it either, not sure why. Maybe I was\ntired. Anyway will made a follow up on that one quickly.\n\nSo, what happened in this CF?\n\nCommitted: 55. \nMoved to next CF: 206. \nWithdrawn: 2. \nRejected: 3. \nReturned with Feedback: 51. \nTotal: 317.\n\nThis means during this CF we dealt with 78 patches of the 284 active \npatches at the beggining and it seems a lot of them were committed. \nWell done committers!! A terrific job considering you were wrapping \noff PG 14. \n\nStill there is a lot of patches to deal with in next CF and I know part\nof that is my own fear of reject patches to quickly. So I will try to\nfollow up on that list of patches to understand their real state.\n\nThanks everyone for your help and great work in this CF.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Wed, 6 Oct 2021 08:54:50 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": true,
"msg_subject": "Re: wrapping CF 2021-09"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 10:55:53AM +0200, Daniel Gustafsson wrote:\n> Indeed, thanks for all your work this CF!\n\n+1. Thanks, Jaime!\n--\nMichael",
"msg_date": "Thu, 7 Oct 2021 11:02:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: wrapping CF 2021-09"
}
] |
[
{
"msg_contents": "Hi\n\nIf you do tab completion in a situation like A, you will see [\"on\"] \ninstead of [on].\n\nA : \"ALTER SYSTEM SET wal_compression TO \"\n\nI made a patch for this problem.\n\nregards,\nKosei Masumura",
"msg_date": "Wed, 06 Oct 2021 14:24:40 +0900",
"msg_from": "bt21masumurak <bt21masumurak@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Improved tab completion for PostgreSQL parameters in enums"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 02:24:40PM +0900, bt21masumurak wrote:\n> If you do tab completion in a situation like A, you will see [\"on\"] instead\n> of [on].\n> \n> A : \"ALTER SYSTEM SET wal_compression TO \"\n> \n> I made a patch for this problem.\n\nThis would break the completion of enum entries that require quotes to\nwork properly for some of their values, like\ndefault_transaction_isolation.\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 14:44:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Improved tab completion for PostgreSQL parameters in enums"
},
{
"msg_contents": "bt21masumurak <bt21masumurak@oss.nttdata.com> writes:\n> If you do tab completion in a situation like A, you will see [\"on\"] \n> instead of [on].\n\n> A : \"ALTER SYSTEM SET wal_compression TO \"\n\n> I made a patch for this problem.\n\nI do not think this is an improvement. It will result in omitting\nquotes in some cases where they're not optional. Try it with a\nvalue such as \"all\", for example.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Oct 2021 10:34:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improved tab completion for PostgreSQL parameters in enums"
},
{
"msg_contents": "Thank you for comments.\n\n> I do not think this is an improvement. It will result in omitting\n> quotes in some cases where they're not optional. Try it with a\n> value such as \"all\", for example.\n\n> This would break the completion of enum entries that require quotes to\n> work properly for some of their values, like\n> default_transaction_isolation.\n\nI understand these comments, and the proposal was withdrawn.\n\nRegards,\nKosei Masumura\n\n2021-10-06 23:34 に Tom Lane さんは書きました:\n> bt21masumurak <bt21masumurak@oss.nttdata.com> writes:\n>> If you do tab completion in a situation like A, you will see [\"on\"]\n>> instead of [on].\n> \n>> A : \"ALTER SYSTEM SET wal_compression TO \"\n> \n>> I made a patch for this problem.\n> \n> I do not think this is an improvement. It will result in omitting\n> quotes in some cases where they're not optional. Try it with a\n> value such as \"all\", for example.\n> \n> \t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 14 Oct 2021 03:08:13 +0900",
"msg_from": "bt21masumurak <bt21masumurak@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Improved tab completion for PostgreSQL parameters in enums"
}
] |
[
{
"msg_contents": "Hi,\n\n\nAfter a recent migration of the skink and a few other animals (sorry for the\nfalse reports on BF, I forgot to adjust a path), I looked at the time it takes\nto complete a valgrind run:\n\n9.6: Consumed 4h 53min 18.518s CPU time\n10: Consumed 5h 32min 50.839s CPU time\n11: Consumed 7h 7min 17.455s CPU time\n14: still going at 11h 51min 57.951s\nHEAD: 14h 32min 29.571s CPU time\n\nI changed it so that HEAD with be built in parallel separately from the other\nbranches, so that HEAD gets results within a useful timeframe. But even with\nthat, the test times are increasing at a rate we're not going to be able to\nkeep up.\n\nPart of this is caused by a lot of tests running serially, rather than in\nparallel. I was pondering setting PROVE_FLAGS=-j5 or something to reduce the\nimpact of tap tests a bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 5 Oct 2021 22:57:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Running tests under valgrind is getting slower at an alarming pace"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> After a recent migration of the skink and a few other animals (sorry for the\n> false reports on BF, I forgot to adjust a path), I looked at the time it takes\n> to complete a valgrind run:\n\n> 9.6: Consumed 4h 53min 18.518s CPU time\n> 10: Consumed 5h 32min 50.839s CPU time\n> 11: Consumed 7h 7min 17.455s CPU time\n> 14: still going at 11h 51min 57.951s\n> HEAD: 14h 32min 29.571s CPU time\n\nI have observed similar slowdowns across versions on just-plain-slow\nanimals, too. Awhile ago (last December, I think), I tried enabling\n--enable-tap-tests across the board on prairiedog, and observed\nthese buildfarm cycle times:\n\n9.5\t01:50:24\n9.6\t02:06:32\n10\t02:26:34\n11\t02:54:44\n12\t03:41:11\n13\t04:46:31\nHEAD\t04:49:04\n\nI went back to not running TAP tests in the back branches :-(\n\nprairiedog's latest HEAD run consumed 5:30, so it's gotten way\nworse since December.\n\nIn the same comparison, my other animal longfin had gone from 14 to 18\nminutes (and it's now up to 22 minutes). It's not clear to me whether\ngreater available parallelism (12 CPUs vs 1) is alone enough to\nexplain why the more modern machine isn't suffering so badly. As you\nsay, the TAP tests are not well parallelized at all, so that doesn't\nseem to fit the facts.\n\nIn any case, it seems like we do need to be paying more attention to\nhow long it takes to do the TAP tests. We could try to shave more\ncycles off the overhead, and we should think a little harder about\nthe long-term value of every test case we add.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Oct 2021 11:59:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 1:57 AM Andres Freund <andres@anarazel.de> wrote:\n> After a recent migration of the skink and a few other animals (sorry for the\n> false reports on BF, I forgot to adjust a path), I looked at the time it takes\n> to complete a valgrind run:\n>\n> 9.6: Consumed 4h 53min 18.518s CPU time\n> 10: Consumed 5h 32min 50.839s CPU time\n> 11: Consumed 7h 7min 17.455s CPU time\n> 14: still going at 11h 51min 57.951s\n> HEAD: 14h 32min 29.571s CPU time\n>\n> I changed it so that HEAD with be built in parallel separately from the other\n> branches, so that HEAD gets results within a useful timeframe. But even with\n> that, the test times are increasing at a rate we're not going to be able to\n> keep up.\n\nIs the problem here that we're adding a lot of new new test cases? Or\nis the problem that valgrind runs are getting slower for the same\nnumber of test cases?\n\nIf it's taking longer because we have more test cases, I'm honestly\nnot sure that's really something we should try to fix. I mean, I'm\nsure we have some bad test cases here and there, but overall I think\nwe still have too little test coverage, not too much. The recent\ndiscovery that recovery_end_command had zero test coverage is one fine\nexample of that.\n\nBut if we've done something that increases the relative cost of\nvalgrind, maybe we can fix that in a centralized way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 12:09:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-06 12:09:36 -0400, Robert Haas wrote:\n> Is the problem here that we're adding a lot of new new test cases? Or\n> is the problem that valgrind runs are getting slower for the same\n> number of test cases?\n\nI don't know precisely. It's probably a combination of several factors. I do\nthink we regressed somewhere around valgrind specifically - the leak origin\ntracking in older branches seems to work better than in newer branches. But I\ndon't know if that affects performance.\n\n\n> If it's taking longer because we have more test cases, I'm honestly\n> not sure that's really something we should try to fix.\n\nI'm not arguing for having fewer tests. But clearly, executing them serially\nis problematic, when the times are going up like this. Skink is hosted on a\nmachine with a CPU clocking around ~3.9GHZ for most of the test - getting a\nfaster machine won't help that much. But most of the time only a few cores are\nactive.\n\nThis isn't just a problem with valgrind, the reporting times for other animals\nalso aren't getting shorter...\n\nIt takes my workstation 2min20s to execute check-world parallely, but > 16min\nsequentially. The BF executes tap tests sequentially...\n\n\n> I mean, I'm sure we have some bad test cases here and there, but overall I\n> think we still have too little test coverage, not too much. The recent\n> discovery that recovery_end_command had zero test coverage is one fine\n> example of that.\n> \n> But if we've done something that increases the relative cost of\n> valgrind, maybe we can fix that in a centralized way.\n\nThere's probably some of that.\n\nThe fact that the tap test infrastructure does all communication with the\nserver via psql each only execute only a single query is a problem -\nconnection startup is expensive. 
I think this is particularly a problem for\nthings like PostgresNode::poll_query_until(), which is also used by\n::wait_for_catchup(), because a) process creation is more expensive on\nvalgrind b) things take longer on valgrind, so we pay that price many more\ntimes.\n\nAt the same time increasing the timeout for the poll loop also makes the tests\nslower, because all the waits for things that already finished do add up.\n\nI'd guess that at the very least driving individual poll_query_until() via a\npsql that's running across queries would reduce this substantially, and\nperhaps allow us to reduce the polling time. But there's probably some\nnontrivial challenges around recognizing query boundaries :/\n\n\nBriefly looking at a profile of valgrind, it looks like a lot of the cpu time\nis spent doing syscalls related to logging. So far skink had\nlog_statement=all, log_connections=on, log_disconnections=on - I've turned\nthem off for the next runs. We'll see if that helps.\n\n\nI'll also try to figure out how to print a bit more detail about timing for each tap\ntest, looks like I need to figure out how to pass PROVE_TEST='--timer' through\nthe buildfarm. Shouldn't be too hard.\n\n\nOne thing I think would really help is having the total time for each run\nvisible in an animals run history. That way we could pinpoint regressions\nreasonably efficiently, right now that's not easily possible without writing\nnontrivial queries to the buildfarm database...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Oct 2021 09:47:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> One thing I think would really help is having the total time for each run\n> visible in an animals run history. That way we could pinpoint regressions\n> reasonably efficiently, right now that's not easily possible without writing\n> nontrivial queries to the buildfarm database...\n\n+1. I've lost count of how often I've had to drill down to an individual\nrun just because I wanted to see how long it took. If we could fit that\ninto the branch history pages like\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=jay&br=HEAD\n\nit'd be really useful IMO.\n\nPerhaps we could replace \"OK\" with the total time, so as to avoid making\nthese tables bigger? (This presumes that the time for a failed run isn't\nso interesting.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Oct 2021 12:58:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 12:47 PM Andres Freund <andres@anarazel.de> wrote:\n> There's probably some of that.\n>\n> The fact that the tap test infrastructure does all communication with the\n> server via psql each only execute only a single query is a problem -\n> connection startup is expensive.\n\nAgeed. safe_psql() is a poor-quality interface. I've been surprised in\nthe past that we were relying on something so primitive.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:10:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-06 12:58:34 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > One thing I think would really help is having the total time for each run\n> > visible in an animals run history. That way we could pinpoint regressions\n> > reasonably efficiently, right now that's not easily possible without writing\n> > nontrivial queries to the buildfarm database...\n>\n> +1. I've lost count of how often I've had to drill down to an individual\n> run just because I wanted to see how long it took. If we could fit that\n> into the branch history pages like\n\nI queried this in the DB for skink using\n\nselect snapshot::date, substring(git_head_ref, 1, 12) as git_rev, (SELECT SUM(stage_duration) FROM build_status_log_raw bslr WHERE bslr.sysname = bsr.sysname AND bslr.snapshot = bsr.snapshot) FROM build_status_raw bsr WHERE branch = 'HEAD' AND sysname = 'skink' and stage = 'OK' AND snapshot > '2021-01-01' order by snapshot desc;\n\n snapshot | git_rev | sum\n------------+--------------+----------\n 2021-10-06 | ec2133a44731 | 12:09:17\n 2021-10-05 | 0266e98c6b86 | 10:55:10\n 2021-10-03 | 2903f1404df3 | 10:24:11\n 2021-09-30 | 20f8671ef69b | 10:31:43\n...\n 2021-06-14 | 2d689babe3cb | 10:29:07\n 2021-06-12 | f452aaf7d4a9 | 10:26:12\n 2021-06-11 | d08237b5b494 | 10:50:53\n 2021-06-09 | 845cad4d51cb | 10:58:31\n 2021-06-08 | eab81953682d | 09:06:35\n 2021-06-06 | a2dee328bbe5 | 09:02:36\n 2021-06-05 | e6159885b78e | 08:59:14\n 2021-06-03 | 187682c32173 | 09:39:07\n 2021-06-02 | df466d30c6ca | 09:03:05\n 2021-06-03 | 187682c32173 | 09:39:07\n 2021-06-02 | df466d30c6ca | 09:03:05\n 2021-05-31 | 7c544ecdad81 | 09:09:42\n 2021-05-30 | ba356a397de5 | 08:54:29\n 2021-05-28 | d69fcb9caef1 | 09:00:36\n 2021-05-27 | 388e75ad3348 | 09:39:14\n 2021-05-25 | e30e3fdea873 | 08:51:04\n 2021-05-24 | 99c5852e20a0 | 08:57:08\n...\n 2021-03-23 | 1e3e8b51bda8 | 09:19:40\n 2021-03-21 | 96ae658e6238 | 08:29:05\n 2021-03-20 | 61752afb2640 | 08:15:47\n 
2021-03-18 | da18d829c281 | 08:34:02\n 2021-03-17 | 6b67d72b604c | 09:11:46\n 2021-03-15 | 146cb3889c3c | 08:20:21\n 2021-03-14 | 58f57490facd | 08:06:07\n 2021-03-12 | d60e61de4fb4 | 08:41:12\n 2021-03-11 | 3f0daeb02f8d | 08:04:44\n 2021-03-08 | 8a812e5106c5 | 08:46:01\n 2021-03-07 | f9a0392e1cf3 | 08:01:47\n 2021-03-05 | 0ce4cd04da55 | 08:01:32\n 2021-03-04 | 040af779382e | 07:56:31\n 2021-03-02 | 5b2f2af3d9d5 | 08:20:50\n 2021-03-01 | f5a5773a9dc4 | 07:59:14\n...\n 2021-01-02 | 4d3f03f42227 | 08:14:41\n 2021-01-01 | 32d6287d2eef | 07:31:56\n\nIt's not too surprising that 2021-10-06 is slower, I yesterday changed things\nso that more valgrind runs are done in parallel (increasing individual test\ntimes, but still allowing to get results faster than testing 1-by-1).\n\n\nI don't see anything immediately suspicious for the slowdowns around\neab81953682d. Perhaps there was a system update at that time causing\nchanges. Unfortunately I don't have logs from back then anymore. OTOH, I don't\nsee a clear slowdown in 13, 12 around that time.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Oct 2021 11:48:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-06 09:47:54 -0700, Andres Freund wrote:\n> I'll also try to figure out print a bit more detail about timing for each tap\n> test, looks like I need to figure out how to pass PROVE_TEST='--timer' through\n> the buildfarm. Shouldn't be too hard.\n\nTurns out that the buildfarm already adds --timer. I added -j4 to allow for\nsome concurrency in tap tests, but unfortunately my animals fell over after\nthat (thanks Michael for noticing).\n\nLooks like the buildfarm client code isn't careful enough quoting PROVE_FLAGS?\n\n my $pflags = \"PROVE_FLAGS=--timer\";\n if (exists $ENV{PROVE_FLAGS})\n {\n $pflags =\n $ENV{PROVE_FLAGS}\n ? \"PROVE_FLAGS=$ENV{PROVE_FLAGS}\"\n : \"\";\n }\n\n @makeout =\n run_log(\"cd $dir && $make NO_LOCALE=1 $pflags $instflags $taptarget\");\n\nWhich doesn't work if pflags ends up as '-j4 --timer' or such...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 6 Oct 2021 19:03:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "\nOn 10/6/21 10:03 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-10-06 09:47:54 -0700, Andres Freund wrote:\n>> I'll also try to figure out print a bit more detail about timing for each tap\n>> test, looks like I need to figure out how to pass PROVE_TEST='--timer' through\n>> the buildfarm. Shouldn't be too hard.\n> Turns out that the buildfarm already adds --timer. I added -j4 to allow for\n> some concurrency in tap tests, but unfortunately my animals fell over after\n> that (thanks Michael for noticing).\n>\n> Looks like the buildfarm client code isn't careful enough quoting PROVE_FLAGS?\n>\n> my $pflags = \"PROVE_FLAGS=--timer\";\n> if (exists $ENV{PROVE_FLAGS})\n> {\n> $pflags =\n> $ENV{PROVE_FLAGS}\n> ? \"PROVE_FLAGS=$ENV{PROVE_FLAGS}\"\n> : \"\";\n> }\n>\n> @makeout =\n> run_log(\"cd $dir && $make NO_LOCALE=1 $pflags $instflags $taptarget\");\n>\n> Which doesn't work if pflags ends up as '-j4 --timer' or such...\n\n\nsee\n<https://github.com/PGBuildFarm/client-code/commit/85ba5866c334f16c8683b524743f4d714be28d77>\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 8 Oct 2021 15:41:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
},
{
"msg_contents": "On 2021-10-08 15:41:09 -0400, Andrew Dunstan wrote:\n> see\n> <https://github.com/PGBuildFarm/client-code/commit/85ba5866c334f16c8683b524743f4d714be28d77>\n\nThanks!\n\n\n",
"msg_date": "Fri, 8 Oct 2021 14:16:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Running tests under valgrind is getting slower at an alarming\n pace"
}
] |
[
{
"msg_contents": "Hi all,\n\nFollowing up with Peter E's recent commit 73aa5e0 to add some\nforgotten level incrementations, I got to look again at what I did\nwrong and why this stuff is useful.\n\nI have gone through all the TAP tests and any code paths using\nsubroutines, to note that we could improve the locations of the\nreports we get by adding more $Test::Builder::Level. The context is\nimportant, as some code paths use rather-long routines and also\nargument values that allow to track easily which test path is being\ntaken (like pg_rewind), so there is no need to add anything in such\nplaces. The attached patch adds incrementations for the tests where\nthe debugging becomes much easier if there is a failure.\n\nThoughts?\n--\nMichael",
"msg_date": "Wed, 6 Oct 2021 15:28:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "On 06.10.21 08:28, Michael Paquier wrote:\n> Following up with Peter E's recent commit 73aa5e0 to add some\n> forgotten level incrementations, I got to look again at what I did\n> wrong and why this stuff is useful.\n> \n> I have gone through all the TAP tests and any code paths using\n> subroutines, to note that we could improve the locations of the\n> reports we get by adding more $Test::Builder::Level. The context is\n> important, as some code paths use rather-long routines and also\n> argument values that allow to track easily which test path is being\n> taken (like pg_rewind), so there is no need to add anything in such\n> places. The attached patch adds incrementations for the tests where\n> the debugging becomes much easier if there is a failure.\n\nThese look correct to me.\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 10:24:29 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "\nOn 10/6/21 2:28 AM, Michael Paquier wrote:\n> Hi all,\n>\n> Following up with Peter E's recent commit 73aa5e0 to add some\n> forgotten level incrementations, I got to look again at what I did\n> wrong and why this stuff is useful.\n>\n> I have gone through all the TAP tests and any code paths using\n> subroutines, to note that we could improve the locations of the\n> reports we get by adding more $Test::Builder::Level. The context is\n> important, as some code paths use rather-long routines and also\n> argument values that allow to track easily which test path is being\n> taken (like pg_rewind), so there is no need to add anything in such\n> places. The attached patch adds incrementations for the tests where\n> the debugging becomes much easier if there is a failure.\n>\n> Thoughts?\n\n\n\nWe should probably state a requirement for this somewhere. Maybe in\nsrc/test/perl/README. AIUI, the general rule is that any subroutine that\ndirectly or indirectly calls ok() and friends should increase the level.\nSuch subroutines that don't increase it should probably contain a\ncomment stating why, so we can know in future that it's not just an\noversight.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 07:33:22 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "> On 6 Oct 2021, at 13:33, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> We should probably state a requirement for this somewhere. Maybe in\n> src/test/perl/README.\n\n+1, I think that sounds like a very good idea.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:53:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "On Wed, Oct 06, 2021 at 07:33:22AM -0400, Andrew Dunstan wrote:\n> We should probably state a requirement for this somewhere. Maybe in\n> src/test/perl/README. AIUI, the general rule is that any subroutine that\n> directly or indirectly calls ok() and friends should increase the level.\n> Such subroutines that don't increase it should probably contain a\n> comment stating why, so we can know in future that it's not just an\n> oversight.\n\nThat makes sense. How about something like that after the part about\nTest::More::like and qr// in the section about writing tests? Here it\nis:\n+Test::Builder::Level controls how far up in the call stack a test will look\n+at when reporting a failure. This should be incremented by any subroutine\n+calling test routines from Test::More, like ok() or is():\n+\n+ local $Test::Builder::Level = $Test::Builder::Level + 1;\n--\nMichael",
"msg_date": "Thu, 7 Oct 2021 10:53:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "> On 7 Oct 2021, at 03:53, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Oct 06, 2021 at 07:33:22AM -0400, Andrew Dunstan wrote:\n>> We should probably state a requirement for this somewhere. Maybe in\n>> src/test/perl/README. AIUI, the general rule is that any subroutine that\n>> directly or indirectly calls ok() and friends should increase the level.\n>> Such subroutines that don't increase it should probably contain a\n>> comment stating why, so we can know in future that it's not just an\n>> oversight.\n> \n> That makes sense. How about something like that after the part about\n> Test::More::like and qr// in the section about writing tests? Here it\n> is:\n> +Test::Builder::Level controls how far up in the call stack a test will look\n> +at when reporting a failure. This should be incremented by any subroutine\n> +calling test routines from Test::More, like ok() or is():\n> +\n> + local $Test::Builder::Level = $Test::Builder::Level + 1;\n\nLGTM. Maybe it should be added that it *must* be called before any Test::More\nfunction is called, it's sort of self-explanatory but not everyone writing TAP\ntests will be deeply familiar with Perl.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 8 Oct 2021 09:28:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "On Fri, Oct 08, 2021 at 09:28:04AM +0200, Daniel Gustafsson wrote:\n> LGTM. Maybe it should be added that it *must* be called before any Test::More\n> function is called, it's sort of self-explanatory but not everyone writing TAP\n> tests will be deeply familiar with Perl.\n\nI think that \"must\" is too strong in this context, as in some cases it\ndoes not really make sense to increment the level, when using for\nexample a rather long routine that's labelled with one of the\nroutine arguments like for pg_rewind. So I would stick with\n\"should\".\n--\nMichael",
"msg_date": "Fri, 8 Oct 2021 16:51:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "> On 8 Oct 2021, at 09:51, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Oct 08, 2021 at 09:28:04AM +0200, Daniel Gustafsson wrote:\n>> LGTM. Maybe it should be added that it *must* be called before any Test::More\n>> function is called, it's sort of self-explanatory but not everyone writing TAP\n>> tests will be deeply familiar with Perl.\n> \n> I think that \"must\" is too strong in this context, as in some cases it\n> does not really make sense to increment the level, when using for\n> example a rather long routine that's labelled with one of the\n> routine arguments like for pg_rewind. So I would stick with\n> \"should\".\n\nFair enough.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 8 Oct 2021 10:06:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "\nOn 10/6/21 9:53 PM, Michael Paquier wrote:\n> On Wed, Oct 06, 2021 at 07:33:22AM -0400, Andrew Dunstan wrote:\n>> We should probably state a requirement for this somewhere. Maybe in\n>> src/test/perl/README. AIUI, the general rule is that any subroutine that\n>> directly or indirectly calls ok() and friends should increase the level.\n>> Such subroutines that don't increase it should probably contain a\n>> comment stating why, so we can know in future that it's not just an\n>> oversight.\n> That makes sense. How about something like that after the part about\n> Test::More::like and qr// in the section about writing tests? Here it\n> is:\n> +Test::Builder::Level controls how far up in the call stack a test will look\n> +at when reporting a failure. This should be incremented by any subroutine\n> +calling test routines from Test::More, like ok() or is():\n> +\n> + local $Test::Builder::Level = $Test::Builder::Level + 1;\n\n\nI think we need to be more explicit about it, especially w.r.t. indirect\ncalls. Every subroutine in the call stack below where you want to error\nreported as coming from should contain this line.\n\nSuppose we have\n\n\nsub a { b();� }\n\nsub b { c();� }\n\nsub c { local $Test::Builder::Level = $Test::Builder::Level + 1;\nok(0,\"should succeed\"); }\n\n\nAIUI a call to a() will show the call in b() as the error source. If we\nwant the error source to be the call to a() we need to add that\nincrement to both b() and a();\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 8 Oct 2021 12:14:57 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "On Fri, Oct 08, 2021 at 12:14:57PM -0400, Andrew Dunstan wrote:\n> I think we need to be more explicit about it, especially w.r.t. indirect\n> calls. Every subroutine in the call stack below where you want to error\n> reported as coming from should contain this line.\n\nHmm. I got to think about that for a couple of days, and the\nsimplest, still the cleanest, phrasing I can come up with is that:\nThis should be incremented by any subroutine part of a stack calling\ntest routines from Test::More, like ok() or is().\n\nPerhaps you have a better suggestion?\n--\nMichael",
"msg_date": "Sun, 10 Oct 2021 20:18:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "\nOn 10/10/21 7:18 AM, Michael Paquier wrote:\n> On Fri, Oct 08, 2021 at 12:14:57PM -0400, Andrew Dunstan wrote:\n>> I think we need to be more explicit about it, especially w.r.t. indirect\n>> calls. Every subroutine in the call stack below where you want to error\n>> reported as coming from should contain this line.\n> Hmm. I got to think about that for a couple of days, and the\n> simplest, still the cleanest, phrasing I can come up with is that:\n> This should be incremented by any subroutine part of a stack calling\n> test routines from Test::More, like ok() or is().\n>\n> Perhaps you have a better suggestion?\n\n\nI would say:\n\n This should be incremented by any subroutine which directly or indirectly calls test routines from Test::More, such as ok() or is().\n\n\ncheers\n\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 11 Oct 2021 10:48:54 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 10:48:54AM -0400, Andrew Dunstan wrote:\n> I would say:\n> \n> This should be incremented by any subroutine which directly or\n> indirectly calls test routines from Test::More, such as ok() or\n> is().\n\nIndeed, that looks better. I have just used that and applied the\nchange down to 12 where we have begun playing with level\nincrementations.\n--\nMichael",
"msg_date": "Tue, 12 Oct 2021 11:20:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: More business with $Test::Builder::Level in the TAP tests"
}
] |
[
{
"msg_contents": "Hi all,\n\nA customer reported that during parallel index vacuum, the oldest xmin\ndoesn't advance. Normally, the calculation of oldest xmin\n(ComputeXidHorizons()) ignores xmin/xid of processes having\nPROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\nworkers don’t set their statusFlags, the xmin of the parallel vacuum\nworker is considered to calculate the oldest xmin. This issue happens\nfrom PG13 where the parallel vacuum was introduced. I think it's a\nbug.\n\nMoreover, the same problem happens also in CREATE/REINDEX CONCURRENTLY\ncase in PG14 or later for the same reason (due to lack of\nPROC_IN_SAFE_IC flag).\n\nTo fix it, I thought that we change the create index code and the\nvacuum code so that the individual parallel worker sets its status\nflags according to the leader’s one. But ISTM it’d be better to copy\nthe leader’s status flags to workers in ParallelWorkerMain(). I've\nattached a patch for HEAD.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 6 Oct 2021 16:10:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On 10/6/21, 12:13 AM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n> A customer reported that during parallel index vacuum, the oldest xmin\r\n> doesn't advance. Normally, the calculation of oldest xmin\r\n> (ComputeXidHorizons()) ignores xmin/xid of processes having\r\n> PROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\r\n> workers don’t set their statusFlags, the xmin of the parallel vacuum\r\n> worker is considered to calculate the oldest xmin. This issue happens\r\n> from PG13 where the parallel vacuum was introduced. I think it's a\r\n> bug.\r\n\r\n+1\r\n\r\n> To fix it, I thought that we change the create index code and the\r\n> vacuum code so that the individual parallel worker sets its status\r\n> flags according to the leader’s one. But ISTM it’d be better to copy\r\n> the leader’s status flags to workers in ParallelWorkerMain(). I've\r\n> attached a patch for HEAD.\r\n\r\nThe patch seems reasonable to me.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Wed, 6 Oct 2021 16:43:05 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 12:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> A customer reported that during parallel index vacuum, the oldest xmin\n> doesn't advance. Normally, the calculation of oldest xmin\n> (ComputeXidHorizons()) ignores xmin/xid of processes having\n> PROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\n> workers don’t set their statusFlags, the xmin of the parallel vacuum\n> worker is considered to calculate the oldest xmin. This issue happens\n> from PG13 where the parallel vacuum was introduced. I think it's a\n> bug.\n>\n\nI agree. Your patch seems to be in the right direction but I haven't\ntested it yet. Feel free to register in the next CF to avoid\nforgetting it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Oct 2021 11:26:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On 2021-Oct-06, Masahiko Sawada wrote:\n\n> Hi all,\n> \n> A customer reported that during parallel index vacuum, the oldest xmin\n> doesn't advance. Normally, the calculation of oldest xmin\n> (ComputeXidHorizons()) ignores xmin/xid of processes having\n> PROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\n> workers don’t set their statusFlags, the xmin of the parallel vacuum\n> worker is considered to calculate the oldest xmin. This issue happens\n> from PG13 where the parallel vacuum was introduced. I think it's a\n> bug.\n\nAugh, yeah, I agree this is a pretty serious problem.\n\n> But ISTM it’d be better to copy the leader’s status flags to workers\n> in ParallelWorkerMain(). I've attached a patch for HEAD.\n\nHmm, this affects not only PROC_IN_VACUUM and PROC_IN_SAFE_CIC (the bug\nyou're fixing), but also:\n\n* PROC_IS_AUTOVACUUM. That seems reasonable to me -- should a parallel\nworker for autovacuum be considered autovacuum too? AFAICS it's only\nused by the deadlock detector, so it should be okay. However, in the\nnormal path, that flag is set much earlier.\n\n* PROC_VACUUM_FOR_WRAPAROUND. Should be innocuous I think, since the\n\"parent\" process already has this flag and thus shouldn't be cancelled.\n\n* PROC_IN_LOGICAL_DECODING. Surely not set for parallel vacuum workers,\nso not a problem.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Fri, 8 Oct 2021 12:13:25 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Fri, Oct 8, 2021 at 8:13 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2021-Oct-06, Masahiko Sawada wrote:\n> > A customer reported that during parallel index vacuum, the oldest xmin\n> > doesn't advance. Normally, the calculation of oldest xmin\n> > (ComputeXidHorizons()) ignores xmin/xid of processes having\n> > PROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\n> > workers don’t set their statusFlags, the xmin of the parallel vacuum\n> > worker is considered to calculate the oldest xmin. This issue happens\n> > from PG13 where the parallel vacuum was introduced. I think it's a\n> > bug.\n>\n> Augh, yeah, I agree this is a pretty serious problem.\n\nSo is this comparable problem, which happens to be much older:\nhttps://postgr.es/m/CAH2-WzkjrK556enVtFLmyXEdw91xGuwiyZVep2kp5yQT_-3JDg@mail.gmail.com\n\nIn both cases we see bugs (or implementation deficiencies) that\naccidentally block ComputeXidHorizons() for hours, when that isn't\ntruly necessary. Practically all users are not sure of whether or not\nVACUUM behaves like a long running transaction already, in general, so\nwe shouldn't be surprised that it takes so long for us to hear about\nissues like this.\n\nI think that we should try to find a way of making this whole class of\nproblems easier to identify in production. There needs to be greater\nvisibility into what process holds back VACUUM, and how long that\nlasts -- something easy to use, and *obvious*. That would be a very\nuseful feature in general. It would also make catching these issues\nearly far more likely. It's just *not okay* that you have to follow long\nand complicated instructions [1] to get just some of this information.\nHow can something this important just be an afterthought?\n\n[1] https://www.cybertec-postgresql.com/en/reasons-why-vacuum-wont-remove-dead-rows/\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 8 Oct 2021 10:21:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Sat, Oct 9, 2021 at 12:13 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-06, Masahiko Sawada wrote:\n>\n> > Hi all,\n> >\n> > A customer reported that during parallel index vacuum, the oldest xmin\n> > doesn't advance. Normally, the calculation of oldest xmin\n> > (ComputeXidHorizons()) ignores xmin/xid of processes having\n> > PROC_IN_VACUUM flag in MyProc->statusFlags. But since parallel vacuum\n> > workers don’t set their statusFlags, the xmin of the parallel vacuum\n> > worker is considered to calculate the oldest xmin. This issue happens\n> > from PG13 where the parallel vacuum was introduced. I think it's a\n> > bug.\n>\n> Augh, yeah, I agree this is a pretty serious problem.\n>\n> > But ISTM it’d be better to copy the leader’s status flags to workers\n> > in ParallelWorkerMain(). I've attached a patch for HEAD.\n>\n> Hmm, this affects not only PROC_IN_VACUUM and PROC_IN_SAFE_CIC (the bug\n> you're fixing), but also:\n>\n> * PROC_IS_AUTOVACUUM. That seems reasonable to me -- should a parallel\n> worker for autovacuum be considered autovacuum too? AFAICS it's only\n> used by the deadlock detector, so it should be okay. However, in the\n> normal path, that flag is set much earlier.\n>\n> * PROC_VACUUM_FOR_WRAPAROUND. Should be innocuous I think, since the\n> \"parent\" process already has this flag and thus shouldn't be cancelled.\n\nCurrently, we don't support parallel vacuum for autovacuum. So\nparallel workers for vacuum don't have these two flags.\n\n> * PROC_IN_LOGICAL_DECODING. Surely not set for parallel vacuum workers,\n> so not a problem.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 11 Oct 2021 09:23:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 09:23:32AM +0900, Masahiko Sawada wrote:\n> On Sat, Oct 9, 2021 at 12:13 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> * PROC_VACUUM_FOR_WRAPAROUND. Should be innocuous I think, since the\n>> \"parent\" process already has this flag and thus shouldn't be cancelled.\n> \n> Currently, we don't support parallel vacuum for autovacuum. So\n> parallel workers for vacuum don't have these two flags.\n\nThat's something that should IMO be marked in the code as a comment as\nsomething to worry about once/if someone begins playing with parallel\nautovacuum. If the change involving autovacuum is simple, I see no\nreason to not add this part now, though.\n--\nMichael",
"msg_date": "Mon, 11 Oct 2021 09:50:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 6:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> To fix it, I thought that we change the create index code and the\n> vacuum code so that the individual parallel worker sets its status\n> flags according to the leader’s one. But ISTM it’d be better to copy\n> the leader’s status flags to workers in ParallelWorkerMain(). I've\n> attached a patch for HEAD.\n>\n\n+1 The fix looks reasonable to me too.\nIs it possible for the patch to add test cases for the two identified\nproblem scenarios? (PROC_IN_VACUUM, PROC_IN_SAFE_IC)\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 11 Oct 2021 17:01:22 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 3:01 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Oct 6, 2021 at 6:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > To fix it, I thought that we change the create index code and the\n> > vacuum code so that the individual parallel worker sets its status\n> > flags according to the leader’s one. But ISTM it’d be better to copy\n> > the leader’s status flags to workers in ParallelWorkerMain(). I've\n> > attached a patch for HEAD.\n> >\n>\n> +1 The fix looks reasonable to me too.\n> Is it possible for the patch to add test cases for the two identified\n> problem scenarios? (PROC_IN_VACUUM, PROC_IN_SAFE_IC)\n\nNot sure we can add stable tests for this. There is no way in the test\ninfra to control parallel workers to suspend and resume etc. and the\noldest xmin can vary depending on the situation. Probably we can add\nan assertion to ensure a parallel worker for vacuum or create index\nhas PROC_IN_VACUUM or PROC_IN_SAFE_IC, respectively.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 12 Oct 2021 09:25:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 9:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Oct 11, 2021 at 09:23:32AM +0900, Masahiko Sawada wrote:\n> > On Sat, Oct 9, 2021 at 12:13 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >> * PROC_VACUUM_FOR_WRAPAROUND. Should be innocuous I think, since the\n> >> \"parent\" process already has this flag and thus shouldn't be cancelled.\n> >\n> > Currently, we don't support parallel vacuum for autovacuum. So\n> > parallel workers for vacuum don't have these two flags.\n>\n> That's something that should IMO be marked in the code as a comment as\n> something to worry about once/if someone begins playing with parallel\n> autovacuum. If the change involving autovacuum is simple, I see no\n> reason to not add this part now, though.\n\nAgreed. I added the comment. Also, I added an assertion to ensure that\na parallel worker for vacuum has PROC_IN_VACUUM flag (and doesn't have\nother flags). But we cannot do that for parallel workers for building\nbtree index as they don’t know whether or not the CONCURRENTLY option\nis specified.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Oct 2021 15:53:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "Hmm, I think this should happen before the transaction snapshot is\nestablished in the worker; perhaps immediately after calling\nStartParallelWorkerTransaction(), or anyway not after\nSetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\na 'sourceproc' argument, why not do it exactly there? ISTM that\nProcArrayInstallRestoredXmin() is where this should happen.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Oct 2021 12:35:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On 2021-Oct-19, Alvaro Herrera wrote:\n\n> Hmm, I think this should happen before the transaction snapshot is\n> established in the worker; perhaps immediately after calling\n> StartParallelWorkerTransaction(), or anyway not after\n> SetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\n> a 'sourceproc' argument, why not do it exactly there? ISTM that\n> ProcArrayInstallRestoredXmin() is where this should happen.\n\n... and there is a question about the lock strength used for\nProcArrayLock. The current routine uses LW_SHARED, but there's no\nclarity that we can modify proc->statusFlags and ProcGlobal->statusFlags\nwithout LW_EXCLUSIVE.\n\nMaybe we can change ProcArrayInstallRestoredXmin so that if it sees that\nproc->statusFlags is not zero, then it grabs LW_EXCLUSIVE (and copies),\notherwise it keeps using LW_SHARED as it does now (and does not copy.)\n\n(This also suggests that using LW_EXCLUSIVE inconditionally for all\ncases as your patch does is not great. OTOH it's just once at every\nbgworker start, so it's not *that* frequent.)\n\n\nInitially, I was a bit nervous about copying flags willy-nilly. Do we\nneed to be more careful? I mean, have a way for the code to specify\nflags to copy, maybe something like\n\nMyProc->statusFlags |= proc->statusFlags & copyableFlags;\nProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;\n\nwith this coding,\n1. we do not unset flags that the bgworker already has for whatever\nreason\n2. we do not copy flags that may be unrelated to the effect we desire.\n\nThe problem, and it's something I don't have an answer for, is how to\nspecify copyableFlags. This code is the generic ParallelWorkerMain()\nand there's little-to-no chance to pass stuff from the process that\nrequested the bgworker. So maybe Sawada-san's original coding of just\ncopying everything is okay.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 19 Oct 2021 15:07:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 3:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Oct-19, Alvaro Herrera wrote:\n>\n\nThank you for the comment.\n\n> > Hmm, I think this should happen before the transaction snapshot is\n> > established in the worker; perhaps immediately after calling\n> > StartParallelWorkerTransaction(), or anyway not after\n> > SetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\n> > a 'sourceproc' argument, why not do it exactly there? ISTM that\n> > ProcArrayInstallRestoredXmin() is where this should happen.\n>\n> ... and there is a question about the lock strength used for\n> ProcArrayLock. The current routine uses LW_SHARED, but there's no\n> clarity that we can modify proc->statusFlags and ProcGlobal->statusFlags\n> without LW_EXCLUSIVE.\n>\n> Maybe we can change ProcArrayInstallRestoredXmin so that if it sees that\n> proc->statusFlags is not zero, then it grabs LW_EXCLUSIVE (and copies),\n> otherwise it keeps using LW_SHARED as it does now (and does not copy.)\n\nInitially, I've considered copying statusFlags in\nProcArrayInstallRestoredXmin() but I hesitated to do that because\nstatusFlags is not relevant with xmin and snapshot stuff. But I agree\nthat copying statusFlags should happen before restoring the snapshot.\n\nIf we copy statusFlags in ProcArrayInstallRestoredXmin() there is\nstill little window that the restored snapshot holds back the oldest\nxmin? If so it would be better to call ProcArrayCopyStatusFlags()\nright after StartParallelWorker().\n\n> (This also suggests that using LW_EXCLUSIVE inconditionally for all\n> cases as your patch does is not great. OTOH it's just once at every\n> bgworker start, so it's not *that* frequent.)\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 20 Oct 2021 09:27:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 9:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 3:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Oct-19, Alvaro Herrera wrote:\n> >\n>\n> Thank you for the comment.\n>\n> > > Hmm, I think this should happen before the transaction snapshot is\n> > > established in the worker; perhaps immediately after calling\n> > > StartParallelWorkerTransaction(), or anyway not after\n> > > SetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\n> > > a 'sourceproc' argument, why not do it exactly there? ISTM that\n> > > ProcArrayInstallRestoredXmin() is where this should happen.\n> >\n> > ... and there is a question about the lock strength used for\n> > ProcArrayLock. The current routine uses LW_SHARED, but there's no\n> > clarity that we can modify proc->statusFlags and ProcGlobal->statusFlags\n> > without LW_EXCLUSIVE.\n> >\n> > Maybe we can change ProcArrayInstallRestoredXmin so that if it sees that\n> > proc->statusFlags is not zero, then it grabs LW_EXCLUSIVE (and copies),\n> > otherwise it keeps using LW_SHARED as it does now (and does not copy.)\n>\n> Initially, I've considered copying statusFlags in\n> ProcArrayInstallRestoredXmin() but I hesitated to do that because\n> statusFlags is not relevant with xmin and snapshot stuff. But I agree\n> that copying statusFlags should happen before restoring the snapshot.\n>\n> If we copy statusFlags in ProcArrayInstallRestoredXmin() there is\n> still little window that the restored snapshot holds back the oldest\n> xmin?\n\nThat's wrong, I'd misunderstood.\n\nI agree to copy statusFlags in ProcArrayInstallRestoredXmin(). I've\nupdated the patch accordingly.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 22 Oct 2021 14:38:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Fri, 22 Oct 2021 at 07:38, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 9:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 3:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > On 2021-Oct-19, Alvaro Herrera wrote:\n> > >\n> >\n> > Thank you for the comment.\n> >\n> > > > Hmm, I think this should happen before the transaction snapshot is\n> > > > established in the worker; perhaps immediately after calling\n> > > > StartParallelWorkerTransaction(), or anyway not after\n> > > > SetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\n> > > > a 'sourceproc' argument, why not do it exactly there? ISTM that\n> > > > ProcArrayInstallRestoredXmin() is where this should happen.\n> > >\n> > > ... and there is a question about the lock strength used for\n> > > ProcArrayLock. The current routine uses LW_SHARED, but there's no\n> > > clarity that we can modify proc->statusFlags and ProcGlobal->statusFlags\n> > > without LW_EXCLUSIVE.\n> > >\n> > > Maybe we can change ProcArrayInstallRestoredXmin so that if it sees that\n> > > proc->statusFlags is not zero, then it grabs LW_EXCLUSIVE (and copies),\n> > > otherwise it keeps using LW_SHARED as it does now (and does not copy.)\n> >\n> > Initially, I've considered copying statusFlags in\n> > ProcArrayInstallRestoredXmin() but I hesitated to do that because\n> > statusFlags is not relevant with xmin and snapshot stuff. But I agree\n> > that copying statusFlags should happen before restoring the snapshot.\n> >\n> > If we copy statusFlags in ProcArrayInstallRestoredXmin() there is\n> > still little window that the restored snapshot holds back the oldest\n> > xmin?\n>\n> That's wrong, I'd misunderstood.\n>\n> I agree to copy statusFlags in ProcArrayInstallRestoredXmin(). I've\n> updated the patch accordingly.\n\nI've tested this patch, and it correctly fixes the issue of blocking\nxmin from advancing, and also fixes an issue of retreating the\nobserved *_oldest_nonremovable in XidHorizons through parallel\nworkers.\n\nThere are still some other soundness issues with xmin handling (see\n[0]), but that should not prevent this patch from landing in the\nrelevant branches.\n\nKind regards,\n\nMatthias van de Meent\n\n[0] https://www.postgresql.org/message-id/flat/17257-1e46de26bec11433%40postgresql.org\n\n\n",
"msg_date": "Fri, 5 Nov 2021 15:45:51 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Fri, Oct 22, 2021 at 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 9:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I agree to copy statusFlags in ProcArrayInstallRestoredXmin(). I've\n> updated the patch accordingly.\n>\n\n1.\n@@ -2663,7 +2677,16 @@ ProcArrayInstallRestoredXmin(TransactionId\nxmin, PGPROC *proc)\n TransactionIdIsNormal(xid) &&\n TransactionIdPrecedesOrEquals(xid, xmin))\n {\n+ /* restore xmin */\n MyProc->xmin = TransactionXmin = xmin;\n+\n+ /* copy statusFlags */\n+ if (flags != 0)\n+ {\n+ MyProc->statusFlags = proc->statusFlags;\n+ ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;\n+ }\n\nIs there a reason to tie the logic of copying status flags with the\nlast two transaction-related conditions?\n\n2.\n LWLockAcquire(ProcArrayLock, LW_SHARED);\n\n+ flags = proc->statusFlags;\n+\n+ /*\n+ * If the source xact has any statusFlags, we re-grab ProcArrayLock\n+ * on exclusive mode so we can copy it to MyProc->statusFlags.\n+ */\n+ if (flags != 0)\n+ {\n+ LWLockRelease(ProcArrayLock);\n+ LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n+ }\n\n\nThis looks a bit odd to me. It would have been better if we know when\nto acquire an exclusive lock without first acquiring the shared lock.\nI see why it could be a good idea to do this stuff in\nProcArrayInstallRestoredXmin() but seeing the patch it seems better to\ndo this separately for the parallel worker as is done in your previous\npatch version but do it after we call\nStartParallelWorkerTransaction(). I am also not very sure if the other\ncallers of this code path will expect ProcArrayInstallRestoredXmin()\nto do this assignment and also the function name appears to be very\nspecific to what it is currently doing.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Nov 2021 14:44:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Fri, Nov 5, 2021 at 8:16 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Fri, 22 Oct 2021 at 07:38, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 9:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 20, 2021 at 3:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > >\n> > > > On 2021-Oct-19, Alvaro Herrera wrote:\n> > > >\n> > >\n> > > Thank you for the comment.\n> > >\n> > > > > Hmm, I think this should happen before the transaction snapshot is\n> > > > > established in the worker; perhaps immediately after calling\n> > > > > StartParallelWorkerTransaction(), or anyway not after\n> > > > > SetTransactionSnapshot. In fact, since SetTransactionSnapshot receives\n> > > > > a 'sourceproc' argument, why not do it exactly there? ISTM that\n> > > > > ProcArrayInstallRestoredXmin() is where this should happen.\n> > > >\n> > > > ... and there is a question about the lock strength used for\n> > > > ProcArrayLock. The current routine uses LW_SHARED, but there's no\n> > > > clarity that we can modify proc->statusFlags and ProcGlobal->statusFlags\n> > > > without LW_EXCLUSIVE.\n> > > >\n> > > > Maybe we can change ProcArrayInstallRestoredXmin so that if it sees that\n> > > > proc->statusFlags is not zero, then it grabs LW_EXCLUSIVE (and copies),\n> > > > otherwise it keeps using LW_SHARED as it does now (and does not copy.)\n> > >\n> > > Initially, I've considered copying statusFlags in\n> > > ProcArrayInstallRestoredXmin() but I hesitated to do that because\n> > > statusFlags is not relevant with xmin and snapshot stuff. But I agree\n> > > that copying statusFlags should happen before restoring the snapshot.\n> > >\n> > > If we copy statusFlags in ProcArrayInstallRestoredXmin() there is\n> > > still little window that the restored snapshot holds back the oldest\n> > > xmin?\n> >\n> > That's wrong, I'd misunderstood.\n> >\n> > I agree to copy statusFlags in ProcArrayInstallRestoredXmin(). I've\n> > updated the patch accordingly.\n>\n> I've tested this patch, and it correctly fixes the issue of blocking\n> xmin from advancing, and also fixes an issue of retreating the\n> observed *_oldest_nonremovable in XidHorizons through parallel\n> workers.\n>\n> There are still some other soundness issues with xmin handling (see\n> [0]), but that should not prevent this patch from landing in the\n> relevant branches.\n>\n\nAFAICU, in the thread referred by you, it seems that the main reported\nissue will be resolved by this patch but there is a discussion about\nxmin moving backward which seems to be the case with the current code\nas per code comments mentioned by Andres. Is my understanding correct?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 10 Nov 2021 16:21:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, 10 Nov 2021 at 11:51, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 5, 2021 at 8:16 PM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>\n> AFAICU, in the thread referred by you, it seems that the main reported\n> issue will be resolved by this patch but there is a discussion about\n> xmin moving backward which seems to be the case with the current code\n> as per code comments mentioned by Andres. Is my understanding correct?\n\nThat is correct.\n\n\n",
"msg_date": "Wed, 10 Nov 2021 13:11:56 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 6:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 22, 2021 at 11:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 9:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I agree to copy statusFlags in ProcArrayInstallRestoredXmin(). I've\n> > updated the patch accordingly.\n> >\n>\n> 1.\n> @@ -2663,7 +2677,16 @@ ProcArrayInstallRestoredXmin(TransactionId\n> xmin, PGPROC *proc)\n> TransactionIdIsNormal(xid) &&\n> TransactionIdPrecedesOrEquals(xid, xmin))\n> {\n> + /* restore xmin */\n> MyProc->xmin = TransactionXmin = xmin;\n> +\n> + /* copy statusFlags */\n> + if (flags != 0)\n> + {\n> + MyProc->statusFlags = proc->statusFlags;\n> + ProcGlobal->statusFlags[MyProc->pgxactoff] = MyProc->statusFlags;\n> + }\n>\n> Is there a reason to tie the logic of copying status flags with the\n> last two transaction-related conditions?\n\nMy wrong. It should not be tied.\n\n>\n> 2.\n> LWLockAcquire(ProcArrayLock, LW_SHARED);\n>\n> + flags = proc->statusFlags;\n> +\n> + /*\n> + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> + */\n> + if (flags != 0)\n> + {\n> + LWLockRelease(ProcArrayLock);\n> + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> + }\n>\n>\n> This looks a bit odd to me. It would have been better if we know when\n> to acquire an exclusive lock without first acquiring the shared lock.\n\nI think we should acquire an exclusive lock only if status flags are\nnot empty. But to check the status flags we need to acquire a shared\nlock. No?\n\n> I see why it could be a good idea to do this stuff in\n> ProcArrayInstallRestoredXmin() but seeing the patch it seems better to\n> do this separately for the parallel worker as is done in your previous\n> patch version but do it after we call\n> StartParallelWorkerTransaction(). I am also not very sure if the other\n> callers of this code path will expect ProcArrayInstallRestoredXmin()\n> to do this assignment and also the function name appears to be very\n> specific to what it is currently doing.\n\nFair enough. I was also concerned about that but since\nProcArrayInstallRestoredXmin() is a convenient place to set status\nflags I changed the patch accordingly. As you pointed out, doing that\nseparately for the parallel worker is clearer.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Nov 2021 12:22:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "Hi,\n\nOn 2021-11-11 12:22:42 +0900, Masahiko Sawada wrote:\n> > 2.\n> > LWLockAcquire(ProcArrayLock, LW_SHARED);\n> >\n> > + flags = proc->statusFlags;\n> > +\n> > + /*\n> > + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> > + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> > + */\n> > + if (flags != 0)\n> > + {\n> > + LWLockRelease(ProcArrayLock);\n> > + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > + }\n> >\n> >\n> > This looks a bit odd to me. It would have been better if we know when\n> > to acquire an exclusive lock without first acquiring the shared lock.\n> \n> I think we should acquire an exclusive lock only if status flags are\n> not empty. But to check the status flags we need to acquire a shared\n> lock. No?\n\nThis seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\nonly happens in the context of much more expensive operations.\n\nI think it might be worth asserting that the set of flags we're copying is a\nknown subset of the flags that are valid to copy from the source.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 10 Nov 2021 19:41:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-11-11 12:22:42 +0900, Masahiko Sawada wrote:\n> > > 2.\n> > > LWLockAcquire(ProcArrayLock, LW_SHARED);\n> > >\n> > > + flags = proc->statusFlags;\n> > > +\n> > > + /*\n> > > + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> > > + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> > > + */\n> > > + if (flags != 0)\n> > > + {\n> > > + LWLockRelease(ProcArrayLock);\n> > > + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > > + }\n> > >\n> > >\n> > > This looks a bit odd to me. It would have been better if we know when\n> > > to acquire an exclusive lock without first acquiring the shared lock.\n> >\n> > I think we should acquire an exclusive lock only if status flags are\n> > not empty. But to check the status flags we need to acquire a shared\n> > lock. No?\n>\n> This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> only happens in the context of much more expensive operations.\n>\n\nFair point. I think that will also make the change in\nProcArrayInstallRestoredXmin() appear neat.\n\n> I think it might be worth asserting that the set of flags we're copying is a\n> known subset of the flags that are valid to copy from the source.\n>\n\nSounds reasonable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Nov 2021 09:23:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-11-11 12:22:42 +0900, Masahiko Sawada wrote:\n> > > > 2.\n> > > > LWLockAcquire(ProcArrayLock, LW_SHARED);\n> > > >\n> > > > + flags = proc->statusFlags;\n> > > > +\n> > > > + /*\n> > > > + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> > > > + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> > > > + */\n> > > > + if (flags != 0)\n> > > > + {\n> > > > + LWLockRelease(ProcArrayLock);\n> > > > + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > > > + }\n> > > >\n> > > >\n> > > > This looks a bit odd to me. It would have been better if we know when\n> > > > to acquire an exclusive lock without first acquiring the shared lock.\n> > >\n> > > I think we should acquire an exclusive lock only if status flags are\n> > > not empty. But to check the status flags we need to acquire a shared\n> > > lock. No?\n> >\n> > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > only happens in the context of much more expensive operations.\n> >\n>\n> Fair point. I think that will also make the change in\n> ProcArrayInstallRestoredXmin() appear neat.\n\nAgreed.\n\nThis makes me think that it'd be better to copy status flags in a\nseparate function rather than ProcArrayInstallRestoredXmin(). The\ncurrent patch makes use of the fact that ProcArrayInstallRestoedXmin()\nacquires a shared lock in order to check the source's status flags.\nBut if we can acquire an exclusive lock unconditionally in this\ncontext, it’s clearer to do in a separate function.\n\n>\n> > I think it might be worth asserting that the set of flags we're copying is a\n> > known subset of the flags that are valid to copy from the source.\n> >\n>\n> Sounds reasonable.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Nov 2021 14:09:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 10:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2021-11-11 12:22:42 +0900, Masahiko Sawada wrote:\n> > > > > 2.\n> > > > > LWLockAcquire(ProcArrayLock, LW_SHARED);\n> > > > >\n> > > > > + flags = proc->statusFlags;\n> > > > > +\n> > > > > + /*\n> > > > > + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> > > > > + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> > > > > + */\n> > > > > + if (flags != 0)\n> > > > > + {\n> > > > > + LWLockRelease(ProcArrayLock);\n> > > > > + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > > > > + }\n> > > > >\n> > > > >\n> > > > > This looks a bit odd to me. It would have been better if we know when\n> > > > > to acquire an exclusive lock without first acquiring the shared lock.\n> > > >\n> > > > I think we should acquire an exclusive lock only if status flags are\n> > > > not empty. But to check the status flags we need to acquire a shared\n> > > > lock. No?\n> > >\n> > > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > > only happens in the context of much more expensive operations.\n> > >\n> >\n> > Fair point. I think that will also make the change in\n> > ProcArrayInstallRestoredXmin() appear neat.\n>\n> Agreed.\n>\n> This makes me think that it'd be better to copy status flags in a\n> separate function rather than ProcArrayInstallRestoredXmin(). 
The\n> current patch makes use of the fact that ProcArrayInstallRestoedXmin()\n> acquires a shared lock in order to check the source's status flags.\n> But if we can acquire an exclusive lock unconditionally in this\n> context, it’s clearer to do in a separate function.\n>\n\nDo you mean to say that do it in a separate function and call\nimmediately after StartParallelWorkerTransaction or do you mean to do\nit in a separate function and invoke it from\nProcArrayInstallRestoedXmin()? I think the disadvantage I see by not\ndoing in ProcArrayInstallRestoedXmin is that we need to take procarray\nlock twice (once in exclusive mode and then in shared mode) so doing\nit in ProcArrayInstallRestoedXmin is beneficial from that angle. The\nmain reason why I was not very happy with the last patch was due to\nreleasing and reacquiring the lock but if we directly acquire it in\nexclusive mode then that shouldn't be a problem. OTOH, doing it via a\nseparate function is also not that bad.\n\n> >\n> > > I think it might be worth asserting that the set of flags we're copying is a\n> > > known subset of the flags that are valid to copy from the source.\n> > >\n> >\n> > Sounds reasonable.\n>\n> +1\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 11 Nov 2021 11:37:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 10:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > On 2021-11-11 12:22:42 +0900, Masahiko Sawada wrote:\n> > > > > > 2.\n> > > > > > LWLockAcquire(ProcArrayLock, LW_SHARED);\n> > > > > >\n> > > > > > + flags = proc->statusFlags;\n> > > > > > +\n> > > > > > + /*\n> > > > > > + * If the source xact has any statusFlags, we re-grab ProcArrayLock\n> > > > > > + * on exclusive mode so we can copy it to MyProc->statusFlags.\n> > > > > > + */\n> > > > > > + if (flags != 0)\n> > > > > > + {\n> > > > > > + LWLockRelease(ProcArrayLock);\n> > > > > > + LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);\n> > > > > > + }\n> > > > > >\n> > > > > >\n> > > > > > This looks a bit odd to me. It would have been better if we know when\n> > > > > > to acquire an exclusive lock without first acquiring the shared lock.\n> > > > >\n> > > > > I think we should acquire an exclusive lock only if status flags are\n> > > > > not empty. But to check the status flags we need to acquire a shared\n> > > > > lock. No?\n> > > >\n> > > > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > > > only happens in the context of much more expensive operations.\n> > > >\n> > >\n> > > Fair point. I think that will also make the change in\n> > > ProcArrayInstallRestoredXmin() appear neat.\n> >\n> > Agreed.\n> >\n> > This makes me think that it'd be better to copy status flags in a\n> > separate function rather than ProcArrayInstallRestoredXmin(). 
The\n> > current patch makes use of the fact that ProcArrayInstallRestoedXmin()\n> > acquires a shared lock in order to check the source's status flags.\n> > But if we can acquire an exclusive lock unconditionally in this\n> > context, it’s clearer to do in a separate function.\n> >\n>\n> Do you mean to say that do it in a separate function and call\n> immediately after StartParallelWorkerTransaction or do you mean to do\n> it in a separate function and invoke it from\n> ProcArrayInstallRestoedXmin()?\n\nI meant the former.\n\n> I think the disadvantage I see by not\n> doing in ProcArrayInstallRestoedXmin is that we need to take procarray\n> lock twice (once in exclusive mode and then in shared mode) so doing\n> it in ProcArrayInstallRestoedXmin is beneficial from that angle.\n\nRight. I thought that this overhead is also negligible in this\ncontext. If that’s right, it’d be better to do it in a separate\nfunction from the clearness point of view. Also if we raise the lock\nlevel in ProcArrayInstallRestoredXmin(), a caller of the function who\nwants just to set xmin will end up acquiring an exclusive lock. Which\nis unnecessary for the caller.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 11 Nov 2021 17:06:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Thu, Nov 11, 2021 at 1:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 11, 2021 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I think the disadvantage I see by not\n> > doing in ProcArrayInstallRestoedXmin is that we need to take procarray\n> > lock twice (once in exclusive mode and then in shared mode) so doing\n> > it in ProcArrayInstallRestoedXmin is beneficial from that angle.\n>\n> Right. I thought that this overhead is also negligible in this\n> context. If that’s right, it’d be better to do it in a separate\n> function from the clearness point of view. Also if we raise the lock\n> level in ProcArrayInstallRestoredXmin(), a caller of the function who\n> wants just to set xmin will end up acquiring an exclusive lock. Which\n> is unnecessary for the caller.\n>\n\nAs mentioned by Andres, ProcArrayInstallRestoredXmin() happens in an\nexpensive context apart from this which is while creating logical\nreplication, so the cost might not matter but I see your point about\nclarity. Basically, this function can get called from two different\ncode paths i.e creation of logical replication slot and parallel\nworker startup but as of today we need it only in the latter case, so\nit is better to it in that code path (after calling\nStartParallelWorkerTransaction()). I think we can do that way unless\nAlvaro thinks otherwise as he had proposed to do it in\nProcArrayInstallRestoredXmin(). Alvaro, others, do you favor any\nparticular way to deal with this case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 12 Nov 2021 09:13:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On 2021-Nov-11, Masahiko Sawada wrote:\n\n> On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n\n> > > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > > only happens in the context of much more expensive operations.\n> >\n> > Fair point. I think that will also make the change in\n> > ProcArrayInstallRestoredXmin() appear neat.\n> \n> This makes me think that it'd be better to copy status flags in a\n> separate function rather than ProcArrayInstallRestoredXmin().\n\nTo me, and this is perhaps just personal opinion, it seems conceptually\nsimpler to have ProcArrayInstallRestoredXmin acquire exclusive and do\nboth things. Why? Because if you have two functions, you have to be\ncareful not to call the new function after ProcArrayInstallRestoredXmin;\notherwise you would create an instant during which you make an\nXmin-without-flag visible to other procs; this causes the computed xmin\ngo backwards, which is verboten.\n\nIf I understand Amit correctly, his point is about the callers of\nRestoreTransactionSnapshot, which are two: CreateReplicationSlot and\nParallelWorkerMain. He wants you hypothetical new function called from\nthe latter but not the former. Looking at both, it seems a bit strange\nto make them responsible for a detail such as \"copy ->statusFlags from\nsource proc to mine\". It seems more reasonable to add a third flag to\n RestoreTransactionSnapshot(Snapshot snapshot, void *source_proc, bool is_vacuum)\nand if that is true, tell SetTransactionSnapshot to copy flags,\notherwise not.\n\n\n... unrelated to this, and looking at CreateReplicationSlot, I wonder\nwhy does it pass MyProc as the source_pgproc parameter. What good is\nthat doing? 
I mean, if the only thing we do with source_pgproc is to\ncopy stuff from source_pgproc to MyProc, then if source_pgproc is\nMyProc, we're effectively doing nothing at all. (You can't \"fix\" this\nby merely passing NULL, because what that would do is change the calling\nof ProcArrayInstallRestoredXmin into a call of\nProcArrayInstallImportedXmin and that would presumably have different\nbehavior.) I may be misreading the code of course, but it sounds like\nthe intention of CreateReplicationSlot is to \"do nothing\" with the\ntransaction snapshot in a complicated way.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 12 Nov 2021 10:14:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Fri, Nov 12, 2021 at 6:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Nov-11, Masahiko Sawada wrote:\n>\n> > On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > > > only happens in the context of much more expensive operations.\n> > >\n> > > Fair point. I think that will also make the change in\n> > > ProcArrayInstallRestoredXmin() appear neat.\n> >\n> > This makes me think that it'd be better to copy status flags in a\n> > separate function rather than ProcArrayInstallRestoredXmin().\n>\n> To me, and this is perhaps just personal opinion, it seems conceptually\n> simpler to have ProcArrayInstallRestoredXmin acquire exclusive and do\n> both things. Why? Because if you have two functions, you have to be\n> careful not to call the new function after ProcArrayInstallRestoredXmin;\n> otherwise you would create an instant during which you make an\n> Xmin-without-flag visible to other procs; this causes the computed xmin\n> go backwards, which is verboten.\n>\n> If I understand Amit correctly, his point is about the callers of\n> RestoreTransactionSnapshot, which are two: CreateReplicationSlot and\n> ParallelWorkerMain. He wants you hypothetical new function called from\n> the latter but not the former. Looking at both, it seems a bit strange\n> to make them responsible for a detail such as \"copy ->statusFlags from\n> source proc to mine\". 
It seems more reasonable to add a third flag to\n> RestoreTransactionSnapshot(Snapshot snapshot, void *source_proc, bool is_vacuum)\n> and if that is true, tell SetTransactionSnapshot to copy flags,\n> otherwise not.\n>\n\nIf we decide to go this way then I suggest adding a comment to convey\nwhy we choose to copy status flags in ProcArrayInstallRestoredXmin()\nas the function name doesn't indicate it.\n\n>\n> ... unrelated to this, and looking at CreateReplicationSlot, I wonder\n> why does it pass MyProc as the source_pgproc parameter. What good is\n> that doing? I mean, if the only thing we do with source_pgproc is to\n> copy stuff from source_pgproc to MyProc, then if source_pgproc is\n> MyProc, we're effectively doing nothing at all. (You can't \"fix\" this\n> by merely passing NULL, because what that would do is change the calling\n> of ProcArrayInstallRestoredXmin into a call of\n> ProcArrayInstallImportedXmin and that would presumably have different\n> behavior.) I may be misreading the code of course, but it sounds like\n> the intention of CreateReplicationSlot is to \"do nothing\" with the\n> transaction snapshot in a complicated way.\n>\n\nIt ensures that the source transaction is still running, otherwise, it\nwon't allow the import to be successful. It also seems to help by\nupdating the state for GlobalVis* stuff. I think in the current form\nit seems to help in not moving MyProc-xmin and TransactionXmin\nbackward due to checks in ProcArrayInstallRestoredXmin() and also\nchange them to the value in source snapshot.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 13 Nov 2021 10:40:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Sat, Nov 13, 2021 at 2:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 12, 2021 at 6:44 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Nov-11, Masahiko Sawada wrote:\n> >\n> > > On Thu, Nov 11, 2021 at 12:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Nov 11, 2021 at 9:11 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > > > This seems like an unnecessary optimization. ProcArrayInstallRestoredXmin()\n> > > > > only happens in the context of much more expensive operations.\n> > > >\n> > > > Fair point. I think that will also make the change in\n> > > > ProcArrayInstallRestoredXmin() appear neat.\n> > >\n> > > This makes me think that it'd be better to copy status flags in a\n> > > separate function rather than ProcArrayInstallRestoredXmin().\n> >\n\nThank you for the comment!\n\n> > To me, and this is perhaps just personal opinion, it seems conceptually\n> > simpler to have ProcArrayInstallRestoredXmin acquire exclusive and do\n> > both things. Why? Because if you have two functions, you have to be\n> > careful not to call the new function after ProcArrayInstallRestoredXmin;\n> > otherwise you would create an instant during which you make an\n> > Xmin-without-flag visible to other procs; this causes the computed xmin\n> > go backwards, which is verboten.\n\nI agree that it's simpler.\n\nI thought statusFlags and xmin are conceptually separate things since\nPROC_VACUUM_FOR_WRAPAROUND is not related to xid at all for example.\nBut given that the use case of copying statusFlags from someone is\nonly parallel worker startup for now, copying statusFlags while\nsetting xmin seems convenient and simple. 
If we want to only copy\nstatusFlags in some use cases in the future, we can have a separate\nfunction for that.\n\n> >\n> > If I understand Amit correctly, his point is about the callers of\n> > RestoreTransactionSnapshot, which are two: CreateReplicationSlot and\n> > ParallelWorkerMain. He wants you hypothetical new function called from\n> > the latter but not the former. Looking at both, it seems a bit strange\n> > to make them responsible for a detail such as \"copy ->statusFlags from\n> > source proc to mine\". It seems more reasonable to add a third flag to\n> > RestoreTransactionSnapshot(Snapshot snapshot, void *source_proc, bool is_vacuum)\n> > and if that is true, tell SetTransactionSnapshot to copy flags,\n> > otherwise not.\n\nFor the idea of is_vacuum flag, we don't know if a parallel worker is\nlaunched for parallel vacuum at the time of ParallelWorkerMain().\n\n> >\n>\n> If we decide to go this way then I suggest adding a comment to convey\n> why we choose to copy status flags in ProcArrayInstallRestoredXmin()\n> as the function name doesn't indicate it.\n\nAgreed.\n\nI've updated the patch so that ProcArrayInstallRestoredXmin() sets\nboth xmin and statusFlags only when the source proc is still running\nand xmin doesn't go backwards. IOW it doesn't happen that only one of\nthem is set by this function, which seems more understandable\nbehavior.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 15 Nov 2021 16:08:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 12:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've updated the patch so that ProcArrayInstallRestoredXmin() sets\n> both xmin and statusFlags only when the source proc is still running\n> and xmin doesn't go backwards. IOW it doesn't happen that only one of\n> them is set by this function, which seems more understandable\n> behavior.\n>\n\nHow have you tested this patch? As there was no test case presented in\nthis thread, I used the below manual test to verify that the patch\nworks. The idea is to generate a scenario where a parallel vacuum\nworker holds back the xmin from advancing.\n\nSetup:\n-- keep autovacuum = off in postgresql.conf\ncreate table t1(c1 int, c2 int);\ninsert into t1 values(generate_series(1,1000),100);\ncreate index idx_t1_c1 on t1(c1);\ncreate index idx_t1_c2 on t1(c2);\n\ncreate table t2(c1 int, c2 int);\ninsert into t2 values(generate_series(1,1000),100);\ncreate index idx_t2_c1 on t1(c1);\n\nSession-1:\ndelete from t1 where c1 < 10; --this is to ensure that vacuum has some\nwork to do\n\nSession-2:\n-- this is done just to ensure the Session-1's xmin captures the value\nof this xact\nbegin;\nselect txid_current(); -- say value is 725\ninsert into t2 values(1001, 100);\n\nSession-1:\nset min_parallel_index_scan_size=0;\n-- attach a debugger and ensure to stop parallel worker somewhere\nbefore it completes and the leader after launching parallel worker\nvacuum t1;\n\nSession-2:\n-- commit the open transaction\ncommit;\n\nSession-3:\n-- attach a debugger and break at the caller of vacuum_set_xid_limits.\nvacuum t2;\n\nI noticed that before the patch the value of oldestXmin in Session-3\nis 725 but after the patch it got advanced. I have made minor edits to\nthe attached patch. See, if this looks okay to you then please prepare\nand test the patch for back-branches as well. 
If you have some other\nway to test the patch then do share the same and let me know if you\nsee any flaw in the above verification method.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Tue, 16 Nov 2021 17:15:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 12:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've updated the patch so that ProcArrayInstallRestoredXmin() sets\n> > both xmin and statusFlags only when the source proc is still running\n> > and xmin doesn't go backwards. IOW it doesn't happen that only one of\n> > them is set by this function, which seems more understandable\n> > behavior.\n> >\n>\n> How have you tested this patch? As there was no test case presented in\n> this thread, I used the below manual test to verify that the patch\n> works. The idea is to generate a scenario where a parallel vacuum\n> worker holds back the xmin from advancing.\n>\n> Setup:\n> -- keep autovacuum = off in postgresql.conf\n> create table t1(c1 int, c2 int);\n> insert into t1 values(generate_series(1,1000),100);\n> create index idx_t1_c1 on t1(c1);\n> create index idx_t1_c2 on t1(c2);\n>\n> create table t2(c1 int, c2 int);\n> insert into t2 values(generate_series(1,1000),100);\n> create index idx_t2_c1 on t1(c1);\n>\n> Session-1:\n> delete from t1 where c1 < 10; --this is to ensure that vacuum has some\n> work to do\n>\n> Session-2:\n> -- this is done just to ensure the Session-1's xmin captures the value\n> of this xact\n> begin;\n> select txid_current(); -- say value is 725\n> insert into t2 values(1001, 100);\n>\n> Session-1:\n> set min_parallel_index_scan_size=0;\n> -- attach a debugger and ensure to stop parallel worker somewhere\n> before it completes and the leader after launching parallel worker\n> vacuum t1;\n>\n> Session-2:\n> -- commit the open transaction\n> commit;\n>\n> Session-3:\n> -- attach a debugger and break at the caller of vacuum_set_xid_limits.\n> vacuum t2;\n\nYes, I've tested this patch in a similar way; while running pgbench in\nthe background in order to constantly consume XID, I checked if the\noldest xmin in VACUUM VERBOSE log is advancing even during 
parallel\nvacuum running.\n\n>\n> I noticed that before the patch the value of oldestXmin in Session-3\n> is 725 but after the patch it got advanced. I have made minor edits to\n> the attached patch. See, if this looks okay to you then please prepare\n> and test the patch for back-branches as well. If you have some other\n> way to test the patch then do share the same and let me know if you\n> see any flaw in the above verification method.\n\nThe patch looks good to me. But I can't come up with a stable test for\nthis. It seems to be hard without stopping and resuming parallel\nvacuum workers. Do you have any good idea?\n\nI've attached patches for back branches (13 and 14).\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 17 Nov 2021 16:35:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 1:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 8:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> The patch looks good to me. But I can't come up with a stable test for\n> this. It seems to be hard without stopping and resuming parallel\n> vacuum workers. Do you have any good idea?\n>\n\nNo, let's wait for a day or so to see if anybody else has any ideas to\nwrite a test for this case, otherwise, I'll check these once again and\npush.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:57:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 7:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > The patch looks good to me. But I can't come up with a stable test for\n> > this. It seems to be hard without stopping and resuming parallel\n> > vacuum workers. Do you have any good idea?\n> >\n>\n> No, let's wait for a day or so to see if anybody else has any ideas to\n> write a test for this case, otherwise, I'll check these once again and\n> push.\n\nI set this \"committed\" in the CF app.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Nov 17, 2021 at 7:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:>> > The patch looks good to me. But I can't come up with a stable test for> > this. It seems to be hard without stopping and resuming parallel> > vacuum workers. Do you have any good idea?> >>> No, let's wait for a day or so to see if anybody else has any ideas to> write a test for this case, otherwise, I'll check these once again and> push.I set this \"committed\" in the CF app.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 Nov 2021 10:15:52 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 7:46 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 7:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > The patch looks good to me. But I can't come up with a stable test for\n> > > this. It seems to be hard without stopping and resuming parallel\n> > > vacuum workers. Do you have any good idea?\n> > >\n> >\n> > No, let's wait for a day or so to see if anybody else has any ideas to\n> > write a test for this case, otherwise, I'll check these once again and\n> > push.\n>\n> I set this \"committed\" in the CF app.\n>\n\nThanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Nov 2021 09:00:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel vacuum workers prevent the oldest xmin from advancing"
}
] |