[
{
"msg_contents": "Hi,\n\nWhile working on [1], I observed that extra memory is allocated in\n'create_list_bounds'\nfunction which can be avoided. So the attached patch removes extra memory\nallocations done inside 'create_list_bounds' function and also removes the\nunused variable 'cell'.\n\nIn the existing code, in create_list_bounds(),\n\n 1. It iterates through all the partitions and for each partition,\n - It iterates through the list of datums named 'listdatums'.\n - For each non null value of 'listdatums', it allocates a memory\n for 'list_value' whose type is 'PartitionListValue' and\nstores value and\n index information.\n - Appends 'list_value' to a list named 'non_null_values'.\n 2. Allocates memory to 'all_values' variable which contains\n information of all the list bounds of all the partitions. The count\n allocated for 'all_values' is nothing but the total number of non null\n values which is populated from the previous step (1).\n 3. Iterates through each item of 'non_null_values' list.\n - It allocates a memory for 'all_values[i]' whose type is\n 'PartitionListValue' and copies the information from 'list_value'.\n\n The above logic is changed to following,\n\n 1. Call function 'get_non_null_count_list_bounds()' which iterates\n through all the partitions and for each partition, it iterates through a\n list of datums and calculates the count of all non null bound values.\n 2. Allocates memory to 'all_values' variable which contains information\n of all the list bounds of all the partitions. The count allocated for\n 'all_values' is nothing but the total number of non null values which is\n populated from the previous step (1).\n 3. 
Iterates through all the partitions and for each partition,\n - It iterates through the list of datums named 'listdatums'.\n - For each non null value of 'listdatums', it allocates a memory\n for 'all_values[i]' whose type is 'PartitionListValue' and stores\n value and index information directly.\n\nThe above fix, removes the extra memory allocations. Let's consider an\nexample.\nIf there are 10 partitions and each partition contains 11 bounds including\nNULL value.\n\nParameters Existing code With patch\nMemory allocation of 'PartitionListValue' 100+100 = 200 times 100 times\nTotal number of iterations 110 + 100 = 210 110 + 110 = 220\nAs we can see in the above data, the total number of iterations are\nincreased slightly\n(When it contains NULL values. Otherwise no change) but it improves in case\nof\nmemory allocations. As memory allocations are costly operations, I feel we\nshould\nconsider changing the existing code.\n\nPlease share your thoughts.\n\n[1] -\nhttps://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n\nThanks & Regards,\nNitin Jadhav",
"msg_date": "Sat, 15 May 2021 14:40:45 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Removed extra memory allocations from create_list_bounds"
},
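The two-pass scheme described in the message above can be sketched outside PostgreSQL. The following is a minimal, self-contained illustration and not code from the actual patch: plain `malloc` stands in for `palloc`, and `BoundSpec` and the fields of `PartitionListValue` are simplified stand-ins (the real structs carry `Datum` values and more state).

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for PartitionListValue: the real struct holds a
 * Datum plus the partition index. */
typedef struct PartitionListValue
{
    int index;
    int value;
} PartitionListValue;

/* Simplified stand-in for one partition's list-bound specification. */
typedef struct BoundSpec
{
    int  nvalues;
    int *values;
    int *is_null;               /* 1 if values[i] represents NULL */
} BoundSpec;

/* Pass 1 (mirrors the role of get_non_null_count_list_bounds() in the
 * patch): count the non-null bound values across all partitions. */
static int
count_non_null_bounds(BoundSpec *specs, int nparts)
{
    int count = 0;

    for (int p = 0; p < nparts; p++)
        for (int v = 0; v < specs[p].nvalues; v++)
            if (!specs[p].is_null[v])
                count++;
    return count;
}

/* Pass 2: allocate 'all_values' once and fill it directly, instead of
 * building a 'non_null_values' list with one allocation per element and
 * then copying each element into yet another per-element allocation. */
static PartitionListValue *
collect_bounds(BoundSpec *specs, int nparts, int *ndatums_out)
{
    int ndatums = count_non_null_bounds(specs, nparts);
    PartitionListValue *all_values =
        malloc(ndatums * sizeof(PartitionListValue));
    int i = 0;

    for (int p = 0; p < nparts; p++)
        for (int v = 0; v < specs[p].nvalues; v++)
            if (!specs[p].is_null[v])
            {
                all_values[i].index = p;
                all_values[i].value = specs[p].values[v];
                i++;
            }
    *ndatums_out = ndatums;
    return all_values;
}
```

This shape trades a second walk over the datum lists for a single array allocation, which is exactly the iterations-versus-allocations trade-off quantified in the example in the first mail.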
{
"msg_contents": "On Sat, May 15, 2021 at 02:40:45PM +0530, Nitin Jadhav wrote:\n> While working on [1], I observed that extra memory is allocated in\n> [1] https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n\nThis is a link to your gmail, not to anything public.\n\nIf it's worth counting list elements in advance, then you can also allocate the\nPartitionListValue as a single chunk, rather than palloc in a loop.\nThis may help large partition heirarchies.\n\nAnd the same thing in create_hash_bounds with hbounds.\n\ncreate_range_bounds() already doesn't call palloc in a loop. However, then\nthere's an asymmetry in create_range_bounds, which is still takes a\ndouble-indirect pointer.\n\nI'm not able to detect that this is saving more than about ~1% less RAM, to\ncreate or select from 1000 partitions, probably because other data structures\nare using much more, and savings here are relatively small.\n\nI'm going to add to the next CF. You can add yourself as an author, and watch\nthat the patch passes tests in cfbot.\nhttps://commitfest.postgresql.org/\nhttp://cfbot.cputube.org/\n\nThanks,\n-- \nJustin",
"msg_date": "Sun, 16 May 2021 12:00:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
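Justin's single-chunk suggestion can be shown in isolation. This is a hedged sketch rather than code from the patches: `malloc` stands in for `palloc`, and the struct is a simplified stand-in for `PartitionListValue`. The point is the difference between a pointer array filled by one allocation per element and a single contiguous array of structs.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for PartitionListValue. */
typedef struct PartitionListValue
{
    int index;
    int value;
} PartitionListValue;

/* Pointer-array pattern: 1 + ndatums allocations in total. */
static PartitionListValue **
alloc_per_element(int ndatums)
{
    PartitionListValue **all_values =
        malloc(ndatums * sizeof(PartitionListValue *));

    for (int i = 0; i < ndatums; i++)
        all_values[i] = malloc(sizeof(PartitionListValue));
    return all_values;
}

/* Single-chunk pattern: exactly one allocation. Callers that need a
 * pointer to element i can still take &all_values[i]. */
static PartitionListValue *
alloc_single_chunk(int ndatums)
{
    return malloc(ndatums * sizeof(PartitionListValue));
}
```

Beyond the reduced allocator traffic, the single chunk is contiguous, which tends to be friendlier to the cache when the array is later sorted and scanned.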
{
"msg_contents": "On Sun, May 16, 2021 at 10:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sat, May 15, 2021 at 02:40:45PM +0530, Nitin Jadhav wrote:\n> > While working on [1], I observed that extra memory is allocated in\n> > [1]\n> https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n>\n> This is a link to your gmail, not to anything public.\n>\n> If it's worth counting list elements in advance, then you can also\n> allocate the\n> PartitionListValue as a single chunk, rather than palloc in a loop.\n> This may help large partition heirarchies.\n>\n> And the same thing in create_hash_bounds with hbounds.\n>\n> create_range_bounds() already doesn't call palloc in a loop. However, then\n> there's an asymmetry in create_range_bounds, which is still takes a\n> double-indirect pointer.\n>\n> I'm not able to detect that this is saving more than about ~1% less RAM, to\n> create or select from 1000 partitions, probably because other data\n> structures\n> are using much more, and savings here are relatively small.\n>\n> I'm going to add to the next CF. You can add yourself as an author, and\n> watch\n> that the patch passes tests in cfbot.\n> https://commitfest.postgresql.org/\n> http://cfbot.cputube.org/\n>\n> Thanks,\n> --\n> Justin\n>\nHi,\nFor 0001-Removed-extra-memory-allocations-from-create_list_bo.patch :\n\n+static int\n+get_non_null_count_list_bounds(PartitionBoundSpec **boundspecs, int nparts)\n\nSince the function returns the total number of non null bound values,\nshould it be named get_non_null_list_bounds_count ?\nIt can also be named get_count_of_... but that's longer.\n\n+ all_values = (PartitionListValue **)\n+ palloc(ndatums * sizeof(PartitionListValue *));\n\nThe palloc() call would take place even if ndatums is 0. 
I think in that\ncase, palloc() doesn't need to be called.\n\nCheers",
"msg_date": "Sun, 16 May 2021 10:26:20 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> > While working on [1], I observed that extra memory is allocated in\n> > [1] https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n\nI am really sorry for this. I wanted to point to the thread subjected\nto 'Multi-Column List Partitioning'.\n\n> If it's worth counting list elements in advance, then you can also allocate the\n> PartitionListValue as a single chunk, rather than palloc in a loop.\n> This may help large partition heirarchies.\n>\n> And the same thing in create_hash_bounds with hbounds.\n\nI agree and thanks for creating those patches. I am not able to apply\nthe patch on the latest HEAD. Kindly check and upload the modified\npatches.\n\n> I'm not able to detect that this is saving more than about ~1% less RAM, to\n> create or select from 1000 partitions, probably because other data structures\n> are using much more, and savings here are relatively small.\n\nYes it does not save huge memory but it's an improvement.\n\n> I'm going to add to the next CF. You can add yourself as an author, and watch\n> that the patch passes tests in cfbot.\n> https://commitfest.postgresql.org/\n> http://cfbot.cputube.org/\n\nThanks for creating the commitfest entry.\n\n> Since the function returns the total number of non null bound values, should it be named get_non_null_list_bounds_count ?\n> It can also be named get_count_of_... but that's longer.\n\nChanged it to 'get_non_null_list_bounds_count'.\n\n> The palloc() call would take place even if ndatums is 0. I think in that case, palloc() doesn't need to be called.\n\nI feel there is no such case where the 'ndatums' is 0 because as we\ncan see below, there is an assert in the 'partition_bounds_create'\nfunction from where we call the 'create_list_bounds' function. 
Kindly\nprovide such a case if I am wrong.\n\nPartitionBoundInfo\npartition_bounds_create(PartitionBoundSpec **boundspecs, int nparts,\n PartitionKey key, int **mapping)\n{\n int i;\n\n Assert(nparts > 0);\n\nThanks & Regards,\nNitin Jadhav\nOn Sun, May 16, 2021 at 10:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n>\n>\n> On Sun, May 16, 2021 at 10:00 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>>\n>> On Sat, May 15, 2021 at 02:40:45PM +0530, Nitin Jadhav wrote:\n>> > While working on [1], I observed that extra memory is allocated in\n>> > [1] https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n>>\n>> This is a link to your gmail, not to anything public.\n>>\n>> If it's worth counting list elements in advance, then you can also allocate the\n>> PartitionListValue as a single chunk, rather than palloc in a loop.\n>> This may help large partition heirarchies.\n>>\n>> And the same thing in create_hash_bounds with hbounds.\n>>\n>> create_range_bounds() already doesn't call palloc in a loop. However, then\n>> there's an asymmetry in create_range_bounds, which is still takes a\n>> double-indirect pointer.\n>>\n>> I'm not able to detect that this is saving more than about ~1% less RAM, to\n>> create or select from 1000 partitions, probably because other data structures\n>> are using much more, and savings here are relatively small.\n>>\n>> I'm going to add to the next CF. 
You can add yourself as an author, and watch\n>> that the patch passes tests in cfbot.\n>> https://commitfest.postgresql.org/\n>> http://cfbot.cputube.org/\n>>\n>> Thanks,\n>> --\n>> Justin\n>\n> Hi,\n> For 0001-Removed-extra-memory-allocations-from-create_list_bo.patch :\n>\n> +static int\n> +get_non_null_count_list_bounds(PartitionBoundSpec **boundspecs, int nparts)\n>\n> Since the function returns the total number of non null bound values, should it be named get_non_null_list_bounds_count ?\n> It can also be named get_count_of_... but that's longer.\n>\n> + all_values = (PartitionListValue **)\n> + palloc(ndatums * sizeof(PartitionListValue *));\n>\n> The palloc() call would take place even if ndatums is 0. I think in that case, palloc() doesn't need to be called.\n>\n> Cheers\n>",
"msg_date": "Mon, 17 May 2021 20:22:25 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Mon, May 17, 2021 at 7:52 AM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > > While working on [1], I observed that extra memory is allocated in\n> > > [1]\n> https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n>\n> I am really sorry for this. I wanted to point to the thread subjected\n> to 'Multi-Column List Partitioning'.\n>\n> > If it's worth counting list elements in advance, then you can also\n> allocate the\n> > PartitionListValue as a single chunk, rather than palloc in a loop.\n> > This may help large partition heirarchies.\n> >\n> > And the same thing in create_hash_bounds with hbounds.\n>\n> I agree and thanks for creating those patches. I am not able to apply\n> the patch on the latest HEAD. Kindly check and upload the modified\n> patches.\n>\n> > I'm not able to detect that this is saving more than about ~1% less RAM,\n> to\n> > create or select from 1000 partitions, probably because other data\n> structures\n> > are using much more, and savings here are relatively small.\n>\n> Yes it does not save huge memory but it's an improvement.\n>\n> > I'm going to add to the next CF. You can add yourself as an author, and\n> watch\n> > that the patch passes tests in cfbot.\n> > https://commitfest.postgresql.org/\n> > http://cfbot.cputube.org/\n>\n> Thanks for creating the commitfest entry.\n>\n> > Since the function returns the total number of non null bound values,\n> should it be named get_non_null_list_bounds_count ?\n> > It can also be named get_count_of_... but that's longer.\n>\n> Changed it to 'get_non_null_list_bounds_count'.\n>\n> > The palloc() call would take place even if ndatums is 0. 
I think in that\n> case, palloc() doesn't need to be called.\n>\n> I feel there is no such case where the 'ndatums' is 0 because as we\n> can see below, there is an assert in the 'partition_bounds_create'\n> function from where we call the 'create_list_bounds' function. Kindly\n> provide such a case if I am wrong.\n>\n> PartitionBoundInfo\n> partition_bounds_create(PartitionBoundSpec **boundspecs, int nparts,\n> PartitionKey key, int **mapping)\n> {\n> int i;\n>\n> Assert(nparts > 0);\n>\n\nHi,\nThanks for pointing out the assertion.\nMy corresponding comment can be dropped.\n\nCheers\n\n\n>\n> Thanks & Regards,\n> Nitin Jadhav\n> On Sun, May 16, 2021 at 10:52 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >\n> >\n> >\n> > On Sun, May 16, 2021 at 10:00 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n> >>\n> >> On Sat, May 15, 2021 at 02:40:45PM +0530, Nitin Jadhav wrote:\n> >> > While working on [1], I observed that extra memory is allocated in\n> >> > [1]\n> https://mail.google.com/mail/u/2/#search/multi+column+list/KtbxLxgZZTjRxNrBWvmHzDTHXCHLssSprg?compose=CllgCHrjDqKgWCBNMmLqhzKhmrvHhSRlRVZxPCVcLkLmFQwrccpTpqLNgbWqKkTkTFCHMtZjWnV\n> >>\n> >> This is a link to your gmail, not to anything public.\n> >>\n> >> If it's worth counting list elements in advance, then you can also\n> allocate the\n> >> PartitionListValue as a single chunk, rather than palloc in a loop.\n> >> This may help large partition heirarchies.\n> >>\n> >> And the same thing in create_hash_bounds with hbounds.\n> >>\n> >> create_range_bounds() already doesn't call palloc in a loop. However,\n> then\n> >> there's an asymmetry in create_range_bounds, which is still takes a\n> >> double-indirect pointer.\n> >>\n> >> I'm not able to detect that this is saving more than about ~1% less\n> RAM, to\n> >> create or select from 1000 partitions, probably because other data\n> structures\n> >> are using much more, and savings here are relatively small.\n> >>\n> >> I'm going to add to the next CF. 
You can add yourself as an author,\n> and watch\n> >> that the patch passes tests in cfbot.\n> >> https://commitfest.postgresql.org/\n> >> http://cfbot.cputube.org/\n> >>\n> >> Thanks,\n> >> --\n> >> Justin\n> >\n> > Hi,\n> > For 0001-Removed-extra-memory-allocations-from-create_list_bo.patch :\n> >\n> > +static int\n> > +get_non_null_count_list_bounds(PartitionBoundSpec **boundspecs, int\n> nparts)\n> >\n> > Since the function returns the total number of non null bound values,\n> should it be named get_non_null_list_bounds_count ?\n> > It can also be named get_count_of_... but that's longer.\n> >\n> > + all_values = (PartitionListValue **)\n> > + palloc(ndatums * sizeof(PartitionListValue *));\n> >\n> > The palloc() call would take place even if ndatums is 0. I think in that\n> case, palloc() doesn't need to be called.\n> >\n> > Cheers\n> >\n>",
"msg_date": "Mon, 17 May 2021 08:00:40 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Mon, May 17, 2021 at 08:22:25PM +0530, Nitin Jadhav wrote:\n> I agree and thanks for creating those patches. I am not able to apply\n> the patch on the latest HEAD. Kindly check and upload the modified\n> patches.\n\nThe CFBOT had no issues with the patches, so I suspect an issue on your side.\nhttp://cfbot.cputube.org/nitin-jadhav.html\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 17 May 2021 10:03:10 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> The CFBOT had no issues with the patches, so I suspect an issue on your side.\n> http://cfbot.cputube.org/nitin-jadhav.html\n\nI am getting the following error when I try to apply in my machine.\n\n$ git apply ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:18:\ntrailing whitespace.\n/*\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:19:\ntrailing whitespace.\n * get_non_null_count_list_bounds\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:20:\ntrailing whitespace.\n * Calculates the total number of non null bound values of\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:21:\ntrailing whitespace.\n * all the partitions.\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:22:\ntrailing whitespace.\n */\nerror: patch failed: src/backend/partitioning/partbounds.c:432\nerror: src/backend/partitioning/partbounds.c: patch does not apply\n\nHowever I was able to apply it by adding '--reject --whitespace=fix'.\n\n$ git apply --reject --whitespace=fix\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:18:\ntrailing whitespace.\n/*\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:19:\ntrailing whitespace.\n * get_non_null_count_list_bounds\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:20:\ntrailing whitespace.\n * Calculates the total number of non null bound values of\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:21:\ntrailing whitespace.\n * all the partitions.\n../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:22:\ntrailing whitespace.\n */\nChecking patch src/backend/partitioning/partbounds.c...\nApplied patch 
src/backend/partitioning/partbounds.c cleanly.\nwarning: squelched 30 whitespace errors\nwarning: 35 lines add whitespace errors.\n\nI have rebased all the patches on top of\n'v2_0001-removed_extra_mem_alloc_from_create_list_bounds.patch'.\nAttaching all the patches here.\n\n--\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, May 17, 2021 at 8:33 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, May 17, 2021 at 08:22:25PM +0530, Nitin Jadhav wrote:\n> > I agree and thanks for creating those patches. I am not able to apply\n> > the patch on the latest HEAD. Kindly check and upload the modified\n> > patches.\n>\n> The CFBOT had no issues with the patches, so I suspect an issue on your side.\n> http://cfbot.cputube.org/nitin-jadhav.html\n>\n> --\n> Justin",
"msg_date": "Tue, 18 May 2021 22:58:41 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Tue, May 18, 2021 at 1:29 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> > The CFBOT had no issues with the patches, so I suspect an issue on your side.\n> > http://cfbot.cputube.org/nitin-jadhav.html\n>\n> I am getting the following error when I try to apply in my machine.\n>\n> $ git apply ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch\n> ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:18:\n> trailing whitespace.\n\n'git apply' is very picky. Use 'patch -p1' to apply your patches instead.\n\nAlso, use 'git diff --check' or 'git log --check' before generating\npatches to send, and fix any whitespace errors before submitting.\n\nI see that you have made a theoretical argument for why this should be\ngood for performance, but it would be better to have some test results\nthat show that it works out in practice. Sometimes things seem like\nthey ought to be more efficient but turn out to be less efficient when\nthey are actually tried.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 May 2021 13:49:12 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> 'git apply' is very picky. Use 'patch -p1' to apply your patches instead.\n>\n> Also, use 'git diff --check' or 'git log --check' before generating\n> patches to send, and fix any whitespace errors before submitting.\n\nThanks for the suggestion. I will follow these.\n\n> I see that you have made a theoretical argument for why this should be\n> good for performance, but it would be better to have some test results\n> that show that it works out in practice. Sometimes things seem like\n> they ought to be more efficient but turn out to be less efficient when\n> they are actually tried.\n\nCreated a table with one column of type 'int' and partitioned by that\ncolumn. Created 1 million partitions using following queries.\n\ncreate table t(a int) partition by list(a);\nselect 'create table t_' || i || ' partition of t for\nvalues in (' || i || ');'\nfrom generate_series(1, 10000) i\n\\gexec\n\nAfter this I tried to create 10 partitions and observed the time taken\nto execute. 
Here is the data for 'with patch'.\n\npostgres@34077=#select 'create table t_' || i || ' partition of t for\npostgres'# values in (' || i || ');'\npostgres-# from generate_series(10001, 10010) i\npostgres-# \\gexec\nCREATE TABLE\nTime: 36.863 ms\nCREATE TABLE\nTime: 46.645 ms\nCREATE TABLE\nTime: 44.915 ms\nCREATE TABLE\nTime: 39.660 ms\nCREATE TABLE\nTime: 42.188 ms\nCREATE TABLE\nTime: 43.163 ms\nCREATE TABLE\nTime: 44.374 ms\nCREATE TABLE\nTime: 45.117 ms\nCREATE TABLE\nTime: 40.340 ms\nCREATE TABLE\nTime: 38.604 ms\n\nThe data for 'without patch' looks like this.\n\npostgres@31718=#select 'create table t_' || i || ' partition of t for\npostgres'# values in (' || i || ');'\npostgres-# from generate_series(10001, 10010) i\npostgres-# \\gexec\nCREATE TABLE\nTime: 45.917 ms\nCREATE TABLE\nTime: 46.815 ms\nCREATE TABLE\nTime: 44.180 ms\nCREATE TABLE\nTime: 48.163 ms\nCREATE TABLE\nTime: 45.884 ms\nCREATE TABLE\nTime: 48.330 ms\nCREATE TABLE\nTime: 48.614 ms\nCREATE TABLE\nTime: 48.376 ms\nCREATE TABLE\nTime: 46.380 ms\nCREATE TABLE\nTime: 48.233 ms\n\nIf we observe the above data, we can see the improvement with the patch.\nThe average time taken to create the last 10 partitions is:\nWith patch - 42.1869 ms\nWithout patch - 47.0892 ms\n\nWith respect to memory usage, AFAIK the allocated memory gets cleaned up\nwhen the memory context is deallocated. So at\nthe end of the query, we may see no difference in memory usage, but\nduring query execution it requests less memory than before.\nIn some worst-case scenario, if there is less memory\navailable, we may see 'out of memory' errors without the patch but it\nmay work with the patch. I have not done experiments along these lines.
I\nam happy to do it if required.\n\nPlease share your thoughts.\n\n--\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, May 18, 2021 at 11:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 1:29 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > > The CFBOT had no issues with the patches, so I suspect an issue on your side.\n> > > http://cfbot.cputube.org/nitin-jadhav.html\n> >\n> > I am getting the following error when I try to apply in my machine.\n> >\n> > $ git apply ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch\n> > ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:18:\n> > trailing whitespace.\n>\n> 'git apply' is very picky. Use 'patch -p1' to apply your patches instead.\n>\n> Also, use 'git diff --check' or 'git log --check' before generating\n> patches to send, and fix any whitespace errors before submitting.\n>\n> I see that you have made a theoretical argument for why this should be\n> good for performance, but it would be better to have some test results\n> that show that it works out in practice. Sometimes things seem like\n> they ought to be more efficient but turn out to be less efficient when\n> they are actually tried.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 00:21:19 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> Created a table with one column of type 'int' and partitioned by that\n> column. Created 1 million partitions using following queries.\n\nSorry. It's not 1 million. Its 10,000 partitions.\n\n--\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, May 20, 2021 at 12:21 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > 'git apply' is very picky. Use 'patch -p1' to apply your patches instead.\n> >\n> > Also, use 'git diff --check' or 'git log --check' before generating\n> > patches to send, and fix any whitespace errors before submitting.\n>\n> Thanks for the suggestion. I will follow these.\n>\n> > I see that you have made a theoretical argument for why this should be\n> > good for performance, but it would be better to have some test results\n> > that show that it works out in practice. Sometimes things seem like\n> > they ought to be more efficient but turn out to be less efficient when\n> > they are actually tried.\n>\n> Created a table with one column of type 'int' and partitioned by that\n> column. Created 1 million partitions using following queries.\n>\n> create table t(a int) partition by list(a);\n> select 'create table t_' || i || ' partition of t for\n> values in (' || i || ');'\n> from generate_series(1, 10000) i\n> \\gexec\n>\n> After this I tried to create 10 partitions and observed the time taken\n> to execute. 
Here is the data for 'with patch'.\n>\n> postgres@34077=#select 'create table t_' || i || ' partition of t for\n> postgres'# values in (' || i || ');'\n> postgres-# from generate_series(10001, 10010) i\n> postgres-# \\gexec\n> CREATE TABLE\n> Time: 36.863 ms\n> CREATE TABLE\n> Time: 46.645 ms\n> CREATE TABLE\n> Time: 44.915 ms\n> CREATE TABLE\n> Time: 39.660 ms\n> CREATE TABLE\n> Time: 42.188 ms\n> CREATE TABLE\n> Time: 43.163 ms\n> CREATE TABLE\n> Time: 44.374 ms\n> CREATE TABLE\n> Time: 45.117 ms\n> CREATE TABLE\n> Time: 40.340 ms\n> CREATE TABLE\n> Time: 38.604 ms\n>\n> The data for 'without patch' looks like this.\n>\n> postgres@31718=#select 'create table t_' || i || ' partition of t for\n> postgres'# values in (' || i || ');'\n> postgres-# from generate_series(10001, 10010) i\n> postgres-# \\gexec\n> CREATE TABLE\n> Time: 45.917 ms\n> CREATE TABLE\n> Time: 46.815 ms\n> CREATE TABLE\n> Time: 44.180 ms\n> CREATE TABLE\n> Time: 48.163 ms\n> CREATE TABLE\n> Time: 45.884 ms\n> CREATE TABLE\n> Time: 48.330 ms\n> CREATE TABLE\n> Time: 48.614 ms\n> CREATE TABLE\n> Time: 48.376 ms\n> CREATE TABLE\n> Time: 46.380 ms\n> CREATE TABLE\n> Time: 48.233 ms\n>\n> If we observe above data, we can see the improvement with the patch.\n> The average time taken to execute for the last 10 partitions is.\n> With patch - 42.1869 ms\n> Without patch - 47.0892 ms.\n>\n> With respect to memory usage, AFAIK the allocated memory gets cleaned\n> during deallocation of the memory used by the memory context. So at\n> the end of the query, we may see no difference in the memory usage but\n> during the query execution it tries to get less memory than before.\n> Maybe during some worst case scenario, if there is less memory\n> available, we may see 'out of memory' errors without the patch but it\n> may work with the patch. I have not done experiments in this angle. 
I\n> am happy to do it if required.\n>\n> Please share your thoughts.\n>\n> --\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Tue, May 18, 2021 at 11:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, May 18, 2021 at 1:29 PM Nitin Jadhav\n> > <nitinjadhavpostgres@gmail.com> wrote:\n> > > > The CFBOT had no issues with the patches, so I suspect an issue on your side.\n> > > > http://cfbot.cputube.org/nitin-jadhav.html\n> > >\n> > > I am getting the following error when I try to apply in my machine.\n> > >\n> > > $ git apply ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch\n> > > ../patches/0001-Removed-extra-memory-allocations-from-create_list_bo.patch:18:\n> > > trailing whitespace.\n> >\n> > 'git apply' is very picky. Use 'patch -p1' to apply your patches instead.\n> >\n> > Also, use 'git diff --check' or 'git log --check' before generating\n> > patches to send, and fix any whitespace errors before submitting.\n> >\n> > I see that you have made a theoretical argument for why this should be\n> > good for performance, but it would be better to have some test results\n> > that show that it works out in practice. Sometimes things seem like\n> > they ought to be more efficient but turn out to be less efficient when\n> > they are actually tried.\n> >\n> > --\n> > Robert Haas\n> > EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 00:40:44 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Tue, May 18, 2021 at 01:49:12PM -0400, Robert Haas wrote:\n> I see that you have made a theoretical argument for why this should be\n> good for performance, but it would be better to have some test results\n> that show that it works out in practice. Sometimes things seem like\n> they ought to be more efficient but turn out to be less efficient when\n> they are actually tried.\n\nI see this as a code cleanup more than an performance optimization.\nI couldn't see a measurable difference in my tests, involving CREATE TABLE and\nSELECT.\n\nI think some of my patches could *increase* memory use, due to power-of-two\nallocation logic. I think it's still a good idea, since it doesn't seem to be\nthe dominant memory allocation.\n\nOn Thu, May 20, 2021 at 12:21:19AM +0530, Nitin Jadhav wrote:\n> > I see that you have made a theoretical argument for why this should be\n> > good for performance, but it would be better to have some test results\n> > that show that it works out in practice. Sometimes things seem like\n> > they ought to be more efficient but turn out to be less efficient when\n> > they are actually tried.\n> \n> After this I tried to create 10 partitions and observed the time taken\n> to execute. 
Here is the data for 'with patch'.\n> \n> postgres@34077=#select 'create table t_' || i || ' partition of t for\n> postgres'# values in (' || i || ');'\n> postgres-# from generate_series(10001, 10010) i\n> postgres-# \\gexec\n\nI think you should be sure to do this within a transaction, without cassert,\nand maybe with fsync=off full_page_writes=off.\n\n> If we observe above data, we can see the improvement with the patch.\n> The average time taken to execute for the last 10 partitions is.\n\nIt'd be interesting to know which patch(es) contributed to the improvement.\nIt's possible that (say) patch 0001 alone gives all the gain, or that 0002\ndiminishes the gains.\n\nI think there'll be an interest in committing the smallest possible patch to\nrealize the gains, to minimize code churn an unrelated changes.\n\nLIST and RANGE might need to be checked separately..\n\n> With respect to memory usage, AFAIK the allocated memory gets cleaned\n> during deallocation of the memory used by the memory context. So at\n> the end of the query, we may see no difference in the memory usage but\n> during the query execution it tries to get less memory than before.\n\nYou can check MAXRSS (at least on linux) if you enable log_executor_stats,\nlike:\n\n\\set QUIET \\\\ SET client_min_messages=debug; SET log_executor_stats=on; DROP TABLE p; CREATE TABLE p(i int) PARTITION BY LIST(i); CREATE TABLE pd PARTITION OF p DEFAULT;\nSELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES IN (%s)', a,a) FROM generate_series(1,999)a;\\gexec \\\\ SELECT;\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 19 May 2021 14:16:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> I see this as a code cleanup more than an performance optimization.\n\nI agree with this. This is like a code cleanup but it also improves\nperformance.\nI have done the performance testing, Just to confirm whether it really\nimproves\nperformance.\n\n> I think some of my patches could *increase* memory use, due to\npower-of-two\n> allocation logic. I think it's still a good idea, since it doesn't seem\nto be\n> the dominant memory allocation.\n\nI don't think that it will increase performance rather it adds to the\nimprovement.\n\n> I think you should be sure to do this within a transaction, without\ncassert,\n> and maybe with fsync=off full_page_writes=off\n\nThanks for sharing this. I have done the above settings and collected the\nbelow data.\n\n> It'd be interesting to know which patch(es) contributed to the\nimprovement.\n> It's possible that (say) patch 0001 alone gives all the gain, or that 0002\n> diminishes the gains.\n>\n> I think there'll be an interest in committing the smallest possible patch\nto\n> realize the gains, to minimize code churn an unrelated changes.\n\nIn order to answer the above points, I have divided the patches into 2 sets.\n1. Only 0001 and 0002 - These are related to list partitioning and do\nnot contain\nchanges related power-of-two allocation logic.\n2. 
This contains all the 5 patches.\n\nI have used the same testing procedure as explained in the previous mail.\nPlease find the timing information of the last 10 creation of partitioned\ntables as given below.\n\nWithout patch With 0001 and 0002 With all patch\n17.105 14.722 13.878\n15.897 14.427 13.493\n15.991 15.424 14.751\n17.965 16.487 19.491\n19.704 19.042 21.278\n18.98 18.949 18.123\n18.986 21.713 17.585\n21.273 20.596 19.623\n18.839 18.521 17.605\n20.724 18.774 19.242\n18.5464 17.8655 17.5069\nAs we can see in the above data, there is an improvement with both of the\npatch sets.\n\n> You can check MAXRSS (at least on linux) if you enable log_executor_stats,\n> like:\n>\n> \\set QUIET \\\\ SET client_min_messages=debug; SET log_executor_stats=on;\nDROP TABLE p; CREATE TABLE p(i int) PARTITION BY LIST(i); CREATE TABLE pd\nPARTITION OF p DEFAULT;\n> SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES IN (%s)', a,a)\nFROM generate_series(1,999)a;\\gexec \\\\ SELECT;\n\nThanks a lot for sharing this information. It was really helpful.\nI have collected the stat information for creation of 1000\npartitions. Please find the stat information for the 'without patch'\ncase below.\n\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000012 s user, 0.000000 s system, 0.000011 s elapsed\n! [71.599426 s user, 4.362552 s system total]\n! 63872 kB max resident size\n! 0/0 [0/231096] filesystem blocks in/out\n! 0/0 [0/2388074] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [9982/6709] voluntary/involuntary context switches\n\nPlease find the stat information for 'with patch (all 5 patches)' case\nbelow.\n\nLOG: EXECUTOR STATISTICS\nDETAIL: ! system usage stats:\n! 0.000018 s user, 0.000002 s system, 0.000013 s elapsed\n! [73.529715 s user, 4.219172 s system total]\n! 63152 kB max resident size\n! 0/0 [0/204840] filesystem blocks in/out\n! 0/0 [0/2066377] page faults/reclaims, 0 [0] swaps\n! 
0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [9956/4129] voluntary/involuntary context switches\n\nPlease share your thoughts.\n\n--\nThanks & Regards,\nNitin Jadhav\n\n\n\nOn Thu, May 20, 2021 at 12:47 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Tue, May 18, 2021 at 01:49:12PM -0400, Robert Haas wrote:\n> > I see that you have made a theoretical argument for why this should be\n> > good for performance, but it would be better to have some test results\n> > that show that it works out in practice. Sometimes things seem like\n> > they ought to be more efficient but turn out to be less efficient when\n> > they are actually tried.\n>\n> I see this as a code cleanup more than an performance optimization.\n> I couldn't see a measurable difference in my tests, involving CREATE TABLE\n> and\n> SELECT.\n>\n> I think some of my patches could *increase* memory use, due to power-of-two\n> allocation logic. I think it's still a good idea, since it doesn't seem\n> to be\n> the dominant memory allocation.\n>\n> On Thu, May 20, 2021 at 12:21:19AM +0530, Nitin Jadhav wrote:\n> > > I see that you have made a theoretical argument for why this should be\n> > > good for performance, but it would be better to have some test results\n> > > that show that it works out in practice. Sometimes things seem like\n> > > they ought to be more efficient but turn out to be less efficient when\n> > > they are actually tried.\n> >\n> > After this I tried to create 10 partitions and observed the time taken\n> > to execute. 
Here is the data for 'with patch'.\n> >\n> > postgres@34077=#select 'create table t_' || i || ' partition of t for\n> > postgres'# values in (' || i || ');'\n> > postgres-# from generate_series(10001, 10010) i\n> > postgres-# \\gexec\n>\n> I think you should be sure to do this within a transaction, without\n> cassert,\n> and maybe with fsync=off full_page_writes=off.\n>\n> > If we observe above data, we can see the improvement with the patch.\n> > The average time taken to execute for the last 10 partitions is.\n>\n> It'd be interesting to know which patch(es) contributed to the improvement.\n> It's possible that (say) patch 0001 alone gives all the gain, or that 0002\n> diminishes the gains.\n>\n> I think there'll be an interest in committing the smallest possible patch\n> to\n> realize the gains, to minimize code churn an unrelated changes.\n>\n> LIST and RANGE might need to be checked separately..\n>\n> > With respect to memory usage, AFAIK the allocated memory gets cleaned\n> > during deallocation of the memory used by the memory context. So at\n> > the end of the query, we may see no difference in the memory usage but\n> > during the query execution it tries to get less memory than before.\n>\n> You can check MAXRSS (at least on linux) if you enable log_executor_stats,\n> like:\n>\n> \\set QUIET \\\\ SET client_min_messages=debug; SET log_executor_stats=on;\n> DROP TABLE p; CREATE TABLE p(i int) PARTITION BY LIST(i); CREATE TABLE pd\n> PARTITION OF p DEFAULT;\n> SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES IN (%s)', a,a)\n> FROM generate_series(1,999)a;\\gexec \\\\ SELECT;\n>\n> --\n> Justin\n>\n",
"msg_date": "Sun, 23 May 2021 22:40:16 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> > I think some of my patches could *increase* memory use, due to\npower-of-two\n> > allocation logic. I think it's still a good idea, since it doesn't\nseem to be\n> > the dominant memory allocation.\n>\n> I don't think that it will increase performance rather it adds to the\nimprovement.\n\nSorry. Kindly ignore the above comment. I had misunderstood the statement.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Sun, May 23, 2021 at 10:40 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > I see this as a code cleanup more than an performance optimization.\n>\n> I agree with this. This is like a code cleanup but it also improves\n> performance.\n> I have done the performance testing, Just to confirm whether it really\n> improves\n> performance.\n>\n> > I think some of my patches could *increase* memory use, due to\n> power-of-two\n> > allocation logic. I think it's still a good idea, since it doesn't seem\n> to be\n> > the dominant memory allocation.\n>\n> I don't think that it will increase performance rather it adds to the\n> improvement.\n>\n> > I think you should be sure to do this within a transaction, without\n> cassert,\n> > and maybe with fsync=off full_page_writes=off\n>\n> Thanks for sharing this. I have done the above settings and collected the\n> below data.\n>\n> > It'd be interesting to know which patch(es) contributed to the\n> improvement.\n> > It's possible that (say) patch 0001 alone gives all the gain, or that\n> 0002\n> > diminishes the gains.\n> >\n> > I think there'll be an interest in committing the smallest possible\n> patch to\n> > realize the gains, to minimize code churn an unrelated changes.\n>\n> In order to answer the above points, I have divided the patches into 2\n> sets.\n> 1. Only 0001 and 0002 - These are related to list partitioning and do\n> not contain\n> changes related power-of-two allocation logic.\n> 2. 
This contains all the 5 patches.\n>\n> I have used the same testing procedure as explained in the previous mail.\n> Please find the timing information of the last 10 creation of partitioned\n> tables as given below.\n>\n> Without patch With 0001 and 0002 With all patch\n> 17.105 14.722 13.878\n> 15.897 14.427 13.493\n> 15.991 15.424 14.751\n> 17.965 16.487 19.491\n> 19.704 19.042 21.278\n> 18.98 18.949 18.123\n> 18.986 21.713 17.585\n> 21.273 20.596 19.623\n> 18.839 18.521 17.605\n> 20.724 18.774 19.242\n> 18.5464 17.8655 17.5069\n> As we can see in the above data, there is an improvement with both of the\n> patch sets.\n>\n> > You can check MAXRSS (at least on linux) if you enable\n> log_executor_stats,\n> > like:\n> >\n> > \\set QUIET \\\\ SET client_min_messages=debug; SET log_executor_stats=on;\n> DROP TABLE p; CREATE TABLE p(i int) PARTITION BY LIST(i); CREATE TABLE pd\n> PARTITION OF p DEFAULT;\n> > SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES IN (%s)', a,a)\n> FROM generate_series(1,999)a;\\gexec \\\\ SELECT;\n>\n> Thanks a lot for sharing this information. It was really helpful.\n> I have collected the stat information for creation of 1000\n> partitions. Please find the stat information for the 'without patch'\n> case below.\n>\n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000012 s user, 0.000000 s system, 0.000011 s elapsed\n> ! [71.599426 s user, 4.362552 s system total]\n> ! 63872 kB max resident size\n> ! 0/0 [0/231096] filesystem blocks in/out\n> ! 0/0 [0/2388074] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [9982/6709] voluntary/involuntary context switches\n>\n> Please find the stat information for 'with patch (all 5 patches)' case\n> below.\n>\n> LOG: EXECUTOR STATISTICS\n> DETAIL: ! system usage stats:\n> ! 0.000018 s user, 0.000002 s system, 0.000013 s elapsed\n> ! [73.529715 s user, 4.219172 s system total]\n> ! 63152 kB max resident size\n> ! 
0/0 [0/204840] filesystem blocks in/out\n> ! 0/0 [0/2066377] page faults/reclaims, 0 [0] swaps\n> ! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n> ! 0/0 [9956/4129] voluntary/involuntary context switches\n>\n> Please share your thoughts.\n>\n> --\n> Thanks & Regards,\n> Nitin Jadhav\n>\n>\n>\n> On Thu, May 20, 2021 at 12:47 AM Justin Pryzby <pryzby@telsasoft.com>\n> wrote:\n>\n>> On Tue, May 18, 2021 at 01:49:12PM -0400, Robert Haas wrote:\n>> > I see that you have made a theoretical argument for why this should be\n>> > good for performance, but it would be better to have some test results\n>> > that show that it works out in practice. Sometimes things seem like\n>> > they ought to be more efficient but turn out to be less efficient when\n>> > they are actually tried.\n>>\n>> I see this as a code cleanup more than an performance optimization.\n>> I couldn't see a measurable difference in my tests, involving CREATE\n>> TABLE and\n>> SELECT.\n>>\n>> I think some of my patches could *increase* memory use, due to\n>> power-of-two\n>> allocation logic. I think it's still a good idea, since it doesn't seem\n>> to be\n>> the dominant memory allocation.\n>>\n>> On Thu, May 20, 2021 at 12:21:19AM +0530, Nitin Jadhav wrote:\n>> > > I see that you have made a theoretical argument for why this should be\n>> > > good for performance, but it would be better to have some test results\n>> > > that show that it works out in practice. Sometimes things seem like\n>> > > they ought to be more efficient but turn out to be less efficient when\n>> > > they are actually tried.\n>> >\n>> > After this I tried to create 10 partitions and observed the time taken\n>> > to execute. 
Here is the data for 'with patch'.\n>> >\n>> > postgres@34077=#select 'create table t_' || i || ' partition of t for\n>> > postgres'# values in (' || i || ');'\n>> > postgres-# from generate_series(10001, 10010) i\n>> > postgres-# \\gexec\n>>\n>> I think you should be sure to do this within a transaction, without\n>> cassert,\n>> and maybe with fsync=off full_page_writes=off.\n>>\n>> > If we observe above data, we can see the improvement with the patch.\n>> > The average time taken to execute for the last 10 partitions is.\n>>\n>> It'd be interesting to know which patch(es) contributed to the\n>> improvement.\n>> It's possible that (say) patch 0001 alone gives all the gain, or that 0002\n>> diminishes the gains.\n>>\n>> I think there'll be an interest in committing the smallest possible patch\n>> to\n>> realize the gains, to minimize code churn an unrelated changes.\n>>\n>> LIST and RANGE might need to be checked separately..\n>>\n>> > With respect to memory usage, AFAIK the allocated memory gets cleaned\n>> > during deallocation of the memory used by the memory context. So at\n>> > the end of the query, we may see no difference in the memory usage but\n>> > during the query execution it tries to get less memory than before.\n>>\n>> You can check MAXRSS (at least on linux) if you enable log_executor_stats,\n>> like:\n>>\n>> \\set QUIET \\\\ SET client_min_messages=debug; SET log_executor_stats=on;\n>> DROP TABLE p; CREATE TABLE p(i int) PARTITION BY LIST(i); CREATE TABLE pd\n>> PARTITION OF p DEFAULT;\n>> SELECT format('CREATE TABLE p%s PARTITION OF p FOR VALUES IN (%s)', a,a)\n>> FROM generate_series(1,999)a;\\gexec \\\\ SELECT;\n>>\n>> --\n>> Justin\n>>\n>\n",
"msg_date": "Sun, 23 May 2021 22:44:03 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Sun, May 23, 2021 at 10:40:16PM +0530, Nitin Jadhav wrote:\n> I have used the same testing procedure as explained in the previous mail.\n> Please find the timing information of the last 10 creation of partitioned\n> tables as given below.\n\n> Without patch With 0001 and 0002 With all patch\n...\n> 18.5464 17.8655 17.5069\n\nFor anyone reading non-HTML email, the last line shows the averages of the\nprevious 10 lines.\n\n>> LIST and RANGE might need to be checked separately..\n\nYou checked LIST but not HASH (patches 3-4) or RANGE (patch 4-5), right?\n\nAnother test is to show the time/memory used by SELECT. That's far more\nimportant than DDL, but I think the same results would apply here, so I think\nit's not needed to test each of LIST/RANGE/HASH, nor to test every combination\nof patches. Mostly it's nice to see if the memory use is more visibly\ndifferent, or if there's an impressive improvement for this case.\n\nNote that for the MAXRSS test, you must a different postgres backend process\nfor each of the tests (or else each test would never show a lower number than\nthe previous test).\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Sun, 23 May 2021 12:46:08 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "> You checked LIST but not HASH (patches 3-4) or RANGE (patch 4-5), right?\n\nYes. I did not check about HASH and RANGE partitioning related patches\nas the changes are mostly similar to the list partitioning related\nchanges.\n\n> Another test is to show the time/memory used by SELECT. That's far more\n> important than DDL, but I think the same results would apply here, so I think\n> it's not needed to test each of LIST/RANGE/HASH, nor to test every combination\n> of patches.\n\nYes. I also feel that the same result would apply there as well.\n\n> Note that for the MAXRSS test, you must a different postgres backend process\n> for each of the tests (or else each test would never show a lower number than\n> the previous test).\n\nI have used different backend processes for each of the tests.\n\n> Mostly it's nice to see if the memory use is more visibly\n> different, or if there's an impressive improvement for this case.\n\nI did not get this point. Kindly explain for which scenario the memory\nusage test has to be done.\n\nThanks & Regards,\nNitin Jadhav\n\n\nOn Sun, May 23, 2021 at 11:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sun, May 23, 2021 at 10:40:16PM +0530, Nitin Jadhav wrote:\n> > I have used the same testing procedure as explained in the previous mail.\n> > Please find the timing information of the last 10 creation of partitioned\n> > tables as given below.\n>\n> > Without patch With 0001 and 0002 With all patch\n> ...\n> > 18.5464 17.8655 17.5069\n>\n> For anyone reading non-HTML email, the last line shows the averages of the\n> previous 10 lines.\n>\n> >> LIST and RANGE might need to be checked separately..\n>\n> You checked LIST but not HASH (patches 3-4) or RANGE (patch 4-5), right?\n>\n> Another test is to show the time/memory used by SELECT. 
That's far more\n> important than DDL, but I think the same results would apply here, so I think\n> it's not needed to test each of LIST/RANGE/HASH, nor to test every combination\n> of patches. Mostly it's nice to see if the memory use is more visibly\n> different, or if there's an impressive improvement for this case.\n>\n> Note that for the MAXRSS test, you must a different postgres backend process\n> for each of the tests (or else each test would never show a lower number than\n> the previous test).\n>\n> Thanks,\n> --\n> Justin\n\n\n",
"msg_date": "Mon, 24 May 2021 20:12:26 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Wed, 19 May 2021 at 05:28, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> I have rebased all the patches on top of\n> 'v2_0001-removed_extra_mem_alloc_from_create_list_bounds.patch'.\n> Attaching all the patches here.\n\nI had a look over these and I think what's being done here is fine.\n\nI think this will help speed up building the partition bound.\nUnfortunately, it won't help any for speeding up things like finding\nthe correct partition during SELECT or DML on partitioned tables.\nThe reason for this is that RelationBuildPartitionDesc first builds\nthe partition bound using the functions being modified here, but it\nthen copies it into the relcache into a memory context for the\npartition using partition_bounds_copy(). It looks like\npartition_bounds_copy() suffers from the same palloc explosion type\nproblem as is being fixed in each of the create_*_bounds() functions\nhere. The good news is, we can just give partition_bounds_copy() the\nsame treatment. 0004 does that.\n\nI tried to see if the better cache locality of the\nPartitionBoundInfo's datum array would help speed up inserts into a\npartitioned table. I figured a fairly small binary search in a LIST\npartitioned table of 10 partitions might have all Datum visited all on\nthe same cache line. However, I was unable to see any performance\ngains. I think the other work being done is likely just going to drown\nout any gains in cache efficiency in the binary search. COPY into a\npartitioned table might have more chance of becoming a little faster,\nbut I didn't try.\n\nI've attached another set of patches. I squashed all the changes to\neach create_*_bounds function into a patch of their own. Perhaps 0002\nand 0003 which handle range and hash partitioning can be one patch\nsince Justin seemed to write that one. I kept 0001 separate since\nthat's Nitin's patch plus Justin's extra parts. It seems easier to\ncredit properly having the patches broken out like this. 
I think it's\nexcessive to break down 0001 into Nitin and Justin's individual parts.\n\nI did make a few adjustments to the patches renaming a variable or two\nand I changed how we assign the boundinfo->datums[i] pointers to take\nthe address of the Nth element rather than incrementing the variable\npointing to the array for each item. I personally like p = &array[i];\nmore than p = array; array++, others may not feel the same.\n\nNitin and Justin, are you both able to have another look over these\nand let me know what you think? If all is well I'd like to push all 4\npatches.\n\nDavid",
"msg_date": "Tue, 6 Jul 2021 01:48:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Tue, Jul 06, 2021 at 01:48:52AM +1200, David Rowley wrote:\n> On Wed, 19 May 2021 at 05:28, Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote:\n> > I have rebased all the patches on top of\n> > 'v2_0001-removed_extra_mem_alloc_from_create_list_bounds.patch'.\n> > Attaching all the patches here.\n> \n> I had a look over these and I think what's being done here is fine.\n\nThanks for loooking.\n\n0001 is missing a newline before create_list_bounds()\n\n0003 is missing pfree(all_bounds), which I had as 0005.\nIt 1) allocates all_bounds; 2) allocates rbounds; 3) copies all_bounds into\nrbounds; 4) allocates boundDatums; 5) copies rbounds into boundDatums; 6) frees\nrbounds; 7) returns boundInfo with boundinfo->datums.\n\n> The good news is, we can just give partition_bounds_copy() the same\n> treatment. 0004 does that.\n\n+1\n\n> I've attached another set of patches. I squashed all the changes to\n> each create_*_bounds function into a patch of their own. Perhaps 0002\n> and 0003 which handle range and hash partitioning can be one patch\n> since Justin seemed to write that one. I kept 0001 separate since\n> that's Nitin's patch plus Justin's extra parts. It seems easier to\n> credit properly having the patches broken out like this. I think it's\n> excessive to break down 0001 into Nitin and Justin's individual parts.\n\nIf you wanted to further squish the patches together, I don't mind being a\nco-author.\n\nCheers,\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Jul 2021 11:45:05 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "Also, if you're going to remove the initializations here, maybe you'd also\nchange i and j to C99 \"for\" declarations like \"for (int i=0, j=0; ...)\"\n\n- PartitionListValue **all_values = NULL;\n- ListCell *cell;\n- int i = 0;\n- int ndatums = 0;\n+ PartitionListValue *all_values;\n+ int i;\n+ int j;\n+ int ndatums;\n\nSame in get_non_null_list_datum_count()\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 5 Jul 2021 12:03:33 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Tue, 6 Jul 2021 at 04:45, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> If you wanted to further squish the patches together, I don't mind being a\n> co-author.\n\nThanks for looking at the patches.\n\nI fixed the couple of things that you mentioned and pushed all 4\npatches as a single commit (53d86957e)\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Jul 2021 12:26:19 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
},
{
"msg_contents": "On Tue, 6 Jul 2021 at 05:03, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Also, if you're going to remove the initializations here, maybe you'd also\n> change i and j to C99 \"for\" declarations like \"for (int i=0, j=0; ...)\"\n>\n> - PartitionListValue **all_values = NULL;\n> - ListCell *cell;\n> - int i = 0;\n> - int ndatums = 0;\n> + PartitionListValue *all_values;\n> + int i;\n> + int j;\n> + int ndatums;\n>\n> Same in get_non_null_list_datum_count()\n\nI tend to only get motivated to use that for new code that does not\nexist in back-branches. I'll maybe stop doing that when we no longer\nhave to support the pre-C99 versions of the code.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Jul 2021 12:27:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removed extra memory allocations from create_list_bounds"
}
] |
[
{
"msg_contents": "I think there is a typo in src/backend/storage/lmgr/README.barrier.\nAttached patch should fix it.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 16 May 2021 21:11:33 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Typo in README.barrier"
},
{
"msg_contents": "On Mon, 17 May 2021 at 00:11, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> I think there is a typo in src/backend/storage/lmgr/README.barrier.\n> Attached patch should fix it.\n\nYeah looks like a typo to me.\n\nI wonder if we also need to fix this part:\n\n> either one does their writes. Eventually we might be able to use an atomic\n> fetch-and-add instruction for this specific case on architectures that support\n> it, but we can't rely on that being available everywhere, and we currently\n> have no support for it at all. Use a lock.\n\nThat seems to have been written at a time before we got atomics.\n\nThe following also might want to mention atomics too:\n\n> 2. Eight-byte loads and stores aren't necessarily atomic. We assume in\n> various places in the source code that an aligned four-byte load or store is\n> atomic, and that other processes therefore won't see a half-set value.\n> Sadly, the same can't be said for eight-byte value: on some platforms, an\n> aligned eight-byte load or store will generate two four-byte operations. If\n> you need an atomic eight-byte read or write, you must make it atomic with a\n> lock.\n\nDavid\n\n\n",
"msg_date": "Mon, 17 May 2021 00:51:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "> Yeah looks like a typo to me.\n\nOk.\n\n> I wonder if we also need to fix this part:\n> \n>> either one does their writes. Eventually we might be able to use an atomic\n>> fetch-and-add instruction for this specific case on architectures that support\n>> it, but we can't rely on that being available everywhere, and we currently\n>> have no support for it at all. Use a lock.\n> \n> That seems to have been written at a time before we got atomics.\n> \n> The following also might want to mention atomics too:\n> \n>> 2. Eight-byte loads and stores aren't necessarily atomic. We assume in\n>> various places in the source code that an aligned four-byte load or store is\n>> atomic, and that other processes therefore won't see a half-set value.\n>> Sadly, the same can't be said for eight-byte value: on some platforms, an\n>> aligned eight-byte load or store will generate two four-byte operations. If\n>> you need an atomic eight-byte read or write, you must make it atomic with a\n>> lock.\n\nYes, we'd better to fix them. Attached is a propsal for these.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp",
"msg_date": "Sun, 16 May 2021 22:29:30 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "On Mon, 17 May 2021 at 01:29, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> Yes, we'd better to fix them. Attached is a propsal for these.\n\nThanks for working on that. I had a look and wondered if it might be\nbetter to go into slightly less details about the exact atomic\nfunction to use. The wording there might lead you to believe you can\njust call the atomic function on the non-atomic variable.\n\nIt might be best just to leave the details about how exactly to use\natomics by just referencing port/atomics.h.\n\nMaybe something like the attached?\n\nI'm also a bit on the fence if this should be backpatched or not. The\nreasons though maybe not is that it seems unlikely maybe people would\nnot be working in master if they're developing something new. On the\nother side of the argument, 0ccebe779, which adjusts another README\nwas backpatched. I'm leaning towards backpatching.\n\nDavid",
"msg_date": "Mon, 17 May 2021 12:19:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "> Thanks for working on that. I had a look and wondered if it might be\n> better to go into slightly less details about the exact atomic\n> function to use. The wording there might lead you to believe you can\n> just call the atomic function on the non-atomic variable.\n> \n> It might be best just to leave the details about how exactly to use\n> atomics by just referencing port/atomics.h.\n> \n> Maybe something like the attached?\n\nThanks. Agreed and your patch looks good to me.\n\n> I'm also a bit on the fence if this should be backpatched or not. The\n> reasons though maybe not is that it seems unlikely maybe people would\n> not be working in master if they're developing something new. On the\n> other side of the argument, 0ccebe779, which adjusts another README\n> was backpatched. I'm leaning towards backpatching.\n\nMe too. Let's backpatch.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 17 May 2021 09:33:27 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "David,\n\n>> Thanks for working on that. I had a look and wondered if it might be\n>> better to go into slightly less details about the exact atomic\n>> function to use. The wording there might lead you to believe you can\n>> just call the atomic function on the non-atomic variable.\n>> \n>> It might be best just to leave the details about how exactly to use\n>> atomics by just referencing port/atomics.h.\n>> \n>> Maybe something like the attached?\n> \n> Thanks. Agreed and your patch looks good to me.\n> \n>> I'm also a bit on the fence if this should be backpatched or not. The\n>> reasons though maybe not is that it seems unlikely maybe people would\n>> not be working in master if they're developing something new. On the\n>> other side of the argument, 0ccebe779, which adjusts another README\n>> was backpatched. I'm leaning towards backpatching.\n> \n> Me too. Let's backpatch.\n\nWould you like to push the patch?\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 17 May 2021 13:45:09 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "On Mon, 17 May 2021 at 16:45, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> Would you like to push the patch?\n\nYeah, I can. I was just letting it sit for a while to see if anyone\nelse had an opinion about backpatching.\n\nDavid\n\n\n",
"msg_date": "Mon, 17 May 2021 16:46:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "> On Mon, 17 May 2021 at 16:45, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> Would you like to push the patch?\n> \n> Yeah, I can. I was just letting it sit for a while to see if anyone\n> else had an opinion about backpatching.\n\nOk.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 17 May 2021 13:48:31 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "On Mon, May 17, 2021 at 09:33:27AM +0900, Tatsuo Ishii wrote:\n> Me too. Let's backpatch.\n\nA README is not directly user-facing, it is here for developers, so I\nwould not really bother with a backpatch. Now it is not a big deal to\ndo so either, so that's not a -1 from me, more a +0, for \"please feel\nfree to do what you think is most adapted\".\n\nYou may want to hold on until 14beta1 is tagged, though.\n--\nMichael",
"msg_date": "Mon, 17 May 2021 13:52:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "> On Mon, May 17, 2021 at 09:33:27AM +0900, Tatsuo Ishii wrote:\n>> Me too. Let's backpatch.\n> \n> A README is not directly user-facing, it is here for developers, so I\n> would not really bother with a backpatch. Now it is not a big deal to\n> do so either, so that's not a -1 from me, more a +0, for \"please feel\n> free to do what you think is most adapted\".\n\nI think README is similar to code comments. If a code comment is\nwrong, we usually fix to back branches. Why can't we do the same thing\nfor README?\n\n> You may want to hold on until 14beta1 is tagged, though.\n\nOf course we can wait till that day but I wonder why.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Mon, 17 May 2021 14:18:41 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "On Mon, 17 May 2021 at 17:18, Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> > On Mon, May 17, 2021 at 09:33:27AM +0900, Tatsuo Ishii wrote:\n> >> Me too. Let's backpatch.\n> >\n> > A README is not directly user-facing, it is here for developers, so I\n> > would not really bother with a backpatch. Now it is not a big deal to\n> > do so either, so that's not a -1 from me, more a +0, for \"please feel\n> > free to do what you think is most adapted\".\n>\n> I think README is similar to code comments. If a code comment is\n> wrong, we usually fix to back branches. Why can't we do the same thing\n> for README?\n\nThanks for the votes. Since Michael was on the fence and I was just\nleaning over it and Ishii-san was pro-backpatch, I backpatched it.\n\n> > You may want to hold on until 14beta1 is tagged, though.\n>\n> Of course we can wait till that day but I wonder why.\n\nI imagined that would be a good idea for more risky patches so we\ndon't break something before a good round of buildfarm testing.\nHowever, since this is just a README, I didn't think it would have\nmattered. Maybe there's another reason I'm overlooking?\n\nDavid\n\n\n",
"msg_date": "Tue, 18 May 2021 10:07:30 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n>>> You may want to hold on until 14beta1 is tagged, though.\n\n>> Of course we can wait till that day but I wonder why.\n\n> I imagined that would be a good idea for more risky patches so we\n> don't break something before a good round of buildfarm testing.\n> However, since this is just a README, I didn't think it would have\n> mattered. Maybe there's another reason I'm overlooking?\n\nGenerally it's considered poor form to push any inessential patches\nduring a release window (which I'd define roughly as 48 hours before\nthe wrap till after the tag is applied). It complicates the picture\nfor the final round of buildfarm testing, and if we have to do a\nre-wrap then we're faced with the question of whether to back out\nthe patch.\n\nIn this case, it being just a README, I agree there's no harm done.\nBut we've been burnt by \"low risk\" patches before, so I'd tend to\nerr on the side of caution during a release window.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 May 2021 18:37:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
},
{
"msg_contents": "On Mon, May 17, 2021 at 06:37:56PM -0400, Tom Lane wrote:\n> Generally it's considered poor form to push any inessential patches\n> during a release window (which I'd define roughly as 48 hours before\n> the wrap till after the tag is applied). It complicates the picture\n> for the final round of buildfarm testing, and if we have to do a\n> re-wrap then we're faced with the question of whether to back out\n> the patch.\n> \n> In this case, it being just a README, I agree there's no harm done.\n> But we've been burnt by \"low risk\" patches before, so I'd tend to\n> err on the side of caution during a release window.\n\nYes, I've had this experience once in the past. So I tend to just\nwait until the tag is pushed as long as it is not critical for the\nrelease.\n--\nMichael",
"msg_date": "Tue, 18 May 2021 09:04:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Typo in README.barrier"
}
] |
[
{
"msg_contents": "Greetings!\n\nLet say I have a foreign server using the reference postgres_fdw defined \nwithout a port number:\n\nCREATE SERVER /dat_server/ FOREIGN DATA WRAPPER /postgres_fdw/ OPTIONS ( \n|/host '172.1.1.1', dbname 'dbover_der'/| )\n\nNaturally the tables in question are setup using a foreign table \ndefinition, specifying the foreign server /dat_server/. My understanding \nis when the sql is ready for execution whatever is determined to be \npushed down is sent to the foreign server. Taking a look at the code it \nappears in postgres_fdw.c a connection is probably made in dat case:\n\n/* for remote query execution */\n PGconn *conn; /* connection for the scan */\n PgFdwConnState *conn_state; /* extra per-connection state */\n\nand\n\n/*\n * Get connection to the foreign server. Connection manager will\n * establish new connection if necessary.\n */\n fsstate->conn = GetConnection(user, false, &fsstate->conn_state);\n\nMy question is - how does the call to GetConnection() know what port to \nuse? Lets say we're using PGBouncer to connect on the local server at \nport 6432, but there is no pgbouncer listening at the foreign server, \nwhat port gets passed? 
My first thought is whatever the client connects \nport is, but I believe pgbouncer ultimately hands off the connection to \nwhatever port you have defined for the local database...\n\nThis gets important when one has an HAProxy instance between the local \nand foreign servers which is interrogating the port number to decide \nwhich ip:port to send the request to, ultimately the master or replicant \nat the foreign remote server.\n\nSo how does the port number get propagated from local to foreign server???\n\nMuch thanks for your help.\n\nPhil Godfrin",
"msg_date": "Sun, 16 May 2021 07:57:01 -0500",
"msg_from": "Phil Godfrin <pgodfrin@comcast.net>",
"msg_from_op": true,
"msg_subject": "FDW and connections"
},
{
"msg_contents": "From: Phil Godfrin <pgodfrin@comcast.net>\r\nMy question is - how does the call to GetConnection() know what port to use? Lets say we're using PGBouncer to connect on the local server at port 6432, but there is no pgbouncer listening at the foreign server, what port gets passed? My first thought is whatever the client connects port is, but I believe pgbouncer ultimately hands of the connection to whatever port you have defined for the local database...\r\nThis gets important when one has an HAProxy instance between the local and foreign servers which is interrogating the port number to decide which ip:port to send the request to, ultimately the master or replicant at the foreign remoter server.\r\nSo how does the port number get propagated from local to foreign server???\r\n--------------------------------------------------\r\n\r\n\r\npostgres_fdw uses libpq as a client to connect to the foreign server. So, as the following says, you can specify the libpq's \"port\" parameter in CREATE SERVER. If it's ommitted as in your case, the default 5432 will be used.\r\n\r\nF.35.1.1. Connection Options\r\nhttps://www.postgresql.org/docs/devel/postgres-fdw.html\r\n\r\n\"A foreign server using the postgres_fdw foreign data wrapper can have the same options that libpq accepts in connection strings, as described in Section 34.1.2, except that these options are not allowed or have special handling:\"\r\n\r\n\r\nI'm afraid it's better to post user-level questions like this to pgsql-general@lists.postgresql.org.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n\n\n\n\n\n\n\n\n\n\nFrom: Phil Godfrin <pgodfrin@comcast.net>\r\n\nMy question is - how does the call to GetConnection() know what port to use? Lets say we're using PGBouncer to connect on the local server at port\r\n 6432, but there is no pgbouncer listening at the foreign server, what port gets passed? 
My first thought is whatever the client connects port is, but I believe pgbouncer ultimately hands of the connection to whatever port you have defined for the local database...\nThis gets important when one has an HAProxy instance between the local and foreign servers which is interrogating the port number to decide which ip:port\r\n to send the request to, ultimately the master or replicant at the foreign remoter server.\nSo how does the port number get propagated from local to foreign server???\n--------------------------------------------------\n \n \npostgres_fdw uses libpq as a client to connect to the foreign server. So, as the following says, you can specify the libpq's \"port\" parameter in CREATE\r\n SERVER. If it's ommitted as in your case, the default 5432 will be used.\n \nF.35.1.1. Connection Options\nhttps://www.postgresql.org/docs/devel/postgres-fdw.html\n \n\"A foreign server using the postgres_fdw foreign data wrapper can have the same options that libpq accepts in connection strings, as described in Section\r\n 34.1.2, except that these options are not allowed or have special handling:\"\n \n \nI'm afraid it's better to post user-level questions like this to pgsql-general@lists.postgresql.org.\n \n \nRegards\nTakayuki Tsunakawa",
"msg_date": "Mon, 17 May 2021 00:43:07 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: FDW and connections"
},
{
"msg_contents": "Apologies, in my mind this was an internals to the postgres_fdw code, \nwhich is why I cam here. I checked that part of the docs and nowhere \ndoes it say anything about defaulting to 5432. However in the referred \nsection, 34.1.2, there it says that libpq defaults to the \"port number \nestablished when PostgreSQL was built\". I'm not well informed about the \ninternals of libpq nor the mailing lists, again I'm sorry. Seems to me I \nneed to learn more about both <grin>. Thanks.\n\npg\n\nOn 5/16/2021 7:43 PM, tsunakawa.takay@fujitsu.com wrote:\n>\n> From: Phil Godfrin <pgodfrin@comcast.net>\n>\n> My question is - how does the call to GetConnection() know what port \n> to use? Lets say we're using PGBouncer to connect on the local server \n> at port 6432, but there is no pgbouncer listening at the foreign \n> server, what port gets passed? My first thought is whatever the client \n> connects port is, but I believe pgbouncer ultimately hands of the \n> connection to whatever port you have defined for the local database...\n>\n> This gets important when one has an HAProxy instance between the local \n> and foreign servers which is interrogating the port number to decide \n> which ip:port to send the request to, ultimately the master or \n> replicant at the foreign remoter server.\n>\n> So how does the port number get propagated from local to foreign server???\n>\n> --------------------------------------------------\n>\n> postgres_fdw uses libpq as a client to connect to the foreign server. \n> So, as the following says, you can specify the libpq's \"port\" \n> parameter in CREATE SERVER. If it's ommitted as in your case, the \n> default 5432 will be used.\n>\n> F.35.1.1. 
Connection Options\n>\n> https://www.postgresql.org/docs/devel/postgres-fdw.html\n>\n> \"A foreign server using the postgres_fdw foreign data wrapper can have \n> the same options that libpq accepts in connection strings, as \n> described in Section 34.1.2, except that these options are not allowed \n> or have special handling:\"\n>\n> I'm afraid it's better to post user-level questions like this to \n> pgsql-general@lists.postgresql.org.\n>\n> Regards\n>\n> Takayuki Tsunakawa\n>",
"msg_date": "Sun, 16 May 2021 20:52:49 -0500",
"msg_from": "Phil Godfrin <pgodfrin@comcast.net>",
"msg_from_op": true,
"msg_subject": "Re: FDW and connections"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nWhile understanding the behaviour of the to_char() function as\nexplained in [1], I observed that some patterns related to time zones\ndo not display values if we mention in lower case. As shown in the\nsample output [2], time zone related patterns TZH, TZM and OF outputs\nproper values when specified in upper case but does not work if we\nmention in lower case. But other patterns like TZ, HH, etc works fine\nwith upper case as well as lower case.\n\nI would like to know whether the current behaviour of TZH, TZM and OF\nis done intentionally and is as expected.\nPlease share your thoughts.\n\n[1] - https://www.postgresql.org/docs/current/functions-formatting.html\n\n[2] -\npostgres@123613=#select to_char(current_timestamp, 'TZH');\n to_char\n---------\n +05\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'TZM');\n to_char\n---------\n 30\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'OF');\n to_char\n---------\n +05:30\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'tzh');\n to_char\n---------\n isth\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'tzm');\n to_char\n---------\n istm\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'of');\n to_char\n---------\n of\n(1 row)\n\n[3] -\npostgres@123613=#select to_char(current_timestamp, 'tz');\n to_char\n---------\n ist\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'TZ');\n to_char\n---------\n IST\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'HH');\n to_char\n---------\n 08\n(1 row)\n\npostgres@123613=#select to_char(current_timestamp, 'hh');\n to_char\n---------\n 08\n(1 row)\n\nThanks & Regards,\nNitin Jadhav\n\n\n",
"msg_date": "Sun, 16 May 2021 20:25:48 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Query about time zone patterns in to_char"
},
{
"msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> While understanding the behaviour of the to_char() function as\n> explained in [1], I observed that some patterns related to time zones\n> do not display values if we mention in lower case. As shown in the\n> sample output [2], time zone related patterns TZH, TZM and OF outputs\n> proper values when specified in upper case but does not work if we\n> mention in lower case. But other patterns like TZ, HH, etc works fine\n> with upper case as well as lower case.\n\n> I would like to know whether the current behaviour of TZH, TZM and OF\n> is done intentionally and is as expected.\n\nAFAICS, table 9.26 specifically shows which case-variants are supported.\nIf there are some others that happen to work, we probably shouldn't\nremove them for fear of breaking poorly-written apps ... but that does\nnot imply that we need to support every case-variant.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 May 2021 11:10:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "> AFAICS, table 9.26 specifically shows which case-variants are supported.\n> If there are some others that happen to work, we probably shouldn't\n> remove them for fear of breaking poorly-written apps ... but that does\n> not imply that we need to support every case-variant.\n\nThanks for the explanation. I also feel that we may not support every\ncase-variant. But the other reason which triggered me to think in the\nother way is, as mentioned in commit [1] where this feature was added,\nsays that these format patterns are compatible with Oracle. Whereas\nOracle supports both upper case and lower case patterns. I just wanted\nto get it confirmed with this point before concluding.\n\n[1] -\ncommit 11b623dd0a2c385719ebbbdd42dd4ec395dcdc9d\nAuthor: Andrew Dunstan <andrew@dunslane.net>\nDate: Tue Jan 9 14:25:05 2018 -0500\n\n Implement TZH and TZM timestamp format patterns\n\n These are compatible with Oracle and required for the datetime template\n language for jsonpath in an upcoming patch.\n\n Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Sun, May 16, 2021 at 8:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > While understanding the behaviour of the to_char() function as\n> > explained in [1], I observed that some patterns related to time zones\n> > do not display values if we mention in lower case. As shown in the\n> > sample output [2], time zone related patterns TZH, TZM and OF outputs\n> > proper values when specified in upper case but does not work if we\n> > mention in lower case. 
But other patterns like TZ, HH, etc works fine\n> > with upper case as well as lower case.\n>\n> > I would like to know whether the current behaviour of TZH, TZM and OF\n> > is done intentionally and is as expected.\n>\n> AFAICS, table 9.26 specifically shows which case-variants are supported.\n> If there are some others that happen to work, we probably shouldn't\n> remove them for fear of breaking poorly-written apps ... but that does\n> not imply that we need to support every case-variant.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Sun, 16 May 2021 21:43:21 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> Thanks for the explanation. I also feel that we may not support every\n> case-variant. But the other reason which triggered me to think in the\n> other way is, as mentioned in commit [1] where this feature was added,\n> says that these format patterns are compatible with Oracle. Whereas\n> Oracle supports both upper case and lower case patterns. I just wanted\n> to get it confirmed with this point before concluding.\n\nHm. If Oracle does that, then there's an argument for us doing it\ntoo. I can't get hugely excited about it, but maybe someone else\ncares enough to prepare a patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 16 May 2021 13:04:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "> Hm. If Oracle does that, then there's an argument for us doing it\n> too. I can't get hugely excited about it, but maybe someone else\n> cares enough to prepare a patch.\n\nThanks for the confirmation. Attached patch supports these format\npatterns. Kindly review and let me know if any changes are required.\n\nThanks & Regards,\nNitin Jadhav\nOn Sun, May 16, 2021 at 10:34 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > Thanks for the explanation. I also feel that we may not support every\n> > case-variant. But the other reason which triggered me to think in the\n> > other way is, as mentioned in commit [1] where this feature was added,\n> > says that these format patterns are compatible with Oracle. Whereas\n> > Oracle supports both upper case and lower case patterns. I just wanted\n> > to get it confirmed with this point before concluding.\n>\n> Hm. If Oracle does that, then there's an argument for us doing it\n> too. I can't get hugely excited about it, but maybe someone else\n> cares enough to prepare a patch.\n>\n> regards, tom lane",
"msg_date": "Sun, 16 May 2021 23:52:41 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "On Mon, 17 May 2021 at 06:23, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > Hm. If Oracle does that, then there's an argument for us doing it\n> > too. I can't get hugely excited about it, but maybe someone else\n> > cares enough to prepare a patch.\n>\n> Thanks for the confirmation. Attached patch supports these format\n> patterns. Kindly review and let me know if any changes are required.\n\nPlease add it to the July commitfest: https://commitfest.postgresql.org/33/\n\nDavid\n\n\n",
"msg_date": "Mon, 17 May 2021 13:35:20 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "> Please add it to the July commitfest: https://commitfest.postgresql.org/33/\nAdded a commitfest entry https://commitfest.postgresql.org/33/3121/\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, May 17, 2021 at 7:05 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 17 May 2021 at 06:23, Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > > Hm. If Oracle does that, then there's an argument for us doing it\n> > > too. I can't get hugely excited about it, but maybe someone else\n> > > cares enough to prepare a patch.\n> >\n> > Thanks for the confirmation. Attached patch supports these format\n> > patterns. Kindly review and let me know if any changes are required.\n>\n> Please add it to the July commitfest: https://commitfest.postgresql.org/33/\n>\n> David\n\n\n",
"msg_date": "Mon, 17 May 2021 09:52:18 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "+1 for the change.\n\nI quickly reviewed the patch and overall it looks good to me.\nFew cosmetic suggestions:\n\n1:\n+RESET timezone;\n+\n+\n CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);\n\nExtra line.\n\n2:\n+SET timezone = '00:00';\n+SELECT to_char(now(), 'of') as \"Of\", to_char(now(), 'tzh:tzm') as\n\"tzh:tzm\";\n\nO should be small in alias just for consistency.\n\nI am not sure whether we should backport this or not but I don't see any\nissues with back-patching.\n\nOn Sun, May 16, 2021 at 9:43 PM Nitin Jadhav <nitinjadhavpostgres@gmail.com>\nwrote:\n\n> > AFAICS, table 9.26 specifically shows which case-variants are supported.\n> > If there are some others that happen to work, we probably shouldn't\n> > remove them for fear of breaking poorly-written apps ... but that does\n> > not imply that we need to support every case-variant.\n>\n> Thanks for the explanation. I also feel that we may not support every\n> case-variant. But the other reason which triggered me to think in the\n> other way is, as mentioned in commit [1] where this feature was added,\n> says that these format patterns are compatible with Oracle. Whereas\n> Oracle supports both upper case and lower case patterns. 
I just wanted\n> to get it confirmed with this point before concluding.\n>\n> [1] -\n> commit 11b623dd0a2c385719ebbbdd42dd4ec395dcdc9d\n> Author: Andrew Dunstan <andrew@dunslane.net>\n> Date: Tue Jan 9 14:25:05 2018 -0500\n>\n> Implement TZH and TZM timestamp format patterns\n>\n> These are compatible with Oracle and required for the datetime template\n> language for jsonpath in an upcoming patch.\n>\n> Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.\n>\n> Thanks & Regards,\n> Nitin Jadhav\n>\n> On Sun, May 16, 2021 at 8:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > > While understanding the behaviour of the to_char() function as\n> > > explained in [1], I observed that some patterns related to time zones\n> > > do not display values if we mention in lower case. As shown in the\n> > > sample output [2], time zone related patterns TZH, TZM and OF outputs\n> > > proper values when specified in upper case but does not work if we\n> > > mention in lower case. But other patterns like TZ, HH, etc works fine\n> > > with upper case as well as lower case.\n> >\n> > > I would like to know whether the current behaviour of TZH, TZM and OF\n> > > is done intentionally and is as expected.\n> >\n> > AFAICS, table 9.26 specifically shows which case-variants are supported.\n> > If there are some others that happen to work, we probably shouldn't\n> > remove them for fear of breaking poorly-written apps ... 
but that does\n> > not imply that we need to support every case-variant.\n> >\n> > regards, tom lane\n>\n\n-- \n\nThanks & Regards,\nSuraj kharage,\n\nedbpostgres.com",
"msg_date": "Thu, 20 May 2021 08:54:56 +0530",
"msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "Thanks Suraj for reviewing the patch.\n\n> 1:\n> +RESET timezone;\n> +\n> +\n> CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);\n>\n> Extra line.\n>\n> 2:\n> +SET timezone = '00:00';\n> +SELECT to_char(now(), 'of') as \"Of\", to_char(now(), 'tzh:tzm') as\n\"tzh:tzm\";\n\nI have fixed these comments.\n\n> I am not sure whether we should backport this or not but I don't see any\nissues with back-patching.\n\nI am also not sure about this. If it is really required, I would like to\ncreate those patches.\n\nPlease find the patch attached. Kindly confirm and share comments if any.\n\n--\nThanks & Regards,\nNitin Jadhav\n\n\n\nOn Thu, May 20, 2021 at 8:55 AM Suraj Kharage <\nsuraj.kharage@enterprisedb.com> wrote:\n\n> +1 for the change.\n>\n> I quickly reviewed the patch and overall it looks good to me.\n> Few cosmetic suggestions:\n>\n> 1:\n> +RESET timezone;\n> +\n> +\n> CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);\n>\n> Extra line.\n>\n> 2:\n> +SET timezone = '00:00';\n> +SELECT to_char(now(), 'of') as \"Of\", to_char(now(), 'tzh:tzm') as\n> \"tzh:tzm\";\n>\n> O should be small in alias just for consistency.\n>\n> I am not sure whether we should backport this or not but I don't see any\n> issues with back-patching.\n>\n> On Sun, May 16, 2021 at 9:43 PM Nitin Jadhav <\n> nitinjadhavpostgres@gmail.com> wrote:\n>\n>> > AFAICS, table 9.26 specifically shows which case-variants are supported.\n>> > If there are some others that happen to work, we probably shouldn't\n>> > remove them for fear of breaking poorly-written apps ... but that does\n>> > not imply that we need to support every case-variant.\n>>\n>> Thanks for the explanation. I also feel that we may not support every\n>> case-variant. But the other reason which triggered me to think in the\n>> other way is, as mentioned in commit [1] where this feature was added,\n>> says that these format patterns are compatible with Oracle. 
Whereas\n>> Oracle supports both upper case and lower case patterns. I just wanted\n>> to get it confirmed with this point before concluding.\n>>\n>> [1] -\n>> commit 11b623dd0a2c385719ebbbdd42dd4ec395dcdc9d\n>> Author: Andrew Dunstan <andrew@dunslane.net>\n>> Date: Tue Jan 9 14:25:05 2018 -0500\n>>\n>> Implement TZH and TZM timestamp format patterns\n>>\n>> These are compatible with Oracle and required for the datetime\n>> template\n>> language for jsonpath in an upcoming patch.\n>>\n>> Nikita Glukhov and Andrew Dunstan, reviewed by Pavel Stehule.\n>>\n>> Thanks & Regards,\n>> Nitin Jadhav\n>>\n>> On Sun, May 16, 2021 at 8:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> >\n>> > Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n>> > > While understanding the behaviour of the to_char() function as\n>> > > explained in [1], I observed that some patterns related to time zones\n>> > > do not display values if we mention in lower case. As shown in the\n>> > > sample output [2], time zone related patterns TZH, TZM and OF outputs\n>> > > proper values when specified in upper case but does not work if we\n>> > > mention in lower case. But other patterns like TZ, HH, etc works fine\n>> > > with upper case as well as lower case.\n>> >\n>> > > I would like to know whether the current behaviour of TZH, TZM and OF\n>> > > is done intentionally and is as expected.\n>> >\n>> > AFAICS, table 9.26 specifically shows which case-variants are supported.\n>> > If there are some others that happen to work, we probably shouldn't\n>> > remove them for fear of breaking poorly-written apps ... but that does\n>> > not imply that we need to support every case-variant.\n>> >\n>> > regards, tom lane\n>>\n>>\n>>\n>\n> --\n> --\n>\n> Thanks & Regards,\n> Suraj kharage,\n>\n>\n>\n> edbpostgres.com\n>",
"msg_date": "Thu, 20 May 2021 12:21:12 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "On Thu, May 20, 2021 at 12:21:12PM +0530, Nitin Jadhav wrote:\n> Thanks Suraj for reviewing the patch.\n> \n> > 1:\n> > +RESET timezone;\n> > +\n> > +\n> > CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);\n> >\n> > Extra line.\n> >\n> > 2:\n> > +SET timezone = '00:00';\n> > +SELECT to_char(now(), 'of') as \"Of\", to_char(now(), 'tzh:tzm') as \"tzh:tzm\";\n> \n> I have fixed these comments.\n> \n> > I am not sure whether we should backport this or not but I don't see any\n> issues with back-patching.\n\nOnly significant fixes are backpatched, not features.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Thu, 20 May 2021 14:25:15 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "\n\nOn 5/20/21 8:25 PM, Bruce Momjian wrote:\n> On Thu, May 20, 2021 at 12:21:12PM +0530, Nitin Jadhav wrote:\n>> Thanks Suraj for reviewing the patch.\n>>\n>>> 1:\n>>> +RESET timezone;\n>>> +\n>>> +\n>>> CREATE TABLE TIMESTAMPTZ_TST (a int , b timestamptz);\n>>>\n>>> Extra line.\n>>>\n>>> 2:\n>>> +SET timezone = '00:00';\n>>> +SELECT to_char(now(), 'of') as \"Of\", to_char(now(), 'tzh:tzm') as \"tzh:tzm\";\n>>\n>> I have fixed these comments.\n>>\n>>> I am not sure whether we should backport this or not but I don't see any\n>> issues with back-patching.\n> \n> Only significant fixes are backpatched, not features.\n> \n\nYeah, does not seem to be worth it, as there seem to be no actual\nreports of issues in the field.\n\nFWIW there seem to be quite a bit of other to_char differences compared\nto Oracle (judging by docs and playing with sqlfiddle). But the patch\nseems fine / simple enough and non-problematic, so perhaps let's just\nget it committed?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 9 Jul 2021 16:43:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nApplied the patch `v2_support_of_tzh_tzm_patterns.patch` to `REL_14_STABLE` branch, both `make check` and `make check-world` are all passed.",
"msg_date": "Fri, 09 Jul 2021 19:17:45 +0000",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 10:44 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Yeah, does not seem to be worth it, as there seem to be no actual\n> reports of issues in the field.\n>\n> FWIW there seem to be quite a bit of other to_char differences compared\n> to Oracle (judging by docs and playing with sqlfiddle). But the patch\n> seems fine / simple enough and non-problematic, so perhaps let's just\n> get it committed?\n\nThis patch is still in the current CommitFest, so I decided to review\nit. I see that DCH_keywords[] includes upper and lower-case entries\nfor everything except the three cases corrected by this patch, where\nit includes upper-case entries but not the corresponding lower-case\nentries. It seems to make sense to make these three cases consistent\nwith everything else.\n\nIt took me a while to understand how DCH_keywords[] and DCH_index[]\nactually work, and I think it's a pretty confusing design, but what\nthe patch does seems to be consistent with that, so it appears correct\nto me.\n\nTherefore, I have committed it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Mar 2022 16:52:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Query about time zone patterns in to_char"
},
{
"msg_contents": "> This patch is still in the current CommitFest, so I decided to review\n> it. I see that DCH_keywords[] includes upper and lower-case entries\n> for everything except the three cases corrected by this patch, where\n> it includes upper-case entries but not the corresponding lower-case\n> entries. It seems to make sense to make these three cases consistent\n> with everything else.\n>\n> It took me a while to understand how DCH_keywords[] and DCH_index[]\n> actually work, and I think it's a pretty confusing design, but what\n> the patch does seems to be consistent with that, so it appears correct\n> to me.\n>\n> Therefore, I have committed it.\n\nThank you so much.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Mar 15, 2022 at 2:22 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jul 9, 2021 at 10:44 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> > Yeah, does not seem to be worth it, as there seem to be no actual\n> > reports of issues in the field.\n> >\n> > FWIW there seem to be quite a bit of other to_char differences compared\n> > to Oracle (judging by docs and playing with sqlfiddle). But the patch\n> > seems fine / simple enough and non-problematic, so perhaps let's just\n> > get it committed?\n>\n> This patch is still in the current CommitFest, so I decided to review\n> it. I see that DCH_keywords[] includes upper and lower-case entries\n> for everything except the three cases corrected by this patch, where\n> it includes upper-case entries but not the corresponding lower-case\n> entries. It seems to make sense to make these three cases consistent\n> with everything else.\n>\n> It took me a while to understand how DCH_keywords[] and DCH_index[]\n> actually work, and I think it's a pretty confusing design, but what\n> the patch does seems to be consistent with that, so it appears correct\n> to me.\n>\n> Therefore, I have committed it.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Mar 2022 15:02:59 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Query about time zone patterns in to_char"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have bumped into $subject while playing with this feature, and this\ncan be really useful to be able to reset the compression method for\nall the tables at restore. The patch is simple but that's perhaps too\nlate for 14, so I am adding it to the next CF. \n\nThanks,\n--\nMichael",
"msg_date": "Mon, 17 May 2021 10:12:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "pg_dumpall misses --no-toast-compression"
},
{
"msg_contents": "On Mon, May 17, 2021 at 6:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> I have bumped into $subject while playing with this feature, and this\n> can be really useful to be able to reset the compression method for\n> all the tables at restore.\n\nThis makes sense\n\n The patch is simple but that's perhaps too\n> late for 14, so I am adding it to the next CF.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 11:20:12 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall misses --no-toast-compression"
},
{
"msg_contents": "> On 17 May 2021, at 03:12, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I have bumped into $subject while playing with this feature, and this\n> can be really useful to be able to reset the compression method for\n> all the tables at restore. The patch is simple but that's perhaps too\n> late for 14, so I am adding it to the next CF.\n\nI think there is a reasonable case to be made for this fixing an oversight in\nbbe0a81db69bd10bd166907c3701492a29aca294 as opposed to adding a brand new\nfeature. Save for --no-synchronized-snapshots all --no-xxx options in pg_dump\nare mirrored in pg_dumpall.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 17 May 2021 16:05:34 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall misses --no-toast-compression"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 17 May 2021, at 03:12, Michael Paquier <michael@paquier.xyz> wrote:\n>> I have bumped into $subject while playing with this feature, and this\n>> can be really useful to be able to reset the compression method for\n>> all the tables at restore. The patch is simple but that's perhaps too\n>> late for 14, so I am adding it to the next CF.\n\n> I think there is a reasonable case to be made for this fixing an oversight in\n> bbe0a81db69bd10bd166907c3701492a29aca294 as opposed to adding a brand new\n> feature. Save for --no-synchronized-snapshots all --no-xxx options in pg_dump\n> are mirrored in pg_dumpall.\n\n+1, seems more like fixing an oversight than anything else.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 May 2021 10:38:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall misses --no-toast-compression"
},
{
"msg_contents": "On Mon, May 17, 2021 at 10:38:15AM -0400, Tom Lane wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I think there is a reasonable case to be made for this fixing an oversight in\n>> bbe0a81db69bd10bd166907c3701492a29aca294 as opposed to adding a brand new\n>> feature. Save for --no-synchronized-snapshots all --no-xxx options in pg_dump\n>> are mirrored in pg_dumpall.\n> \n> +1, seems more like fixing an oversight than anything else.\n\nOkay, thanks. I don't mind taking care of that on HEAD once beta1 is\nshipped, then.\n--\nMichael",
"msg_date": "Tue, 18 May 2021 09:48:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_dumpall misses --no-toast-compression"
},
{
"msg_contents": "On Tue, May 18, 2021 at 09:48:59AM +0900, Michael Paquier wrote:\n> Okay, thanks. I don't mind taking care of that on HEAD once beta1 is\n> shipped, then.\n\nBeta1 just got tagged, so this one has been applied as of 694da19.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 09:44:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: pg_dumpall misses --no-toast-compression"
}
] |
[
{
"msg_contents": "While doing logical replication testing we encountered a problem which\ncauses a deadlock error to be logged when replicating a TRUNCATE\nsimultaneously to 2 Subscriptions.\ne.g.\n----------\n2021-05-12 11:30:19.457 AEST [11393] ERROR: deadlock detected\n2021-05-12 11:30:19.457 AEST [11393] DETAIL: Process 11393 waits for\nShareLock on transaction 597; blocked by process 11582.\nProcess 11582 waits for ShareLock on relation 16384 of database 14896;\nblocked by process 11393.\n----------\n\nAt this time, both the subscriber (apply worker) processes are\nexecuting inside the ExecuteTruncateGuts function simultaneously and\nthey become co-dependent.\n\ne.g.\n----------\nProcess 11582\n(gdb) bt\n#0 0x00007fa1979515e3 in __epoll_wait_nocancel () from /lib64/libc.so.6\n#1 0x000000000093e5d0 in WaitEventSetWaitBlock (set=0x2facac8,\ncur_timeout=-1, occurred_events=0x7ffed5fdff00, nevents=1) at\nlatch.c:1450\n#2 0x000000000093e468 in WaitEventSetWait (set=0x2facac8, timeout=-1,\noccurred_events=0x7ffed5fdff00, nevents=1, wait_event_info=50331648)\nat latch.c:1396\n#3 0x000000000093d8cd in WaitLatch (latch=0x7fa191042654,\nwakeEvents=33, timeout=0, wait_event_info=50331648) at latch.c:473\n#4 0x00000000009660f0 in ProcSleep (locallock=0x2fd06d8,\nlockMethodTable=0xcd90a0 <default_lockmethod>) at proc.c:1361\n#5 0x0000000000954fd5 in WaitOnLock (locallock=0x2fd06d8,\nowner=0x2fd9a48) at lock.c:1858\n#6 0x0000000000953c14 in LockAcquireExtended (locktag=0x7ffed5fe0370,\nlockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true,\nlocallockp=0x7ffed5fe0368) at lock.c:1100\n#7 0x00000000009511f1 in LockRelationOid (relid=16384, lockmode=5) at\nlmgr.c:117\n#8 0x000000000049e779 in relation_open (relationId=16384, lockmode=5)\nat relation.c:56\n#9 0x00000000005652ff in table_open (relationId=16384, lockmode=5) at\ntable.c:43\n#10 0x00000000005c8b5a in reindex_relation (relid=16384, flags=1,\nparams=0x7ffed5fe05f0) at index.c:3990\n#11 
0x00000000006d2c85 in ExecuteTruncateGuts\n(explicit_rels=0x3068aa8, relids=0x3068b00, relids_extra=0x3068b58,\nrelids_logged=0x3068bb0, behavior=DROP_RESTRICT, restart_seqs=false)\nat tablecmds.c:2036\n#12 0x00000000008ebc50 in apply_handle_truncate (s=0x7ffed5fe08d0) at\nworker.c:2252\n#13 0x00000000008ebe94 in apply_dispatch (s=0x7ffed5fe08d0) at worker.c:2308\n#14 0x00000000008ec424 in LogicalRepApplyLoop (last_received=24192624)\nat worker.c:2564\n----------\nProcess 11393\n(gdb) bt\n#0 0x00007fa197917f90 in __nanosleep_nocancel () from /lib64/libc.so.6\n#1 0x00007fa197917e44 in sleep () from /lib64/libc.so.6\n#2 0x0000000000950f84 in DeadLockReport () at deadlock.c:1151\n#3 0x0000000000955013 in WaitOnLock (locallock=0x2fd05d0,\nowner=0x2fd9a48) at lock.c:1873\n#4 0x0000000000953c14 in LockAcquireExtended (locktag=0x7ffed5fe01d0,\nlockmode=5, sessionLock=false, dontWait=false, reportMemoryError=true,\nlocallockp=0x0) at lock.c:1100\n#5 0x00000000009531bc in LockAcquire (locktag=0x7ffed5fe01d0,\nlockmode=5, sessionLock=false, dontWait=false) at lock.c:751\n#6 0x0000000000951d86 in XactLockTableWait (xid=597,\nrel=0x7fa1986e9e08, ctid=0x7ffed5fe0284, oper=XLTW_Update) at\nlmgr.c:674\n#7 0x00000000004f84d8 in heap_update (relation=0x7fa1986e9e08,\notid=0x3067dc4, newtup=0x3067dc0, cid=0, crosscheck=0x0, wait=true,\ntmfd=0x7ffed5fe03b0, lockmode=0x7ffed5fe03ac) at heapam.c:3549\n#8 0x00000000004fa5dd in simple_heap_update (relation=0x7fa1986e9e08,\notid=0x3067dc4, tup=0x3067dc0) at heapam.c:4222\n#9 0x00000000005c9932 in CatalogTupleUpdate (heapRel=0x7fa1986e9e08,\notid=0x3067dc4, tup=0x3067dc0) at indexing.c:312\n#10 0x0000000000af5774 in RelationSetNewRelfilenode\n(relation=0x7fa1986fdc80, persistence=112 'p') at relcache.c:3707\n#11 0x00000000006d2b4d in ExecuteTruncateGuts\n(explicit_rels=0x30688b8, relids=0x3068910, relids_extra=0x3068968,\nrelids_logged=0x30689c0, behavior=DROP_RESTRICT, restart_seqs=false)\nat tablecmds.c:2012\n#12 0x00000000008ebc50 in 
apply_handle_truncate (s=0x7ffed5fe08d0) at\nworker.c:2252\n#13 0x00000000008ebe94 in apply_dispatch (s=0x7ffed5fe08d0) at worker.c:2308\n----------\n\nThe essence of the trouble seems to be that the apply_handle_truncate\nfunction never anticipated it may end up truncating the same table\nfrom 2 separate workers (subscriptions) like this test case is doing.\nProbably this is quite an old problem because the\napply_handle_truncate code has not changed much for a long time. The\ncode of apply_handle_truncate function (worker.c) has a very similar\npattern to the ExecuteTruncate function (tablecmds.c) but the\nExecuteTruncate is using a more powerful AccessExclusiveLock than the\napply_handle_truncate was using.\n\n~~\n\nPSA a patch to make the apply_handle_truncate use AccessExclusiveLock\nsame as the ExecuteTruncate function does.\n\nPSA a patch adding a test for this scenario.\n\n--------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Mon, 17 May 2021 16:59:53 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "\"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> While doing logical replication testing we encountered a problem which\n> causes a deadlock error to be logged when replicating a TRUNCATE\n> simultaneously to 2 Subscriptions.\n> e.g.\n> ----------\n> 2021-05-12 11:30:19.457 AEST [11393] ERROR: deadlock detected\n> 2021-05-12 11:30:19.457 AEST [11393] DETAIL: Process 11393 waits for\n> ShareLock on transaction 597; blocked by process 11582.\n> Process 11582 waits for ShareLock on relation 16384 of database 14896;\n> blocked by process 11393.\n> ----------\n>\n> At this time, both the subscriber (apply worker) processes are\n> executing inside the ExecuteTruncateGuts function simultaneously and\n> they become co-dependent.\n>\n> e.g.\n> ----------\n> Process 11582\n> (gdb) bt\n> #0 0x00007fa1979515e3 in __epoll_wait_nocancel () from /lib64/libc.so.6\n> #1 0x000000000093e5d0 in WaitEventSetWaitBlock (set=0x2facac8,\n> cur_timeout=-1, occurred_events=0x7ffed5fdff00, nevents=1) at\n> latch.c:1450\n> #2 0x000000000093e468 in WaitEventSetWait (set=0x2facac8, timeout=-1,\n> occurred_events=0x7ffed5fdff00, nevents=1, wait_event_info=50331648)\n> at latch.c:1396\n> #3 0x000000000093d8cd in WaitLatch (latch=0x7fa191042654,\n> wakeEvents=33, timeout=0, wait_event_info=50331648) at latch.c:473\n> #4 0x00000000009660f0 in ProcSleep (locallock=0x2fd06d8,\n> lockMethodTable=0xcd90a0 <default_lockmethod>) at proc.c:1361\n..\n> ----------\n> Process 11393\n> (gdb) bt\n> #0 0x00007fa197917f90 in __nanosleep_nocancel () from /lib64/libc.so.6\n> #1 0x00007fa197917e44 in sleep () from /lib64/libc.so.6\n> #2 0x0000000000950f84 in DeadLockReport () at deadlock.c:1151\n> #3 0x0000000000955013 in WaitOnLock (locallock=0x2fd05d0,\n> owner=0x2fd9a48) at lock.c:1873\n>\n..\n> ----------\n>\n> The essence of the trouble seems to be that the apply_handle_truncate\n> function never anticipated it may end up truncating the same table\n> from 2 
separate workers (subscriptions) like this test case is doing.\n> Probably this is quite an old problem because the\n> apply_handle_truncate code has not changed much for a long time.\n>\n\nYeah, have you checked it in the back branches?\n\nI am also able to reproduce and have analyzed the cause of the above\nerror. In the above, Process 11393 waits while updating pg_class tuple\nvia RelationSetNewRelfilenode() which is already updated by process\n11582 (with transaction id 597, which is not yet committed). Now,\nprocess 11582 waits for acquiring ShareLock on relation 16384 which is\nacquired in RowExclusiveMode by process 11393 in function\napply_handle_truncate. So, both the processes started waiting on each\nother which causes a deadlock.\n\n>\n> PSA a patch adding a test for this scenario.\n>\n\n+\n+$node_publisher->safe_psql('postgres',\n+ \"ALTER SYSTEM SET synchronous_standby_names TO 'any 2(sub5_1, sub5_2)'\");\n+$node_publisher->safe_psql('postgres', \"SELECT pg_reload_conf()\");\n\nDo you really need these steps to reproduce the problem? IIUC, this\nhas nothing to do with synchronous replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 May 2021 14:17:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n\n> The essence of the trouble seems to be that the apply_handle_truncate\n> function never anticipated it may end up truncating the same table\n> from 2 separate workers (subscriptions) like this test case is doing.\n> Probably this is quite an old problem because the\n> apply_handle_truncate code has not changed much for a long time. The\n> code of apply_handle_truncate function (worker.c) has a very similar\n> pattern to the ExecuteTruncate function (tablecmds.c) but the\n> ExecuteTruncate is using a more powerful AcccessExclusiveLock than the\n> apply_handle_truncate was using.\n\nRight, that's a problem.\n\n>\n> PSA a patch to make the apply_handle_truncate use AccessExclusiveLock\n> same as the ExecuteTruncate function does.\n\nI think the fix makes sense to me.\n\n> PSA a patch adding a test for this scenario.\n\nI am not sure this test case is exactly targeting the problematic\nbehavior because that will depend upon the order of execution of the\napply workers, right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 15:04:40 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Monday, May 17, 2021 5:47 PM, Amit Kapila <amit.kapila16@gmail.com> wrote\r\n> +$node_publisher->safe_psql('postgres',\r\n> + \"ALTER SYSTEM SET synchronous_standby_names TO 'any 2(sub5_1,\r\n> sub5_2)'\");\r\n> +$node_publisher->safe_psql('postgres', \"SELECT pg_reload_conf()\");\r\n> \r\n> Do you really need these steps to reproduce the problem? IIUC, this\r\n> has nothing to do with synchronous replication.\r\n\r\nAgreed. \r\nI tested in asynchronous mode, and could reproduce this problem, too.\r\n\r\nThe attached patch removed the steps for setting synchronous replication.\r\nAnd the test passed after applying Peter's patch.\r\nPlease take it for your reference.\r\n\r\nRegards\r\nTang",
"msg_date": "Mon, 17 May 2021 09:36:33 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 3:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> > PSA a patch adding a test for this scenario.\n>\n> I am not sure this test case is exactly targeting the problematic\n> behavior because that will depends upon the order of execution of the\n> apply workers right?\n>\n\nYeah, so we can't guarantee that this test will always reproduce the\nproblem but OTOH, I have tried two times and it reproduced both times.\nI guess we don't have a similar test where Truncate will replicate to\ntwo subscriptions, otherwise, we would have caught such a problem.\nHaving said that, I am fine with leaving this test if others feel so\non the grounds that it won't always lead to the problem reported.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 May 2021 15:43:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 3:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > > PSA a patch adding a test for this scenario.\n> >\n> > I am not sure this test case is exactly targeting the problematic\n> > behavior because that will depends upon the order of execution of the\n> > apply workers right?\n> >\n>\n> Yeah, so we can't guarantee that this test will always reproduce the\n> problem but OTOH, I have tried two times and it reproduced both times.\n> I guess we don't have a similar test where Truncate will replicate to\n> two subscriptions, otherwise, we would have caught such a problem.\n> Having said that, I am fine with leaving this test if others feel so\n> on the grounds that it won't always lead to the problem reported.\n\nAlthough it is not guaranteed to reproduce the scenario every time, it\nis testing a new scenario, so +1 for the test.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 16:16:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 3:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > > PSA a patch adding a test for this scenario.\n> >\n> > I am not sure this test case is exactly targeting the problematic\n> > behavior because that will depends upon the order of execution of the\n> > apply workers right?\n> >\n>\n> Yeah, so we can't guarantee that this test will always reproduce the\n> problem but OTOH, I have tried two times and it reproduced both times.\n> I guess we don't have a similar test where Truncate will replicate to\n> two subscriptions, otherwise, we would have caught such a problem.\n> Having said that, I am fine with leaving this test if others feel so\n> on the grounds that it won't always lead to the problem reported.\n>\n\nIf there is any concern that the problem won't always happen then I\nthink we should just increase the number of subscriptions.\n\nHaving more simultaneous subscriptions (e.g. I have tried 4) will\nmake it much more likely for the test to encounter the deadlock, and\nit probably would also be quite a useful worker stress test in its\nown right.\n\n~~\n\nAlso, should this test be in the 010_truncate.pl, or does it belong in\nthe 100_bugs.pl? (I don't know what the rules are for when a test\ngets put into 100_bugs.pl)\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 18 May 2021 10:49:12 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Mon, May 17, 2021 at 6:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n[...]\n> > The essence of the trouble seems to be that the apply_handle_truncate\n> > function never anticipated it may end up truncating the same table\n> > from 2 separate workers (subscriptions) like this test case is doing.\n> > Probably this is quite an old problem because the\n> > apply_handle_truncate code has not changed much for a long time.\n> >\n>\n> Yeah, have you checked it in the back branches?\n>\n\nYes, the apply_handle_truncate function was introduced in April/2018 [1].\n\nREL_11_0 was tagged in Oct/2018.\n\nThe \"ERROR: deadlock detected\" log is reproducible in PG 11.0.\n\n----------\n[1] https://github.com/postgres/postgres/commit/039eb6e92f20499ac36cc74f8a5cef7430b706f6\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Tue, 18 May 2021 11:22:05 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Tue, May 18, 2021 at 6:19 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 8:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 17, 2021 at 3:05 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, May 17, 2021 at 12:30 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > > PSA a patch adding a test for this scenario.\n> > >\n> > > I am not sure this test case is exactly targeting the problematic\n> > > behavior because that will depends upon the order of execution of the\n> > > apply workers right?\n> > >\n> >\n> > Yeah, so we can't guarantee that this test will always reproduce the\n> > problem but OTOH, I have tried two times and it reproduced both times.\n> > I guess we don't have a similar test where Truncate will replicate to\n> > two subscriptions, otherwise, we would have caught such a problem.\n> > Having said that, I am fine with leaving this test if others feel so\n> > on the grounds that it won't always lead to the problem reported.\n> >\n>\n> If there is any concern that the problem won't always happen then I\n> think we should just increase the numbers of subscriptions.\n>\n> Having more simultaneous subscriptions (e.g. I have tried 4). will\n> make it much more likely for the test to encounter the deadlock, and\n> it probably would also be quite a useful worker stress test in it's\n> own right.\n>\n\nI don't think we need to go that far.\n\n> ~~\n>\n> Also, should this test be in the 010_truncate,pl,\n>\n\n+1 for keeping it in 010_truncate.pl but remove the synchronous part\nof it from the testcase and comments as that is not required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 May 2021 09:39:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Tue, May 18, 2021 at 6:52 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> >\n> > Yeah, have you checked it in the back branches?\n> >\n>\n> Yes, the apply_handle_truncate function was introduced in April/2018 [1].\n>\n> REL_11_0 was tagged in Oct/2018.\n>\n> The \"ERROR: deadlock detected\" log is reproducible in PG 11.0.\n>\n\nOkay, I have prepared the patches for all branches (11...HEAD). Each\nversion needs minor changes in the test, the code doesn't need much\nchange. Some notable changes in the tests:\n1. I have removed the conf change for max_logical_replication_workers\non the publisher node. We only need this for the subscriber node.\n2. After creating the new subscriptions wait for initial\nsynchronization as we do for other tests.\n3. synchronous_standby_names need to be reset for the previous test.\nThis is only required for HEAD.\n4. In PG-11, we need to specify the application_name in the connection\nstring, otherwise, it took the testcase file name as application_name.\nThis is the same as other tests are doing in PG11.\n\nCan you please once verify the attached patches?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 20 May 2021 11:34:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Thursday, May 20, 2021 3:05 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> Okay, I have prepared the patches for all branches (11...HEAD). Each\r\n> version needs minor changes in the test, the code doesn't need much\r\n> change. Some notable changes in the tests:\r\n> 1. I have removed the conf change for max_logical_replication_workers\r\n> on the publisher node. We only need this for the subscriber node.\r\n> 2. After creating the new subscriptions wait for initial\r\n> synchronization as we do for other tests.\r\n> 3. synchronous_standby_names need to be reset for the previous test.\r\n> This is only required for HEAD.\r\n> 4. In PG-11, we need to specify the application_name in the connection\r\n> string, otherwise, it took the testcase file name as application_name.\r\n> This is the same as other tests are doing in PG11.\r\n> \r\n> Can you please once verify the attached patches?\r\n\r\nI have tested your patches for all branches (11...HEAD). All of them passed. B.T.W, I also confirmed that the bug exists in these branches without your fix.\r\n\r\nThe changes in tests LGTM. \r\nBut I saw whitespace warnings when applying the patches for PG11 and PG12, please take a look at this.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Thu, 20 May 2021 07:16:13 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: \"ERROR: deadlock detected\" when replicating TRUNCATE"
},
{
"msg_contents": "On Thu, May 20, 2021 at 12:46 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Thursday, May 20, 2021 3:05 PM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Okay, I have prepared the patches for all branches (11...HEAD). Each\n> > version needs minor changes in the test, the code doesn't need much\n> > change. Some notable changes in the tests:\n> > 1. I have removed the conf change for max_logical_replication_workers\n> > on the publisher node. We only need this for the subscriber node.\n> > 2. After creating the new subscriptions wait for initial\n> > synchronization as we do for other tests.\n> > 3. synchronous_standby_names need to be reset for the previous test.\n> > This is only required for HEAD.\n> > 4. In PG-11, we need to specify the application_name in the connection\n> > string, otherwise, it took the testcase file name as application_name.\n> > This is the same as other tests are doing in PG11.\n> >\n> > Can you please once verify the attached patches?\n>\n> I have tested your patches for all branches(11...HEAD). All of them passed. B.T.W, I also confirmed that the bug exists in these branches without your fix.\n>\n> The changes in tests LGTM.\n> But I saw whitespace warnings when applied the patches for PG11 and PG12, please take a look at this.\n>\n\nThanks, I have pushed after fixing the whitespace warning.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 May 2021 15:51:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \"ERROR: deadlock detected\" when replicating TRUNCATE"
}
] |
[
{
"msg_contents": "Since PostgreSQL 9.3, in commit a266f7dd93b, we've added the text:\n\n+ The obsolete \"winflex\" binaries distributed on the PostgreSQL FTP site\n+ and referenced in older documentation will fail with \"flex: fatal\n+ internal error, exec failed\" on 64-bit Windows hosts. Use flex from\n+ msys instead.\n\nAt this point, I suggest we simply stop distributing winflex on our\ndownload site, and just remove this note from the documentation. (This\nis just a note, the general documentation still says get flex from\nmsys, separately).\n\nSurely a binary that doesn't work on a 64-bit system is not of help to\nanybody these days. And \"older documentation\" now refers to 9.2 which\nwas EOL in 2017.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 May 2021 10:17:56 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Winflex docs and distro"
},
{
"msg_contents": "> On 17 May 2021, at 10:17, Magnus Hagander <magnus@hagander.net> wrote:\n\n> Since PostgreSQL 9.3, in commit a266f7dd93b, we've added the text:\n> \n> + The obsolete \"winflex\" binaries distributed on the PostgreSQL FTP site\n\nWhich was slightly updated in 0a9ae44288d.\n\n> At this point. I suggest we simply stop distributing winflex on our\n> download site, and just remove this note from the documentation.\n\nSounds reasonable, are there (easily retrieved) logs/tracking for when it was\naccessed by anyone last?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 17 May 2021 11:11:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Winflex docs and distro"
},
{
"msg_contents": "On Mon, May 17, 2021 at 11:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 May 2021, at 10:17, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> > Since PostgreSQL 9.3, in commit a266f7dd93b, we've added the text:\n> >\n> > + The obsolete \"winflex\" binaries distributed on the PostgreSQL FTP site\n>\n> Which was slightly updated in 0a9ae44288d.\n\nIt's been touched a couple of times, but not in any material fashion.\n\n\n> > At this point. I suggest we simply stop distributing winflex on our\n> > download site, and just remove this note from the documentation.\n>\n> Sounds reasonable, are there (easily retrieved) logs/tracking for when it was\n> accessed by anyone last?\n\nNot really. We don't keep logs going very far back. I can see it being\naccessed a handful of time in the past 14 days. But AFAICT from the\nlimited information we have it's all bots.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 May 2021 11:51:05 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Winflex docs and distro"
},
{
"msg_contents": "\nOn 5/17/21 5:51 AM, Magnus Hagander wrote:\n> On Mon, May 17, 2021 at 11:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>> On 17 May 2021, at 10:17, Magnus Hagander <magnus@hagander.net> wrote:\n>>> Since PostgreSQL 9.3, in commit a266f7dd93b, we've added the text:\n>>>\n>>> + The obsolete \"winflex\" binaries distributed on the PostgreSQL FTP site\n>> Which was slightly updated in 0a9ae44288d.\n> It's been touched a couple of times, but not in any material fashion.\n>\n>\n>>> At this point. I suggest we simply stop distributing winflex on our\n>>> download site, and just remove this note from the documentation.\n>> Sounds reasonable, are there (easily retrieved) logs/tracking for when it was\n>> accessed by anyone last?\n> Not really. We don't keep logs going very far back. I can see it being\n> accessed a handful of time in the past 14 days. But AFAICT from the\n> limited information we have it's all bots.\n>\n\n\n\n+1 for removing the binary and the reference.\n\nThese days my setup for MSVC doesn't use msys: it's basically this PS1\nfragment (which assumes chocolatey is installed):\n\n $utils = 'StrawberryPerl', 'git', 'winflexbison', 'diffutils', 'vim'\n choco install -y --no-progress --limit-output @utils\n $cbin = \"c:\\ProgramData\\chocolatey\\bin\"\n Rename-Item -Path $cbin\\win_bison.exe -NewName bison.exe\n Rename-Item -Path $cbin\\win_flex.exe -NewName flex.exe\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 17 May 2021 08:55:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Winflex docs and distro"
},
{
"msg_contents": "On Mon, May 17, 2021 at 2:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 5/17/21 5:51 AM, Magnus Hagander wrote:\n> > On Mon, May 17, 2021 at 11:11 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>> On 17 May 2021, at 10:17, Magnus Hagander <magnus@hagander.net> wrote:\n> >>> Since PostgreSQL 9.3, in commit a266f7dd93b, we've added the text:\n> >>>\n> >>> + The obsolete \"winflex\" binaries distributed on the PostgreSQL FTP site\n> >> Which was slightly updated in 0a9ae44288d.\n> > It's been touched a couple of times, but not in any material fashion.\n> >\n> >\n> >>> At this point. I suggest we simply stop distributing winflex on our\n> >>> download site, and just remove this note from the documentation.\n> >> Sounds reasonable, are there (easily retrieved) logs/tracking for when it was\n> >> accessed by anyone last?\n> > Not really. We don't keep logs going very far back. I can see it being\n> > accessed a handful of time in the past 14 days. But AFAICT from the\n> > limited information we have it's all bots.\n> >\n>\n>\n>\n> +1 for removing the binary and the reference.\n\nI think we've collected enough +1's, so I'll go ahead and do it.\n\n\n> These days my setup for MSVC doesn't use msys: it's basically this PS1\n> fragment (which assumes chocolatey is installed):\n>\n> $utils = 'StrawberryPerl', 'git', 'winflexbison', 'diffutils', 'vim'\n> choco install -y --no-progress --limit-output @utils\n> $cbin = \"c:\\ProgramData\\chocolatey\\bin\"\n> Rename-Item -Path $cbin\\win_bison.exe -NewName bison.exe\n> Rename-Item -Path $cbin\\win_flex.exe -NewName flex.exe\n\nPerhaps it is, as a separate thing, worth including that in the docs\nsomewhere? Or maybe as a script in the source tree that is referenced\nfrom the docs?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 17 May 2021 21:56:23 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Winflex docs and distro"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like the values such as '123.456', '789.123', '100$%$#$#',\n'9,223,372,' are accepted and treated as valid integers for\npostgres_fdw options batch_size and fetch_size. Whereas this is not\nthe case with fdw_startup_cost and fdw_tuple_cost options for which an\nerror is thrown. Attaching a patch to fix that.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 17 May 2021 15:28:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw - should we tighten up batch_size, fetch_size options\n against non-numeric values?"
},
{
"msg_contents": "On Mon, May 17, 2021 at 3:29 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> '9,223,372,' are accepted and treated as valid integers for\n> postgres_fdw options batch_size and fetch_size. Whereas this is not\n> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> error is thrown. Attaching a patch to fix that.\n\nThis looks like a definite improvement. I wonder if we should modify\ndefGetInt variants to convert strings into integers, so that there's a\nconsistent error message for such errors. We could define defGetUInt\nso that we could mention non-negative in the error message. Whether or\nnot we do that, this looks good.\n\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 17 May 2021 18:17:20 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Mon, May 17, 2021 at 6:17 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Mon, May 17, 2021 at 3:29 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> > '9,223,372,' are accepted and treated as valid integers for\n> > postgres_fdw options batch_size and fetch_size. Whereas this is not\n> > the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> > error is thrown. Attaching a patch to fix that.\n>\n> This looks like a definite improvement. I wonder if we should modify\n> defGetInt variants to convert strings into integers, so that there's\n> consistent error message for such errors. We could define defGetUInt\n> so that we could mention non-negative in the error message.\n\nIf we do that, then the options that are only accepting unquoted\nintegers (i.e. 123, 456 etc.) and throwing errors for the quoted\nintegers ('123', '456' etc.) will then start to accept the quoted\nintegers. Other hackers might not agree to this change.\n\nAnother way is to have new API like\ndefGetInt32_2/defGetInt64_2/defGetNumeric_2 (or some other better\nnames) which basically accept both quoted and unquoted strings, see\n[1] for a rough sketch of the function. These APIs can be useful if an\noption needs to be parsed in both quoted and unquoted form. Or we can\nalso have these functions as [2] for only parsing quoted options. I\nprefer [2] so that these APIs can be used without any code duplication.\nThoughts?\n\n> Whether or not we do that, this looks good.\n\nI'm also okay if we can just fix the fetch_size and batch_size options\nfor now as it's done in the patch attached with the first mail. 
Note\nthat I have not added any test case as this change is a trivial thing.\nIf required, I can add one.\n\n[1] -\nint32\ndefGetInt32_2(DefElem *def)\n{\n if (def->arg == NULL)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s requires an integer value\",\n def->defname)));\n\n switch (nodeTag(def->arg))\n {\n case T_Integer:\n return (int32) intVal(def->arg);\n default:\n {\n char *sval;\n int32 val;\n\n sval = defGetString(def);\n val = strtol(sval, &endp, 10);\n\n if (*endp == '\\0')\n return val;\n }\n }\n\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s requires an integer value\",\n def->defname)));\n\n return 0;\n}\n\n[2] -\nint32\ndefGetInt32_2(DefElem *def)\n{\n char *sval;\n int32 val;\n\n sval = defGetString(def);\n val = strtol(sval, &endp, 10);\n\n if (*endp)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s requires an integer value\",\n def->defname)));\n return val;\n\n}\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 19:50:07 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Mon, May 17, 2021 at 7:50 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> If we do that, then the options that are only accepting unquoted\n> integers (i.e. 123, 456 etc.) and throwing errors for the quoted\n> integers ('123', '456' etc.) will then start to accept the quoted\n> integers. Other hackers might not agree to this change.\n\nI guess the options which weren't accepting quoted strings, will catch\nthese errors at the time of parsing itself. Even if that's not true, I\nwould see that as an improvement. Anyway, I won't stretch this\nfurther.\n\n>\n> Another way is to have new API like\n> defGetInt32_2/defGetInt64_2/defGetNumeric_2 (or some other better\n> names) which basically accept both quoted and unquoted strings, see\n> [1] for a rough sketch of the function. These API can be useful if an\n> option needs to be parsed in both quoted and unquoted form. Or we can\n> also have these functions as [2] for only parsing quoted options. I\n> prefer [2] so that these API can be used without any code duplication.\n> Thoughts?\n\nI am not sure whether we want to maintain two copies. In that case\nyour first patch is fine.\n\n> Note\n> that I have not added any test case as this change is a trivial thing.\n> If required, I can add one.\n\nThat will help to make sure that we preserve the behaviour even\nthrough code changes.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 18 May 2021 18:52:25 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/05/17 18:58, Bharath Rupireddy wrote:\n> Hi,\n> \n> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> '9,223,372,' are accepted and treated as valid integers for\n> postgres_fdw options batch_size and fetch_size. Whereas this is not\n> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> error is thrown. Attaching a patch to fix that.\n\nThis looks an improvement. But one issue is that the restore of\ndump file taken by pg_dump from v13 may fail for v14 with this patch\nif it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\nOTOH, since batch_size was added in v14, it has no such issue.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 18 May 2021 22:45:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2021/05/17 18:58, Bharath Rupireddy wrote:\n>> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n>> '9,223,372,' are accepted and treated as valid integers for\n>> postgres_fdw options batch_size and fetch_size. Whereas this is not\n>> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n>> error is thrown. Attaching a patch to fix that.\n\n> This looks an improvement. But one issue is that the restore of\n> dump file taken by pg_dump from v13 may fail for v14 with this patch\n> if it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\n> OTOH, since batch_size was added in v14, it has no such issue.\n\nMaybe better to just silently round to integer? I think that's\nwhat we generally do with integer GUCs these days, eg\n\nregression=# set work_mem = 102.9;\nSET\nregression=# show work_mem;\n work_mem \n----------\n 103kB\n(1 row)\n\nI agree with throwing an error for non-numeric junk though.\nAllowing that on the grounds of backwards compatibility\nseems like too much of a stretch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 May 2021 09:49:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size,\n fetch_size options against non-numeric values?"
},
{
"msg_contents": "On Tue, May 18, 2021 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > On 2021/05/17 18:58, Bharath Rupireddy wrote:\n> >> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> >> '9,223,372,' are accepted and treated as valid integers for\n> >> postgres_fdw options batch_size and fetch_size. Whereas this is not\n> >> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> >> error is thrown. Attaching a patch to fix that.\n>\n> > This looks an improvement. But one issue is that the restore of\n> > dump file taken by pg_dump from v13 may fail for v14 with this patch\n> > if it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\n> > OTOH, since batch_size was added in v14, it has no such issue.\n>\n> Maybe better to just silently round to integer? I think that's\n> what we generally do with integer GUCs these days, eg\n>\n> regression=# set work_mem = 102.9;\n> SET\n> regression=# show work_mem;\n> work_mem\n> ----------\n> 103kB\n> (1 row)\n\nI think we can use parse_int to parse the fetch_size and batch_size as\nintegers, which also rounds off decimals to integers and returns false\nfor non-numeric junk. But, it accepts both quoted and unquoted\nintegers, something like batch_size 100 and batch_size '100', which I\nfeel is okay because the reloptions also accept both.\n\nWhile on this, we can also use parse_real for fdw_startup_cost and\nfdw_tuple_cost options but with that they will accept both quoted and\nunquoted real values.\n\nThoughts?\n\n> I agree with throwing an error for non-numeric junk though.\n> Allowing that on the grounds of backwards compatibility\n> seems like too much of a stretch.\n\n+1.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 May 2021 19:46:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Tue, May 18, 2021 at 7:46 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 18, 2021 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > > On 2021/05/17 18:58, Bharath Rupireddy wrote:\n> > >> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> > >> '9,223,372,' are accepted and treated as valid integers for\n> > >> postgres_fdw options batch_size and fetch_size. Whereas this is not\n> > >> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> > >> error is thrown. Attaching a patch to fix that.\n> >\n> > > This looks an improvement. But one issue is that the restore of\n> > > dump file taken by pg_dump from v13 may fail for v14 with this patch\n> > > if it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\n> > > OTOH, since batch_size was added in v14, it has no such issue.\n> >\n> > Maybe better to just silently round to integer? I think that's\n> > what we generally do with integer GUCs these days, eg\n> >\n> > regression=# set work_mem = 102.9;\n> > SET\n> > regression=# show work_mem;\n> > work_mem\n> > ----------\n> > 103kB\n> > (1 row)\n>\n> I think we can use parse_int to parse the fetch_size and batch_size as\n> integers, which also rounds off decimals to integers and returns false\n> for non-numeric junk. But, it accepts both quoted and unquoted\n> integers, something like batch_size 100 and batch_size '100', which I\n> feel is okay because the reloptions also accept both.\n>\n> While on this, we can also use parse_real for fdw_startup_cost and\n> fdw_tuple_cost options but with that they will accept both quoted and\n> unquoted real values.\n\nI'm sorry about saying that the unquoted integers are accepted with\nbatch_size, fetch_size, but actually the parser is throwing the syntax\nerror.\n\nSo, we can safely use parse_int for batch_size and fetch_size,\nparse_real for fdw_tuple_cost and fdw_startup_cost without changing\nany behaviour.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 May 2021 20:11:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "At Tue, 18 May 2021 19:46:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, May 18, 2021 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> > > On 2021/05/17 18:58, Bharath Rupireddy wrote:\n> > >> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n> > >> '9,223,372,' are accepted and treated as valid integers for\n> > >> postgres_fdw options batch_size and fetch_size. Whereas this is not\n> > >> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n> > >> error is thrown. Attaching a patch to fix that.\n> >\n> > > This looks an improvement. But one issue is that the restore of\n> > > dump file taken by pg_dump from v13 may fail for v14 with this patch\n> > > if it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\n> > > OTOH, since batch_size was added in v14, it has no such issue.\n> >\n> > Maybe better to just silently round to integer? I think that's\n> > what we generally do with integer GUCs these days, eg\n> >\n> > regression=# set work_mem = 102.9;\n> > SET\n> > regression=# show work_mem;\n> > work_mem\n> > ----------\n> > 103kB\n> > (1 row)\n> \n> I think we can use parse_int to parse the fetch_size and batch_size as\n> integers, which also rounds off decimals to integers and returns false\n> for non-numeric junk. But, it accepts both quoted and unquoted\n> integers, something like batch_size 100 and batch_size '100', which I\n> feel is okay because the reloptions also accept both.\n> \n> While on this, we can also use parse_real for fdw_startup_cost and\n> fdw_tuple_cost options but with that they will accept both quoted and\n> unquoted real values.\n> \n> Thoughts?\n\nThey are more or less a kind of reloptions. So I think it is\nreasonable to treat the option same way with RELOPT_TYPE_INT. That\nis, it would be better to use our standard functions rather than\nrandom codes using bare libc functions for input from users. The same\ncan be said for parameters with real numbers, regardless of the\n\"quoted\" discussion.\n\n> > I agree with throwing an error for non-numeric junk though.\n> > Allowing that on the grounds of backwards compatibility\n> > seems like too much of a stretch.\n> \n> +1.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 19 May 2021 11:36:57 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/05/19 11:36, Kyotaro Horiguchi wrote:\n> At Tue, 18 May 2021 19:46:39 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n>> On Tue, May 18, 2021 at 7:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>> On 2021/05/17 18:58, Bharath Rupireddy wrote:\n>>>>> It looks like the values such as '123.456', '789.123' '100$%$#$#',\n>>>>> '9,223,372,' are accepted and treated as valid integers for\n>>>>> postgres_fdw options batch_size and fetch_size. Whereas this is not\n>>>>> the case with fdw_startup_cost and fdw_tuple_cost options for which an\n>>>>> error is thrown. Attaching a patch to fix that.\n>>>\n>>>> This looks an improvement. But one issue is that the restore of\n>>>> dump file taken by pg_dump from v13 may fail for v14 with this patch\n>>>> if it contains invalid setting of fetch_size, e.g., \"fetch_size '123.456'\".\n>>>> OTOH, since batch_size was added in v14, it has no such issue.\n>>>\n>>> Maybe better to just silently round to integer? I think that's\n>>> what we generally do with integer GUCs these days, eg\n>>>\n>>> regression=# set work_mem = 102.9;\n>>> SET\n>>> regression=# show work_mem;\n>>> work_mem\n>>> ----------\n>>> 103kB\n>>> (1 row)\n>>\n>> I think we can use parse_int to parse the fetch_size and batch_size as\n>> integers, which also rounds off decimals to integers and returns false\n>> for non-numeric junk. But, it accepts both quoted and unquoted\n>> integers, something like batch_size 100 and batch_size '100', which I\n>> feel is okay because the reloptions also accept both.\n>>\n>> While on this, we can also use parse_real for fdw_startup_cost and\n>> fdw_tuple_cost options but with that they will accept both quoted and\n>> unquoted real values.\n>>\n>> Thoughts?\n> \n> They are more or less a kind of reloptions. So I think it is\n> reasonable to treat the option same way with RELOPT_TYPE_INT. That\n> is, it would be better to use our standard functions rather than\n> random codes using bare libc functions for input from users. The same\n> can be said for parameters with real numbers, regardless of the\n> \"quoted\" discussion.\n\nSounds reasonable. Since parse_int() is used to parse RELOPT_TYPE_INT value\nin reloptions.c, your idea seems to be almost the same as Bharath's one.\nThat is, use parse_int() and parse_real() to parse integer and real options\nvalues, respectively.\n\n> \n>>> I agree with throwing an error for non-numeric junk though.\n>>> Allowing that on the grounds of backwards compatibility\n>>> seems like too much of a stretch.\n>>\n>> +1.\n> \n> +1.\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 May 2021 11:58:24 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Wed, May 19, 2021 at 8:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>> I agree with throwing an error for non-numeric junk though.\n> >>> Allowing that on the grounds of backwards compatibility\n> >>> seems like too much of a stretch.\n> >>\n> >> +1.\n> >\n> > +1.\n>\n> +1\n\nThanks all for your inputs. PSA which uses parse_int for\nbatch_size/fech_size and parse_real for fdw_startup_cost and\nfdw_tuple_cost.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 May 2021 11:04:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/05/19 14:34, Bharath Rupireddy wrote:\n> On Wed, May 19, 2021 at 8:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>>> I agree with throwing an error for non-numeric junk though.\n>>>>> Allowing that on the grounds of backwards compatibility\n>>>>> seems like too much of a stretch.\n>>>>\n>>>> +1.\n>>>\n>>> +1.\n>>\n>> +1\n> \n> Thanks all for your inputs. PSA which uses parse_int for\n> batch_size/fech_size and parse_real for fdw_startup_cost and\n> fdw_tuple_cost.\n\nThanks for updating the patch! It basically looks good to me.\n\n-\t\t\tval = strtod(defGetString(def), &endp);\n-\t\t\tif (*endp || val < 0)\n+\t\t\tis_parsed = parse_real(defGetString(def), &val, 0, NULL);\n+\t\t\tif (!is_parsed || val < 0)\n \t\t\t\tereport(ERROR,\n \t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n \t\t\t\t\t\t errmsg(\"%s requires a non-negative numeric value\",\n\nIsn't it better to check \"!is_parsed\" and \"val < 0\" separately like\nreloptions.c does? That is, we should throw different error messages\nfor them?\n\nERRCODE_SYNTAX_ERROR seems strange for this type of error?\nERRCODE_INVALID_PARAMETER_VALUE is better and more proper?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 May 2021 20:32:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Wed, May 19, 2021 at 5:02 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2021/05/19 14:34, Bharath Rupireddy wrote:\n> > On Wed, May 19, 2021 at 8:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >>>>> I agree with throwing an error for non-numeric junk though.\n> >>>>> Allowing that on the grounds of backwards compatibility\n> >>>>> seems like too much of a stretch.\n> >>>>\n> >>>> +1.\n> >>>\n> >>> +1.\n> >>\n> >> +1\n> >\n> > Thanks all for your inputs. PSA which uses parse_int for\n> > batch_size/fech_size and parse_real for fdw_startup_cost and\n> > fdw_tuple_cost.\n>\n> Thanks for updating the patch! It basically looks good to me.\n>\n> - val = strtod(defGetString(def), &endp);\n> - if (*endp || val < 0)\n> + is_parsed = parse_real(defGetString(def), &val, 0, NULL);\n> + if (!is_parsed || val < 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"%s requires a non-negative numeric value\",\n>\n> Isn't it better to check \"!is_parsed\" and \"val < 0\" separately like\n> reloptions.c does? That is, we should throw different error messages\n> for them?\n>\n> ERRCODE_SYNTAX_ERROR seems strange for this type of error?\n> ERRCODE_INVALID_PARAMETER_VALUE is better and more proper?\n\nThanks for the comments. I added separate messages, changed the error\ncode from ERRCODE_SYNTAX_ERROR to ERRCODE_INVALID_PARAMETER_VALUE and\nalso quoted the option name in the error message. PSA v3 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 May 2021 21:31:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/05/20 1:01, Bharath Rupireddy wrote:\n> Thanks for the comments. I added separate messages, changed the error\n> code from ERRCODE_SYNTAX_ERROR to ERRCODE_INVALID_PARAMETER_VALUE and\n> also quoted the option name in the error message. PSA v3 patch.\n\nThanks for updating the patch!\n\n+\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t\t\t errmsg(\"invalid numeric value for option \\\"%s\\\"\",\n+\t\t\t\t\t\t\t\tdef->defname)));\n\nIn reloptions.c, when parse_real() fails to parse the input, the error message\n\"invalid value for floating point option...\" is output.\nFor the sake of consistency, we should use the same error message here?\n\n-\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n-\t\t\t\t\t\t errmsg(\"%s requires a non-negative integer value\",\n+\t\t\t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+\t\t\t\t\t\t errmsg(\"invalid integer value for option \\\"%s\\\"\",\n\nIMO the error message should be \"invalid value for integer option...\" here\nbecause of the same reason I told above. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 30 Jun 2021 21:23:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 5:53 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/05/20 1:01, Bharath Rupireddy wrote:\n> > Thanks for the comments. I added separate messages, changed the error\n> > code from ERRCODE_SYNTAX_ERROR to ERRCODE_INVALID_PARAMETER_VALUE and\n> > also quoted the option name in the error message. PSA v3 patch.\n>\n> Thanks for updating the patch!\n>\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid numeric value for option \\\"%s\\\"\",\n> + def->defname)));\n>\n> In reloptions.c, when parse_real() fails to parse the input, the error message\n> \"invalid value for floating point option...\" is output.\n> For the sake of consistency, we should use the same error message here?\n\nActually, there's an existing error message errmsg(\"%s requires a\nnon-negative numeric value\" that used \"numeric value\". If we were to\nchange errmsg(\"invalid numeric value for option \\\"%s\\\"\", to\nerrmsg(\"invalid value for floating point option \\\"%s\\\"\",, then we\nmight have to change the existing message. And also, the docs use\n\"numeric value\" for fdw_startup_cost and fdw_tuple_cost. IMO, let's go\nwith errmsg(\"invalid value for numeric option \\\"%s\\\": %s\",.\n\n> - (errcode(ERRCODE_SYNTAX_ERROR),\n> - errmsg(\"%s requires a non-negative integer value\",\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid integer value for option \\\"%s\\\"\",\n>\n> IMO the error message should be \"invalid value for integer option...\" here\n> because of the same reason I told above. Thought?\n\nChanged.\n\nPSA v4.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Wed, 30 Jun 2021 20:01:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/06/30 23:31, Bharath Rupireddy wrote:\n> On Wed, Jun 30, 2021 at 5:53 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2021/05/20 1:01, Bharath Rupireddy wrote:\n>>> Thanks for the comments. I added separate messages, changed the error\n>>> code from ERRCODE_SYNTAX_ERROR to ERRCODE_INVALID_PARAMETER_VALUE and\n>>> also quoted the option name in the error message. PSA v3 patch.\n>>\n>> Thanks for updating the patch!\n>>\n>> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> + errmsg(\"invalid numeric value for option \\\"%s\\\"\",\n>> + def->defname)));\n>>\n>> In reloptions.c, when parse_real() fails to parse the input, the error message\n>> \"invalid value for floating point option...\" is output.\n>> For the sake of consistency, we should use the same error message here?\n> \n> Actually, there's an existing error message errmsg(\"%s requires a\n> non-negative numeric value\" that used \"numeric value\". If we were to\n> change errmsg(\"invalid numeric value for option \\\"%s\\\"\", to\n> errmsg(\"invalid value for floating point option \\\"%s\\\"\",, then we\n> might have to change the existing message. And also, the docs use\n> \"numeric value\" for fdw_startup_cost and fdw_tuple_cost.\n\nThe recent commit 61d599ede7 documented that the type of those options is\nfloating point. But the docs still use \"is a numeric value\" in the descriptions\nof them. Probably it should be replaced with \"is a floating point value\" there.\nIf we do this, isn't it better to use \"floating point\" even in the error message?\n\n\n> IMO, let's go\n> with errmsg(\"invalid value for numeric option \\\"%s\\\": %s\",.\n> \n>> - (errcode(ERRCODE_SYNTAX_ERROR),\n>> - errmsg(\"%s requires a non-negative integer value\",\n>> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> + errmsg(\"invalid integer value for option \\\"%s\\\"\",\n>>\n>> IMO the error message should be \"invalid value for integer option...\" here\n>> because of the same reason I told above. Thought?\n> \n> Changed.\n> \n> PSA v4.\n\nThanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Jul 2021 11:53:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 8:23 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> The recent commit 61d599ede7 documented that the type of those options is\n> floating point. But the docs still use \"is a numeric value\" in the descriptions\n> of them. Probably it should be replaced with \"is a floating point value\" there.\n> If we do this, isn't it better to use \"floating point\" even in the error message?\n\nAgreed. PSA v5 patch.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 1 Jul 2021 09:46:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/07/01 13:16, Bharath Rupireddy wrote:\n> On Thu, Jul 1, 2021 at 8:23 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> The recent commit 61d599ede7 documented that the type of those options is\n>> floating point. But the docs still use \"is a numeric value\" in the descriptions\n>> of them. Probably it should be replaced with \"is a floating point value\" there.\n>> If we do this, isn't it better to use \"floating point\" even in the error message?\n> \n> Agreed. PSA v5 patch.\n\nThanks for updating the patch! LGTM.\nBarring any objection, I will commit this patch.\n\nOne question is; should we back-patch this? This is not a bug fix,\nso I'm not sure if it's worth back-patching that to already-released versions.\nBut it may be better to do that to v14.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 1 Jul 2021 21:37:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/07/01 13:16, Bharath Rupireddy wrote:\n> > On Thu, Jul 1, 2021 at 8:23 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> The recent commit 61d599ede7 documented that the type of those options is\n> >> floating point. But the docs still use \"is a numeric value\" in the descriptions\n> >> of them. Probably it should be replaced with \"is a floating point value\" there.\n> >> If we do this, isn't it better to use \"floating point\" even in the error message?\n> >\n> > Agreed. PSA v5 patch.\n>\n> Thanks for updating the patch! LGTM.\n> Barring any objection, I will commit this patch.\n\nThanks.\n\n> One question is; should we back-patch this? This is not a bug fix,\n> so I'm not sure if it's worth back-patching that to already-released versions.\n> But it may be better to do that to v14.\n\nIMO, it's a good-to-have fix in v14. But, -1 for backpatching to v13\nand lower branches.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 18:11:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
},
{
"msg_contents": "\n\nOn 2021/07/01 21:41, Bharath Rupireddy wrote:\n> On Thu, Jul 1, 2021 at 6:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2021/07/01 13:16, Bharath Rupireddy wrote:\n>>> On Thu, Jul 1, 2021 at 8:23 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> The recent commit 61d599ede7 documented that the type of those options is\n>>>> floating point. But the docs still use \"is a numeric value\" in the descriptions\n>>>> of them. Probably it should be replaced with \"is a floating point value\" there.\n>>>> If we do this, isn't it better to use \"floating point\" even in the error message?\n>>>\n>>> Agreed. PSA v5 patch.\n>>\n>> Thanks for updating the patch! LGTM.\n>> Barring any objection, I will commit this patch.\n> \n> Thanks.\n> \n>> One question is; should we back-patch this? This is not a bug fix,\n>> so I'm not sure if it's worth back-patching that to already-released versions.\n>> But it may be better to do that to v14.\n> \n> IMO, it's a good-to-have fix in v14. But, -1 for backpatching to v13\n> and lower branches.\n\nAgreed. So I pushed the patch to master and v14. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 7 Jul 2021 11:17:34 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw - should we tighten up batch_size, fetch_size\n options against non-numeric values?"
}
]
[
{
"msg_contents": "Hi,\n\n\n\nWhen loading some data into a partitioned table for testing purpose,\n\nI found even if I specified constant value for the partition key[1], it still do\n\nthe tuple routing for each row.\n\n\n\n[1]---------------------\n\nUPDATE partitioned set part_key = 2 , …\n\nINSERT into partitioned(part_key, ...) select 1, …\n\n---------------------\n\n\n\nI saw such SQLs automatically generated by some programs,\n\nSo , personally, It’d be better to skip the tuple routing for this case.\n\n\n\nIMO, we can use the following steps to skip the tuple routing:\n\n1) collect the column that has constant value in the targetList.\n\n2) compare the constant column with the columns used in partition key.\n\n3) if all the columns used in key are constant then we cache the routed partition\n\n and do not do the tuple routing again.\n\n\n\nIn this approach, I did some simple and basic performance tests:\n\n\n\n----For plain single column partition key.(partition by range(col)/list(a)...)\n\nWhen loading 100000000 rows into the table, I can see about 5-7% performance gain\n\nfor both cross-partition UPDATE and INSERT if specified constant for the partition key.\n\n\n\n----For more complicated expression partition key(partition by range(UDF_func(col)+x)…)\n\nWhen loading 100000000 rows into the table, it will bring more performance gain.\n\nAbout > 20% performance gain\n\n\n\nBesides, I did not see noticeable performance degradation for other cases(small data set).\n\n\n\nAttaching a POC patch about this improvement.\n\nThoughts ?\n\n\n\nBest regards,\n\nhouzj",
"msg_date": "Mon, 17 May 2021 11:36:48 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Skip partition tuple routing with constant partition key"
},
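The three-step skip described in the message above can be sketched as a small self-contained check. This is an illustrative model only — the names below (`TargetEntry`, `partition_key_is_constant`, `attno`) are hypothetical stand-ins, not the actual PostgreSQL targetList or partition-key structures:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for a query's targetList entries: each target
 * column either receives a Const or some non-constant expression. */
typedef struct TargetEntry
{
    int  attno;     /* column number being assigned */
    bool is_const;  /* true when the assigned value is a Const */
} TargetEntry;

/* Steps 1 and 2 of the proposal: collect the constant-valued target
 * columns and check that every partition key column is among them. */
static bool
partition_key_is_constant(const TargetEntry *tlist, int ntargets,
                          const int *key_attnos, int nkeys)
{
    for (int k = 0; k < nkeys; k++)
    {
        bool found = false;

        for (int t = 0; t < ntargets; t++)
        {
            if (tlist[t].attno == key_attnos[k] && tlist[t].is_const)
            {
                found = true;
                break;
            }
        }
        if (!found)
            return false;   /* key column not constant: must route per row */
    }
    return true;            /* all key columns constant: cache the route */
}
```

When this returns true, step 3 applies: route the first tuple normally, cache the resulting partition, and reuse it for every remaining row of the statement.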
{
"msg_contents": "On Mon, May 17, 2021 at 8:37 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> When loading some data into a partitioned table for testing purpose,\n>\n> I found even if I specified constant value for the partition key[1], it still do\n>\n> the tuple routing for each row.\n>\n>\n> [1]---------------------\n>\n> UPDATE partitioned set part_key = 2 , …\n>\n> INSERT into partitioned(part_key, ...) select 1, …\n>\n> ---------------------\n>\n> I saw such SQLs automatically generated by some programs,\n\nHmm, does this seem common enough for the added complexity to be worthwhile?\n\nFor an example of what's previously been considered worthwhile for a\nproject like this, see what 0d5f05cde0 did. The cases it addressed\nare common enough -- a file being loaded into a (time range-)\npartitioned table using COPY FROM tends to have lines belonging to the\nsame partition consecutively placed.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 17 May 2021 22:30:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, 18 May 2021 at 01:31, Amit Langote <amitlangote09@gmail.com> wrote:\n> Hmm, does this seem common enough for the added complexity to be worthwhile?\n\nI'd also like to know if there's some genuine use case for this. For\ntesting purposes does not seem to be quite a good enough reason.\n\nA slightly different optimization that I have considered and even\nwritten patches before was to have ExecFindPartition() cache the last\nrouted to partition and have it check if the new row can go into that\none on the next call. I imagined there might be a use case for\nspeeding that up for RANGE partitioned tables since it seems fairly\nlikely that most use cases, at least for time series ranges will\nalways hit the same partition most of the time. Since RANGE requires\na binary search there might be some savings there. I imagine that\noptimisation would never be useful for HASH partitioning since it\nseems most likely that we'll be routing to a different partition each\ntime and wouldn't save much since routing to hash partitions are\ncheaper than other types. LIST partitioning I'm not so sure about. It\nseems much less likely than RANGE to hit the same partition twice in a\nrow.\n\nIIRC, the patch did something like call ExecPartitionCheck() on the\nnew tuple with the previously routed to ResultRelInfo. I think the\nlast used partition was cached somewhere like relcache (which seems a\nbit questionable). Likely this would speed up the example case here\na bit. Not as much as the proposed patch, but it would likely apply in\nmany more cases.\n\nI don't think I ever posted the patch to the list, and if so I no\nlonger have access to it, so it would need to be done again.\n\nDavid\n\n\n",
"msg_date": "Tue, 18 May 2021 13:27:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
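The cache-the-last-partition idea above can be modelled outside PostgreSQL as a toy range router: remember the index of the last partition a row went to and recheck its bounds (the ExecPartitionCheck analogue) before paying for the binary search. All names here (`RangeDesc`, `find_partition`) are illustrative, not the real executor code:

```c
#include <assert.h>

/* Toy model of a range partition descriptor: partition i covers
 * [lower[i], lower[i+1]), the last one extends to the top.  The keys
 * are assumed to be >= lower[0]. */
typedef struct RangeDesc
{
    const int *lower;     /* inclusive lower bound of each partition */
    int        nparts;
    int        last_part; /* cached index of last routed partition, or -1 */
} RangeDesc;

/* The expensive path: binary search over the range bounds. */
static int
binary_search_part(const RangeDesc *pd, int key)
{
    int lo = 0, hi = pd->nparts - 1;

    while (lo < hi)
    {
        int mid = (lo + hi + 1) / 2;

        if (key >= pd->lower[mid])
            lo = mid;
        else
            hi = mid - 1;
    }
    return lo;
}

static int
find_partition(RangeDesc *pd, int key)
{
    int i = pd->last_part;

    /* Fast path: does the key still fall in the cached partition? */
    if (i >= 0 &&
        key >= pd->lower[i] &&
        (i == pd->nparts - 1 || key < pd->lower[i + 1]))
        return i;

    /* Cache miss: fall back to the full search and remember the result. */
    pd->last_part = binary_search_part(pd, key);
    return pd->last_part;
}
```

When consecutive rows hit the same partition (the time-series case), each call after the first costs two comparisons instead of a binary search; when they don't, the only extra cost is the failed bounds recheck.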
{
"msg_contents": "> > Hmm, does this seem common enough for the added complexity to be\r\n> worthwhile?\r\n> \r\n> I'd also like to know if there's some genuine use case for this. For testing\r\n> purposes does not seem to be quite a good enough reason.\r\n\r\nThanks for the response.\r\n\r\nFor some big data scenario, we sometimes transfer data from one table(only store not expired data)\r\nto another table(historical data) for future analysis.\r\nIn this case, we import data into historical table regularly(could be one day or half a day),\r\nAnd the data is likely to be imported with date label specified, then all of the data to be\r\nimported this time belong to the same partition which partition by time range.\r\n\r\nSo, personally, It will be nice if postgres can skip tuple routing for each row in this scenario.\r\n\r\n> A slightly different optimization that I have considered and even written\r\n> patches before was to have ExecFindPartition() cache the last routed to\r\n> partition and have it check if the new row can go into that one on the next call.\r\n> I imagined there might be a use case for speeding that up for RANGE\r\n> partitioned tables since it seems fairly likely that most use cases, at least for\r\n> time series ranges will\r\n> always hit the same partition most of the time. Since RANGE requires\r\n> a binary search there might be some savings there. I imagine that\r\n> optimisation would never be useful for HASH partitioning since it seems most\r\n> likely that we'll be routing to a different partition each time and wouldn't save\r\n> much since routing to hash partitions are cheaper than other types. LIST\r\n> partitioning I'm not so sure about. It seems much less likely than RANGE to hit\r\n> the same partition twice in a row.\r\n\r\nI think your approach looks good too,\r\nand it seems does not conflict with the approach proposed here.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Tue, 18 May 2021 02:11:00 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, May 18, 2021 at 01:27:48PM +1200, David Rowley wrote:\n> A slightly different optimization that I have considered and even\n> written patches before was to have ExecFindPartition() cache the last\n> routed to partition and have it check if the new row can go into that\n> one on the next call. I imagined there might be a use case for\n> speeding that up for RANGE partitioned tables since it seems fairly\n> likely that most use cases, at least for time series ranges will\n> always hit the same partition most of the time. Since RANGE requires\n> a binary search there might be some savings there. I imagine that\n> optimisation would never be useful for HASH partitioning since it\n> seems most likely that we'll be routing to a different partition each\n> time and wouldn't save much since routing to hash partitions are\n> cheaper than other types. LIST partitioning I'm not so sure about. It\n> seems much less likely than RANGE to hit the same partition twice in a\n> row.\n\nIt depends a lot on the schema used and the load pattern, but I'd like\nto think that a similar argument can be made in favor of LIST\npartitioning here.\n--\nMichael",
"msg_date": "Tue, 18 May 2021 11:32:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, May 18, 2021 at 10:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Tue, 18 May 2021 at 01:31, Amit Langote <amitlangote09@gmail.com> wrote:\n> > Hmm, does this seem common enough for the added complexity to be worthwhile?\n>\n> I'd also like to know if there's some genuine use case for this. For\n> testing purposes does not seem to be quite a good enough reason.\n>\n> A slightly different optimization that I have considered and even\n> written patches before was to have ExecFindPartition() cache the last\n> routed to partition and have it check if the new row can go into that\n> one on the next call. I imagined there might be a use case for\n> speeding that up for RANGE partitioned tables since it seems fairly\n> likely that most use cases, at least for time series ranges will\n> always hit the same partition most of the time. Since RANGE requires\n> a binary search there might be some savings there. I imagine that\n> optimisation would never be useful for HASH partitioning since it\n> seems most likely that we'll be routing to a different partition each\n> time and wouldn't save much since routing to hash partitions are\n> cheaper than other types. LIST partitioning I'm not so sure about. It\n> seems much less likely than RANGE to hit the same partition twice in a\n> row.\n>\n> IIRC, the patch did something like call ExecPartitionCheck() on the\n> new tuple with the previously routed to ResultRelInfo. I think the\n> last used partition was cached somewhere like relcache (which seems a\n> bit questionable). Likely this would speed up the example case here\n> a bit. 
Not as much as the proposed patch, but it would likely apply in\n> many more cases.\n>\n> I don't think I ever posted the patch to the list, and if so I no\n> longer have access to it, so it would need to be done again.\n\nI gave a shot to implementing your idea and ended up with the attached\nPoC patch, which does pass make check-world.\n\nI do see some speedup:\n\n-- creates a range-partitioned table with 1000 partitions\ncreate unlogged table foo (a int) partition by range (a);\nselect 'create unlogged table foo_' || i || ' partition of foo for\nvalues from (' || (i-1)*100000+1 || ') to (' || i*100000+1 || ');'\nfrom generate_series(1, 1000) i;\n\\gexec\n\n-- generates a 100 million record file\ncopy (select generate_series(1, 100000000)) to '/tmp/100m.csv' csv;\n\nTimes for loading that file compare as follows:\n\nHEAD:\n\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 31813.964 ms (00:31.814)\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 31972.942 ms (00:31.973)\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 32049.046 ms (00:32.049)\n\nPatched:\n\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 26151.158 ms (00:26.151)\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 28161.082 ms (00:28.161)\npostgres=# copy foo from '/tmp/100m.csv' csv;\nCOPY 100000000\nTime: 26700.908 ms (00:26.701)\n\nI guess it would be nice if we could fit in a solution for the use\ncase that houjz mentioned as a special case. BTW, houjz, could you\nplease check if a patch like this one helps the case you mentioned?\n\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 May 2021 22:17:19 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, May 18, 2021 at 11:11 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> > > Hmm, does this seem common enough for the added complexity to be\n> > worthwhile?\n> >\n> > I'd also like to know if there's some genuine use case for this. For testing\n> > purposes does not seem to be quite a good enough reason.\n>\n> Thanks for the response.\n>\n> For some big data scenario, we sometimes transfer data from one table(only store not expired data)\n> to another table(historical data) for future analysis.\n> In this case, we import data into historical table regularly(could be one day or half a day),\n> And the data is likely to be imported with date label specified, then all of the data to be\n> imported this time belong to the same partition which partition by time range.\n\nIs directing that data directly into the appropriate partition not an\nacceptable solution to address this particular use case? Yeah, I know\nwe should avoid encouraging users to perform DML directly on\npartitions, but...\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 22:25:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\n> On Tue, May 18, 2021 at 11:11 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > For some big data scenario, we sometimes transfer data from one table(only\r\n> store not expired data)\r\n> > to another table(historical data) for future analysis.\r\n> > In this case, we import data into historical table regularly(could be one day or\r\n> half a day),\r\n> > And the data is likely to be imported with date label specified, then all of the\r\n> data to be\r\n> > imported this time belong to the same partition which partition by time range.\r\n> \r\n> Is directing that data directly into the appropriate partition not an\r\n> acceptable solution to address this particular use case? Yeah, I know\r\n> we should avoid encouraging users to perform DML directly on\r\n> partitions, but...\r\n\r\nYes, I want to make/keep it possible that application developers can be unaware of partitions. I believe that's why David-san, Alvaro-san, and you have made great efforts to improve partitioning performance. So, I'm +1 for what Hou-san is trying to achieve.\r\n\r\nIs there something you're concerned about? The amount and/or complexity of added code?\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Thu, 20 May 2021 00:20:16 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, 20 May 2021 at 01:17, Amit Langote <amitlangote09@gmail.com> wrote:\n> I gave a shot to implementing your idea and ended up with the attached\n> PoC patch, which does pass make check-world.\n\nI only had a quick look at this.\n\n+ if ((dispatch->key->strategy == PARTITION_STRATEGY_RANGE ||\n+ dispatch->key->strategy == PARTITION_STRATEGY_RANGE))\n+ dispatch->lastPartInfo = rri;\n\nI think you must have meant to have one of these as PARTITION_STRATEGY_LIST?\n\nWondering what your thoughts are on, instead of caching the last used\nResultRelInfo from the last call to ExecFindPartition(), to instead\ncached the last looked up partition index in PartitionDescData? That\nway we could cache lookups between statements. Right now your caching\nis not going to help for single-row INSERTs, for example.\n\nFor multi-level partition hierarchies that would still require looping\nand checking the cached value at each level.\n\nI've not studied the code that builds and rebuilds PartitionDescData,\nso there may be some reason that we shouldn't do that. I know that's\nchanged a bit recently with DETACH CONCURRENTLY. However, providing\nthe cached index is not outside the bounds of the oids array, it\nshouldn't really matter if the cached value happens to end up pointing\nto some other partition. If that happens, we'll just fail the\nExecPartitionCheck() and have to look for the correct partition.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 May 2021 12:31:13 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, 20 May 2021 at 12:20, tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> Yes, I want to make/keep it possible that application developers can be unaware of partitions. I believe that's why David-san, Alvaro-san, and you have made great efforts to improve partitioning performance. So, I'm +1 for what Hou-san is trying to achieve.\n>\n> Is there something you're concerned about? The amount and/or complexity of added code?\n\nIt would be good to see how close Amit's patch gets to the performance\nof the original patch on this thread. As far as I can see, the\ndifference is, aside from the setup code to determine if the partition\nis constant, that Amit's patch just requires an additional\nExecPartitionCheck() call per row. That should be pretty cheap when\ncompared to the binary search to find the partition for a RANGE or\nLIST partitioned table.\n\nHouzj didn't mention how the table in the test was partitioned, so\nit's hard to speculate how many comparisons would be done during a\nbinary search. Or maybe it was HASH partitioned and there was no\nbinary search.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 May 2021 12:37:03 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, May 20, 2021 at 9:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 20 May 2021 at 01:17, Amit Langote <amitlangote09@gmail.com> wrote:\n> > I gave a shot to implementing your idea and ended up with the attached\n> > PoC patch, which does pass make check-world.\n>\n> I only had a quick look at this.\n>\n> + if ((dispatch->key->strategy == PARTITION_STRATEGY_RANGE ||\n> + dispatch->key->strategy == PARTITION_STRATEGY_RANGE))\n> + dispatch->lastPartInfo = rri;\n>\n> I think you must have meant to have one of these as PARTITION_STRATEGY_LIST?\n\nOops, of course. Fixed in the attached.\n\n> Wondering what your thoughts are on, instead of caching the last used\n> ResultRelInfo from the last call to ExecFindPartition(), to instead\n> cached the last looked up partition index in PartitionDescData? That\n> way we could cache lookups between statements. Right now your caching\n> is not going to help for single-row INSERTs, for example.\n\nHmm, addressing single-row INSERTs with something like you suggest\nmight help time-range partitioning setups, because each of those\nINSERTs are likely to be targeting the same partition most of the\ntime. Is that case what you had in mind? Although, in the cases\nwhere that doesn't help, we'd end up making a ResultRelInfo for the\ncached partition to check the partition constraint, only then to be\nthrown away because the new row belongs to a different partition.\nThat overhead would not be free for sure.\n\n> For multi-level partition hierarchies that would still require looping\n> and checking the cached value at each level.\n\nYeah, there's no getting around that, though maybe that's not a big problem.\n\n> I've not studied the code that builds and rebuilds PartitionDescData,\n> so there may be some reason that we shouldn't do that. I know that's\n> changed a bit recently with DETACH CONCURRENTLY. 
However, providing\n> the cached index is not outside the bounds of the oids array, it\n> shouldn't really matter if the cached value happens to end up pointing\n> to some other partition. If that happens, we'll just fail the\n> ExecPartitionCheck() and have to look for the correct partition.\n\nYeah, as long as ExecFindPartition performs ExecPartitionCheck() on\nbefore returning a given cached partition, there's no need to worry\nabout the cached index getting stale for whatever reason.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 20 May 2021 17:49:20 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, May 20, 2021 at 9:20 AM tsunakawa.takay@fujitsu.com\n<tsunakawa.takay@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > On Tue, May 18, 2021 at 11:11 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > For some big data scenario, we sometimes transfer data from one table(only\n> > store not expired data)\n> > > to another table(historical data) for future analysis.\n> > > In this case, we import data into historical table regularly(could be one day or\n> > half a day),\n> > > And the data is likely to be imported with date label specified, then all of the\n> > data to be\n> > > imported this time belong to the same partition which partition by time range.\n> >\n> > Is directing that data directly into the appropriate partition not an\n> > acceptable solution to address this particular use case? Yeah, I know\n> > we should avoid encouraging users to perform DML directly on\n> > partitions, but...\n>\n> Yes, I want to make/keep it possible that application developers can be unaware of partitions. I believe that's why David-san, Alvaro-san, and you have made great efforts to improve partitioning performance. So, I'm +1 for what Hou-san is trying to achieve.\n\nI'm very glad to see such discussions on the list, because it means\nthe partitioning feature is being stretched to cover wider set of use\ncases.\n\n> Is there something you're concerned about? The amount and/or complexity of added code?\n\nIMHO, a patch that implements caching more generally would be better\neven if it adds some complexity. 
Hou-san's patch seemed centered\naround the use case where all rows being loaded in a given command\nroute to the same partition, a very specialized case I'd say.\n\nMaybe we can extract the logic in Hou-san's patch to check the\nconstant-ness of the targetlist producing the rows to insert and find\na way to add it to the patch I posted such that the generality of the\nlatter's implementation is not lost.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 19:03:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, 20 May 2021 at 20:49, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, May 20, 2021 at 9:31 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> > Wondering what your thoughts are on, instead of caching the last used\n> > ResultRelInfo from the last call to ExecFindPartition(), to instead\n> > cached the last looked up partition index in PartitionDescData? That\n> > way we could cache lookups between statements. Right now your caching\n> > is not going to help for single-row INSERTs, for example.\n>\n> Hmm, addressing single-row INSERTs with something like you suggest\n> might help time-range partitioning setups, because each of those\n> INSERTs are likely to be targeting the same partition most of the\n> time. Is that case what you had in mind?\n\nYeah, I thought it would possibly be useful for RANGE partitioning. I\nwas a bit undecided with LIST. There seemed to be bigger risk there\nthat the usage pattern would route to a different partition each time.\nIn my imagination, RANGE partitioning seems more likely to see\nsubsequent tuples heading to the same partition as the last tuple.\n\n> Although, in the cases\n> where that doesn't help, we'd end up making a ResultRelInfo for the\n> cached partition to check the partition constraint, only then to be\n> thrown away because the new row belongs to a different partition.\n> That overhead would not be free for sure.\n\nYeah, there's certainly above zero overhead to getting it wrong. It\nwould be good to see benchmarks to find out what that overhead is.\n\nDavid\n\n\n",
"msg_date": "Thu, 20 May 2021 22:32:28 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\nSent: Wednesday, May 19, 2021 9:17 PM\r\n> I gave a shot to implementing your idea and ended up with the attached PoC\r\n> patch, which does pass make check-world.\r\n> \r\n> I do see some speedup:\r\n> \r\n> -- creates a range-partitioned table with 1000 partitions create unlogged table\r\n> foo (a int) partition by range (a); select 'create unlogged table foo_' || i || '\r\n> partition of foo for values from (' || (i-1)*100000+1 || ') to (' || i*100000+1 || ');'\r\n> from generate_series(1, 1000) i;\r\n> \\gexec\r\n> \r\n> -- generates a 100 million record file\r\n> copy (select generate_series(1, 100000000)) to '/tmp/100m.csv' csv;\r\n> \r\n> Times for loading that file compare as follows:\r\n> \r\n> HEAD:\r\n> \r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 31813.964 ms (00:31.814)\r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 31972.942 ms (00:31.973)\r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 32049.046 ms (00:32.049)\r\n> \r\n> Patched:\r\n> \r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 26151.158 ms (00:26.151)\r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 28161.082 ms (00:28.161)\r\n> postgres=# copy foo from '/tmp/100m.csv' csv; COPY 100000000\r\n> Time: 26700.908 ms (00:26.701)\r\n>\r\n> I guess it would be nice if we could fit in a solution for the use case that houjz\r\n> mentioned as a special case. 
BTW, houjz, could you please check if a patch like\r\n> this one helps the case you mentioned?\r\n\r\nThanks for the patch!\r\nI did some tests on it (using the table you provided above):\r\n\r\n1) Test plain column in partition key.\r\nSQL: insert into foo select 1 from generate_series(1, 10000000);\r\n\r\nHEAD:\r\nTime: 5493.392 ms (00:05.493)\r\n\r\nAFTER PATCH (skip constant partition key)\r\nTime: 4198.421 ms (00:04.198)\r\n\r\nAFTER PATCH (cache the last partition)\r\nTime: 4484.492 ms (00:04.484)\r\n\r\nThe test results of your patch in this case look good.\r\nIt can fit many more cases and the performance gain is nice.\r\n\r\n-----------\r\n2) Test expression in partition key\r\n\r\ncreate or replace function partition_func(i int) returns int as $$\r\n begin\r\n return i;\r\n end;\r\n$$ language plpgsql immutable parallel restricted;\r\ncreate unlogged table foo (a int) partition by range (partition_func(a));\r\n\r\nSQL: insert into foo select 1 from generate_series(1, 10000000);\r\n\r\nHEAD\r\nTime: 8595.120 ms (00:08.595)\r\n\r\nAFTER PATCH (skip constant partition key)\r\nTime: 4198.421 ms (00:04.198)\r\n\r\nAFTER PATCH (cache the last partition)\r\nTime: 12829.800 ms (00:12.830)\r\n\r\nIf we add a user-defined function to the partition key, there seems to be a\r\nperformance degradation after the patch.\r\n\r\nI did some analysis on it: for the above test case, ExecPartitionCheck\r\nexecuted three expressions: 1) key is null 2) key > low 3) key < top.\r\nIn this case, the \"key\" contains a funcexpr and the funcexpr will be executed\r\nthree times for each row, so it brings extra overhead, which causes the performance degradation.\r\n\r\nIMO, improving ExecPartitionCheck seems a better solution to it: we can\r\ncalculate the key value in advance and use the value to do the bound check.\r\nThoughts?\r\n\r\n------------\r\n\r\nBesides, are we going to add a reloption or GUC to control this cache behaviour if we move forward with this approach?\r\nBecause, if most of the rows to be inserted are routed to a different partition each time, then I think the extra ExecPartitionCheck\r\nwill become overhead. Maybe it's better to apply both approaches (cache the last partition and skip constant partition key),\r\nwhich can achieve the best performance results.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Thu, 20 May 2021 10:35:40 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hou-san,\n\nOn Thu, May 20, 2021 at 7:35 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Wednesday, May 19, 2021 9:17 PM\n> > I guess it would be nice if we could fit in a solution for the use case that houjz\n> > mentioned as a special case. BTW, houjz, could you please check if a patch like\n> > this one helps the case you mentioned?\n>\n> Thanks for the patch!\n> I did some test on it(using the table you provided above):\n\nThanks a lot for doing that.\n\n> 1): Test plain column in partition key.\n> SQL: insert into foo select 1 from generate_series(1, 10000000);\n>\n> HEAD:\n> Time: 5493.392 ms (00:05.493)\n>\n> AFTER PATCH(skip constant partition key)\n> Time: 4198.421 ms (00:04.198)\n>\n> AFTER PATCH(cache the last partition)\n> Time: 4484.492 ms (00:04.484)\n>\n> The test results of your patch in this case looks good.\n> It can fit many more cases and the performance gain is nice.\n\nHmm yeah, not too bad.\n\n> 2) Test expression in partition key\n>\n> create or replace function partition_func(i int) returns int as $$\n> begin\n> return i;\n> end;\n> $$ language plpgsql immutable parallel restricted;\n> create unlogged table foo (a int) partition by range (partition_func(a));\n>\n> SQL: insert into foo select 1 from generate_series(1, 10000000);\n>\n> HEAD\n> Time: 8595.120 ms (00:08.595)\n>\n> AFTER PATCH(skip constant partition key)\n> Time: 4198.421 ms (00:04.198)\n>\n> AFTER PATCH(cache the last partition)\n> Time: 12829.800 ms (00:12.830)\n>\n> If add a user defined function in the partition key, it seems have\n> performance degradation after the patch.\n\nOops.\n\n> I did some analysis on it, for the above testcase , ExecPartitionCheck\n> executed three expression 1) key is null 2) key > low 3) key < top\n> In this case, the \"key\" contains a funcexpr and the funcexpr will be executed\n> three times for each row, so, it bring extra overhead which cause the performance 
degradation.\n>\n> IMO, improving the ExecPartitionCheck seems a better solution to it, we can\n> Calculate the key value in advance and use the value to do the bound check.\n> Thoughts ?\n\nThis one seems a bit tough. ExecPartitionCheck() uses the generic\nexpression evaluation machinery like a black box, which means\nexecPartition.c can't really tweak/control the time spent evaluating\npartition constraints. Given that, we may have to disable the caching\nwhen key->partexprs != NIL, unless we can reasonably do what you are\nsuggesting.\n\n> Besides, are we going to add a reloption or guc to control this cache behaviour if we more forward with this approach ?\n> Because, If most of the rows to be inserted are routing to a different partition each time, then I think the extra ExecPartitionCheck\n> will become the overhead. Maybe it's better to apply both two approaches(cache the last partition and skip constant partition key)\n> which can achieve the best performance results.\n\nA reloption will have to be a last resort is what I can say about this\nat the moment.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 21:22:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\nSent: Thursday, May 20, 2021 8:23 PM\r\n> \r\n> Hou-san,\r\n> \r\n> On Thu, May 20, 2021 at 7:35 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > 2) Test expression in partition key\r\n> >\r\n> > create or replace function partition_func(i int) returns int as $$\r\n> > begin\r\n> > return i;\r\n> > end;\r\n> > $$ language plpgsql immutable parallel restricted; create unlogged\r\n> > table foo (a int) partition by range (partition_func(a));\r\n> >\r\n> > SQL: insert into foo select 1 from generate_series(1, 10000000);\r\n> >\r\n> > HEAD\r\n> > Time: 8595.120 ms (00:08.595)\r\n> >\r\n> > AFTER PATCH(skip constant partition key)\r\n> > Time: 4198.421 ms (00:04.198)\r\n> >\r\n> > AFTER PATCH(cache the last partition)\r\n> > Time: 12829.800 ms (00:12.830)\r\n> >\r\n> > If add a user defined function in the partition key, it seems have\r\n> > performance degradation after the patch.\r\n> \r\n> Oops.\r\n> \r\n> > I did some analysis on it, for the above testcase , ExecPartitionCheck\r\n> > executed three expression 1) key is null 2) key > low 3) key < top In\r\n> > this case, the \"key\" contains a funcexpr and the funcexpr will be\r\n> > executed three times for each row, so, it bring extra overhead which cause\r\n> the performance degradation.\r\n> >\r\n> > IMO, improving the ExecPartitionCheck seems a better solution to it,\r\n> > we can Calculate the key value in advance and use the value to do the bound\r\n> check.\r\n> > Thoughts ?\r\n> \r\n> This one seems bit tough. ExecPartitionCheck() uses the generic expression\r\n> evaluation machinery like a black box, which means execPartition.c can't really\r\n> tweal/control the time spent evaluating partition constraints. 
Given that, we\r\n> may have to disable the caching when key->partexprs != NIL, unless we can\r\n> reasonably do what you are suggesting.\r\n\r\nI did some research on the CHECK expression that ExecPartitionCheck() executes.\r\nCurrently for a normal RANGE partition key it will first generate a CHECK expression\r\nlike: [Keyexpression IS NOT NULL AND Keyexpression > lowbound AND Keyexpression < upperbound].\r\nIn this case, Keyexpression will be re-executed which will bring some overhead.\r\n\r\nInstead, I think we can try to do the following steps:\r\n1) extract the Keyexpression from the CHECK expression\r\n2) evaluate the key expression in advance\r\n3) pass the result of the key expression to do the partition CHECK.\r\nIn this way, we only execute the key expression once which looks more efficient.\r\n\r\nAttaching a POC patch about this approach.\r\nI did some performance tests with my laptop for this patch:\r\n\r\n------------------------------------test cheap partition key expression\r\n\r\ncreate unlogged table test_partitioned_inner (a int) partition by range ((abs(a) + a/50));\r\ncreate unlogged table test_partitioned_inner_1 partition of test_partitioned_inner for values from (1) to (50);\r\ncreate unlogged table test_partitioned_inner_2 partition of test_partitioned_inner for values from ( 50 ) to (100);\r\ninsert into test_partitioned_inner_1 select (i%48)+1 from generate_series(1,10000000,1) t(i);\r\n\r\nBEFORE patch:\r\nExecution Time: 6120.706 ms\r\n\r\nAFTER patch:\r\nExecution Time: 5705.967 ms\r\n\r\n------------------------------------test expensive partition key expression\r\ncreate or replace function partfunc(i int) returns int as\r\n$$\r\nbegin\r\n return i;\r\nend;\r\n$$ language plpgsql IMMUTABLE;\r\n\r\ncreate unlogged table test_partitioned_inner (a int) partition by range (partfunc (a));\r\ncreate unlogged table test_partitioned_inner_1 partition of test_partitioned_inner for values from (1) to (50);\r\ncreate unlogged table 
test_partitioned_inner_2 partition of test_partitioned_inner for values from ( 50 ) to (100);\r\n\r\nI think this can be an independent improvement for the partition check.\r\n\r\nbefore patch:\r\nExecution Time: 14048.551 ms\r\n\r\nafter patch:\r\nExecution Time: 8810.518 ms\r\n\r\nI think this patch can solve the performance degradation of key expression\r\nafter applying the [Save the last partition] patch.\r\nBesides, this could be a separate patch which can improve some more cases.\r\nThoughts ?\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 24 May 2021 01:31:44 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> I think this patch can solve the performance degradation of key expression after\r\n> applying the [Save the last partition] patch.\r\n> Besides, this could be a separate patch which can improve some more cases.\r\n> Thoughts ?\r\n\r\nThank you for proposing an impressive improvement so quickly! Yes, I'm in the mood for adopting Amit-san's patch as a base because it's compact and readable, plus adding this patch of yours to complement the partition key function case.\r\n\r\nBut ...\r\n\r\n* Applying your patch alone produced a compilation error. I'm sorry I mistakenly deleted the compile log, but it said something like \"There's a redeclaration of PartKeyContext in partcache.h; the original definition is in partdef.h\"\r\n\r\n* Hmm, this may be too much to expect, but I wonder if we can make the patch more compact...\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 24 May 2021 07:34:24 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Tsunakawa, Takayuki <tsunakawa.takay@fujitsu.com>\r\nSent: Monday, May 24, 2021 3:34 PM\r\n> \r\n> From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> > I think this patch can solve the performance degradation of key\r\n> > expression after applying the [Save the last partition] patch.\r\n> > Besides, this could be a separate patch which can improve some more cases.\r\n> > Thoughts ?\r\n> \r\n> Thank you for proposing an impressive improvement so quickly! Yes, I'm in\r\n> the mood for adopting Amit-san's patch as a base because it's compact and\r\n> readable, and plus add this patch of yours to complement the partition key\r\n> function case.\r\n\r\nThanks for looking into this.\r\n\r\n> But ...\r\n> \r\n> * Applying your patch alone produced a compilation error. I'm sorry I\r\n> mistakenly deleted the compile log, but it said something like \"There's a\r\n> redeclaration of PartKeyContext in partcache.h; the original definition is in\r\n> partdef.h\"\r\n\r\nIt seems a little strange; I have compiled it alone on two different Linux machines and did\r\nnot find such an error. Did you compile it on a Windows machine ?\r\n\r\n> * Hmm, this may be too much to expect, but I wonder if we can make the patch\r\n> more compact...\r\n\r\nOf course, I will try to simplify the patch.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Mon, 24 May 2021 07:58:03 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\r\n> It seems a little strange, I have compiled it alone in two different linux machine\r\n> and did\r\n> not find such an error. Did you compile it on a windows machine ?\r\n\r\nOn Linux, it produces:\r\n\r\ngcc -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-s\\\r\ntatement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-securit\\\r\ny -fno-strict-aliasing -fwrapv -g -O0 -I../../../src/include -D_GNU_SOURCE -\\\r\nc -o heap.o heap.c -MMD -MP -MF .deps/heap.Po\r\nIn file included from heap.c:86:\r\n../../../src/include/utils/partcache.h:54: error: redefinition of typedef 'Part\\\r\nKeyContext'\r\n../../../src/include/partitioning/partdefs.h:26: note: previous declaration of \\\r\n'PartKeyContext' was here\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 24 May 2021 08:17:29 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\nSent: Monday, May 24, 2021 3:58 PM\r\n> \r\n> From: Tsunakawa, Takayuki <mailto:tsunakawa.takay@fujitsu.com>\r\n> Sent: Monday, May 24, 2021 3:34 PM\r\n> >\r\n> > From: mailto:houzj.fnst@fujitsu.com <mailto:houzj.fnst@fujitsu.com>\r\n> > > I think this patch can solve the performance degradation of key\r\n> > > expression after applying the [Save the last partition] patch.\r\n> > > Besides, this could be a separate patch which can improve some more\r\n> cases.\r\n> > > Thoughts ?\r\n> >\r\n> > Thank you for proposing an impressive improvement so quickly! Yes,\r\n> > I'm in the mood for adopting Amit-san's patch as a base because it's\r\n> > compact and readable, and plus add this patch of yours to complement\r\n> > the partition key function case.\r\n> \r\n> Thanks for looking into this.\r\n> \r\n> > But ...\r\n> >\r\n> > * Applying your patch alone produced a compilation error. I'm sorry I\r\n> > mistakenly deleted the compile log, but it said something like\r\n> > \"There's a redeclaration of PartKeyContext in partcache.h; the\r\n> > original definition is in partdef.h\"\r\n> \r\n> It seems a little strange, I have compiled it alone in two different linux machine\r\n> and did not find such an error. Did you compile it on a windows machine ?\r\n\r\nAh, Maybe I found the issue.\r\nAttaching a new patch, please have a try on this patch.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 24 May 2021 08:17:34 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hou-san,\n\nOn Mon, May 24, 2021 at 10:31 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Thursday, May 20, 2021 8:23 PM\n> > This one seems bit tough. ExecPartitionCheck() uses the generic expression\n> > evaluation machinery like a black box, which means execPartition.c can't really\n> > tweal/control the time spent evaluating partition constraints. Given that, we\n> > may have to disable the caching when key->partexprs != NIL, unless we can\n> > reasonably do what you are suggesting.[]\n>\n> I did some research on the CHECK expression that ExecPartitionCheck() execute.\n\nThanks for looking into this and writing the patch. Your idea does\nsound promising.\n\n> Currently for a normal RANGE partition key it will first generate a CHECK expression\n> like : [Keyexpression IS NOT NULL AND Keyexpression > lowboud AND Keyexpression < lowboud].\n> In this case, Keyexpression will be re-executed which will bring some overhead.\n>\n> Instead, I think we can try to do the following step:\n> 1)extract the Keyexpression from the CHECK expression\n> 2)evaluate the key expression in advance\n> 3)pass the result of key expression to do the partition CHECK.\n> In this way ,we only execute the key expression once which looks more efficient.\n\nI would have preferred this not to touch anything but\nExecPartitionCheck(), at least in the first version. Especially,\nseeing that your patch touches partbounds.c makes me a bit nervous,\nbecause the logic there is pretty complicated to begin with.\n\nHow about we start with something like the attached? It's the same\nidea AFAICS, but implemented with a smaller footprint. We can\nconsider teaching relcache about this as the next step, if at all. I\nhaven't measured the performance, but maybe it's not as fast as yours,\nso will need some fine-tuning. Can you please give it a read?\n\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 24 May 2021 17:27:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Hou, Zhijie/侯 志杰 <houzj.fnst@fujitsu.com>\r\n> Ah, Maybe I found the issue.\r\n> Attaching a new patch, please have a try on this patch.\r\n\r\nThanks, it has compiled perfectly without any warning.\r\n\r\n\r\nRegards\r\nTakayuki Tsunakawa\r\n\r\n",
"msg_date": "Mon, 24 May 2021 08:32:08 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi Amit-san,\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\nSent: Monday, May 24, 2021 4:27 PM\r\n> Hou-san,\r\n> \r\n> On Mon, May 24, 2021 at 10:31 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > From: Amit Langote <amitlangote09@gmail.com>\r\n> > Sent: Thursday, May 20, 2021 8:23 PM\r\n> > > This one seems bit tough. ExecPartitionCheck() uses the generic\r\n> > > expression evaluation machinery like a black box, which means\r\n> > > execPartition.c can't really tweal/control the time spent evaluating\r\n> > > partition constraints. Given that, we may have to disable the\r\n> > > caching when key->partexprs != NIL, unless we can reasonably do what\r\n> > > you are suggesting.[]\r\n> >\r\n> > I did some research on the CHECK expression that ExecPartitionCheck()\r\n> execute.\r\n> \r\n> Thanks for looking into this and writing the patch. Your idea does sound\r\n> promising.\r\n> \r\n> > Currently for a normal RANGE partition key it will first generate a\r\n> > CHECK expression like : [Keyexpression IS NOT NULL AND Keyexpression >\r\n> lowboud AND Keyexpression < lowboud].\r\n> > In this case, Keyexpression will be re-executed which will bring some\r\n> overhead.\r\n> >\r\n> > Instead, I think we can try to do the following step:\r\n> > 1)extract the Keyexpression from the CHECK expression 2)evaluate the\r\n> > key expression in advance 3)pass the result of key expression to do\r\n> > the partition CHECK.\r\n> > In this way ,we only execute the key expression once which looks more\r\n> efficient.\r\n> \r\n> I would have preferred this not to touch anything but ExecPartitionCheck(), at\r\n> least in the first version. Especially, seeing that your patch touches\r\n> partbounds.c makes me a bit nervous, because the logic there is pretty\r\n> complicated to begin with.\r\n\r\nAgreed.\r\n\r\n> How about we start with something like the attached? It's the same idea\r\n> AFAICS, but implemented with a smaller footprint. 
We can consider teaching\r\n> relcache about this as the next step, if at all. I haven't measured the\r\n> performance, but maybe it's not as fast as yours, so will need some fine-tuning.\r\n> Can you please give it a read?\r\n\r\nThanks for the patch, it looks more compact than mine.\r\n\r\nAfter taking a quick look at the patch, I found a possible issue.\r\nCurrently, the patch does not search the parent's partition key expression recursively.\r\nFor example, if we have a multi-level partition:\r\nTable A is partition of Table B, Table B is partition of Table C.\r\nIt looks like if we insert into Table A, then we do not replace the key expression which comes from Table C.\r\n\r\nIf we want to get Table C, we might need to use pg_inherits, but that seems too costly to me.\r\nInstead, maybe we can use the existing logic which already scans pg_inherits in function\r\ngenerate_partition_qual(). Although this change is outside of ExecPartitionCheck(), I think we'd better\r\nreplace the key expressions of all the parents, grandparents, and so on. Attaching a demo patch based on the\r\npatch you posted earlier. I hope it will help.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 24 May 2021 13:15:35 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hou-san,\n\nOn Mon, May 24, 2021 at 10:15 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Monday, May 24, 2021 4:27 PM\n> > On Mon, May 24, 2021 at 10:31 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > Currently for a normal RANGE partition key it will first generate a\n> > > CHECK expression like : [Keyexpression IS NOT NULL AND Keyexpression >\n> > lowboud AND Keyexpression < lowboud].\n> > > In this case, Keyexpression will be re-executed which will bring some\n> > overhead.\n> > >\n> > > Instead, I think we can try to do the following step:\n> > > 1)extract the Keyexpression from the CHECK expression 2)evaluate the\n> > > key expression in advance 3)pass the result of key expression to do\n> > > the partition CHECK.\n> > > In this way ,we only execute the key expression once which looks more\n> > efficient.\n> >\n> > I would have preferred this not to touch anything but ExecPartitionCheck(), at\n> > least in the first version. Especially, seeing that your patch touches\n> > partbounds.c makes me a bit nervous, because the logic there is pretty\n> > complicated to begin with.\n>\n> Agreed.\n>\n> > How about we start with something like the attached? It's the same idea\n> > AFAICS, but implemented with a smaller footprint. We can consider teaching\n> > relcache about this as the next step, if at all. 
I haven't measured the\n> > performance, but maybe it's not as fast as yours, so will need some fine-tuning.\n> > Can you please give it a read?\n>\n> Thanks for the patch and It looks more compact than mine.\n>\n> After taking a quick look at the patch, I found a possible issue.\n> Currently, the patch does not search the parent's partition key expression recursively.\n> For example, If we have multi-level partition:\n> Table A is partition of Table B, Table B is partition of Table C.\n> It looks like if insert into Table A , then we did not replace the key expression which come from Table C.\n\nGood catch! Although, I was relieved to realize that it's not *wrong*\nper se, as in it does not produce an incorrect result, but only\n*slower* than if the patch was careful enough to replace all the\nparents' key expressions.\n\n> If we want to get the Table C, we might need to use pg_inherit, but it costs too much to me.\n> Instead, maybe we can use the existing logic which already scanned the pg_inherit in function\n> generate_partition_qual(). Although this change is out of ExecPartitionCheck(). I think we'd better\n> replace all the parents and grandparent...'s key expression. Attaching a demo patch based on the\n> patch you posted earlier. I hope it will help.\n\nThanks.\n\nThough again, I think we can do this without changing the relcache\ninterface, such as RelationGetPartitionQual().\n\nPartitionTupleRouting has all the information that's needed here.\nEach partitioned table involved in routing a tuple to the leaf\npartition has a PartitionDispatch struct assigned to it. That struct\ncontains the PartitionKey and we can access partexprs from there. We\ncan arrange to assemble them into a single list that is saved to a\ngiven partition's ResultRelInfo, that is, after converting the\nexpressions to have partition attribute numbers. 
I tried that in the\nattached updated patch; see the 0002-* patch.\n\nRegarding the first patch to make ExecFindPartition() cache last used\npartition, I noticed that it only worked for the bottom-most parent in\na multi-level partition tree, because only leaf partitions were\nassigned to dispatch->lastPartitionInfo. I have fixed the earlier\npatch to also save non-leaf partitions and their corresponding\nPartitionDispatch structs so that parents of all levels can use this\ncaching feature. The patch has to become somewhat complex as a\nresult, but hopefully not too unreadable.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 23:05:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi Amit-san\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\nSent: Tuesday, May 25, 2021 10:06 PM\r\n> Hou-san,\r\n> > Thanks for the patch and It looks more compact than mine.\r\n> >\r\n> > After taking a quick look at the patch, I found a possible issue.\r\n> > Currently, the patch does not search the parent's partition key expression\r\n> recursively.\r\n> > For example, If we have multi-level partition:\r\n> > Table A is partition of Table B, Table B is partition of Table C.\r\n> > It looks like if insert into Table A , then we did not replace the key expression\r\n> which come from Table C.\r\n> \r\n> Good catch! Although, I was relieved to realize that it's not *wrong* per se, as\r\n> in it does not produce an incorrect result, but only\r\n> *slower* than if the patch was careful enough to replace all the parents' key\r\n> expressions.\r\n> \r\n> > If we want to get the Table C, we might need to use pg_inherit, but it costs\r\n> too much to me.\r\n> > Instead, maybe we can use the existing logic which already scanned the\r\n> > pg_inherit in function generate_partition_qual(). Although this change\r\n> > is out of ExecPartitionCheck(). I think we'd better replace all the\r\n> > parents and grandparent...'s key expression. Attaching a demo patch based\r\n> on the patch you posted earlier. I hope it will help.\r\n> \r\n> Thanks.\r\n> \r\n> Though again, I think we can do this without changing the relcache interface,\r\n> such as RelationGetPartitionQual().\r\n> \r\n> PartitionTupleRouting has all the information that's needed here.\r\n> Each partitioned table involved in routing a tuple to the leaf partition has a\r\n> PartitionDispatch struct assigned to it. That struct contains the PartitionKey\r\n> and we can access partexprs from there. We can arrange to assemble them\r\n> into a single list that is saved to a given partition's ResultRelInfo, that is, after\r\n> converting the expressions to have partition attribute numbers. 
I tried that in\r\n> the attached updated patch; see the 0002-* patch.\r\n\r\nThanks for the explanation!\r\nYeah, we can get all the parent table info from PartitionTupleRouting when INSERTing into a partitioned table.\r\n\r\nBut I have two issues about using the information from PartitionTupleRouting to get the parent table's key expression: \r\n1) It seems we do not initialize the PartitionTupleRouting when directly INSERTing into a partition (not a partitioned table).\r\nI think it would be better to let the pre-compute-key-expression feature be used in all the possible cases, because it\r\ncould bring a nice performance improvement.\r\n\r\n2) When INSERTing into a partitioned table which is also a partition, the PartitionTupleRouting is initialized after the ExecPartitionCheck.\r\nFor example:\r\ncreate unlogged table parttable1 (a int, b int, c int, d int) partition by range (partition_func(a));\r\ncreate unlogged table parttable1_a partition of parttable1 for values from (0) to (5000);\r\ncreate unlogged table parttable1_b partition of parttable1 for values from (5000) to (10000);\r\n\r\ncreate unlogged table parttable2 (a int, b int, c int, d int) partition by range (partition_func1(b));\r\ncreate unlogged table parttable2_a partition of parttable2 for values from (0) to (5000);\r\ncreate unlogged table parttable2_b partition of parttable2 for values from (5000) to (10000);\r\n\r\n---When INSERTing into parttable2, the code does the partition check before initializing the PartitionTupleRouting.\r\ninsert into parttable2 select 10001,100,10001,100;\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\r\n\r\n",
"msg_date": "Wed, 26 May 2021 01:05:05 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hou-san,\n\nOn Wed, May 26, 2021 at 10:05 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Tuesday, May 25, 2021 10:06 PM\n> > Though again, I think we can do this without changing the relcache interface,\n> > such as RelationGetPartitionQual().\n> >\n> > PartitionTupleRouting has all the information that's needed here.\n> > Each partitioned table involved in routing a tuple to the leaf partition has a\n> > PartitionDispatch struct assigned to it. That struct contains the PartitionKey\n> > and we can access partexprs from there. We can arrange to assemble them\n> > into a single list that is saved to a given partition's ResultRelInfo, that is, after\n> > converting the expressions to have partition attribute numbers. I tried that in\n> > the attached updated patch; see the 0002-* patch.\n>\n> Thanks for the explanation !\n> Yeah, we can get all the parent table info from PartitionTupleRouting when INSERT into a partitioned table.\n>\n> But I have two issues about using the information from PartitionTupleRouting to get the parent table's key expression:\n> 1) It seems we do not initialize the PartitionTupleRouting when directly INSERT into a partition(not a partitioned table).\n> I think it will be better we let the pre-compute-key_expression feature to be used in all the possible cases, because it\n> could bring nice performance improvement.\n>\n> 2) When INSERT into a partitioned table which is also a partition, the PartitionTupleRouting is initialized after the ExecPartitionCheck.\n\nHmm, do we really need to optimize ExecPartitionCheck() when\npartitions are directly inserted into? 
As also came up earlier in the\nthread, we want to discourage users from doing that to begin with, so\nit doesn't make much sense to spend our effort on that case.\n\nOptimizing ExecPartitionCheck(), specifically your idea of\npre-computing the partition key expressions, only came up after\nfinding that the earlier patch to teach ExecFindPartition() to cache\npartitions may benefit from it. IOW, optimizing ExecPartitionCheck()\nfor its own sake does not seem worthwhile, especially not if we'd need\nto break module boundaries to make that happen.\n\nThoughts?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 10:37:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": ">\n> Hi, Amit:\n>\n\nFor ConvertTupleToPartition() in\n0001-ExecFindPartition-cache-last-used-partition-v3.patch:\n\n+ if (tempslot != NULL)\n+ ExecClearTuple(tempslot);\n\nIf tempslot and parent_slot point to the same slot, should ExecClearTuple()\nstill be called ?\n\nCheers",
"msg_date": "Wed, 26 May 2021 10:35:00 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi amit-san\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\nSent: Wednesday, May 26, 2021 9:38 AM\r\n> \r\n> Hou-san,\r\n> \r\n> On Wed, May 26, 2021 at 10:05 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > Thanks for the explanation !\r\n> > Yeah, we can get all the parent table info from PartitionTupleRouting when\r\n> INSERT into a partitioned table.\r\n> >\r\n> > But I have two issues about using the information from PartitionTupleRouting\r\n> to get the parent table's key expression:\r\n> > 1) It seems we do not initialize the PartitionTupleRouting when directly\r\n> INSERT into a partition(not a partitioned table).\r\n> > I think it will be better we let the pre-compute-key_expression\r\n> > feature to be used in all the possible cases, because it could bring nice\r\n> performance improvement.\r\n> >\r\n> > 2) When INSERT into a partitioned table which is also a partition, the\r\n> PartitionTupleRouting is initialized after the ExecPartitionCheck.\r\n> \r\n> Hmm, do we really need to optimize ExecPartitionCheck() when partitions are\r\n> directly inserted into? As also came up earlier in the thread, we want to\r\n> discourage users from doing that to begin with, so it doesn't make much sense\r\n> to spend our effort on that case.\r\n> \r\n> Optimizing ExecPartitionCheck(), specifically your idea of pre-computing the\r\n> partition key expressions, only came up after finding that the earlier patch to\r\n> teach ExecFindPartition() to cache partitions may benefit from it. IOW,\r\n> optimizing ExecPartitionCheck() for its own sake does not seem worthwhile,\r\n> especially not if we'd need to break module boundaries to make that happen.\r\n> \r\n> Thoughts?\r\n\r\nOK, I see the point, thanks for the explanation. 
\r\nLet's try to move forward.\r\n\r\nAbout teaching relcache about caching the target partition.\r\n\r\nDavid-san suggested caching the partidx in PartitionDesc.\r\nAnd it will need looping and checking the cached value at each level.\r\nI was thinking: can we cache a partidx list [1, 2, 3], and then we can follow\r\nthe list to get the last partition and do the partition CHECK only for the last\r\npartition. If anything unexpected happens, we can return to the original table\r\nand redo the tuple routing without using the cached index.\r\nWhat do you think ?\r\n\r\nBest regards,\r\nhouzj\r\n \r\n\r\n\r\n",
"msg_date": "Thu, 27 May 2021 02:47:25 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi,\n\nOn Thu, May 27, 2021 at 2:30 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n>>\n>> Hi, Amit:\n>\n>\n> For ConvertTupleToPartition() in 0001-ExecFindPartition-cache-last-used-partition-v3.patch:\n>\n> + if (tempslot != NULL)\n> + ExecClearTuple(tempslot);\n>\n> If tempslot and parent_slot point to the same slot, should ExecClearTuple() still be called ?\n\nYeah, we decided back in 1c9bb02d8ec that it's necessary to free the\nslot if it's the same slot as a parent partition's\nPartitionDispatch->tupslot (\"freeing parent's copy of the tuple\").\nMaybe we don't need this parent-slot-clearing anymore due to code\nrestructuring over the last 3 years, but that will have to be a\nseparate patch.\n\nI hope the attached updated patch makes it a bit more clear what's\ngoing on. I refactored more of the code in ExecFindPartition() to\nmake this patch a bit more readable.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 May 2021 13:22:01 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, May 26, 2021 at 9:22 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Thu, May 27, 2021 at 2:30 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> >>\n> >> Hi, Amit:\n> >\n> >\n> > For ConvertTupleToPartition() in\n> 0001-ExecFindPartition-cache-last-used-partition-v3.patch:\n> >\n> > + if (tempslot != NULL)\n> > + ExecClearTuple(tempslot);\n> >\n> > If tempslot and parent_slot point to the same slot, should\n> ExecClearTuple() still be called ?\n>\n> Yeah, we decided back in 1c9bb02d8ec that it's necessary to free the\n> slot if it's the same slot as a parent partition's\n> PartitionDispatch->tupslot (\"freeing parent's copy of the tuple\").\n> Maybe we don't need this parent-slot-clearing anymore due to code\n> restructuring over the last 3 years, but that will have to be a\n> separate patch.\n>\n> I hope the attached updated patch makes it a bit more clear what's\n> going on. I refactored more of the code in ExecFindPartition() to\n> make this patch more a bit more readable.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\nHi, Amit:\nThanks for the explanation.\n\nFor CanUseSavedPartitionForTuple, nit: you can check\n!dispatch->savedPartResultInfo at the beginning and return early.\nThis would save some indentation.\n\n Cheers",
"msg_date": "Wed, 26 May 2021 21:59:56 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
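The guard-clause refactoring suggested in the nit above can be sketched as follows. This is a minimal illustration only: the struct layouts and field names here are hypothetical stand-ins for the executor structures discussed in the attached patch, not the actual PostgreSQL definitions.

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical stand-ins for the executor structures discussed in the
 * thread; the real PartitionDispatch is considerably more involved.
 */
typedef struct ResultRelInfo
{
    int         ri_PartitionIdx;
} ResultRelInfo;

typedef struct PartitionDispatch
{
    ResultRelInfo *savedPartResultInfo; /* NULL if nothing is cached */
    int            savedPartIdx;        /* index of the saved partition */
} PartitionDispatch;

/*
 * Guard-clause form of the check: bail out as soon as there is no saved
 * partition, so the remaining conditions need no extra indentation.
 */
static bool
can_use_saved_partition(PartitionDispatch *dispatch, int partidx)
{
    if (dispatch->savedPartResultInfo == NULL)
        return false;

    return dispatch->savedPartIdx == partidx;
}
```

The early `return false` is what "saves some indentation": every further condition can then be written at the top level of the function instead of inside a nested `if`.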
{
"msg_contents": "On Thu, May 27, 2021 at 1:55 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> For CanUseSavedPartitionForTuple, nit: you can check !dispatch->savedPartResultInfo at the beginning and return early.\n> This would save some indentation.\n\nSure, see the attached.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 May 2021 14:40:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, May 27, 2021 at 11:47 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> About teaching relcache about caching the target partition.\n>\n> David-san suggested cache the partidx in PartitionDesc.\n> And it will need looping and checking the cached value at each level.\n> I was thinking can we cache a partidx list[1, 2 ,3], and then we can follow\n> the list to get the last partition and do the partition CHECK only for the last\n> partition. If any unexpected thing happen, we can return to the original table\n> and redo the tuple routing without using the cached index.\n> What do you think ?\n\nWhere are you thinking to cache the partidx list? Inside\nPartitionDesc or some executor struct?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 14:53:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "From: Amit Langote <amitlangote09@gmail.com>\r\nSent: Thursday, May 27, 2021 1:54 PM\r\n> On Thu, May 27, 2021 at 11:47 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > About teaching relcache about caching the target partition.\r\n> >\r\n> > David-san suggested cache the partidx in PartitionDesc.\r\n> > And it will need looping and checking the cached value at each level.\r\n> > I was thinking can we cache a partidx list[1, 2 ,3], and then we can\r\n> > follow the list to get the last partition and do the partition CHECK\r\n> > only for the last partition. If any unexpected thing happen, we can\r\n> > return to the original table and redo the tuple routing without using the\r\n> cached index.\r\n> > What do you think ?\r\n> \r\n> Where are you thinking to cache the partidx list? Inside PartitionDesc or some\r\n> executor struct?\r\n\r\nI was thinking cache the partidx list in PartitionDescData which is in relcache, if possible, we can\r\nuse the cached partition between statements.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Thu, 27 May 2021 06:56:24 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
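The cached partidx-list idea above, i.e. follow the cached path straight to the last partition, check only that leaf, and redo full routing on any mismatch, can be sketched roughly as below. All names and types are illustrative assumptions; the real code would work with PartitionDescData and the partition constraint machinery.

```c
#include <stdbool.h>

#define MAX_LEVELS 8

/* Hypothetical leaf-constraint check: does this leaf accept the value? */
typedef bool (*leaf_check_fn)(int leaf, int value);

typedef struct CachedPath
{
    int  idx[MAX_LEVELS];   /* cached partition index at each level */
    int  nlevels;           /* depth of the cached path */
    bool valid;             /* false until a path has been cached */
} CachedPath;

/*
 * Returns the leaf partition index if the cached path still applies to
 * this value, or -1 meaning "fall back to full tuple routing".
 */
static int
route_with_cached_path(const CachedPath *path, int value, leaf_check_fn check)
{
    if (!path->valid || path->nlevels == 0)
        return -1;

    int leaf = path->idx[path->nlevels - 1];

    /* Only the last partition's CHECK runs on the fast path. */
    return check(leaf, value) ? leaf : -1;
}

/* Toy check for illustration: leaf k accepts values in [k*10, k*10+10). */
static bool
toy_leaf_check(int leaf, int value)
{
    return value / 10 == leaf;
}
```

The point of the sketch is the shape of the fallback: the fast path never errors out, it simply reports "no" and the caller returns to the root table and redoes routing from scratch.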
{
"msg_contents": "Hou-san,\n\nOn Thu, May 27, 2021 at 3:56 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> Sent: Thursday, May 27, 2021 1:54 PM\n> > On Thu, May 27, 2021 at 11:47 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > About teaching relcache about caching the target partition.\n> > >\n> > > David-san suggested cache the partidx in PartitionDesc.\n> > > And it will need looping and checking the cached value at each level.\n> > > I was thinking can we cache a partidx list[1, 2 ,3], and then we can\n> > > follow the list to get the last partition and do the partition CHECK\n> > > only for the last partition. If any unexpected thing happen, we can\n> > > return to the original table and redo the tuple routing without using the\n> > cached index.\n> > > What do you think ?\n> >\n> > Where are you thinking to cache the partidx list? Inside PartitionDesc or some\n> > executor struct?\n>\n> I was thinking cache the partidx list in PartitionDescData which is in relcache, if possible, we can\n> use the cached partition between statements.\n\nAh, okay. I thought you were talking about a different idea. How and\nwhere would you determine that a cached partidx value is indeed the\ncorrect one for a given row?\n\nAnyway, do you want to try writing a patch to see how it might work?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 17:46:00 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi Amit-san\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\nSent: Thursday, May 27, 2021 4:46 PM\r\n> Hou-san,\r\n> \r\n> On Thu, May 27, 2021 at 3:56 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > From: Amit Langote <amitlangote09@gmail.com>\r\n> > Sent: Thursday, May 27, 2021 1:54 PM\r\n> > > On Thu, May 27, 2021 at 11:47 AM houzj.fnst@fujitsu.com\r\n> > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > > About teaching relcache about caching the target partition.\r\n> > > >\r\n> > > > David-san suggested cache the partidx in PartitionDesc.\r\n> > > > And it will need looping and checking the cached value at each level.\r\n> > > > I was thinking can we cache a partidx list[1, 2 ,3], and then we\r\n> > > > can follow the list to get the last partition and do the partition\r\n> > > > CHECK only for the last partition. If any unexpected thing happen,\r\n> > > > we can return to the original table and redo the tuple routing\r\n> > > > without using the\r\n> > > cached index.\r\n> > > > What do you think ?\r\n> > >\r\n> > > Where are you thinking to cache the partidx list? Inside\r\n> > > PartitionDesc or some executor struct?\r\n> >\r\n> > I was thinking cache the partidx list in PartitionDescData which is in\r\n> > relcache, if possible, we can use the cached partition between statements.\r\n> Ah, okay. I thought you were talking about a different idea. \r\n> How and where would you determine that a cached partidx value is indeed the correct one for\r\n> a given row?\r\n> Anyway, do you want to try writing a patch to see how it might work?\r\n\r\nYeah, the different idea here is to see if it is possible to share the cached\r\npartition info between statements efficiently.\r\n\r\nBut, after some research, I found something not as expected:\r\nCurrently, we tried to use ExecPartitionCheck to check the if the cached\r\npartition is the correct one. 
And if we want to share the cached partition\r\nbetween statements, we need to Invoke ExecPartitionCheck for single-row INSERT,\r\nbut the first time ExecPartitionCheck call will need to build expression state\r\ntree for the partition. From some simple performance tests, the cost to build\r\nthe state tree could be more than the cached partition saved which could bring\r\nperformance degradation.\r\n\r\nSo, If we want to share the cached partition between statements, we seems cannot\r\nuse ExecPartitionCheck. Instead, I tried directly invoke the partition support\r\nfunction(partsupfunc) to check If the cached info is correct. In this approach I\r\ntried cache the *bound offset* in PartitionDescData, and we can use the bound offset\r\nto get the bound datum from PartitionBoundInfoData and invoke the partsupfunc\r\nto do the CHECK.\r\n\r\nAttach a POC patch about it. Just to share an idea about sharing cached partition info\r\nbetween statements.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 1 Jun 2021 08:43:05 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
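The cached-bound-offset scheme described above, i.e. remember the offset that matched the previous tuple, re-verify it with the comparison function, and only fall back to the full binary search on a miss, can be sketched as follows. This is a simplified model with assumed names: the real patch works with partsupfunc, Datum values, and PartitionBoundInfoData rather than plain ints.

```c
/* Stand-in for the partition support (comparison) function: returns
 * <0, 0, or >0 as bound compares less than, equal to, or greater than
 * the incoming value. */
typedef int (*partition_cmp_fn)(int bound, int value);

/* Bound offset cached from the previous lookup; -1 means "nothing cached". */
static int cached_offset = -1;

static int
find_bound_offset(const int *bounds, int nbounds, int value,
                  partition_cmp_fn cmp)
{
    /* Fast path: does the cached offset still match this value? */
    if (cached_offset >= 0 && cached_offset < nbounds &&
        cmp(bounds[cached_offset], value) == 0)
        return cached_offset;

    /* Slow path: ordinary binary search over the sorted bounds. */
    int lo = 0;
    int hi = nbounds - 1;

    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;
        int c = cmp(bounds[mid], value);

        if (c == 0)
        {
            cached_offset = mid;    /* remember for the next tuple */
            return mid;
        }
        if (c < 0)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;                      /* no matching bound */
}

/* Simple integer comparator used for illustration. */
static int
int_bound_cmp(int bound, int value)
{
    return (bound > value) - (bound < value);
}
```

The key property, which is what makes the approach cheaper than an ExecPartitionCheck-based recheck, is that the fast path costs exactly one comparator call and needs no expression state tree.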
{
"msg_contents": "Hou-san,\n\nOn Tue, Jun 1, 2021 at 5:43 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> From: Amit Langote <amitlangote09@gmail.com>\n> > > > Where are you thinking to cache the partidx list? Inside\n> > > > PartitionDesc or some executor struct?\n> > >\n> > > I was thinking cache the partidx list in PartitionDescData which is in\n> > > relcache, if possible, we can use the cached partition between statements.\n> >\n> > Ah, okay. I thought you were talking about a different idea.\n> > How and where would you determine that a cached partidx value is indeed the correct one for\n> > a given row?\n> > Anyway, do you want to try writing a patch to see how it might work?\n>\n> Yeah, the different idea here is to see if it is possible to share the cached\n> partition info between statements efficiently.\n>\n> But, after some research, I found something not as expected:\n\nThanks for investigating this.\n\n> Currently, we tried to use ExecPartitionCheck to check the if the cached\n> partition is the correct one. And if we want to share the cached partition\n> between statements, we need to Invoke ExecPartitionCheck for single-row INSERT,\n> but the first time ExecPartitionCheck call will need to build expression state\n> tree for the partition. From some simple performance tests, the cost to build\n> the state tree could be more than the cached partition saved which could bring\n> performance degradation.\n\nYeah, using the executor in the lower layer will defeat the whole\npoint of caching in that layer.\n\n> So, If we want to share the cached partition between statements, we seems cannot\n> use ExecPartitionCheck. Instead, I tried directly invoke the partition support\n> function(partsupfunc) to check If the cached info is correct. 
In this approach I\n> tried cache the *bound offset* in PartitionDescData, and we can use the bound offset\n> to get the bound datum from PartitionBoundInfoData and invoke the partsupfunc\n> to do the CHECK.\n>\n> Attach a POC patch about it. Just to share an idea about sharing cached partition info\n> between statements.\n\nI have not looked at your patch yet, but yeah that's what I would\nimagine doing it.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Jun 2021 20:48:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 8:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Jun 1, 2021 at 5:43 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > So, If we want to share the cached partition between statements, we seems cannot\n> > use ExecPartitionCheck. Instead, I tried directly invoke the partition support\n> > function(partsupfunc) to check If the cached info is correct. In this approach I\n> > tried cache the *bound offset* in PartitionDescData, and we can use the bound offset\n> > to get the bound datum from PartitionBoundInfoData and invoke the partsupfunc\n> > to do the CHECK.\n> >\n> > Attach a POC patch about it. Just to share an idea about sharing cached partition info\n> > between statements.\n>\n> I have not looked at your patch yet, but yeah that's what I would\n> imagine doing it.\n\nJust read it and think it looks promising.\n\nOn code, I wonder why not add the rechecking-cached-offset code\ndirectly in get_partiiton_for_tuple(), instead of adding a whole new\nfunction for that. Can you please check the attached revised version?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Jun 2021 16:38:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 4:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jun 3, 2021 at 8:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Jun 1, 2021 at 5:43 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > So, If we want to share the cached partition between statements, we seems cannot\n> > > use ExecPartitionCheck. Instead, I tried directly invoke the partition support\n> > > function(partsupfunc) to check If the cached info is correct. In this approach I\n> > > tried cache the *bound offset* in PartitionDescData, and we can use the bound offset\n> > > to get the bound datum from PartitionBoundInfoData and invoke the partsupfunc\n> > > to do the CHECK.\n> > >\n> > > Attach a POC patch about it. Just to share an idea about sharing cached partition info\n> > > between statements.\n> >\n> > I have not looked at your patch yet, but yeah that's what I would\n> > imagine doing it.\n>\n> Just read it and think it looks promising.\n>\n> On code, I wonder why not add the rechecking-cached-offset code\n> directly in get_partiiton_for_tuple(), instead of adding a whole new\n> function for that. Can you please check the attached revised version?\n\nHere's another, slightly more polished version of that. Also, I added\na check_cached parameter to get_partition_for_tuple() to allow the\ncaller to disable checking the cached version.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 4 Jun 2021 18:05:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 6:05 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jun 4, 2021 at 4:38 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Thu, Jun 3, 2021 at 8:48 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > On Tue, Jun 1, 2021 at 5:43 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > > So, If we want to share the cached partition between statements, we seems cannot\n> > > > use ExecPartitionCheck. Instead, I tried directly invoke the partition support\n> > > > function(partsupfunc) to check If the cached info is correct. In this approach I\n> > > > tried cache the *bound offset* in PartitionDescData, and we can use the bound offset\n> > > > to get the bound datum from PartitionBoundInfoData and invoke the partsupfunc\n> > > > to do the CHECK.\n> > > >\n> > > > Attach a POC patch about it. Just to share an idea about sharing cached partition info\n> > > > between statements.\n> > >\n> > > I have not looked at your patch yet, but yeah that's what I would\n> > > imagine doing it.\n> >\n> > Just read it and think it looks promising.\n> >\n> > On code, I wonder why not add the rechecking-cached-offset code\n> > directly in get_partiiton_for_tuple(), instead of adding a whole new\n> > function for that. Can you please check the attached revised version?\n\nI should have clarified a bit more on why I think a new function\nlooked unnecessary to me. The thing about that function that bothered\nme was that it appeared to duplicate a lot of code fragments of\nget_partition_for_tuple(). That kind of duplication often leads to\nbugs of omission later if something from either function needs to\nchange.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Jun 2021 20:44:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi Amit-san\r\n\r\nFrom: Amit Langote <amitlangote09@gmail.com>\r\n> On Fri, Jun 4, 2021 at 6:05 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > On Fri, Jun 4, 2021 at 4:38 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > > On Thu, Jun 3, 2021 at 8:48 PM Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > > > On Tue, Jun 1, 2021 at 5:43 PM houzj.fnst@fujitsu.com\r\n> > > > <houzj.fnst@fujitsu.com> wrote:\r\n> > > > > So, If we want to share the cached partition between statements,\r\n> > > > > we seems cannot use ExecPartitionCheck. Instead, I tried\r\n> > > > > directly invoke the partition support\r\n> > > > > function(partsupfunc) to check If the cached info is correct. In\r\n> > > > > this approach I tried cache the *bound offset* in\r\n> > > > > PartitionDescData, and we can use the bound offset to get the\r\n> > > > > bound datum from PartitionBoundInfoData and invoke the\r\n> partsupfunc to do the CHECK.\r\n> > > > >\r\n> > > > > Attach a POC patch about it. Just to share an idea about sharing\r\n> > > > > cached partition info between statements.\r\n> > > >\r\n> > > > I have not looked at your patch yet, but yeah that's what I would\r\n> > > > imagine doing it.\r\n> > >\r\n> > > Just read it and think it looks promising.\r\n> > >\r\n> > > On code, I wonder why not add the rechecking-cached-offset code\r\n> > > directly in get_partiiton_for_tuple(), instead of adding a whole new\r\n> > > function for that. Can you please check the attached revised version?\r\n> \r\n> I should have clarified a bit more on why I think a new function looked\r\n> unnecessary to me. 
The thing about that function that bothered me was that\r\n> it appeared to duplicate a lot of code fragments of get_partition_for_tuple().\r\n> That kind of duplication often leads to bugs of omission later if something from\r\n> either function needs to change.\r\n\r\nThanks for the patch and explanation, I think you are right that it’s better to add\r\nthe rechecking-cached-offset code directly in get_partition_for_tuple().\r\n\r\nAnd now, I think maybe it's time to try to optimize the performance.\r\nCurrently, if every row to be inserted in a statement belongs to a different\r\npartition, then the cache check code will bring a slight performance\r\ndegradation (AFAICS: 2% ~ 4%).\r\n\r\nSo, if we want to solve this, then we may need 1) a reloption to let the user control whether to use the cache.\r\nOr, 2) introduce some simple strategy to control whether to use the cache automatically.\r\n\r\nI have not written a patch about 1) the reloption, because I think it will be nice if we can\r\nenable this cache feature by default. So, I attached a WIP patch about approach 2).\r\n\r\nThe rough idea is to check the average batch number every 1000 rows.\r\nIf the average batch num is greater than 1, then we enable the cache check,\r\nif not, disable cache check. This is similar to what 0d5f05cde0 did.\r\n\r\nThoughts ?\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Mon, 7 Jun 2021 11:38:40 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
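The adaptive scheme proposed above, i.e. every 1000 rows, compare the number of tuples inserted against how often the matched partition changed, and enable the cache fast path only when consecutive tuples tend to land in the same partition, can be sketched like this. The field names echo those visible later in the thread (n_tups_inserted, n_offset_changed), but the function and its exact decision rule are assumptions for illustration, not the patch's actual code.

```c
#include <stdbool.h>

#define CHECK_INTERVAL 1000     /* re-evaluate every 1000 routed tuples */

typedef struct RoutingStats
{
    int  n_tups_inserted;   /* tuples routed since the last check */
    int  n_offset_changed;  /* times the winning bound offset changed */
    bool use_cache;         /* current decision: try cached offset first? */
} RoutingStats;

static void
maybe_update_caching_decision(RoutingStats *stats)
{
    if (stats->n_tups_inserted < CHECK_INTERVAL)
        return;

    /*
     * Average "batch" length = tuples per offset change.  Greater than 1
     * means consecutive tuples usually hit the same partition, so the
     * cached-offset fast path is worth its small per-tuple cost.
     */
    stats->use_cache = (stats->n_offset_changed == 0 ||
                        stats->n_tups_inserted / stats->n_offset_changed > 1);

    /* Start a fresh measurement window. */
    stats->n_tups_inserted = 0;
    stats->n_offset_changed = 0;
}
```

In the worst case cited above (every row going to a different partition), n_offset_changed equals n_tups_inserted, the ratio is 1, and the cache check, the source of the 2% ~ 4% regression, is switched off for the next window.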
{
"msg_contents": "Hou-san,\n\nOn Mon, Jun 7, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> Thanks for the patch and explanation, I think you are right that it’s better add\n> the rechecking-cached-offset code directly in get_partiiton_for_tuple().\n>\n> And now, I think maybe it's time to try to optimize the performance.\n> Currently, if every row to be inserted in a statement belongs to different\n> partition, then the cache check code will bring a slight performance\n> degradation(AFAICS: 2% ~ 4%).\n>\n> So, If we want to solve this, then we may need 1) a reloption to let user control whether use the cache.\n> Or, 2) introduce some simple strategy to control whether use cache automatically.\n>\n> I have not write a patch about 1) reloption, because I think it will be nice if we can\n> enable this cache feature by default. So, I attached a WIP patch about approach 2).\n>\n> The rough idea is to check the average batch number every 1000 rows.\n> If the average batch num is greater than 1, then we enable the cache check,\n> if not, disable cache check. This is similar to what 0d5f05cde0 did.\n\nThanks for sharing the idea and writing a patch for it.\n\nI considered a simpler heuristic where we enable/disable caching of a\ngiven offset if it is found by the binary search algorithm at least N\nconsecutive times. But your idea to check the ratio of the number of\ntuples inserted over partition/bound offset changes every N tuples\ninserted may be more adaptive.\n\nPlease find attached a revised version of your patch, where I tried to\nmake it a bit easier to follow, hopefully. While doing so, I realized\nthat caching the bound offset across queries makes little sense now,\nso I decided to keep the changes confined to execPartition.c. Do you\nhave a counter-argument to that?\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 16 Jun 2021 16:27:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
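For contrast, the simpler heuristic mentioned above, i.e. only trust a cached offset once the binary search has produced it at least N consecutive times, might look like the following. The names and the threshold value are illustrative assumptions.

```c
#include <stdbool.h>

#define CACHE_AFTER_N_HITS 3    /* illustrative threshold, not tuned */

typedef struct OffsetCache
{
    int last_offset;        /* offset seen most recently (-1 if none) */
    int n_consecutive;      /* how many times in a row it was seen */
} OffsetCache;

/*
 * Record a binary-search result.  Returns true once the offset has been
 * stable long enough that the caller may try it first for the next tuple.
 */
static bool
offset_cache_note_hit(OffsetCache *cache, int offset)
{
    if (offset == cache->last_offset)
        cache->n_consecutive++;
    else
    {
        cache->last_offset = offset;
        cache->n_consecutive = 1;   /* a change resets the streak */
    }
    return cache->n_consecutive >= CACHE_AFTER_N_HITS;
}
```

Compared with the windowed ratio above, this reacts immediately to a change of partition but never adapts its threshold, which is presumably why the ratio-based scheme was judged "more adaptive".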
{
"msg_contents": "On Wed, Jun 16, 2021 at 4:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jun 7, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > The rough idea is to check the average batch number every 1000 rows.\n> > If the average batch num is greater than 1, then we enable the cache check,\n> > if not, disable cache check. This is similar to what 0d5f05cde0 did.\n>\n> Thanks for sharing the idea and writing a patch for it.\n>\n> I considered a simpler heuristic where we enable/disable caching of a\n> given offset if it is found by the binary search algorithm at least N\n> consecutive times. But your idea to check the ratio of the number of\n> tuples inserted over partition/bound offset changes every N tuples\n> inserted may be more adaptive.\n>\n> Please find attached a revised version of your patch, where I tried to\n> make it a bit easier to follow, hopefully. While doing so, I realized\n> that caching the bound offset across queries makes little sense now,\n> so I decided to keep the changes confined to execPartition.c. Do you\n> have a counter-argument to that?\n\nAttached a slightly revised version of that patch, with a commit\nmessage this time.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Jun 2021 13:28:58 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 9:29 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> On Wed, Jun 16, 2021 at 4:27 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > On Mon, Jun 7, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > > The rough idea is to check the average batch number every 1000 rows.\n> > > If the average batch num is greater than 1, then we enable the cache\n> check,\n> > > if not, disable cache check. This is similar to what 0d5f05cde0 did.\n> >\n> > Thanks for sharing the idea and writing a patch for it.\n> >\n> > I considered a simpler heuristic where we enable/disable caching of a\n> > given offset if it is found by the binary search algorithm at least N\n> > consecutive times. But your idea to check the ratio of the number of\n> > tuples inserted over partition/bound offset changes every N tuples\n> > inserted may be more adaptive.\n> >\n> > Please find attached a revised version of your patch, where I tried to\n> > make it a bit easier to follow, hopefully. While doing so, I realized\n> > that caching the bound offset across queries makes little sense now,\n> > so I decided to keep the changes confined to execPartition.c. 
Do you\n> > have a counter-argument to that?\n>\n> Attached a slightly revised version of that patch, with a commit\n> message this time.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\n\nHi,\n\n+ int n_tups_inserted;\n+ int n_offset_changed;\n\nSince tuples appear in plural, maybe offset should be as well: offsets.\n\n+ part_index = get_cached_list_partition(pd, boundinfo, key,\n+ values);\n\nnit:either put values on the same line, or align the 4 parameters on\ndifferent lines.\n\n+ if (part_index < 0)\n+ {\n+ bound_offset =\npartition_range_datum_bsearch(key->partsupfunc,\n\nDo we need to check the value of equal before computing part_index ?\n\nCheers\n\nOn Wed, Jun 16, 2021 at 9:29 PM Amit Langote <amitlangote09@gmail.com> wrote:On Wed, Jun 16, 2021 at 4:27 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Jun 7, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > The rough idea is to check the average batch number every 1000 rows.\n> > If the average batch num is greater than 1, then we enable the cache check,\n> > if not, disable cache check. This is similar to what 0d5f05cde0 did.\n>\n> Thanks for sharing the idea and writing a patch for it.\n>\n> I considered a simpler heuristic where we enable/disable caching of a\n> given offset if it is found by the binary search algorithm at least N\n> consecutive times. But your idea to check the ratio of the number of\n> tuples inserted over partition/bound offset changes every N tuples\n> inserted may be more adaptive.\n>\n> Please find attached a revised version of your patch, where I tried to\n> make it a bit easier to follow, hopefully. While doing so, I realized\n> that caching the bound offset across queries makes little sense now,\n> so I decided to keep the changes confined to execPartition.c. 
Do you\n> have a counter-argument to that?\n\nAttached a slightly revised version of that patch, with a commit\nmessage this time.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.comHi,+ int n_tups_inserted;+ int n_offset_changed;Since tuples appear in plural, maybe offset should be as well: offsets.+ part_index = get_cached_list_partition(pd, boundinfo, key,+ values);nit:either put values on the same line, or align the 4 parameters on different lines.+ if (part_index < 0)+ {+ bound_offset = partition_range_datum_bsearch(key->partsupfunc,Do we need to check the value of equal before computing part_index ?Cheers",
"msg_date": "Wed, 16 Jun 2021 21:51:00 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi,\n\nThanks for reading the patch.\n\nOn Thu, Jun 17, 2021 at 1:46 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Wed, Jun 16, 2021 at 9:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Attached a slightly revised version of that patch, with a commit\n>> message this time.\n>\n> + int n_tups_inserted;\n> + int n_offset_changed;\n>\n> Since tuples appear in plural, maybe offset should be as well: offsets.\n\nI was hoping one would read that as \"the number of times the offset\nchanged\" while inserting \"that many tuples\", so the singular form\nmakes more sense to me.\n\nActually, I even considered naming the variable n_offsets_seen, in\nwhich case the plural form makes sense, but I chose not to go with\nthat name.\n\n> + part_index = get_cached_list_partition(pd, boundinfo, key,\n> + values);\n>\n> nit:either put values on the same line, or align the 4 parameters on different lines.\n\nNot sure pgindent requires us to follow that style, but I too prefer\nthe way you suggest. It does make the patch a bit longer though.\n\n> + if (part_index < 0)\n> + {\n> + bound_offset = partition_range_datum_bsearch(key->partsupfunc,\n>\n> Do we need to check the value of equal before computing part_index ?\n\nJust in case you didn't notice, this is not new code, but appears as a\ndiff hunk due to indenting.\n\nAs for whether the code should be checking 'equal', I don't think the\nlogic at this particular site should do that. Requiring 'equal' to be\ntrue would mean that this code would only accept tuples that exactly\nmatch the bound that partition_range_datum_bsearch() returned.\n\nUpdated patch attached. Aside from addressing your 2nd point, I fixed\na typo in a comment.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Jun 2021 14:36:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 10:37 PM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi,\n>\n> Thanks for reading the patch.\n>\n> On Thu, Jun 17, 2021 at 1:46 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> > On Wed, Jun 16, 2021 at 9:29 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >> Attached a slightly revised version of that patch, with a commit\n> >> message this time.\n> >\n> > + int n_tups_inserted;\n> > + int n_offset_changed;\n> >\n> > Since tuples appear in plural, maybe offset should be as well: offsets.\n>\n> I was hoping one would read that as \"the number of times the offset\n> changed\" while inserting \"that many tuples\", so the singular form\n> makes more sense to me.\n>\n> Actually, I even considered naming the variable n_offsets_seen, in\n> which case the plural form makes sense, but I chose not to go with\n> that name.\n>\n> > + part_index = get_cached_list_partition(pd, boundinfo,\n> key,\n> > + values);\n> >\n> > nit:either put values on the same line, or align the 4 parameters on\n> different lines.\n>\n> Not sure pgindent requires us to follow that style, but I too prefer\n> the way you suggest. It does make the patch a bit longer though.\n>\n> > + if (part_index < 0)\n> > + {\n> > + bound_offset =\n> partition_range_datum_bsearch(key->partsupfunc,\n> >\n> > Do we need to check the value of equal before computing part_index ?\n>\n> Just in case you didn't notice, this is not new code, but appears as a\n> diff hunk due to indenting.\n>\n> As for whether the code should be checking 'equal', I don't think the\n> logic at this particular site should do that. Requiring 'equal' to be\n> true would mean that this code would only accept tuples that exactly\n> match the bound that partition_range_datum_bsearch() returned.\n>\n> Updated patch attached. 
Aside from addressing your 2nd point, I fixed\n> a typo in a comment.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n\nHi, Amit:\nThanks for the quick response.\nw.r.t. the last point, since variable equal is defined within the case of\nPARTITION_STRATEGY_RANGE,\nI wonder if it can be named don_t_care or something like that.\nThat way, it would be clearer to the reader that its value is purposefully\nnot checked.\n\nIt is fine to leave the variable as is since this was existing code.\n\nCheers\n\nOn Wed, Jun 16, 2021 at 10:37 PM Amit Langote <amitlangote09@gmail.com> wrote:Hi,\n\nThanks for reading the patch.\n\nOn Thu, Jun 17, 2021 at 1:46 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Wed, Jun 16, 2021 at 9:29 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Attached a slightly revised version of that patch, with a commit\n>> message this time.\n>\n> + int n_tups_inserted;\n> + int n_offset_changed;\n>\n> Since tuples appear in plural, maybe offset should be as well: offsets.\n\nI was hoping one would read that as \"the number of times the offset\nchanged\" while inserting \"that many tuples\", so the singular form\nmakes more sense to me.\n\nActually, I even considered naming the variable n_offsets_seen, in\nwhich case the plural form makes sense, but I chose not to go with\nthat name.\n\n> + part_index = get_cached_list_partition(pd, boundinfo, key,\n> + values);\n>\n> nit:either put values on the same line, or align the 4 parameters on different lines.\n\nNot sure pgindent requires us to follow that style, but I too prefer\nthe way you suggest. 
It does make the patch a bit longer though.\n\n> + if (part_index < 0)\n> + {\n> + bound_offset = partition_range_datum_bsearch(key->partsupfunc,\n>\n> Do we need to check the value of equal before computing part_index ?\n\nJust in case you didn't notice, this is not new code, but appears as a\ndiff hunk due to indenting.\n\nAs for whether the code should be checking 'equal', I don't think the\nlogic at this particular site should do that. Requiring 'equal' to be\ntrue would mean that this code would only accept tuples that exactly\nmatch the bound that partition_range_datum_bsearch() returned.\n\nUpdated patch attached. Aside from addressing your 2nd point, I fixed\na typo in a comment.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.comHi, Amit:Thanks for the quick response.w.r.t. the last point, since variable equal is defined within the case of PARTITION_STRATEGY_RANGE,I wonder if it can be named don_t_care or something like that.That way, it would be clearer to the reader that its value is purposefully not checked.It is fine to leave the variable as is since this was existing code.Cheers",
"msg_date": "Thu, 17 Jun 2021 00:23:09 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 4:18 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Wed, Jun 16, 2021 at 10:37 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> > + if (part_index < 0)\n>> > + {\n>> > + bound_offset = partition_range_datum_bsearch(key->partsupfunc,\n>> >\n>> > Do we need to check the value of equal before computing part_index ?\n>>\n>> Just in case you didn't notice, this is not new code, but appears as a\n>> diff hunk due to indenting.\n>>\n>> As for whether the code should be checking 'equal', I don't think the\n>> logic at this particular site should do that. Requiring 'equal' to be\n>> true would mean that this code would only accept tuples that exactly\n>> match the bound that partition_range_datum_bsearch() returned.\n>\n> Hi, Amit:\n> Thanks for the quick response.\n> w.r.t. the last point, since variable equal is defined within the case of PARTITION_STRATEGY_RANGE,\n> I wonder if it can be named don_t_care or something like that.\n> That way, it would be clearer to the reader that its value is purposefully not checked.\n\nNormally, we write a comment in such cases, like\n\n/* The value returned in 'equal' is ignored! */\n\nThough I forgot to do that when I first wrote this code. :(\n\n> It is fine to leave the variable as is since this was existing code.\n\nYeah, maybe there's not much to be gained by doing something about\nthat now, unless of course a committer insists that we do.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Jun 2021 16:27:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "I noticed that there is no CF entry for this, so created one in the next CF:\n\nhttps://commitfest.postgresql.org/34/3270/\n\nRebased patch attached.",
"msg_date": "Mon, 2 Aug 2021 15:29:37 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "There are a whole lot of different patches in this thread.\n\nHowever this last one https://commitfest.postgresql.org/37/3270/\ncreated by Amit seems like a fairly straightforward optimization that\ncan be evaluated on its own separately from the others and seems quite\nmature. I'm actually inclined to set it to \"Ready for Committer\".\n\nIncidentally a quick read-through of the patch myself and the only\nquestion I have is how the parameters of the adaptive algorithm were\nchosen. They seem ludicrously conservative to me and a bit of simple\narguments about how expensive an extra check is versus the time saved\nin the boolean search should be easy enough to come up with to justify\nwhatever values make sense.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 17:54:08 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi Greg,\n\nOn Wed, Mar 16, 2022 at 6:54 AM Greg Stark <stark@mit.edu> wrote:\n> There are a whole lot of different patches in this thread.\n>\n> However this last one https://commitfest.postgresql.org/37/3270/\n> created by Amit seems like a fairly straightforward optimization that\n> can be evaluated on its own separately from the others and seems quite\n> mature. I'm actually inclined to set it to \"Ready for Committer\".\n\nThanks for taking a look at it.\n\n> Incidentally a quick read-through of the patch myself and the only\n> question I have is how the parameters of the adaptive algorithm were\n> chosen. They seem ludicrously conservative to me\n\nDo you think CACHE_BOUND_OFFSET_THRESHOLD_TUPS (1000) is too high? I\nsuspect maybe you do.\n\nBasically, the way this works is that once set, cached_bound_offset is\nnot reset until encountering a tuple for which cached_bound_offset\ndoesn't give the correct partition, so the threshold doesn't matter\nwhen the caching is active. However, once reset, it is not again set\ntill the threshold number of tuples have been processed and that too\nonly if the binary searches done during that interval appear to have\nreturned the same bound offset in succession a number of times. Maybe\nwaiting a 1000 tuples to re-assess that is a bit too conservative,\nyes. I guess even as small a number as 10 is fine here?\n\nI've attached an updated version of the patch, though I haven't\nchanged the threshold constant.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\nOn Wed, Mar 16, 2022 at 6:54 AM Greg Stark <stark@mit.edu> wrote:\n>\n> There are a whole lot of different patches in this thread.\n>\n> However this last one https://commitfest.postgresql.org/37/3270/\n> created by Amit seems like a fairly straightforward optimization that\n> can be evaluated on its own separately from the others and seems quite\n> mature. 
I'm actually inclined to set it to \"Ready for Committer\".\n>\n> Incidentally a quick read-through of the patch myself and the only\n> question I have is how the parameters of the adaptive algorithm were\n> chosen. They seem ludicrously conservative to me and a bit of simple\n> arguments about how expensive an extra check is versus the time saved\n> in the boolean search should be easy enough to come up with to justify\n> whatever values make sense.\n\n\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 23 Mar 2022 21:52:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 5:52 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Hi Greg,\n>\n> On Wed, Mar 16, 2022 at 6:54 AM Greg Stark <stark@mit.edu> wrote:\n> > There are a whole lot of different patches in this thread.\n> >\n> > However this last one https://commitfest.postgresql.org/37/3270/\n> > created by Amit seems like a fairly straightforward optimization that\n> > can be evaluated on its own separately from the others and seems quite\n> > mature. I'm actually inclined to set it to \"Ready for Committer\".\n>\n> Thanks for taking a look at it.\n>\n> > Incidentally a quick read-through of the patch myself and the only\n> > question I have is how the parameters of the adaptive algorithm were\n> > chosen. They seem ludicrously conservative to me\n>\n> Do you think CACHE_BOUND_OFFSET_THRESHOLD_TUPS (1000) is too high? I\n> suspect maybe you do.\n>\n> Basically, the way this works is that once set, cached_bound_offset is\n> not reset until encountering a tuple for which cached_bound_offset\n> doesn't give the correct partition, so the threshold doesn't matter\n> when the caching is active. However, once reset, it is not again set\n> till the threshold number of tuples have been processed and that too\n> only if the binary searches done during that interval appear to have\n> returned the same bound offset in succession a number of times. Maybe\n> waiting a 1000 tuples to re-assess that is a bit too conservative,\n> yes. 
I guess even as small a number as 10 is fine here?\n>\n> I've attached an updated version of the patch, though I haven't\n> changed the threshold constant.\n>\n> --\n> Amit Langote\n> EDB: http://www.enterprisedb.com\n>\n> On Wed, Mar 16, 2022 at 6:54 AM Greg Stark <stark@mit.edu> wrote:\n> >\n> > There are a whole lot of different patches in this thread.\n> >\n> > However this last one https://commitfest.postgresql.org/37/3270/\n> > created by Amit seems like a fairly straightforward optimization that\n> > can be evaluated on its own separately from the others and seems quite\n> > mature. I'm actually inclined to set it to \"Ready for Committer\".\n> >\n> > Incidentally a quick read-through of the patch myself and the only\n> > question I have is how the parameters of the adaptive algorithm were\n> > chosen. They seem ludicrously conservative to me and a bit of simple\n> > arguments about how expensive an extra check is versus the time saved\n> > in the boolean search should be easy enough to come up with to justify\n> > whatever values make sense.\n>\n> Hi,\n\n+ * Threshold of the number of tuples to need to have been processed before\n+ * maybe_cache_partition_bound_offset() (re-)assesses whether caching must\nbe\n\nThe first part of the comment should be:\n\nThreshold of the number of tuples which need to have been processed\n\n+ (double) pd->n_tups_inserted / pd->n_offset_changed > 1)\n\nI think division can be avoided - the condition can be written as:\n\n pd->n_tups_inserted > pd->n_offset_changed\n\n+ /* Check if the value is below the high bound */\n\nhigh bound -> upper bound\n\nCheers",
"msg_date": "Wed, 23 Mar 2022 09:59:01 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, Mar 24, 2022 at 1:55 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n> On Wed, Mar 23, 2022 at 5:52 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> I've attached an updated version of the patch, though I haven't\n>> changed the threshold constant.\n> + * Threshold of the number of tuples to need to have been processed before\n> + * maybe_cache_partition_bound_offset() (re-)assesses whether caching must be\n>\n> The first part of the comment should be:\n>\n> Threshold of the number of tuples which need to have been processed\n\nSounds the same to me, so leaving it as it is.\n\n> + (double) pd->n_tups_inserted / pd->n_offset_changed > 1)\n>\n> I think division can be avoided - the condition can be written as:\n>\n> pd->n_tups_inserted > pd->n_offset_changed\n>\n> + /* Check if the value is below the high bound */\n>\n> high bound -> upper bound\n\nBoth done, thanks.\n\nIn the attached updated patch, I've also lowered the threshold number\nof tuples to wait before re-enabling caching from 1000 down to 10.\nAFAICT, it only makes things better for the cases in which the\nproposed caching is supposed to help, while not affecting the cases in\nwhich caching might actually make things worse.\n\nI've repeated the benchmark mentioned in [1]:\n\n-- creates a range-partitioned table with 1000 partitions\ncreate unlogged table foo (a int) partition by range (a);\nselect 'create unlogged table foo_' || i || ' partition of foo for\nvalues from (' || (i-1)*100000+1 || ') to (' || i*100000+1 || ');'\nfrom generate_series(1, 1000) i;\n\\gexec\n\n-- generates a 100 million record file\ncopy (select generate_series(1, 100000000)) to '/tmp/100m.csv' csv;\n\nHEAD:\n\npostgres=# copy foo from '/tmp/100m.csv' csv; truncate foo;\nCOPY 100000000\nTime: 39445.421 ms (00:39.445)\nTRUNCATE TABLE\nTime: 381.570 ms\npostgres=# copy foo from '/tmp/100m.csv' csv; truncate foo;\nCOPY 100000000\nTime: 38779.235 ms (00:38.779)\n\nPatched:\n\npostgres=# copy foo from 
'/tmp/100m.csv' csv; truncate foo;\nCOPY 100000000\nTime: 33136.202 ms (00:33.136)\nTRUNCATE TABLE\nTime: 394.939 ms\npostgres=# copy foo from '/tmp/100m.csv' csv; truncate foo;\nCOPY 100000000\nTime: 33914.856 ms (00:33.915)\nTRUNCATE TABLE\nTime: 407.451 ms\n\nSo roughly, 38 seconds with HEAD vs. 33 seconds with the patch applied.\n\n(Curiously, the numbers with both HEAD and patched look worse this\ntime around, because they were 31 seconds with HEAD vs. 26 seconds\nwith patched back in May 2021. Unless that's measurement noise, maybe\nsomething to look into.)\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqFbMSLDMinPRsGQVn_gfb-bMy0J2z_rZ0-b9kSfxXF%2BAg%40mail.gmail.com",
"msg_date": "Fri, 25 Mar 2022 12:22:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Is this a problem with the patch or its tests?\n\n[18:14:20.798] # poll_query_until timed out executing this query:\n[18:14:20.798] # SELECT count(1) = 0 FROM pg_subscription_rel WHERE\nsrsubstate NOT IN ('r', 's');\n[18:14:20.798] # expecting this output:\n[18:14:20.798] # t\n[18:14:20.798] # last actual query output:\n[18:14:20.798] # f\n[18:14:20.798] # with stderr:\n[18:14:20.798] # Tests were run but no plan was declared and\ndone_testing() was not seen.\n[18:14:20.798] # Looks like your test exited with 60 just after 31.\n[18:14:20.798] [18:12:21] t/013_partition.pl .................\n[18:14:20.798] Dubious, test returned 60 (wstat 15360, 0x3c00)\n...\n[18:14:20.798] Test Summary Report\n[18:14:20.798] -------------------\n[18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n[18:14:20.798] Non-zero exit status: 60\n[18:14:20.798] Parse errors: No plan found in TAP output\n[18:14:20.798] Files=32, Tests=328, 527 wallclock secs ( 0.16 usr 0.09\nsys + 99.81 cusr 87.08 csys = 187.14 CPU)\n[18:14:20.798] Result: FAIL\n\n\n",
"msg_date": "Sun, 3 Apr 2022 09:31:07 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n> Is this a problem with the patch or its tests?\n>\n> [18:14:20.798] # poll_query_until timed out executing this query:\n> [18:14:20.798] # SELECT count(1) = 0 FROM pg_subscription_rel WHERE\n> srsubstate NOT IN ('r', 's');\n> [18:14:20.798] # expecting this output:\n> [18:14:20.798] # t\n> [18:14:20.798] # last actual query output:\n> [18:14:20.798] # f\n> [18:14:20.798] # with stderr:\n> [18:14:20.798] # Tests were run but no plan was declared and\n> done_testing() was not seen.\n> [18:14:20.798] # Looks like your test exited with 60 just after 31.\n> [18:14:20.798] [18:12:21] t/013_partition.pl .................\n> [18:14:20.798] Dubious, test returned 60 (wstat 15360, 0x3c00)\n> ...\n> [18:14:20.798] Test Summary Report\n> [18:14:20.798] -------------------\n> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n> [18:14:20.798] Non-zero exit status: 60\n> [18:14:20.798] Parse errors: No plan found in TAP output\n> [18:14:20.798] Files=32, Tests=328, 527 wallclock secs ( 0.16 usr 0.09\n> sys + 99.81 cusr 87.08 csys = 187.14 CPU)\n> [18:14:20.798] Result: FAIL\n\nHmm, make check-world passes for me after rebasing the patch (v10) to\nthe latest HEAD (clean), nor do I see a failure on cfbot:\n\nhttp://cfbot.cputube.org/amit-langote.html\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Apr 2022 12:37:51 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n>> Is this a problem with the patch or its tests?\n>> [18:14:20.798] Test Summary Report\n>> [18:14:20.798] -------------------\n>> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n\n> Hmm, make check-world passes for me after rebasing the patch (v10) to\n> the latest HEAD (clean), nor do I see a failure on cfbot:\n> http://cfbot.cputube.org/amit-langote.html\n\n013_partition.pl has been failing regularly in the buildfarm,\nmost recently here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-31%2000%3A49%3A45\n\nI don't think there's room to blame any uncommitted patches\nfor that. Somebody broke it a short time before here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-03-17%2016%3A08%3A19\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Apr 2022 00:07:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-06 00:07:07 -0400, Tom Lane wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n> >> Is this a problem with the patch or its tests?\n> >> [18:14:20.798] Test Summary Report\n> >> [18:14:20.798] -------------------\n> >> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n> \n> > Hmm, make check-world passes for me after rebasing the patch (v10) to\n> > the latest HEAD (clean), nor do I see a failure on cfbot:\n> > http://cfbot.cputube.org/amit-langote.html\n> \n> 013_partition.pl has been failing regularly in the buildfarm,\n> most recently here:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-31%2000%3A49%3A45\n\nJust failed locally on my machine as well.\n\n\n> I don't think there's room to blame any uncommitted patches\n> for that. Somebody broke it a short time before here:\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-03-17%2016%3A08%3A19\n\nThe obvious thing to point a finger at is\n\ncommit c91f71b9dc91ef95e1d50d6d782f477258374fc6\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: 2022-03-16 16:42:47 +0100\n\n Fix publish_as_relid with multiple publications\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 7 Apr 2022 00:37:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Hi,\n\nOn Thu, Apr 7, 2022 at 4:37 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-04-06 00:07:07 -0400, Tom Lane wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n> > >> Is this a problem with the patch or its tests?\n> > >> [18:14:20.798] Test Summary Report\n> > >> [18:14:20.798] -------------------\n> > >> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n> >\n> > > Hmm, make check-world passes for me after rebasing the patch (v10) to\n> > > the latest HEAD (clean), nor do I see a failure on cfbot:\n> > > http://cfbot.cputube.org/amit-langote.html\n> >\n> > 013_partition.pl has been failing regularly in the buildfarm,\n> > most recently here:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-31%2000%3A49%3A45\n>\n> Just failed locally on my machine as well.\n>\n>\n> > I don't think there's room to blame any uncommitted patches\n> > for that. 
Somebody broke it a short time before here:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-03-17%2016%3A08%3A19\n>\n> The obvious thing to point a finger at is\n>\n> commit c91f71b9dc91ef95e1d50d6d782f477258374fc6\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: 2022-03-16 16:42:47 +0100\n>\n> Fix publish_as_relid with multiple publications\n>\n\nI've not managed to reproduce this issue on my machine but while\nreviewing the code and the server logs[1] I may have found possible\nbugs:\n\n2022-04-08 12:59:30.701 EDT [91997:1] LOG: logical replication apply\nworker for subscription \"sub2\" has started\n2022-04-08 12:59:30.702 EDT [91998:3] 013_partition.pl LOG:\nstatement: ALTER SUBSCRIPTION sub2 SET PUBLICATION pub_lower_level,\npub_all\n2022-04-08 12:59:30.733 EDT [91998:4] 013_partition.pl LOG:\ndisconnection: session time: 0:00:00.036 user=buildfarm\ndatabase=postgres host=[local]\n2022-04-08 12:59:30.740 EDT [92001:1] LOG: logical replication table\nsynchronization worker for subscription \"sub2\", table \"tab4_1\" has\nstarted\n2022-04-08 12:59:30.744 EDT [91997:2] LOG: logical replication apply\nworker for subscription \"sub2\" will restart because of a parameter\nchange\n2022-04-08 12:59:30.750 EDT [92003:1] LOG: logical replication table\nsynchronization worker for subscription \"sub2\", table \"tab3\" has\nstarted\n\nThe logs say that the apply worker for \"sub2\" finished whereas the\ntablesync workers for \"tab4_1\" and \"tab3\" started. After these logs,\nthere are no logs that these tablesync workers finished and the apply\nworker for \"sub2\" restarted, until the timeout. While reviewing the\ncode, I realized that the tablesync workers can advance its relstate\neven without the apply worker intervention.\n\nAfter a tablesync worker copies the table it sets\nSUBREL_STATE_SYNCWAIT to its relstate, then it waits for the apply\nworker to update the relstate to SUBREL_STATE_CATCHUP. 
If the apply\nworker has already died, it breaks from the wait loop and returns\nfalse:\n\nwait_for_worker_state_change():\n\n for (;;)\n {\n LogicalRepWorker *worker;\n\n :\n\n /*\n * Bail out if the apply worker has died, else signal it we're\n * waiting.\n */\n LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n worker = logicalrep_worker_find(MyLogicalRepWorker->subid,\n InvalidOid, false);\n if (worker && worker->proc)\n logicalrep_worker_wakeup_ptr(worker);\n LWLockRelease(LogicalRepWorkerLock);\n if (!worker)\n break;\n\n :\n }\n\n return false;\n\nHowever, the caller doesn't check the return value at all:\n\n /*\n * We are done with the initial data synchronization, update the state.\n */\n SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCWAIT;\n MyLogicalRepWorker->relstate_lsn = *origin_startpos;\n SpinLockRelease(&MyLogicalRepWorker->relmutex);\n\n /*\n * Finally, wait until the main apply worker tells us to catch up and then\n * return to let LogicalRepApplyLoop do it.\n */\n wait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n return slotname;\n\nTherefore, the tablesync worker started logical replication while\nkeeping its relstate as SUBREL_STATE_SYNCWAIT.\n\nGiven the server logs, it's likely that both tablesync workers for\n\"tab4_1\" and \"tab3\" were in this situation. 
That is, there were two\ntablesync workers who were applying changes for the target relation\nbut the relstate was SUBREL_STATE_SYNCWAIT.\n\nWhen it comes to starting the apply worker, probably it didn't happen\nsince there are already running tablesync workers as much as\nmax_sync_workers_per_subscription (2 by default):\n\nlogicalrep_worker_launch():\n\n /*\n * If we reached the sync worker limit per subscription, just exit\n * silently as we might get here because of an otherwise harmless race\n * condition.\n */\n if (nsyncworkers >= max_sync_workers_per_subscription)\n {\n LWLockRelease(LogicalRepWorkerLock);\n return;\n }\n\nThis scenario seems possible in principle but I've not managed to\nreproduce this issue so I might be wrong. Especially, according to the\nserver logs, it seems like the tablesync workers were launched before\nthe apply worker restarted due to parameter change and this is a\ncommon pattern among other failure logs. But I'm not sure how it could\nreally happen. IIUC the apply worker always re-reads subscription (and\nexits if there is parameter change) and then requests to launch\ntablesync workers accordingly. Also, the fact that we don't check the\nreturn value of wiat_for_worker_state_change() is not a new thing; we\nhave been living with this behavior since v10. So I'm not really sure\nwhy this problem appeared recently if my hypothesis is correct.\n\nRegards,\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=grassquit&dt=2022-04-08%2014%3A13%3A27&stg=subscription-check\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 12 Apr 2022 09:45:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 6:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Apr 7, 2022 at 4:37 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2022-04-06 00:07:07 -0400, Tom Lane wrote:\n> > > Amit Langote <amitlangote09@gmail.com> writes:\n> > > > On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n> > > >> Is this a problem with the patch or its tests?\n> > > >> [18:14:20.798] Test Summary Report\n> > > >> [18:14:20.798] -------------------\n> > > >> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n> > >\n> > > > Hmm, make check-world passes for me after rebasing the patch (v10) to\n> > > > the latest HEAD (clean), nor do I see a failure on cfbot:\n> > > > http://cfbot.cputube.org/amit-langote.html\n> > >\n> > > 013_partition.pl has been failing regularly in the buildfarm,\n> > > most recently here:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-31%2000%3A49%3A45\n> >\n> > Just failed locally on my machine as well.\n> >\n> >\n> > > I don't think there's room to blame any uncommitted patches\n> > > for that. 
Somebody broke it a short time before here:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-03-17%2016%3A08%3A19\n> >\n> > The obvious thing to point a finger at is\n> >\n> > commit c91f71b9dc91ef95e1d50d6d782f477258374fc6\n> > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > Date: 2022-03-16 16:42:47 +0100\n> >\n> > Fix publish_as_relid with multiple publications\n> >\n>\n> I've not managed to reproduce this issue on my machine but while\n> reviewing the code and the server logs[1] I may have found possible\n> bugs:\n>\n> 2022-04-08 12:59:30.701 EDT [91997:1] LOG: logical replication apply\n> worker for subscription \"sub2\" has started\n> 2022-04-08 12:59:30.702 EDT [91998:3] 013_partition.pl LOG:\n> statement: ALTER SUBSCRIPTION sub2 SET PUBLICATION pub_lower_level,\n> pub_all\n> 2022-04-08 12:59:30.733 EDT [91998:4] 013_partition.pl LOG:\n> disconnection: session time: 0:00:00.036 user=buildfarm\n> database=postgres host=[local]\n> 2022-04-08 12:59:30.740 EDT [92001:1] LOG: logical replication table\n> synchronization worker for subscription \"sub2\", table \"tab4_1\" has\n> started\n> 2022-04-08 12:59:30.744 EDT [91997:2] LOG: logical replication apply\n> worker for subscription \"sub2\" will restart because of a parameter\n> change\n> 2022-04-08 12:59:30.750 EDT [92003:1] LOG: logical replication table\n> synchronization worker for subscription \"sub2\", table \"tab3\" has\n> started\n>\n> The logs say that the apply worker for \"sub2\" finished whereas the\n> tablesync workers for \"tab4_1\" and \"tab3\" started. After these logs,\n> there are no logs that these tablesync workers finished and the apply\n> worker for \"sub2\" restarted, until the timeout. 
While reviewing the\n> code, I realized that the tablesync workers can advance its relstate\n> even without the apply worker intervention.\n>\n> After a tablesync worker copies the table it sets\n> SUBREL_STATE_SYNCWAIT to its relstate, then it waits for the apply\n> worker to update the relstate to SUBREL_STATE_CATCHUP. If the apply\n> worker has already died, it breaks from the wait loop and returns\n> false:\n>\n> wait_for_worker_state_change():\n>\n> for (;;)\n> {\n> LogicalRepWorker *worker;\n>\n> :\n>\n> /*\n> * Bail out if the apply worker has died, else signal it we're\n> * waiting.\n> */\n> LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> worker = logicalrep_worker_find(MyLogicalRepWorker->subid,\n> InvalidOid, false);\n> if (worker && worker->proc)\n> logicalrep_worker_wakeup_ptr(worker);\n> LWLockRelease(LogicalRepWorkerLock);\n> if (!worker)\n> break;\n>\n> :\n> }\n>\n> return false;\n>\n> However, the caller doesn't check the return value at all:\n>\n> /*\n> * We are done with the initial data synchronization, update the state.\n> */\n> SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCWAIT;\n> MyLogicalRepWorker->relstate_lsn = *origin_startpos;\n> SpinLockRelease(&MyLogicalRepWorker->relmutex);\n>\n> /*\n> * Finally, wait until the main apply worker tells us to catch up and then\n> * return to let LogicalRepApplyLoop do it.\n> */\n> wait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n> return slotname;\n>\n> Therefore, the tablesync worker started logical replication while\n> keeping its relstate as SUBREL_STATE_SYNCWAIT.\n>\n> Given the server logs, it's likely that both tablesync workers for\n> \"tab4_1\" and \"tab3\" were in this situation. 
That is, there were two\n> tablesync workers who were applying changes for the target relation\n> but the relstate was SUBREL_STATE_SYNCWAIT.\n>\n> When it comes to starting the apply worker, probably it didn't happen\n> since there are already running tablesync workers as much as\n> max_sync_workers_per_subscription (2 by default):\n>\n> logicalrep_worker_launch():\n>\n> /*\n> * If we reached the sync worker limit per subscription, just exit\n> * silently as we might get here because of an otherwise harmless race\n> * condition.\n> */\n> if (nsyncworkers >= max_sync_workers_per_subscription)\n> {\n> LWLockRelease(LogicalRepWorkerLock);\n> return;\n> }\n>\n> This scenario seems possible in principle but I've not managed to\n> reproduce this issue so I might be wrong.\n>\n\nThis is exactly the same analysis I have done in the original thread\nwhere that patch was committed. I have found some crude ways to\nreproduce it with a different test as well. See emails [1][2][3].\n\n> Especially, according to the\n> server logs, it seems like the tablesync workers were launched before\n> the apply worker restarted due to parameter change and this is a\n> common pattern among other failure logs. But I'm not sure how it could\n> really happen. IIUC the apply worker always re-reads subscription (and\n> exits if there is parameter change) and then requests to launch\n> tablesync workers accordingly.\n>\n\nIs there any rule/documentation which ensures that we must re-read the\nsubscription parameter change before trying to launch sync workers?\n\nActually, it would be better if we discuss this problem on another\nthread [1] to avoid hijacking this thread. So, it would be good if you\nrespond there with your thoughts. 
Thanks for looking into this.\n\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LpBFU49Ohbnk%3Ddv_v9YP%2BKqh1%2BSf8i%2B%2B_s-QhD1Gy4Qw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1JzzoE61CY1qi9Vcdi742JFwG4YA3XpoMHwfKNhbFic6g%40mail.gmail.com\n[3] - https://www.postgresql.org/message-id/CAA4eK1JcQRQw0G-U4A%2BvaGaBWSvggYMMDJH4eDtJ0Yf2eUYXyA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Apr 2022 11:09:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 12, 2022 at 6:16 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > On Thu, Apr 7, 2022 at 4:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2022-04-06 00:07:07 -0400, Tom Lane wrote:\n> > > > Amit Langote <amitlangote09@gmail.com> writes:\n> > > > > On Sun, Apr 3, 2022 at 10:31 PM Greg Stark <stark@mit.edu> wrote:\n> > > > >> Is this a problem with the patch or its tests?\n> > > > >> [18:14:20.798] Test Summary Report\n> > > > >> [18:14:20.798] -------------------\n> > > > >> [18:14:20.798] t/013_partition.pl (Wstat: 15360 Tests: 31 Failed: 0)\n> > > >\n> > > > > Hmm, make check-world passes for me after rebasing the patch (v10) to\n> > > > > the latest HEAD (clean), nor do I see a failure on cfbot:\n> > > > > http://cfbot.cputube.org/amit-langote.html\n> > > >\n> > > > 013_partition.pl has been failing regularly in the buildfarm,\n> > > > most recently here:\n> > > >\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=florican&dt=2022-03-31%2000%3A49%3A45\n> > >\n> > > Just failed locally on my machine as well.\n> > >\n> > >\n> > > > I don't think there's room to blame any uncommitted patches\n> > > > for that. 
Somebody broke it a short time before here:\n> > > >\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2022-03-17%2016%3A08%3A19\n> > >\n> > > The obvious thing to point a finger at is\n> > >\n> > > commit c91f71b9dc91ef95e1d50d6d782f477258374fc6\n> > > Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> > > Date: 2022-03-16 16:42:47 +0100\n> > >\n> > > Fix publish_as_relid with multiple publications\n> > >\n> >\n> > I've not managed to reproduce this issue on my machine but while\n> > reviewing the code and the server logs[1] I may have found possible\n> > bugs:\n> >\n> > 2022-04-08 12:59:30.701 EDT [91997:1] LOG: logical replication apply\n> > worker for subscription \"sub2\" has started\n> > 2022-04-08 12:59:30.702 EDT [91998:3] 013_partition.pl LOG:\n> > statement: ALTER SUBSCRIPTION sub2 SET PUBLICATION pub_lower_level,\n> > pub_all\n> > 2022-04-08 12:59:30.733 EDT [91998:4] 013_partition.pl LOG:\n> > disconnection: session time: 0:00:00.036 user=buildfarm\n> > database=postgres host=[local]\n> > 2022-04-08 12:59:30.740 EDT [92001:1] LOG: logical replication table\n> > synchronization worker for subscription \"sub2\", table \"tab4_1\" has\n> > started\n> > 2022-04-08 12:59:30.744 EDT [91997:2] LOG: logical replication apply\n> > worker for subscription \"sub2\" will restart because of a parameter\n> > change\n> > 2022-04-08 12:59:30.750 EDT [92003:1] LOG: logical replication table\n> > synchronization worker for subscription \"sub2\", table \"tab3\" has\n> > started\n> >\n> > The logs say that the apply worker for \"sub2\" finished whereas the\n> > tablesync workers for \"tab4_1\" and \"tab3\" started. After these logs,\n> > there are no logs that these tablesync workers finished and the apply\n> > worker for \"sub2\" restarted, until the timeout. 
While reviewing the\n> > code, I realized that the tablesync workers can advance its relstate\n> > even without the apply worker intervention.\n> >\n> > After a tablesync worker copies the table it sets\n> > SUBREL_STATE_SYNCWAIT to its relstate, then it waits for the apply\n> > worker to update the relstate to SUBREL_STATE_CATCHUP. If the apply\n> > worker has already died, it breaks from the wait loop and returns\n> > false:\n> >\n> > wait_for_worker_state_change():\n> >\n> > for (;;)\n> > {\n> > LogicalRepWorker *worker;\n> >\n> > :\n> >\n> > /*\n> > * Bail out if the apply worker has died, else signal it we're\n> > * waiting.\n> > */\n> > LWLockAcquire(LogicalRepWorkerLock, LW_SHARED);\n> > worker = logicalrep_worker_find(MyLogicalRepWorker->subid,\n> > InvalidOid, false);\n> > if (worker && worker->proc)\n> > logicalrep_worker_wakeup_ptr(worker);\n> > LWLockRelease(LogicalRepWorkerLock);\n> > if (!worker)\n> > break;\n> >\n> > :\n> > }\n> >\n> > return false;\n> >\n> > However, the caller doesn't check the return value at all:\n> >\n> > /*\n> > * We are done with the initial data synchronization, update the state.\n> > */\n> > SpinLockAcquire(&MyLogicalRepWorker->relmutex);\n> > MyLogicalRepWorker->relstate = SUBREL_STATE_SYNCWAIT;\n> > MyLogicalRepWorker->relstate_lsn = *origin_startpos;\n> > SpinLockRelease(&MyLogicalRepWorker->relmutex);\n> >\n> > /*\n> > * Finally, wait until the main apply worker tells us to catch up and then\n> > * return to let LogicalRepApplyLoop do it.\n> > */\n> > wait_for_worker_state_change(SUBREL_STATE_CATCHUP);\n> > return slotname;\n> >\n> > Therefore, the tablesync worker started logical replication while\n> > keeping its relstate as SUBREL_STATE_SYNCWAIT.\n> >\n> > Given the server logs, it's likely that both tablesync workers for\n> > \"tab4_1\" and \"tab3\" were in this situation. 
That is, there were two\n> > tablesync workers who were applying changes for the target relation\n> > but the relstate was SUBREL_STATE_SYNCWAIT.\n> >\n> > When it comes to starting the apply worker, probably it didn't happen\n> > since there are already running tablesync workers as much as\n> > max_sync_workers_per_subscription (2 by default):\n> >\n> > logicalrep_worker_launch():\n> >\n> > /*\n> > * If we reached the sync worker limit per subscription, just exit\n> > * silently as we might get here because of an otherwise harmless race\n> > * condition.\n> > */\n> > if (nsyncworkers >= max_sync_workers_per_subscription)\n> > {\n> > LWLockRelease(LogicalRepWorkerLock);\n> > return;\n> > }\n> >\n> > This scenario seems possible in principle but I've not managed to\n> > reproduce this issue so I might be wrong.\n> >\n>\n> This is exactly the same analysis I have done in the original thread\n> where that patch was committed. I have found some crude ways to\n> reproduce it with a different test as well. See emails [1][2][3].\n\nGreat. I didn't realize there is a discussion there.\n\n>\n> > Especially, according to the\n> > server logs, it seems like the tablesync workers were launched before\n> > the apply worker restarted due to parameter change and this is a\n> > common pattern among other failure logs. But I'm not sure how it could\n> > really happen. IIUC the apply worker always re-reads subscription (and\n> > exits if there is parameter change) and then requests to launch\n> > tablesync workers accordingly.\n> >\n>\n> Is there any rule/documentation which ensures that we must re-read the\n> subscription parameter change before trying to launch sync workers?\n\nNo, but as far as I read the code I could not find any path of that.\n\n>\n> Actually, it would be better if we discuss this problem on another\n> thread [1] to avoid hijacking this thread. So, it would be good if you\n> respond there with your thoughts. Thanks for looking into this.\n\nAgreed. 
I'll respond there.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 13 Apr 2022 16:18:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
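The SYNCWAIT hang analyzed above can be reduced to a toy model (the `SyncWorker` struct, `finish_sync_buggy()` and the simplified state set are invented names for illustration; this is not PostgreSQL code): when the apply worker has died, the wait function returns false without advancing the state, and because the caller discards that return value the tablesync worker proceeds while still marked SYNCWAIT.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of the tablesync relstate handshake. */
typedef enum { ST_SYNCWAIT, ST_CATCHUP, ST_READY } RelState;

typedef struct
{
    RelState relstate;
    bool     apply_worker_alive;
} SyncWorker;

/*
 * Returns true only if the apply worker advanced our state; returns
 * false when the apply worker has died, leaving relstate unchanged.
 */
static bool
wait_for_state_change(SyncWorker *w, RelState expected)
{
    if (!w->apply_worker_alive)
        return false;           /* bail out: apply worker is gone */
    w->relstate = expected;     /* apply worker moves us forward */
    return true;
}

/*
 * The buggy caller pattern under discussion: the return value is
 * ignored, so a dead apply worker leaves the tablesync worker stuck
 * in SYNCWAIT even though it goes on to apply changes anyway.
 */
static RelState
finish_sync_buggy(SyncWorker *w)
{
    w->relstate = ST_SYNCWAIT;
    (void) wait_for_state_change(w, ST_CATCHUP);    /* result discarded */
    return w->relstate;
}
```

With two such workers stuck this way, `nsyncworkers` stays at the per-subscription limit and no apply worker can be launched, matching the observed hang.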
{
"msg_contents": "I've spent some time looking at the v10 patch, and to be honest, I\ndon't really like the look of it :(\n\n1. I think we should be putting the cache fields in PartitionDescData\nrather than PartitionDispatch. Having them in PartitionDescData allows\ncaching between statements.\n2. The function name maybe_cache_partition_bound_offset() fills me\nwith dread. It's very unconcise. I don't think anyone should ever use\nthat word in a function or variable name.\n3. I'm not really sure why there's a field named n_tups_inserted.\nThat would lead me to believe that ExecFindPartition is only executed\nfor INSERTs. UPDATEs need to know the partition too.\n4. The fields you're adding to PartitionDispatch are very poorly\ndocumented. I'm not really sure what n_offset_changed means. Why\ncan't you just keep track by recording the last used partition, the\nlast index into the datum array, and then just a count of the number\nof times we've found the last used partition in a row? When the found\npartition does not match the last partition, just reset the counter\nand when the counter reaches the cache threshold, use the cache path.\n\nI've taken a go at rewriting this, from scratch, into what I think it\nshould look like. I then looked at what I came up with and decided\nthe logic for finding partitions should all be kept in a single\nfunction. That way there's much less chance of someone forgetting to\nupdate the double-checking logic during cache hits when they update\nthe logic for finding partitions without the cache.\n\nThe 0001 patch is my original attempt. I then rewrote it and came up\nwith 0002 (applies on top of 0001).\n\nAfter writing a benchmark script, I noticed that the performance of\n0002 was quite a bit worse than 0001. I noticed that the benchmark\nwhere the partition changes each time got much worse with 0002. 
I can\nonly assume that's due to the increased code size, so I played around\nwith likely() and unlikely() to see if I could use those to shift the\ncode layout around in such a way to make 0002 faster. Surprisingly,\nusing likely() for the cache hit path made it faster. I'd have assumed\nit would be unlikely() that would work.\n\ncache_partition_bench.png shows the results. I tested with master @\na5f9f1b88. The \"Amit\" column is your v10 patch.\ncopybench.sh is the script I used to run the benchmarks. This tries\nall 3 partitioning strategies and performs 2 COPY FROMs, one with the\nrows arriving in partition order and another where the next row always\ngoes into a different partition. I'm expecting to see the \"ordered\"\ncase get better for LIST and RANGE partitions and the \"unordered\" case\nnot to get any slower.\n\nWith all of the attached patches applied, it does seem like I've\nmanaged to slightly speed up all of the unordered cases.\nThis might be noise, but I did manage to remove some redundant code\nthat needlessly checked if the HASH partitioned table had a DEFAULT\npartition, which it cannot. This may account for some of the increase\nin performance.\n\nI do need to stare at the patch a bit more before I'm confident that\nit's correct. I just wanted to share it before I go and do that.\n\nDavid",
"msg_date": "Thu, 14 Jul 2022 17:30:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, Jul 14, 2022 at 2:31 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I've spent some time looking at the v10 patch, and to be honest, I\n> don't really like the look of it :(\n\nThanks for the review and sorry for the delay in replying.\n\n> 1. I think we should be putting the cache fields in PartitionDescData\n> rather than PartitionDispatch. Having them in PartitionDescData allows\n> caching between statements.\n\nLooking at your patch, yes, that makes sense. Initially, I didn't see\nmuch point in having the ability to cache between (supposedly simple\nOLTP) statements, because the tuple routing binary search is such a\nminuscule portion of their execution, but now I agree why not.\n\n> 2. The function name maybe_cache_partition_bound_offset() fills me\n> with dread. It's very unconcise. I don't think anyone should ever use\n> that word in a function or variable name.\n\nYeah, we can live without this one for sure as your patch\ndemonstrates, but to be fair, it's not like we don't have \"maybe_\"\nused in variables and functions in arguably even trickier parts of our\ncode, like those you can find with `git grep maybe_`.\n\n> 3. I'm not really sure why there's a field named n_tups_inserted.\n> That would lead me to believe that ExecFindPartition is only executed\n> for INSERTs. UPDATEs need to know the partition too.\n\nHmm, (cross-partition) UPDATEs internally use an INSERT that does\nExecFindPartition(). I don't see ExecUpdate() directly calling\nExecFindPartition(). Well, yes, apply_handle_tuple_routing() in a way\ndoes, but apparently I didn't worry about that function.\n\n> 4. The fields you're adding to PartitionDispatch are very poorly\n> documented. I'm not really sure what n_offset_changed means.\n\nMy intention with that variable was to count the number of partition\nswitches that happened over the course of inserting N tuples. 
The\ntheory was that if the ratio of the number of partition switches and\nthe number of tuples inserted is too close to 1, the dataset being\nloaded is not really in an order that'd benefit from caching. That\nwas an attempt to get some kind of adaptability to account for the\ncases where the ordering in the dataset is not consistent, but it\nseems like your approach is just as adaptive. And your code is much\nsimpler.\n\n> Why\n> can't you just keep track by recording the last used partition, the\n> last index into the datum array, and then just a count of the number\n> of times we've found the last used partition in a row? When the found\n> partition does not match the last partition, just reset the counter\n> and when the counter reaches the cache threshold, use the cache path.\n\nYeah, it makes sense and is easier to understand.\n\n> I've taken a go at rewriting this, from scratch, into what I think it\n> should look like. I then looked at what I came up with and decided\n> the logic for finding partitions should all be kept in a single\n> function. That way there's much less chance of someone forgetting to\n> update the double-checking logic during cache hits when they update\n> the logic for finding partitions without the cache.\n>\n> The 0001 patch is my original attempt. I then rewrote it and came up\n> with 0002 (applies on top of 0001).\n\nThanks for these patches. I've been reading and can't really find\nanything to complain about at a high level.\n\n> After writing a benchmark script, I noticed that the performance of\n> 0002 was quite a bit worse than 0001. I noticed that the benchmark\n> where the partition changes each time got much worse with 0002. I can\n> only assume that's due to the increased code size, so I played around\n> with likely() and unlikely() to see if I could use those to shift the\n> code layout around in such a way to make 0002 faster. Surprisingly\n> using likely() for the cache hit path make it faster. 
I'd have assumed\n> it would be unlikely() that would work.\n\nHmm, I too would think that unlikely() on that condition, not\nlikely(), would have helped the unordered case better.\n\n> cache_partition_bench.png shows the results. I tested with master @\n> a5f9f1b88. The \"Amit\" column is your v10 patch.\n> copybench.sh is the script I used to run the benchmarks. This tries\n> all 3 partitioning strategies and performs 2 COPY FROMs, one with the\n> rows arriving in partition order and another where the next row always\n> goes into a different partition. I'm expecting to see the \"ordered\"\n> case get better for LIST and RANGE partitions and the \"unordered\" case\n> not to get any slower.\n>\n> With all of the attached patches applied, it does seem like I've\n> managed to slightly speed up all of the unordered cases slightly.\n> This might be noise, but I did manage to remove some redundant code\n> that needlessly checked if the HASH partitioned table had a DEFAULT\n> partition, which it cannot. This may account for some of the increase\n> in performance.\n>\n> I do need to stare at the patch a bit more before I'm confident that\n> it's correct. I just wanted to share it before I go and do that.\n\nThe patch looks good to me. I thought some about whether the cache\nfields in PartitionDesc may ever be \"wrong\". For example, the index\nvalues becoming out-of-bound after partition DETACHes. Even though\nthere's some PartitionDesc-preserving cases in\nRelationClearRelation(), I don't think that a preserved PartitionDesc\nwould ever contain a wrong value.\n\nHere are some comments.\n\n PartitionBoundInfo boundinfo; /* collection of partition bounds */\n+ int last_found_datum_index; /* Index into the owning\n+ * PartitionBoundInfo's datum array\n+ * for the last found partition */\n\nWhat does \"owning PartitionBoundInfo's\" mean? 
Maybe the \"owning\" is\nunnecessary?\n\n+ int last_found_part_index; /* Partition index of the last found\n+ * partition or -1 if none have been\n+ * found yet or if we've failed to\n+ * find one */\n\n-1 if none *has* been...?\n\n+ int last_found_count; /* Number of times in a row have we found\n+ * values to match the partition\n\nNumber of times in a row *that we have* found.\n\n+ /*\n+ * The Datum has changed. Zero the number of times we've\n+ * found last_found_datum_index in a row.\n+ */\n+ partdesc->last_found_count = 0;\n\n+ /* Zero the \"winning streak\" on the cache hit count */\n+ partdesc->last_found_count = 0;\n\nMight it be better for the two comments to say the same thing? Also,\nI wonder which one do you intend as the resetting of last_found_count:\nsetting it to 0 or 1? I can see that the stanza at the end of the\nfunction sets to 1 to start a new cycle.\n\n+ /* Check if the value is equal to the lower bound */\n+ cmpval = partition_rbound_datum_cmp(key->partsupfunc,\n+ key->partcollation,\n+ lastDatums,\n+ kind,\n+ values,\n+ key->partnatts);\n\nThe function does not merely check for equality, so maybe better to\nsay the following instead:\n\nCheck if the value is >= the lower bound.\n\nPerhaps, just like you've done in the LIST stanza even mention that\nthe lower bound is same as the last found one, like:\n\nCheck if the value >= the last found lower bound.\n\nAnd likewise, change the nearby comment that says this:\n\n+ /* Check if the value is below the upper bound */\n\nto say:\n\nNow check if the value is below the corresponding [to last found lower\nbound] upper bound.\n\n+ * No caching of partitions is done when the last found partition is th\n\nthe\n\n+ * Calling this function can be quite expensive for LIST and RANGE partitioned\n+ * tables have many partitions.\n\nhaving many partitions\n\nMany of the use cases for LIST and RANGE\n+ * partitioned tables mean that the same partition is likely to be found in\n\nmean -> are such that\n\nwe 
record the partition index we've found in the\n+ * PartitionDesc\n\nwe record the partition index we've found *for given values* in the\nPartitionDesc\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Jul 2022 22:22:48 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "Thanks for looking at this.\n\nOn Sat, 23 Jul 2022 at 01:23, Amit Langote <amitlangote09@gmail.com> wrote:\n> + /*\n> + * The Datum has changed. Zero the number of times we've\n> + * found last_found_datum_index in a row.\n> + */\n> + partdesc->last_found_count = 0;\n>\n> + /* Zero the \"winning streak\" on the cache hit count */\n> + partdesc->last_found_count = 0;\n>\n> Might it be better for the two comments to say the same thing? Also,\n> I wonder which one do you intend as the resetting of last_found_count:\n> setting it to 0 or 1? I can see that the stanza at the end of the\n> function sets to 1 to start a new cycle.\n\nI think I've addressed all of your comments. The above one in\nparticular caused me to make some larger changes.\n\nThe reason I was zeroing the last_found_count in LIST partitioned\ntables when the Datum was not equal to the previous found Datum was\ndue to the fact that the code at the end of the function was only\nchecking the partition indexes matched rather than the bound_offset vs\nlast_found_datum_index. The reason I wanted to zero this was that if\nyou had a partition FOR VALUES IN(1,2), and you received rows with\nvalues alternating between 1 and 2 then we'd match to the same\npartition each time, however the equality test with the current\n'values' and the Datum at last_found_datum_index would have been false\neach time. If we didn't zero the last_found_count we'd have kept\nusing the cache path even though the Datum and last Datum wouldn't\nhave been equal each time. That would have resulted in always doing\nthe cache check and failing, then doing the binary search anyway.\n\nI've now changed the code so that instead of checking the last found\npartition is the same as the last one, I'm now checking if\nbound_offset is the same as last_found_datum_index. 
This will be\nfalse in the \"values alternating between 1 and 2\" case from above.\nThis caused me to have to change how the caching works for LIST\npartitions with a NULL partition which is receiving NULL values. I've\ncoded things now to just skip the cache for that case. Finding the\ncorrect LIST partition for a NULL value is cheap and no need to cache\nthat. I've also moved all the code which updates the cache fields to\nthe bottom of get_partition_for_tuple(). I'm only expecting to do that\nwhen bound_offset is set by the lookup code in the switch statement.\nAny paths, e.g. HASH partitioning lookup and LIST or RANGE with NULL\nvalues shouldn't reach the code which updates the partition fields.\nI've added an Assert(bound_offset >= 0) to ensure that stays true.\n\nThere's probably a bit more to optimise here too, but not much. I\ndon't think the partdesc->last_found_part_index = -1; is needed when\nwe're in the code block that does return boundinfo->default_index;\nHowever, that only might very slightly speedup the case when we're\ninserting continuously into the DEFAULT partition. That code path is\nalso used when we fail to find any matching partition. That's not one\nwe need to worry about making go faster.\n\nI also ran the benchmarks again and saw that most of the use of\nlikely() and unlikely() no longer did what I found them to do earlier.\nSo the weirdness we saw there most likely was just down to random code\nlayout changes. In this patch, I just dropped the use of either of\nthose two macros.\n\nDavid",
"msg_date": "Wed, 27 Jul 2022 10:27:53 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
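The caching scheme David describes can be sketched in miniature (the names `ToyPartDesc`, `toy_find_partition` and `CACHED_FIND_THRESHOLD` are invented, and a single int column stands in for Datums compared via partition support functions, so this is a sketch of the idea rather than the actual patch): a streak counter tracks consecutive finds at the same bound offset, and once it reaches the threshold the lookup first re-checks the cached bound range in O(1) before falling back to the O(log N) binary search.

```c
#include <assert.h>

#define CACHED_FIND_THRESHOLD 16    /* assumed value, for illustration */

/*
 * Toy single-column RANGE router: bounds[] holds ascending lower
 * bounds; partition i covers bounds[i] <= v < bounds[i + 1].
 */
typedef struct
{
    const int  *bounds;
    int         nbounds;
    int         last_found_datum_index; /* offset of last matched lower bound */
    int         last_found_count;       /* consecutive finds at that offset */
} ToyPartDesc;

/* Return the greatest i with b[i] <= v, or -1 when v < b[0]. */
static int
binsearch(const int *b, int n, int v)
{
    int lo = 0, hi = n;

    while (lo < hi)
    {
        int mid = (lo + hi) / 2;

        if (b[mid] <= v)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo - 1;
}

static int
toy_find_partition(ToyPartDesc *pd, int v)
{
    int off;

    /*
     * Cache path: after enough consecutive finds at one offset, first
     * test whether v still falls in [bounds[off], bounds[off + 1]).
     */
    if (pd->last_found_count >= CACHED_FIND_THRESHOLD)
    {
        off = pd->last_found_datum_index;
        if (v >= pd->bounds[off] &&
            (off + 1 >= pd->nbounds || v < pd->bounds[off + 1]))
            return off;         /* O(1) hit, binary search skipped */
    }

    off = binsearch(pd->bounds, pd->nbounds, v);
    if (off < 0)
        return -1;              /* no matching partition; not cached */

    /* Track consecutive finds at the same datum offset. */
    if (off == pd->last_found_datum_index)
        pd->last_found_count++;
    else
        pd->last_found_count = 1;   /* a new streak starts at 1 */
    pd->last_found_datum_index = off;
    return off;
}
```

Note that comparing the datum offset, not the partition index, is what makes the "values alternating between 1 and 2" LIST case reset the streak, as discussed above.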
{
"msg_contents": "On Tue, Jul 26, 2022 at 3:28 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Thank for looking at this.\n>\n> On Sat, 23 Jul 2022 at 01:23, Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> > + /*\n> > + * The Datum has changed. Zero the number of times\n> we've\n> > + * found last_found_datum_index in a row.\n> > + */\n> > + partdesc->last_found_count = 0;\n> >\n> > + /* Zero the \"winning streak\" on the cache hit count\n> */\n> > + partdesc->last_found_count = 0;\n> >\n> > Might it be better for the two comments to say the same thing? Also,\n> > I wonder which one do you intend as the resetting of last_found_count:\n> > setting it to 0 or 1? I can see that the stanza at the end of the\n> > function sets to 1 to start a new cycle.\n>\n> I think I've addressed all of your comments. The above one in\n> particular caused me to make some larger changes.\n>\n> The reason I was zeroing the last_found_count in LIST partitioned\n> tables when the Datum was not equal to the previous found Datum was\n> due to the fact that the code at the end of the function was only\n> checking the partition indexes matched rather than the bound_offset vs\n> last_found_datum_index. The reason I wanted to zero this was that if\n> you had a partition FOR VALUES IN(1,2), and you received rows with\n> values alternating between 1 and 2 then we'd match to the same\n> partition each time, however the equality test with the current\n> 'values' and the Datum at last_found_datum_index would have been false\n> each time. If we didn't zero the last_found_count we'd have kept\n> using the cache path even though the Datum and last Datum wouldn't\n> have been equal each time. That would have resulted in always doing\n> the cache check and failing, then doing the binary search anyway.\n>\n> I've now changed the code so that instead of checking the last found\n> partition is the same as the last one, I'm now checking if\n> bound_offset is the same as last_found_datum_index. 
This will be\n> false in the \"values alternating between 1 and 2\" case from above.\n> This caused me to have to change how the caching works for LIST\n> partitions with a NULL partition which is receiving NULL values. I've\n> coded things now to just skip the cache for that case. Finding the\n> correct LIST partition for a NULL value is cheap and no need to cache\n> that. I've also moved all the code which updates the cache fields to\n> the bottom of get_partition_for_tuple(). I'm only expecting to do that\n> when bound_offset is set by the lookup code in the switch statement.\n> Any paths, e.g. HASH partitioning lookup and LIST or RANGE with NULL\n> values shouldn't reach the code which updates the partition fields.\n> I've added an Assert(bound_offset >= 0) to ensure that stays true.\n>\n> There's probably a bit more to optimise here too, but not much. I\n> don't think the partdesc->last_found_part_index = -1; is needed when\n> we're in the code block that does return boundinfo->default_index;\n> However, that only might very slightly speedup the case when we're\n> inserting continuously into the DEFAULT partition. That code path is\n> also used when we fail to find any matching partition. That's not one\n> we need to worry about making go faster.\n>\n> I also ran the benchmarks again and saw that most of the use of\n> likely() and unlikely() no longer did what I found them to do earlier.\n> So the weirdness we saw there most likely was just down to random code\n> layout changes. 
In this patch, I just dropped the use of either of\n> those two macros.\n>\n> David\n>\nHi,\n\n+ return boundinfo->indexes[last_datum_offset + 1];\n+\n+ else if (cmpval < 0 && last_datum_offset + 1 <\nboundinfo->ndatums)\n\nnit: the `else` keyword is not needed.\n\nCheers",
"msg_date": "Tue, 26 Jul 2022 16:22:22 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Wed, Jul 27, 2022 at 7:28 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Sat, 23 Jul 2022 at 01:23, Amit Langote <amitlangote09@gmail.com> wrote:\n> > + /*\n> > + * The Datum has changed. Zero the number of times we've\n> > + * found last_found_datum_index in a row.\n> > + */\n> > + partdesc->last_found_count = 0;\n> >\n> > + /* Zero the \"winning streak\" on the cache hit count */\n> > + partdesc->last_found_count = 0;\n> >\n> > Might it be better for the two comments to say the same thing? Also,\n> > I wonder which one do you intend as the resetting of last_found_count:\n> > setting it to 0 or 1? I can see that the stanza at the end of the\n> > function sets to 1 to start a new cycle.\n>\n> I think I've addressed all of your comments. The above one in\n> particular caused me to make some larger changes.\n>\n> The reason I was zeroing the last_found_count in LIST partitioned\n> tables when the Datum was not equal to the previous found Datum was\n> due to the fact that the code at the end of the function was only\n> checking the partition indexes matched rather than the bound_offset vs\n> last_found_datum_index. The reason I wanted to zero this was that if\n> you had a partition FOR VALUES IN(1,2), and you received rows with\n> values alternating between 1 and 2 then we'd match to the same\n> partition each time, however the equality test with the current\n> 'values' and the Datum at last_found_datum_index would have been false\n> each time. If we didn't zero the last_found_count we'd have kept\n> using the cache path even though the Datum and last Datum wouldn't\n> have been equal each time. That would have resulted in always doing\n> the cache check and failing, then doing the binary search anyway.\n\nThanks for the explanation. 
So, in a way the caching scheme works for\nLIST partitioning only if the same value appears consecutively in the\ninput set, whereas it does not for *a set of* values belonging to the\nsame partition appearing consecutively. Maybe that's a reasonable\nrestriction for now.\n\n> I've now changed the code so that instead of checking the last found\n> partition is the same as the last one, I'm now checking if\n> bound_offset is the same as last_found_datum_index. This will be\n> false in the \"values alternating between 1 and 2\" case from above.\n> This caused me to have to change how the caching works for LIST\n> partitions with a NULL partition which is receiving NULL values. I've\n> coded things now to just skip the cache for that case. Finding the\n> correct LIST partition for a NULL value is cheap and no need to cache\n> that. I've also moved all the code which updates the cache fields to\n> the bottom of get_partition_for_tuple(). I'm only expecting to do that\n> when bound_offset is set by the lookup code in the switch statement.\n> Any paths, e.g. HASH partitioning lookup and LIST or RANGE with NULL\n> values shouldn't reach the code which updates the partition fields.\n> I've added an Assert(bound_offset >= 0) to ensure that stays true.\n\nLooks good.\n\n> There's probably a bit more to optimise here too, but not much. I\n> don't think the partdesc->last_found_part_index = -1; is needed when\n> we're in the code block that does return boundinfo->default_index;\n> However, that only might very slightly speedup the case when we're\n> inserting continuously into the DEFAULT partition. That code path is\n> also used when we fail to find any matching partition. That's not one\n> we need to worry about making go faster.\n\nSo this is about:\n\n if (part_index < 0)\n- part_index = boundinfo->default_index;\n+ {\n+ /*\n+ * Since we don't do caching for the default partition or failed\n+ * lookups, we'll just wipe the cache fields back to their initial\n+ * values. 
The count becomes 0 rather than 1 as 1 means it's the\n+ * first time we've found a partition we're recording for the cache.\n+ */\n+ partdesc->last_found_datum_index = -1;\n+ partdesc->last_found_part_index = -1;\n+ partdesc->last_found_count = 0;\n+\n+ return boundinfo->default_index;\n+ }\n\nI wonder why not to leave the cache untouched in this case? It's\npossible that erratic rows only rarely occur in the input sets.\n\n> I also ran the benchmarks again and saw that most of the use of\n> likely() and unlikely() no longer did what I found them to do earlier.\n> So the weirdness we saw there most likely was just down to random code\n> layout changes. In this patch, I just dropped the use of either of\n> those two macros.\n\nAh, using either seems to be trying to fit the code one or the other\npattern in the input set anyway, so seems fine to keep them out for\nnow.\n\nSome minor comments:\n\n+ * The number of times the same partition must be found in a row before we\n+ * switch from a search for the given values to just checking if the values\n\nHow about:\n\nswitch from using a binary search for the given values to...\n\nShould the comment update above get_partition_for_tuple() mention\nsomething like the cached path is basically O(1) and the non-cache\npath O (log N) as I can see in comments in some other modules, like\npairingheap.c?\n\n+ * so bump the count by one. If all goes well we'll eventually reach\n\nMaybe a comma is needed after \"well\", because I got tricked into\nthinking the \"well\" is duplicated.\n\n+ * PARTITION_CACHED_FIND_THRESHOLD and we'll try the cache path next time\n\n\"we'll\" sounds redundant with the one in the previous line.\n\n+ * found yet, the last found was the DEFAULT partition, or there was no\n\nAdding \"if\" to both sentence fragments might make this sound better.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 27 Jul 2022 21:50:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 00:50, Amit Langote <amitlangote09@gmail.com> wrote:\n> So, in a way the caching scheme works for\n> LIST partitioning only if the same value appears consecutively in the\n> input set, whereas it does not for *a set of* values belonging to the\n> same partition appearing consecutively. Maybe that's a reasonable\n> restriction for now.\n\nI'm not really seeing another cheap enough way of doing that. Any LIST\npartition could allow any number of values. We've only space to record\n1 of those values by way of recording which element in the\nPartitionBound that it was located.\n\n> if (part_index < 0)\n> - part_index = boundinfo->default_index;\n> + {\n> + /*\n> + * Since we don't do caching for the default partition or failed\n> + * lookups, we'll just wipe the cache fields back to their initial\n> + * values. The count becomes 0 rather than 1 as 1 means it's the\n> + * first time we've found a partition we're recording for the cache.\n> + */\n> + partdesc->last_found_datum_index = -1;\n> + partdesc->last_found_part_index = -1;\n> + partdesc->last_found_count = 0;\n> +\n> + return boundinfo->default_index;\n> + }\n>\n> I wonder why not to leave the cache untouched in this case? It's\n> possible that erratic rows only rarely occur in the input sets.\n\nI looked into that and I ended up just removing the code to reset the\ncache. It now works similarly to a LIST partitioned table's NULL\npartition.\n\n> Should the comment update above get_partition_for_tuple() mention\n> something like the cached path is basically O(1) and the non-cache\n> path O (log N) as I can see in comments in some other modules, like\n> pairingheap.c?\n\nI adjusted for the other things you mentioned but I didn't add the big\nO stuff. I thought the comment was clear enough.\n\nI'd quite like to push this patch early next week, so if anyone else\nis following along that might have any objections, could they do so\nbefore then?\n\nDavid",
"msg_date": "Thu, 28 Jul 2022 14:59:21 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, Jul 28, 2022 at 11:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 28 Jul 2022 at 00:50, Amit Langote <amitlangote09@gmail.com> wrote:\n> > So, in a way the caching scheme works for\n> > LIST partitioning only if the same value appears consecutively in the\n> > input set, whereas it does not for *a set of* values belonging to the\n> > same partition appearing consecutively. Maybe that's a reasonable\n> > restriction for now.\n>\n> I'm not really seeing another cheap enough way of doing that. Any LIST\n> partition could allow any number of values. We've only space to record\n> 1 of those values by way of recording which element in the\n> PartitionBound that it was located.\n\nYeah, no need to complicate the implementation for the LIST case.\n\n> > if (part_index < 0)\n> > - part_index = boundinfo->default_index;\n> > + {\n> > + /*\n> > + * Since we don't do caching for the default partition or failed\n> > + * lookups, we'll just wipe the cache fields back to their initial\n> > + * values. The count becomes 0 rather than 1 as 1 means it's the\n> > + * first time we've found a partition we're recording for the cache.\n> > + */\n> > + partdesc->last_found_datum_index = -1;\n> > + partdesc->last_found_part_index = -1;\n> > + partdesc->last_found_count = 0;\n> > +\n> > + return boundinfo->default_index;\n> > + }\n> >\n> > I wonder why not to leave the cache untouched in this case? It's\n> > possible that erratic rows only rarely occur in the input sets.\n>\n> I looked into that and I ended up just removing the code to reset the\n> cache. It now works similarly to a LIST partitioned table's NULL\n> partition.\n\n+1\n\n> > Should the comment update above get_partition_for_tuple() mention\n> > something like the cached path is basically O(1) and the non-cache\n> > path O (log N) as I can see in comments in some other modules, like\n> > pairingheap.c?\n>\n> I adjusted for the other things you mentioned but I didn't add the big\n> O stuff. 
I thought the comment was clear enough.\n\nWFM.\n\n> I'd quite like to push this patch early next week, so if anyone else\n> is following along that might have any objections, could they do so\n> before then?\n\nI have no more comments.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 28 Jul 2022 16:37:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thursday, July 28, 2022 10:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\r\n> On Thu, 28 Jul 2022 at 00:50, Amit Langote <amitlangote09@gmail.com>\r\n> wrote:\r\n> > So, in a way the caching scheme works for LIST partitioning only if\r\n> > the same value appears consecutively in the input set, whereas it does\r\n> > not for *a set of* values belonging to the same partition appearing\r\n> > consecutively. Maybe that's a reasonable restriction for now.\r\n> \r\n> I'm not really seeing another cheap enough way of doing that. Any LIST\r\n> partition could allow any number of values. We've only space to record\r\n> 1 of those values by way of recording which element in the PartitionBound that\r\n> it was located.\r\n> \r\n> > if (part_index < 0)\r\n> > - part_index = boundinfo->default_index;\r\n> > + {\r\n> > + /*\r\n> > + * Since we don't do caching for the default partition or failed\r\n> > + * lookups, we'll just wipe the cache fields back to their initial\r\n> > + * values. The count becomes 0 rather than 1 as 1 means it's the\r\n> > + * first time we've found a partition we're recording for the cache.\r\n> > + */\r\n> > + partdesc->last_found_datum_index = -1;\r\n> > + partdesc->last_found_part_index = -1;\r\n> > + partdesc->last_found_count = 0;\r\n> > +\r\n> > + return boundinfo->default_index;\r\n> > + }\r\n> >\r\n> > I wonder why not to leave the cache untouched in this case? It's\r\n> > possible that erratic rows only rarely occur in the input sets.\r\n> \r\n> I looked into that and I ended up just removing the code to reset the cache. 
It\r\n> now works similarly to a LIST partitioned table's NULL partition.\r\n> \r\n> > Should the comment update above get_partition_for_tuple() mention\r\n> > something like the cached path is basically O(1) and the non-cache\r\n> > path O (log N) as I can see in comments in some other modules, like\r\n> > pairingheap.c?\r\n> \r\n> I adjusted for the other things you mentioned but I didn't add the big O stuff. I\r\n> thought the comment was clear enough.\r\n> \r\n> I'd quite like to push this patch early next week, so if anyone else is following\r\n> along that might have any objections, could they do so before then?\r\n\r\nThanks for the patch. The patch looks good to me.\r\n\r\nOnly a minor nitpick:\r\n\r\n+\t/*\r\n+\t * For LIST partitioning, this is the number of times in a row that the\r\n+\t * the datum we're looking\r\n\r\nIt seems a duplicate 'the' word in this comment.\r\n\"the the datum\".\r\n\r\nBest regards,\r\nHou Zhijie\r\n",
"msg_date": "Thu, 28 Jul 2022 08:40:47 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Thu, 28 Jul 2022 at 19:37, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 11:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> > I'd quite like to push this patch early next week, so if anyone else\n> > is following along that might have any objections, could they do so\n> > before then?\n>\n> I have no more comments.\n\nThank you both for the reviews.\n\nI've now pushed this.\n\nDavid\n\n\n",
"msg_date": "Tue, 2 Aug 2022 09:58:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
},
{
"msg_contents": "On Tue, Aug 2, 2022 at 6:58 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Thu, 28 Jul 2022 at 19:37, Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 11:59 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> > > I'd quite like to push this patch early next week, so if anyone else\n> > > is following along that might have any objections, could they do so\n> > > before then?\n> >\n> > I have no more comments.\n>\n> Thank you both for the reviews.\n>\n> I've now pushed this.\n\nThank you for working on this.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 2 Aug 2022 16:19:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skip partition tuple routing with constant partition key"
}
] |
[
{
"msg_contents": "If a promotion is triggered while recovery is paused, the paused state ends\nand promotion continues. But currently pg_get_wal_replay_pause_state()\nreturns 'paused' in that case. Isn't this a bug?\n\nAttached patch fixes this issue by resetting the recovery pause state to\n'not paused' when standby promotion is triggered.\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Mon, 17 May 2021 23:29:18 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "pg_get_wal_replay_pause_state() should not return 'paused' while a\n promotion is ongoing."
},
{
"msg_contents": "At Mon, 17 May 2021 23:29:18 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> If a promotion is triggered while recovery is paused, the paused state\n> ends\n> and promotion continues. But currently pg_get_wal_replay_pause_state()\n> returns 'paused' in that case. Isn't this a bug?\n> \n> Attached patch fixes this issue by resetting the recovery pause state\n> to\n> 'not paused' when standby promotion is triggered.\n> \n> Thought?\n\nNice catch!\n\nOnce the state enteres \"paused\" state no more WAL record is expected\nto be replayed until exiting the state. I'm not sure but maybe we are\nalso expecting that the server promotes whthout a record replayed when\ntriggered while pausing. However, actually there's a chance for a\nrecord to replayed before promotion. Of course it is existing\nbehavior but I'd like to make sure whether we deliberately allow that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 18 May 2021 09:58:05 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused'\n while a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/18 9:58, Kyotaro Horiguchi wrote:\n> At Mon, 17 May 2021 23:29:18 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> If a promotion is triggered while recovery is paused, the paused state\n>> ends\n>> and promotion continues. But currently pg_get_wal_replay_pause_state()\n>> returns 'paused' in that case. Isn't this a bug?\n>>\n>> Attached patch fixes this issue by resetting the recovery pause state\n>> to\n>> 'not paused' when standby promotion is triggered.\n>>\n>> Thought?\n> \n> Nice catch!\n> \n> Once the state enteres \"paused\" state no more WAL record is expected\n> to be replayed until exiting the state. I'm not sure but maybe we are\n> also expecting that the server promotes whthout a record replayed when\n> triggered while pausing.\n\nCurrently a promotion causes all available WAL to be replayed before\na standby becomes a primary whether it was in paused state or not.\nOTOH, something like immediate promotion (i.e., standby becomes\na primary without replaying outstanding WAL) might be useful for\nsome cases. I don't object to that.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 18 May 2021 12:48:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On Mon, May 17, 2021 at 7:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> If a promotion is triggered while recovery is paused, the paused state ends\n> and promotion continues. But currently pg_get_wal_replay_pause_state()\n> returns 'paused' in that case. Isn't this a bug?\n>\n> Attached patch fixes this issue by resetting the recovery pause state to\n> 'not paused' when standby promotion is triggered.\n>\n> Thought?\n>\n\nI think, prior to commit 496ee647ecd2917369ffcf1eaa0b2cdca07c8730\n(Prefer standby promotion over recovery pause.) this behavior was fine\nbecause the pause was continued but after this commit now we are\ngiving preference to pause so this is a bug so need to be fixed.\n\nThe fix looks fine but I think along with this we should also return\nimmediately from the pause loop if promotion is requested. Because if\nwe recheck the recovery pause then someone can pause again and we will\nbe in loop so better to exit as soon as promotion is requested, see\nattached patch. Should be applied along with your patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 18 May 2021 11:23:26 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n> Currently a promotion causes all available WAL to be replayed before\n> a standby becomes a primary whether it was in paused state or not.\n> OTOH, something like immediate promotion (i.e., standby becomes\n> a primary without replaying outstanding WAL) might be useful for\n> some cases. I don't object to that.\n\nSounds like a \"promotion immediate\" mode. It does not sound difficult\nnor expensive to add a small test for that in one of the existing\nrecovery tests triggerring a promotion. Could you add one based on\npg_get_wal_replay_pause_state()?\n--\nMichael",
"msg_date": "Tue, 18 May 2021 15:46:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/18 14:53, Dilip Kumar wrote:\n> On Mon, May 17, 2021 at 7:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> If a promotion is triggered while recovery is paused, the paused state ends\n>> and promotion continues. But currently pg_get_wal_replay_pause_state()\n>> returns 'paused' in that case. Isn't this a bug?\n>>\n>> Attached patch fixes this issue by resetting the recovery pause state to\n>> 'not paused' when standby promotion is triggered.\n>>\n>> Thought?\n>>\n> \n> I think, prior to commit 496ee647ecd2917369ffcf1eaa0b2cdca07c8730\n> (Prefer standby promotion over recovery pause.) this behavior was fine\n> because the pause was continued but after this commit now we are\n> giving preference to pause so this is a bug so need to be fixed.\n> \n> The fix looks fine but I think along with this we should also return\n> immediately from the pause loop if promotion is requested. Because if\n> we recheck the recovery pause then someone can pause again and we will\n> be in loop so better to exit as soon as promotion is requested, see\n> attached patch. Should be applied along with your patch.\n\nBut this change can cause the recovery to continue with insufficient parameter\nsettings if a promotion is requested while the server is in the paused state\nbecause of such invalid settings. This behavior seems not safe.\nIf this my understanding is right, the recovery should abort immediately\n(i.e., FATAL error \"\"recovery aborted because of insufficient parameter settings\"\nshould be thrown) if a promotion is requested in that case, like when\npg_wal_replay_resume() is executed in that case. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 18 May 2021 17:13:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On Tue, May 18, 2021 at 1:43 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > The fix looks fine but I think along with this we should also return\n> > immediately from the pause loop if promotion is requested. Because if\n> > we recheck the recovery pause then someone can pause again and we will\n> > be in loop so better to exit as soon as promotion is requested, see\n> > attached patch. Should be applied along with your patch.\n>\n> But this change can cause the recovery to continue with insufficient parameter\n> settings if a promotion is requested while the server is in the paused state\n> because of such invalid settings. This behavior seems not safe.\n> If this my understanding is right, the recovery should abort immediately\n> (i.e., FATAL error \"\"recovery aborted because of insufficient parameter settings\"\n> should be thrown) if a promotion is requested in that case, like when\n> pg_wal_replay_resume() is executed in that case. Thought?\n\nYeah, you are right, I missed that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 May 2021 14:22:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "At Tue, 18 May 2021 12:48:38 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Currently a promotion causes all available WAL to be replayed before\n> a standby becomes a primary whether it was in paused state or not.\n> OTOH, something like immediate promotion (i.e., standby becomes\n> a primary without replaying outstanding WAL) might be useful for\n> some cases. I don't object to that.\n\nMmm. I was confused with recovery target + pause. Actually promotion\nworks as so and it is documented. Anyway it is a matter of the next\nversion.\n\nI forgot to mention the patch itself, but what the patch does looks\nfine to me. Disabling pause after setting SharedProteIsTriggered\nprevents later re-pausing (from the sql function).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 19 May 2021 09:53:08 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused'\n while a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/18 15:46, Michael Paquier wrote:\n> On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n>> Currently a promotion causes all available WAL to be replayed before\n>> a standby becomes a primary whether it was in paused state or not.\n>> OTOH, something like immediate promotion (i.e., standby becomes\n>> a primary without replaying outstanding WAL) might be useful for\n>> some cases. I don't object to that.\n> \n> Sounds like a \"promotion immediate\" mode. It does not sound difficult\n> nor expensive to add a small test for that in one of the existing\n> recovery tests triggerring a promotion. Could you add one based on\n> pg_get_wal_replay_pause_state()?\n\nYou're thinking to add the test like the following?\n#1. Pause the recovery\n#2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n#3. Trigger standby promotion\n#4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n\nIt seems not easy to do the test #4 stably because\npg_get_wal_replay_pause_state() needs to be executed\nbefore the promotion finishes.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 May 2021 13:46:45 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/19 9:53, Kyotaro Horiguchi wrote:\n> At Tue, 18 May 2021 12:48:38 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> Currently a promotion causes all available WAL to be replayed before\n>> a standby becomes a primary whether it was in paused state or not.\n>> OTOH, something like immediate promotion (i.e., standby becomes\n>> a primary without replaying outstanding WAL) might be useful for\n>> some cases. I don't object to that.\n> \n> Mmm. I was confused with recovery target + pause. Actually promotion\n> works as so and it is documented. Anyway it is a matter of the next\n> version.\n> \n> I forgot to mention the patch itself, but what the patch does looks\n> fine to me. Disabling pause after setting SharedProteIsTriggered\n> prevents later re-pausing (from the sql function).\n\nThanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 19 May 2021 13:51:28 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On Wed, May 19, 2021 at 01:46:45PM +0900, Fujii Masao wrote:\n> You're thinking to add the test like the following?\n> #1. Pause the recovery\n> #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n> #3. Trigger standby promotion\n> #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n> \n> It seems not easy to do the test #4 stably because\n> pg_get_wal_replay_pause_state() needs to be executed\n> before the promotion finishes.\n\nCouldn't you rely on recovery_end_command for number #4? The shared\nmemory state tracked by SharedRecoveryState is updated after the\nend-recovery command is triggered, so pg_get_wal_replay_pause_state()\ncan be executed at this point. A bit hairy, I agree, but that would\nwork :)\n\nStill, it would be easy enough to have something for\npg_get_wal_replay_pause_state() called on a standby when there is no \npause (your case #2) and a second case on a standby with a pause\ntriggered, though (not listed above).\n--\nMichael",
"msg_date": "Wed, 19 May 2021 14:17:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On Wed, May 19, 2021 at 10:16 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2021/05/18 15:46, Michael Paquier wrote:\n> > On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n> >> Currently a promotion causes all available WAL to be replayed before\n> >> a standby becomes a primary whether it was in paused state or not.\n> >> OTOH, something like immediate promotion (i.e., standby becomes\n> >> a primary without replaying outstanding WAL) might be useful for\n> >> some cases. I don't object to that.\n> >\n> > Sounds like a \"promotion immediate\" mode. It does not sound difficult\n> > nor expensive to add a small test for that in one of the existing\n> > recovery tests triggerring a promotion. Could you add one based on\n> > pg_get_wal_replay_pause_state()?\n>\n> You're thinking to add the test like the following?\n> #1. Pause the recovery\n> #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n> #3. Trigger standby promotion\n> #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n>\n> It seems not easy to do the test #4 stably because\n> pg_get_wal_replay_pause_state() needs to be executed\n> before the promotion finishes.\n\nEven for #2, we can not ensure that whether it will be 'paused' or\n'pause requested'.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 11:19:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "At Wed, 19 May 2021 11:19:13 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Wed, May 19, 2021 at 10:16 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > On 2021/05/18 15:46, Michael Paquier wrote:\n> > > On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n> > >> Currently a promotion causes all available WAL to be replayed before\n> > >> a standby becomes a primary whether it was in paused state or not.\n> > >> OTOH, something like immediate promotion (i.e., standby becomes\n> > >> a primary without replaying outstanding WAL) might be useful for\n> > >> some cases. I don't object to that.\n> > >\n> > > Sounds like a \"promotion immediate\" mode. It does not sound difficult\n> > > nor expensive to add a small test for that in one of the existing\n> > > recovery tests triggerring a promotion. Could you add one based on\n> > > pg_get_wal_replay_pause_state()?\n> >\n> > You're thinking to add the test like the following?\n> > #1. Pause the recovery\n> > #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n> > #3. Trigger standby promotion\n> > #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n> >\n> > It seems not easy to do the test #4 stably because\n> > pg_get_wal_replay_pause_state() needs to be executed\n> > before the promotion finishes.\n> \n> Even for #2, we can not ensure that whether it will be 'paused' or\n> 'pause requested'.\n\nWe often use poll_query_until() to make sure some desired state is\nreached. And, as Michael suggested, the function\npg_get_wal_replay_pause_state() still works at the time of\nrecovery_end_command. So a bit more detailed steps are:\n\n#0. Equip the server with recovery_end_command that waits for some\n trigger then start the server.\n#1. Pause the recovery\n#2. Wait until pg_get_wal_replay_pause_state() returns 'paused'\n#3. Trigger standby promotion\n#4. Wait until pg_get_wal_replay_pause_state() returns 'not paused'\n#5. 
Trigger recovery_end_command to let promotion proceed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 19 May 2021 15:25:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused'\n while a promotion is ongoing."
},
{
"msg_contents": "On 2021/05/19 15:25, Kyotaro Horiguchi wrote:\n> At Wed, 19 May 2021 11:19:13 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n>> On Wed, May 19, 2021 at 10:16 AM Fujii Masao\n>> <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> On 2021/05/18 15:46, Michael Paquier wrote:\n>>>> On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n>>>>> Currently a promotion causes all available WAL to be replayed before\n>>>>> a standby becomes a primary whether it was in paused state or not.\n>>>>> OTOH, something like immediate promotion (i.e., standby becomes\n>>>>> a primary without replaying outstanding WAL) might be useful for\n>>>>> some cases. I don't object to that.\n>>>>\n>>>> Sounds like a \"promotion immediate\" mode. It does not sound difficult\n>>>> nor expensive to add a small test for that in one of the existing\n>>>> recovery tests triggerring a promotion. Could you add one based on\n>>>> pg_get_wal_replay_pause_state()?\n>>>\n>>> You're thinking to add the test like the following?\n>>> #1. Pause the recovery\n>>> #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n>>> #3. Trigger standby promotion\n>>> #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n>>>\n>>> It seems not easy to do the test #4 stably because\n>>> pg_get_wal_replay_pause_state() needs to be executed\n>>> before the promotion finishes.\n>>\n>> Even for #2, we can not ensure that whether it will be 'paused' or\n>> 'pause requested'.\n> \n> We often use poll_query_until() to make sure some desired state is\n> reached.\n\nYes.\n\n> And, as Michael suggested, the function\n> pg_get_wal_replay_pause_state() still works at the time of\n> recovery_end_command. So a bit more detailed steps are:\n\nIMO this idea is tricky and fragile, so I'm inclined to avoid that if possible.\nAttached is the POC patch to add the following tests.\n\n#1. Check that pg_get_wal_replay_pause_state() reports \"not paused\" at first.\n#2. 
Request to pause archive recovery and wait until it's actually paused.\n#3. Request to resume archive recovery and wait until it's actually resumed.\n#4. Request to pause archive recovery and wait until it's actually paused.\n Then, check that the paused state ends and promotion continues\n if a promotion is triggered while recovery is paused.\n\nIn #4, pg_get_wal_replay_pause_state() is not executed while promotion\nis ongoing. #4 checks that pg_is_in_recovery() returns false and\nthe promotion finishes expectedly in that case. Isn't this test enough for now?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 19 May 2021 16:21:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "At Wed, 19 May 2021 16:21:58 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/05/19 15:25, Kyotaro Horiguchi wrote:\n> > At Wed, 19 May 2021 11:19:13 +0530, Dilip Kumar\n> > <dilipbalaut@gmail.com> wrote in\n> >> On Wed, May 19, 2021 at 10:16 AM Fujii Masao\n> >> <masao.fujii@oss.nttdata.com> wrote:\n> >>>\n> >>> On 2021/05/18 15:46, Michael Paquier wrote:\n> >>>> On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n> >>>>> Currently a promotion causes all available WAL to be replayed before\n> >>>>> a standby becomes a primary whether it was in paused state or not.\n> >>>>> OTOH, something like immediate promotion (i.e., standby becomes\n> >>>>> a primary without replaying outstanding WAL) might be useful for\n> >>>>> some cases. I don't object to that.\n> >>>>\n> >>>> Sounds like a \"promotion immediate\" mode. It does not sound difficult\n> >>>> nor expensive to add a small test for that in one of the existing\n> >>>> recovery tests triggerring a promotion. Could you add one based on\n> >>>> pg_get_wal_replay_pause_state()?\n> >>>\n> >>> You're thinking to add the test like the following?\n> >>> #1. Pause the recovery\n> >>> #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n> >>> #3. Trigger standby promotion\n> >>> #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n> >>>\n> >>> It seems not easy to do the test #4 stably because\n> >>> pg_get_wal_replay_pause_state() needs to be executed\n> >>> before the promotion finishes.\n> >>\n> >> Even for #2, we can not ensure that whether it will be 'paused' or\n> >> 'pause requested'.\n> > We often use poll_query_until() to make sure some desired state is\n> > reached.\n> \n> Yes.\n> \n> > And, as Michael suggested, the function\n> > pg_get_wal_replay_pause_state() still works at the time of\n> > recovery_end_command. 
So a bit more detailed steps are:\n> \n> IMO this idea is tricky and fragile, so I'm inclined to avoid that if\n\nAgreed, the recovery_end_command would be something like the following\navoiding dependency on sh. However, I'm not sure it works as well on\nWindows..\n\nrecovery_end_command='perl -e \"while( -f \\'$trigfile\\') {sleep 0.1;}\"'\n\n> possible.\n> Attached is the POC patch to add the following tests.\n> \n> #1. Check that pg_get_wal_replay_pause_state() reports \"not paused\" at\n> #first.\n> #2. Request to pause archive recovery and wait until it's actually\n> #paused.\n> #3. Request to resume archive recovery and wait until it's actually\n> #resumed.\n> #4. Request to pause archive recovery and wait until it's actually\n> #paused.\n> Then, check that the paused state ends and promotion continues\n> if a promotion is triggered while recovery is paused.\n> \n> In #4, pg_get_wal_replay_pause_state() is not executed while promotion\n> is ongoing. #4 checks that pg_is_in_recovery() returns false and\n> the promotion finishes expectedly in that case. Isn't this test enough\n> for now?\n\n+1 for adding some tests for pg_wal_replay_pause() but the test seems\nlike checking only that pg_get_wal_replay_pause_state() returns the\nexpected state value. Don't we need to check that the recovery is\nactually paused and that the promotion happens at expected LSN?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 19 May 2021 16:43:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused'\n while a promotion is ongoing."
},
{
"msg_contents": "On Wed, May 19, 2021 at 11:55 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 19 May 2021 11:19:13 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in\n> > On Wed, May 19, 2021 at 10:16 AM Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote:\n> > >\n> > > On 2021/05/18 15:46, Michael Paquier wrote:\n> > > > On Tue, May 18, 2021 at 12:48:38PM +0900, Fujii Masao wrote:\n> > > >> Currently a promotion causes all available WAL to be replayed before\n> > > >> a standby becomes a primary whether it was in paused state or not.\n> > > >> OTOH, something like immediate promotion (i.e., standby becomes\n> > > >> a primary without replaying outstanding WAL) might be useful for\n> > > >> some cases. I don't object to that.\n> > > >\n> > > > Sounds like a \"promotion immediate\" mode. It does not sound difficult\n> > > > nor expensive to add a small test for that in one of the existing\n> > > > recovery tests triggerring a promotion. Could you add one based on\n> > > > pg_get_wal_replay_pause_state()?\n> > >\n> > > You're thinking to add the test like the following?\n> > > #1. Pause the recovery\n> > > #2. Confirm that pg_get_wal_replay_pause_state() returns 'paused'\n> > > #3. Trigger standby promotion\n> > > #4. Confirm that pg_get_wal_replay_pause_state() returns 'not paused'\n> > >\n> > > It seems not easy to do the test #4 stably because\n> > > pg_get_wal_replay_pause_state() needs to be executed\n> > > before the promotion finishes.\n> >\n> > Even for #2, we can not ensure that whether it will be 'paused' or\n> > 'pause requested'.\n>\n> We often use poll_query_until() to make sure some desired state is\n> reached. And, as Michael suggested, the function\n> pg_get_wal_replay_pause_state() still works at the time of\n> recovery_end_command. So a bit more detailed steps are:\n\nRight, if we are polling for the state change in #2 then that makes sense.\n\n> #0. 
Equip the server with recovery_end_command that waits for some\n> trigger then start the server.\n> #1. Pause the recovery\n> #2. Wait until pg_get_wal_replay_pause_state() returns 'paused'\n> #3. Trigger standby promotion\n> #4. Wait until pg_get_wal_replay_pause_state() returns 'not paused'\n> #5. Trigger recovery_end_command to let promotion proceed.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 13:29:31 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "On 2021/05/19 16:43, Kyotaro Horiguchi wrote:\n> +1 for adding some tests for pg_wal_replay_pause() but the test seems\n> like checking only that pg_get_wal_replay_pause_state() returns the\n> expected state value. Don't we need to check that the recovery is\n> actually paused and that the promotion happens at expected LSN?\n\nSounds good. Attached is the updated version of the patch.\nI added such checks into the test.\n\nBTW, while reading some recovery regression tests, I found that\n013_crash_restart.pl has \"use Time::HiRes qw(usleep)\" but it seems\nnot necessary. We can safely remove that? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 19 May 2021 19:24:09 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/19 19:24, Fujii Masao wrote:\n> \n> \n> On 2021/05/19 16:43, Kyotaro Horiguchi wrote:\n>> +1 for adding some tests for pg_wal_replay_pause() but the test seems\n>> like checking only that pg_get_wal_replay_pause_state() returns the\n>> expected state value. Don't we need to check that the recovery is\n>> actually paused and that the promotion happens at expected LSN?\n> \n> Sounds good. Attached is the updated version of the patch.\n> I added such checks into the test.\n> \n> BTW, while reading some recovery regression tests, I found that\n> 013_crash_restart.pl has \"use Time::HiRes qw(usleep)\" but it seems\n> not necessary. We can safely remove that? Patch attached.\n\nBarring any objections, I'm thinking to commit these two patches.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Mon, 31 May 2021 12:52:54 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
},
{
"msg_contents": "Sorry for missing this.\n\nAt Mon, 31 May 2021 12:52:54 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> On 2021/05/19 19:24, Fujii Masao wrote:\n> > On 2021/05/19 16:43, Kyotaro Horiguchi wrote:\n> >> +1 for adding some tests for pg_wal_replay_pause() but the test seems\n> >> like checking only that pg_get_wal_replay_pause_state() returns the\n> >> expected state value. Don't we need to check that the recovery is\n> >> actually paused and that the promotion happens at expected LSN?\n> > Sounds good. Attached is the updated version of the patch.\n> > I added such checks into the test.\n\nThanks! Looks fine. The paused-state test may get false-success but it\nwould be sufficient that it detects the problem in most cases.\n\n> > BTW, while reading some recovery regression tests, I found that\n> > 013_crash_restart.pl has \"use Time::HiRes qw(usleep)\" but it seems\n> > not necessary. We can safely remove that? Patch attached.\n\nLooks just fine for the removal of HiRes usage. All other use of\nHiRes are accompanied by a usleep usage.\n\n> Barring any objections, I'm thinking to commit these two patches.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 31 May 2021 17:18:42 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused'\n while a promotion is ongoing."
},
{
"msg_contents": "\n\nOn 2021/05/31 17:18, Kyotaro Horiguchi wrote:\n> Sorry for missing this.\n> \n> At Mon, 31 May 2021 12:52:54 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>\n>> On 2021/05/19 19:24, Fujii Masao wrote:\n>>> On 2021/05/19 16:43, Kyotaro Horiguchi wrote:\n>>>> +1 for adding some tests for pg_wal_replay_pause() but the test seems\n>>>> like checking only that pg_get_wal_replay_pause_state() returns the\n>>>> expected state value. Don't we need to check that the recovery is\n>>>> actually paused and that the promotion happens at expected LSN?\n>>> Sounds good. Attached is the updated version of the patch.\n>>> I added such checks into the test.\n> \n> Thanks! Looks fine. The paused-state test may get false-success but it\n> would be sufficient that it detects the problem in most cases.\n> \n>>> BTW, while reading some recovery regression tests, I found that\n>>> 013_crash_restart.pl has \"use Time::HiRes qw(usleep)\" but it seems\n>>> not necessary. We can safely remove that? Patch attached.\n> \n> Looks just fine for the removal of HiRes usage. All other use of\n> HiRes are accompanied by a usleep usage.\n> \n>> Barring any objections, I'm thinking to commit these two patches.\n> \n> +1.\n\nThanks for the review! I pushed those two patches.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 2 Jun 2021 12:22:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_get_wal_replay_pause_state() should not return 'paused' while\n a promotion is ongoing."
}
]
[
{
"msg_contents": "Hi,\n\npg_attribute is one of the biggest table in a new cluster, and often the\nbiggest table in production clusters. Its size is also quite relevant in\nmemory, due to all the TupleDescs we allocate.\n\nI just noticed that the new attcompression increased the size not just\nby 1 byte, but by 4, due to padding. While an increase from 112 to 116\nbytes isn't the end of the world, it does seem worth considering using\nexisting unused bytes instead?\n\nIf we moved attcompression to all the other bool/char fields, we'd avoid\nthat size increase, as there's an existing 2 byte hole.\n\nOf course there's the argument that we shouldn't change the column order\nfor existing SELECT * queries, but the existing placement already does\n(the CATALOG_VARLEN columns follow).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 May 2021 13:48:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Mon, May 17, 2021 at 01:48:03PM -0700, Andres Freund wrote:\n> pg_attribute is one of the biggest table in a new cluster, and often the\n> biggest table in production clusters. Its size is also quite relevant in\n> memory, due to all the TupleDescs we allocate.\n> \n> I just noticed that the new attcompression increased the size not just\n> by 1 byte, but by 4, due to padding. While an increase from 112 to 116\n> bytes isn't the end of the world, it does seem worth considering using\n> existing unused bytes instead?\n\n+1\n\nFYI: attcompression was an OID until a few weeks before the feature was merged,\nand there were several issues related to that:\n aa25d1089 - fixed two issues\n 226e2be38\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 17 May 2021 16:05:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> If we moved attcompression to all the other bool/char fields, we'd avoid\n> that size increase, as there's an existing 2 byte hole.\n\n+1. Looks to me like its existing placement was according to the good\nold \"add new things at the end\" anti-pattern. It certainly isn't\nrelated to the adjacent fields.\n\nPutting it just after attalign seems like a reasonably sane choice\nfrom the standpoint of grouping things affecting physical storage;\nand as you say, that wins from the standpoint of using up alignment\npadding rather than adding more.\n\nPersonally I'd think the most consistent order in that area would\nbe attbyval, attalign, attstorage, attcompression; but perhaps it's\ntoo late to swap the order of attstorage and attalign.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 May 2021 17:06:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-17 17:06:32 -0400, Tom Lane wrote:\n> Putting it just after attalign seems like a reasonably sane choice\n> from the standpoint of grouping things affecting physical storage;\n> and as you say, that wins from the standpoint of using up alignment\n> padding rather than adding more.\n\nMakes sense to me.\n\n\n> Personally I'd think the most consistent order in that area would\n> be attbyval, attalign, attstorage, attcompression; but perhaps it's\n> too late to swap the order of attstorage and attalign.\n\nGiven that we've put in new fields in various positions on a fairly\nregular basis, I don't think swapping around attalign, attstorage would\ncause a meaningful amount of additional pain. Personally I don't have a\npreference for how these are ordered.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 17 May 2021 14:28:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Mon, May 17, 2021 at 02:28:57PM -0700, Andres Freund wrote:\n> On 2021-05-17 17:06:32 -0400, Tom Lane wrote:\n>> Putting it just after attalign seems like a reasonably sane choice\n>> from the standpoint of grouping things affecting physical storage;\n>> and as you say, that wins from the standpoint of using up alignment\n>> padding rather than adding more.\n> \n> Makes sense to me.\n\n+1.\n\n>> Personally I'd think the most consistent order in that area would\n>> be attbyval, attalign, attstorage, attcompression; but perhaps it's\n>> too late to swap the order of attstorage and attalign.\n> \n> Given that we've put in new fields in various positions on a fairly\n> regular basis, I don't think swapping around attalign, attstorage would\n> cause a meaningful amount of additional pain. Personally I don't have a\n> preference for how these are ordered.\n\nIf you switch attcompression, I'd say to go for the others while on\nit. It would not be the first time in history there is a catalog\nversion bump between betas.\n--\nMichael",
"msg_date": "Tue, 18 May 2021 10:24:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Tue, May 18, 2021 at 10:24:36AM +0900, Michael Paquier wrote:\n> If you switch attcompression, I'd say to go for the others while on\n> it. It would not be the first time in history there is a catalog\n> version bump between betas.\n\nThis is still an open item. FWIW, I can get behind the reordering\nproposed by Tom for the consistency gained with pg_type, leading to\nthe attached to reduce the size of FormData_pg_attribute from 116b to\n112b.\n--\nMichael",
"msg_date": "Fri, 21 May 2021 15:32:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Fri, May 21, 2021 at 12:02 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 18, 2021 at 10:24:36AM +0900, Michael Paquier wrote:\n> > If you switch attcompression, I'd say to go for the others while on\n> > it. It would not be the first time in history there is a catalog\n> > version bump between betas.\n>\n> This is still an open item. FWIW, I can get behind the reordering\n> proposed by Tom for the consistency gained with pg_type, leading to\n> the attached to reduce the size of FormData_pg_attribute from 116b to\n> 112b.\n\nThis makes sense, thanks for working on this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 May 2021 12:25:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> This is still an open item. FWIW, I can get behind the reordering\n> proposed by Tom for the consistency gained with pg_type, leading to\n> the attached to reduce the size of FormData_pg_attribute from 116b to\n> 112b.\n\nI think we need to do more than that. It's certainly not okay to\nleave catalogs.sgml out of sync with reality. And maybe I'm just\nan overly anal-retentive sort, but I think that code that manipulates\ntuples ought to match the declared field order if there's not some\nspecific reason to do otherwise. So that led me to the attached.\n\nIt was a good thing I went through this code, too, because I noticed\none serious bug (attcompression not checked in equalTupleDescs) and\nanother thing that looks like a bug: there are two places that set\nup attcompression depending on\n\n if (rel->rd_rel->relkind == RELKIND_RELATION ||\n rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n\nThis seems fairly nuts; in particular, why are matviews excluded?\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 May 2021 11:01:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-21 11:01:03 -0400, Tom Lane wrote:\n> It was a good thing I went through this code, too, because I noticed\n> one serious bug (attcompression not checked in equalTupleDescs) and\n> another thing that looks like a bug: there are two places that set\n> up attcompression depending on\n>\n> if (rel->rd_rel->relkind == RELKIND_RELATION ||\n> rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n>\n> This seems fairly nuts; in particular, why are matviews excluded?\n\nYea, that doesn't seem right. I was confused why this appears to work at\nall right now. It only does because REFRESH always inserts into a\nstaging table first - which is created as a normal table. For\nnon-concurrent refresh that relation's relfilenode is swapped with the\nMV's. For concurrent refresh we actually do insert into the MV - but we\nnever need to compress a datum at that point, because it'll already have\nbeen compressed during the insert into the temp table.\n\nI think there might something slightly off with concurrent refresh - the\nTEMPORARY diff table that is created doesn't use the matview's\ncompression settings. Which means all tuples need to be recompressed\nunnecessarily, if default_toast_compression differs from a column in the\nmaterialized view.\n\nSET default_toast_compression = 'lz4';\nDROP MATERIALIZED VIEW IF EXISTS wide_mv;\nCREATE MATERIALIZED VIEW wide_mv AS SELECT 1::int4 AS key, random() || string_agg(i::text, '') data FROM generate_series(1, 10000) g(i);CREATE UNIQUE INDEX ON wide_mv(key);\nALTER MATERIALIZED VIEW wide_mv ALTER COLUMN data SET COMPRESSION pglz;\nREFRESH MATERIALIZED VIEW CONCURRENTLY wide_mv;\n\nWith the SET COMPRESSION pglz I see the following compression calls:\n1) pglz in refresh_matview_datafill\n2) lz4 during temp table CREATE TEMP TABLE AS\n3) pglz during the INSERT into the matview\n\nWithout I only see 1) and 2).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 May 2021 13:54:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-21 11:01:03 -0400, Tom Lane wrote:\n> It was a good thing I went through this code, too, because I noticed\n> one serious bug (attcompression not checked in equalTupleDescs) and\n> another thing that looks like a bug:\n\nGrepping for attcompression while trying to understand the issue Tom\nreported I found a substantial, but transient, memory leak:\n\nDuring VACUUM FULL reform_and_rewrite_tuple() detoasts the old value if\nit was compressed with a different method, while in\nTopTransactionContext. There's nothing freeing that until\nTopTransactionContext ends - obviously not great for a large relation\nbeing VACUUM FULLed.\n\nSET default_toast_compression = 'lz4';\nDROP TABLE IF EXISTS wide CASCADE;\nCREATE TABLE wide(data text not null);\nINSERT INTO wide(data) SELECT random() || (SELECT string_agg(i::text, '') data FROM generate_series(1, 100000) g(i)) FROM generate_series(1, 1000);\n\n\\c\n\nSET client_min_messages = 'log';\nSET log_statement_stats = on;\nVACUUM FULL wide;\n...\nDETAIL: ! system usage stats:\n!\t0.836638 s user, 0.375344 s system, 1.268705 s elapsed\n!\t[2.502369 s user, 0.961681 s system total]\n!\t18052 kB max resident size\n!\t0/1789088 [0/3530048] filesystem blocks in/out\n!\t0/277 [0/205655] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t22/1 [55/6] voluntary/involuntary context switches\nLOCATION: ShowUsage, postgres.c:4886\nVACUUM\nTime: 1269.029 ms (00:01.269)\n\n\\c\nALTER TABLE wide ALTER COLUMN data SET COMPRESSION pglz;\nSET client_min_messages = 'log';\nSET log_statement_stats = on;\nVACUUM FULL wide;\n...\nDETAIL: ! 
system usage stats:\n!\t19.816867 s user, 0.493233 s system, 20.320711 s elapsed\n!\t[19.835995 s user, 0.493233 s system total]\n!\t491588 kB max resident size\n!\t0/656032 [0/656048] filesystem blocks in/out\n!\t0/287363 [0/287953] page faults/reclaims, 0 [0] swaps\n!\t0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n!\t1/24 [13/26] voluntary/involuntary context switches\n\nNote the drastically different \"max resident size\". This is with huge\npages (removing s_b from RSS), but it's visible even without.\n\n\nRandom fun note:\ntime for VACUUM FULL wide with recompression:\npglz->lz4: 3.2s\nlz4->pglz: 20.3s\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 May 2021 14:19:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "I wrote:\n> I think we need to do more than that. It's certainly not okay to\n> leave catalogs.sgml out of sync with reality. And maybe I'm just\n> an overly anal-retentive sort, but I think that code that manipulates\n> tuples ought to match the declared field order if there's not some\n> specific reason to do otherwise. So that led me to the attached.\n\nPushed that after another round of review.\n\n> ... there are two places that set\n> up attcompression depending on\n> if (rel->rd_rel->relkind == RELKIND_RELATION ||\n> rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> This seems fairly nuts; in particular, why are matviews excluded?\n\nWhile I've not actually tested this, it seems to me that we could\njust drop these relkind tests altogether. It won't hurt anything\nto set up attcompression in relation descriptors where it'll never\nbe consulted.\n\nHowever, the more I looked at that code the less I liked it.\nI think the way that compression selection is handled for indexes,\nie consult default_toast_compression on-the-fly, is *far* saner\nthan what is currently implemented for tables. So I think we\nshould redefine attcompression as \"ID of a compression method\nto use, or \\0 to select the prevailing default. Ignored if\nattstorage does not permit the use of compression\". This would\nresult in approximately 99.44% of all columns just having zero\nattcompression, greatly simplifying the tupdesc setup code, and\nalso making it much easier to flip an installation over to a\ndifferent preferred compression method.\n\nI'm happy to prepare a patch if that sketch sounds sane.\n\n(Note that the existing comment claiming that attcompression\n\"Must be InvalidCompressionMethod if and only if typstorage is\n'plain' or 'external'\" is a flat out lie in any case; *both*\ndirections of that claim are wrong.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 May 2021 12:25:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Fri, May 21, 2021 at 02:19:29PM -0700, Andres Freund wrote:\n> During VACUUM FULL reform_and_rewrite_tuple() detoasts the old value if\n> it was compressed with a different method, while in\n> TopTransactionContext. There's nothing freeing that until\n> TopTransactionContext ends - obviously not great for a large relation\n> being VACUUM FULLed.\n\nYeah, that's not good. The confusion comes from the fact that we'd\njust overwrite the values without freeing them out if recompressed, so\nsomething like the attached would be fine?\n--\nMichael",
"msg_date": "Mon, 24 May 2021 13:09:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Sun, May 23, 2021 at 12:25:10PM -0400, Tom Lane wrote:\n> While I've not actually tested this, it seems to me that we could\n> just drop these relkind tests altogether. It won't hurt anything\n> to set up attcompression in relation descriptors where it'll never\n> be consulted.\n\nWouldn't it be confusing to set up attcompression for relkinds without\nstorage, like views?\n\n> However, the more I looked at that code the less I liked it.\n> I think the way that compression selection is handled for indexes,\n> ie consult default_toast_compression on-the-fly, is *far* saner\n> than what is currently implemented for tables. So I think we\n> should redefine attcompression as \"ID of a compression method\n> to use, or \\0 to select the prevailing default. Ignored if\n> attstorage does not permit the use of compression\". This would\n> result in approximately 99.44% of all columns just having zero\n> attcompression, greatly simplifying the tupdesc setup code, and\n> also making it much easier to flip an installation over to a\n> different preferred compression method.\n\nWould there be any impact when it comes to CTAS or matviews where the\ncurrent code assumes that the same compression method as the one from\nthe original value gets used, making the creation of the new relation\ncheaper because there is less de-toasting and re-toasting?\n--\nMichael",
"msg_date": "Mon, 24 May 2021 13:42:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, May 23, 2021 at 12:25:10PM -0400, Tom Lane wrote:\n>> While I've not actually tested this, it seems to me that we could\n>> just drop these relkind tests altogether. It won't hurt anything\n>> to set up attcompression in relation descriptors where it'll never\n>> be consulted.\n\n> Wouldn't it be confusing to set up attcompression for relkinds without\n> storage, like views?\n\nNo more so than setting up attstorage, surely.\n\n>> ... I think we\n>> should redefine attcompression as \"ID of a compression method\n>> to use, or \\0 to select the prevailing default. Ignored if\n>> attstorage does not permit the use of compression\". This would\n>> result in approximately 99.44% of all columns just having zero\n>> attcompression, greatly simplifying the tupdesc setup code, and\n>> also making it much easier to flip an installation over to a\n>> different preferred compression method.\n\n> Would there be any impact when it comes to CTAS or matviews where the\n> current code assumes that the same compression method as the one from\n> the original value gets used, making the creation of the new relation\n> cheaper because there is less de-toasting and re-toasting?\n\nI'd still envision copying the source attcompression setting in such\ncases. I guess the question is (a) does that code path actually\nrecompress values that have the \"wrong\" compression, and (b) if it\ndoes, is that wrong? If you think (a) is correct behavior, then\nI don't see why refreshing after changing default_toast_compression\nshouldn't cause that to happen.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 01:05:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Fri, May 21, 2021 at 8:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>\n> if (rel->rd_rel->relkind == RELKIND_RELATION ||\n> rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n>\n> This seems fairly nuts; in particular, why are matviews excluded?\n\nThe matviews are excluded only in \"ATExecAddColumn()\" right? But we\ncan not ALTER TABLE ADD COLUMN to matviews right? I agree that even\nif we don't skip matview it will not create any issue as matview will\nnot reach here. Am I missing something?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 May 2021 11:25:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Mon, May 24, 2021 at 9:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, May 21, 2021 at 02:19:29PM -0700, Andres Freund wrote:\n> > During VACUUM FULL reform_and_rewrite_tuple() detoasts the old value if\n> > it was compressed with a different method, while in\n> > TopTransactionContext. There's nothing freeing that until\n> > TopTransactionContext ends - obviously not great for a large relation\n> > being VACUUM FULLed.\n>\n> Yeah, that's not good. The confusion comes from the fact that we'd\n> just overwrite the values without freeing them out if recompressed, so\n> something like the attached would be fine?\n\n /* Be sure to null out any dropped columns */\n for (i = 0; i < newTupDesc->natts; i++)\n {\n+ tup_values[i] = values[i];\n+\n if (TupleDescAttr(newTupDesc, i)->attisdropped)\n isnull[i] = true;\n\nI think you don't need to initialize tup_values[i] with the\nvalues[i];, other than that looks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 May 2021 11:32:22 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:32:22AM +0530, Dilip Kumar wrote:\n> I think you don't need to initialize tup_values[i] with the\n> values[i];, other than that looks fine to me.\n\nYou mean because heap_deform_tuple() does this job, right? Sure.\n--\nMichael",
"msg_date": "Mon, 24 May 2021 17:53:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Mon, May 24, 2021 at 2:23 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, May 24, 2021 at 11:32:22AM +0530, Dilip Kumar wrote:\n> > I think you don't need to initialize tup_values[i] with the\n> > values[i];, other than that looks fine to me.\n>\n> You mean because heap_deform_tuple() does this job, right? Sure.\n\nSorry, I just noticed that my statement was incomplete in last mail,\nwhat I wanted to say is that if the attisdropped then we can avoid\n\"tup_values[i] = values[i]\", so in short we can move \"tup_values[i] =\nvalues[i]\" in the else part of \" if (TupleDescAttr(newTupDesc,\ni)->attisdropped)\" check.\n\nLike this.\n if (TupleDescAttr(newTupDesc, i)->attisdropped)\n isnull[i] = true;\n else\n tup_values[i] = values[i];\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 May 2021 14:46:11 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Mon, May 24, 2021 at 02:46:11PM +0530, Dilip Kumar wrote:\n> Like this.\n> if (TupleDescAttr(newTupDesc, i)->attisdropped)\n> isnull[i] = true;\n> else\n> tup_values[i] = values[i];\n\nThat would work. Your suggestion, as I understood it first, makes the\ncode simpler by not using tup_values at all as the set of values[] is\nfilled when the values and nulls are extracted. So I have gone with\nthis simplification, and applied the patch (moved a bit the comments\nwhile on it).\n--\nMichael",
"msg_date": "Tue, 25 May 2021 14:46:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Tue, 25 May 2021 at 11:16 AM, Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Mon, May 24, 2021 at 02:46:11PM +0530, Dilip Kumar wrote:\n> > Like this.\n> > if (TupleDescAttr(newTupDesc, i)->attisdropped)\n> > isnull[i] = true;\n> > else\n> > tup_values[i] = values[i];\n>\n> That would work. Your suggestion, as I understood it first, makes the\n> code simpler by not using tup_values at all as the set of values[] is\n> filled when the values and nulls are extracted. So I have gone with\n> this simplification, and applied the patch (moved a bit the comments\n> while on it).\n\n\nPerfect. That looks much better.\n\n>\n\nOn Tue, 25 May 2021 at 11:16 AM, Michael Paquier <michael@paquier.xyz> wrote:On Mon, May 24, 2021 at 02:46:11PM +0530, Dilip Kumar wrote:\n> Like this.\n> if (TupleDescAttr(newTupDesc, i)->attisdropped)\n> isnull[i] = true;\n> else\n> tup_values[i] = values[i];\n\nThat would work. Your suggestion, as I understood it first, makes the\ncode simpler by not using tup_values at all as the set of values[] is\nfilled when the values and nulls are extracted. So I have gone with\nthis simplification, and applied the patch (moved a bit the comments\nwhile on it).Perfect. That looks much better.",
"msg_date": "Tue, 25 May 2021 11:45:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Sun, May 23, 2021 at 12:25:10PM -0400, Tom Lane wrote:\n> However, the more I looked at that code the less I liked it.\n> I think the way that compression selection is handled for indexes,\n> ie consult default_toast_compression on-the-fly, is *far* saner\n> than what is currently implemented for tables. So I think we\n> should redefine attcompression as \"ID of a compression method\n> to use, or \\0 to select the prevailing default. Ignored if\n> attstorage does not permit the use of compression\".\n\n+1\n\nIt reminds me of reltablespace, which is stored as 0 to mean the database's\ndefault tablespace.\n\nAlso, values are currently retoasted during vacuum full if their column's\ncurrent compression method doesn't match the value's old compression.\n\nBut it doesn't rewrite the column if the it used to use the default\ncompression, and the default was changed. I think your idea would handle that.\n\n-- \nJustin\n\nPS. I just ran into the memory leak that Andres reported and Michael fixed.\n\n\n",
"msg_date": "Tue, 25 May 2021 20:33:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Tue, May 25, 2021 at 08:33:47PM -0500, Justin Pryzby wrote:\n> It reminds me of reltablespace, which is stored as 0 to mean the database's\n> default tablespace.\n> \n> Also, values are currently retoasted during vacuum full if their column's\n> current compression method doesn't match the value's old compression.\n> \n> But it doesn't rewrite the column if the it used to use the default\n> compression, and the default was changed. I think your idea would handle that.\n\nAh, the parallel with reltablespace and default_tablespace at database\nlevel is a very good point. It is true that currently the code would\nassign attcompression to a non-zero value once the relation is defined\ndepending on default_toast_compression set for the database, but\nsetting it to 0 in this case would be really helpful to change the\ncompression methods of all the relations if doing something as crazy\nas a VACUUM FULL for this database. Count me as convinced.\n--\nMichael",
"msg_date": "Wed, 26 May 2021 10:57:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Ah, the parallel with reltablespace and default_tablespace at database\n> level is a very good point. It is true that currently the code would\n> assign attcompression to a non-zero value once the relation is defined\n> depending on default_toast_compression set for the database, but\n> setting it to 0 in this case would be really helpful to change the\n> compression methods of all the relations if doing something as crazy\n> as a VACUUM FULL for this database. Count me as convinced.\n\nHere's a draft patch series to address this.\n\n0001 removes the relkind checks I was questioning originally.\nAs expected, this results in zero changes in check-world results.\n\n0002 is the main change in the semantics of attcompression.\nThis does change the results of compression.sql, but in what\nseem to me to be expected ways: a column's compression option\nis now shown in \\d+ output only if you explicitly set it.\n\n0003 further removes pg_dump's special handling of\ndefault_toast_compression. I don't think we need that anymore.\nAFAICS its only effect would be to override the receiving server's\ndefault_toast_compression setting for dumped/re-loaded data, which\ndoes not seem like a behavior that anyone would want.\n\nLoose ends:\n\n* I've not reviewed the docs fully; there are likely some more\nthings that need updated.\n\n* As things stand here, once you've applied ALTER ... SET COMPRESSION\nto select a specific method, there is no way to undo that and go\nback to the use-the-default setting. All you can do is change to\nexplicitly select the other method. Should we invent \"ALTER ...\nSET COMPRESSION default\" or the like to cover that? (Since\nDEFAULT is a reserved word, that exact syntax might be a bit of\na pain to implement, but maybe we could think of another word.)\n\n* I find GetDefaultToastCompression() annoying. 
I do not think\nit is project style to invent trivial wrapper functions around\nGUC variable references: it buys nothing while requiring readers\nto remember one more name than they would otherwise. Since there\nare only two uses remaining, maybe this isn't very important either\nway, but I'm still inclined to flush it.\n\nComments?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 26 May 2021 11:13:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> * As things stand here, once you've applied ALTER ... SET COMPRESSION\n> to select a specific method, there is no way to undo that and go\n> back to the use-the-default setting. All you can do is change to\n> explicitly select the other method. Should we invent \"ALTER ...\n> SET COMPRESSION default\" or the like to cover that? (Since\n> DEFAULT is a reserved word, that exact syntax might be a bit of\n> a pain to implement, but maybe we could think of another word.)\n\nYes. Irreversible catalog changes are bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 11:17:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, May 26, 2021 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * As things stand here, once you've applied ALTER ... SET COMPRESSION\n>> to select a specific method, there is no way to undo that and go\n>> back to the use-the-default setting. All you can do is change to\n>> explicitly select the other method. Should we invent \"ALTER ...\n>> SET COMPRESSION default\" or the like to cover that?\n\n> Yes. Irreversible catalog changes are bad.\n\nHere's an add-on 0004 that does that, and takes care of assorted\nsilliness in the grammar and docs --- did you know that this patch\ncaused\n\talter table foo alter column bar set ;\nto be allowed?\n\nI think this is about ready to commit now (though I didn't yet nuke\nGetDefaultToastCompression).\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 26 May 2021 15:31:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:13:46AM -0400, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n> > Ah, the parallel with reltablespace and default_tablespace at database\n> > level is a very good point. It is true that currently the code would\n> > assign attcompression to a non-zero value once the relation is defined\n> > depending on default_toast_compression set for the database, but\n> > setting it to 0 in this case would be really helpful to change the\n> > compression methods of all the relations if doing something as crazy\n> > as a VACUUM FULL for this database. Count me as convinced.\n> \n> Here's a draft patch series to address this.\n> \n> 0001 removes the relkind checks I was questioning originally.\n> As expected, this results in zero changes in check-world results.\n> \n> 0002 is the main change in the semantics of attcompression.\n> This does change the results of compression.sql, but in what\n> seem to me to be expected ways: a column's compression option\n> is now shown in \\d+ output only if you explicitly set it.\n> \n> 0003 further removes pg_dump's special handling of\n> default_toast_compression. I don't think we need that anymore.\n> AFAICS its only effect would be to override the receiving server's\n> default_toast_compression setting for dumped/re-loaded data, which\n> does not seem like a behavior that anyone would want.\n> \n> Loose ends:\n> \n> * I've not reviewed the docs fully; there are likely some more\n> things that need updated.\n> \n> * As things stand here, once you've applied ALTER ... SET COMPRESSION\n> to select a specific method, there is no way to undo that and go\n> back to the use-the-default setting. All you can do is change to\n> explicitly select the other method. Should we invent \"ALTER ...\n> SET COMPRESSION default\" or the like to cover that? 
(Since\n> DEFAULT is a reserved word, that exact syntax might be a bit of\n> a pain to implement, but maybe we could think of another word.)\n> \n> * I find GetDefaultToastCompression() annoying. I do not think\n> it is project style to invent trivial wrapper functions around\n> GUC variable references: it buys nothing while requiring readers\n> to remember one more name than they would otherwise. Since there\n> are only two uses remaining, maybe this isn't very important either\n> way, but I'm still inclined to flush it.\n\n+1\n\nIt existed when default_toast_compression was a text string. Since e5595de03,\nit's an enum/int/char, and serves no purpose.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 26 May 2021 14:32:38 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "I wrote:\n> I think this is about ready to commit now (though I didn't yet nuke\n> GetDefaultToastCompression).\n\nHere's a bundled-up final version, in case anybody would prefer\nto review it that way.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 26 May 2021 16:11:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On 2021-May-26, Tom Lane wrote:\n\n> I wrote:\n> > I think this is about ready to commit now (though I didn't yet nuke\n> > GetDefaultToastCompression).\n> \n> Here's a bundled-up final version, in case anybody would prefer\n> to review it that way.\n\nLooks good to me.\n\nI tested the behavior with partitioned tables and it seems OK.\n\nIt would be good to have a test case in src/bin/pg_dump/t/002_pg_dump.pl\nfor the case ... and I find it odd that we don't seem to have anything\nfor the \"CREATE TABLE foo (LIKE sometab INCLUDING stuff)\" form of the\ncommand ... but neither of those seem the fault of this patch, and they\nboth work as [I think] is intended.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que la mentira s� existe y tu est�s mintiendo\" (G. Lama)\n\n\n",
"msg_date": "Wed, 26 May 2021 18:08:20 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Looks good to me.\n> I tested the behavior with partitioned tables and it seems OK.\n\nThanks for reviewing/testing!\n\n> It would be good to have a test case in src/bin/pg_dump/t/002_pg_dump.pl\n> for the case\n\nPersonally I won't touch 002_pg_dump.pl with a 10-foot pole, but if\nsomebody else wants to, have at it.\n\n> ... and I find it odd that we don't seem to have anything\n> for the \"CREATE TABLE foo (LIKE sometab INCLUDING stuff)\" form of the\n> command ... but neither of those seem the fault of this patch, and they\n> both work as [I think] is intended.\n\nHm, there's this in compression.sql:\n\n-- test LIKE INCLUDING COMPRESSION\nCREATE TABLE cmdata2 (LIKE cmdata1 INCLUDING COMPRESSION);\n\\d+ cmdata2\n\nOr did you mean the case with a partitioned table specifically?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 18:21:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On 2021-May-26, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > It would be good to have a test case in src/bin/pg_dump/t/002_pg_dump.pl\n> > for the case\n> \n> Personally I won't touch 002_pg_dump.pl with a 10-foot pole, but if\n> somebody else wants to, have at it.\n\nNod.\n\n> > ... and I find it odd that we don't seem to have anything\n> > for the \"CREATE TABLE foo (LIKE sometab INCLUDING stuff)\" form of the\n> > command ... but neither of those seem the fault of this patch, and they\n> > both work as [I think] is intended.\n> \n> Hm, there's this in compression.sql:\n> \n> -- test LIKE INCLUDING COMPRESSION\n> CREATE TABLE cmdata2 (LIKE cmdata1 INCLUDING COMPRESSION);\n> \\d+ cmdata2\n> \n> Or did you mean the case with a partitioned table specifically?\n\nAh, I guess that's sufficient. (The INCLUDING clause cannot be used to\ncreate a partition, actually.)\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"Now I have my system running, not a byte was off the shelf;\nIt rarely breaks and when it does I fix the code myself.\nIt's stable, clean and elegant, and lightning fast as well,\nAnd it doesn't cost a nickel, so Bill Gates can go to hell.\"\n\n\n",
"msg_date": "Wed, 26 May 2021 19:44:03 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 07:44:03PM -0400, Alvaro Herrera wrote:\n> On 2021-May-26, Tom Lane wrote:\n>> Personally I won't touch 002_pg_dump.pl with a 10-foot pole, but if\n>> somebody else wants to, have at it.\n> \n> Nod.\n\nYeah, having an extra test for partitioned tables would be a good\nidea.\n\n>> Hm, there's this in compression.sql:\n>> \n>> -- test LIKE INCLUDING COMPRESSION\n>> CREATE TABLE cmdata2 (LIKE cmdata1 INCLUDING COMPRESSION);\n>> \\d+ cmdata2\n>> \n>> Or did you mean the case with a partitioned table specifically?\n> \n> Ah, I guess that's sufficient. (The INCLUDING clause cannot be used to\n> create a partition, actually.)\n\n+column_compression:\n+ COMPRESSION ColId { $$ = $2; }\n+ | COMPRESSION DEFAULT { $$ =\npstrdup(\"default\"); }\nCould it be possible to have some tests for COMPRESSION DEFAULT? It\nseems to me that this should be documented as a supported keyword for\nCREATE/ALTER TABLE.\n\n --changing column storage should not impact the compression method\n --but the data should not be compressed\n ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;\n+ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;\nThis comment needs a refresh?\n--\nMichael",
"msg_date": "Thu, 27 May 2021 09:13:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Yeah, having an extra test for partitioned tables would be a good\n> idea.\n\nWe do have some coverage already via the pg_upgrade test.\n\n> Could it be possible to have some tests for COMPRESSION DEFAULT? It\n> seems to me that this should be documented as a supported keyword for\n> CREATE/ALTER TABLE.\n\nUh, I did do both of those, no? (The docs treat \"default\" as another\npossible value, not a keyword, even though it's a keyword internally.)\n\n> --changing column storage should not impact the compression method\n> --but the data should not be compressed\n> ALTER TABLE cmdata2 ALTER COLUMN f1 TYPE varchar;\n> +ALTER TABLE cmdata2 ALTER COLUMN f1 SET COMPRESSION pglz;\n> This comment needs a refresh?\n\nIt's correct AFAICS. Maybe it needs a bit of editing for clarity,\nbut I'm not sure how to make it better. The point is that the\nSET STORAGE just below disables compression of new values, no\nmatter what SET COMPRESSION says.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 20:29:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-25 14:46:27 +0900, Michael Paquier wrote:\n> That would work. Your suggestion, as I understood it first, makes the\n> code simpler by not using tup_values at all as the set of values[] is\n> filled when the values and nulls are extracted. So I have gone with\n> this simplification, and applied the patch (moved a bit the comments\n> while on it).\n\nHm. memsetting values_free() to zero repeatedly isn't quite free, nor is\niterating over all columns one more time. Note that values/isnull are\npassed in, and allocated with an accurate size, so it's a bit odd to\nthen do a pessimally sized stack allocation. Efficiency aside, that just\nseems a bit weird?\n\nThe efficiency bit is probably going to be swamped by the addition of\nthe compression handling, given the amount of additional work we're now\ndoing in in reform_and_rewrite_tuple(). I wonder if we should check how\nmuch slower a VACUUM FULL of a table with a few varlena columns has\ngotten vs 13.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 17:31:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The efficiency bit is probably going to be swamped by the addition of\n> the compression handling, given the amount of additional work we're now\n> doing in in reform_and_rewrite_tuple().\n\nOnly if the user has explicitly requested a change of compression, no?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 20:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 08:35:46PM -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The efficiency bit is probably going to be swamped by the addition of\n> > the compression handling, given the amount of additional work we're now\n> > doing in in reform_and_rewrite_tuple().\n> \n> Only if the user has explicitly requested a change of compression, no?\n\nAndres' point is that we'd still initialize and run through\nvalues_free at the end of reform_and_rewrite_tuple() for each tuple\neven if there no need to do so. Well, we could control the\ninitialization and the free() checks at the end of the routine if we\nknow that there has been at least one detoasted value, at the expense\nof making the code a bit less clear, of course.\n--\nMichael",
"msg_date": "Thu, 27 May 2021 10:00:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-26 20:35:46 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The efficiency bit is probably going to be swamped by the addition of\n> > the compression handling, given the amount of additional work we're now\n> > doing in in reform_and_rewrite_tuple().\n> \n> Only if the user has explicitly requested a change of compression, no?\n\nOh, it'll definitely be more expensive in that case - but that seems\nfair game. What I was wondering about was whether VACUUM FULL would be\nmeasurably slower, because we'll now call toast_get_compression_id() on\neach varlena datum. It's pretty easy for VACUUM FULL to be CPU bound\nalready, and presumably this'll add a bit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 18:54:15 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 06:54:15PM -0700, Andres Freund wrote:\n> Oh, it'll definitely be more expensive in that case - but that seems\n> fair game. What I was wondering about was whether VACUUM FULL would be\n> measurably slower, because we'll now call toast_get_compression_id() on\n> each varlena datum. It's pretty easy for VACUUM FULL to be CPU bound\n> already, and presumably this'll add a bit.\n\nThis depends on the number of attributes, but I do see an extra 0.5%\n__memmove_avx_unaligned_erms in reform_and_rewrite_tuple() for a\nnormal VACUUM FULL with a 1-int-column relation on a perf profile,\nwith rewrite_heap_tuple eating most of it as in the past, so that's\nwithin the noise bandwidth if you measure the runtime. What would be\nthe worst case here, a table with one text column made of non-NULL\nstill very short values?\n--\nMichael",
"msg_date": "Thu, 27 May 2021 11:07:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-26 18:54:15 -0700, Andres Freund wrote:\n> On 2021-05-26 20:35:46 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > The efficiency bit is probably going to be swamped by the addition of\n> > > the compression handling, given the amount of additional work we're now\n> > > doing in in reform_and_rewrite_tuple().\n> >\n> > Only if the user has explicitly requested a change of compression, no?\n>\n> Oh, it'll definitely be more expensive in that case - but that seems\n> fair game. What I was wondering about was whether VACUUM FULL would be\n> measurably slower, because we'll now call toast_get_compression_id() on\n> each varlena datum. It's pretty easy for VACUUM FULL to be CPU bound\n> already, and presumably this'll add a bit.\n>\n\nCREATE UNLOGGED TABLE vacme_text(t01 text not null default 't01',t02 text not null default 't02',t03 text not null default 't03',t04 text not null default 't04',t05 text not null default 't05',t06 text not null default 't06',t07 text not null default 't07',t08 text not null default 't08',t09 text not null default 't09',t10 text not null default 't10');\nCREATE UNLOGGED TABLE vacme_int(i1 int not null default '1',i2 int not null default '2',i3 int not null default '3',i4 int not null default '4',i5 int not null default '5',i6 int not null default '6',i7 int not null default '7',i8 int not null default '8',i9 int not null default '9',i10 int not null default '10');\nINSERT INTO vacme_text SELECT FROM generate_series(1, 10000000);\nINSERT INTO vacme_int SELECT FROM generate_series(1, 10000000);\n\nI ran 10 VACUUM FULLs on each, chose the shortest time:\n\nunmodified\ntext: 3562ms\nint: 3037ms\n\nafter ifdefing out the compression handling:\ntext: 3175ms (x 0.88)\nint: 2894ms (x 0.95)\n\nThat's not *too* bad, but also not nothing....\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 19:14:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-27 11:07:53 +0900, Michael Paquier wrote:\n> This depends on the number of attributes, but I do see an extra 0.5%\n> __memmove_avx_unaligned_erms in reform_and_rewrite_tuple() for a\n> normal VACUUM FULL with a 1-int-column relation on a perf profile,\n> with rewrite_heap_tuple eating most of it as in the past, so that's\n> within the noise bandwidth if you measure the runtime.\n> What would be the worst case here, a table with one text column made\n> of non-NULL still very short values?\n\nI think you need a bunch of columns to see it, like in the benchmark I\njust posted - I didn't test any other number of columns than 10 though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 19:24:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> That's not *too* bad, but also not nothing....\n\nThe memsets seem to be easy to get rid of. memset the array\nto zeroes *once* before entering the per-tuple loop. Then,\nin the loop that looks for stuff to pfree, reset any entries\nthat are found to be set, thereby returning the array to all\nzeroes for the next iteration.\n\nI\"m having a hard time though believing that the memset is the\nmain problem. I'd think the pfree search loop is at least as\nexpensive. Maybe skip that when not useful, by having a single\nbool flag remembering whether any columns got detoasted in this\nrow?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 22:43:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-26 22:43:42 -0400, Tom Lane wrote:\n> The memsets seem to be easy to get rid of. memset the array\n> to zeroes *once* before entering the per-tuple loop. Then,\n> in the loop that looks for stuff to pfree, reset any entries\n> that are found to be set, thereby returning the array to all\n> zeroes for the next iteration.\n\n> I\"m having a hard time though believing that the memset is the\n> main problem. I'd think the pfree search loop is at least as\n> expensive. Maybe skip that when not useful, by having a single\n> bool flag remembering whether any columns got detoasted in this\n> row?\n\nYea, I tested that - it does help in the integer case. But the bigger\ncontributors are the loop over the attributes, and especially the access\nto the datum's compression method. Particularly the latter seems hard to\navoid.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 20:21:05 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Yea, I tested that - it does help in the integer case. But the bigger\n> contributors are the loop over the attributes, and especially the access\n> to the datum's compression method. Particularly the latter seems hard to\n> avoid.\n\nSo maybe we should just dump the promise that VACUUM FULL will recompress\neverything? I'd be in favor of that actually, because it seems 100%\noutside the charter of either VACUUM FULL or CLUSTER.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 23:34:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:34:53PM -0400, Tom Lane wrote:\n> So maybe we should just dump the promise that VACUUM FULL will recompress\n> everything? I'd be in favor of that actually, because it seems 100%\n> outside the charter of either VACUUM FULL or CLUSTER.\n\nHmm. You are right that by default this may not be worth the extra\ncost. We could make that easily an option, though, for users ready to\naccept this cost. And that could be handy when it comes to a\ndatabase-wise VACUUM.\n--\nMichael",
"msg_date": "Thu, 27 May 2021 13:04:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 26, 2021 at 11:34:53PM -0400, Tom Lane wrote:\n>> So maybe we should just dump the promise that VACUUM FULL will recompress\n>> everything? I'd be in favor of that actually, because it seems 100%\n>> outside the charter of either VACUUM FULL or CLUSTER.\n\n> Hmm. You are right that by default this may not be worth the extra\n> cost. We could make that easily an option, though, for users ready to\n> accept this cost. And that could be handy when it comes to a\n> database-wise VACUUM.\n\nAFAIR, there are zero promises about how effective, or when effective,\nchanges in SET STORAGE will be. And the number of complaints about\nthat has also been zero. So I'm not sure why we need to do more for\nSET COMPRESSION. Especially since I'm unconvinced that recompressing\neverything just to recompress everything would *ever* be worthwhile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 00:11:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> AFAIR, there are zero promises about how effective, or when effective,\n> changes in SET STORAGE will be. And the number of complaints about\n> that has also been zero. So I'm not sure why we need to do more for\n> SET COMPRESSION. Especially since I'm unconvinced that recompressing\n> everything just to recompress everything would *ever* be worthwhile.\n\nI think it is good to have *some* way of ensuring that what you want\nthe system to do, it is actually doing. If we have not a single\noperation in the system anywhere that can force recompression, someone\nwho actually cares will be left with no option but a dump and reload.\nThat is probably both a whole lot slower than something in the server\nitself and also a pretty silly thing to have to tell people to do.\n\nIf it helps, I'd be perfectly fine with having this be an *optional*\nbehavior for CLUSTER or VACUUM FULL, depending on some switch. Or we\ncan devise another way for the user to make it happen. But we\nshouldn't just be setting a policy that users are not allowed to care\nwhether their data is actually compressed using the compression method\nthey specified.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 07:58:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, May 27, 2021 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> AFAIR, there are zero promises about how effective, or when effective,\n>> changes in SET STORAGE will be. And the number of complaints about\n>> that has also been zero. So I'm not sure why we need to do more for\n>> SET COMPRESSION. Especially since I'm unconvinced that recompressing\n>> everything just to recompress everything would *ever* be worthwhile.\n\n> I think it is good to have *some* way of ensuring that what you want\n> the system to do, it is actually doing. If we have not a single\n> operation in the system anywhere that can force recompression, someone\n> who actually cares will be left with no option but a dump and reload.\n> That is probably both a whole lot slower than something in the server\n> itself and also a pretty silly thing to have to tell people to do.\n\n[ shrug... ] I think the history of the SET STORAGE option teaches us\nthat there is no such requirement, and you're inventing a scenario that\ndoesn't exist in the real world.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 09:34:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, May 27, 2021 at 12:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> AFAIR, there are zero promises about how effective, or when effective,\n> >> changes in SET STORAGE will be. And the number of complaints about\n> >> that has also been zero. So I'm not sure why we need to do more for\n> >> SET COMPRESSION. Especially since I'm unconvinced that recompressing\n> >> everything just to recompress everything would *ever* be worthwhile.\n>\n> > I think it is good to have *some* way of ensuring that what you want\n> > the system to do, it is actually doing. If we have not a single\n> > operation in the system anywhere that can force recompression, someone\n> > who actually cares will be left with no option but a dump and reload.\n> > That is probably both a whole lot slower than something in the server\n> > itself and also a pretty silly thing to have to tell people to do.\n>\n> [ shrug... ] I think the history of the SET STORAGE option teaches us\n> that there is no such requirement, and you're inventing a scenario that\n> doesn't exist in the real world.\n\nBut can we compare SET STORAGE with SET compression? I mean storage\njust controls how the data are stored internally and there is no\nexternal dependency. But if we see the compression it will have a\ndependency on the external library. So if the user wants to get rid\nof the dependency on the external library then IMHO, there should be\nsome way to do it by recompressing all the data.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 19:48:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > [ shrug... ] I think the history of the SET STORAGE option teaches us\n> > that there is no such requirement, and you're inventing a scenario that\n> > doesn't exist in the real world.\n>\n> But can we compare SET STORAGE with SET compression? I mean storage\n> just controls how the data are stored internally and there is no\n> external dependency. But if we see the compression it will have a\n> dependency on the external library. So if the user wants to get rid\n> of the dependency on the external library then IMHO, there should be\n> some way to do it by recompressing all the data.\n\nTBH, I'm more concerned about the other direction. Surely someone who\nupgrades from an existing release to v14 and sets their compression\nmethod to lz4 is going to want a way of actually converting their data\nto using lz4. To say that nobody cares about that is to deem the\nfeature useless. Maybe that's what Tom thinks, but it's not what I\nthink.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 10:25:17 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, May 27, 2021 at 10:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>> [ shrug... ] I think the history of the SET STORAGE option teaches us\n>>> that there is no such requirement, and you're inventing a scenario that\n>>> doesn't exist in the real world.\n\n>> But can we compare SET STORAGE with SET compression? I mean storage\n>> just controls how the data are stored internally and there is no\n>> external dependency. But if we see the compression it will have a\n>> dependency on the external library. So if the user wants to get rid\n>> of the dependency on the external library then IMHO, there should be\n>> some way to do it by recompressing all the data.\n\n> TBH, I'm more concerned about the other direction. Surely someone who\n> upgrades from an existing release to v14 and sets their compression\n> method to lz4 is going to want a way of actually converting their data\n> to using lz4. To say that nobody cares about that is to deem the\n> feature useless. Maybe that's what Tom thinks, but it's not what I\n> think.\n\nWhat I'm hearing is a whole lot of hypothesizing and zero evidence of\nactual field requirements. On the other side of the coin, we've already\nwasted significant person-hours on fixing this feature's memory leakage,\nand now people are proposing to expend more effort on solving^Wpapering\nover its performance issues by adding yet more user-visible complication.\nIt's already adding too much user-visible complication IMO --- I know\nbecause I was just copy-editing the documentation about that yesterday.\n\nI say it's time to stop the bleeding and rip it out. When and if\nthere are actual field requests to have a way to do this, we can\ndiscuss what's the best way to respond to those requests. Hacking\nVACUUM probably isn't the best answer, anyway. 
But right now,\nwe are past feature freeze, and I think we ought to jettison this\none rather than quickly kluge something.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 10:39:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I'm hearing is a whole lot of hypothesizing and zero evidence of\n> actual field requirements. On the other side of the coin, we've already\n> wasted significant person-hours on fixing this feature's memory leakage,\n> and now people are proposing to expend more effort on solving^Wpapering\n> over its performance issues by adding yet more user-visible complication.\n> It's already adding too much user-visible complication IMO --- I know\n> because I was just copy-editing the documentation about that yesterday.\n>\n> I say it's time to stop the bleeding and rip it out. When and if\n> there are actual field requests to have a way to do this, we can\n> discuss what's the best way to respond to those requests. Hacking\n> VACUUM probably isn't the best answer, anyway. But right now,\n> we are past feature freeze, and I think we ought to jettison this\n> one rather than quickly kluge something.\n\nThanks for sharing your thoughts. -1 from me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 14:21:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On 2021-May-27, Tom Lane wrote:\n\n> I say it's time to stop the bleeding and rip it out. When and if\n> there are actual field requests to have a way to do this, we can\n> discuss what's the best way to respond to those requests. Hacking\n> VACUUM probably isn't the best answer, anyway. But right now,\n> we are past feature freeze, and I think we ought to jettison this\n> one rather than quickly kluge something.\n\nSorry, I'm unclear on exactly what are you proposing. Are you proposing\nto rip out the fact that VACUUM FULL promises to recompress everything,\nor are you proposing to rip out the whole attcompression feature?\n\nAbsolute -1 on the latter from me. Pluggable compression has taken\nyears to get to this point, it certainly won't do to give that up.\n\nNow about the former. If we do think that recompressing causes an\nunacceptable 10% slowdown for every single VACUUM FULLs, then yeah we\nshould discuss changing that behavior -- maybe remove promises of\nrecompression and wait for pg15 to add \"VACUUM (RECOMPRESS)\" or\nsimilar.\n\nIf it's a 10% slowdown of the only best times (variability unspecified)\nand only in corner cases (unlogged tables with no indexes that fit in\nshared buffers), then I don't think we should bother.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\n\n\n",
"msg_date": "Thu, 27 May 2021 15:34:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 7:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> TBH, I'm more concerned about the other direction. Surely someone who\n> upgrades from an existing release to v14 and sets their compression\n> method to lz4 is going to want a way of actually converting their data\n> to using lz4.\n\nYour argument would be more convincing (at least to me) if we really\ndid expect users to want to pick and choose, based on natural\nvariations in datasets that make switching to *either* potentially\nyield a real benefit. It is my understanding that lz4 is pretty much\nsuperior to pglz by every relevant measure, though, so I'm not sure\nthat that argument can be made. At the same time, users tend to only\ncare specifically about things that are real step changes -- which I\ndon't think this qualifies as. Users will go out of their way to get one of\nthose, but otherwise won't bother.\n\nPerhaps there is a practical argument in favor of VACUUM FULL reliably\nrecompressing using lz4 on upgrade, where that's the user's stated\npreference. It's not self-evident that VACUUM FULL must or even should\ndo that, at least to me. I'm not suggesting that there must not be\nsuch an argument. Just that I don't think that anybody has made such\nan argument.\n\n> To say that nobody cares about that is to deem the\n> feature useless. Maybe that's what Tom thinks, but it's not what I\n> think.\n\nI don't think that follows at all.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 27 May 2021 12:57:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Sorry, I'm unclear on exactly what are you proposing. Are you proposing\n> to rip out the fact that VACUUM FULL promises to recompress everything,\n> or are you proposing to rip out the whole attcompression feature?\n\nJust the former.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 16:10:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Now about the former. If we do think that recompressing causes an\n> unacceptable 10% slowdown for every single VACUUM FULLs, then yeah we\n> should discuss changing that behavior -- maybe remove promises of\n> recompression and wait for pg15 to add \"VACUUM (RECOMPRESS)\" or\n> similar.\n> If it's a 10% slowdown of the only best times (variability unspecified)\n> and only in corner cases (unlogged tables with no indexes that fit in\n> shared buffers), then I don't think we should bother.\n\nBTW, perhaps I should clarify my goal here: it's to cut off expending\nfurther effort on this feature during v14. If we can decide that the\nexisting performance situation is acceptable, I'm content with that\ndecision. But if we're to start designing new user-visible behavior to\nsatisfy performance objections, then I'd prefer to remove this VACUUM\nbehavior altogether for now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 16:17:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, May 27, 2021 at 7:25 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>> To say that nobody cares about that is to deem the\n>> feature useless. Maybe that's what Tom thinks, but it's not what I\n>> think.\n\n> I don't think that follows at all.\n\nYeah. My belief here is that users might bother to change\ndefault_toast_compression, or that we might do it for them in a few\nyears, but the gains from doing so are going to be only incremental.\nThat being the case, most DBAs will be content to allow the older\ncompression method to age out of their databases through routine row\nupdates. The idea that somebody is going to be excited enough about\nthis to run a downtime-inducing VACUUM FULL doesn't really pass the\nsmell test.\n\nThat doesn't make LZ4 compression useless, by any means, but it does\nsuggest that we shouldn't be adding overhead to VACUUM FULL for the\npurpose of easing instantaneous switchovers.\n\nI'll refrain from re-telling old war stories about JPEG/GIF/PNG, but\nI do have real-world experience with compression algorithm changes.\nIME you need an integer-multiples type of improvement to get peoples'\nattention, and LZ4 isn't going to offer that, except maybe in\ncherry-picked examples.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 16:29:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Tue, May 25, 2021 at 08:33:47PM -0500, Justin Pryzby wrote:\n> On Sun, May 23, 2021 at 12:25:10PM -0400, Tom Lane wrote:\n> > However, the more I looked at that code the less I liked it.\n> > I think the way that compression selection is handled for indexes,\n> > ie consult default_toast_compression on-the-fly, is *far* saner\n> > than what is currently implemented for tables. So I think we\n> > should redefine attcompression as \"ID of a compression method\n> > to use, or \\0 to select the prevailing default. Ignored if\n> > attstorage does not permit the use of compression\".\n> \n> +1\n> \n> It reminds me of reltablespace, which is stored as 0 to mean the database's\n> default tablespace.\n\nI was surprised to realize that I made this same suggestion last month...\nhttps://www.postgresql.org/message-id/20210320074420.GR11765@telsasoft.com\n|..unless we changed attcompression='\\0' to mean (for varlena) \"the default\n|compression\". Rather than \"resolving\" to the default compression at the time\n|the table is created, columns without an explicit compression set would \"defer\"\n|to the GUC (of course, that only affects newly-inserted data).\n\nThe original reason for that suggestion Michael handled differently in\n63db0ac3f9e6bae313da67f640c95c0045b7f0ee\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 27 May 2021 17:10:19 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. My belief here is that users might bother to change\n> default_toast_compression, or that we might do it for them in a few\n> years, but the gains from doing so are going to be only incremental.\n> That being the case, most DBAs will be content to allow the older\n> compression method to age out of their databases through routine row\n> updates. The idea that somebody is going to be excited enough about\n> this to run a downtime-inducing VACUUM FULL doesn't really pass the\n> smell test.\n\nThat was my original understanding of your position, FWIW. I agree\nwith all of this.\n\n> That doesn't make LZ4 compression useless, by any means, but it does\n> suggest that we shouldn't be adding overhead to VACUUM FULL for the\n> purpose of easing instantaneous switchovers.\n\nRight. More generally, there often seems to be value in\nunder-specifying what a compression option does. Or in treating them\nas advisory.\n\nYou mentioned the history of SET STORAGE, which seems very relevant. I\nam reminded of the example of B-Tree deduplication with unique\nindexes, where we selectively apply the optimization based on\npage-level details. Deduplication isn't usually useful in unique\nindexes (for the obvious reason), though occasionally it is extremely\nuseful. I think that there might be a variety of things that work a\nlittle like that. It can help with avoiding unnecessary dump and\nreload hazards, too.\n\nI am interested in hearing the *principle* behind Robert's position.\nThis whole area seems like something that might have at least a couple\nof different schools of thought. If it is then I'd sincerely like to\nhear the other side of the argument.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 27 May 2021 15:52:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 04:17:58PM -0400, Tom Lane wrote:\n> BTW, perhaps I should clarify my goal here: it's to cut off expending\n> further effort on this feature during v14.\n\nNo disagreement here.\n\n> If we can decide that the\n> existing performance situation is acceptable, I'm content with that\n> decision. But if we're to start designing new user-visible behavior to\n> satisfy performance objections, then I'd prefer to remove this VACUUM\n> behavior altogether for now.\n\nAfter putting a PGDATA on a tmpfs, I have looked at the run time of\nVACUUM FULL with tables full of text columns, with that:\nCREATE OR REPLACE FUNCTION create_cols(tabname text, num_cols int)\nRETURNS VOID AS\n$func$\nDECLARE\n query text;\nBEGIN\n query := 'CREATE TABLE ' || tabname || ' (';\n FOR i IN 1..num_cols LOOP\n query := query || 'a_' || i::text || ' text NOT NULL DEFAULT ' || i::text;\n IF i != num_cols THEN\n query := query || ', ';\n END IF;\n END LOOP;\n query := query || ')';\n EXECUTE format(query);\n query := 'INSERT INTO ' || tabname || ' SELECT FROM generate_series(1,1000000)';\n EXECUTE format(query);\nEND\n$func$ LANGUAGE plpgsql;\n\nAfter 12 runs of VACUUM FULL on my laptop, I have removed the two\nhighest and the two lowest to remove some noise, and did an average of\nthe rest:\n- HEAD, 100 text columns: 5720ms\n- REL_13_STABLE, 100 text columns: 4308ms\n- HEAD, 200 text columns: 10020ms\n- REL_13_STABLE, 200 text columns: 8319ms\n\nSo yes, that looks much visible to me, and an argument in favor of the\nremoval of the forced recompression on HEAD when rewriting tuples.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 11:32:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 03:52:06PM -0700, Peter Geoghegan wrote:\n> On Thu, May 27, 2021 at 1:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah. My belief here is that users might bother to change\n>> default_toast_compression, or that we might do it for them in a few\n>> years, but the gains from doing so are going to be only incremental.\n>> That being the case, most DBAs will be content to allow the older\n>> compression method to age out of their databases through routine row\n>> updates. The idea that somebody is going to be excited enough about\n>> this to run a downtime-inducing VACUUM FULL doesn't really pass the\n>> smell test.\n> \n> That was my original understanding of your position, FWIW. I agree\n> with all of this.\n\nIf one wishes to enforce a compression method on a table, the only way\nI could see through here, able to bypass the downtime constraint, is \nby using logical replication. Anybody willing to enforce a new\ndefault compression may accept the cost of setting up instances for\nthat.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 12:25:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On 2021-Jun-02, Michael Paquier wrote:\n\n> After 12 runs of VACUUM FULL on my laptop, I have removed the two\n> highest and the two lowest to remove some noise, and did an average of\n> the rest:\n> - HEAD, 100 text columns: 5720ms\n> - REL_13_STABLE, 100 text columns: 4308ms\n> - HEAD, 200 text columns: 10020ms\n> - REL_13_STABLE, 200 text columns: 8319ms\n> \n> So yes, that looks much visible to me, and an argument in favor of the\n> removal of the forced recompression on HEAD when rewriting tuples.\n\nJust to be clear -- that's the time to vacuum the table without changing\nthe compression algorithm, right? So the overhead is just the check for\nwhether the recompression is needed, not the recompression itself?\n\nIf the check for recompression is this expensive, then yeah I agree that\nwe should take it out. If recompression is actually occurring, then I\ndon't think this is a good test :-)\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Thu, 3 Jun 2021 12:04:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 12:04:48PM -0400, Alvaro Herrera wrote:\n> If the check for recompression is this expensive, then yeah I agree that\n> we should take it out. If recompression is actually occurring, then I\n> don't think this is a good test :-)\n\nI have done no recompression here, so I was just stressing the extra\ntest for the recompression. Sorry for the confusion.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 08:54:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 08:54:48AM +0900, Michael Paquier wrote:\n> I have done no recompression here, so I was just stressing the extra\n> test for the recompression. Sorry for the confusion.\n\nI am not sure yet which way we are going, but cleaning up this code\ninvolves a couple of things:\n- Clean up the docs.\n- Update one of the tests of compression.sql, with its alternate\noutput.\n- Clean up of reform_and_rewrite_tuple() where the rewrite is done.\n\nSo that would give the attached.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 14:24:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "So I tried running vacuum full in pgbench of your 10-column table,\nmax_wal_size=32GB. I didn't move pgdata to an in-memory pgdata, but\nthis is on NVMe so pretty fast anyway.\n\npgbench -c1 -t30 -n -f vacuumfull.sql.\n\nCurrent master:\nlatency average = 2885.550 ms\nlatency stddev = 1771.170 ms\ntps = 0.346554 (without initial connection time)\n\nWith the recompression code ifdef'ed out (pretty much like in your\npatch):\nlatency average = 2481.336 ms\nlatency stddev = 1011.738 ms\ntps = 0.403008 (without initial connection time)\n\nWith toast_get_compression_id as a static inline, like in the attach\npatch:\nlatency average = 2520.982 ms\nlatency stddev = 1043.042 ms\ntps = 0.396671 (without initial connection time)\n\nIt seems to me that most of the overhead is the function call for\ntoast_get_compression_id(), so we should get rid of that.\n\n\nNow, while this patch does seem to work correctly, it raises a number of\nweird cpluspluscheck warnings, which I think are attributable to the\nnew macro definitions. 
I didn't look into it closely, but I suppose it\nshould be fixable given sufficient effort:\n\nIn file included from /tmp/cpluspluscheck.yuQqS5/test.cpp:2:\n/pgsql/source/master//src/include/access/toast_compression.h: In function ‘ToastCompressionId toast_get_compression_id(varlena*)’:\n/pgsql/source/master//src/include/postgres.h:392:46: warning: comparison of integer expressions of different signedness: ‘uint32’ {aka ‘unsigned int’} and ‘int32’ {aka ‘int’} [-Wsign-compare]\n (VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) < \\\n/pgsql/source/master//src/include/access/toast_compression.h:109:7: note: in expansion of macro ‘VARATT_EXTERNAL_IS_COMPRESSED’\n if (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer))\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/postgres.h:374:30: error: invalid conversion from ‘uint32’ {aka ‘unsigned int’} to ‘ToastCompressionId’ [-fpermissive]\n ((toast_pointer).va_extinfo >> VARLENA_EXTSIZE_BITS)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/access/toast_compression.h:110:11: note: in expansion of macro ‘VARATT_EXTERNAL_GET_COMPRESS_METHOD’\n cmid = VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/postgres.h:368:53: error: invalid conversion from ‘uint32’ {aka ‘unsigned int’} to ‘ToastCompressionId’ [-fpermissive]\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo >> VARLENA_EXTSIZE_BITS)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/access/toast_compression.h:113:10: note: in expansion of macro ‘VARDATA_COMPRESSED_GET_COMPRESS_METHOD’\n cmid = VARDATA_COMPRESSED_GET_COMPRESS_METHOD(attr);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nIn file included from /tmp/cpluspluscheck.yuQqS5/test.cpp:2:\n/pgsql/source/master/src/include/access/toast_compression.h: In function ‘ToastCompressionId 
toast_get_compression_id(varlena*)’:\n/pgsql/source/master//src/include/postgres.h:392:46: warning: comparison of integer expressions of different signedness: ‘uint32’ {aka ‘unsigned int’} and ‘int32’ {aka ‘int’} [-Wsign-compare]\n (VARATT_EXTERNAL_GET_EXTSIZE(toast_pointer) < \\\n/pgsql/source/master/src/include/access/toast_compression.h:109:7: note: in expansion of macro ‘VARATT_EXTERNAL_IS_COMPRESSED’\n if (VARATT_EXTERNAL_IS_COMPRESSED(toast_pointer))\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/postgres.h:374:30: error: invalid conversion from ‘uint32’ {aka ‘unsigned int’} to ‘ToastCompressionId’ [-fpermissive]\n ((toast_pointer).va_extinfo >> VARLENA_EXTSIZE_BITS)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master/src/include/access/toast_compression.h:110:11: note: in expansion of macro ‘VARATT_EXTERNAL_GET_COMPRESS_METHOD’\n cmid = VARATT_EXTERNAL_GET_COMPRESS_METHOD(toast_pointer);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master//src/include/postgres.h:368:53: error: invalid conversion from ‘uint32’ {aka ‘unsigned int’} to ‘ToastCompressionId’ [-fpermissive]\n (((varattrib_4b *) (PTR))->va_compressed.va_tcinfo >> VARLENA_EXTSIZE_BITS)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~\n/pgsql/source/master/src/include/access/toast_compression.h:113:10: note: in expansion of macro ‘VARDATA_COMPRESSED_GET_COMPRESS_METHOD’\n cmid = VARDATA_COMPRESSED_GET_COMPRESS_METHOD(attr);\n ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n-- \nÁlvaro Herrera Valdivia, Chile",
"msg_date": "Fri, 4 Jun 2021 18:42:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> It seems to me that most of the overhead is the function call for\n> toast_get_compression_id(), so we should get rid of that.\n\nNice result. I'm willing to live with 1.5% slowdown ... IME that's\nusually below the noise threshold anyway.\n\n> Now, while this patch does seem to work correctly, it raises a number of\n> weird cpluspluscheck warnings, which I think are attributable to the\n> new macro definitions. I didn't look into it closely, but I suppose it\n> should be fixable given sufficient effort:\n\nDidn't test, but the first one is certainly fixable by adding a cast,\nand I guess the others might be as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Jun 2021 18:51:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On 2021-Jun-04, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n\n> > Now, while this patch does seem to work correctly, it raises a number of\n> > weird cpluspluscheck warnings, which I think are attributable to the\n> > new macro definitions. I didn't look into it closely, but I suppose it\n> > should be fixable given sufficient effort:\n> \n> Didn't test, but the first one is certainly fixable by adding a cast,\n> and I guess the others might be as well.\n\nI get no warnings with this one. I'm a bit wary of leaving\nVARDATA_COMPRESSED_GET_EXTSIZE unchanged, but at least nothing in this\npatch requires a cast there.\n\n-- \nÁlvaro Herrera Valdivia, Chile",
"msg_date": "Fri, 4 Jun 2021 19:21:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 07:21:11PM -0400, Alvaro Herrera wrote:\n> On 2021-Jun-04, Tom Lane wrote:\n> \n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> \n> > > Now, while this patch does seem to work correctly, it raises a number of\n> > > weird cpluspluscheck warnings, which I think are attributable to the\n> > > new macro definitions. I didn't look into it closely, but I suppose it\n> > > should be fixable given sufficient effort:\n> > \n> > Didn't test, but the first one is certainly fixable by adding a cast,\n> > and I guess the others might be as well.\n> \n> I get no warnings with this one. I'm a bit wary of leaving\n> VARDATA_COMPRESSED_GET_EXTSIZE unchanged, but at least nothing in this\n> patch requires a cast there.\n\nI have done the same test as previously, with the following\nconfiguration to be clear:\n- No assertion, non-debug build.\n- No autovacuum.\n- No recompression involved.\n- Data put in a tmpfs.\n- One relation with 200 columns of NOT NULL text with default values,\nusing that:\nhttps://postgr.es/m/YLbt02A+IDnFhwIp@paquier.xyz\n- 1M rows.\n- 15 VACUUM FULL runs, discarding the 3 lowest and the 3 highest run\ntimes to remove most of the noise, then did an average of the\nremaining 9 runs.\n\nThe test has been done with four configurations, and here are the\nresults:\n1) HEAD: 9659ms\n2) REL_13_STABLE: 8310ms.\n3) Alvaro's patch, as of\nhttps://postgr.es/m/202106042321.6jx54yliy2l6@alvherre.pgsql: 9521ms.\n4) My patch applied on HEAD, as of\nhttps://postgr.es/m/YLm5I9MCGz4SnPdX@paquier.xyz: 8304ms.\n\nThis case is a kind of worst-case configuration, but it seems to me\nthat there is still a large difference with that :/\n--\nMichael",
"msg_date": "Sun, 6 Jun 2021 12:07:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On 2021-Jun-06, Michael Paquier wrote:\n\n> On Fri, Jun 04, 2021 at 07:21:11PM -0400, Alvaro Herrera wrote:\n\n> The test has been done with four configurations, and here are the\n> results:\n> 1) HEAD: 9659ms\n> 2) REL_13_STABLE: 8310ms.\n> 3) Alvaro's patch, as of\n> https://postgr.es/m/202106042321.6jx54yliy2l6@alvherre.pgsql: 9521ms.\n> 4) My patch applied on HEAD, as of\n> https://postgr.es/m/YLm5I9MCGz4SnPdX@paquier.xyz: 8304ms.\n\nHmm, ok. Trying to figure out what is happening would require more time\nthan I can devote to this at present.\n\nMy unverified guess is that this code causes too many pipeline stalls\nwhile executing the big per-column loop. Maybe it would be better to\nscan the attribute array twice: one to collect all data from\nForm_pg_attribute for each column into nicely packed arrays, then in a\nsecond loop process all the recompressions together ... the idea being\nthat the first loop can run without stalling.\n\nMaybe at this point reverting is the only solution.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Oh, great altar of passive entertainment, bestow upon me thy discordant images\nat such speed as to render linear thought impossible\" (Calvin a la TV)\n\n\n",
"msg_date": "Tue, 8 Jun 2021 10:39:24 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Tue, Jun 08, 2021 at 10:39:24AM -0400, Alvaro Herrera wrote:\n> My unverified guess is that this code causes too many pipeline stalls\n> while executing the big per-column loop. Maybe it would be better to\n> scan the attribute array twice: one to collect all data from\n> Form_pg_attribute for each column into nicely packed arrays, then in a\n> second loop process all the recompressions together ... the idea being\n> that the first loop can run without stalling.\n\nYou mean for attlen and attcompression, right? I agree that it would\nhelp.\n\nAn extra set of things worth it here would be to move the allocation\nand memset(0) of values_free from reform_and_rewrite_tuple(), and do\nthe round of pfree() calls if at least one value has been detoasted.\n\n> Maybe at this point reverting is the only solution.\n\nThat's a safe bet at this point. It would be good to conclude this\none by beta2 IMO.\n--\nMichael",
"msg_date": "Wed, 9 Jun 2021 12:24:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Jun 08, 2021 at 10:39:24AM -0400, Alvaro Herrera wrote:\n>> Maybe at this point reverting is the only solution.\n\n> That's a safe bet at this point. It would be good to conclude this\n> one by beta2 IMO.\n\nI still think it's a really dubious argument that anybody would want to\nincur a VACUUM FULL to force conversion to a different compression method.\n\nI can imagine sometime in the future where we need to get rid of all\ninstances of pglz so we can reassign that compression code to something\nelse. But would we insist on a mass VACUUM FULL to make that happen?\nDoubt it. You'd want a tool that would make that happen over time,\nin the background; like the mechanisms that have been discussed for\nenabling checksums on-the-fly.\n\nIn the meantime I'm +1 for dropping this logic from VACUUM FULL.\nI don't even want to spend enough more time on it to confirm the\ndifferent overhead measurements that have been reported.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Jun 2021 23:32:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for reduced\n size?"
},
{
"msg_contents": "On Tue, Jun 08, 2021 at 11:32:21PM -0400, Tom Lane wrote:\n> I can imagine sometime in the future where we need to get rid of all\n> instances of pglz so we can reassign that compression code to something\n> else. But would we insist on a mass VACUUM FULL to make that happen?\n> Doubt it. You'd want a tool that would make that happen over time,\n> in the background; like the mechanisms that have been discussed for\n> enabling checksums on-the-fly.\n\nWell, I can imagine that some people could afford being more\naggressive here even if it implies some downtime and if they are not\nwilling to afford the deployment of a second instance for a\ndump/restore or a logirep setup.\n\n(The parallel with data checksums is partially true, as you can do a\nswitch of checksums with physical replication as the page's checksums\nare only written when pushed out of shared buffers, not when they are\nwritten into WAL. This needs a second instance, of course.)\n\n> In the meantime I'm +1 for dropping this logic from VACUUM FULL.\n> I don't even want to spend enough more time on it to confirm the\n> different overhead measurements that have been reported.\n\nAgreed. It looks like we are heading toward doing just that for this\nrelease.\n--\nMichael",
"msg_date": "Thu, 10 Jun 2021 11:09:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 11:09:52AM +0900, Michael Paquier wrote:\n> On Tue, Jun 08, 2021 at 11:32:21PM -0400, Tom Lane wrote:\n>> In the meantime I'm +1 for dropping this logic from VACUUM FULL.\n>> I don't even want to spend enough more time on it to confirm the\n>> different overhead measurements that have been reported.\n> \n> Agreed. It looks like we are heading toward doing just that for this\n> release.\n\nHearing nothing, done this way.\n--\nMichael",
"msg_date": "Mon, 14 Jun 2021 09:27:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Move pg_attribute.attcompression to earlier in struct for\n reduced size?"
}
] |
[
{
"msg_contents": "You might find that ICU 69 (pretty new, see \nhttp://site.icu-project.org/download/69) will cause compile failures \nwith PG 10 (pretty old). ICU 69 has switched to using stdbool.h, which \nconflicts with the home-made definitions that we used until PG10. \nCompile errors look like this:\n\npg_collation.c:47:1: error: conflicting types for 'CollationCreate'\n 47 | CollationCreate(const char *collname, Oid collnamespace,\n | ^~~~~~~~~~~~~~~\nIn file included from pg_collation.c:25:\n../../../src/include/catalog/pg_collation_fn.h:17:12: note: previous \ndeclaration of 'CollationCreate' was here\n 17 | extern Oid CollationCreate(const char *collname, Oid collnamespace,\n | ^~~~~~~~~~~~~~~\npg_collation.c: In function 'CollationCreate':\npg_collation.c:171:41: warning: passing argument 3 of 'heap_form_tuple' \nfrom incompatible pointer type [-Wincompatible-pointer-types]\n 171 | tup = heap_form_tuple(tupDesc, values, nulls);\n | ^~~~~\n | |\n | _Bool *\nIn file included from pg_collation.c:19:\n../../../src/include/access/htup_details.h:802:26: note: expected 'bool \n*' {aka 'char *'} but argument is of type '_Bool *'\n 802 | Datum *values, bool *isnull);\n | ~~~~~~^~~~~~\n\nThe fix is like what we used to use for plperl back then:\n\ndiff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h\nindex f3e04d4d8c..499ada2b69 100644\n--- a/src/include/utils/pg_locale.h\n+++ b/src/include/utils/pg_locale.h\n@@ -17,6 +17,9 @@\n #endif\n #ifdef USE_ICU\n #include <unicode/ucol.h>\n+#ifdef bool\n+#undef bool\n+#endif\n #endif\n\n #include \"utils/guc.h\"\n\nI'll prepare a full patch in a bit.\n\n\n",
"msg_date": "Mon, 17 May 2021 22:56:54 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "ICU bool problem"
},
{
"msg_contents": "On Mon, May 17, 2021 at 10:56:54PM +0200, Peter Eisentraut wrote:\n> The fix is like what we used to use for plperl back then:\n> \n> diff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h\n> index f3e04d4d8c..499ada2b69 100644\n> --- a/src/include/utils/pg_locale.h\n> +++ b/src/include/utils/pg_locale.h\n> @@ -17,6 +17,9 @@\n> #endif\n> #ifdef USE_ICU\n> #include <unicode/ucol.h>\n> +#ifdef bool\n> +#undef bool\n> +#endif\n> #endif\n> \n> #include \"utils/guc.h\"\n> \n> I'll prepare a full patch in a bit.\n\nYes, that seems like a good plan.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Mon, 17 May 2021 17:01:16 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: ICU bool problem"
},
{
"msg_contents": "On 17.05.21 23:01, Bruce Momjian wrote:\n> On Mon, May 17, 2021 at 10:56:54PM +0200, Peter Eisentraut wrote:\n>> The fix is like what we used to use for plperl back then:\n>>\n>> diff --git a/src/include/utils/pg_locale.h b/src/include/utils/pg_locale.h\n>> index f3e04d4d8c..499ada2b69 100644\n>> --- a/src/include/utils/pg_locale.h\n>> +++ b/src/include/utils/pg_locale.h\n>> @@ -17,6 +17,9 @@\n>> #endif\n>> #ifdef USE_ICU\n>> #include <unicode/ucol.h>\n>> +#ifdef bool\n>> +#undef bool\n>> +#endif\n>> #endif\n>>\n>> #include \"utils/guc.h\"\n>>\n>> I'll prepare a full patch in a bit.\n> \n> Yes, that seems like a good plan.\n\nI have committed a fix for this.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 10:59:48 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: ICU bool problem"
}
] |
[
{
"msg_contents": "Dear all\n\nIn MobilityDB we have defined parallel aggregations with a combine\nfunction, e.g.,\n\nCREATE AGGREGATE extent(tbox) (\n SFUNC = tbox_extent_transfn,\n STYPE = tbox,\n COMBINEFUNC = tbox_extent_combinefn,\n PARALLEL = safe\n);\n\nWe would like to trigger the combine functions in the coverage tests but\nfor this it is required that the tables are VERY big. In particular for the\nabove aggregation, the combine function only is triggered when the table\nhas more than 300K rows.\n\nAs it is not very effective to have such a big table in the test database\nused for the regression and the coverage tests I wonder whether it is\npossible to set some parameters to launch the combine functions with tables\nof, e.g., 10K rows, which are the bigger tables in our regression test\ndatabase.\n\nMany thanks for your insights !\n\nEsteban",
"msg_date": "Tue, 18 May 2021 11:04:51 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "How to launch parallel aggregations ?"
},
{
"msg_contents": "On Tue, May 18, 2021 at 2:32 PM Esteban Zimanyi <ezimanyi@ulb.ac.be> wrote:\n>\n> Dear all\n>\n> In MobilityDB we have defined parallel aggregations with a combine function, e.g.,\n>\n> CREATE AGGREGATE extent(tbox) (\n> SFUNC = tbox_extent_transfn,\n> STYPE = tbox,\n> COMBINEFUNC = tbox_extent_combinefn,\n> PARALLEL = safe\n> );\n>\n> We would like to trigger the combine functions in the coverage tests but for this it is required that the tables are VERY big. In particular for the above aggregation, the combine function only is triggered when the table has more than 300K rows.\n>\n> As it is not very effective to have such a big table in the test database used for the regression and the coverage tests I wonder whether it is possible to set some parameters to launch the combine functions with tables of, e.g., 10K rows, which are the bigger tables in our regression test database.\n>\n> Many thanks for your insights !\n\nYou could do something like below, just before your test:\n\n-- encourage use of parallel plans\nset parallel_setup_cost=0;\nset parallel_tuple_cost=0;\nset min_parallel_table_scan_size=0;\nset max_parallel_workers_per_gather=2;\n\nAnd after the test you can reset all of the above parameters.\n\nHope that helps!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 18 May 2021 14:45:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How to launch parallel aggregations ?"
},
{
"msg_contents": "Thanks a lot! It works!\n\nOn Tue, May 18, 2021 at 11:15 AM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Tue, May 18, 2021 at 2:32 PM Esteban Zimanyi <ezimanyi@ulb.ac.be>\n> wrote:\n> >\n> > Dear all\n> >\n> > In MobilityDB we have defined parallel aggregations with a combine\n> function, e.g.,\n> >\n> > CREATE AGGREGATE extent(tbox) (\n> > SFUNC = tbox_extent_transfn,\n> > STYPE = tbox,\n> > COMBINEFUNC = tbox_extent_combinefn,\n> > PARALLEL = safe\n> > );\n> >\n> > We would like to trigger the combine functions in the coverage tests but\n> for this it is required that the tables are VERY big. In particular for the\n> above aggregation, the combine function only is triggered when the table\n> has more than 300K rows.\n> >\n> > As it is not very effective to have such a big table in the test\n> database used for the regression and the coverage tests I wonder whether it\n> is possible to set some parameters to launch the combine functions with\n> tables of, e.g., 10K rows, which are the bigger tables in our regression\n> test database.\n> >\n> > Many thanks for your insights !\n>\n> You could do something like below, just before your test:\n>\n> -- encourage use of parallel plans\n> set parallel_setup_cost=0;\n> set parallel_tuple_cost=0;\n> set min_parallel_table_scan_size=0;\n> set max_parallel_workers_per_gather=2;\n>\n> And after the test you can reset all of the above parameters.\n>\n> Hope that helps!\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n",
"msg_date": "Tue, 18 May 2021 11:24:26 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: How to launch parallel aggregations ?"
}
] |
[
{
"msg_contents": "> To: Pengchengliu <pengchengliu@tju.edu.cn>\r\n> Cc: Greg Nancarrow <gregn4422@gmail.com>; Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\r\n> Subject: Re: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\r\n\r\n> I've also seen the reports of the same Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin)) with a subsequent crash in a parallel worker in PostgreSQL v11-based \r\n> build, Though I was unable to investigate deeper and reproduce the issue. The details above in the thread make me think it is a real and long-time-persistent error that is \r\n> surely worth to be fixed.\r\n\r\nI followed Liu's reproduce steps and successfully reproduced it in about half an hour of running.\r\nMy compile option is : \" ./configure --enable-cassert --prefix=/home/pgsql\".\r\n\r\nAfter applying greg-san's change, the coredump did not happen in two hours (it is still running).\r\nNote, I have not taken a deep look into the change, just provide some test information in advance.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Tue, 18 May 2021 11:41:02 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump"
},
{
"msg_contents": "On Tue, May 18, 2021 at 9:41 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> > To: Pengchengliu <pengchengliu@tju.edu.cn>\n> > Cc: Greg Nancarrow <gregn4422@gmail.com>; Andres Freund <andres@anarazel.de>; PostgreSQL-development <pgsql-hackers@postgresql.org>\n> > Subject: Re: Re: Parallel scan with SubTransGetTopmostTransaction assert coredump\n>\n> > I've also seen the reports of the same Assert(TransactionIdFollowsOrEquals(xid, TransactionXmin)) with a subsequent crash in a parallel worker in PostgreSQL v11-based\n> > build, Though I was unable to investigate deeper and reproduce the issue. The details above in the thread make me think it is a real and long-time-persistent error that is\n> > surely worth to be fixed.\n>\n> I followed Liu's reproduce steps and successfully reproduce it in about half an hour running.\n> My compile option is : \" ./configure --enable-cassert --prefix=/home/pgsql\".\n>\n> After applying greg-san's change, the coredump did not happened in two hour(it is still running).\n> Note, I have not taken a deep look into the change, just provide some test information in advance.\n>\n\n+1\nThanks for doing that.\nI'm unsure if that \"fix\" is the right approach, so please investigate it too.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 18 May 2021 22:29:39 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Parallel scan with SubTransGetTopmostTransaction assert coredump"
}
] |
[
{
"msg_contents": "I wanted to dump all heap WAL records with pg_waldump, so I did this:\n\n> $ pg_waldump --rmgr=heap --rmgr=heap2 data/pg_wal/000000010000000000000001 --stat=record\n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> Heap2/PRUNE 268 ( 8.74) 18192 ( 2.73) 0 ( 0.00) 18192 ( 1.74)\n> Heap2/VACUUM 55 ( 1.79) 4940 ( 0.74) 0 ( 0.00) 4940 ( 0.47)\n> Heap2/FREEZE_PAGE 277 ( 9.03) 186868 ( 28.03) 0 ( 0.00) 186868 ( 17.86)\n> Heap2/VISIBLE 467 ( 15.23) 27783 ( 4.17) 376832 ( 99.34) 404615 ( 38.68)\n> Heap2/MULTI_INSERT 1944 ( 63.38) 354800 ( 53.21) 2520 ( 0.66) 357320 ( 34.16)\n> Heap2/MULTI_INSERT+INIT 56 ( 1.83) 74152 ( 11.12) 0 ( 0.00) 74152 ( 7.09)\n> -------- -------- -------- --------\n> Total 3067 666735 [63.74%] 379352 [36.26%] 1046087 [100%]\n> pg_waldump: fatal: error in WAL record at 0/1680118: invalid record length at 0/1680150: wanted 24, got 0\n\nThat didn't do what I wanted. It only printed the Heap2 records, not \nHeap, even though I specified both. The reason is that if you specify \nmultiple --rmgr options, only the last one takes effect.\n\nI propose the attached to allow selecting multiple rmgrs, by giving \nmultiple --rmgr options. 
With that, it works the way I expected:\n\n> $ pg_waldump --rmgr=heap --rmgr=heap2 data/pg_wal/000000010000000000000001 --stat=record\n> Type N (%) Record size (%) FPI size (%) Combined size (%)\n> ---- - --- ----------- --- -------- --- ------------- ---\n> Heap2/PRUNE 268 ( 1.77) 18192 ( 0.71) 0 ( 0.00) 18192 ( 0.55)\n> Heap2/VACUUM 55 ( 0.36) 4940 ( 0.19) 0 ( 0.00) 4940 ( 0.15)\n> Heap2/FREEZE_PAGE 277 ( 1.83) 186868 ( 7.33) 0 ( 0.00) 186868 ( 5.67)\n> Heap2/VISIBLE 467 ( 3.09) 27783 ( 1.09) 376832 ( 50.37) 404615 ( 12.27)\n> Heap2/MULTI_INSERT 1944 ( 12.86) 354800 ( 13.91) 2520 ( 0.34) 357320 ( 10.83)\n> Heap2/MULTI_INSERT+INIT 56 ( 0.37) 74152 ( 2.91) 0 ( 0.00) 74152 ( 2.25)\n> Heap/INSERT 9948 ( 65.80) 1433891 ( 56.22) 8612 ( 1.15) 1442503 ( 43.73)\n> Heap/DELETE 942 ( 6.23) 50868 ( 1.99) 0 ( 0.00) 50868 ( 1.54)\n> Heap/UPDATE 193 ( 1.28) 101265 ( 3.97) 9556 ( 1.28) 110821 ( 3.36)\n> Heap/HOT_UPDATE 349 ( 2.31) 36910 ( 1.45) 1300 ( 0.17) 38210 ( 1.16)\n> Heap/LOCK 209 ( 1.38) 11481 ( 0.45) 316828 ( 42.35) 328309 ( 9.95)\n> Heap/INPLACE 212 ( 1.40) 44279 ( 1.74) 32444 ( 4.34) 76723 ( 2.33)\n> Heap/INSERT+INIT 184 ( 1.22) 188803 ( 7.40) 0 ( 0.00) 188803 ( 5.72)\n> Heap/UPDATE+INIT 15 ( 0.10) 16273 ( 0.64) 0 ( 0.00) 16273 ( 0.49)\n> -------- -------- -------- --------\n> Total 15119 2550505 [77.32%] 748092 [22.68%] 3298597 [100%]\n> pg_waldump: fatal: error in WAL record at 0/1680150: invalid record length at 0/16801C8: wanted 24, got 0\n\n- Heikki",
"msg_date": "Tue, 18 May 2021 16:50:31 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "On Tue, May 18, 2021 at 04:50:31PM +0300, Heikki Linnakangas wrote:\n> I wanted to dump all heap WAL records with pg_waldump, so I did this:\n> \n> > $ pg_waldump --rmgr=heap --rmgr=heap2 data/pg_wal/000000010000000000000001 --stat=record\n> > Type N (%) Record size (%) FPI size (%) Combined size (%)\n> > ---- - --- ----------- --- -------- --- ------------- ---\n> > Heap2/PRUNE 268 ( 8.74) 18192 ( 2.73) 0 ( 0.00) 18192 ( 1.74)\n> > Heap2/VACUUM 55 ( 1.79) 4940 ( 0.74) 0 ( 0.00) 4940 ( 0.47)\n> > Heap2/FREEZE_PAGE 277 ( 9.03) 186868 ( 28.03) 0 ( 0.00) 186868 ( 17.86)\n> > Heap2/VISIBLE 467 ( 15.23) 27783 ( 4.17) 376832 ( 99.34) 404615 ( 38.68)\n> > Heap2/MULTI_INSERT 1944 ( 63.38) 354800 ( 53.21) 2520 ( 0.66) 357320 ( 34.16)\n> > Heap2/MULTI_INSERT+INIT 56 ( 1.83) 74152 ( 11.12) 0 ( 0.00) 74152 ( 7.09)\n> > -------- -------- -------- --------\n> > Total 3067 666735 [63.74%] 379352 [36.26%] 1046087 [100%]\n> > pg_waldump: fatal: error in WAL record at 0/1680118: invalid record length at 0/1680150: wanted 24, got 0\n> \n> That didn't do what I wanted. It only printed the Heap2 records, not Heap,\n> even though I specified both. The reason is that if you specify multiple\n> --rmgr options, only the last one takes effect.\n> \n> I propose the attached to allow selecting multiple rmgrs, by giving multiple\n> --rmgr options. With that, it works the way I expected:\n\nThe change and the patch look sensible to me.\n\n\n",
"msg_date": "Tue, 18 May 2021 23:23:02 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "At Tue, 18 May 2021 23:23:02 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Tue, May 18, 2021 at 04:50:31PM +0300, Heikki Linnakangas wrote:\n> > That didn't do what I wanted. It only printed the Heap2 records, not Heap,\n> > even though I specified both. The reason is that if you specify multiple\n> > --rmgr options, only the last one takes effect.\n> > \n> > I propose the attached to allow selecting multiple rmgrs, by giving multiple\n> > --rmgr options. With that, it works the way I expected:\n> \n> The change and the patch look sensible to me.\n\n+1.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 19 May 2021 11:50:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 11:50:52AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 18 May 2021 23:23:02 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n>> The change and the patch look sensible to me.\n> \n> +1.\n\nAgreed.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 12:26:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "> On 18 May 2021, at 15:50, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> The reason is that if you specify multiple --rmgr options, only the last one takes effect.\n\nThat's in line with how options are handled for most binaries, so this will go\nagainst that. That being said, I don't think that's a problem here really given\nwhat this tool is and its intended use case.\n\nThis patch makes the special case \"--rmgr=list\" a bit more awkward than before\nIMO, as it breaks the list processing, but nothing we can't live with.\n\n> I propose the attached to allow selecting multiple rmgrs\n\nI agree with the other +1's in this thread, and am marking this as ready for\ncommitter.\n\nAs a tiny nitpick for readability, I would move this line inside the string\ncomparison case where the rmgr is selected. Not that it makes any difference\nin practice, but since that's where the filtering is set it seems a hair\ntidier.\n+ config.filter_by_rmgr_enabled = true;\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 28 Jun 2021 12:34:32 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "On 28/06/2021 13:34, Daniel Gustafsson wrote:\n>> On 18 May 2021, at 15:50, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> \n>> The reason is that if you specify multiple --rmgr options, only the last one takes effect.\n> \n> That's in line with how options are handled for most binaries, so this will go\n> against that. That being said, I don't think thats a problem here really given\n> what this tool is and it's intended usecase.\n\nThere is some precedent for this with the pg_dump --table option, for \nexample.\n\nIn general, I think it's weird that the latest option wins. If you \nspecify the same option multiple times, and it's not something like \n--rmgr or --table where it makes sense, it's most likely user error. \nPrinting an error would be nicer than ignoring all but the last \ninstance. But I'm not going to try changing that now.\n\n>> I propose the attached to allow selecting multiple rmgrs\n> \n> I agree with the other +1's in this thread, and am marking this as ready for\n> committer.\n> \n> As a tiny nitpick for readability, I would move this line inside the string\n> comparison case where the rmgr is selected. Not that it makes any difference\n> in practice, but since that's where the filtering is set it seems a hair\n> tidier.\n> + config.filter_by_rmgr_enabled = true;\n\nOk, changed it that way.\n\nI tried to be defensive against WAL records with bogus xl_rmid values here:\n\n> @@ -1098,8 +1100,9 @@ main(int argc, char **argv)\n> }\n> \n> /* apply all specified filters */\n> - if (config.filter_by_rmgr != -1 &&\n> - config.filter_by_rmgr != record->xl_rmid)\n> + if (config.filter_by_rmgr_enabled &&\n> + (record->xl_rmid < 0 || record->xl_rmid > RM_MAX_ID ||\n> + !config.filter_by_rmgr[record->xl_rmid]))\n> continue;\n\nBut looking closer, that's pointless. We use record->xl_rmid directly as \narray index elsewhere, and that's OK because ValidXLogRecordHeader() \nchecks that xl_rmid <= RM_MAX_ID. 
And the 'xl_rmid < 0' check is \nunnecessary because the field is unsigned. So I'll remove those, and \ncommit this tomorrow.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 30 Jun 2021 23:39:24 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": true,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
},
{
"msg_contents": "> On 30 Jun 2021, at 22:39, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> In general, I think it's weird that the latest option wins. If you specify the same option multiple times, and it's not something like --rmgr or --table where it makes sense, it's most likely user error. Printing an error would be nicer than ignoring all but the last instance. But I'm not going to try changing that now.\n\nAFAIK, the traditional \"defense\" for it is when building a commandline with\nscripts which loop over input, to avoid the need for any data structure holding\nthe options for deduplication. No idea how common that is these days, but I've\nseen it in production in the past for sure.\n\n> I tried to be defensive against WAL records with bogus xl_rmid values here:\n> \n>> @@ -1098,8 +1100,9 @@ main(int argc, char **argv)\n>> }\n>> /* apply all specified filters */\n>> - if (config.filter_by_rmgr != -1 &&\n>> - config.filter_by_rmgr != record->xl_rmid)\n>> + if (config.filter_by_rmgr_enabled &&\n>> + (record->xl_rmid < 0 || record->xl_rmid > RM_MAX_ID ||\n>> + !config.filter_by_rmgr[record->xl_rmid]))\n>> continue;\n> \n> But looking closer, that's pointless. We use record->xl_rmid directly as array index elsewhere, and that's OK because ValidXLogRecordHeader() checks that xl_rmid <= RM_MAX_ID. And the 'xl_rmid < 0' check is unnecessary because the field is unsigned. So I'll remove those, and commit this tomorrow.\n\n+1\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 30 Jun 2021 23:14:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Multiple pg_waldump --rmgr options"
}
] |
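The filter committed in the thread above replaces a single last-one-wins `filter_by_rmgr` value with a per-rmgr boolean array. That accumulate-instead-of-overwrite pattern can be mocked up stand-alone; the sketch below is plain shell, none of it is pg_waldump's actual C code, and the rmgr names and command line are placeholders:

```shell
# Stand-in command line; in pg_waldump these would be real program args.
set -- --rmgr=Heap --rmgr=XLOG

# Accumulate every --rmgr value into a filter set, rather than letting
# the last option overwrite the earlier ones.
filter_enabled=false
filter_list=""
for arg in "$@"; do
    case $arg in
        --rmgr=*)
            filter_enabled=true
            filter_list="$filter_list ${arg#--rmgr=}"
            ;;
    esac
done

# Succeeds when filtering is off, or when the record's rmgr name was
# selected by any of the --rmgr options.
record_matches() {
    $filter_enabled || return 0
    for f in $filter_list; do
        [ "$f" = "$1" ] && return 0
    done
    return 1
}

for rec in Heap Btree XLOG; do
    record_matches "$rec" && echo "$rec shown"
done
```

With both options present, `Heap` and `XLOG` records pass the filter while `Btree` is skipped, which is the multi-option behavior the patch adds.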
[
{
"msg_contents": "I recently tried to upgrade a standby following the documentation,\nbut I found it hard to understand, and it took me several tries to\nget it right. This is of course owing to my lack of expertise with\nrsync, but I think the documentation and examples could be clearer.\n\nI think it would be a good idea to recommend the --relative option\nof rsync.\n\nHere is a patch that does that, as well as updates the versions in\nthe code samples to something more recent. Also, I think it makes\nsense to place the data directory in the sample in /var/lib/postgresql,\nwhich is similar to what many people will have in real life.\n\nYours,\nLaurenz Albe",
"msg_date": "Tue, 18 May 2021 19:49:45 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Improve documentation for pg_upgrade, standbys and rsync"
},
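The patch itself is not quoted in this thread, so the following is only an illustrative sketch of the `--relative` idea; the host name, paths, and directory layout are invented, and the flag set is the one pg_upgrade's existing documentation uses for rsync:

```shell
# Illustrative only: standby.example.com and the paths are placeholders.
# With --relative, the path components named on the command line are
# recreated under the destination, so both clusters go across in one
# pass from a common parent directory, and rsync still sees the hard
# links that pg_upgrade --link created between them.
cd /var/lib/postgresql
rsync --archive --delete --hard-links --size-only --no-inc-recursive \
      --relative data.old data.new standby.example.com:/var/lib/postgresql
```

The point of running from the common parent with relative source paths is that "current directory" and "remote_dir" in the documentation then mean something concrete, which is the clarity Laurenz is after.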
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> I revently tried to upgrade a standby following the documentation,\n> but I found it hard to understand, and it took me several tries to\n> get it right. This is of course owing to my lack of expertise with\n> rsync, but I think the documentation and examples could be clearer.\n> \n> I think it would be a good idea to recommend the --relative option\n> of rsync.\n> \n> Here is a patch that does that, as well as update the versions in\n> the code samples to something more recent. Also, I think it makes\n> sense to place the data directory in the sample in /var/lib/postgresql,\n> which is similar to what many people will have in real life.\n\nHaven't had a chance to look at this in depth but improving things here\nwould be good.\n\nAn additional thing that we should really be mentioning is to tell\npeople to go in and TRUNCATE all of their UNLOGGED tables before going\nthrough this process, otherwise the rsync will end up spending a bunch\nof time copying the files for UNLOGGED relations which you really don't\nwant.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 19 May 2021 10:31:35 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Wed, 2021-05-19 at 10:31 -0400, Stephen Frost wrote:\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > I revently tried to upgrade a standby following the documentation,\n> > but I found it hard to understand, [...]\n>\n> Haven't had a chance to look at this in depth but improving things here\n> would be good.\n> \n> An additional thing that we should really be mentioning is to tell\n> people to go in and TRUNCATE all of their UNLOGGED tables before going\n> through this process, otherwise the rsync will end up spending a bunch\n> of time copying the files for UNLOGGED relations which you really don't\n> want.\n\nThanks for the feedback and the suggestion.\nCCing -hackers so that I can add it to the commitfest.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 19 May 2021 18:53:49 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Wed, 2021-05-19 at 10:31 -0400, Stephen Frost wrote:\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > I revently tried to upgrade a standby following the documentation,\n> > but I found it hard to understand, and it took me several tries to\n> > get it right. This is of course owing to my lack of expertise with\n> > rsync, but I think the documentation and examples could be clearer.\n> > \n> > I think it would be a good idea to recommend the --relative option\n> > of rsync.\n> \n> An additional thing that we should really be mentioning is to tell\n> people to go in and TRUNCATE all of their UNLOGGED tables before going\n> through this process, otherwise the rsync will end up spending a bunch\n> of time copying the files for UNLOGGED relations which you really don't\n> want.\n\nI have thought about that some more, and I am not certain that we should\nunconditionally recommend that. Perhaps the pain of rebuilding the\nunlogged table on the primary would be worse than rsyncing it to the\nstandby.\n\nThe documentation already mentions\n\n \"Unfortunately, rsync needlessly copies files associated with temporary\n and unlogged tables because these files don't normally exist on standby\n servers.\"\n\nI'd say that is good enough, and people can draw their conclusions from\nthat.\n\nAttached is a new patch with an added reminder to create \"standby.signal\",\nas mentioned in [1].\n\nYours,\nLaurenz Albe\n\n [1]: https://www.postgr.es/m/1A5A1B6E-7BB6-47EB-8443-40222B769404@iris.washington.edu",
"msg_date": "Fri, 16 Jul 2021 07:46:31 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> On Wed, 2021-05-19 at 10:31 -0400, Stephen Frost wrote:\n> > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > > I revently tried to upgrade a standby following the documentation,\n> > > but I found it hard to understand, and it took me several tries to\n> > > get it right. This is of course owing to my lack of expertise with\n> > > rsync, but I think the documentation and examples could be clearer.\n> > > \n> > > I think it would be a good idea to recommend the --relative option\n> > > of rsync.\n> > \n> > An additional thing that we should really be mentioning is to tell\n> > people to go in and TRUNCATE all of their UNLOGGED tables before going\n> > through this process, otherwise the rsync will end up spending a bunch\n> > of time copying the files for UNLOGGED relations which you really don't\n> > want.\n> \n> I have thought about that some more, and I am not certain that we should\n> unconditionally recommend that. Perhaps the pain of rebuilding the\n> unlogged table on the primary would be worse than rsyncing it to the\n> standby.\n\nI disagree entirely. The reason to even consider using this approach is\nto minimize the time required to get things back online and there's no\nquestion that having the unlogged tables get rsync'd across would\nincrease the time required.\n\n> The documentation already mentions\n> \n> \"Unfortunately, rsync needlessly copies files associated with temporary\n> and unlogged tables because these files don't normally exist on standby\n> servers.\"\n> \n> I'd say that is good enough, and people can draw their conclusions from\n> that.\n\nI disagree. 
Instead, we should have explicit steps included which\ndetail how to find and truncate unlogged tables and what to do to remove\nor exclude temporary files once the server is shut down.\n\n> Attached is a new patch with an added reminder to create \"standby.signal\",\n> as mentioned in [1].\n> \n> Yours,\n> Laurenz Albe\n> \n> [1]: https://www.postgr.es/m/1A5A1B6E-7BB6-47EB-8443-40222B769404@iris.washington.edu\n\n> From 47b685b700548af06ab08673187bdd1df7236464 Mon Sep 17 00:00:00 2001\n> From: Laurenz Albe <laurenz.albe@cybertec.at>\n> Date: Fri, 16 Jul 2021 07:45:22 +0200\n> Subject: [PATCH] Improve doc for pg_upgrade and standby servers\n> \n> Recommend using the --relative option of rsync for clarity\n> and adapt the code samples accordingly.\n> Using relative paths makes clearer what is meant by \"current\n> directory\" and \"remote_dir\".\n\nI'm not really convinced that this is actually a positive change, though\nI don't know that it's really a negative one either. In general, I\nprefer fully qualified paths to try and make things very clear about\nwhat's happening, but this is also a bit of an odd case due to hard\nlinks, etc.\n\n> Add a reminder that \"standby.signal\" needs to be created.\n\nThis makes sense to include, certainly, but it should be an explicit\nstep, not just a \"don't forget\" note at the end. I'm not really sure\nwhy we talk about \"log shipping\" either..? Wouldn't it make more sense\nto have something like:\n\ng. Configure standby servers\n\nReview the prior configuration of the standby servers and set up the\nsame configuration in the newly rsync'd directory.\n\n1. touch /path/to/replica/standby.signal\n2. Configure restore_command to pull from WAL archive\n3. For streaming replicas, configure primary_conninfo\n\nProbably back-patched all the way, with adjustments made for the pre-12\nreleases accordingly.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 16 Jul 2021 09:17:44 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "Thanks for looking at this!\n\nOn Fri, 2021-07-16 at 09:17 -0400, Stephen Frost wrote:\n> > > An additional thing that we should really be mentioning is to tell\n> > > people to go in and TRUNCATE all of their UNLOGGED tables before going\n> > > through this process, otherwise the rsync will end up spending a bunch\n> > > of time copying the files for UNLOGGED relations which you really don't\n> > > want.\n> > \n> > I have thought about that some more, and I am not certain that we should\n> > unconditionally recommend that. Perhaps the pain of rebuilding the\n> > unlogged table on the primary would be worse than rsyncing it to the\n> > standby.\n> \n> I disagree entirely. The reason to even consider using this approach is\n> to minimize the time required to get things back online and there's no\n> question that having the unlogged tables get rsync'd across would\n> increase the time required.\n\nI am not totally convinced that minimal down time is always more important\nthan keeping your unlogged tables, but I have adapted the patch accordingly.\n\n> > The documentation already mentions\n> > \n> > \"Unfortunately, rsync needlessly copies files associated with temporary\n> > and unlogged tables because these files don't normally exist on standby\n> > servers.\"\n> > \n> > I'd say that is good enough, and people can draw their conclusions from\n> > that.\n> \n> I disagree. Instead, we should have explicit steps included which\n> detail how to find and truncate unlogged tables and what to do to remove\n> or exclude temporary files once the server is shut down.\n\nOk, done.\n\n> > Recommend using the --relative option of rsync for clarity\n> > and adapt the code samples accordingly.\n> > Using relative paths makes clearer what is meant by \"current\n> > directory\" and \"remote_dir\".\n> \n> I'm not really convinced that this is actually a positive change, though\n> I don't know that it's really a negative one either. 
In general, I\n> prefer fully qualified paths to try and make things very clear about\n> what's happening, but this is also a bit of an odd case due to hard\n> links, etc.\n\nI normally prefer absolute paths as well.\nBut that is the only way I got it to run, and I think that in this\ncase it adds clarity to have the data directories relative to your\ncurrent working directory.\n\n> > Add a reminder that \"standby.signal\" needs to be created.\n> \n> This makes sense to include, certainly, but it should be an explicit\n> step, not just a \"don't forget\" note at the end. I'm not really sure\n> why we talk about \"log shipping\" either..? Wouldn't it make more sense\n> to have something like:\n> \n> g. Configure standby servers\n> \n> Review the prior configuration of the standby servers and set up the\n> same configuration in the newly rsync'd directory.\n> \n> 1. touch /path/to/replica/standby.signal\n> 2. Configure restore_command to pull from WAL archive\n> 3. For streaming replicas, configure primary_conninfo\n\nOk, I have modified the final step like this. That is better than\ntalking about log shipping.\n\nPatch V3 attached.\n\nYours,\nLaurenz Albe",
"msg_date": "Thu, 22 Jul 2021 15:36:11 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> Thanks for looking at this!\n\nSure. Thanks for working on it!\n\n> On Fri, 2021-07-16 at 09:17 -0400, Stephen Frost wrote:\n> > > > An additional thing that we should really be mentioning is to tell\n> > > > people to go in and TRUNCATE all of their UNLOGGED tables before going\n> > > > through this process, otherwise the rsync will end up spending a bunch\n> > > > of time copying the files for UNLOGGED relations which you really don't\n> > > > want.\n> > > \n> > > I have thought about that some more, and I am not certain that we should\n> > > unconditionally recommend that. Perhaps the pain of rebuilding the\n> > > unlogged table on the primary would be worse than rsyncing it to the\n> > > standby.\n> > \n> > I disagree entirely. The reason to even consider using this approach is\n> > to minimize the time required to get things back online and there's no\n> > question that having the unlogged tables get rsync'd across would\n> > increase the time required.\n> \n> I am not totally convinced that minimal down time is always more important\n> than keeping your unlogged tables, but I have adapted the patch accordingly.\n\nHaving the unlogged tables end up on replicas seems awkward also because\nthey really shouldn't be there and they'd never end up getting cleaned\nup unless the replica crashed or was rebuilt..\n\n> > > The documentation already mentions\n> > > \n> > > \"Unfortunately, rsync needlessly copies files associated with temporary\n> > > and unlogged tables because these files don't normally exist on standby\n> > > servers.\"\n> > > \n> > > I'd say that is good enough, and people can draw their conclusions from\n> > > that.\n> > \n> > I disagree. 
Instead, we should have explicit steps included which\n> > detail how to find and truncate unlogged tables and what to do to remove\n> > or exclude temporary files once the server is shut down.\n> \n> Ok, done.\n\nGreat, thanks, it's not quite this simple, unfortunately, more below..\n\n> > > Recommend using the --relative option of rsync for clarity\n> > > and adapt the code samples accordingly.\n> > > Using relative paths makes clearer what is meant by \"current\n> > > directory\" and \"remote_dir\".\n> > \n> > I'm not really convinced that this is actually a positive change, though\n> > I don't know that it's really a negative one either. In general, I\n> > prefer fully qualified paths to try and make things very clear about\n> > what's happening, but this is also a bit of an odd case due to hard\n> > links, etc.\n> \n> I normally prefer absolute paths as well.\n> But that is the only way I got it to run, and I think that in this\n> case it adds clarity to have the data directories relative to your\n> current working directory.\n\nI'm pretty curious that you weren't able to get it to run with absolute\npaths..\n\n> > > Add a reminder that \"standby.signal\" needs to be created.\n> > \n> > This makes sense to include, certainly, but it should be an explicit\n> > step, not just a \"don't forget\" note at the end. I'm not really sure\n> > why we talk about \"log shipping\" either..? Wouldn't it make more sense\n> > to have something like:\n> > \n> > g. Configure standby servers\n> > \n> > Review the prior configuration of the standby servers and set up the\n> > same configuration in the newly rsync'd directory.\n> > \n> > 1. touch /path/to/replica/standby.signal\n> > 2. Configure restore_command to pull from WAL archive\n> > 3. For streaming replicas, configure primary_conninfo\n> \n> Ok, I have modified the final step like this. 
That is better than\n> talking about log shipping.\n\nYup, glad you agree on that.\n\n> From 43453dc7379f87ca6638c80c9ec6bf528f8e2e28 Mon Sep 17 00:00:00 2001\n> From: Laurenz Albe <laurenz.albe@cybertec.at>\n> Date: Thu, 22 Jul 2021 15:33:59 +0200\n> Subject: [PATCH] Improve doc for pg_upgrade and standby servers\n> \n> Recommend truncating or removing unlogged and temporary\n> tables to speed up \"rsync\". Since this is best done in\n> the step \"Prepare for standby server upgrades\", move that\n> step to precede \"Stop both servers\".\n> \n> Recommend using the --relative option of rsync for clarity\n> and adapt the code samples accordingly.\n> Using relative paths makes clearer what is meant by \"current\n> directory\" and \"remote_dir\".\n> \n> Rewrite the final substep to not mention \"log shipping\".\n> Rather, provide a list of the necessary configuration steps.\n> ---\n> doc/src/sgml/ref/pgupgrade.sgml | 96 +++++++++++++++++++++------------\n> 1 file changed, 63 insertions(+), 33 deletions(-)\n> \n> diff --git a/doc/src/sgml/ref/pgupgrade.sgml b/doc/src/sgml/ref/pgupgrade.sgml\n> index a83c63cd98..3ccb311ff7 100644\n> --- a/doc/src/sgml/ref/pgupgrade.sgml\n> +++ b/doc/src/sgml/ref/pgupgrade.sgml\n> @@ -324,6 +324,35 @@ make prefix=/usr/local/pgsql.new install\n> </para>\n> </step>\n> \n> + <step id=\"prepare-standby-upgrade\">\n> + <title>Prepare for standby server upgrades</title>\n> +\n> + <para>\n> + If you are upgrading standby servers using methods outlined in section <xref\n> + linkend=\"pgupgrade-step-replicas\"/>, you should consider dropping temporary\n> + tables and truncating unlogged tables on the primary, since that will speed up\n> + <application>rsync</application> and keep the down time short.\n> + You could run the following <application>psql</application> commands\n> + in all databases:\n> +\n> +<programlisting>\n> +SELECT format('DROP TABLE %s', oid::regclass) FROM pg_class WHERE relpersistence = 't' \\gexec\n> +SELECT format('TRUNCATE 
%s', oid::regclass) FROM pg_class WHERE relpersistence = 'u' \\gexec\n> +</programlisting>\n\nTemporary tables aren't actually visible across different backends, nor\nshould they exist once the system has been shut down, but sometimes they\ndo get left around due to a crash, so the above won't actually work and\nisn't the way to deal with those. The same can also happen with\ntemporary files that we create which end up in pgsql_tmp.\n\nWe could possibly exclude pgsql_tmp in the rsync command, but cleaning\nup the temporary table files would involve something more complicated\nlike using 'find' to search for any '^t[0-9]+_[0-9]+.*$' files or\nsomething along those lines.\n\nThough, for that matter we should really be looking through all of the\ndirectories and files that pg_basebackup excludes and considering if\nthey should somehow be excluded. There's no easy way to exclude\neverything that pg_basebackup would with just an rsync because the logic\nis a bit complicated (which is why I was saying we really need a proper\ntool...) but we could probably provide a somewhat better rsync command\nby going through that list and excluding what makes sense to exclude.\nWe could also provide another explicit before-rsync step to review all\nthe temp table files and move them or remove them, depending on how\ncomfortable one is with hacking around in the data directory.\n\nThis, of course, all comes back to the original complaint I had about\ndocumenting this approach, which is that these things should only be\ndone by someone extremely familiar with the PG codebase, until and\nunless we write an actual tool to do this.\n\n> + (There will be a mismatch if old standby servers were shut down\n> + before the old primary or if the old standby servers are still running.)\n\nWould probably be good to note that if the standby's were shut down\nbefore the primary then this method can *not* be used safely... The\nabove leaves it unclear about if the mismatch is an issue or not. 
I get\nthat this was in the original docs, but still would be good to improve\nit.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 26 Jul 2021 15:11:26 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
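Stephen's suggestion above, to hunt down leftover temporary-relation files before the rsync, can be sketched as follows. The directory tree is a mktemp stand-in rather than a real data directory, the OIDs are invented, and the portable `-name` glob only approximates the `^t[0-9]+_[0-9]+.*$` regex he mentions:

```shell
# Mock stand-in for a real data directory; the file names follow the
# tNNN_NNN naming of temporary relations discussed in the thread.
PGDATA=$(mktemp -d)
mkdir -p "$PGDATA/base/16384"
touch "$PGDATA/base/16384/16385"         # ordinary relation: must survive
touch "$PGDATA/base/16384/t3_16390"      # orphaned temp relation
touch "$PGDATA/base/16384/t3_16390_fsm"  # its free-space-map fork

# List candidate leftovers for review before rsyncing; the glob is a
# portable approximation of the regex '^t[0-9]+_[0-9]+.*$'.
find "$PGDATA/base" -type f -name 't[0-9]*_[0-9]*'
```

In a real run one would inspect this list by hand (with the server shut down) before deciding to move or remove anything, which matches Stephen's point that this step needs care rather than automation-by-recipe.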
{
"msg_contents": "On Mon, 2021-07-26 at 15:11 -0400, Stephen Frost wrote:\n> > > > > An additional thing that we should really be mentioning is to tell\n> > > > > people to go in and TRUNCATE all of their UNLOGGED tables before going\n> > > > > through this process, otherwise the rsync will end up spending a bunch\n> > > > > of time copying the files for UNLOGGED relations which you really don't\n> > > > > want.\n> > \n> > Ok, done.\n> \n> Great, thanks, it's not quite this simple, unfortunately, more below..\n>\n> > + <para>\n> > + If you are upgrading standby servers using methods outlined in section <xref\n> > + linkend=\"pgupgrade-step-replicas\"/>, you should consider dropping temporary\n> > + tables and truncating unlogged tables on the primary, since that will speed up\n> > + <application>rsync</application> and keep the down time short.\n> > + You could run the following <application>psql</application> commands\n> > + in all databases:\n> > +\n> > +<programlisting>\n> > +SELECT format('DROP TABLE %s', oid::regclass) FROM pg_class WHERE relpersistence = 't' \\gexec\n> > +SELECT format('TRUNCATE %s', oid::regclass) FROM pg_class WHERE relpersistence = 'u' \\gexec\n> > +</programlisting>\n> \n> Temporary tables aren't actually visible across different backends, nor\n> should they exist once the system has been shut down, but sometimes they\n> do get left around due to a crash, so the above won't actually work and\n> isn't the way to deal with those. 
The same can also happen with\n> temporary files that we create which end up in pgsql_tmp.\n> \n> We could possibly exclude pgsql_tmp in the rsync command, but cleaning\n> up the temporary table files would involve something more complicated\n> like using 'find' to search for any '^t[0-9]+_[0-9]+.*$' files or\n> something along those lines.\n> \n> Though, for that matter we should really be looking through all of the\n> directories and files that pg_basebackup excludes and considering if\n> they should somehow be excluded. There's no easy way to exclude\n> everything that pg_basebackup would with just an rsync because the logic\n> is a bit complicated (which is why I was saying we really need a proper\n> tool...) but we could probably provide a somewhat better rsync command\n> by going through that list and excluding what makes sense to exclude.\n> We could also provide another explicit before-rsync step to review all\n> the temp table files and move them or remove them, depending on how\n> comfortable one is with hacking around in the data directory.\n> \n> This, of course, all comes back to the original complaint I had about\n> documenting this approach, which is that these things should only be\n> done by someone extremely familiar with the PG codebase, until and\n> unless we write an actual tool to do this.\n\nI agree with what you write, but that sounds like you are arguing for\na code patch rather than for documentation to enable the user to do\nthat manually, which is what I believe you said initially.\n\nMy two statements will get rid of temporary tables left behind after\na crash and truncate unlogged tables, which should be an improvement.\n\nOf course it would be good to get rid of orphaned files left behind\nafter a crash, but, as you say, that is not so easy.\n\nI'd say that writing tools to do better than my two SQL statements\nis nice to have, but beyond the scope of this documentation patch.\n\n> > > > Recommend using the --relative option of rsync 
for clarity\n> > > > and adapt the code samples accordingly.\n> > > > Using relative paths makes clearer what is meant by \"current\n> > > > directory\" and \"remote_dir\".\n> > \n> > I normally prefer absolute paths as well.\n> > But that is the only way I got it to run, and I think that in this\n> > case it adds clarity to have the data directories relative to your\n> > current working directory.\n> \n> I'm pretty curious that you weren't able to get it to run with absolute\n> paths..\n\nI tried a couple of times with a test cluster and failed.\n\nPart of the confusion for me is that you are supposed to run the\nrsync from a certain directory, which seems weird if paths are absolute.\nRun from *any* directory above the old and the new cluster?\n\n\"Relative to my current directory\" makes more sense to me here.\n\n> > + (There will be a mismatch if old standby servers were shut down\n> > + before the old primary or if the old standby servers are still running.)\n> \n> Would probably be good to note that if the standby's were shut down\n> before the primary then this method can *not* be used safely... The\n> above leaves it unclear about if the mismatch is an issue or not. I get\n> that this was in the original docs, but still would be good to improve\n> it.\n\nAgreed.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 18 Aug 2021 14:24:13 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 3:11 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > Thanks for looking at this!\n>\n> Sure. Thanks for working on it!\n\nStephen, do you intend to do something about this patch in terms of\ngetting it committed? You're the only reviewer but haven't responded\nto the thread for more than 5 months.\n\nI don't feel that I know this area of the documentation well enough to\nfeel comfortable passing judgement on whether this change is an\nimprovement or not. However I do feel somewhat uncomfortable with\nthis:\n\n- <step>\n- <title>Prepare for standby server upgrades</title>\n-\n- <para>\n- If you are upgrading standby servers using methods outlined in\nsection <xref\n- linkend=\"pgupgrade-step-replicas\"/>, verify that the old standby\n- servers are caught up by running <application>pg_controldata</application>\n- against the old primary and standby clusters. Verify that the\n- <quote>Latest checkpoint location</quote> values match in all clusters.\n- (There will be a mismatch if old standby servers were shut down\n- before the old primary or if the old standby servers are still running.)\n- Also, make sure <varname>wal_level</varname> is not set to\n- <literal>minimal</literal> in the\n<filename>postgresql.conf</filename> file on the\n- new primary cluster.\n- </para>\n- </step>\n\nRight now, we say that you should stop the standby servers and then\nprepare for standby server upgrades. With this patch, we say that you\nshould first prepare for standby server upgrades, and then stop the\nstandby servers. But the last part of the text about preparing for\nstandby server upgrades now mentions things to be done after carrying\nout the next step where the servers are actually stopped. That seems\nconfusing. 
Perhaps we need two separate steps here, one to be\nperformed before stopping both servers and the other after.\n\nAlso, let me express my general terror at the idea of anyone actually\nusing this procedure.\n\nRegards,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 5 Apr 2022 12:38:04 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
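Verifying the "Latest checkpoint location" match that the documentation step quoted above requires boils down to parsing `pg_controldata` output from both clusters. No cluster is available here, so the sketch below feeds a canned sample with invented LSN values; a real check would run `pg_controldata` against the old primary and each standby after shutdown:

```shell
# Canned pg_controldata-style output; the LSNs are invented for
# illustration and would come from the real clusters in practice.
sample_primary() {
cat <<'EOF'
pg_control version number:            1300
Latest checkpoint location:           0/3000148
Latest checkpoint's REDO location:    0/3000110
EOF
}
sample_standby() {
cat <<'EOF'
pg_control version number:            1300
Latest checkpoint location:           0/3000148
Latest checkpoint's REDO location:    0/3000110
EOF
}

# Extract just the checkpoint location line's value.
lsn() { sed -n 's/^Latest checkpoint location: *//p'; }

primary_lsn=$(sample_primary | lsn)
standby_lsn=$(sample_standby | lsn)

if [ "$primary_lsn" = "$standby_lsn" ]; then
    echo "checkpoint locations match: $primary_lsn"
else
    echo "MISMATCH ($primary_lsn vs $standby_lsn): do not rsync the standby" >&2
fi
```

A mismatch here is exactly the "standbys shut down before the primary" case the thread warns about, where this upgrade shortcut must not be used.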
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Jul 26, 2021 at 3:11 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > > Thanks for looking at this!\n> >\n> > Sure. Thanks for working on it!\n> \n> Stephen, do you intend to do something about this patch in terms of\n> getting it committed? You're the only reviewer but haven't responded\n> to the thread for more than 5 months.\n\nI tried to be clear in the last email on the thread, the one which you\njust responded to, here:\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> This, of course, all comes back to the original complaint I had about\n> documenting this approach, which is that these things should only be\n> done by someone extremely familiar with the PG codebase, until and\n> unless we write an actual tool to do this.\n\nTo be more explicit though- we should write a tool to do this. We\nshouldn't try to document a way to do it because it's hard to get right.\nWhile rsync is very capable, what's needed to really do this goes beyond\nwhat we could reasonably put into any rsync command or really even into\na documented procedure. I get that we already have it documented (and\nI'll note that doing so was against my recommendation..) and that some\nfolks (likely those who follow this mailing list) have had success using\nit, but I'd really rather we just take it out and put it on a wiki\nsomewhere as a \"we need a tool that does this stuff\" and hope that\nsomeone finds time to write one.\n\n> I don't feel that I know this area of the documentation well enough to\n> feel comfortable passing judgement on whether this change is an\n> improvement or not. 
However I do feel somewhat uncomfortable with\n> this:\n> \n> - <step>\n> - <title>Prepare for standby server upgrades</title>\n> -\n> - <para>\n> - If you are upgrading standby servers using methods outlined in\n> section <xref\n> - linkend=\"pgupgrade-step-replicas\"/>, verify that the old standby\n> - servers are caught up by running <application>pg_controldata</application>\n> - against the old primary and standby clusters. Verify that the\n> - <quote>Latest checkpoint location</quote> values match in all clusters.\n> - (There will be a mismatch if old standby servers were shut down\n> - before the old primary or if the old standby servers are still running.)\n> - Also, make sure <varname>wal_level</varname> is not set to\n> - <literal>minimal</literal> in the\n> <filename>postgresql.conf</filename> file on the\n> - new primary cluster.\n> - </para>\n> - </step>\n> \n> Right now, we say that you should stop the standby servers and then\n> prepared for standby server upgrades. With this patch, we say that you\n> should first prepare for standby server upgrades, and then stop the\n> standby servers. But the last part of the text about preparing for\n> standby server upgrades now mentions things to be done after carrying\n> out the next step where the servers are actually stopped. That seems\n> confusing. Perhaps we need two separate steps here, one to be\n> performed before stopping both servers and the other after.\n\nIt should really be both- things to do on the primary ahead of time\n(truncate all unlogged tables, make sure there aren't any orphaned\ntemporary tables, etc), and then things to do on the replicas after\nshutting the primary down (basically, make sure they are fully caught up\nwith where the primary was at shutdown). I tried to explain that in my\nprior email but perhaps didn't do a very good job.\n\n> Also, let me express my general terror at the idea of anyone actually\n> using this procedure.\n\nI mean, yeah, I agree.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 5 Apr 2022 13:10:38 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 01:10:38PM -0400, Stephen Frost wrote:\n> To be more explicit though- we should write a tool to do this.  We\n> shouldn't try to document a way to do it because it's hard to get right.\n> While rsync is very capable, what's needed to really do this goes beyond\n> what we could reasonably put into any rsync command or really even into\n> a documented procedure.  I get that we already have it documented (and\n> I'll note that doing so was against my recommendation..) and that some\n> folks (likely those who follow this mailing list) have had success using\n> it, but I'd really rather we just take it out and put it on a wiki\n> somewhere as a \"we need a tool that does this stuff\" and hope that\n> someone finds time to write one.\n\nWell, I think pg_upgrade needs a tool, let alone for standby upgrades,\nbut 13 years in, no one has written one, so I am not holding my breath. \nAlso, we need to document the procedure _somewhere_ --- if we don't, the\nonly procedure is embedded in a tool, and that seems even worse than\nwhat we have now.\n\n> It should really be both- things to do on the primary ahead of time\n> (truncate all unlogged tables, make sure there aren't any orphaned\n> temporary tables, etc), and then things to do on the replicas after\n> shutting the primary down (basically, make sure they are fully caught up\n> with where the primary was at shutdown).  I tried to explain that in my\n> prior email but perhaps didn't do a very good job.\n> \n> > Also, let me express my general terror at the idea of anyone actually\n> > using this procedure.\n> \n> I mean, yeah, I agree.\n\nI thought that was true for pg_upgrade in general?  ;-)\n\nSeems like a pull up your sleeves and hold your nose --- I am good at\nthose tasks.  ;-)  Should I work on this?  Tangentially, I see that my\nold macros fastgetattr and heap_getattr have finally been retired by\ncommit e27f4ee0a7. 
:-)\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 5 Apr 2022 14:59:53 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Tue, 2022-04-05 at 12:38 -0400, Robert Haas wrote:\n> Also, let me express my general terror at the idea of anyone actually\n> using this procedure.\n\nI did, and I couldn't get it to work with absolute paths, and using\nrelative paths seemed to me to be more intuitive anyway, hence the patch.\n\nOriginally that was the only change I wanted to make to the documentation,\nbut you know how it is: as soon as you touch something like this, someone\nwill (rightly so) prod you and say \"while you change this, that other\nthing there should also be improved\", and the patch gets more\nand more invasive.\n\nI agree with the scariness of this, but I prefer to have it in the\ndocumentation anyway; at least as long as we have nothing better (which\nis always the enemy of the good).\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 06 Apr 2022 12:35:55 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
},
{
"msg_contents": "On Tue, 2022-04-05 at 13:10 -0400, Stephen Frost wrote:\n> * Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Jul 26, 2021 at 3:11 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> > > > Thanks for looking at this!\n> > > \n> > > Sure. Thanks for working on it!\n> > \n> > Stephen, do you intend to do something about this patch in terms of\n> > getting it committed? You're the only reviewer but haven't responded\n> > to the thread for more than 5 months.\n> \n> I tried to be clear in the last email on the thread, the one which you\n> just responded to, here:\n> \n> * Stephen Frost (sfrost@snowman.net) wrote:\n> > This, of course, all comes back to the original complaint I had about\n> > documenting this approach, which is that these things should only be\n> > done by someone extremely familiar with the PG codebase, until and\n> > unless we write an actual tool to do this.\n> \n> To be more explicit though- we should write a tool to do this. We\n> shouldn't try to document a way to do it because it's hard to get right.\n\nI see no agreement on this patch. I'll withdraw it from the commitfest\nto avoid hogging resources. Thanks everyone for the review.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Fri, 08 Apr 2022 07:14:54 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: Improve documentation for pg_upgrade, standbys and rsync"
}
] |
[
{
"msg_contents": "Hi,\n\nI can reproducibly get build failures in pgbench on 32-bit i386\nDebian, both on sid and buster. (The older Debian stretch and Ubuntu\nbionic are unaffected. Other architectures are also fine.)\n\nhttps://pgdgbuild.dus.dg-i.net/view/Binaries/job/postgresql-14-binaries/635/\n\nhttps://pgdgbuild.dus.dg-i.net/view/Binaries/job/postgresql-14-binaries/635/architecture=i386,distribution=sid/consoleFull\n\n17:39:41 make[2]: Entering directory '/<<PKGBUILDDIR>>/build/src/bin/pgbench'\n17:39:41 rm -rf '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check\n17:39:41 /bin/mkdir -p '/<<PKGBUILDDIR>>/build/src/bin/pgbench'/tmp_check\n17:39:41 cd /<<PKGBUILDDIR>>/build/../src/bin/pgbench && TESTDIR='/<<PKGBUILDDIR>>/build/src/bin/pgbench' PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/postgresql/14/bin:$PATH\" LD_LIBRARY_PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/i386-linux-gnu\" PGPORT='65432' PG_REGRESS='/<<PKGBUILDDIR>>/build/src/bin/pgbench/../../../src/test/regress/pg_regress' REGRESS_SHLIB='/<<PKGBUILDDIR>>/build/src/test/regress/regress.so' /usr/bin/prove -I /<<PKGBUILDDIR>>/build/../src/test/perl/ -I /<<PKGBUILDDIR>>/build/../src/bin/pgbench --verbose t/*.pl\n17:39:50 \n17:39:50 # Failed test 'pgbench expressions stderr /(?^:command=113.: boolean true\\b)/'\n17:39:50 # at t/001_pgbench_with_server.pl line 421.\n17:39:50 # 'pgbench: setting random seed to 5432\n17:39:50 # starting vacuum...end.\n17:39:50 # debug(script=0,command=1): int 13\n17:39:50 # debug(script=0,command=2): int 116\n17:39:50 # debug(script=0,command=3): int 1498\n17:39:50 # debug(script=0,command=4): int 4\n17:39:50 # debug(script=0,command=5): int 5\n17:39:50 # debug(script=0,command=6): int 6\n17:39:50 # debug(script=0,command=7): int 7\n17:39:50 # debug(script=0,command=8): int 8\n17:39:50 # debug(script=0,command=9): int 9\n17:39:50 # debug(script=0,command=10): int 10\n17:39:50 # debug(script=0,command=11): int 11\n17:39:50 # debug(script=0,command=12): int 
12\n17:39:50 # debug(script=0,command=13): double 13.856406460551\n17:39:50 # debug(script=0,command=14): double 14.8514851485149\n17:39:50 # debug(script=0,command=15): double 15.39380400259\n17:39:50 # debug(script=0,command=16): double 16\n17:39:50 # debug(script=0,command=17): double 17.094\n17:39:50 # debug(script=0,command=20): int 1\n17:39:50 # debug(script=0,command=21): double -27\n17:39:50 # debug(script=0,command=22): double 1024\n17:39:50 # debug(script=0,command=23): double 1\n17:39:50 # debug(script=0,command=24): double 1\n17:39:50 # debug(script=0,command=25): double -0.125\n17:39:50 # debug(script=0,command=26): double -0.125\n17:39:50 # debug(script=0,command=27): double -0.00032\n17:39:50 # debug(script=0,command=28): double 8.50705917302346e+37\n17:39:50 # debug(script=0,command=29): double 1e+30\n17:39:50 # debug(script=0,command=30): boolean false\n17:39:50 # debug(script=0,command=31): boolean true\n17:39:50 # debug(script=0,command=32): int 32\n17:39:50 # debug(script=0,command=33): int 33\n17:39:50 # debug(script=0,command=34): double 34\n17:39:50 # debug(script=0,command=35): int 35\n17:39:50 # debug(script=0,command=36): int 36\n17:39:50 # debug(script=0,command=37): double 37.0000002\n17:39:50 # debug(script=0,command=38): int 38\n17:39:50 # debug(script=0,command=39): int 39\n17:39:50 # debug(script=0,command=40): boolean true\n17:39:50 # debug(script=0,command=41): null\n17:39:50 # debug(script=0,command=42): null\n17:39:50 # debug(script=0,command=43): boolean true\n17:39:50 # debug(script=0,command=44): boolean true\n17:39:50 # debug(script=0,command=45): boolean true\n17:39:50 # debug(script=0,command=46): int 46\n17:39:50 # debug(script=0,command=47): boolean true\n17:39:50 # debug(script=0,command=48): boolean true\n17:39:50 # debug(script=0,command=49): int -5817877081768721676\n17:39:50 # debug(script=0,command=50): boolean true\n17:39:50 # debug(script=0,command=51): int -7793829335365542153\n17:39:50 # 
debug(script=0,command=52): int -1464711246773187029\n17:39:50 # debug(script=0,command=53): boolean true\n17:39:50 # debug(script=0,command=55): int -1\n17:39:50 # debug(script=0,command=56): int -1\n17:39:50 # debug(script=0,command=57): int 1\n17:39:50 # debug(script=0,command=65): int 65\n17:39:50 # debug(script=0,command=74): int 74\n17:39:50 # debug(script=0,command=83): int 83\n17:39:50 # debug(script=0,command=86): int 86\n17:39:50 # debug(script=0,command=93): int 93\n17:39:50 # debug(script=0,command=95): int 0\n17:39:50 # debug(script=0,command=96): int 1\n17:39:50 # debug(script=0,command=97): int 0\n17:39:50 # debug(script=0,command=98): int 5432\n17:39:50 # debug(script=0,command=99): int -9223372036854775808\n17:39:50 # debug(script=0,command=100): int 9223372036854775807\n17:39:50 # debug(script=0,command=101): boolean true\n17:39:50 # debug(script=0,command=102): boolean true\n17:39:50 # debug(script=0,command=103): boolean true\n17:39:50 # debug(script=0,command=104): boolean true\n17:39:50 # debug(script=0,command=105): boolean true\n17:39:50 # debug(script=0,command=109): boolean true\n17:39:50 # debug(script=0,command=110): boolean true\n17:39:50 # debug(script=0,command=111): boolean true\n17:39:50 # debug(script=0,command=112): int 9223372036854775797\n17:39:50 # debug(script=0,command=113): boolean false\n17:39:50 # '\n17:39:50 # doesn't match '(?^:command=113.: boolean true\\b)'\n17:39:52 # Looks like you failed 1 test of 415.\n17:39:52 t/001_pgbench_with_server.pl .. 
\n17:39:52 ok 1 - concurrent OID generation status (got 0 vs expected 0)\n17:39:52 ok 2 - concurrent OID generation stdout /(?^:processed: 125/125)/\n17:39:52 ok 3 - concurrent OID generation stderr /(?^:^$)/\n17:39:52 ok 4 - no such database status (got 1 vs expected 1)\n17:39:52 ok 5 - no such database stdout /(?^:^$)/\n17:39:52 ok 6 - no such database stderr /(?^:connection to server .* failed)/\n17:39:52 ok 7 - no such database stderr /(?^:FATAL: database \"no-such-database\" does not exist)/\n17:39:52 ok 8 - run without init status (got 1 vs expected 1)\n17:39:52 ok 9 - run without init stdout /(?^:^$)/\n17:39:52 ok 10 - run without init stderr /(?^:Perhaps you need to do initialization)/\n17:39:52 ok 11 - pgbench scale 1 initialization status (got 0 vs expected 0)\n17:39:52 ok 12 - pgbench scale 1 initialization stdout /(?^:^$)/\n\n[...]\n\n17:39:52 ok 172 - pgbench expressions stderr /(?^:command=110.: boolean true\\b)/\n17:39:52 ok 173 - pgbench expressions stderr /(?^:command=111.: boolean true\\b)/\n17:39:52 ok 174 - pgbench expressions stderr /(?^:command=112.: int 9223372036854775797\\b)/\n17:39:52 not ok 175 - pgbench expressions stderr /(?^:command=113.: boolean true\\b)/\n17:39:52 ok 176 - random seeded with 733446049 status (got 0 vs expected 0)\n17:39:52 ok 177 - random seeded with 733446049 stdout /(?^:processed: 1/1)/\n17:39:52 ok 178 - random seeded with 733446049 stderr /(?^:setting random seed to 733446049\\b)/\n\n[...]\n\n17:39:52 ok 415 - remove log files\n17:39:52 1..415\n17:39:52 Dubious, test returned 1 (wstat 256, 0x100)\n17:39:52 Failed 1/415 subtests\n17:39:53 t/002_pgbench_no_server.pl ....\n\nChristoph\n\n\n",
"msg_date": "Tue, 18 May 2021 22:45:06 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I can reproducibly get build failures in pgbench on 32-bit i386\n> Debian, both on sid and buster. (The older Debian stretch and Ubuntu\n> bionic are unaffected. Other architectures are also fine.)\n\nThe test that's failing came in with\n\n6b258e3d688db14aadb58dde2a72939362310684\nAuthor: Dean Rasheed <dean.a.rasheed@gmail.com>\nDate: Tue Apr 6 11:50:42 2021 +0100\n\n pgbench: Function to generate random permutations.\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 May 2021 17:51:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, May 19, 2021 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > I can reproducibly get build failures in pgbench on 32-bit i386\n> > Debian, both on sid and buster. (The older Debian stretch and Ubuntu\n> > bionic are unaffected. Other architectures are also fine.)\n>\n> The test that's failing came in with\n>\n> 6b258e3d688db14aadb58dde2a72939362310684\n> Author: Dean Rasheed <dean.a.rasheed@gmail.com>\n> Date: Tue Apr 6 11:50:42 2021 +0100\n>\n> pgbench: Function to generate random permutations.\n\nFWIW this is reproducible on my local Debian/gcc box with -m32, but\nnot on my FreeBSD/clang box with -m32. permute() produces different\nvalues here:\n\n\\set t debug(permute(:size-1, :size, 5432) = 5301702756001087507 and \\\n permute(:size-2, :size, 5432) = 8968485976055840695 and \\\n permute(:size-3, :size, 5432) = 6708495591295582115 and \\\n permute(:size-4, :size, 5432) = 2801794404574855121 and \\\n permute(:size-5, :size, 5432) = 1489011409218895840 and \\\n permute(:size-6, :size, 5432) = 2267749475878240183 and \\\n permute(:size-7, :size, 5432) = 1300324176838786780)\n\nI don't understand any of this stuff at all, but I added a bunch of\nprintfs and worked out that the first point its local variables\ndiverge is here:\n\n /* Random offset */\n r = (uint64) getrand(&random_state2, 0, size - 1);\n\n... after 4 earlier getrand() produced matching values. Hmm.\n\n\n",
"msg_date": "Wed, 19 May 2021 11:34:30 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, May 19, 2021 at 11:34 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I don't understand any of this stuff at all, but I added a bunch of\n> printfs and worked out that the first point its local variables\n> diverge is here:\n>\n> /* Random offset */\n> r = (uint64) getrand(&random_state2, 0, size - 1);\n\nForgot to post the actual values:\n\n r = 2563421694876090368\n r = 2563421694876090365\n\nSmells a bit like a precision problem in the workings of pg_erand48(),\nbut as soon as I saw floating point numbers I closed my laptop and ran\nfor the door.\n\n\n",
"msg_date": "Wed, 19 May 2021 11:40:59 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Forgot to post the actual values:\n> r = 2563421694876090368\n> r = 2563421694876090365\n> Smells a bit like a precision problem in the workings of pg_erand48(),\n> but as soon as I saw floating point numbers I closed my laptop and ran\n> for the door.\n\nYup. This test has a touching, but entirely unwarranted, faith in\npg_erand48() producing bit-for-bit the same values everywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 May 2021 19:45:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "\n>> Forgot to post the actual values:\n>> r = 2563421694876090368\n>> r = 2563421694876090365\n>> Smells a bit like a precision problem in the workings of pg_erand48(),\n>> but as soon as I saw floating point numbers I closed my laptop and ran\n>> for the door.\n>\n> Yup. This test has a touching, but entirely unwarranted, faith in\n> pg_erand48() producing bit-for-bit the same values everywhere.\n\nIndeed.\n\nI argued against involving any floats computation on principle, but Dean \nwas confident it could work, and it did simplify the code, so it did not \nlook that bad an option.\n\nI see two simple approaches:\n\n(1) use another PRNG inside pgbench, eg Knuth's which was used in some \nprevious submission and is very simple and IMHO better than the rand48 \nstuff.\n\n(2) extend pg_*rand48() to provide an unsigned 64 bits out of the 48 bits\nstate.\n\nAny preference?\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 19 May 2021 09:06:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, May 19, 2021 at 09:06:16AM +0200, Fabien COELHO wrote:\n> I see two simple approaches:\n> \n> (1) use another PRNG inside pgbench, eg Knuth's which was used in some\n> previous submission and is very simple and IMHO better than the rand48\n> stuff.\n> \n> (2) extend pg_*rand48() to provide an unsigned 64 bits out of the 48 bits\n> state.\n\nOr, (3) remove this test? I am not quite sure what there is to gain\nwith this extra test considering all the other tests with permute()\nalready present in this script.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 16:27:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, 19 May 2021 at 00:35, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> FWIW this is reproducible on my local Debian/gcc box with -m32,\n\nConfirmed, thanks for looking. I can reproduce it on my machine with\n-m32. It's somewhat annoying that the buildfarm didn't pick it up\nsooner :-(\n\nOn Wed, 19 May 2021 at 08:28, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 19, 2021 at 09:06:16AM +0200, Fabien COELHO wrote:\n> > I see two simple approaches:\n> >\n> > (1) use another PRNG inside pgbench, eg Knuth's which was used in some\n> > previous submission and is very simple and IMHO better than the rand48\n> > stuff.\n> >\n> > (2) extend pg_*rand48() to provide an unsigned 64 bits out of the 48 bits\n> > state.\n>\n> Or, (3) remove this test? I am not quite sure what there is to gain\n> with this extra test considering all the other tests with permute()\n> already present in this script.\n\nYes, I think removing the test is the best option. It was originally\nadded because there was a separate code path for larger permutation\nsizes that needed testing, but that's no longer the case so the test\nreally isn't adding anything.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 19 May 2021 09:32:36 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "> Confirmed, thanks for looking. I can reproduce it on my machine with\n> -m32. It's somewhat annoying that the buildfarm didn't pick it up\n> sooner :-(\n>\n> On Wed, 19 May 2021 at 08:28, Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Wed, May 19, 2021 at 09:06:16AM +0200, Fabien COELHO wrote:\n>>> I see two simple approaches:\n>>>\n>>> (1) use another PRNG inside pgbench, eg Knuth's which was used in some\n>>> previous submission and is very simple and IMHO better than the rand48\n>>> stuff.\n>>>\n>>> (2) extend pg_*rand48() to provide an unsigned 64 bits out of the 48 bits\n>>> state.\n>>\n>> Or, (3) remove this test? I am not quite sure what there is to gain\n>> with this extra test considering all the other tests with permute()\n>> already present in this script.\n>\n> Yes, I think removing the test is the best option. It was originally\n> added because there was a separate code path for larger permutation\n> sizes that needed testing, but that's no longer the case so the test\n> really isn't adding anything.\n\nHmmm…\n\nIt is the one test which worked in actually detecting an issue, so I would \nnot say that it is not adding anything, on the contrary, it did prove its \nvalue! The permute function is expected to be deterministic on different \nplatforms and architectures, and it is not.\n\nI agree that removing the test will hide the issue effectively:-) but \nISTM more appropriate to solve the underlying issue and keep the test.\n\nI'd agree with a two phases approach: drop the test in the short term and \ndeal with the PRNG later. I'm sooooo unhappy with this 48 bit PRNG that I \nmay be motivated enough to attempt to replace it, or at least add a better \n(faster?? larger state?? same/better quality?) alternative.\n\n-- \nFabien.",
"msg_date": "Wed, 19 May 2021 12:32:37 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "Hello Dean,\n\n>>> Or, (3) remove this test?  I am not quite sure what there is to gain\n>>> with this extra test considering all the other tests with permute()\n>>> already present in this script.\n>> \n>> Yes, I think removing the test is the best option. It was originally\n>> added because there was a separate code path for larger permutation\n>> sizes that needed testing, but that's no longer the case so the test\n>> really isn't adding anything.\n>\n> Hmmm…\n>\n> It is the one test which worked in actually detecting an issue, so I would \n> not say that it is not adding anything, on the contrary, it did prove its \n> value! The permute function is expected to be deterministic on different \n> platforms and architectures, and it is not.\n>\n> I agree that removing the test will hide the issue effectively:-) but ISTM \n> more appropriate to solve the underlying issue and keep the test.\n>\n> I'd agree with a two phases approach: drop the test in the short term and \n> deal with the PRNG later. I'm sooooo unhappy with this 48 bit PRNG that I may \n> be motivated enough to attempt to replace it, or at least add a better \n> (faster?? larger state?? same/better quality?) alternative.\n\nAttached patch deactivates the test, with comments to outline that there \nis an issue to fix… so it is *not* removed.\n\nI'm obviously okay with providing an alternate PRNG, let me know if this \nis the preferred option.\n\n-- \nFabien.",
"msg_date": "Wed, 19 May 2021 13:07:43 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, 19 May 2021 at 11:32, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> >> Or, (3) remove this test? I am not quite sure what there is to gain\n> >> with this extra test considering all the other tests with permute()\n> >> already present in this script.\n> >\n> > Yes, I think removing the test is the best option. It was originally\n> > added because there was a separate code path for larger permutation\n> > sizes that needed testing, but that's no longer the case so the test\n> > really isn't adding anything.\n>\n> Hmmm…\n>\n> It is the one test which worked in actually detecting an issue, so I would\n> not say that it is not adding anything, on the contrary, it did prove its\n> value! The permute function is expected to be deterministic on different\n> platforms and architectures, and it is not.\n>\n\nIn fact what it demonstrates is that the results from permute(), like\nall the other pgbench random functions, will vary by platform for\nsufficiently large size parameters.\n\n> I'd agree with a two phases approach: drop the test in the short term and\n> deal with the PRNG later. I'm sooooo unhappy with this 48 bit PRNG that I\n> may be motivated enough to attempt to replace it, or at least add a better\n> (faster?? larger state?? same/better quality?) alternative.\n>\n\nI don't necessarily have a problem with that provided the replacement\nis well-chosen and has a proven track record (i.e., let's not invent\nour own PRNG).\n\nFor now though, I'll go remove the test.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 19 May 2021 12:14:36 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "On Wed, 19 May 2021 at 12:07, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> Attached patch disactivates the test with comments to outline that there\n> is an issue to fix… so it is *not* removed.\n>\n\nI opted to just remove the test rather than comment it out, since the\nissue highlighted isn't specific to permute(). Also changing the PRNG\nwill completely change the results, so all the test values would\nrequire rewriting, rather than it just being a case of uncommenting\nthe test and expecting it to work.\n\n> I'm obviously okay with providing an alternate PRNG, let me know if this\n> is the prefered option.\n>\n\nThat's something for consideration in v15. If we do decide we want a\nnew PRNG, it should apply across the board to all pgbench random\nfunctions.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 19 May 2021 13:06:24 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": ">>>> Or, (3) remove this test?  I am not quite sure what there is to gain\n>>>> with this extra test considering all the other tests with permute()\n>>>> already present in this script.\n>>>\n>>> Yes, I think removing the test is the best option. It was originally\n>>> added because there was a separate code path for larger permutation\n>>> sizes that needed testing, but that's no longer the case so the test\n>>> really isn't adding anything.\n>>\n>> Hmmm…\n>>\n>> It is the one test which worked in actually detecting an issue, so I would\n>> not say that it is not adding anything, on the contrary, it did prove its\n>> value! The permute function is expected to be deterministic on different\n>> platforms and architectures, and it is not.\n>>\n>\n> In fact what it demonstrates is that the results from permute(), like\n> all the other pgbench random functions, will vary by platform for\n> sufficiently large size parameters.\n\nIndeed, that is the case if the underlying math uses doubles & large numbers. \nFor integer-only computations it should be safe though, and permute should \nbe in this category.\n\n>> I'd agree with a two phases approach: drop the test in the short term and\n>> deal with the PRNG later. I'm sooooo unhappy with this 48 bit PRNG that I\n>> may be motivated enough to attempt to replace it, or at least add a better\n>> (faster?? larger state?? same/better quality?) alternative.\n>\n> I don't necessarily have a problem with that provided the replacement\n> is well-chosen and has a proven track record (i.e., let's not invent\n> our own PRNG).\n\nYes, obviously, I'm not daft enough to reinvent a PRNG. The question is to \nchoose one, motivate the choice, and build the relevant API for what pg \nneeds, possibly with some benchmarking.\n\n> For now though, I'll go remove the test.\n\nThis also removes the reminder…\n\n-- \nFabien.",
"msg_date": "Wed, 19 May 2021 17:25:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "\nOn 5/19/21 6:32 AM, Fabien COELHO wrote:\n>\n>\n>> Confirmed, thanks for looking. I can reproduce it on my machine with\n>> -m32. It's somewhat annoying that the buildfarm didn't pick it up\n>> sooner :-(\n>>\n>> On Wed, 19 May 2021 at 08:28, Michael Paquier <michael@paquier.xyz>\n>> wrote:\n>>>\n>>> On Wed, May 19, 2021 at 09:06:16AM +0200, Fabien COELHO wrote:\n>>>> I see two simple approaches:\n>>>>\n>>>> (1) use another PRNG inside pgbench, eg Knuth's which was used in some\n>>>> previous submission and is very simple and IMHO better than the rand48\n>>>> stuff.\n>>>>\n>>>> (2) extend pg_*rand48() to provide an unsigned 64 bits out of the\n>>>> 48 bits\n>>>> state.\n>>>\n>>> Or, (3) remove this test? I am not quite sure what there is to gain\n>>> with this extra test considering all the other tests with permute()\n>>> already present in this script.\n>>\n>> Yes, I think removing the test is the best option. It was originally\n>> added because there was a separate code path for larger permutation\n>> sizes that needed testing, but that's no longer the case so the test\n>> really isn't adding anything.\n>\n> Hmmm…\n>\n> It is the one test which worked in actually detecting an issue, so I\n> would not say that it is not adding anything, on the contrary, it did\n> prove its value! The permute function is expected to be deterministic\n> on different platforms and architectures, and it is not.\n>\n> I agree that removing the test will hide the issue effectively:-) but\n> ISTM more appropriate to solve the underlying issue and keep the test.\n>\n> I'd agree with a two phases approach: drop the test in the short term\n> and deal with the PRNG later. I'm sooooo unhappy with this 48 bit PRNG\n> that I may be motivated enough to attempt to replace it, or at least\n> add a better (faster?? larger state?? same/better quality?) 
alternative.\n>\n\nYeah, this does seem to be something that should be fixed rather than\nhidden.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 14:42:36 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "In the meantime postgresql-14 has been accepted into Debian/experimental:\n\nhttps://buildd.debian.org/status/logs.php?pkg=postgresql-14&ver=14%7Ebeta1-1\n\nInterestingly, the test is only failing on i386 and none of the other\narchitectures, which could hint at 80-bit extended precision FP\nproblems.\n\n(The sparc64 error there is something else, I'll try rerunning it.\ncommand failed: \"psql\" -X -c \"CREATE DATABASE \\\"isolation_regression\\\" TEMPLATE=template0\" \"postgres\")\n\nChristoph\n\n\n",
"msg_date": "Fri, 21 May 2021 21:56:47 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Interestingly, the test is only failing on i386 and none of the other\n> architectures, which could hint at 80-bit extended precision FP\n> problems.\n\nYeah, that's what I'd assumed it is. We suppress that where we can\nwith -fexcess-precision=standard or -msse2, but I'm guessing that\ndoesn't help here for some reason.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 May 2021 16:04:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench test failing on 14beta1 on Debian/i386"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nI'm wrapping up a patch that adds SQL:2011 FOR PORTION OF syntax and\nthen uses it to implement CASCADE in temporal foreign keys. The FKs\nare implemented as triggers, like ordinary FKs, and the trigger\nfunction makes a call through SPI that does `UPDATE %s FOR PORTION OF\n%s FROM $%d TO $%d`. But I suspect I'm missing something in the\nanalyze/rewriting phase, because I get this error:\n\nERROR: no value found for parameter 1\n\nThat's coming from ExecEvalParamExtern in executor/execExprInterp.c.\n\nIf I hardcode some dates instead, the query works (even if I use the\nsame parameters elsewhere). Does anyone have any hints what I may be\nmissing? Any suggestions for some other syntax to consult as an\nexample (e.g. ON CONFLICT DO UPDATE)?\n\nIn gram.y I parse the phrase like this:\n\nFOR PORTION OF ColId FROM a_expr TO a_expr\n\nThen in the analysis phase I do this:\n\n /*\n * Build a range from the FROM ... TO .... bounds.\n * This should give a constant result, so we accept functions like NOW()\n * but not column references, subqueries, etc.\n *\n * It also permits MINVALUE and MAXVALUE like declarative partitions.\n */\n Node *target_start = transformForPortionOfBound(forPortionOf->target_start);\n Node *target_end = transformForPortionOfBound(forPortionOf->target_end);\n FuncCall *fc = makeFuncCall(SystemFuncName(range_type_name),\n list_make2(target_start, target_end),\n COERCE_EXPLICIT_CALL,\n forPortionOf->range_name_location);\n result->targetRange = transformExpr(pstate, (Node *) fc,\nEXPR_KIND_UPDATE_PORTION);\n\n(transformForPortionOfBound just handles MIN/MAXVALUE, and for\ntargetRange I feed the bounds into a range type constructor to use\nlater.)\n\nI was hoping that transformExpr would do everything I need re\nidentifying parameters, but maybe there is something else in a later\nphase?\n\nThanks,\nPaul\n\n\n",
"msg_date": "Tue, 18 May 2021 15:00:24 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Supporting $n parameters in new syntax"
},
{
"msg_contents": "On Tue, May 18, 2021 at 3:00 PM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> I suspect I'm missing something in the\n> analyze/rewriting phase, because I get this error:\n>\n> ERROR: no value found for parameter 1\n> . . .\n>\n> I was hoping that transformExpr would do everything I need re\n> identifying parameters, but maybe there is something else in a later\n> phase?\n\nNever mind, I think I figured it out. The problem was that I was\ncalling ExecEvalExpr with CreateStandaloneExprContext(), and I should\nhave been using the context from the query.\n\nThanks!\nPaul\n\n\n",
"msg_date": "Tue, 18 May 2021 16:46:15 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: Supporting $n parameters in new syntax"
}
] |
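The resolution Paul reports above — evaluate the FOR PORTION OF bounds with the expression context that came from the query, not a standalone one — can be illustrated with a small Python analogy. This is a toy model, not PostgreSQL code: the names `ExprContext` and `eval_param` only echo the real ones, and the failure branch mirrors the "no value found for parameter 1" error from ExecEvalParamExtern.

```python
# Hypothetical miniature of executor parameter lookup. The names below
# (ExprContext, eval_param) only echo PostgreSQL's; they are not its API.
class ExprContext:
    def __init__(self, param_values=None):
        # $1..$n values; a freshly created standalone context carries none
        self.param_values = param_values or []

def eval_param(econtext, paramno):
    # Mirrors the reported failure mode: if the context was not derived
    # from the query, the parameter values simply are not there.
    if paramno > len(econtext.param_values):
        raise LookupError("no value found for parameter %d" % paramno)
    return econtext.param_values[paramno - 1]

standalone = ExprContext()                      # like CreateStandaloneExprContext()
from_query = ExprContext(["2018-01-01", "2019-01-01"])  # query-derived context

try:
    eval_param(standalone, 1)
except LookupError as exc:
    print(exc)                                  # no value found for parameter 1
print(eval_param(from_query, 1))                # 2018-01-01
```

The point of the sketch is only that the parameter values live with the query's execution state, so any context created from scratch cannot resolve `$n`.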
[
{
"msg_contents": "I discovered $SUBJECT after wondering why hyrax hadn't reported\nin recently, and trying to run check-world under CCA to see if\nanything got stuck. Indeed it did --- although this doesn't\nexplain the radio silence from hyrax, because that animal doesn't\nrun any TAP tests. (Neither does avocet, which I think is the\nonly other active CCA critter. So this could have been broken\nfor a very long time.)\n\nI count three distinct bugs that were exposed by this attempt:\n\n1. In the part of 013_partition.pl that tests firing AFTER\ntriggers on partitioned tables, we have a case of continuing\nto access a relcache entry that's already been closed.\n(I'm not quite sure why prion's -DRELCACHE_FORCE_RELEASE\nhasn't exposed this.) It looks to me like instead we had\na relcache reference leak before f3b141c48, but now, the\nonly relcache reference count on a partition child table\nis dropped by ExecCleanupTupleRouting, which logical/worker.c\ninvokes before it fires triggers on that table. Kaboom.\nThis might go away if worker.c weren't so creatively different\nfrom the other code paths concerned with executor shutdown.\n\n2. Said bug causes a segfault in the apply worker process.\nThis causes the parent postmaster to give up and die.\nI don't understand why we don't treat that like a crash\nin a regular backend, considering that an apply worker\nis running largely user-defined code.\n\n3. Once the subscriber1 postmaster has exited, the TAP\ntest will eventually time out, and then this happens:\n\ntimed out waiting for catchup at t/013_partition.pl line 219.\n### Stopping node \"publisher\" using mode immediate\n# Running: pg_ctl -D /Users/tgl/pgsql/src/test/subscription/tmp_check/t_013_partition_publisher_data/pgdata -m immediate stop\nwaiting for server to shut down.... 
done\nserver stopped\n# No postmaster PID for node \"publisher\"\n### Stopping node \"subscriber1\" using mode immediate\n# Running: pg_ctl -D /Users/tgl/pgsql/src/test/subscription/tmp_check/t_013_partition_subscriber1_data/pgdata -m immediate stop\npg_ctl: PID file \"/Users/tgl/pgsql/src/test/subscription/tmp_check/t_013_partition_subscriber1_data/pgdata/postmaster.pid\" does not exist\nIs server running?\nBail out! system pg_ctl failed\n\nThat is, because we failed to shut down subscriber1, the\ntest script neglects to shut down subscriber2, and now\nthings just sit indefinitely. So that's a robustness\nproblem in the TAP infrastructure, rather than a bug in\nPG proper; but I still say it's a bug that needs fixing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 May 2021 19:42:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Tue, May 18, 2021 at 07:42:08PM -0400, Tom Lane wrote:\n> I count three distinct bugs that were exposed by this attempt:\n> \n> 1. In the part of 013_partition.pl that tests firing AFTER\n> triggers on partitioned tables, we have a case of continuing\n> to access a relcache entry that's already been closed.\n> (I'm not quite sure why prion's -DRELCACHE_FORCE_RELEASE\n> hasn't exposed this.) It looks to me like instead we had\n> a relcache reference leak before f3b141c48, but now, the\n> only relcache reference count on a partition child table\n> is dropped by ExecCleanupTupleRouting, which logical/worker.c\n> invokes before it fires triggers on that table. Kaboom.\n> This might go away if worker.c weren't so creatively different\n> from the other code paths concerned with executor shutdown.\n\nThe tuple routing has made the whole worker logic messier by a larger\ndegree compared to when this stuff was only able to apply DMLs changes\non the partition leaves. I know that it is not that great to be more\ncreative here, but we need to make sure that AfterTriggerEndQuery() is\nmoved before ExecCleanupTupleRouting(). We could either keep the\nExecCleanupTupleRouting() calls as they are now, and call\nAfterTriggerEndQuery() in more code paths. Or we could have one\nPartitionTupleRouting and one ModifyTableState created beforehand\nin create_estate_for_relation() if applying the change on a\npartitioned table but that means manipulating more structures across \nmore layers of this code. Something like the attached fixes the\nproblem for me, but honestly it does not help in clarifying this code\nmore. I was not patient enough to wait for CLOBBER_CACHE_ALWAYS to\ninitialize the nodes in the TAP tests, so I have tested that with a\nsetup initialized with a non-clobber build, and reproduced the problem\nwith CLOBBER_CACHE_ALWAYS builds on those same nodes.\n\nYou are right that this is not a problem of 14~. 
I can reproduce the\nproblem on 13 as well, and we have no coverage for tuple routing with\ntriggers on this branch, so this would never have been stressed in the\nbuildfarm. There is a good argument to be made here in cherry-picking\n2ecfeda3 to REL_13_STABLE.\n\n> 2. Said bug causes a segfault in the apply worker process.\n> This causes the parent postmaster to give up and die.\n> I don't understand why we don't treat that like a crash\n> in a regular backend, considering that an apply worker\n> is running largely user-defined code.\n\nCleanupBackgroundWorker() and CleanupBackend() have a lot of common\npoints. Are you referring to an inconsistent behavior with\nrestart_after_crash that gets ignored for bgworkers? We disable it by\ndefault in the TAP tests.\n\n> 3. Once the subscriber1 postmaster has exited, the TAP\n> test will eventually time out, and then this happens:\n>\n> [.. logs ..]\n>\n> That is, because we failed to shut down subscriber1, the\n> test script neglects to shut down subscriber2, and now\n> things just sit indefinitely. So that's a robustness\n> problem in the TAP infrastructure, rather than a bug in\n> PG proper; but I still say it's a bug that needs fixing.\n\nThis one comes down to teardown_node() that uses system_or_bail(),\nleaving things unfinished. I guess that we could be more aggressive\nand ignore failures if we have a non-zero error code and that not all\nthe tests have passed within the END block of PostgresNode.pm.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 12:03:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 12:04 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Tue, May 18, 2021 at 07:42:08PM -0400, Tom Lane wrote:\n> > I count three distinct bugs that were exposed by this attempt:\n> >\n> > 1. In the part of 013_partition.pl that tests firing AFTER\n> > triggers on partitioned tables, we have a case of continuing\n> > to access a relcache entry that's already been closed.\n> > (I'm not quite sure why prion's -DRELCACHE_FORCE_RELEASE\n> > hasn't exposed this.) It looks to me like instead we had\n> > a relcache reference leak before f3b141c48, but now, the\n> > only relcache reference count on a partition child table\n> > is dropped by ExecCleanupTupleRouting, which logical/worker.c\n> > invokes before it fires triggers on that table. Kaboom.\n\nOops.\n\n> > This might go away if worker.c weren't so creatively different\n> > from the other code paths concerned with executor shutdown.\n>\n> The tuple routing has made the whole worker logic messier by a larger\n> degree compared to when this stuff was only able to apply DMLs changes\n> on the partition leaves. I know that it is not that great to be more\n> creative here, but we need to make sure that AfterTriggerEndQuery() is\n> moved before ExecCleanupTupleRouting(). We could either keep the\n> ExecCleanupTupleRouting() calls as they are now, and call\n> AfterTriggerEndQuery() in more code paths.\n\nYeah, that's what I thought to propose doing too. Your patch looks\nenough in that regard. Thanks for writing it.\n\n> Or we could have one\n> PartitionTupleRouting and one ModifyTableState created beforehand\n> in create_estate_for_relation() if applying the change on a\n> partitioned table but that means manipulating more structures across\n> more layers of this code.\n\nYeah, that seems like too much change to me too.\n\n> Something like the attached fixes the\n> problem for me, but honestly it does not help in clarifying this code\n> more. 
I was not patient enough to wait for CLOBBER_CACHE_ALWAYS to\n> initialize the nodes in the TAP tests, so I have tested that with a\n> setup initialized with a non-clobber build, and reproduced the problem\n> with CLOBBER_CACHE_ALWAYS builds on those same nodes.\n\nI have checked the fix works with a CLOBBER_CACHE_ALWAYS build.\n\n> You are right that this is not a problem of 14~. I can reproduce the\n> problem on 13 as well, and we have no coverage for tuple routing with\n> triggers on this branch, so this would never have been stressed in the\n> buildfarm. There is a good argument to be made here in cherry-picking\n> 2ecfeda3 to REL_13_STABLE.\n\n+1\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 12:32:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, May 18, 2021 at 07:42:08PM -0400, Tom Lane wrote:\n>> This might go away if worker.c weren't so creatively different\n>> from the other code paths concerned with executor shutdown.\n\n> The tuple routing has made the whole worker logic messier by a larger\n> degree compared to when this stuff was only able to apply DMLs changes\n> on the partition leaves. I know that it is not that great to be more\n> creative here, but we need to make sure that AfterTriggerEndQuery() is\n> moved before ExecCleanupTupleRouting(). We could either keep the\n> ExecCleanupTupleRouting() calls as they are now, and call\n> AfterTriggerEndQuery() in more code paths. Or we could have one\n> PartitionTupleRouting and one ModifyTableState created beforehand\n> in create_estate_for_relation() if applying the change on a\n> partitioned table but that means manipulating more structures across \n> more layers of this code. Something like the attached fixes the\n> problem for me, but honestly it does not help in clarifying this code\n> more.\n\nI was wondering if we could move the ExecCleanupTupleRouting call\ninto finish_estate. copyfrom.c, for example, does that during\nits shutdown function. Compare also the worker.c changes proposed\nin\n\nhttps://www.postgresql.org/message-id/3362608.1621367104%40sss.pgh.pa.us\n\nwhich are because I discovered it's unsafe to pop the snapshot\nbefore running AfterTriggerEndQuery.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 18 May 2021 23:46:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Tue, May 18, 2021 at 11:46:25PM -0400, Tom Lane wrote:\n> I was wondering if we could move the ExecCleanupTupleRouting call\n> into finish_estate. copyfrom.c, for example, does that during\n> its shutdown function. Compare also the worker.c changes proposed\n> in\n\nYeah, the first patch I wrote for this thread was pushing out\nPopActiveSnapshot() into the finish() routine, but I really found the\ncreation of the ModifyTableState stuff needed for a partitioned table\ndone in create_estate_for_relation() to make the code more confusing,\nas that's only a piece needed for the tuple routing path. Saying\nthat, minimizing calls to PopActiveSnapshot() and PushActiveSnapshot()\nis an improvement. That's why I settled into more calls to\nAfterTriggerEndQuery() in the 4 code paths of any apply (tuple routing\n+ 3 DMLs).\n\n> https://www.postgresql.org/message-id/3362608.1621367104%40sss.pgh.pa.us\n> \n> which are because I discovered it's unsafe to pop the snapshot\n> before running AfterTriggerEndQuery.\n\nDidn't remember this one. This reminds me of something similar I did\na couple of weeks ago for the worker code, similar to what you have\nhere. Moving the snapshot push to be earlier, as your other patch is\ndoing, was bringing a bit more sanity when it came to opening the\nindexes of the relation on which a change is applied as we need an\nactive snapshot for predicates and expressions (aka ExecOpenIndices\nand ExecCloseIndices).\n--\nMichael",
"msg_date": "Wed, 19 May 2021 13:23:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 9:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 18, 2021 at 11:46:25PM -0400, Tom Lane wrote:\n> > I was wondering if we could move the ExecCleanupTupleRouting call\n> > into finish_estate. copyfrom.c, for example, does that during\n> > its shutdown function. Compare also the worker.c changes proposed\n> > in\n>\n> Yeah, the first patch I wrote for this thread was pushing out\n> PopActiveSnapshot() into the finish() routine, but I really found the\n> creation of the ModifyTableState stuff needed for a partitioned table\n> done in create_estate_for_relation() to make the code more confusing,\n> as that's only a piece needed for the tuple routing path.\n>\n\nHow about moving AfterTriggerEndQuery() to apply_handle_*_internal\ncalls? That way, we might not even need to change Push/Pop calls.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 May 2021 10:26:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 10:26:28AM +0530, Amit Kapila wrote:\n> How about moving AfterTriggerEndQuery() to apply_handle_*_internal\n> calls? That way, we might not even need to change Push/Pop calls.\n\nIsn't that going to be a problem when a tuple is moved to a new\npartition in the tuple routing? This does a DELETE followed by an\nINSERT, but the operation is an UPDATE.\n--\nMichael",
"msg_date": "Wed, 19 May 2021 14:05:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 10:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 19, 2021 at 10:26:28AM +0530, Amit Kapila wrote:\n> > How about moving AfterTriggerEndQuery() to apply_handle_*_internal\n> > calls? That way, we might not even need to change Push/Pop calls.\n>\n> Isn't that going to be a problem when a tuple is moved to a new\n> partition in the tuple routing?\n>\n\nRight, it won't work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 May 2021 10:51:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 2:05 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, May 19, 2021 at 10:26:28AM +0530, Amit Kapila wrote:\n> > How about moving AfterTriggerEndQuery() to apply_handle_*_internal\n> > calls? That way, we might not even need to change Push/Pop calls.\n>\n> Isn't that going to be a problem when a tuple is moved to a new\n> partition in the tuple routing? This does a DELETE followed by an\n> INSERT, but the operation is an UPDATE.\n\nThat indeed doesn't work. Once AfterTriggerEndQuery() would get\ncalled for DELETE from apply_handle_delete_internal(), after triggers\nof the subsequent INSERT can't be processed, instead causing:\n\nERROR: AfterTriggerSaveEvent() called outside of query\n\nIOW, the patch you posted earlier seems like the way to go.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 15:53:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On 5/19/21 1:42 AM, Tom Lane wrote:\n> I discovered $SUBJECT after wondering why hyrax hadn't reported\n> in recently, and trying to run check-world under CCA to see if\n> anything got stuck. Indeed it did --- although this doesn't\n> explain the radio silence from hyrax, because that animal doesn't\n> run any TAP tests. (Neither does avocet, which I think is the\n> only other active CCA critter. So this could have been broken\n> for a very long time.)\n> \n\nThere are three CCA animals on the same box (avocet, husky and \ntrilobite) with different compilers, running in a round-robin manner. \nOne cycle took about 14 days, but about a month ago the machine got \nstuck, requiring a hard reboot about a week ago (no idea why it got \nstuck). It has more CPU power now (8 cores instead of 2), so it should \ntake less time to run one test cycle.\n\navocet already ran all the tests, husky is running HEAD at the moment \nand then it'll be trilobite's turn ... AFAICS none of those runs seems \nto have failed or got stuck so far.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 19 May 2021 14:54:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "\nOn 5/18/21 11:03 PM, Michael Paquier wrote:\n>\n>> 3. Once the subscriber1 postmaster has exited, the TAP\n>> test will eventually time out, and then this happens:\n>>\n>> [.. logs ..]\n>>\n>> That is, because we failed to shut down subscriber1, the\n>> test script neglects to shut down subscriber2, and now\n>> things just sit indefinitely. So that's a robustness\n>> problem in the TAP infrastructure, rather than a bug in\n>> PG proper; but I still say it's a bug that needs fixing.\n> This one comes down to teardown_node() that uses system_or_bail(),\n> leaving things unfinished. I guess that we could be more aggressive\n> and ignore failures if we have a non-zero error code and that not all\n> the tests have passed within the END block of PostgresNode.pm.\n\n\n\nYeah, this area needs substantial improvement. I have seen similar sorts\nof nasty hangs, where the script is waiting forever for some process\nthat hasn't got the shutdown message. At least we probably need some way\nof making sure the END handler doesn't abort early. Maybe\nPostgresNode::stop() needs a mode that handles failure more gracefully.\nMaybe it needs to try shutting down all the nodes and only calling\nBAIL_OUT after trying all of them and getting a failure. But that might\nstill leave us work to do on failures occurring pre-END.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 14:36:03 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> IOW, the patch you posted earlier seems like the way to go.\n\nI really dislike that patch. I think it's doubling down on the messy,\nunstructured coding patterns that got us into this situation to begin\nwith. I'd prefer to expend a little effort on refactoring so that\nthe ExecCleanupTupleRouting call can be moved to the cleanup function\nwhere it belongs.\n\nSo, I propose the attached, which invents a new struct to carry\nthe stuff we've discovered to be necessary. This makes the APIs\nnoticeably cleaner IMHO.\n\nI did not touch the APIs of the apply_XXX_internal functions,\nas it didn't really seem to offer any notational advantage.\nWe can't simply collapse them to take an \"edata\" as I did for\napply_handle_tuple_routing, because the ResultRelInfo they're\nsupposed to operate on could be different from the original one.\nI considered a couple of alternatives:\n\n* Replace their estate arguments with edata, but keep the separate\nResultRelInfo arguments. This might be worth doing in future, if we\nadd more fields to ApplyExecutionData. Right now it'd save nothing,\nand it'd create a risk of confusion about when to use the\nResultRelInfo argument vs. edata->resultRelInfo.\n\n* Allow apply_handle_tuple_routing to overwrite edata->resultRelInfo\nwith the partition child's RRI, then simplify the apply_XXX_internal\nfunctions to take just edata instead of separate estate and\nresultRelInfo args. I think this would work right now, but it seems\ngrotty, and it might cause problems in future.\n\n* Replace the edata->resultRelInfo field with two fields, one for\nthe original parent and one for the actual/current target. Perhaps\nthis is worth doing, not sure.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 19 May 2021 16:23:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 04:23:55PM -0400, Tom Lane wrote:\n> I really dislike that patch. I think it's doubling down on the messy,\n> unstructured coding patterns that got us into this situation to begin\n> with. I'd prefer to expend a little effort on refactoring so that\n> the ExecCleanupTupleRouting call can be moved to the cleanup function\n> where it belongs.\n\nOkay.\n\n> I did not touch the APIs of the apply_XXX_internal functions,\n> as it didn't really seem to offer any notational advantage.\n> We can't simply collapse them to take an \"edata\" as I did for\n> apply_handle_tuple_routing, because the ResultRelInfo they're\n> supposed to operate on could be different from the original one.\n> I considered a couple of alternatives:\n> \n> * Replace their estate arguments with edata, but keep the separate\n> ResultRelInfo arguments. This might be worth doing in future, if we\n> add more fields to ApplyExecutionData. Right now it'd save nothing,\n> and it'd create a risk of confusion about when to use the\n> ResultRelInfo argument vs. edata->resultRelInfo.\n\nNot sure about this one. It may be better to wait until this gets\nmore expanded, if it gets expanded.\n\n> * Allow apply_handle_tuple_routing to overwrite edata->resultRelInfo\n> with the partition child's RRI, then simplify the apply_XXX_internal\n> functions to take just edata instead of separate estate and\n> resultRelInfo args. I think this would work right now, but it seems\n> grotty, and it might cause problems in future.\n\nAgreed that it does not seem like a good idea to blindly overwrite\nresultRelInfo with the partition targeted for the apply.\n\n> * Replace the edata->resultRelInfo field with two fields, one for\n> the original parent and one for the actual/current target. 
Perhaps\n> this is worth doing, not sure.\n\nThis one sounds more natural to me, though.\n\n> Thoughts?\n\nMay I ask why you are not moving the snapshot pop and push into the\nfinish() and create() routines for this patch? Also, any thoughts\nabout adding the trigger tests from 013_partition.pl to REL_13_STABLE?\n--\nMichael",
"msg_date": "Thu, 20 May 2021 08:56:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Wed, May 19, 2021 at 02:36:03PM -0400, Andrew Dunstan wrote:\n> Yeah, this area needs substantial improvement. I have seen similar sorts\n> of nasty hangs, where the script is waiting forever for some process\n> that hasn't got the shutdown message. At least we probably need some way\n> of making sure the END handler doesn't abort early. Maybe\n> PostgresNode::stop() needs a mode that handles failure more gracefully.\n> Maybe it needs to try shutting down all the nodes and only calling\n> BAIL_OUT after trying all of them and getting a failure. But that might\n> still leave us work to do on failures occurring pre-END.\n\nFor that, we could just make the END block call run_log() directly\nas well, as this catches stderr and an error code. What about making\nthe shutdown a two-phase logic by the way? Trigger an immediate stop,\nand if it fails fall back to an extra kill9() to be on the safe side.\n\nHave you seen this being a problem even in cases where the tests all\npassed? If yes, it may be worth using the more aggressive flow even\nin the case where the tests pass.\n--\nMichael",
"msg_date": "Thu, 20 May 2021 09:02:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
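The two-phase teardown idea above — try a normal stop, fall back to a hard kill on failure, and only report problems after every node has been attempted — can be sketched as follows. This is a toy sketch in Python, not PostgresNode.pm's real API; `stop` and `kill9` are caller-supplied stand-ins for `pg_ctl stop` and PostgresNode::kill9.

```python
# Toy sketch of two-phase node teardown: one stuck postmaster must not
# prevent the remaining nodes from being shut down.
def teardown_all(nodes, stop, kill9):
    failed = []
    for node in nodes:
        if not stop(node):      # e.g. "pg_ctl -m immediate stop"
            kill9(node)         # last resort: SIGKILL the postmaster
            failed.append(node)
    return failed               # caller can bail out afterwards if non-empty

# Simulated run: "subscriber1" is already gone, so stop() fails for it,
# but "subscriber2" still gets shut down instead of hanging the test.
killed = []
failed = teardown_all(
    ["publisher", "subscriber1", "subscriber2"],
    stop=lambda n: n != "subscriber1",
    kill9=killed.append)
print(failed)   # ['subscriber1']
```

The design point is simply deferring the bail-out until after the loop, which is what keeps subscriber2 from being orphaned in the scenario from the first message of this thread.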
{
"msg_contents": "On Thu, May 20, 2021 at 5:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > IOW, the patch you posted earlier seems like the way to go.\n>\n> I really dislike that patch. I think it's doubling down on the messy,\n> unstructured coding patterns that got us into this situation to begin\n> with. I'd prefer to expend a little effort on refactoring so that\n> the ExecCleanupTupleRouting call can be moved to the cleanup function\n> where it belongs.\n>\n> So, I propose the attached, which invents a new struct to carry\n> the stuff we've discovered to be necessary. This makes the APIs\n> noticeably cleaner IMHO.\n\nLarger footprint, but definitely cleaner. Thanks.\n\n> I did not touch the APIs of the apply_XXX_internal functions,\n> as it didn't really seem to offer any notational advantage.\n> We can't simply collapse them to take an \"edata\" as I did for\n> apply_handle_tuple_routing, because the ResultRelInfo they're\n> supposed to operate on could be different from the original one.\n> I considered a couple of alternatives:\n>\n> * Replace their estate arguments with edata, but keep the separate\n> ResultRelInfo arguments. This might be worth doing in future, if we\n> add more fields to ApplyExecutionData. Right now it'd save nothing,\n> and it'd create a risk of confusion about when to use the\n> ResultRelInfo argument vs. edata->resultRelInfo.\n>\n> * Allow apply_handle_tuple_routing to overwrite edata->resultRelInfo\n> with the partition child's RRI, then simplify the apply_XXX_internal\n> functions to take just edata instead of separate estate and\n> resultRelInfo args. I think this would work right now, but it seems\n> grotty, and it might cause problems in future.\n>\n> * Replace the edata->resultRelInfo field with two fields, one for\n> the original parent and one for the actual/current target. 
Perhaps\n> this is worth doing, not sure.\n>\n> Thoughts?\n\nIMHO, it would be better to keep the lowest-level\napply_handle_XXX_internal() out of this, because presumably we're only\ncleaning up the mess in higher-level callers. Somewhat related, one\nof the intentions behind a04daa97a43, which removed\nes_result_relation_info in favor of passing the ResultRelInfo\nexplicitly to the executor's lower-level functions, was to avoid bugs\ncaused by failing to set/reset that global field correctly in\nhigher-level callers. Now worker.c is pretty small compared with the\nexecutor, but still it seems like a good idea to follow the same\nprinciple here.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 09:32:25 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, May 19, 2021 at 04:23:55PM -0400, Tom Lane wrote:\n>> * Replace the edata->resultRelInfo field with two fields, one for\n>> the original parent and one for the actual/current target. Perhaps\n>> this is worth doing, not sure.\n\n> This one sounds more natural to me, though.\n\nOK, I'll give that a try tomorrow.\n\n> May I ask why you are not moving the snapshot pop and push into the\n> finish() and create() routines for this patch?\n\nThat does need to happen, but I figured I'd leave it to the other\npatch, since there are other things to change too for that snapshot\nproblem. Obviously, whichever patch goes in second will need trivial\nadjustments, but I think it's logically clearer that way.\n\n> Also, any thoughts\n> about adding the trigger tests from 013_partition.pl to REL_13_STABLE?\n\nYeah, if this is a pre-existing problem then we should back-port the\ntests that revealed it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 May 2021 20:49:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Wed, May 19, 2021 at 04:23:55PM -0400, Tom Lane wrote:\n>>> * Replace the edata->resultRelInfo field with two fields, one for\n>>> the original parent and one for the actual/current target. Perhaps\n>>> this is worth doing, not sure.\n\n>> This one sounds more natural to me, though.\n\n> OK, I'll give that a try tomorrow.\n\nHere's a version that does it like that. I'm not entirely convinced\nwhether this is better or not.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 20 May 2021 14:57:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Thu, May 20, 2021 at 02:57:40PM -0400, Tom Lane wrote:\n> Here's a version that does it like that. I'm not entirely convinced\n> whether this is better or not.\n\nHmm. I think that this is better. This makes the code easier to\nfollow, and the extra information is useful for debugging.\n\nThe change looks good to me.\n--\nMichael",
"msg_date": "Fri, 21 May 2021 09:59:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Fri, May 21, 2021 at 6:29 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 20, 2021 at 02:57:40PM -0400, Tom Lane wrote:\n> > Here's a version that does it like that. I'm not entirely convinced\n> > whether this is better or not.\n>\n> Hmm. I think that this is better. This makes the code easier to\n> follow, and the extra information is useful for debugging.\n>\n> The change looks good to me.\n>\n\nYeah, the change looks good to me as well but I think we should\nconsider Amit L's point that maintaining this extra activeRelInfo\nmight be prone to bugs if the partitioning logic needs to be extended\nat other places in the worker.c. As the code stands today, it doesn't\nseem problematic so we can go with the second patch if both Tom and\nyou feel that is a better option.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 May 2021 10:45:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> IMHO, it would be better to keep the lowest-level\n> apply_handle_XXX_internal() out of this, because presumably we're only\n> cleaning up the mess in higher-level callers. Somewhat related, one\n> of the intentions behind a04daa97a43, which removed\n> es_result_relation_info in favor of passing the ResultRelInfo\n> explicitly to the executor's lower-level functions, was to avoid bugs\n> caused by failing to set/reset that global field correctly in\n> higher-level callers.\n\nYeah, that's a fair point, and after some reflection I think that\nrepeatedly changing the \"active\" field of the struct is exactly\nwhat was bothering me about the v2 patch. So in the attached v3,\nI went back to passing that as an explicit argument. The state\nstruct now has no fields that need to change after first being set.\n\nI did notice that we could remove some other random arguments\nby adding the LogicalRepRelMapEntry* to the state struct,\nso this also does that.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 May 2021 17:01:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "I wrote:\n> I count three distinct bugs that were exposed by this attempt:\n> ...\n> 2. Said bug causes a segfault in the apply worker process.\n> This causes the parent postmaster to give up and die.\n> I don't understand why we don't treat that like a crash\n> in a regular backend, considering that an apply worker\n> is running largely user-defined code.\n\nFigured that one out: we *do* treat it like a crash in a regular\nbackend, which explains the lack of field complaints. What's\ncontributing to the TAP test getting stuck is that PostgresNode.pm\ndoes this:\n\n\topen my $conf, '>>', \"$pgdata/postgresql.conf\";\n\tprint $conf \"\\n# Added by PostgresNode.pm\\n\";\n\t...\n\tprint $conf \"restart_after_crash = off\\n\";\n\nSo that'd be fine, if only the TAP tests were a bit more robust\nabout postmasters disappearing unexpectedly.\n\nBTW, I wonder whether it wouldn't be a good idea for the\npostmaster to log something along the lines of \"stopping\nbecause restart_after_crash is off\". The present behavior\ncan be quite mysterious otherwise (it certainly confused me).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 May 2021 18:14:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "I wrote:\n> BTW, I wonder whether it wouldn't be a good idea for the\n> postmaster to log something along the lines of \"stopping\n> because restart_after_crash is off\". The present behavior\n> can be quite mysterious otherwise (it certainly confused me).\n\nConcretely, I suggest the attached.\n\nWhile checking the other ExitPostmaster calls to see if any of\nthem lacked suitable log messages, I noticed that there's one\nafter a call to AuxiliaryProcessMain, which is marked\npg_attribute_noreturn(). So that's dead code, and if it\nweren't dead it'd be wrong, because we shouldn't use\nExitPostmaster to exit a child process.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 21 May 2021 19:54:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Sat, May 22, 2021 at 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > IMHO, it would be better to keep the lowest-level\n> > apply_handle_XXX_internal() out of this, because presumably we're only\n> > cleaning up the mess in higher-level callers. Somewhat related, one\n> > of the intentions behind a04daa97a43, which removed\n> > es_result_relation_info in favor of passing the ResultRelInfo\n> > explicitly to the executor's lower-level functions, was to avoid bugs\n> > caused by failing to set/reset that global field correctly in\n> > higher-level callers.\n>\n> Yeah, that's a fair point, and after some reflection I think that\n> repeatedly changing the \"active\" field of the struct is exactly\n> what was bothering me about the v2 patch. So in the attached v3,\n> I went back to passing that as an explicit argument. The state\n> struct now has no fields that need to change after first being set.\n\nThanks, that looks good to me.\n\n> I did notice that we could remove some other random arguments\n> by adding the LogicalRepRelMapEntry* to the state struct,\n> so this also does that.\n\nThat seems fine.\n\nBTW, I think we'd need to cherry-pick f3b141c4825 (or maybe parts of\nit) into v13 branch for back-patching this.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 22 May 2021 12:06:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> BTW, I think we'd need to cherry-pick f3b141c4825 (or maybe parts of\n> it) into v13 branch for back-patching this.\n\nI already did a fair amount of that yesterday, cf 84f5c2908 et al.\nBut that does raise the question of how far we need to back-patch this.\nI gather that the whole issue might've started with 1375422c, so maybe\nwe don't really need a back-patch at all? But I'm sort of inclined to\nback-patch to v11 as I did with 84f5c2908, mainly to keep the worker.c\ncode looking more alike in those branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 May 2021 09:18:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "I wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> BTW, I think we'd need to cherry-pick f3b141c4825 (or maybe parts of\n>> it) into v13 branch for back-patching this.\n\n> I already did a fair amount of that yesterday, cf 84f5c2908 et al.\n> But that does raise the question of how far we need to back-patch this.\n> I gather that the whole issue might've started with 1375422c, so maybe\n> we don't really need a back-patch at all?\n\n... wrong. Running v13 branch tip under CLOBBER_CACHE_ALWAYS provokes\na core dump in 013_partition.pl, so 1375422c is not to blame. Now\nI'm wondering how far back there's a live issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 May 2021 11:32:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "I wrote:\n> ... wrong. Running v13 branch tip under CLOBBER_CACHE_ALWAYS provokes\n> a core dump in 013_partition.pl, so 1375422c is not to blame. Now\n> I'm wondering how far back there's a live issue.\n\nOh, of course, it's directly the fault of the patch that added support\nfor partitioned target tables.\n\nI concluded that a verbatim backpatch wasn't too suitable because\na04daa97a had changed a lot of the APIs here. So I left the APIs\nfor the xxx_internal() functions alone. Otherwise the patch\npretty much works as-is in v13.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 May 2021 21:28:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Sun, May 23, 2021 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > ... wrong. Running v13 branch tip under CLOBBER_CACHE_ALWAYS provokes\n> > a core dump in 013_partition.pl, so 1375422c is not to blame. Now\n> > I'm wondering how far back there's a live issue.\n>\n> Oh, of course, it's directly the fault of the patch that added support\n> for partitioned target tables.\n\nYeah, the problem seems to affect only partition child tables, so yeah\nthis problem started with f1ac27bfda6.\n\n> I concluded that a verbatim backpatch wasn't too suitable because\n> a04daa97a had changed a lot of the APIs here. So I left the APIs\n> for the xxx_internal() functions alone. Otherwise the patch\n> pretty much works as-is in v13.\n\nThat looks reasonable.\n\nThanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 May 2021 14:05:59 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
},
{
"msg_contents": "On Sun, May 23, 2021 at 02:05:59PM +0900, Amit Langote wrote:\n> On Sun, May 23, 2021 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wrote:\n>> > ... wrong. Running v13 branch tip under CLOBBER_CACHE_ALWAYS provokes\n>> > a core dump in 013_partition.pl, so 1375422c is not to blame. Now\n>> > I'm wondering how far back there's a live issue.\n>>\n>> Oh, of course, it's directly the fault of the patch that added support\n>> for partitioned target tables.\n> \n> Yeah, the problem seems to affect only partition child tables, so yeah\n> this problem started with f1ac27bfda6.\n\nYep.\n\n>> I concluded that a verbatim backpatch wasn't too suitable because\n>> a04daa97a had changed a lot of the APIs here. So I left the APIs\n>> for the xxx_internal() functions alone. Otherwise the patch\n>> pretty much works as-is in v13.\n\nThanks for the backpatch of the partition tests via d18ee6f.\n--\nMichael",
"msg_date": "Sun, 23 May 2021 17:38:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Subscription tests fail under CLOBBER_CACHE_ALWAYS"
}
]
[
{
"msg_contents": "Hi hackers,\n\nIn response to PMEM-related discussions in the previous thread [1],\nespecially Tomas' performance report [2], I have worked for the way\nthat maps WAL segment files on PMEM as WAL buffers. I start this new\nthread to go that way since the previous one has focused on another\npatchset that I called \"Non-volatile WAL buffer.\"\n\nThe patchset using WAL segment files is attached to this mail. Note\nthat it is tested on 8e4b332 (Mar 22, 2021) and cannot be applied to\nthe latest master. Also note that it has a known issue related to\ncheckpoint request (see Section 1.4 in the attached PDF for details).\nI'm rebasing and fixing it, so please be patient for an update.\n\nThis mail also has a performance report PDF comparing PMEM patchsets\nincluding ones that I have posted to pgsql-hackers ever, and such\nzipped and rebased patchsets for reproducibility. The report covers\nhow to build and configure PostgreSQL with the patchsets, so please\nsee it before you use them.\n\nRegards,\nTakashi\n\n[1] https://www.postgresql.org/message-id/flat/002f01d5d28d%2423c01430%246b403c90%24%40hco.ntt.co.jp_1\n[2] https://www.postgresql.org/message-id/9beaac79-2375-8bfc-489b-eb62bd8d4020@enterprisedb.com\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 19 May 2021 10:25:45 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi hackers,\n\nThe v2 patchset, an updated performance report, and a zipped\nsupplemental patchset for comparison in the report. I confirmed that\nthe v2 can be applied to the latest master d24c565 (Jun 17 2021)\nwithout conflict, but the supplemental patchset cannot. If you want to\nreproduce the comparison, please apply to eb43bdb (May 25, 2021) as I\ndid so in the report.\n\nThe v2 includes WAL statistics support and WAL pre-allocation feature\nin cases of PMEM, and some fixes for the first version. The size of\nWAL buffers managed by xlblocks becomes min_wal_size, and the buffers\nand underlying segment files are initialized at startup then\nperiodically by walwriter. This looks to improve performance, as I\nwrote in the report.\n\nBy the way, I found that Nathan-san posted a PoC patch for WAL\npre-allocation in another thread [1]. I will pay attention to it and\ndiscussions related to WAL pre-allocation in pgsql-hackers.\n\nBest regards,\nTakashi\n\n[1] https://www.postgresql.org/message-id/flat/20201225200953.jjkrytlrzojbndh5%40alap3.anarazel.de\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 18 Jun 2021 15:44:54 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Rebased.\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 30 Jun 2021 13:52:16 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "On Wed, 30 Jun 2021 at 06:53, Takashi Menjo <takashi.menjo@gmail.com> wrote:\n>\n> Rebased.\n\nThanks for these patches!\n\nI recently took a dive into the WAL subsystem, and got to this\npatchset through the previous ones linked from [0]. This patchset\nseems straightforward to understand, so thanks!\n\nI've looked over the patches and added some comments below. I haven't\nyet tested this though; finding out how to get PMEM on WSL without\nactual PMEM is probably going to be difficult.\n\n> [ v3-0002-Add-wal_pmem_map-to-GUC.patch ]\n> +extern bool wal_pmem_map;\n\nA lot of the new code in these patches is gated behind this one flag,\nbut the flag should never be true on !pmem systems. Could you instead\nreplace it with something like the following?\n\n+#ifdef USE_LIBPMEM\n+extern bool wal_pmem_map;\n+#else\n+#define wal_pmem_map false\n+#endif\n\nA good compiler would then eliminate all the dead code from being\ngenerated on non-pmem builds (instead of the compiler needing to keep\nthat code around just in case some extension decides to set\nwal_pmem_map to true on !pmem systems because it has access to that\nvariable).\n\n> [ v3-0004-Map-WAL-segment-files-on-PMEM-as-WAL-buffers.patch ]\n> + if ((uintptr_t) addr & ~PG_DAX_HUGEPAGE_MASK)\n> + elog(WARNING,\n> + \"file not mapped on DAX hugepage boundary: path \\\"%s\\\" addr %p\",\n> + path, addr);\n\nI'm not sure that we should want to log this every time we detect the\nissue; It's likely that once it happens it will happen for the next\nfile as well. Maybe add a timeout, or do we generally not deduplicate\nsuch messages?\n\n\nKind regards,\n\nMatthias\n\n[0] https://wiki.postgresql.org/wiki/Persistent_Memory_for_WAL#Basic_performance\n\n\n",
"msg_date": "Fri, 8 Oct 2021 00:46:17 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hello Matthias,\n\nThank you for your comment!\n\n> > [ v3-0002-Add-wal_pmem_map-to-GUC.patch ]\n> > +extern bool wal_pmem_map;\n>\n> A lot of the new code in these patches is gated behind this one flag,\n> but the flag should never be true on !pmem systems. Could you instead\n> replace it with something like the following?\n>\n> +#ifdef USE_LIBPMEM\n> +extern bool wal_pmem_map;\n> +#else\n> +#define wal_pmem_map false\n> +#endif\n>\n> A good compiler would then eliminate all the dead code from being\n> generated on non-pmem builds (instead of the compiler needing to keep\n> that code around just in case some extension decides to set\n> wal_pmem_map to true on !pmem systems because it has access to that\n> variable).\n\nThat sounds good. I will introduce it in the next update.\n\n> > [ v3-0004-Map-WAL-segment-files-on-PMEM-as-WAL-buffers.patch ]\n> > + if ((uintptr_t) addr & ~PG_DAX_HUGEPAGE_MASK)\n> > + elog(WARNING,\n> > + \"file not mapped on DAX hugepage boundary: path \\\"%s\\\" addr %p\",\n> > + path, addr);\n>\n> I'm not sure that we should want to log this every time we detect the\n> issue; It's likely that once it happens it will happen for the next\n> file as well. Maybe add a timeout, or do we generally not deduplicate\n> such messages?\n\nLet me give it some thought. I have believed this WARNING is most\nunlikely to happen, and is mutually independent from other happenings.\nI will try to find a case where the WARNING happens repeatedly; or I\nwill de-duplicate the messages if it is easier.\n\nBest regards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Fri, 8 Oct 2021 17:07:45 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi,\n\nRebased, and added the patches below into the patchset.\n\n- (0006) Let wal_pmem_map be constant unless --with-libpmem\nwal_pmem_map never changes from false in that case, so let it be\nconstant. Thanks, Matthias!\n\n- (0007) Ensure WAL mappings before assertion\nThis fixes SIGSEGV abortion in GetXLogBuffer when --enable-cassert.\n\n- (0008) Update document\nThis adds a new entry for wal_pmem_map in the section Write Ahead Log\n-> Settings.\n\nBest regards,\nTakashi\n\nOn Fri, Oct 8, 2021 at 5:07 PM Takashi Menjo <takashi.menjo@gmail.com> wrote:\n>\n> Hello Matthias,\n>\n> Thank you for your comment!\n>\n> > > [ v3-0002-Add-wal_pmem_map-to-GUC.patch ]\n> > > +extern bool wal_pmem_map;\n> >\n> > A lot of the new code in these patches is gated behind this one flag,\n> > but the flag should never be true on !pmem systems. Could you instead\n> > replace it with something like the following?\n> >\n> > +#ifdef USE_LIBPMEM\n> > +extern bool wal_pmem_map;\n> > +#else\n> > +#define wal_pmem_map false\n> > +#endif\n> >\n> > A good compiler would then eliminate all the dead code from being\n> > generated on non-pmem builds (instead of the compiler needing to keep\n> > that code around just in case some extension decides to set\n> > wal_pmem_map to true on !pmem systems because it has access to that\n> > variable).\n>\n> That sounds good. I will introduce it in the next update.\n>\n> > > [ v3-0004-Map-WAL-segment-files-on-PMEM-as-WAL-buffers.patch ]\n> > > + if ((uintptr_t) addr & ~PG_DAX_HUGEPAGE_MASK)\n> > > + elog(WARNING,\n> > > + \"file not mapped on DAX hugepage boundary: path \\\"%s\\\" addr %p\",\n> > > + path, addr);\n> >\n> > I'm not sure that we should want to log this every time we detect the\n> > issue; It's likely that once it happens it will happen for the next\n> > file as well. Maybe add a timeout, or do we generally not deduplicate\n> > such messages?\n>\n> Let me give it some thought. 
I have believed this WARNING is most\n> unlikely to happen, and is mutually independent from other happenings.\n> I will try to find a case where the WARNING happens repeatedly; or I\n> will de-duplicate the messages if it is easier.\n>\n> Best regards,\n> Takashi\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 28 Oct 2021 15:09:29 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "> On 28 Oct 2021, at 08:09, Takashi Menjo <takashi.menjo@gmail.com> wrote:\n\n> Rebased, and added the patches below into the patchset.\n\nLooks like the 0001 patch needs to be updated to support Windows and MSVC. See\nsrc/tools/msvc/Mkvcbuild.pm and Solution.pm et.al for inspiration on how to add\nthe MSVC equivalent of --with-libpmem. Currently the patch fails in the\n\"Generating configuration headers\" step in Solution.pm.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 3 Nov 2021 14:04:36 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hello Daniel,\n\nThank you for your comment. I had the following error message with\nMSVC on Windows. It looks the same as what you told me. I'll fix it.\n\n| > cd src\\tools\\msvc\n| > build\n| (..snipped..)\n| Copying pg_config_os.h...\n| Generating configuration headers...\n| undefined symbol: HAVE_LIBPMEM at src/include/pg_config.h line 347\nat C:/Users/menjo/Documents/git/postgres/src/tools/msvc/Mkvcbuild.pm\nline 860.\n\nBest regards,\nTakashi\n\n\nOn Wed, Nov 3, 2021 at 10:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 28 Oct 2021, at 08:09, Takashi Menjo <takashi.menjo@gmail.com> wrote:\n>\n> > Rebased, and added the patches below into the patchset.\n>\n> Looks like the 0001 patch needs to be updated to support Windows and MSVC. See\n> src/tools/msvc/Mkvcbuild.pm and Solution.pm et.al for inspiration on how to add\n> the MSVC equivalent of --with-libpmem. Currently the patch fails in the\n> \"Generating configuration headers\" step in Solution.pm.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Thu, 4 Nov 2021 17:46:18 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Daniel,\n\nThe issue you told has been fixed. I attach the v5 patchset to this email.\n\nThe v5 has all the patches in the v4, and in addition, has the\nfollowing two new patches:\n\n- (v5-0002) Support build with MSVC on Windows: Please have\nsrc\\tools\\msvc\\config.pl as follows to \"configure --with-libpmem:\"\n\n$config->{pmem} = 'C:\\path\\to\\pmdk\\x64-windows';\n\n- (v5-0006) Compatible to Windows: This patch resolves conflicting\nmode_t typedefs and libpmem API variants (U or W, like Windows API).\n\nBest regards,\nTakashi\n\nOn Thu, Nov 4, 2021 at 5:46 PM Takashi Menjo <takashi.menjo@gmail.com> wrote:\n>\n> Hello Daniel,\n>\n> Thank you for your comment. I had the following error message with\n> MSVC on Windows. It looks the same as what you told me. I'll fix it.\n>\n> | > cd src\\tools\\msvc\n> | > build\n> | (..snipped..)\n> | Copying pg_config_os.h...\n> | Generating configuration headers...\n> | undefined symbol: HAVE_LIBPMEM at src/include/pg_config.h line 347\n> at C:/Users/menjo/Documents/git/postgres/src/tools/msvc/Mkvcbuild.pm\n> line 860.\n>\n> Best regards,\n> Takashi\n>\n>\n> On Wed, Nov 3, 2021 at 10:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >\n> > > On 28 Oct 2021, at 08:09, Takashi Menjo <takashi.menjo@gmail.com> wrote:\n> >\n> > > Rebased, and added the patches below into the patchset.\n> >\n> > Looks like the 0001 patch needs to be updated to support Windows and MSVC. See\n> > src/tools/msvc/Mkvcbuild.pm and Solution.pm et.al for inspiration on how to add\n> > the MSVC equivalent of --with-libpmem. Currently the patch fails in the\n> > \"Generating configuration headers\" step in Solution.pm.\n> >\n> > --\n> > Daniel Gustafsson https://vmware.com/\n> >\n>\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 5 Nov 2021 15:47:33 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Rebased.\n\nOn Fri, Nov 5, 2021 at 3:47 PM Takashi Menjo <takashi.menjo@gmail.com> wrote:\n>\n> Hi Daniel,\n>\n> The issue you told has been fixed. I attach the v5 patchset to this email.\n>\n> The v5 has all the patches in the v4, and in addition, has the\n> following two new patches:\n>\n> - (v5-0002) Support build with MSVC on Windows: Please have\n> src\\tools\\msvc\\config.pl as follows to \"configure --with-libpmem:\"\n>\n> $config->{pmem} = 'C:\\path\\to\\pmdk\\x64-windows';\n>\n> - (v5-0006) Compatible to Windows: This patch resolves conflicting\n> mode_t typedefs and libpmem API variants (U or W, like Windows API).\n>\n> Best regards,\n> Takashi\n>\n> On Thu, Nov 4, 2021 at 5:46 PM Takashi Menjo <takashi.menjo@gmail.com> wrote:\n> >\n> > Hello Daniel,\n> >\n> > Thank you for your comment. I had the following error message with\n> > MSVC on Windows. It looks the same as what you told me. I'll fix it.\n> >\n> > | > cd src\\tools\\msvc\n> > | > build\n> > | (..snipped..)\n> > | Copying pg_config_os.h...\n> > | Generating configuration headers...\n> > | undefined symbol: HAVE_LIBPMEM at src/include/pg_config.h line 347\n> > at C:/Users/menjo/Documents/git/postgres/src/tools/msvc/Mkvcbuild.pm\n> > line 860.\n> >\n> > Best regards,\n> > Takashi\n> >\n> >\n> > On Wed, Nov 3, 2021 at 10:04 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > > > On 28 Oct 2021, at 08:09, Takashi Menjo <takashi.menjo@gmail.com> wrote:\n> > >\n> > > > Rebased, and added the patches below into the patchset.\n> > >\n> > > Looks like the 0001 patch needs to be updated to support Windows and MSVC. See\n> > > src/tools/msvc/Mkvcbuild.pm and Solution.pm et.al for inspiration on how to add\n> > > the MSVC equivalent of --with-libpmem. 
Currently the patch fails in the\n> > > \"Generating configuration headers\" step in Solution.pm.\n> > >\n> > > --\n> > > Daniel Gustafsson https://vmware.com/\n> > >\n> >\n> >\n> > --\n> > Takashi Menjo <takashi.menjo@gmail.com>\n>\n>\n>\n> --\n> Takashi Menjo <takashi.menjo@gmail.com>\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 6 Jan 2022 10:32:27 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "The cfbot showed issues compiling on linux and windows.\nhttp://cfbot.cputube.org/takashi-menjo.html\n\nhttps://cirrus-ci.com/task/6125740327436288\n[02:30:06.538] In file included from xlog.c:38:\n[02:30:06.538] ../../../../src/include/access/xlogpmem.h:32:42: error: unknown type name ‘tli’\n[02:30:06.538] 32 | PmemXLogEnsurePrevMapped(XLogRecPtr ptr, tli)\n[02:30:06.538] | ^~~\n[02:30:06.538] xlog.c: In function ‘GetXLogBuffer’:\n[02:30:06.538] xlog.c:1959:19: warning: implicit declaration of function ‘PmemXLogEnsurePrevMapped’ [-Wimplicit-function-declaration]\n[02:30:06.538] 1959 | openLogSegNo = PmemXLogEnsurePrevMapped(endptr, tli);\n\nhttps://cirrus-ci.com/task/6688690280857600?logs=build#L379\n[02:33:25.752] c:\\cirrus\\src\\include\\access\\xlogpmem.h(33,1): error C2081: 'tli': name in formal parameter list illegal (compiling source file src/backend/access/transam/xlog.c) [c:\\cirrus\\postgres.vcxproj]\n\nI'm attaching a probable fix. Unfortunately, for patches like this, most of\nthe functionality isn't exercised unless the library is installed and\ncompilation and runtime are enabled by default.\n\nIn 0009: recaluculated => recalculated\n\n0010-Update-document should be squished with 0003-Add-wal_pmem_map-to-GUC (and\nmaybe 0002 and 0001). I believe the patches after 0005 are more WIP, so it's\nfine if they're not squished yet. I'm not sure what the point is of this one:\n0008-Let-wal_pmem_map-be-constant-unl\n\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not pmem_map_file \\\"%s\\\": %m\", path)));\n\n=> The outer parenthesis are not needed since e3a87b4.",
"msg_date": "Wed, 5 Jan 2022 22:00:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "On Thu, Jan 6, 2022 at 5:00 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I'm attaching a probable fix. Unfortunately, for patches like this, most of\n> the functionality isn't exercised unless the library is installed and\n> compilation and runtime are enabled by default.\n\nBy the way, you could add a separate patch marked not-for-commit that\nadds, say, an apt-get command to the Linux task in the .cirrus.yml\nfile, and whatever --with-blah stuff you might need to the configure\npart, if that'd be useful to test this code. Eventually, if we wanted\nto support that permanently for all CI testing, we'd want to push\npackage installation down to the image building scripts (not in the pg\nsource tree) so that CI starts with everything we need pre-installed,\nbut one of the goals of the recent CI work was to make it possible for\npatches that include dependency changes to be possible (for example\nthe alternative SSL library threads).\n\n\n",
"msg_date": "Thu, 6 Jan 2022 17:52:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 05:52:01PM +1300, Thomas Munro wrote:\n> On Thu, Jan 6, 2022 at 5:00 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > I'm attaching a probable fix. Unfortunately, for patches like this, most of\n> > the functionality isn't exercised unless the library is installed and\n> > compilation and runtime are enabled by default.\n> \n> By the way, you could add a separate patch marked not-for-commit that\n> adds, say, an apt-get command to the Linux task in the .cirrus.yml\n> file, and whatever --with-blah stuff you might need to the configure\n> part, if that'd be useful to test this code.\n\nIn general, I think that's more or less essential.\n\nBut in this case it really doesn't work :(\n\nrunning bootstrap script ... 2022-01-05 23:17:30.244 CST [12088] FATAL: file not on PMEM: path \"pg_wal/000000010000000000000001\"\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 5 Jan 2022 23:19:08 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Justin,\n\nThank you for your build test and comments. The v7 patchset attached\nto this email fixes the issues you reported.\n\n\n> The cfbot showed issues compiling on linux and windows.\n> http://cfbot.cputube.org/takashi-menjo.html\n>\n> https://cirrus-ci.com/task/6125740327436288\n> [02:30:06.538] In file included from xlog.c:38:\n> [02:30:06.538] ../../../../src/include/access/xlogpmem.h:32:42: error: unknown type name ‘tli’\n> [02:30:06.538] 32 | PmemXLogEnsurePrevMapped(XLogRecPtr ptr, tli)\n> [02:30:06.538] | ^~~\n> [02:30:06.538] xlog.c: In function ‘GetXLogBuffer’:\n> [02:30:06.538] xlog.c:1959:19: warning: implicit declaration of function ‘PmemXLogEnsurePrevMapped’ [-Wimplicit-function-declaration]\n> [02:30:06.538] 1959 | openLogSegNo = PmemXLogEnsurePrevMapped(endptr, tli);\n>\n> https://cirrus-ci.com/task/6688690280857600?logs=build#L379\n> [02:33:25.752] c:\\cirrus\\src\\include\\access\\xlogpmem.h(33,1): error C2081: 'tli': name in formal parameter list illegal (compiling source file src/backend/access/transam/xlog.c) [c:\\cirrus\\postgres.vcxproj]\n>\n> I'm attaching a probable fix. Unfortunately, for patches like this, most of\n> the functionality isn't exercised unless the library is installed and\n> compilation and runtime are enabled by default.\n\nI got the same error when without --with-libpmem. Your fix looks\nreasonable. My v7-0008 fixes this error.\n\n\n> In 0009: recaluculated => recalculated\n\nv7-0011 fixes this typo.\n\n\n> 0010-Update-document should be squished with 0003-Add-wal_pmem_map-to-GUC (and\n> maybe 0002 and 0001). I believe the patches after 0005 are more WIP, so it's\n> fine if they're not squished yet.\n\nAs you say, the patch updating document should melt into a related\nfix, probably \"Add wal_pmem_map to GUC\". 
For now I want it to be a\nseparate patch (v7-0014).\n\n\n> I'm not sure what the point is of this one: 0008-Let-wal_pmem_map-be-constant-unl\n\nIf USE_LIBPMEM is not defined (that is, no --with-libpmem),\nwal_pmem_map is always false and is never used essentially. Using\n#if(n)def everywhere is not good for code readability, so I let\nwal_pmem_map be constant. This may help compilers optimize conditional\nbranches.\n\nv7-0005 adds the comment above.\n\n\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not pmem_map_file \\\"%s\\\": %m\", path)));\n>\n> => The outer parenthesis are not needed since e3a87b4.\n\nv7-0009 fixes this.\n\n\n> But in this case it really doesn't work :(\n>\n> running bootstrap script ... 2022-01-05 23:17:30.244 CST [12088] FATAL: file not on PMEM: path \"pg_wal/000000010000000000000001\"\n\nDo you have a real PMEM device such as NVDIMM-N or Intel Optane PMem?\nIf so, please use a PMEM mounted with Filesystem DAX option for\npg_wal, or the FATAL error will occur.\n\nIf you don't, you have two alternatives below. Note that neither of\nthem ensures durability. Each of them is just for testing.\n\n1. Emulate PMEM with memmap=nn[KMG]!ss[KMG]. This can be used only on\nLinux. Please see [1][2] for details; or\n2. Set the environment variable PMEM_IS_PMEM_FORCE=1 to tell libpmem\nto treat any devices as if they were PMEM.\n\n\nRegards,\nTakashi\n\n\n[1] https://www.intel.com/content/www/us/en/developer/articles/training/how-to-emulate-persistent-memory-on-an-intel-architecture-server.html\n[2] https://nvdimm.wiki.kernel.org/\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Fri, 7 Jan 2022 12:50:01 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "On Fri, Jan 07, 2022 at 12:50:01PM +0900, Takashi Menjo wrote:\n> > But in this case it really doesn't work :(\n> >\n> > running bootstrap script ... 2022-01-05 23:17:30.244 CST [12088] FATAL: file not on PMEM: path \"pg_wal/000000010000000000000001\"\n> \n> Do you have a real PMEM device such as NVDIMM-N or Intel Optane PMem?\n\nNo - the point is that we'd like to have a way to exercise this patch on the\ncfbot. Particularly the new code introduced by this patch, not just the\n--without-pmem case...\n\nI was able to make this pass \"make check\" by adding this to main() in\nsrc/backend/main/main.c:\n| setenv(\"PMEM_IS_PMEM_FORCE\", \"1\", 0);\n\nI think you should add a patch which does what Thomas suggested: 1) add to\n./.cirrus.yaml installation of the libpmem package for debian/bsd/mac/windows;\n2) add setenv to main(), as above; 3) change configure.ac and guc.c to default\nto --with-libpmem and wal_pmem_map=on. This should be the last patch, for\ncfbot only, not meant to be merged.\n\nYou can test that the package installation part works before mailing patches to\nthe list with the instructions here:\n\nsrc/tools/ci/README:\nEnabling cirrus-ci in a github repository..\n\n> If you don't, you have two alternatives below. Note that neither of\n> them ensures durability. Each of them is just for testing.\n> 2. Set the environment variable PMEM_IS_PMEM_FORCE=1 to tell libpmem\n> to treat any devices as if they were PMEM.\n\nThe next revision should surely squish all the fixes into their corresponding\npatches to be fixed. Each of the patches ought to be compile and pass tests\nwithout depending on the \"following\" patches: 0001 without 0002-, 0001-0002\nwithout 0003-, etc.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 Jan 2022 22:43:37 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "On Thu, Jan 06, 2022 at 10:43:37PM -0600, Justin Pryzby wrote:\n> On Fri, Jan 07, 2022 at 12:50:01PM +0900, Takashi Menjo wrote:\n> > > But in this case it really doesn't work :(\n> > >\n> > > running bootstrap script ... 2022-01-05 23:17:30.244 CST [12088] FATAL: file not on PMEM: path \"pg_wal/000000010000000000000001\"\n> > \n> > Do you have a real PMEM device such as NVDIMM-N or Intel Optane PMem?\n> \n> No - the point is that we'd like to have a way to exercise this patch on the\n> cfbot. Particularly the new code introduced by this patch, not just the\n> --without-pmem case...\n..\n> I think you should add a patch which does what Thomas suggested: 1) add to\n> ./.cirrus.yaml installation of the libpmem package for debian/bsd/mac/windows;\n> 2) add setenv to main(), as above; 3) change configure.ac and guc.c to default\n> to --with-libpmem and wal_pmem_map=on. This should be the last patch, for\n> cfbot only, not meant to be merged.\n\nI was able to get the cirrus CI to compile on linux and bsd with the below\nchanges. I don't know if there's an easy package installation for mac OSX. I\nthink it's okay if mac CI doesn't use --enable-pmem for now.\n\n> You can test that the package installation part works before mailing patches to\n> the list with the instructions here:\n> \n> src/tools/ci/README:\n> Enabling cirrus-ci in a github repository..\n\nI ran the CI under my own github account.\nLinux crashes in the recovery check.\nAnd freebsd has been stuck for 45min.\n\nI'm not sure, but maybe those are legimate consequence of using\nPMEM_IS_PMEM_FORCE (?) If so, maybe the recovery check would need to be\ndisabled for this patch to run on CI... Or maybe my suggestion to enable it by\ndefault for CI doesn't work for this patch. 
It would need to be specially\ntested with real hardware.\n\nhttps://cirrus-ci.com/task/6245151591890944\n\nhttps://cirrus-ci.com/task/6162551485497344?logs=test_world#L3941\n#2 0x000055ff43c6edad in ExceptionalCondition (conditionName=0x55ff43d18108 \"!XLogRecPtrIsInvalid(missingContrecPtr)\", errorType=0x55ff43d151c4 \"FailedAssertion\", fileName=0x55ff43d151bd \"xlog.c\", lineNumber=8297) at assert.c:69\n\ncommit 15533794e465a381eb23634d67700afa809a0210\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Jan 6 22:53:28 2022 -0600\n\n tmp: enable pmem by default, for CI\n\ndiff --git a/.cirrus.yml b/.cirrus.yml\nindex 677bdf0e65e..0cb961c8103 100644\n--- a/.cirrus.yml\n+++ b/.cirrus.yml\n@@ -81,6 +81,7 @@ task:\n mkdir -m 770 /tmp/cores\n chown root:postgres /tmp/cores\n sysctl kern.corefile='/tmp/cores/%N.%P.core'\n+ pkg install -y devel/pmdk\n \n # NB: Intentionally build without --with-llvm. The freebsd image size is\n # already large enough to make VM startup slow, and even without llvm\n@@ -99,6 +100,7 @@ task:\n --with-lz4 \\\n --with-pam \\\n --with-perl \\\n+ --with-libpmem \\\n --with-python \\\n --with-ssl=openssl \\\n --with-tcl --with-tclconfig=/usr/local/lib/tcl8.6/ \\\n@@ -138,6 +140,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-\n --with-lz4\n --with-pam\n --with-perl\n+ --with-libpmem\n --with-python\n --with-selinux\n --with-ssl=openssl\n@@ -188,6 +191,9 @@ task:\n mkdir -m 770 /tmp/cores\n chown root:postgres /tmp/cores\n sysctl kernel.core_pattern='/tmp/cores/%e-%s-%p.core'\n+ echo 'deb http://deb.debian.org/debian bullseye universe' >>/etc/apt/sources.list\n+ apt-get update\n+ apt-get -y install libpmem-dev\n \n configure_script: |\n su postgres <<-EOF\n@@ -267,6 +273,7 @@ task:\n make \\\n openldap \\\n openssl \\\n+ pmem \\\n python \\\n tcl-tk\n \n@@ -301,6 +308,7 @@ task:\n --with-libxslt \\\n --with-lz4 \\\n --with-perl \\\n+ --with-libpmem \\\n --with-python \\\n --with-ssl=openssl \\\n --with-tcl 
--with-tclconfig=${brewpath}/opt/tcl-tk/lib/ \\\ndiff --git a/src/backend/main/main.c b/src/backend/main/main.c\nindex 9124060bde7..b814269675d 100644\n--- a/src/backend/main/main.c\n+++ b/src/backend/main/main.c\n@@ -69,6 +69,7 @@ main(int argc, char *argv[])\n #endif\n \n \tprogname = get_progname(argv[0]);\n+\tsetenv(\"PMEM_IS_PMEM_FORCE\", \"1\", 0);\n \n \t/*\n \t * Platform-specific startup hacks\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex ffc55f33e86..32d650cb9b2 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -1354,7 +1354,7 @@ static struct config_bool ConfigureNamesBool[] =\n \t\t\t\t\t\t \"traditional volatile ones.\"),\n \t\t},\n \t\t&wal_pmem_map,\n-\t\tfalse,\n+\t\ttrue,\n \t\tNULL, NULL, NULL\n \t},\n #endif\n\n\n",
"msg_date": "Mon, 17 Jan 2022 01:34:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Justin,\n\nThanks for your help. I'm making an additional patch for Cirrus CI.\n\nI'm also trying to reproduce the \"make check-world\" error you\nreported, on my Linux environment that has neither a real PMem nor an\nemulated one, with PMEM_IS_PMEM_FORCE=1. I'll keep you updated.\n\nRegards,\nTakashi\n\nOn Mon, Jan 17, 2022 at 4:34 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Jan 06, 2022 at 10:43:37PM -0600, Justin Pryzby wrote:\n> > On Fri, Jan 07, 2022 at 12:50:01PM +0900, Takashi Menjo wrote:\n> > > > But in this case it really doesn't work :(\n> > > >\n> > > > running bootstrap script ... 2022-01-05 23:17:30.244 CST [12088] FATAL: file not on PMEM: path \"pg_wal/000000010000000000000001\"\n> > >\n> > > Do you have a real PMEM device such as NVDIMM-N or Intel Optane PMem?\n> >\n> > No - the point is that we'd like to have a way to exercise this patch on the\n> > cfbot. Particularly the new code introduced by this patch, not just the\n> > --without-pmem case...\n> ..\n> > I think you should add a patch which does what Thomas suggested: 1) add to\n> > ./.cirrus.yaml installation of the libpmem package for debian/bsd/mac/windows;\n> > 2) add setenv to main(), as above; 3) change configure.ac and guc.c to default\n> > to --with-libpmem and wal_pmem_map=on. This should be the last patch, for\n> > cfbot only, not meant to be merged.\n>\n> I was able to get the cirrus CI to compile on linux and bsd with the below\n> changes. I don't know if there's an easy package installation for mac OSX. 
I\n> think it's okay if mac CI doesn't use --enable-pmem for now.\n>\n> > You can test that the package installation part works before mailing patches to\n> > the list with the instructions here:\n> >\n> > src/tools/ci/README:\n> > Enabling cirrus-ci in a github repository..\n>\n> I ran the CI under my own github account.\n> Linux crashes in the recovery check.\n> And freebsd has been stuck for 45min.\n>\n> I'm not sure, but maybe those are legimate consequence of using\n> PMEM_IS_PMEM_FORCE (?) If so, maybe the recovery check would need to be\n> disabled for this patch to run on CI... Or maybe my suggestion to enable it by\n> default for CI doesn't work for this patch. It would need to be specially\n> tested with real hardware.\n>\n> https://cirrus-ci.com/task/6245151591890944\n>\n> https://cirrus-ci.com/task/6162551485497344?logs=test_world#L3941\n> #2 0x000055ff43c6edad in ExceptionalCondition (conditionName=0x55ff43d18108 \"!XLogRecPtrIsInvalid(missingContrecPtr)\", errorType=0x55ff43d151c4 \"FailedAssertion\", fileName=0x55ff43d151bd \"xlog.c\", lineNumber=8297) at assert.c:69\n>\n> commit 15533794e465a381eb23634d67700afa809a0210\n> Author: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Thu Jan 6 22:53:28 2022 -0600\n>\n> tmp: enable pmem by default, for CI\n>\n> diff --git a/.cirrus.yml b/.cirrus.yml\n> index 677bdf0e65e..0cb961c8103 100644\n> --- a/.cirrus.yml\n> +++ b/.cirrus.yml\n> @@ -81,6 +81,7 @@ task:\n> mkdir -m 770 /tmp/cores\n> chown root:postgres /tmp/cores\n> sysctl kern.corefile='/tmp/cores/%N.%P.core'\n> + pkg install -y devel/pmdk\n>\n> # NB: Intentionally build without --with-llvm. 
The freebsd image size is\n> # already large enough to make VM startup slow, and even without llvm\n> @@ -99,6 +100,7 @@ task:\n> --with-lz4 \\\n> --with-pam \\\n> --with-perl \\\n> + --with-libpmem \\\n> --with-python \\\n> --with-ssl=openssl \\\n> --with-tcl --with-tclconfig=/usr/local/lib/tcl8.6/ \\\n> @@ -138,6 +140,7 @@ LINUX_CONFIGURE_FEATURES: &LINUX_CONFIGURE_FEATURES >-\n> --with-lz4\n> --with-pam\n> --with-perl\n> + --with-libpmem\n> --with-python\n> --with-selinux\n> --with-ssl=openssl\n> @@ -188,6 +191,9 @@ task:\n> mkdir -m 770 /tmp/cores\n> chown root:postgres /tmp/cores\n> sysctl kernel.core_pattern='/tmp/cores/%e-%s-%p.core'\n> + echo 'deb http://deb.debian.org/debian bullseye universe' >>/etc/apt/sources.list\n> + apt-get update\n> + apt-get -y install libpmem-dev\n>\n> configure_script: |\n> su postgres <<-EOF\n> @@ -267,6 +273,7 @@ task:\n> make \\\n> openldap \\\n> openssl \\\n> + pmem \\\n> python \\\n> tcl-tk\n>\n> @@ -301,6 +308,7 @@ task:\n> --with-libxslt \\\n> --with-lz4 \\\n> --with-perl \\\n> + --with-libpmem \\\n> --with-python \\\n> --with-ssl=openssl \\\n> --with-tcl --with-tclconfig=${brewpath}/opt/tcl-tk/lib/ \\\n> diff --git a/src/backend/main/main.c b/src/backend/main/main.c\n> index 9124060bde7..b814269675d 100644\n> --- a/src/backend/main/main.c\n> +++ b/src/backend/main/main.c\n> @@ -69,6 +69,7 @@ main(int argc, char *argv[])\n> #endif\n>\n> progname = get_progname(argv[0]);\n> + setenv(\"PMEM_IS_PMEM_FORCE\", \"1\", 0);\n>\n> /*\n> * Platform-specific startup hacks\n> diff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\n> index ffc55f33e86..32d650cb9b2 100644\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -1354,7 +1354,7 @@ static struct config_bool ConfigureNamesBool[] =\n> \"traditional volatile ones.\"),\n> },\n> &wal_pmem_map,\n> - false,\n> + true,\n> NULL, NULL, NULL\n> },\n> #endif\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Tue, 18 Jan 2022 19:58:35 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Justin,\n\nI can reproduce the error you reported, with PMEM_IS_PMEM_FORCE=1.\n\nMoreover, I can reproduce it **on a real PMem device**. So the causes\nare in my patchset, not in PMem environment.\n\nI'll fix it in the next patchset version.\n\nRegards,\nTakashi\n\n--\nTakashi Menjo <takashi.menjo@gmail.com>\n\n\n",
"msg_date": "Wed, 19 Jan 2022 13:41:11 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Justin,\n\nHere is patchset v8. It will have \"make check-world\" and Cirrus to\npass. Would you try this one?\n\nThe v8 squashes some patches in v7 into related ones, and adds the\nfollowing patches:\n\n- v8-0003: Add wal_pmem_map to postgresql.conf.sample. It also helps v8-0011.\n\n- v8-0009: Fix wrong handling of missingContrecPtr for\ntest/recovery/t/026 to pass. It is the cause of the error. Thanks for\nyour report.\n\n- v8-0010 and v8-0011: Each of the two is for CI only. v8-0010 adds\n--with-libpmem and v8-0011 enables \"wal_pmem_map = on\". Please note\nthat, unlike your suggestion, in my patchset PMEM_IS_PMEM_FORCE=1 will\nbe given as an environment variable in .cirrus.yml and \"wal_pmem_map =\non\" will be given by initdb.\n\nRegards,\nTakashi\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Thu, 20 Jan 2022 14:55:13 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-20 14:55:13 +0900, Takashi Menjo wrote:\n> Here is patchset v8. It will have \"make check-world\" and Cirrus to\n> pass.\n\nThis unfortunately does not apply anymore: http://cfbot.cputube.org/patch_37_3181.log\n\nCould you rebase?\n\n- Andres\n\n\n",
"msg_date": "Mon, 21 Mar 2022 17:44:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "Hi Andres,\n\nThank you for your report. I rebased and made patchset v9 attached to\nthis email. Note that v9-0009 and v9-0010 are for those who want to\npass their own Cirrus CI.\n\nRegards,\nTakashi\n\n\nOn Tue, Mar 22, 2022 at 9:44 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-01-20 14:55:13 +0900, Takashi Menjo wrote:\n> > Here is patchset v8. It will have \"make check-world\" and Cirrus to\n> > pass.\n>\n> This unfortunately does not apply anymore: http://cfbot.cputube.org/patch_37_3181.log\n>\n> Could you rebase?\n>\n> - Andres\n\n\n\n-- \nTakashi Menjo <takashi.menjo@gmail.com>",
"msg_date": "Wed, 23 Mar 2022 17:58:26 +0900",
"msg_from": "Takashi Menjo <takashi.menjo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
},
{
"msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/3181/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com\n\n\n\n\n",
"msg_date": "Mon, 1 Aug 2022 13:40:51 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Map WAL segment files on PMEM as WAL buffers"
}
] |
[
{
"msg_contents": "Hi,\n\nparse_subscription_options function has some similar code when\nthrowing errors [with the only difference in the option]. I feel we\ncould just use a variable for the option and use it in the error.\nWhile this has no benefit at all, it saves some LOC and makes the code\nlook better with lesser ereport(ERROR statements. PSA patch.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 19 May 2021 14:08:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> parse_subscription_options function has some similar code when\n> throwing errors [with the only difference in the option]. I feel we\n> could just use a variable for the option and use it in the error.\n> While this has no benefit at all, it saves some LOC and makes the code\n> look better with lesser ereport(ERROR statements. PSA patch.\n>\n> Thoughts?\n\nI don't have a strong opinion on this, but the patch should add\n__translator__ help comment for the error msg.\n\nRegards,\nAmul\n\n\n",
"msg_date": "Wed, 19 May 2021 14:32:28 +0530",
"msg_from": "Amul Sul <sulamul@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 2:33 PM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > parse_subscription_options function has some similar code when\n> > throwing errors [with the only difference in the option]. I feel we\n> > could just use a variable for the option and use it in the error.\n> > While this has no benefit at all, it saves some LOC and makes the code\n> > look better with lesser ereport(ERROR statements. PSA patch.\n> >\n> > Thoughts?\n>\n> I don't have a strong opinion on this, but the patch should add\n> __translator__ help comment for the error msg.\n\nIs the \"/*- translator:\" help comment something visible to the user or\nsome other tool? If not, I don't think that's necessary as the meaning\nof the error message is evident by looking at the error message\nitself. IMO, anyone who looks at that part of the code can understand\nit.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 15:07:52 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 3:08 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 2:33 PM Amul Sul <sulamul@gmail.com> wrote:\n> >\n> > On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > parse_subscription_options function has some similar code when\n> > > throwing errors [with the only difference in the option]. I feel we\n> > > could just use a variable for the option and use it in the error.\n\nI am not sure how much it helps to just refactor this part of the code\nalone unless we need to add/change it more. Having said that, this\nfunction is being modified by one of the proposed patches for logical\ndecoding of 2PC and I noticed that the proposed patch is adding more\nparameters to this function which already takes 14 input parameters,\nso I suggested refactoring it. See comment 11 in email[1]. See, if\nthat makes sense to you then we can refactor this function such that\nit can be enhanced easily by future patches.\n\n> > > While this has no benefit at all, it saves some LOC and makes the code\n> > > look better with lesser ereport(ERROR statements. PSA patch.\n> > >\n> > > Thoughts?\n> >\n> > I don't have a strong opinion on this, but the patch should add\n> > __translator__ help comment for the error msg.\n>\n> Is the \"/*- translator:\" help comment something visible to the user or\n> some other tool?\n>\n\nWe use similar comments at other places. So, it makes sense to retain\nthe comment as it might help translation tools.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1Jz64rwLyB6H7Z_SmEDouJ41KN42%3DVkVFp6JTpafJFG8Q%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 May 2021 16:10:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 4:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 3:08 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, May 19, 2021 at 2:33 PM Amul Sul <sulamul@gmail.com> wrote:\n> > >\n> > > On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > parse_subscription_options function has some similar code when\n> > > > throwing errors [with the only difference in the option]. I feel we\n> > > > could just use a variable for the option and use it in the error.\n>\n> I am not sure how much it helps to just refactor this part of the code\n> alone unless we need to add/change it more. Having said that, this\n> function is being modified by one of the proposed patches for logical\n> decoding of 2PC and I noticed that the proposed patch is adding more\n> parameters to this function which already takes 14 input parameters,\n> so I suggested refactoring it. See comment 11 in email[1]. See, if\n> that makes sense to you then we can refactor this function such that\n> it can be enhanced easily by future patches.\n\nThanks Amit for the comments. I agree to move the parse options to a\nnew structure ParseSubOptions as suggested. Then the function can just\nbe parse_subscription_options(ParseSubOptions opts); I wonder if we\nshould also have a structure for parse_publication_options as we might\nadd new options there in the future?\n\nIf okay, I can work on these changes and attach it along with these\nerror message changes. Thoughts?\n\n> > > > While this has no benefit at all, it saves some LOC and makes the code\n> > > > look better with lesser ereport(ERROR statements. 
PSA patch.\n> > > >\n> > > > Thoughts?\n> > >\n> > > I don't have a strong opinion on this, but the patch should add\n> > > __translator__ help comment for the error msg.\n> >\n> > Is the \"/*- translator:\" help comment something visible to the user or\n> > some other tool?\n> >\n>\n> We use similar comments at other places. So, it makes sense to retain\n> the comment as it might help translation tools.\n\nI will retain it.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 16:42:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 4:42 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 4:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, May 19, 2021 at 3:08 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Wed, May 19, 2021 at 2:33 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > >\n> > > > On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n> > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > parse_subscription_options function has some similar code when\n> > > > > throwing errors [with the only difference in the option]. I feel we\n> > > > > could just use a variable for the option and use it in the error.\n> >\n> > I am not sure how much it helps to just refactor this part of the code\n> > alone unless we need to add/change it more. Having said that, this\n> > function is being modified by one of the proposed patches for logical\n> > decoding of 2PC and I noticed that the proposed patch is adding more\n> > parameters to this function which already takes 14 input parameters,\n> > so I suggested refactoring it. See comment 11 in email[1]. See, if\n> > that makes sense to you then we can refactor this function such that\n> > it can be enhanced easily by future patches.\n>\n> Thanks Amit for the comments. I agree to move the parse options to a\n> new structure ParseSubOptions as suggested. Then the function can just\n> be parse_subscription_options(ParseSubOptions opts); I wonder if we\n> should also have a structure for parse_publication_options as we might\n> add new options there in the future?\n>\n\nThat function has just 5 parameters so not sure if that needs the same\ntreatment. Let's leave it for now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 May 2021 17:55:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 5:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 4:42 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, May 19, 2021 at 4:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, May 19, 2021 at 3:08 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > On Wed, May 19, 2021 at 2:33 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, May 19, 2021 at 2:09 PM Bharath Rupireddy\n> > > > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > parse_subscription_options function has some similar code when\n> > > > > > throwing errors [with the only difference in the option]. I feel we\n> > > > > > could just use a variable for the option and use it in the error.\n> > >\n> > > I am not sure how much it helps to just refactor this part of the code\n> > > alone unless we need to add/change it more. Having said that, this\n> > > function is being modified by one of the proposed patches for logical\n> > > decoding of 2PC and I noticed that the proposed patch is adding more\n> > > parameters to this function which already takes 14 input parameters,\n> > > so I suggested refactoring it. See comment 11 in email[1]. See, if\n> > > that makes sense to you then we can refactor this function such that\n> > > it can be enhanced easily by future patches.\n> >\n> > Thanks Amit for the comments. I agree to move the parse options to a\n> > new structure ParseSubOptions as suggested. Then the function can just\n> > be parse_subscription_options(ParseSubOptions opts); I wonder if we\n> > should also have a structure for parse_publication_options as we might\n> > add new options there in the future?\n> >\n>\n> That function has just 5 parameters so not sure if that needs the same\n> treatment. Let's leave it for now.\n\nThanks. 
I will work on the new structure ParseSubOptions only for\nsubscription options.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 18:13:35 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, May 19, 2021 at 6:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. I will work on the new structure ParseSubOption only for\n> subscription options.\n\nPSA v2 patch that has changes for 1) new ParseSubOption structure 2)\nthe error reporting code refactoring.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 20 May 2021 09:40:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, May 20, 2021 at 2:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, May 19, 2021 at 6:13 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Thanks. I will work on the new structure ParseSubOption only for\n> > subscription options.\n>\n> PSA v2 patch that has changes for 1) new ParseSubOption structure 2)\n> the error reporting code refactoring.\n>\n\nI have applied the v2 patch and done some review of the code.\n\n- The patch applies OK.\n\n- The code builds OK.\n\n- The make check and TAP subscription tests are OK\n\n\nI am not really a big fan of this patch - it claims to make things\neasier for future options, but IMO the changes sometimes seem at the\nexpense of readability of the *current* code. The following comments\nare only posted here, not as endorsement, but because I already\nreviewed the code so they may be of some use in case the patch goes\nahead...\n\nCOMMENTS\n==========\n\nparse_subscription_options:\n\n1.\nI felt the function implementation is less readable now than\npreviously due to the plethora of \"opts->\" introduced everywhere.\nMaybe it would be worthwhile to assign all those opts back to local\nvars (of the same name as the original previous 14 args), just for the\nsake of getting rid of all those \"opts->\"?\n\n----------\n\n2.\n(not the fault of this patch) Inside the parse_subscription_options\nfunction, there seem many unstated assertions that if a particular\noption member opts->XXX is passed, then the opts->XXX_given is also\npresent (although that is never checked). Perhaps the code should\nexplicitly Assert those XXX_given vars?\n\n----------\n\n3.\n@@ -225,65 +238,63 @@ parse_subscription_options(List *options,\n * We've been explicitly asked to not connect, that requires some\n * additional processing.\n */\n- if (connect && !*connect)\n+ if (opts->connect && !*opts->connect)\n {\n+ char *option = NULL;\n\n\"option\" seems too generic. 
Maybe \"incompatible_option\" would be a\nbetter name for that variable?\n\n----------\n\n4.\n- errmsg(\"%s and %s are mutually exclusive options\",\n- \"slot_name = NONE\", \"create_slot = true\")));\n+ option = NULL;\n\n- if (enabled && !*enabled_given && *enabled)\n- ereport(ERROR,\n- (errcode(ERRCODE_SYNTAX_ERROR),\n- /*- translator: both %s are strings of the form \"option = value\" */\n- errmsg(\"subscription with %s must also set %s\",\n- \"slot_name = NONE\", \"enabled = false\")));\n+ if (opts->enabled && !*opts->enabled_given && *opts->enabled)\n+ option = \"enabled = false\";\n+ else if (opts->create_slot && !create_slot_given && *opts->create_slot)\n+ option = \"create_slot = false\";\n\n\nIn the above code you don't need to set option = NULL, because it must\nalready be NULL. But for this 2nd chunk of code I think it would be\nbetter to introduce another variable called something like\n\"required_option\".\n\n==========\n\nCreate Subscription:\n\n5.\n@@ -346,22 +357,32 @@ CreateSubscription(CreateSubscriptionStmt *stmt,\nbool isTopLevel)\n char originname[NAMEDATALEN];\n bool create_slot;\n List *publications;\n+ ParseSubOptions *opts;\n+\n+ opts = (ParseSubOptions *) palloc0(sizeof(ParseSubOptions));\n+\n+ /* Fill only the options that are of interest here. */\n+ opts->stmt_options = stmt->options;\n+ opts->connect = &connect;\n\n\nI feel that this code ought to be using a stack variable instead of\nallocating on the heap because - less code, easier to read, no free\nrequired. etc.\n\nJust memset it to fill all 0s before assigning the values.\n\n----------\n\n6.\n+ /* Fill only the options that are of interest here. */\n\nThe comment is kind of redundant, and what you are setting are not\nreally all options either.\n\nMaybe better like this? Or maybe don't have the comment at all?\n\n/* Assign only members of interest. 
*/\nMemSet(&opts, 0, sizeof(opts));\nopts.stmt_options = stmt->options;\nopts.connect = &connect;\nopts.enabled_given = &enabled_given;\nopts.enabled = &enabled;\nopts.create_slot = &create_slot;\n...\n\n==========\n\nAlterSubscription\n\n7.\nSame review comment as for CreateSubscription.\n- Use a stack variable and memset.\n- Change or remove the comment.\n\n----------\n\n8.\nFor AlterSubscription you could also declare the \"opts\" just one time\nand memset it at top of the function, instead of the current code\nwhich repeats 5X the same thing.\n\n----------\n\n9.\n+ /* For DROP PUBLICATION, copy_data option is not supported. */\n+ opts->copy_data = isadd ? &copy_data : NULL;\n\nThe opts struct is already zapped 0/NULL so this code maybe should be:\n\nif (isadd)\nopts.copy_data = &copy_data;\n\n==========\n\n10.\nSince the new typedef ParseSubOptions was added by this patch\nshouldn't the src/tools/pgindent/typedefs.list file be updated also?\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Fri, 21 May 2021 21:21:51 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Fri, May 21, 2021 at 9:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Thu, May 20, 2021 at 2:11 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, May 19, 2021 at 6:13 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > Thanks. I will work on the new structure ParseSubOption only for\n> > > subscription options.\n> >\n> > PSA v2 patch that has changes for 1) new ParseSubOption structure 2)\n> > the error reporting code refactoring.\n> >\n>\n> I have applied the v2 patch and done some review of the code.\n>\n> - The patch applies OK.\n>\n> - The code builds OK.\n>\n> - The make check and TAP subscription tests are OK\n>\n>\n> I am not really a big fan of this patch - it claims to make things\n> easier for future options, but IMO the changes sometimes seem at the\n> expense of readability of the *current* code. The following comments\n> are only posted here, not as endorsement, but because I already\n> reviewed the code so they may be of some use in case the patch goes\n> ahead...\n>\n> COMMENTS\n> ==========\n>\n> parse_subscription_options:\n>\n> 1.\n> I felt the function implementation is less readable now than\n> previously due to the plethora of \"opts->\" introduced everywhere.\n> Maybe it would be worthwhile to assign all those opts back to local\n> vars (of the same name as the original previous 14 args), just for the\n> sake of getting rid of all those \"opts->\"?\n>\n> ----------\n>\n> 2.\n> (not the fault of this patch) Inside the parse_subscription_options\n> function, there seem many unstated assertions that if a particular\n> option member opts->XXX is passed, then the opts->XXX_given is also\n> present (although that is never checked). 
Perhaps the code should\n> explicitly Assert those XXX_given vars?\n>\n> ----------\n>\n> 3.\n> @@ -225,65 +238,63 @@ parse_subscription_options(List *options,\n> * We've been explicitly asked to not connect, that requires some\n> * additional processing.\n> */\n> - if (connect && !*connect)\n> + if (opts->connect && !*opts->connect)\n> {\n> + char *option = NULL;\n>\n> \"option\" seems too generic. Maybe \"incompatible_option\" would be a\n> better name for that variable?\n>\n> ----------\n>\n> 4.\n> - errmsg(\"%s and %s are mutually exclusive options\",\n> - \"slot_name = NONE\", \"create_slot = true\")));\n> + option = NULL;\n>\n> - if (enabled && !*enabled_given && *enabled)\n> - ereport(ERROR,\n> - (errcode(ERRCODE_SYNTAX_ERROR),\n> - /*- translator: both %s are strings of the form \"option = value\" */\n> - errmsg(\"subscription with %s must also set %s\",\n> - \"slot_name = NONE\", \"enabled = false\")));\n> + if (opts->enabled && !*opts->enabled_given && *opts->enabled)\n> + option = \"enabled = false\";\n> + else if (opts->create_slot && !create_slot_given && *opts->create_slot)\n> + option = \"create_slot = false\";\n>\n>\n> In the above code you don't need to set option = NULL, because it must\n> already be NULL. But for this 2nd chunk of code I think it would be\n> better to introduce another variable called something like\n> \"required_option\".\n>\n> ==========\n>\n> Create Subscription:\n>\n> 5.\n> @@ -346,22 +357,32 @@ CreateSubscription(CreateSubscriptionStmt *stmt,\n> bool isTopLevel)\n> char originname[NAMEDATALEN];\n> bool create_slot;\n> List *publications;\n> + ParseSubOptions *opts;\n> +\n> + opts = (ParseSubOptions *) palloc0(sizeof(ParseSubOptions));\n> +\n> + /* Fill only the options that are of interest here. 
*/\n> + opts->stmt_options = stmt->options;\n> + opts->connect = &connect;\n>\n>\n> I feel that this code ought to be using a stack variable instead of\n> allocating on the heap because - less code, easier to read, no free\n> required. etc.\n>\n> Just memset it to fill all 0s before assigning the values.\n>\n> ----------\n>\n> 6.\n> + /* Fill only the options that are of interest here. */\n>\n> The comment is kind of redundant, and what you are setting are not\n> really all options either.\n>\n> Maybe better like this? Or maybe don't have the comment at all?\n>\n> /* Assign only members of interest. */\n> MemSet(&opts, 0, sizeof(opts));\n> opts.stmt_options = stmt->options;\n> opts.connect = &connect;\n> opts.enabled_given = &enabled_given;\n> opts.enabled = &enabled;\n> opts.create_slot = &create_slot;\n> ...\n>\n> ==========\n>\n> AlterSubscription\n>\n> 7.\n> Same review comment as for CreateSubscription.\n> - Use a stack variable and memset.\n> - Change or remove the comment.\n>\n> ----------\n>\n> 8.\n> For AlterSubscription you could also declare the \"opts\" just one time\n> and memset it at top of the function, instead of the current code\n> which repeats 5X the same thing.\n>\n> ----------\n>\n> 9.\n> + /* For DROP PUBLICATION, copy_data option is not supported. */\n> + opts->copy_data = isadd ? 
&copy_data : NULL;\n>\n> The opts struct is already zapped 0/NULL so this code maybe should be:\n>\n> if (isadd)\n> opts.copy_data = &copy_data;\n>\n> ==========\n>\n> 10.\n> Since the new typedef ParseSubOptions was added by this patch\n> shouldn't the src/tools/pgindent/typedefs.list file be updated also?\n>\n\nThinking about this some more, a few other things occurred to me which\nmight help simplify the code.\n\n==========\n\n11.\n\n+/*\n+ * Structure to hold subscription options for parsing\n+ */\n+typedef struct ParseSubOptions\n+{\n+ List *stmt_options;\n+ bool *connect;\n+ bool *enabled_given;\n+ bool *enabled;\n+ bool *create_slot;\n+ bool *slot_name_given;\n+ char **slot_name;\n+ bool *copy_data;\n+ char **synchronous_commit;\n+ bool *refresh;\n+ bool *binary_given;\n+ bool *binary;\n+ bool *streaming_given;\n+ bool *streaming;\n+} ParseSubOptions;\n\nMaybe I am mistaken, but I am wondering why all this indirection is\neven necessary anymore?\n\nIIUC previously args were declared like bool * so the information\ncould be returned to the caller. But now you have ParseSubOption * to\ndo that, so can't you simply declare all those \"given\" members as bool\ninstead of bool *; And when you do that you can remove all the\nunnecessary storage vars in the calling code as well.\n\n--------\n\n12.\nThe original code seems to have a way to \"register interest in the\noption\" by passing the storage var in which to return the result. (eg.\npass \"enabled\" not NULL).\n\nIIUC the purpose of this is what it says in the function comment:\n\n * Since not all options can be specified in both commands, this function\n * will report an error on options if the target output pointer is NULL to\n * accommodate that.\n\nBut now that you have your new struct, I wonder if there is another\neasy way to achieve the same. e.g. you now could add some more members\ninstead of the non-NULL pointer to register interest in a particular\noption. 
Then those other members (like \"enabled\") also only need to be\nbool instead of bool *;\n\nSomething like this?\n\nBEFORE:\nelse if (strcmp(defel->defname, \"enabled\") == 0 && opts->enabled)\n{\nif (*opts->enabled_given)\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"conflicting or redundant options\")));\n\n*opts->enabled_given = true;\n*opts->enabled = defGetBoolean(defel);\n}\nAFTER:\nif (opts->enabled_is_allowed && strcmp(defel->defname, \"enabled\") == 0)\n{\nif (opts->enabled_given)\nereport(ERROR,\n(errcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"conflicting or redundant options\")));\n\nopts->enabled_given = true;\nopts->enabled = defGetBoolean(defel);\n}\n\nI am unsure if this will lead to better code or not; Anyway, it is\nsomething to consider - maybe you can experiment with it to see.\n\n----------\n\n13.\nRegardless of review comment #12, I think all those strcmp conditions\nought to be reversed for better efficiency.\n\ne.g.\nBEFORE:\nelse if (strcmp(defel->defname, \"binary\") == 0 && opts->binary)\nAFTER:\nelse if (opts->binary && strcmp(defel->defname, \"binary\") == 0)\n\n----------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Sat, 22 May 2021 11:02:36 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Sat, May 22, 2021 at 6:32 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> I am unsure if this will lead to better code or not; Anyway, it is\n> something to consider - maybe you can experiment with it to see.\n\nThanks. I think using bitmaps would help us have clean code. This is\nalso more extensible. See pseudo code at [1]. One disadvantage is that\nwe might have bms_XXX function calls, but that's okay and it shouldn't\nadd too much to the performance. Thoughts?\n\n[1]\ntypedef enum SubOpts_enum\n{\nSUB_OPT_NONE = 0,\nSUB_OPT_CONNECT,\nSUB_OPT_ENABLED,\nSUB_OPT_CREATE_SLOT,\nSUB_OPT_SLOT_NAME,\nSUB_OPT_COPY_DATA,\nSUB_OPT_SYNCHRONOUS_COMMIT,\nSUB_OPT_REFRESH,\nSUB_OPT_BINARY,\nSUB_OPT_STREAMING\n} SubOpts_enum;\n\ntypedef struct SubOptsVals\n{\nbool connect;\nbool enabled;\nbool create_slot;\nchar *slot_name;\nbool copy_data;\nchar *synchronous_commit;\nbool refresh;\nbool binary;\nbool streaming;\n} SubOptsVals;\n\nBitmapset *supported = NULL;\nBitmapset *specified = NULL;\nSubOptsVals opts;\n\nMemSet(&opts, 0, sizeof(SubOptsVals));\n/* Fill in all the supported options, we could use bms_add_member as\nwell if there are fewer supported options.*/\nsupported = bms_add_range(NULL, SUB_OPT_CONNECT, SUB_OPT_STREAMING);\nsupported = bms_del_member(supported, SUB_OPT_REFRESH);\n\nparse_subscription_options(stmt_options, supported, specified, &opts);\n\nif (bms_is_member(SUB_OPT_SLOT_NAME, specified))\n{\n /* get slot name with opts.slot_name */\n}\n\nif (bms_is_member(SUB_OPT_SYNCHRONOUS_COMMIT, specified))\n{\n /* get synchronous_commit with opts.synchronous_commit */\n}\n\n/* Similarly get the other options. 
*/\n\nbms_free(supported);\nbms_free(specified);\n\nstatic void\nparse_subscription_options(List *stmt_options,\n Bitmapset *supported,\n Bitmapset *specified,\n SubOptsVals *opts)\n{\n\n foreach(lc, stmt_options)\n {\n DefElem *defel = (DefElem *) lfirst(lc);\n\n if (bms_is_member(SUB_OPT_CONNECT, supported) &&\n strcmp(defel->defname, \"connect\") == 0)\n {\n if (bms_is_member(SUB_OPT_CONNECT, specified))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"conflicting or redundant options\")));\n\n specified = bms_add_member(specified, SUB_OPT_CONNECT);\n opts->connect = defGetBoolean(defel);\n }\n\n /* Similarly do the same for the other options. */\n }\n}\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 22 May 2021 13:47:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Sat, May 22, 2021 at 01:47:24PM +0530, Bharath Rupireddy wrote:\n> Thanks. I think using bitmaps would help us have clean code. This is\n> also more extensible. See pseudo code at [1]. One disadvantage is that\n> we might have bms_XXXfunction calls, but that's okay and it shouldn't\n> add too much to the performance. Thoughts?\n> \n> [1]\n> typedef enum SubOpts_enum\n> {\n> SUB_OPT_NONE = 0,\n> SUB_OPT_CONNECT,\n> SUB_OPT_ENABLED,\n> SUB_OPT_CREATE_SLOT,\n> SUB_OPT_SLOT_NAME,\n> SUB_OPT_COPY_DATA,\n> SUB_OPT_SYNCHRONOUS_COMMIT,\n> SUB_OPT_REFRESH,\n> SUB_OPT_BINARY,\n> SUB_OPT_STREAMING\n> } SubOpts_enum;\n\nWhat you are writing here and your comment two paragraphs above are\ninconsistent as you are using an enum here. Please see a3dc926 and\nthe surrounding discussion for reasons why we've been using bitmaps\nfor option parsing lately.\n--\nMichael",
"msg_date": "Mon, 24 May 2021 10:34:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> What you are writing here and your comment two paragraphs above are\n> inconsistent as you are using an enum here. Please see a3dc926 and\n> the surrounding discussion for reasons why we've been using bitmaps\n> for option parsing lately.\n\nThanks! I'm okay to do something similar to what the commit a3dc926\ndid using bits32. But I wonder if we will ever cross the 32 options\nlimit (imposed by bits32) for CREATE/ALTER SUBSCRIPTION command.\nHaving said that, for now, we can have an error similar to\nlast_assigned_kind in add_reloption_kind() if the limit is crossed.\n\nI would like to hear opinions before proceeding with the implementation.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 May 2021 12:56:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-May-24, Bharath Rupireddy wrote:\n\n> On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > What you are writing here and your comment two paragraphs above are\n> > inconsistent as you are using an enum here. Please see a3dc926 and\n> > the surrounding discussion for reasons why we've been using bitmaps\n> > for option parsing lately.\n> \n> Thanks! I'm okay to do something similar to what the commit a3dc926\n> did using bits32. But I wonder if we will ever cross the 32 options\n> limit (imposed by bits32) for CREATE/ALTER SUBSCRIPTION command.\n> Having said that, for now, we can have an error similar to\n> last_assigned_kind in add_reloption_kind() if the limit is crossed.\n\nThere's no API limitation here, since that stuff is not user-visible, so\nit doesn't matter. If we ever need a 33rd option, we can change the\ndatatype to bits64. Any extensions using it will have to be recompiled\nacross a major version jump anyway.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Mon, 24 May 2021 14:07:23 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-24, Bharath Rupireddy wrote:\n>\n> > On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > What you are writing here and your comment two paragraphs above are\n> > > inconsistent as you are using an enum here. Please see a3dc926 and\n> > > the surrounding discussion for reasons why we've been using bitmaps\n> > > for option parsing lately.\n> >\n> > Thanks! I'm okay to do something similar to what the commit a3dc926\n> > did using bits32. But I wonder if we will ever cross the 32 options\n> > limit (imposed by bits32) for CREATE/ALTER SUBSCRIPTION command.\n> > Having said that, for now, we can have an error similar to\n> > last_assigned_kind in add_reloption_kind() if the limit is crossed.\n>\n> There's no API limitation here, since that stuff is not user-visible, so\n> it doesn't matter. If we ever need a 33rd option, we can change the\n> datatype to bits64. Any extensions using it will have to be recompiled\n> across a major version jump anyway.\n\nThanks. I think there's no bits64 data type currently, I'm sure you\nmeant we will define (when requirement arises) something like typedef\nuint64 bits64; Am I correct?\n\nI see that the commit a3dc926 and discussion at [1] say below respectively:\n\"All the options of those commands are changed to use hex values\nrather than enums to reduce the risk of compatibility bugs when\nintroducing new options.\"\n\"My reasoning is that if you look at an enum value of this type,\neither say in a switch statement or a debugger, the enum value might\nnot be any of the defined symbols. So that way you lose all the type\nchecking that an enum might give you.\"\n\nI'm not able to grasp what are the incompatibilities we can have if\nthe enums are used as bit masks. 
It will be great if anyone throws\nsome light on this?\n\n[1] - https://www.postgresql.org/message-id/flat/14dde730-1d34-260e-fa9d-7664df2d6313%40enterprisedb.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 10:59:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, May 25, 2021 at 10:59:37AM +0530, Bharath Rupireddy wrote:\n> I'm not able to grasp what are the incompatibilities we can have if\n> the enums are used as bit masks. It will be great if anyone throws\n> some light on this?\n\n0176753 is one example.\n--\nMichael",
"msg_date": "Tue, 25 May 2021 14:34:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, May 25, 2021 at 11:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 25, 2021 at 10:59:37AM +0530, Bharath Rupireddy wrote:\n> > I'm not able to grasp what are the incompatibilities we can have if\n> > the enums are used as bit masks. It will be great if anyone throws\n> > some light on this?\n>\n> 0176753 is one example.\n\nHm. I get it, it is the coding style incompatibilities. Thanks.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 11:30:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-May-25, Bharath Rupireddy wrote:\n\n> On Mon, May 24, 2021 at 11:37 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > There's no API limitation here, since that stuff is not user-visible, so\n> > it doesn't matter. If we ever need a 33rd option, we can change the\n> > datatype to bits64. Any extensions using it will have to be recompiled\n> > across a major version jump anyway.\n> \n> Thanks. I think there's no bits64 data type currently, I'm sure you\n> meant we will define (when requirement arises) something like typedef\n> uint64 bits64; Am I correct?\n\nRight.\n\n> I see that the commit a3dc926 and discussion at [1] say below respectively:\n> \"All the options of those commands are changed to use hex values\n> rather than enums to reduce the risk of compatibility bugs when\n> introducing new options.\"\n> \"My reasoning is that if you look at an enum value of this type,\n> either say in a switch statement or a debugger, the enum value might\n> not be any of the defined symbols. So that way you lose all the type\n> checking that an enum might give you.\"\n> \n> I'm not able to grasp what are the incompatibilities we can have if\n> the enums are used as bit masks. It will be great if anyone throws\n> some light on this?\n\nThe problem is that enum members have consecutive integers assigned by\nthe compiler. Say you have an enum with three values for options. They\nget assigned 0, 1, and 2. You can test for each option with \"opt &\nVAL_ONE\" and \"opt & VAL_TWO\" and everything works -- each test returns\ntrue when that specific option is set, and all is well. Now if somebody\nlater adds a fourth option, it gets value 3. When that option is set,\n\"opt & VAL_ONE\" magically returns true, even though you did not set that\nbit in your code. 
So that becomes a bug.\n\nUsing hex values or bitshifting (rather than letting the compiler decide\nits value in the enum) is a more robust way to ensure that the options\nwill not collide in that way.\n\nSo why not define the enum as a list, and give each option an exclusive\nbit by bitshifting? For example,\n\nenum options {\n OPT_ZERO = 0,\n OPT_ONE = 1 << 1,\n OPT_TWO = 1 << 2,\n OPT_THREE = 1 << 3,\n};\n\nThis should be okay, right? Well, almost. The problem here is if you\nwant to have a variable where you set more than one option, you have to\nuse bit-or of the enum values ... and the resulting value is no longer\npart of the enum. A compiler would be understandably upset if you try\nto pass that value in a variable of the enum datatype.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n",
"msg_date": "Tue, 25 May 2021 09:38:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, May 25, 2021 at 7:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > I see that the commit a3dc926 and discussion at [1] say below respectively:\n> > \"All the options of those commands are changed to use hex values\n> > rather than enums to reduce the risk of compatibility bugs when\n> > introducing new options.\"\n> > \"My reasoning is that if you look at an enum value of this type,\n> > either say in a switch statement or a debugger, the enum value might\n> > not be any of the defined symbols. So that way you lose all the type\n> > checking that an enum might give you.\"\n> >\n> > I'm not able to grasp what are the incompatibilities we can have if\n> > the enums are used as bit masks. It will be great if anyone throws\n> > some light on this?\n>\n> The problem is that enum members have consecutive integers assigned by\n> the compiler. Say you have an enum with three values for options. They\n> get assigned 0, 1, and 2. You can test for each option with \"opt &\n> VAL_ONE\" and \"opt & VAL_TWO\" and everything works -- each test returns\n> true when that specific option is set, and all is well. Now if somebody\n> later adds a fourth option, it gets value 3. When that option is set,\n> \"opt & VAL_ONE\" magically returns true, even though you did not set that\n> bit in your code. So that becomes a bug.\n>\n> Using hex values or bitshifting (rather than letting the compiler decide\n> its value in the enum) is a more robust way to ensure that the options\n> will not collide in that way.\n>\n> So why not define the enum as a list, and give each option an exclusive\n> bit by bitshifting? For example,\n>\n> enum options {\n> OPT_ZERO = 0,\n> OPT_ONE = 1 << 1,\n> OPT_TWO = 1 << 2,\n> OPT_THREE = 1 << 3,\n> };\n>\n> This should be okay, right? Well, almost. The problem here is if you\n> want to have a variable where you set more than one option, you have to\n> use bit-and of the enum values ... 
and the resulting value is no longer\n> part of the enum. A compiler would be understandably upset if you try\n> to pass that value in a variable of the enum datatype.\n\nThanks a lot for the detailed explanation.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 10:48:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Please see a3dc926 and the surrounding discussion for reasons why we've\n> been using bitmaps for option parsing lately.\n\nThanks for the suggestion. Here's a WIP patch implementing the\nsubscription command options as bitmaps similar to what commit a3dc926\ndid. Thoughts?\n\nIf the attached WIP patch seems reasonable, I would also like to\nimplement a similar idea for the parse_publication_options although\nthere are only two options right now. Thoughts?\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Tue, 1 Jun 2021 20:25:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 12:55 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > Please see a3dc926 and the surrounding discussion for reasons why we've\n> > been using bitmaps for option parsing lately.\n>\n> Thanks for the suggestion. Here's a WIP patch implementing the\n> subscription command options as bitmaps similar to what commit a3dc926\n> did. Thoughts?\n\nI took a look at this latest WIP patch.\n\nThe patch applied cleanly.\nThe code builds OK.\nThe make check result is OK.\nThe TAP subscription make check result is OK.\n\nBelow are some minor review comments:\n\n------\n\n+typedef struct SubOptVals\n+{\n+ bool connect;\n+ bool enabled;\n+ bool create_slot;\n+ char *slot_name;\n+ bool copy_data;\n+ char *synchronous_commit;\n+ bool refresh;\n+ bool binary;\n+ bool streaming;\n+} SubOptVals;\n+\n+/* options for CREATE/ALTER SUBSCRIPTION */\n+typedef struct SubOpts\n+{\n+ bits32 supported_opts; /* bitmask of supported SUBOPT_* */\n+ bits32 specified_opts; /* bitmask of user specified SUBOPT_* */\n+ SubOptVals vals;\n+} SubOpts;\n+\n\n1. These seem only used by the subscriptioncmds.c file, so should they\nbe declared in there also instead of in the .h?\n\n2. I don't see what was gained by having the SubOptVals as a separate\nstruct; OTOH the code accessing the vals is more verbose because of\nit. Maybe consider combining everything into SubOpts and then can just\naccess \"opts.copy_data\" (etc) instead of \"opts.vals.copy_data\";\n\n------\n\n+ /* If connect option is supported, the others also need to be. */\n+ Assert((supported_opts & SUBOPT_CONNECT) == 0 ||\n+ ((supported_opts & SUBOPT_ENABLED) != 0 &&\n+ (supported_opts & SUBOPT_CREATE_SLOT) != 0 &&\n+ (supported_opts & SUBOPT_COPY_DATA) != 0));\n+\n+ /* Set default values for the supported options. 
*/\n+ if ((supported_opts & SUBOPT_CONNECT) != 0)\n+ vals->connect = true;\n+\n+ if ((supported_opts & SUBOPT_ENABLED) != 0)\n+ vals->enabled = true;\n+\n+ if ((supported_opts & SUBOPT_CREATE_SLOT) != 0)\n+ vals->create_slot = true;\n+\n+ if ((supported_opts & SUBOPT_SLOT_NAME) != 0)\n+ vals->slot_name = NULL;\n+\n+ if ((supported_opts & SUBOPT_COPY_DATA) != 0)\n+ vals->copy_data = true;\n\n3. Are all those \"!= 0\" really necessary when checking the\nsupported_opts against the bit masks? Maybe it is just a style thing,\nbut since there are so many of them I felt it contributed to clutter\nand made the code less readable. This pattern was in many places, not\njust the example above.\n\n------\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 2 Jun 2021 13:37:38 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 9:07 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 12:55 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, May 24, 2021 at 7:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > Please see a3dc926 and the surrounding discussion for reasons why we've\n> > > been using bitmaps for option parsing lately.\n> >\n> > Thanks for the suggestion. Here's a WIP patch implementing the\n> > subscription command options as bitmaps similar to what commit a3dc926\n> > did. Thoughts?\n>\n> I took a look at this latest WIP patch.\n\nThanks.\n\n> The patch applied cleanly.\n> The code builds OK.\n> The make check result is OK.\n> The TAP subscription make check result is OK.\n\nThanks for testing.\n\n> Below are some minor review comments:\n>\n> ------\n>\n> +typedef struct SubOptVals\n> +{\n> + bool connect;\n> + bool enabled;\n> + bool create_slot;\n> + char *slot_name;\n> + bool copy_data;\n> + char *synchronous_commit;\n> + bool refresh;\n> + bool binary;\n> + bool streaming;\n> +} SubOptVals;\n> +\n> +/* options for CREATE/ALTER SUBSCRIPTION */\n> +typedef struct SubOpts\n> +{\n> + bits32 supported_opts; /* bitmask of supported SUBOPT_* */\n> + bits32 specified_opts; /* bitmask of user specified SUBOPT_* */\n> + SubOptVals vals;\n> +} SubOpts;\n> +\n>\n> 1. These seem only used by the subscriptioncmds.c file, so should they\n> be declared in there also instead of in the .h?\n\nAgreed.\n\n> 2. I don't see what was gained by having the SubOptVals as a separate\n> struct; OTOH the code accessing the vals is more verbose because of\n> it. Maybe consider combining everything into SubOpts and then can just\n> access \"opts.copy_data\" (etc) instead of \"opts.vals.copy_data\";\n\nAgreed.\n\n> + /* If connect option is supported, the others also need to be. 
*/\n> + Assert((supported_opts & SUBOPT_CONNECT) == 0 ||\n> + ((supported_opts & SUBOPT_ENABLED) != 0 &&\n> + (supported_opts & SUBOPT_CREATE_SLOT) != 0 &&\n> + (supported_opts & SUBOPT_COPY_DATA) != 0));\n> +\n> + /* Set default values for the supported options. */\n> + if ((supported_opts & SUBOPT_CONNECT) != 0)\n> + vals->connect = true;\n> +\n> + if ((supported_opts & SUBOPT_ENABLED) != 0)\n> + vals->enabled = true;\n> +\n> + if ((supported_opts & SUBOPT_CREATE_SLOT) != 0)\n> + vals->create_slot = true;\n> +\n> + if ((supported_opts & SUBOPT_SLOT_NAME) != 0)\n> + vals->slot_name = NULL;\n> +\n> + if ((supported_opts & SUBOPT_COPY_DATA) != 0)\n> + vals->copy_data = true;\n>\n> 3. Are all those \"!= 0\" really necessary when checking the\n> supported_opts against the bit masks? Maybe it is just a style thing,\n> but since there are so many of them I felt it contributed to clutter\n> and made the code less readable. This pattern was in many places, not\n> just the example above.\n\nYeah these are necessary to know whether a particular option's bit is\nset in the bitmask. How about having a macro like below:\n#define IsSet(val, option) ((val & option) != 0)\nThe if statements can become like below:\nif (IsSet(supported_opts, SUBOPT_CONNECT))\nif (IsSet(supported_opts, SUBOPT_ENABLED))\nif (IsSet(supported_opts, SUBOPT_SLOT_NAME))\nif (IsSet(supported_opts, SUBOPT_COPY_DATA))\n\nThe above looks better to me. Thoughts?\n\nCan we implement a similar idea for the parse_publication_options\nalthough there are only two options right now. Option parsing code\nwill be consistent for logical replication DDLs and is extensible.\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 11:03:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
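The bitmask-based option handling discussed in the message above can be sketched in standalone C. The `SUBOPT_*` values, the struct layout, and the `IsSet` macro below are illustrative stand-ins mirroring the thread's proposal, not the committed PostgreSQL code:

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int bits32;

/* hypothetical option bits, one per subscription option */
#define SUBOPT_CONNECT     (1 << 0)
#define SUBOPT_ENABLED     (1 << 1)
#define SUBOPT_CREATE_SLOT (1 << 2)
#define SUBOPT_SLOT_NAME   (1 << 3)
#define SUBOPT_COPY_DATA   (1 << 4)

/* the macro proposed in the thread: is a given option bit set? */
#define IsSet(val, option) (((val) & (option)) != 0)

/* simplified stand-in for the patch's SubOpts structure */
typedef struct SubOpts
{
    bits32 supported_opts;  /* bitmap of supported SUBOPT_* */
    bits32 specified_opts;  /* bitmap of user-specified SUBOPT_* */
    bool   connect;
    bool   enabled;
} SubOpts;

/* set defaults only for the options the caller says are supported */
static void
set_defaults(SubOpts *opts)
{
    if (IsSet(opts->supported_opts, SUBOPT_CONNECT))
        opts->connect = true;
    if (IsSet(opts->supported_opts, SUBOPT_ENABLED))
        opts->enabled = true;
}
```

The single-bit test reads the same as the spelled-out `(supported_opts & SUBOPT_X) != 0` comparisons, but keeps the call sites short.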
{
"msg_contents": "On Wed, Jun 2, 2021 at 3:33 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > + /* If connect option is supported, the others also need to be. */\n> > + Assert((supported_opts & SUBOPT_CONNECT) == 0 ||\n> > + ((supported_opts & SUBOPT_ENABLED) != 0 &&\n> > + (supported_opts & SUBOPT_CREATE_SLOT) != 0 &&\n> > + (supported_opts & SUBOPT_COPY_DATA) != 0));\n> > +\n> > + /* Set default values for the supported options. */\n> > + if ((supported_opts & SUBOPT_CONNECT) != 0)\n> > + vals->connect = true;\n> > +\n> > + if ((supported_opts & SUBOPT_ENABLED) != 0)\n> > + vals->enabled = true;\n> > +\n> > + if ((supported_opts & SUBOPT_CREATE_SLOT) != 0)\n> > + vals->create_slot = true;\n> > +\n> > + if ((supported_opts & SUBOPT_SLOT_NAME) != 0)\n> > + vals->slot_name = NULL;\n> > +\n> > + if ((supported_opts & SUBOPT_COPY_DATA) != 0)\n> > + vals->copy_data = true;\n> >\n> > 3. Are all those \"!= 0\" really necessary when checking the\n> > supported_opts against the bit masks? Maybe it is just a style thing,\n> > but since there are so many of them I felt it contributed to clutter\n> > and made the code less readable. This pattern was in many places, not\n> > just the example above.\n>\n> Yeah these are necessary to know whether a particular option's bit is\n> set in the bitmask.\n\nHmmm. Maybe I did not ask the question properly. See below.\n\n> How about having a macro like below:\n> #define IsSet(val, option) ((val & option) != 0)\n> The if statements can become like below:\n> if (IsSet(supported_opts, SUBOPT_CONNECT))\n> if (IsSet(supported_opts, SUBOPT_ENABLED))\n> if (IsSet(supported_opts, SUBOPT_SLOT_NAME))\n> if (IsSet(supported_opts, SUBOPT_COPY_DATA))\n>\n> The above looks better to me. 
Thoughts?\n\nYes, it looks better, but (since the masks are all 1 bit) I was only\nasking why not do like:\n\nif (supported_opts & SUBOPT_CONNECT)\nif (supported_opts & SUBOPT_ENABLED)\nif (supported_opts & SUBOPT_SLOT_NAME)\nif (supported_opts & SUBOPT_COPY_DATA)\n\n>\n> Can we implement a similar idea for the parse_publication_options\n> although there are only two options right now. Option parsing code\n> will be consistent for logical replication DDLs and is extensible.\n> Thoughts?\n\nI have no strong opinion about it. It seems a trade off between having\na goal of \"code consistency\", versus \"if it aint broke don't fix it\".\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 2 Jun 2021 16:13:06 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> Yes, it looks better, but (since the masks are all 1 bit) I was only\n> asking why not do like:\n>\n> if (supported_opts & SUBOPT_CONNECT)\n> if (supported_opts & SUBOPT_ENABLED)\n> if (supported_opts & SUBOPT_SLOT_NAME)\n> if (supported_opts & SUBOPT_COPY_DATA)\n\nPlease review the attached v3 patch further.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Wed, 2 Jun 2021 18:11:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 6:11 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Yes, it looks better, but (since the masks are all 1 bit) I was only\n> > asking why not do like:\n> >\n> > if (supported_opts & SUBOPT_CONNECT)\n> > if (supported_opts & SUBOPT_ENABLED)\n> > if (supported_opts & SUBOPT_SLOT_NAME)\n> > if (supported_opts & SUBOPT_COPY_DATA)\n>\n> Please review the attached v3 patch further.\n\nAdded it to the commitfeset - https://commitfest.postgresql.org/33/3151/\n\nWith Regards,\nBharath Rupireddy.\n\nOn Wed, Jun 2, 2021 at 6:11 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Yes, it looks better, but (since the masks are all 1 bit) I was only\n> > asking why not do like:\n> >\n> > if (supported_opts & SUBOPT_CONNECT)\n> > if (supported_opts & SUBOPT_ENABLED)\n> > if (supported_opts & SUBOPT_SLOT_NAME)\n> > if (supported_opts & SUBOPT_COPY_DATA)\n>\n> Please review the attached v3 patch further.\n\nAdded it to the commitfeset - https://commitfest.postgresql.org/33/3151/\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Fri, 4 Jun 2021 17:03:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 10:41 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > Yes, it looks better, but (since the masks are all 1 bit) I was only\n> > asking why not do like:\n> >\n> > if (supported_opts & SUBOPT_CONNECT)\n> > if (supported_opts & SUBOPT_ENABLED)\n> > if (supported_opts & SUBOPT_SLOT_NAME)\n> > if (supported_opts & SUBOPT_COPY_DATA)\n>\n> Please review the attached v3 patch further.\n\nOK. I have applied the v3 patch and reviewed it again:\n\n- It applies OK.\n- The code builds OK.\n- The make check and TAP subscription tests are OK\n\n========\n\n1.\n+/*\n+ * Structure to hold the bitmaps and values of all the options for\n+ * CREATE/ALTER SUBSCRIPTION commands.\n+ */\n\nThere seems to be an extra space before \"commands.\"\n\n------\n\n2.\n+ /* If connect option is supported, the others also need to be. */\n+ Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n+ (IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(supported_opts, SUBOPT_COPY_DATA)));\n\nThis comment about \"the others\" doesn’t make sense to me.\n\ne.g. Why only these 3 options? What about all those other SUBOPT_* options?\n\n------\n\n3.\nI feel that this patch should be split into 2 parts\na) the SubOpts changes, and\nb) the mutually exclusive options change.\n\nI agree that the new SubOpts struct etc. is an improvement over existing code.\n\nBut, for the mutually exclusive options part I don't see what is\ngained by the new patch code. I preferred the old code with its\nmultiple ereports. Although it was a bit repetitive IMO it was easier\nto read that way, and length-wise there is almost no difference. 
So if\nit is less readable and not a lot shorter then what is the benefit of\nthe change?\n\n------\n\n4.\n- char *slotname;\n- bool slotname_given;\n- char *synchronous_commit;\n- bool binary_given;\n- bool binary;\n- bool streaming_given;\n- bool streaming;\n-\n- parse_subscription_options(stmt->options,\n- NULL, /* no \"connect\" */\n- NULL, NULL, /* no \"enabled\" */\n- NULL, /* no \"create_slot\" */\n- &slotname_given, &slotname,\n- NULL, /* no \"copy_data\" */\n- &synchronous_commit,\n- NULL, /* no \"refresh\" */\n- &binary_given, &binary,\n- &streaming_given, &streaming);\n-\n- if (slotname_given)\n+ SubOpts opts = {0};\n\nI feel it would be simpler to declare/init this \"opts\" variable just 1\ntime at top of the function AlterSubscription, instead of the 6\nseparate declarations in this v3 patch. Doing that can allow other\ncode simplifications too. (see #5)\n\n------\n\n5.\n case ALTER_SUBSCRIPTION_DROP_PUBLICATION:\n {\n bool isadd = stmt->kind == ALTER_SUBSCRIPTION_ADD_PUBLICATION;\n- bool copy_data;\n- bool refresh;\n List *publist;\n+ SubOpts opts = {0};\n+\n+ opts.supported_opts |= SUBOPT_REFRESH;\n+\n+ if (isadd)\n+ opts.supported_opts |= SUBOPT_COPY_DATA;\n\nI think having a separate \"isadd\" variable is made moot now since\nadding the SubOpts struct.\n\nInstead you can do this:\n+ if (stmt->kind == ALTER_SUBSCRIPTION_ADD_PUBLICATION)\n+ opts.supported_opts |= SUBOPT_COPY_DATA;\n\nOR (after #4) you could do this:\n\ncase ALTER_SUBSCRIPTION_ADD_PUBLICATION:\n opts.supported_opts |= SUBOPT_COPY_DATA;\n /* fall thru. */\ncase ALTER_SUBSCRIPTION_DROP_PUBLICATION:\n\n------\n\n6.\n+\n+#define IsSet(val, option) ((val & option) != 0)\n+\n\nYour IsSet macro might be better if changed to test *multiple* bits are all set.\n\nLike this:\n#define IsSet(val, bits) ((val & (bits)) == (bits))\n\n~\n\nMost of the code remains the same, but some can be simplified.\ne.g.\n+ /* If connect option is supported, the others also need to be. 
*/\n+ Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n+ (IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(supported_opts, SUBOPT_COPY_DATA)));\n\nBecomes:\nAssert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n IsSet(supported_opts, SUBOPT_ENABLED|SUBOPT_CREATE_SLOT|SUBOPT_COPY_DATA));\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 9 Jun 2021 15:07:23 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
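The multi-bit `IsSet` variant suggested in review comment 6 above can be exercised in isolation. The flag values here are invented for the sketch; the point is that the macro is true only when *all* of the requested bits are set:

```c
#include <assert.h>

typedef unsigned int bits32;

#define SUBOPT_CONNECT     (1 << 0)
#define SUBOPT_ENABLED     (1 << 1)
#define SUBOPT_CREATE_SLOT (1 << 2)
#define SUBOPT_COPY_DATA   (1 << 3)

/* true only when every bit in 'bits' is set in 'val' */
#define IsSet(val, bits) (((val) & (bits)) == (bits))

/* the review's compacted assertion: if CONNECT is supported,
 * ENABLED, CREATE_SLOT and COPY_DATA must be supported too */
static void
check_supported(bits32 supported_opts)
{
    assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||
           IsSet(supported_opts,
                 SUBOPT_ENABLED | SUBOPT_CREATE_SLOT | SUBOPT_COPY_DATA));
}
```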
{
"msg_contents": "On Wed, Jun 9, 2021 at 10:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 10:41 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 11:43 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > Yes, it looks better, but (since the masks are all 1 bit) I was only\n> > > asking why not do like:\n> > >\n> > > if (supported_opts & SUBOPT_CONNECT)\n> > > if (supported_opts & SUBOPT_ENABLED)\n> > > if (supported_opts & SUBOPT_SLOT_NAME)\n> > > if (supported_opts & SUBOPT_COPY_DATA)\n> >\n> > Please review the attached v3 patch further.\n>\n> OK. I have applied the v3 patch and reviewed it again:\n>\n> - It applies OK.\n> - The code builds OK.\n> - The make check and TAP subscription tests are OK\n\nThanks.\n\n> 1.\n> +/*\n> + * Structure to hold the bitmaps and values of all the options for\n> + * CREATE/ALTER SUBSCRIPTION commands.\n> + */\n>\n> There seems to be an extra space before \"commands.\"\n\nRemoved.\n\n> 2.\n> + /* If connect option is supported, the others also need to be. */\n> + Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n> + (IsSet(supported_opts, SUBOPT_ENABLED) &&\n> + IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n> + IsSet(supported_opts, SUBOPT_COPY_DATA)));\n>\n> This comment about \"the others\" doesn’t make sense to me.\n>\n> e.g. Why only these 3 options? What about all those other SUBOPT_* options?\n\nIt is an existing Assert and comment for ensuring somebody doesn't\ncall parse_subscription_options with SUBOPT_CONNECT, without\nSUBOPT_ENABLED, SUBOPT_CREATE_SLOT and SUBOPT_COPY_DATA. In other\nwords, when SUBOPT_CONNECT is passed in, the other three options\nshould also be passed. 
\" the others\" there in the comment makes sense\njust by looking at the Assert statement.\n\n> 3.\n> I feel that this patch should be split into 2 parts\n> a) the SubOpts changes, and\n> b) the mutually exclusive options change.\n\nDivided the patch into two.\n\n> I agree that the new SubOpts struct etc. is an improvement over existing code.\n>\n> But, for the mutually exclusive options part I don't see what is\n> gained by the new patch code. I preferred the old code with its\n> multiple ereports. Although it was a bit repetitive IMO it was easier\n> to read that way, and length-wise there is almost no difference. So if\n> it is less readable and not a lot shorter then what is the benefit of\n> the change?\n\nI personally don't like the repeated code when there's a chance of\ndoing it better. It might not reduce the loc, but it removes the many\nsimilar ereport(ERROR calls. PSA v4-0002 patch. I think the committer\ncan take a call on it.\n\n> 4.\n> - char *slotname;\n> - bool slotname_given;\n> - char *synchronous_commit;\n> - bool binary_given;\n> - bool binary;\n> - bool streaming_given;\n> - bool streaming;\n> -\n> - parse_subscription_options(stmt->options,\n> - NULL, /* no \"connect\" */\n> - NULL, NULL, /* no \"enabled\" */\n> - NULL, /* no \"create_slot\" */\n> - &slotname_given, &slotname,\n> - NULL, /* no \"copy_data\" */\n> - &synchronous_commit,\n> - NULL, /* no \"refresh\" */\n> - &binary_given, &binary,\n> - &streaming_given, &streaming);\n> -\n> - if (slotname_given)\n> + SubOpts opts = {0};\n>\n> I feel it would be simpler to declare/init this \"opts\" variable just 1\n> time at top of the function AlterSubscription, instead of the 6\n> separate declarations in this v3 patch. Doing that can allow other\n> code simplifications too. 
(see #5)\n\nDone.\n\n> 5.\n> case ALTER_SUBSCRIPTION_DROP_PUBLICATION:\n> {\n> bool isadd = stmt->kind == ALTER_SUBSCRIPTION_ADD_PUBLICATION;\n> - bool copy_data;\n> - bool refresh;\n> List *publist;\n> + SubOpts opts = {0};\n> +\n> + opts.supported_opts |= SUBOPT_REFRESH;\n> +\n> + if (isadd)\n> + opts.supported_opts |= SUBOPT_COPY_DATA;\n>\n> I think having a separate \"isadd\" variable is made moot now since\n> adding the SubOpts struct.\n>\n> Instead you can do this:\n> + if (stmt->kind == ALTER_SUBSCRIPTION_ADD_PUBLICATION)\n> + opts.supported_opts |= SUBOPT_COPY_DATA;\n>\n> OR (after #4) you could do this:\n>\n> case ALTER_SUBSCRIPTION_ADD_PUBLICATION:\n> opts.supported_opts |= SUBOPT_COPY_DATA;\n> /* fall thru. */\n> case ALTER_SUBSCRIPTION_DROP_PUBLICATION:\n\nDone.\n\n> 6.\n> +\n> +#define IsSet(val, option) ((val & option) != 0)\n> +\n>\n> Your IsSet macro might be better if changed to test *multiple* bits are all set.\n>\n> Like this:\n> #define IsSet(val, bits) ((val & (bits)) == (bits))\n>\n> ~\n>\n> Most of the code remains the same, but some can be simplified.\n> e.g.\n> + /* If connect option is supported, the others also need to be. */\n> + Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n> + (IsSet(supported_opts, SUBOPT_ENABLED) &&\n> + IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n> + IsSet(supported_opts, SUBOPT_COPY_DATA)));\n>\n> Becomes:\n> Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n> IsSet(supported_opts, SUBOPT_ENABLED|SUBOPT_CREATE_SLOT|SUBOPT_COPY_DATA));\n\nChanged.\n\nPSA v4 patch set.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Wed, 9 Jun 2021 20:58:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 1:28 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 10:37 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n[...]\n\nI checked the v4* patches.\nEverything applies and builds and tests OK for me.\n\n> > 2.\n> > + /* If connect option is supported, the others also need to be. */\n> > + Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n> > + (IsSet(supported_opts, SUBOPT_ENABLED) &&\n> > + IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n> > + IsSet(supported_opts, SUBOPT_COPY_DATA)));\n> >\n> > This comment about \"the others\" doesn’t make sense to me.\n> >\n> > e.g. Why only these 3 options? What about all those other SUBOPT_* options?\n>\n> It is an existing Assert and comment for ensuring somebody doesn't\n> call parse_subscription_options with SUBOPT_CONNECT, without\n> SUBOPT_ENABLED, SUBOPT_CREATE_SLOT and SUBOPT_COPY_DATA. In other\n> words, when SUBOPT_CONNECT is passed in, the other three options\n> should also be passed. \" the others\" there in the comment makes sense\n> just by looking at the Assert statement.\n>\n\nThis misses the point of my question. And deducing the meaning of the\n\"the others\" from the code is completely backwards! The comment\ndescribes the code. The code doesn't describe the comment.\n\nAgain, I was asking why “the others” are only these 3 options?. What\nabout binary? What about streaming? What about refresh?\nIn other words - what was the *intent* of that comment, and does the\nnew code still meet the requirements of that intent? I think it does\nnot.\n\nIf you see github [1] when that code was first implemented you can\nsee that “the others” referred to every option other than the\n“connect”. At that time, the only others were those 3 - enabled,\ncreate_slot, copy_data. 
But now there are lots more options so\nsomething definitely needs to change.\nE.g.\n- Maybe the Assert now needs to include all the new options as well?\n- Maybe the entire reason for the Assert has become redundant now due\nto the introduction of SubOpts. It looks like it was not functional\ncode - just something to quieten a static analysis tool.\n- Certainly “the others” is too vague and no longer has the original\nmeaning anymore\n\nI don't know the answer; my guess is that all this has become obsolete\nand the whole Assert and the dodgy comment can just be deleted.\n\n> > 3.\n> > I feel that this patch should be split into 2 parts\n> > a) the SubOpts changes, and\n> > b) the mutually exclusive options change.\n>\n> Divided the patch into two.\n>\n> > I agree that the new SubOpts struct etc. is an improvement over existing code.\n> >\n> > But, for the mutually exclusive options part I don't see what is\n> > gained by the new patch code. I preferred the old code with its\n> > multiple ereports. Although it was a bit repetitive IMO it was easier\n> > to read that way, and length-wise there is almost no difference. So if\n> > it is less readable and not a lot shorter then what is the benefit of\n> > the change?\n>\n> I personally don't like the repeated code when there's a chance of\n> doing it better. It might not reduce the loc, but it removes the many\n> similar ereport(ERROR calls. PSA v4-0002 patch. I think the committer\n> can take a call on it.\n>\n\nThanks for splitting them. My votes are +1 for patch 0001 and -1 for\npatch 0002. As you say, someone else can decide.\n\n------\n[1] https://github.com/postgres/postgres/commit/b1ff33fd9bb82937f4719f264972e6a3c83cdb89#\n\nKind Regards,\nPeter Smith\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 13:25:12 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 8:55 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > 2.\n> > > + /* If connect option is supported, the others also need to be. */\n> > > + Assert(!IsSet(supported_opts, SUBOPT_CONNECT) ||\n> > > + (IsSet(supported_opts, SUBOPT_ENABLED) &&\n> > > + IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n> > > + IsSet(supported_opts, SUBOPT_COPY_DATA)));\n> > >\n> > > This comment about \"the others\" doesn’t make sense to me.\n> > >\n> > > e.g. Why only these 3 options? What about all those other SUBOPT_* options?\n> >\n> > It is an existing Assert and comment for ensuring somebody doesn't\n> > call parse_subscription_options with SUBOPT_CONNECT, without\n> > SUBOPT_ENABLED, SUBOPT_CREATE_SLOT and SUBOPT_COPY_DATA. In other\n> > words, when SUBOPT_CONNECT is passed in, the other three options\n> > should also be passed. \" the others\" there in the comment makes sense\n> > just by looking at the Assert statement.\n> >\n>\n> This misses the point of my question. And deducing the meaning of the\n> \"the others\" from the code is completely backwards! The comment\n> describes the code. The code doesn't describe the comment.\n>\n> Again, I was asking why “the others” are only these 3 options?. What\n> about binary? What about streaming? What about refresh?\n> In other words - what was the *intent* of that comment, and does the\n> new code still meet the requirements of that intent? I think it does\n> not.\n>\n> If you see github [1] when that code was first implemented you can\n> see that “the others” referred to every option other than the\n> “connect”. At that time, the only others were those 3 - enabled,\n> create_slot, copy_data. But now there are lots more options so\n> something definitely needs to change.\n> E.g.\n> - Maybe the Assert now needs to include all the new options as well?\n> - Maybe the entire reason for the Assert has become redundant now due\n> to the introduction of SubOpts. 
It looks like it was not functional\n> code - just something to quieten a static analysis tool.\n> - Certainly “the others” is too vague and no longer has the original\n> meaning anymore\n>\n> I don't know the answer; my guess is that all this has become obsolete\n> and the whole Assert and the dodgy comment can just be deleted.\n\nHm. I get it. Unfortunately the commit b1ff33f is missing information\non what the coverity tool was complaining of and it has no related\ndiscussion at all.\n\nI agree to remove that assertion entirely. I will post a new patch set soon.\n\n> > > 3.\n> > > I feel that this patch should be split into 2 parts\n> > > a) the SubOpts changes, and\n> > > b) the mutually exclusive options change.\n> >\n> > Divided the patch into two.\n> >\n> > > I agree that the new SubOpts struct etc. is an improvement over existing code.\n> > >\n> > > But, for the mutually exclusive options part I don't see what is\n> > > gained by the new patch code. I preferred the old code with its\n> > > multiple ereports. Although it was a bit repetitive IMO it was easier\n> > > to read that way, and length-wise there is almost no difference. So if\n> > > it is less readable and not a lot shorter then what is the benefit of\n> > > the change?\n> >\n> > I personally don't like the repeated code when there's a chance of\n> > doing it better. It might not reduce the loc, but it removes the many\n> > similar ereport(ERROR calls. PSA v4-0002 patch. I think the committer\n> > can take a call on it.\n> >\n>\n> Thanks for splitting them. My votes are +1 for patch 0001 and -1 for\n> patch 0002. As you say, someone else can decide.\n\nLet's see how it goes further.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 09:17:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 09:17:55AM +0530, Bharath Rupireddy wrote:\n> Hm. I get it. Unfortunately the commit b1ff33f is missing information\n> on what the coverity tool was complaining of and it has no related\n> discussion at all.\n\nThis came from a FORWARD_NULL complain, due to the fact that\nparse_subscription_options() has checks for all three options if\nconnect is non-NULL a bit down after being done with the value\nassignments with the DefElems. So coverity was warning that we'd\nbetter be careful to always have all three pointers set if a\nconnection is wanted by the caller.\n--\nMichael",
"msg_date": "Thu, 10 Jun 2021 13:17:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 9:17 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > I don't know the answer; my guess is that all this has become obsolete\n> > and the whole Assert and the dodgy comment can just be deleted.\n>\n> Hm. I get it. Unfortunately the commit b1ff33f is missing information\n> on what the coverity tool was complaining of and it has no related\n> discussion at all.\n>\n> I agree to remove that assertion entirely. I will post a new patch set soon.\n\nPSA v5 patch set.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Thu, 10 Jun 2021 18:36:50 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, May 25, 2021 at 9:38 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> This should be okay, right? Well, almost. The problem here is if you\n> want to have a variable where you set more than one option, you have to\n> use bit-and of the enum values ... and the resulting value is no longer\n> part of the enum. A compiler would be understandably upset if you try\n> to pass that value in a variable of the enum datatype.\n\nYes. I dislike this style for precisely this reason.\n\nI may, however, be in the minority.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 11 Jun 2021 16:29:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "At Fri, 11 Jun 2021 16:29:10 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Tue, May 25, 2021 at 9:38 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > This should be okay, right? Well, almost. The problem here is if you\n> > want to have a variable where you set more than one option, you have to\n> > use bit-and of the enum values ... and the resulting value is no longer\n> > part of the enum. A compiler would be understandably upset if you try\n> > to pass that value in a variable of the enum datatype.\n> \n> Yes. I dislike this style for precisely this reason.\n> \n> I may, however, be in the minority.\n\nI personaly don't hate that so much, but generally an \"enumeration\"\ntype is considered to be non-numbers. That is, no arithmetics are\ndefined between two enum values. I think that C being able to perform\narithmetics on enums is just for implement reasons. I think that\narithmetics (logical operations are not arithmetics?) between boolean\nvalues are for the same reasons. Actually Java refuses arithmetics on\nenum values.\n\n> hoge.java:27: error: bad operand types for binary operator '+'\n> int x = theenum.x + theenum.z;\n> ^\n> first type: theenum\n> second type: theenum\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 15 Jun 2021 15:39:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
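The enum-arithmetic point can be shown with a minimal C sketch (type and member names invented): OR-ing two enumerators compiles in C but yields a value that is not any declared enumerator, which is the hazard described above and one reason the patch uses a plain integer bitmap with `#define`'d flags instead of an enum:

```c
#include <assert.h>

/* a hypothetical flag enum */
typedef enum OptFlag
{
    OPT_A = 1 << 0,
    OPT_B = 1 << 1,
    OPT_C = 1 << 2
} OptFlag;

/* In C this compiles, but the combined value is not any OptFlag
 * enumerator; Java would reject the operation outright. */
static int
combine(OptFlag a, OptFlag b)
{
    return (int) a | (int) b;
}
```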
{
"msg_contents": "On Thu, Jun 10, 2021 at 6:36 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 9:17 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > I don't know the answer; my guess is that all this has become obsolete\n> > > and the whole Assert and the dodgy comment can just be deleted.\n> >\n> > Hm. I get it. Unfortunately the commit b1ff33f is missing information\n> > on what the coverity tool was complaining of and it has no related\n> > discussion at all.\n> >\n> > I agree to remove that assertion entirely. I will post a new patch set soon.\n>\n> PSA v5 patch set.\n\nPSA v6 patch set rebased onto the latest master.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Fri, 18 Jun 2021 18:35:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 6:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > PSA v5 patch set.\n>\n> PSA v6 patch set rebased onto the latest master.\n\nPSA v7 patch set rebased onto the latest master.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Mon, 28 Jun 2021 15:24:00 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 3:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Jun 18, 2021 at 6:35 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > PSA v5 patch set.\n> >\n> > PSA v6 patch set rebased onto the latest master.\n>\n> PSA v7 patch set rebased onto the latest master.\n>\n\nFew comments:\n===============\n1.\n+typedef struct SubOpts\n+{\n+ bits32 supported_opts; /* bitmap of supported SUBOPT_* */\n+ bits32 specified_opts; /* bitmap of user specified SUBOPT_* */\n\nI think it will be better to not keep these as part of this structure.\nIs there a reason for doing so?\n\n2.\n+parse_subscription_options(List *stmt_options, SubOpts *opts)\n {\n ListCell *lc;\n- bool connect_given = false;\n- bool create_slot_given = false;\n- bool copy_data_given = false;\n- bool refresh_given = false;\n+ bits32 supported_opts;\n+ bits32 specified_opts;\n\n- /* If connect is specified, the others also need to be. */\n- Assert(!connect || (enabled && create_slot && copy_data));\n\nI am not sure whether removing this assertion will bring back the\ncoverity error for which it was added but I see that the reason for\nwhich it was added still holds true. The same is explained by Michael\nas well in his email [1]. I think it is better to keep an equivalent\nassert.\n\n3.\n * Since not all options can be specified in both commands, this function\n * will report an error on options if the target output pointer is NULL to\n * accommodate that.\n */\nstatic void\nparse_subscription_options(List *stmt_options, SubOpts *opts)\n\nThe comment above this function doesn't seem to match with the new\ncode. I think here it is referring to the mutually exclusive errors in\nthe function. 
If you agree with that, can we change the comment to\nsomething like: \"Since not all options can be specified in both\ncommands, this function will report an error if mutually exclusive\noptions are specified.\"\n\n\n[1] - https://www.postgresql.org/message-id/YMGSbdV1tMTJroA6%40paquier.xyz\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Jun 2021 16:37:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Few comments:\n> ===============\n> 1.\n> +typedef struct SubOpts\n> +{\n> + bits32 supported_opts; /* bitmap of supported SUBOPT_* */\n> + bits32 specified_opts; /* bitmap of user specified SUBOPT_* */\n>\n> I think it will be better to not keep these as part of this structure.\n> Is there a reason for doing so?\n\nI wanted to pack all the parsing related params passed to\nparse_subscription_options into a single structure since this is one\nof the main points (reducing the number of function params) on which\nthe patch is coded.\n\n> 2.\n> +parse_subscription_options(List *stmt_options, SubOpts *opts)\n> {\n> ListCell *lc;\n> - bool connect_given = false;\n> - bool create_slot_given = false;\n> - bool copy_data_given = false;\n> - bool refresh_given = false;\n> + bits32 supported_opts;\n> + bits32 specified_opts;\n>\n> - /* If connect is specified, the others also need to be. */\n> - Assert(!connect || (enabled && create_slot && copy_data));\n>\n> I am not sure whether removing this assertion will bring back the\n> coverity error for which it was added but I see that the reason for\n> which it was added still holds true. The same is explained by Michael\n> as well in his email [1]. I think it is better to keep an equivalent\n> assert.\n\nThe coverity was complaining FORWARD_NULL which, I think, can occur\nwith the pointers. In the patch, we don't deal with the pointers for\nthe options but with the bitmaps. So, I don't think we need that\nassertion. However, we can look for the coverity warnings in the\nbuildfarm after this patch gets in and fix if found any warnings.\n\n> 3.\n> * Since not all options can be specified in both commands, this function\n> * will report an error on options if the target output pointer is NULL to\n> * accommodate that.\n> */\n> static void\n> parse_subscription_options(List *stmt_options, SubOpts *opts)\n>\n> The comment above this function doesn't seem to match with the new\n> code. I think here it is referring to the mutually exclusive errors in\n> the function. If you agree with that, can we change the comment to\n> something like: \"Since not all options can be specified in both\n> commands, this function will report an error if mutually exclusive\n> options are specified.\"\n\nYes. Modified.\n\nThanks for taking a look at this. PFA v8 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Tue, 29 Jun 2021 20:56:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-Jun-29, Bharath Rupireddy wrote:\n\n> On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Few comments:\n> > ===============\n> > 1.\n> > +typedef struct SubOpts\n> > +{\n> > + bits32 supported_opts; /* bitmap of supported SUBOPT_* */\n> > + bits32 specified_opts; /* bitmap of user specified SUBOPT_* */\n> >\n> > I think it will be better to not keep these as part of this structure.\n> > Is there a reason for doing so?\n> \n> I wanted to pack all the parsing related params passed to\n> parse_subscription_options into a single structure since this is one\n> of the main points (reducing the number of function params) on which\n> the patch is coded.\n\nYeah I was looking at the struct too and this bit didn't seem great. I\nthink it'd be better to have the struct be output only; so\n\"specified_opts\" would be part of the struct (not necessarily with that\nname), but \"supported opts\" (which is input data) would be passed as a\nseparate argument. That seems cleaner to *me*, at least.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)\n\n\n",
"msg_date": "Tue, 29 Jun 2021 12:11:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 9:41 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-29, Bharath Rupireddy wrote:\n>\n> > On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Few comments:\n> > > ===============\n> > > 1.\n> > > +typedef struct SubOpts\n> > > +{\n> > > + bits32 supported_opts; /* bitmap of supported SUBOPT_* */\n> > > + bits32 specified_opts; /* bitmap of user specified SUBOPT_* */\n> > >\n> > > I think it will be better to not keep these as part of this structure.\n> > > Is there a reason for doing so?\n> >\n> > I wanted to pack all the parsing related params passed to\n> > parse_subscription_options into a single structure since this is one\n> > of the main points (reducing the number of function params) on which\n> > the patch is coded.\n>\n> Yeah I was looking at the struct too and this bit didn't seem great. I\n> think it'd be better to have the struct be output only; so\n> \"specified_opts\" would be part of the struct (not necessarily with that\n> name), but \"supported opts\" (which is input data) would be passed as a\n> separate argument. That seems cleaner to *me*, at least.\n>\n\nYeah, that sounds better than what we have in the patch. Also, I am\nnot sure if it is a good idea to use \"supported_opts\" for input data\nas that sounds more like what is output from the function, how about\nrequired_opts or input_opts? Also, we can name the output structure as\nSpecifiedSubOpts and \"specified_opts\" as either \"opts\" or \"out_opts\".\nI think naming these things is a bit matter of personal preference so\nI am fine if both you and Bharath find current naming more meaningful.\n\n+#define IsSet(val, bits) ((val & (bits)) == (bits))\nAlso, do you have any opinion on this define? I see at other places we\nuse in-place checks but as in this patch there are multiple instances\nof such check so probably such a define should be acceptable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Jun 2021 10:51:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 8:56 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Few comments:\n> > ===============\n>\n> > 2.\n> > +parse_subscription_options(List *stmt_options, SubOpts *opts)\n> > {\n> > ListCell *lc;\n> > - bool connect_given = false;\n> > - bool create_slot_given = false;\n> > - bool copy_data_given = false;\n> > - bool refresh_given = false;\n> > + bits32 supported_opts;\n> > + bits32 specified_opts;\n> >\n> > - /* If connect is specified, the others also need to be. */\n> > - Assert(!connect || (enabled && create_slot && copy_data));\n> >\n> > I am not sure whether removing this assertion will bring back the\n> > coverity error for which it was added but I see that the reason for\n> > which it was added still holds true. The same is explained by Michael\n> > as well in his email [1]. I think it is better to keep an equivalent\n> > assert.\n>\n> The coverity was complaining FORWARD_NULL which, I think, can occur\n> with the pointers. In the patch, we don't deal with the pointers for\n> the options but with the bitmaps. So, I don't think we need that\n> assertion. However, we can look for the coverity warnings in the\n> buildfarm after this patch gets in and fix if found any warnings.\n>\n\nI think irrespective of whether coverity reports or not, the assert\nappears useful to me because we are still doing the check for the\nother three options only if connect is supported.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Jun 2021 11:10:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 10:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 29, 2021 at 9:41 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jun-29, Bharath Rupireddy wrote:\n> >\n> > > On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Few comments:\n> > > > ===============\n> > > > 1.\n> > > > +typedef struct SubOpts\n> > > > +{\n> > > > + bits32 supported_opts; /* bitmap of supported SUBOPT_* */\n> > > > + bits32 specified_opts; /* bitmap of user specified SUBOPT_* */\n> > > >\n> > > > I think it will be better to not keep these as part of this structure.\n> > > > Is there a reason for doing so?\n> > >\n> > > I wanted to pack all the parsing related params passed to\n> > > parse_subscription_options into a single structure since this is one\n> > > of the main points (reducing the number of function params) on which\n> > > the patch is coded.\n> >\n> > Yeah I was looking at the struct too and this bit didn't seem great. I\n> > think it'd be better to have the struct be output only; so\n> > \"specified_opts\" would be part of the struct (not necessarily with that\n> > name), but \"supported opts\" (which is input data) would be passed as a\n> > separate argument. That seems cleaner to *me*, at least.\n> >\n>\n> Yeah, that sounds better than what we have in the patch. Also, I am\n> not sure if it is a good idea to use \"supported_opts\" for input data\n> as that sounds more like what is output from the function, how about\n> required_opts or input_opts? Also, we can name the output structure as\n> SpecifiedSubOpts and \"specified_opts\" as either \"opts\" or \"out_opts\".\n\nIMO, SubOpts looks okay. Also, I retained the specified_opts but moved\nsupported_opts out of the structure.\n\n> I think naming these things is a bit matter of personal preference so\n> I am fine if both you and Bharath find current naming more meaningful.\n\nPlease let me know if any of the names look odd.\n\n> +#define IsSet(val, bits) ((val & (bits)) == (bits))\n> Also, do you have any opinion on this define? I see at other places we\n> use in-place checks but as in this patch there are multiple instances\n> of such check so probably such a define should be acceptable.\n\nYeah. I'm retaining this macro as it makes code readable.\n\nPFA v9 patch set for further review.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Wed, 30 Jun 2021 19:38:04 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 11:11 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 29, 2021 at 8:56 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Tue, Jun 29, 2021 at 4:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Few comments:\n> > > ===============\n> >\n> > > 2.\n> > > +parse_subscription_options(List *stmt_options, SubOpts *opts)\n> > > {\n> > > ListCell *lc;\n> > > - bool connect_given = false;\n> > > - bool create_slot_given = false;\n> > > - bool copy_data_given = false;\n> > > - bool refresh_given = false;\n> > > + bits32 supported_opts;\n> > > + bits32 specified_opts;\n> > >\n> > > - /* If connect is specified, the others also need to be. */\n> > > - Assert(!connect || (enabled && create_slot && copy_data));\n> > >\n> > > I am not sure whether removing this assertion will bring back the\n> > > coverity error for which it was added but I see that the reason for\n> > > which it was added still holds true. The same is explained by Michael\n> > > as well in his email [1]. I think it is better to keep an equivalent\n> > > assert.\n> >\n> > The coverity was complaining FORWARD_NULL which, I think, can occur\n> > with the pointers. In the patch, we don't deal with the pointers for\n> > the options but with the bitmaps. So, I don't think we need that\n> > assertion. However, we can look for the coverity warnings in the\n> > buildfarm after this patch gets in and fix if found any warnings.\n> >\n>\n> I think irrespective of whether coverity reports or not, the assert\n> appears useful to me because we are still doing the check for the\n> other three options only if connect is supported.\n\nAdded the assert back. PSA v9 patch set posted upthread.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 30 Jun 2021 19:38:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 7:38 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> PFA v9 patch set for further review.\n>\n\nThe first patch looks mostly good to me. I have made some minor\nmodifications to the 0001 patch: (a) added/edited few comments, (b)\nthere is no need to initialize supported_opts variable in\nCreateSubscription, (c) used extra bracket in macro, (d) ran pgindent.\n\nKindly check and let me know what you think of the attached? I am not\nsure whether second patch is an improvement over what we have\ncurrently but if you and others feel that is a good idea then you can\nsubmit the same after the main patch gets committed.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 1 Jul 2021 16:37:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 4:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 7:38 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > PFA v9 patch set for further review.\n> >\n>\n> The first patch looks mostly good to me. I have made some minor\n> modifications to the 0001 patch: (a) added/edited few comments, (b)\n> there is no need to initialize supported_opts variable in\n> CreateSubscription, (c) used extra bracket in macro, (d) ran pgindent.\n\nThanks a lot Amit.\n\n> Kindly check and let me know what you think of the attachment?\n1) Isn't good to mention in the commit message a note about the\nlimitation of the maximum number of SUBOPT_*? Currently it is 32\nbecause of bits32 data type. If required, then we might have to\nintroduce bits64 (typedef to uint64).\n2) How about just saying \"Refactor function\nparse_subscription_options.\" instead of \"Refactor function\nparse_subscription_options().\" in the commit message? This is similar\nto the commit 531737d \"Refactor function parse_output_parameters.\"\n3) There's an whitespace introduced making the SUBOPT_SLOT_NAME,\nSUBOPT_SYNCHRONOUS_COMMIT and SUBOPT_STREAMING not falling line with\nthe SUBOPT_CONNECT\n+ /* Options that can be specified by CREATE SUBSCRIPTION command. */\n+ supported_opts = (SUBOPT_CONNECT | SUBOPT_ENABLED | SUBOPT_CREATE_SLOT |\n+ SUBOPT_SLOT_NAME | SUBOPT_COPY_DATA |\n+ SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\n+ SUBOPT_STREAMING);\nShouldn't it be something like below?\n+ supported_opts = (SUBOPT_CONNECT | SUBOPT_ENABLED | SUBOPT_CREATE_SLOT |\n+ SUBOPT_SLOT_NAME | SUBOPT_COPY_DATA |\n+ SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\n+ SUBOPT_STREAMING);\n\nThe other changes look good to me.\n\n> I am not sure whether the second patch is an improvement over what we have\n> currently but if you and others feel that is a good idea then you can\n> submit the same after the main patch gets committed.\n\n Peter Smith was also not happy with that patch. Anyways, I will post\nthat patch in this thread after 0001 gets in and see if it interests\nother hackers.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 17:37:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 5:37 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jul 1, 2021 at 4:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jun 30, 2021 at 7:38 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > PFA v9 patch set for further review.\n> > >\n> >\n> > The first patch looks mostly good to me. I have made some minor\n> > modifications to the 0001 patch: (a) added/edited few comments, (b)\n> > there is no need to initialize supported_opts variable in\n> > CreateSubscription, (c) used extra bracket in macro, (d) ran pgindent.\n>\n> Thanks a lot Amit.\n>\n> > Kindly check and let me know what you think of the attachment?\n> 1) Isn't good to mention in the commit message a note about the\n> limitation of the maximum number of SUBOPT_*? Currently it is 32\n> because of bits32 data type. If required, then we might have to\n> introduce bits64 (typedef to uint64).\n>\n\nI am not sure if it is required to mention it as this is not an\nexposed struct and I think we can't reach that number in near future.\n\n> 2) How about just saying \"Refactor function\n> parse_subscription_options.\" instead of \"Refactor function\n> parse_subscription_options().\" in the commit message? This is similar\n> to the commit 531737d \"Refactor function parse_output_parameters.\"\n>\n\nIt hardly matters. We can write either way. I normally use () after\nfunction name.\n\n> 3) There's an whitespace introduced making the SUBOPT_SLOT_NAME,\n> SUBOPT_SYNCHRONOUS_COMMIT and SUBOPT_STREAMING not falling line with\n> the SUBOPT_CONNECT\n>\n\nokay, will fix it.\n\n> + /* Options that can be specified by CREATE SUBSCRIPTION command. */\n> + supported_opts = (SUBOPT_CONNECT | SUBOPT_ENABLED | SUBOPT_CREATE_SLOT |\n> + SUBOPT_SLOT_NAME | SUBOPT_COPY_DATA |\n> + SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\n> + SUBOPT_STREAMING);\n> Shouldn't it be something like below?\n> + supported_opts = (SUBOPT_CONNECT | SUBOPT_ENABLED | SUBOPT_CREATE_SLOT |\n> + SUBOPT_SLOT_NAME | SUBOPT_COPY_DATA |\n> + SUBOPT_SYNCHRONOUS_COMMIT | SUBOPT_BINARY |\n> + SUBOPT_STREAMING);\n>\n\nBoth appear the same to me. Can you please highlight the difference in some way?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 18:32:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "I find the business with OPT_NONE a bit uselessly verbose. It's like we\nhaven't completely made up our minds that zero means no options set.\nWouldn't it be simpler to remove that #define and leave the variable\nuninitialized until we want to set the options we want, and then use\nplain assignment instead of |= ?\n\nI propose the attached cleanup. Some comments seem a bit too obvious;\nthe use of a local variable for specified_opts instead of directly\nassigning to the one in the struct seemed unnecessary; places that call\nparse_subscription_options() with only one bit set don't need a separate\nvariable for the allowed options; added some whitespace.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/",
"msg_date": "Thu, 1 Jul 2021 10:29:58 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 6:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > 3) There's an whitespace introduced making the SUBOPT_SLOT_NAME,\n> > SUBOPT_SYNCHRONOUS_COMMIT and SUBOPT_STREAMING not falling line with\n> > the SUBOPT_CONNECT\n> >\n>\n> okay, will fix it.\n\nPSA v11 patch which also has the cleanup patch shared by Alvaro Herrera.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Thu, 1 Jul 2021 21:25:41 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 8:00 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I find the business with OPT_NONE a bit uselessly verbose. It's like we\n> haven't completely made up our minds that zero means no options set.\n> Wouldn't it be simpler to remove that #define and leave the variable\n> uninitialized until we want to set the options we want, and then use\n> plain assignment instead of |= ?\n>\n\nYeah, that makes sense. I have removed its usage from\nCreateSubscription but I think we can get rid of it entirely as well.\n\nThe latest patch sent by Bharath looks good to me. Would you like to\ncommit it or shall I take care of it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 2 Jul 2021 07:36:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-Jul-02, Amit Kapila wrote:\n\n> Yeah, that makes sense. I have removed its usage from\n> CreateSubscription but I think we can get rid of it entirely as well.\n\nNod.\n\n> The latest patch sent by Bharath looks good to me. Would you like to\n> commit it or shall I take care of it?\n\nPlease, go ahead.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 1 Jul 2021 23:05:26 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 8:35 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > The latest patch sent by Bharath looks good to me. Would you like to\n> > commit it or shall I take care of it?\n>\n> Please, go ahead.\n>\n\nOkay, I'll push it early next week (by Tuesday) unless there are more\ncomments or suggestions. Thanks!\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 2 Jul 2021 12:36:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Fri, Jul 2, 2021 at 12:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 8:35 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > The latest patch sent by Bharath looks good to me. Would you like to\n> > > commit it or shall I take care of it?\n> >\n> > Please, go ahead.\n> >\n>\n> Okay, I'll push it early next week (by Tuesday) unless there are more\n> comments or suggestions. Thanks!\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 13:51:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Okay, I'll push it early next week (by Tuesday) unless there are more\n> > comments or suggestions. Thanks!\n> >\n>\n> Pushed!\n\nThanks, Amit. I'm posting the 0002 patch which removes extra ereport\ncalls using local variables. Please review it.\n\nRegards,\nBharath Rupireddy.",
"msg_date": "Tue, 6 Jul 2021 16:06:01 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-Jul-06, Bharath Rupireddy wrote:\n\n> Thanks, Amit. I'm posting the 0002 patch which removes extra ereport\n> calls using local variables. Please review it.\n\nI looked at this the other day and I'm not sure I like it very much.\nIt's not making anything any simpler, it's barely saving two lines of\ncode. I think we can do without this change.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Jul 2021 11:54:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 6:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 2, 2021 at 12:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 2, 2021 at 8:35 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > > > The latest patch sent by Bharath looks good to me. Would you like to\n> > > > commit it or shall I take care of it?\n> > >\n> > > Please, go ahead.\n> > >\n> >\n> > Okay, I'll push it early next week (by Tuesday) unless there are more\n> > comments or suggestions. Thanks!\n> >\n>\n> Pushed!\n\nYesterday, I needed to refactor a lot of code due to this push [1].\n\nThe refactoring exercise caused me to study these v11 changes much more deeply.\n\nIMO there are a few improvements that should be made:\n\n//////\n\n1. Zap 'opts' up-front\n\n+ *\n+ * Caller is expected to have cleared 'opts'.\n\nThis comment is putting the onus on the caller to \"do the right thing\".\n\nI think that hopeful expectations about input should be removed - the\nfunction should just be responsible itself just to zap the SubOpts\nup-front. It makes the code more readable, and it removes any\npotential risk of garbage unintentionally passed in that struct.\n\n /* Start out with cleared opts. */\n memset(opts, 0, sizeof(SubOpts));\n\n\nAlternatively, at least there should be an assertion for some sanity check.\n\nAssert(opt->specified_opts == 0);\n\n----\n\n2. Remove redundant conditions\n\n /* Check for incompatible options from the user. */\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"create_slot = true\")));\n\n- if (copy_data && copy_data_given && *copy_data)\n+ if (opts->copy_data &&\n+ IsSet(supported_opts, SUBOPT_COPY_DATA) &&\n+ IsSet(opts->specified_opts, SUBOPT_COPY_DATA))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"copy_data = true\")));\n\nBy definition, this function only allows any option to be\n\"specified_opts\" if that option is also \"supported_opts\". So, there is\nreally no need in the above code to re-check again that it is\nsupported.\n\nIt can be simplified like this:\n\n /* Check for incompatible options from the user. */\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"create_slot = true\")));\n\n- if (copy_data && copy_data_given && *copy_data)\n+ if (opts->copy_data &&\n+ IsSet(opts->specified_opts, SUBOPT_COPY_DATA))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n errmsg(\"%s and %s are mutually exclusive options\",\n \"connect = false\", \"copy_data = true\")));\n\n-----\n\n3. Remove redundant conditions\n\nSame as 2. Here are more examples of conditions where the redundant\nchecking of \"supported_opts\" can be removed.\n\n /*\n * Do additional checking for disallowed combination when slot_name = NONE\n * was used.\n */\n- if (slot_name && *slot_name_given && !*slot_name)\n+ if (!opts->slot_name &&\n+ IsSet(supported_opts, SUBOPT_SLOT_NAME) &&\n+ IsSet(opts->specified_opts, SUBOPT_SLOT_NAME))\n {\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"slot_name = NONE\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"slot_name = NONE\", \"create_slot = true\")));\n\nIt can be simplified like this:\n\n /*\n * Do additional checking for disallowed combination when slot_name = NONE\n * was used.\n */\n- if (slot_name && *slot_name_given && !*slot_name)\n+ if (!opts->slot_name &&\n+ IsSet(opts->specified_opts, SUBOPT_SLOT_NAME))\n {\n- if (enabled && *enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(opts->specified_opts, SUBOPT_ENABLED))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"slot_name = NONE\", \"enabled = true\")));\n\n- if (create_slot && create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"%s and %s are mutually exclusive options\",\n \"slot_name = NONE\", \"create_slot = true\")));\n\n------\n\n4. Remove redundant conditions\n\n- if (enabled && !*enabled_given && *enabled)\n+ if (opts->enabled &&\n+ IsSet(supported_opts, SUBOPT_ENABLED) &&\n+ !IsSet(opts->specified_opts, SUBOPT_ENABLED))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"subscription with %s must also set %s\",\n \"slot_name = NONE\", \"enabled = false\")));\n\n- if (create_slot && !create_slot_given && *create_slot)\n+ if (opts->create_slot &&\n+ IsSet(supported_opts, SUBOPT_CREATE_SLOT) &&\n+ !IsSet(opts->specified_opts, SUBOPT_CREATE_SLOT))\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"subscription with %s must also set %s\",\n \"slot_name = NONE\", \"create_slot = false\")));\n\n\nThis code can be simplified even more than the others mentioned,\nbecause here the \"specified_opts\" checks were already done in the code\nthat precedes this.\n\nIt can be simplified like this:\n\n- if (enabled && !*enabled_given && *enabled)\n+ if (opts->enabled)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"subscription with %s must also set %s\",\n \"slot_name = NONE\", \"enabled = false\")));\n\n- if (create_slot && !create_slot_given && *create_slot)\n+ if (opts->create_slot)\n ereport(ERROR,\n (errcode(ERRCODE_SYNTAX_ERROR),\n+ /*- translator: both %s are strings of the form \"option = value\" */\n errmsg(\"subscription with %s must also set %s\",\n \"slot_name = NONE\", \"create_slot = false\")));\n\n//////\n\nPSA my patch which includes all the fixes mentioned above.\n\n(Make check, and TAP subscription tests are tested and pass OK)\n\n------\n[1] https://github.com/postgres/postgres/commit/8aafb02616753f5c6c90bbc567636b73c0cbb9d4\n\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 7 Jul 2021 10:03:43 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On 2021-Jul-07, Peter Smith wrote:\n\n> 1. Zap 'opts' up-front\n> \n> + *\n> + * Caller is expected to have cleared 'opts'.\n> \n> This comment is putting the onus on the caller to \"do the right thing\".\n> \n> I think that hopeful expectations about input should be removed - the\n> function should just be responsible itself just to zap the SubOpts\n> up-front. It makes the code more readable, and it removes any\n> potential risk of garbage unintentionally passed in that struct.\n> \n> /* Start out with cleared opts. */\n> memset(opts, 0, sizeof(SubOpts));\n\nYeah, I gave the initialization aspect some thought too when I reviewed\n0001. Having an explicit memset() just for sanity check is a waste when\nyou don't really need it; and we're initializing the struct already at\ndeclaration time by assigning {0} to it, so having to add memset feels\nlike such a waste. Another point in the same area is that some of the\nstruct members are initialized to some default value different from 0,\nso I wondered if it would have been better to remove the = {0} and\ninstead have another function that would set everything up the way we\nwant; but it seemed a bit excessive, so I ended up not suggesting that.\n\nWe have many places in the source tree where the caller is expected to\ndo the right thing, even when those right things are more complex than\nthis one. This one place isn't terribly bad in that regard, given that\nit's a pretty well contained static struct anyway (we would certainly\nnot export a struct of this name in any .h file.) I don't think it's\nall that bad.\n\n> Alternatively, at least there should be an assertion for some sanity check.\n> \n> Assert(opt->specified_opts == 0);\n\nNo opinion on this. It doesn't seem valuable enough, but maybe I'm on\nthe minority on this.\n\n> 2. Remove redundant conditions\n> \n> /* Check for incompatible options from the user. */\n> - if (enabled && *enabled_given && *enabled)\n> + if (opts->enabled &&\n> + IsSet(supported_opts, SUBOPT_ENABLED) &&\n> + IsSet(opts->specified_opts, SUBOPT_ENABLED))\n\n(etc)\n\nYeah, I thought about this too when I saw the 0002 patch in this series.\nI agree that the extra rechecks are a bit pointless.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 6 Jul 2021 22:06:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
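[Editorial note] The bitmask-based option handling discussed in this thread (the `SubOpts` struct, `specified_opts`, and the `IsSet` macro) can be illustrated with a small stand-alone sketch. The flag names and the specific conflict rule below are assumptions chosen for illustration, mimicking the style of `parse_subscription_options()`; this is not the actual PostgreSQL code.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical option flags, in the style of the SUBOPT_* constants. */
#define SUBOPT_CONNECT      (1 << 0)
#define SUBOPT_ENABLED      (1 << 1)

/* True if all bits in 'bits' are set in 'val'. */
#define IsSet(val, bits)    (((val) & (bits)) == (bits))

typedef struct SubOpts
{
    uint32_t    specified_opts; /* bitmask of options the user actually gave */
    bool        connect;
    bool        enabled;
} SubOpts;

/*
 * Return true if the user explicitly asked for connect=false together
 * with enabled=true -- an example of a mutually exclusive combination.
 * Note that checking specified_opts distinguishes "defaulted" from
 * "explicitly given", which is the point of the bitmask approach.
 */
static bool
has_incompatible_opts(const SubOpts *opts)
{
    return IsSet(opts->specified_opts, SUBOPT_CONNECT) && !opts->connect &&
           IsSet(opts->specified_opts, SUBOPT_ENABLED) && opts->enabled;
}
```

Because the struct is zero-initialized with `= {0}` at declaration, `specified_opts` starts empty, which is why the thread debates whether an extra `memset()` or assertion on entry adds anything.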
{
    "msg_contents": "On Tue, Jul 6, 2021 at 9:24 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-06, Bharath Rupireddy wrote:\n>\n> > Thanks, Amit. I'm posting the 0002 patch which removes extra ereport\n> > calls using local variables. Please review it.\n>\n> I looked at this the other day and I'm not sure I like it very much.\n> It's not making anything any simpler, it's barely saving two lines of\n> code. I think we can do without this change.\n\nJust for the record: I will withdraw the 0002 patch as no one has\nshown interest. Thanks.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 7 Jul 2021 08:00:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 7:36 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-07, Peter Smith wrote:\n>\n> > 1. Zap 'opts' up-front\n> >\n> > + *\n> > + * Caller is expected to have cleared 'opts'.\n> >\n> > This comment is putting the onus on the caller to \"do the right thing\".\n> >\n> > I think that hopeful expectations about input should be removed - the\n> > function should just be responsible itself just to zap the SubOpts\n> > up-front. It makes the code more readable, and it removes any\n> > potential risk of garbage unintentionally passed in that struct.\n> >\n> > /* Start out with cleared opts. */\n> > memset(opts, 0, sizeof(SubOpts));\n>\n> Yeah, I gave the initialization aspect some thought too when I reviewed\n> 0001. Having an explicit memset() just for sanity check is a waste when\n> you don't really need it; and we're initializing the struct already at\n> declaration time by assigning {0} to it, so having to add memset feels\n> like such a waste. Another point in the same area is that some of the\n> struct members are initialized to some default value different from 0,\n> so I wondered if it would have been better to remove the = {0} and\n> instead have another function that would set everything up the way we\n> want; but it seemed a bit excessive, so I ended up not suggesting that.\n>\n> We have many places in the source tree where the caller is expected to\n> do the right thing, even when those right things are more complex than\n> this one. This one place isn't terribly bad in that regard, given that\n> it's a pretty well contained static struct anyway (we would certainly\n> not export a struct of this name in any .h file.) I don't think it's\n> all that bad.\n>\n> > Alternatively, at least there should be an assertion for some sanity check.\n> >\n> > Assert(opt->specified_opts == 0);\n>\n> No opinion on this. 
It doesn't seem valuable enough, but maybe I'm on\n> the minority on this.\n>\n\nI am also not sure if such an assertion adds much value.\n\n> > 2. Remove redundant conditions\n> >\n> > /* Check for incompatible options from the user. */\n> > - if (enabled && *enabled_given && *enabled)\n> > + if (opts->enabled &&\n> > + IsSet(supported_opts, SUBOPT_ENABLED) &&\n> > + IsSet(opts->specified_opts, SUBOPT_ENABLED))\n>\n> (etc)\n>\n> Yeah, I thought about this too when I saw the 0002 patch in this series.\n> I agree that the extra rechecks are a bit pointless.\n>\n\nI don't think the committed patch has made anything worse here or\nadded any new condition. Now, even if we want to change these\nconditions, it is better to discuss them separately. As per the\ncurrent code these look a bit redundant, but OTOH, in the future one\nmight expect that if the supported option is not passed by the caller\nand the user has specified it then it should be an error. For example,\nwe may not want to support some option via an Alter variant even though it is\nsupported by the Create variant.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 7 Jul 2021 08:34:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 5:33 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> PSA my patch which includes all the fixes mentioned above.\n\nI agree with Amit to start a separate thread to discuss these points.\nIMO, we can close this thread. What do you think?\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 7 Jul 2021 09:05:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 1:35 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jul 7, 2021 at 5:33 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > PSA my patch which includes all the fixes mentioned above.\n>\n> I agree with Amit to start a separate thread to discuss these points.\n> IMO, we can close this thread. What do you think?\n>\n\nOK. I created a new thread called \"parse_subscription_options -\nsuggested improvements\" here [1]\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPtXHfLgLHDDJ8ZN5f5Be_37mJoxpEsRg8LNmm4XCr06Rw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 10:53:48 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 6:24 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> OK. I created a new thread called \"parse_subscription_options -\n> suggested improvements\" here [1]\n\nThanks. I closed the CF entry for this thread.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 08:33:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor \"mutually exclusive options\" error reporting code in\n parse_subscription_options"
}
] |
[
{
    "msg_contents": "Hi\n\n\nI've been discussing user_catalog_table\nand the possibility of deadlock during synchronous mode\nof logical replication in [1]. I'll launch a new thread\nand summarize the contents so that anyone who is\ninterested in this topic can join the discussion.\n\nWe don't have any example of user_catalog_tables\nin the core code, so any response and idea related to this area is helpful.\n\nNow, we don't disallow an output plugin from taking a lock\non a user_catalog_table. Then, we can consider a deadlock scenario like below.\n\n1. TRUNCATE command is performed on user_catalog_table.\n2. TRUNCATE command locks the table and index with ACCESS EXCLUSIVE LOCK.\n3. TRUNCATE waits for the subscriber's synchronization\n\twhen synchronous_standby_names is set.\n4. Here, the walsender hangs, *if* it tries to acquire a lock on the user_catalog_table,\n\tbecause the table it wants to read is already locked by the TRUNCATE.\n\n(Here, we don't talk about the LOCK command because\nthat discussion is in progress independently in another thread - [2])\n\nAnother important point here is that we can *not*\nknow how and when a plugin does read-only access to a user_catalog_table in general,\nbecause it depends on the purpose of the plugin.\nSo, I'm thinking that changing the behavior of the TRUNCATE side\nto error out when TRUNCATE is performed on a user_catalog_table\nwill make the concern disappear. Kindly have a look at the attached patch.\n\n[1] - https://www.postgresql.org/message-id/MEYP282MB166933B1AB02B4FE56E82453B64D9%40MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM\n\n[2] - https://www.postgresql.org/message-id/CALDaNm1UB==gL9Poad4ETjfcyGdJBphWEzEZocodnBd--kJpVw@mail.gmail.com\n\n\nBest Regards,\n\tTakamichi Osumi",
"msg_date": "Wed, 19 May 2021 10:32:04 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Deadlock concern caused by TRUNCATE on user_catalog_table in\n synchronous mode"
}
] |
[
{
    "msg_contents": "Several weeks ago I saw this issue in a production environment. The\nread only file looks like a credential file. Michael told me that\nusually such kinds of files are better kept in non-pgdata\ndirectories in production environments. Thinking about it further, it seems that\npg_rewind should be more user friendly and tolerate such scenarios.\n\nThe failure error is \"Permission denied\" after open(). The reason is that\nopen() fails with the below mode in open_target_file()\n\n    mode = O_WRONLY | O_CREAT | PG_BINARY;\n    if (trunc)\n        mode |= O_TRUNC;\n    dstfd = open(dstpath, mode, pg_file_create_mode);\n\nThe fix should be quite simple: if open() fails with \"Permission denied\"\nthen we try to call chmod to add S_IWUSR just before open() and call\nchmod to reset the flags soon after open(). A stat() call to get the\nprevious st_mode flags is needed.\n\nAny other suggestions or thoughts?\n\nThanks,\n\n-- \nPaul Guo (Vmware)",
"msg_date": "Wed, 19 May 2021 18:43:46 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind fails if there is a read only file."
},
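[Editorial note] The workaround Paul describes above (temporarily adding `S_IWUSR` around the `open()` call and restoring the original mode afterwards) can be sketched as follows. The function name is hypothetical and this is not the actual pg_rewind patch, just the idea from the message. Note that restoring the mode does not affect the already-opened descriptor, since POSIX checks permissions at open time.

```c
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>

/*
 * Open 'path' for writing; if that fails with EACCES on an existing
 * read-only file, temporarily add owner-write permission, retry the
 * open(), and restore the original mode bits.
 */
static int
open_despite_readonly(const char *path, int flags)
{
    int         fd;
    struct stat st;

    fd = open(path, flags, 0600);
    if (fd < 0 && errno == EACCES && stat(path, &st) == 0)
    {
        mode_t      orig_mode = st.st_mode & 07777;

        if (chmod(path, orig_mode | S_IWUSR) == 0)
        {
            fd = open(path, flags, 0600);
            (void) chmod(path, orig_mode);  /* restore the original flags */
        }
    }
    return fd;
}
```

On a genuinely read-only filesystem the inner `chmod()` itself fails, so the function still returns -1, which is the failure mode discussed later in the thread.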
{
"msg_contents": "\nOn 5/19/21 6:43 AM, Paul Guo wrote:\n> Several weeks ago I saw this issue in a production environment. The\n> read only file looks like a credential file. Michael told me that\n> usually such kinds of files should be better kept in non-pgdata\n> directories in production environments. Thought further it seems that\n> pg_rewind should be more user friendly to tolerate such scenarios.\n>\n> The failure error is \"Permission denied\" after open(). The reason is\n> open() fais with the below mode in open_target_file()\n>\n> mode = O_WRONLY | O_CREAT | PG_BINARY;\n> if (trunc)\n> mode |= O_TRUNC;\n> dstfd = open(dstpath, mode, pg_file_create_mode);\n>\n> The fix should be quite simple, if open fails with \"Permission denied\"\n> then we try to call chmod to add a S_IWUSR just before open() and call\n> chmod to reset the flags soon after open(). A stat() call to get\n> previous st_mode flags is needed.\n>\n\nPresumably the user has a reason for adding the file read-only to the\ndata directory, and we shouldn't lightly ignore that.\n\nMichael's advice is reasonable. This seems like a case of:\n\n Patient: Doctor, it hurts when I do this.\n\n Doctor: Stop doing that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 15:25:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "> Presumably the user has a reason for adding the file read-only to the\n> data directory, and we shouldn't lightly ignore that.\n>\n> Michael's advice is reasonable. This seems like a case of:\n\nI agree. Attached is a short patch to handle the case. The patch was\ntested in my dev environment.",
"msg_date": "Thu, 20 May 2021 18:17:49 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "\nOn 5/20/21 6:17 AM, Paul Guo wrote:\n>> Presumably the user has a reason for adding the file read-only to the\n>> data directory, and we shouldn't lightly ignore that.\n>>\n>> Michael's advice is reasonable. This seems like a case of:\n> I agree. Attached is a short patch to handle the case. The patch was\n> tested in my dev environment.\n\n\n\nYou seem to have missed my point completely. The answer to this problem\nIMNSHO is \"Don't put a read-only file in the data directory\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 May 2021 09:01:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
    "msg_contents": "> You seem to have missed my point completely. The answer to this problem\n> IMNSHO is \"Don't put a read-only file in the data directory\".\n\nOh sorry. Well, if we really do not want this we may want to document it\nand keep educating users, but meanwhile the product should probably be\nmore user friendly in this case, especially considering\nthat we know the fix would be trivial, and I suspect it is inevitable that some\nextensions put read-only files (e.g. credential files) in pgdata.\n\n\n",
"msg_date": "Tue, 25 May 2021 16:57:02 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "On Tue, 2021-05-25 at 16:57 +0800, Paul Guo wrote:\n> > You seem to have missed my point completely. The answer to this problem\n> > IMNSHO is \"Don't put a read-only file in the data directory\".\n> \n> Oh sorry. Well, if we really do not want this we may want to document this\n> and keep educating users, but meanwhile probably the product should be\n> more user friendly for the case, especially considering\n> that we know the fix would be trivial and I suspect it is inevitable that some\n> extensions put some read only files (e.g. credentials files) in pgdata.\n\nGood idea. I suggest this documentation page:\nhttps://www.postgresql.org/docs/current/creating-cluster.html\n\nPerhaps something along the line of:\n\n It is not supported to manually create, delete or modify files in the\n data directory, unless they are configuration files or the documentation\n explicitly says otherwise (for example, <file>recovery.signal</file>\n for archive recovery).\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 25 May 2021 15:38:55 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "\nOn 5/25/21 9:38 AM, Laurenz Albe wrote:\n> On Tue, 2021-05-25 at 16:57 +0800, Paul Guo wrote:\n>>> You seem to have missed my point completely. The answer to this problem\n>>> IMNSHO is \"Don't put a read-only file in the data directory\".\n>> Oh sorry. Well, if we really do not want this we may want to document this\n>> and keep educating users, but meanwhile probably the product should be\n>> more user friendly for the case, especially considering\n>> that we know the fix would be trivial and I suspect it is inevitable that some\n>> extensions put some read only files (e.g. credentials files) in pgdata.\n> Good idea. I suggest this documentation page:\n> https://www.postgresql.org/docs/current/creating-cluster.html\n>\n> Perhaps something along the line of:\n>\n> It is not supported to manually create, delete or modify files in the\n> data directory, unless they are configuration files or the documentation\n> explicitly says otherwise (for example, <file>recovery.signal</file>\n> for archive recovery).\n>\n\nPerhaps we should add that read-only files can be particularly problematic.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 25 May 2021 10:20:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Perhaps we should add that read-only files can be particularly problematic.\n\nGiven the (legitimate, IMO) example of a read-only SSL key, I'm not\nquite convinced that pg_rewind doesn't need to cope with this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 10:29:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "\nOn 5/25/21 10:29 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Perhaps we should add that read-only files can be particularly problematic.\n> Given the (legitimate, IMO) example of a read-only SSL key, I'm not\n> quite convinced that pg_rewind doesn't need to cope with this.\n>\n> \t\t\t\n\n\nIf we do decide to do something the question arises what should it do?\nIf we're to allow for it I'm wondering if the best thing would be simply\nto ignore such a file.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 25 May 2021 15:17:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
    "msg_contents": "On Tue, May 25, 2021 at 03:17:37PM -0400, Andrew Dunstan wrote:\n> If we do decide to do something the question arises what should it do?\n> If we're to allow for it I'm wondering if the best thing would be simply\n> to ignore such a file.\n\nEnforcing assumptions that any file could be read-only is a very bad\nidea, as that could lead to weird behaviors if a FS suddenly becomes\nread-only while doing a rewind. Another idea that\nhas popped up over the years was to add an option to pg_rewind so\nthat users could filter files manually. That could easily be dangerous\nin the wrong hands, though, as one could think that it is a good idea\nto skip a control file, for example.\n\nThe thing is that here we actually know the set of files we'd like to\nignore most of the time, and we still want to have some automated\ncontrol over what gets filtered. So here is a new idea: we build a list of\nfiles based on a set of GUC parameters using postgres -C on the target\ndata folder, and assume that these are safe enough to be skipped all\nthe time, if these are in the data folder.\n--\nMichael",
"msg_date": "Wed, 26 May 2021 08:57:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "On Wed, 2021-05-26 at 08:57 +0900, Michael Paquier wrote:\n> On Tue, May 25, 2021 at 03:17:37PM -0400, Andrew Dunstan wrote:\n> > If we do decide to do something the question arises what should it do?\n> > If we're to allow for it I'm wondering if the best thing would be simply\n> > to ignore such a file.\n> \n> Enforcing assumptions that any file could be ready-only is a very bad\n> idea, as that could lead to weird behaviors if a FS is turned as\n> becoming read-only suddenly while doing a rewind. Another idea that\n> has popped out across the years was to add an option to pg_rewind so\n> as users could filter files manually. That could be easily dangerous\n> though in the wrong hands, as one could think that it is a good idea\n> to skip a control file, for example.\n> \n> The thing is that here we actually know the set of files we'd like to\n> ignore most of the time, and we still want to have some automated\n> control what gets filtered. So here is a new idea: we build a list of\n> files based on a set of GUC parameters using postgres -C on the target\n> data folder, and assume that these are safe enough to be skipped all\n> the time, if these are in the data folder.\n\nThat sounds complicated, but should work.\nThere should be a code comment somewhere that warns people not to forget\nto look at that when they add a new GUC.\n\nI can think of two alternatives to handle this:\n\n- Skip files that cannot be opened for writing and issue a warning.\n That is simple, but coarse.\n A slightly more sophisticated version would first check if files\n are the same on both machines and skip the warning for those.\n\n- Paul's idea to try and change the mode on the read-only file\n and reset it to the original state after pg_rewind is done.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 26 May 2021 08:43:03 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "\nOn 5/26/21 2:43 AM, Laurenz Albe wrote:\n> On Wed, 2021-05-26 at 08:57 +0900, Michael Paquier wrote:\n>> On Tue, May 25, 2021 at 03:17:37PM -0400, Andrew Dunstan wrote:\n>>> If we do decide to do something the question arises what should it do?\n>>> If we're to allow for it I'm wondering if the best thing would be simply\n>>> to ignore such a file.\n>> Enforcing assumptions that any file could be ready-only is a very bad\n>> idea, as that could lead to weird behaviors if a FS is turned as\n>> becoming read-only suddenly while doing a rewind. Another idea that\n>> has popped out across the years was to add an option to pg_rewind so\n>> as users could filter files manually. That could be easily dangerous\n>> though in the wrong hands, as one could think that it is a good idea\n>> to skip a control file, for example.\n>>\n>> The thing is that here we actually know the set of files we'd like to\n>> ignore most of the time, and we still want to have some automated\n>> control what gets filtered. So here is a new idea: we build a list of\n>> files based on a set of GUC parameters using postgres -C on the target\n>> data folder, and assume that these are safe enough to be skipped all\n>> the time, if these are in the data folder.\n> That sounds complicated, but should work.\n> There should be a code comment somewhere that warns people not to forget\n> to look at that when they add a new GUC.\n>\n> I can think of two alternatives to handle this:\n>\n> - Skip files that cannot be opened for writing and issue a warning.\n> That is simple, but coarse.\n> A slightly more sophisticated version would first check if files\n> are the same on both machines and skip the warning for those.\n>\n> - Paul's idea to try and change the mode on the read-only file\n> and reset it to the original state after pg_rewind is done.\n>\n\nHow about we only skip (with a warning) read-only files if they are in\nthe data root, not a subdirectory? 
That would save us from silently\nignoring read-only filesystems and not involve using a GUC.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 26 May 2021 10:32:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
},
{
"msg_contents": "On Wed, May 26, 2021 at 10:32 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 5/26/21 2:43 AM, Laurenz Albe wrote:\n> > On Wed, 2021-05-26 at 08:57 +0900, Michael Paquier wrote:\n> >> On Tue, May 25, 2021 at 03:17:37PM -0400, Andrew Dunstan wrote:\n> >>> If we do decide to do something the question arises what should it do?\n> >>> If we're to allow for it I'm wondering if the best thing would be simply\n> >>> to ignore such a file.\n> >> Enforcing assumptions that any file could be ready-only is a very bad\n> >> idea, as that could lead to weird behaviors if a FS is turned as\n> >> becoming read-only suddenly while doing a rewind. Another idea that\n> >> has popped out across the years was to add an option to pg_rewind so\n> >> as users could filter files manually. That could be easily dangerous\n> >> though in the wrong hands, as one could think that it is a good idea\n> >> to skip a control file, for example.\n> >>\n> >> The thing is that here we actually know the set of files we'd like to\n> >> ignore most of the time, and we still want to have some automated\n> >> control what gets filtered. 
So here is a new idea: we build a list of\n> >> files based on a set of GUC parameters using postgres -C on the target\n> >> data folder, and assume that these are safe enough to be skipped all\n> >> the time, if these are in the data folder.\n> > That sounds complicated, but should work.\n> > There should be a code comment somewhere that warns people not to forget\n> > to look at that when they add a new GUC.\n> >\n> > I can think of two alternatives to handle this:\n> >\n> > - Skip files that cannot be opened for writing and issue a warning.\n> > That is simple, but coarse.\n> > A slightly more sophisticated version would first check if files\n> > are the same on both machines and skip the warning for those.\n> >\n> > - Paul's idea to try and change the mode on the read-only file\n> > and reset it to the original state after pg_rewind is done.\n> >\n>\n> How about we only skip (with a warning) read-only files if they are in\n> the data root, not a subdirectory? That would save us from silently\n\nThe assumption seems to limit the user scenario.\n\n> ignoring read-only filesystems and not involve using a GUC.\n\nHow about this:\nBy default, change and reset the file mode before and after open() for\nread-only files,\nbut also allow passing skip-file names to pg_rewind (e.g.\npg_rewind --skip filenameN or --skip-list-file listfile) in case users really\nwant to skip some files (probably not just read-only files).\nPresumably the people\nwho run pg_rewind should be DBAs or admins who have basic knowledge of this,\nso we should not worry too much about the user skipping some important files\n(we could even set a deny list for this). Also, this solution works\nfine for a read-only\nfile system, since pg_rewind quits quickly when it fails to add write\npermission for any read-only file.\n\n--\nPaul Guo (Vmware)\n\n\n",
"msg_date": "Thu, 27 May 2021 21:50:28 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_rewind fails if there is a read only file."
}
] |
[
{
    "msg_contents": "Hi,\n\nWhile working on [1], I found that some parts of the code are using\nstrtol and atoi without checking for non-numeric junk input strings. I\nfound this strange. Most of the time users provide proper numeric\nstrings but there can be some scenarios where these strings are not\nuser-supplied but generated by some other code which may contain\nnon-numeric strings. Shouldn't the code use strtol or atoi\nappropriately and error out in such cases? One way to fix this once\nand for all is to have a common API, something like int\npg_strtol/pg_str_convert_to_int(char *opt_name, char *opt_value), which\nreturns a generic message upon invalid strings (\"invalid value \\\"%s\\\"\nis provided for option \\\"%s\\\"\", opt_name, opt_value) and returns\nintegers on successful parsing.\n\nAlthough this is not a critical issue, I would like to seek thoughts.\n\n[1] - https://www.postgresql.org/message-id/CALj2ACVMO6wY5Pc4oe1OCgUOAtdjHuFsBDw8R5uoYR86eWFQDA%40mail.gmail.com\n[2] strtol:\nvacuumlo.c --> ./vacuumlo -l '2323adfd' postgres -p '5432ERE'\nlibpq_pipeline.c --> ./libpq_pipeline -r '2232adf' tests\n\natoi:\npg_amcheck.c --> ./pg_amcheck -j '1211efe'\npg_basebackup --> ./pg_basebackup -Z 'EFEF' -s 'a$##'\npg_receivewal.c --> ./pg_receivewal -p '54343GDFD' -s '54343GDFD'\npg_recvlogical.c --> ./pg_recvlogical -F '-$#$#' -p '5432$$$' -s '100$$$'\npg_checksums.c --> ./pg_checksums -f '1212abc'\npg_ctl.c --> ./pg_ctl -t 'efc'\npg_dump.c --> ./pg_dump -j '454adc' -Z '4adc' --extra-float-digits '-14adc'\npg_upgrade/option.c\npgbench.c\nreindexdb.c\nvacuumdb.c\npg_regress.c\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 16:49:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
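[Editorial note] The kind of helper proposed above (the message names it pg_strtol/pg_str_convert_to_int; this simplified version just reports success or failure rather than printing the error message) can be sketched with strtol's endptr and errno checks. Unlike atoi(), it rejects trailing junk such as "5432ERE" or "-14adc" and detects out-of-range values.

```c
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Parse a base-10 integer strictly: the whole string must be a valid
 * number that fits in an int.  Returns true on success and stores the
 * value in *result; returns false on empty input, trailing junk, or
 * overflow/underflow (reported by strtol via ERANGE).
 */
static bool
parse_int_strict(const char *str, int *result)
{
    char       *endptr;
    long        val;

    errno = 0;
    val = strtol(str, &endptr, 10);

    if (errno == ERANGE || val < INT_MIN || val > INT_MAX)
        return false;           /* out of range for int */
    if (endptr == str || *endptr != '\0')
        return false;           /* empty string or trailing junk */

    *result = (int) val;
    return true;
}
```

The key difference from atoi() is the endptr check: atoi("5432ERE") silently returns 5432, while this helper fails, which is exactly the class of bug listed for vacuumlo, pg_basebackup, pg_ctl, and the other tools above.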
{
    "msg_contents": "On 2021-May-19, Bharath Rupireddy wrote:\n\n> While working on [1], I found that some parts of the code is using\n> strtol and atoi without checking for non-numeric junk input strings. I\n> found this strange. Most of the time users provide proper numeric\n> strings but there can be some scenarios where these strings are not\n> user-supplied but generated by some other code which may contain\n> non-numeric strings. Shouldn't the code use strtol or atoi\n> appropriately and error out in such cases? One way to fix this once\n> and for all is to have a common API something like int\n> pg_strtol/pg_str_convert_to_int(char *opt_name, char *opt_value) which\n> returns a generic message upon invalid strings (\"invalid value \\\"%s\\\"\n> is provided for option \\\"%s\\\", opt_name, opt_value) and returns\n> integers on successful parsing.\n\nHi, how is this related to\nhttps://postgr.es/m/20191028012000.GA59064@begriffs.com ?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Wed, 26 May 2021 17:35:30 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
    "msg_contents": "On Thu, May 27, 2021 at 3:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-19, Bharath Rupireddy wrote:\n>\n> > While working on [1], I found that some parts of the code is using\n> > strtol and atoi without checking for non-numeric junk input strings. I\n> > found this strange. Most of the time users provide proper numeric\n> > strings but there can be some scenarios where these strings are not\n> > user-supplied but generated by some other code which may contain\n> > non-numeric strings. Shouldn't the code use strtol or atoi\n> > appropriately and error out in such cases? One way to fix this once\n> > and for all is to have a common API something like int\n> > pg_strtol/pg_str_convert_to_int(char *opt_name, char *opt_value) which\n> > returns a generic message upon invalid strings (\"invalid value \\\"%s\\\"\n> > is provided for option \\\"%s\\\", opt_name, opt_value) and returns\n> > integers on successful parsing.\n>\n> Hi, how is this related to\n> https://postgr.es/m/20191028012000.GA59064@begriffs.com ?\n\nThanks. The proposed approach there was to implement postgres's own\nstrtol i.e. string parsing, conversion to integers and use it in the\nplaces where atoi is being used. I'm not sure how far that can go.\nWhat I'm proposing here is to use strtol in place of atoi to properly\ndetect errors in case of inputs like '1211efe', '-14adc' and so on, as\natoi can't detect such errors. Thoughts?\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 20:09:46 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
    "msg_contents": "On 2021-Jun-04, Bharath Rupireddy wrote:\n\n> On Thu, May 27, 2021 at 3:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > Hi, how is this related to\n> > https://postgr.es/m/20191028012000.GA59064@begriffs.com ?\n> \n> Thanks. The proposed approach there was to implement postgres's own\n> strtol i.e. string parsing, conversion to integers and use it in the\n> places where atoi is being used. I'm not sure how far that can go.\n> What I'm proposing here is to use strtol inplace of atoi to properly\n> detect errors in case of inputs like '1211efe', '-14adc' and so on as\n> atoi can't detect such errors. Thoughts?\n\nWell, if you scroll back to Surafel's initial submission in that thread,\nit looks very similar in spirit to what you have here.\n\nAnother thing I just noticed which I hadn't realized is that Joe\nNelson's patch depends on Fabien Coelho's patch in this other thread,\nhttps://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904201223040.29102@lancre\nwhich was closed as returned-with-feedback, I suppose mostly due to\nexhaustion/frustration at the lack of progress/interest.\n\nI would suggest that the best way forward in this area is to rebase both\nthose patches on current master.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Fri, 4 Jun 2021 11:28:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 8:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-04, Bharath Rupireddy wrote:\n>\n> > On Thu, May 27, 2021 at 3:05 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > > Hi, how is this related to\n> > > https://postgr.es/m/20191028012000.GA59064@begriffs.com ?\n> >\n> > Thanks. The proposed approach there was to implement postgres's own\n> > strtol i.e. string parsing, conversion to integers and use it in the\n> > places where atoi is being used. I'm not sure how far that can go.\n> > What I'm proposing here is to use strtol inplace of atoi to properly\n> > detect errors in case of inputs like '1211efe', '-14adc' and so on as\n> > atoi can't detect such errors. Thoughts?\n>\n> Well, if you scroll back to Surafel's initial submission in that thread,\n> it looks very similar in spirit to what you have here.\n>\n> Another thing I just noticed which I hadn't realized is that Joe\n> Nelson's patch depends on Fabien Coelho's patch in this other thread,\n> https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904201223040.29102@lancre\n> which was closed as returned-with-feedback, I suppose mostly due to\n> exhaustion/frustration at the lack of progress/interest.\n>\n> I would suggest that the best way forward in this area is to rebase both\n> there patches on current master.\n\nThanks. I will read both the threads [1], [2] and try to rebase the\npatches. If at all I get to rebase them, do you prefer the patches to\nbe in this thread or in a new thread?\n\n[1] - https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904201223040.29102@lancre\n[2] - https://www.postgresql.org/message-id/20191028012000.GA59064@begriffs.com\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 21:34:02 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On 2021-Jun-04, Bharath Rupireddy wrote:\n\n> On Fri, Jun 4, 2021 at 8:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > I would suggest that the best way forward in this area is to rebase both\n> > there patches on current master.\n> \n> Thanks. I will read both the threads [1], [2] and try to rebase the\n> patches. If at all I get to rebase them, do you prefer the patches to\n> be in this thread or in a new thread?\n\nThanks, that would be helpful. This thread is a good place.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Fri, 4 Jun 2021 12:53:31 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 10:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-04, Bharath Rupireddy wrote:\n>\n> > On Fri, Jun 4, 2021 at 8:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > > I would suggest that the best way forward in this area is to rebase both\n> > > there patches on current master.\n> >\n> > Thanks. I will read both the threads [1], [2] and try to rebase the\n> > patches. If at all I get to rebase them, do you prefer the patches to\n> > be in this thread or in a new thread?\n>\n> Thanks, that would be helpful. This thread is a good place.\n\nI'm unable to spend time on this work as promised. I'd be happy if\nsomeone could take it forward, although it's not critical work(IMO)\nthat needs immediate focus. I will try to spend time maybe later this\nyear.\n\nRegards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 7 Jul 2021 17:40:13 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "At Wed, 7 Jul 2021 17:40:13 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Jun 4, 2021 at 10:23 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jun-04, Bharath Rupireddy wrote:\n> >\n> > > On Fri, Jun 4, 2021 at 8:58 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > > I would suggest that the best way forward in this area is to rebase both\n> > > > there patches on current master.\n> > >\n> > > Thanks. I will read both the threads [1], [2] and try to rebase the\n> > > patches. If at all I get to rebase them, do you prefer the patches to\n> > > be in this thread or in a new thread?\n> >\n> > Thanks, that would be helpful. This thread is a good place.\n> \n> I'm unable to spend time on this work as promised. I'd be happy if\n> someone could take it forward, although it's not critical work(IMO)\n> that needs immediate focus. I will try to spend time maybe later this\n> year.\n\nLooked through the three threads.\n\n[1] is trying to expose pg_strtoint16/32 to frontend, but I don't see\nmuch point in doing that in conjunction with [2] or this thread, since\nthe integral parameter values of pg commands are plain int, which the\nexisting function strtoint() is sufficient to read. So even [2] itself\ndoesn't need to utilize [1].\n\nSo I agree with Bharath's point.\n\nSo the attached is a standalone patch that:\n\n- doesn't contain [1], since those functions are not needed for this\n purpose.\n\n- basically does the same thing as [2], but uses\n strtoint/strtol/strtod instead of pg_strtoint16/32.\n\n- doesn't try to make error messages verbose. That results in a\n somewhat strange message like this, but I'm not sure we should be so\n strict at that point.\n\n > reindexdb: error: number of parallel jobs must be at least 1: hoge\n\n- is extended to cover all usages of atoi/l/f in command line\n processing, which are not fully covered by [2]. 
(Maybe)\n\n- is extended to cover psql's meta command parameters.\n\n- is extended to cover integral environment variables. (PGPORTOLD/NEW\n of pg_upgrade and COLUMNS of psql). The commands emit a warning for\n invalid values, but I'm not sure it's worthwhile. (The second attached.)\n\n > psql: warning: ignored invalid setting of environment variable COLUMNS: 3x\n\n- doesn't cover pgbench's meta command parameters (for speed).\n\n\n[1] - https://www.postgresql.org/message-id/flat/alpine.DEB.2.21.1904201223040.29102@lancre\n[2] - https://www.postgresql.org/message-id/20191028012000.GA59064@begriffs.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 08 Jul 2021 17:30:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
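{
"msg_contents": "[Editor's note] The validation pattern that recurs throughout the patch described above — reset errno, parse, then reject trailing junk, ERANGE, and out-of-range values in one test — can be sketched standalone. `strtoint()` here is a minimal local stand-in for PostgreSQL's src/port wrapper (defined so the example compiles on its own), and `parse_timeout` is a hypothetical caller with `pg_log_error()` replaced by `fprintf()`:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Minimal stand-in for PostgreSQL's strtoint() (src/port/strtoint.c):
 * like strtol(), but the result is clamped to the int range and errno
 * is set to ERANGE on overflow.
 */
static int
strtoint(const char *str, char **endptr, int base)
{
    long    val = strtol(str, endptr, base);

    if (val > INT_MAX)
    {
        errno = ERANGE;
        return INT_MAX;
    }
    if (val < INT_MIN)
    {
        errno = ERANGE;
        return INT_MIN;
    }
    return (int) val;
}

/*
 * Hypothetical option handler showing the pattern: reset errno, parse,
 * then reject trailing junk, overflow, and negative values together.
 * Returns -1 on error where the real tools would log and exit(1).
 */
static int
parse_timeout(const char *optarg)
{
    char   *endptr;
    int     wait_seconds;

    errno = 0;
    wait_seconds = strtoint(optarg, &endptr, 10);
    if (*endptr || errno == ERANGE || wait_seconds < 0)
    {
        fprintf(stderr, "invalid timeout \"%s\"\n", optarg);
        return -1;
    }
    return wait_seconds;
}
```

Note that a single `if` covers all three failure modes, which is what keeps the per-option error messages terse, as the message above acknowledges."
},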
{
"msg_contents": "On Thu, Jul 08, 2021 at 05:30:23PM +0900, Kyotaro Horiguchi wrote:\n> Looked through the three threads.\n\nThanks!\n\n> [1] is trying to expose pg_strtoint16/32 to frontend, but I don't see\n> much point in doing that in conjunction with [2] or this thread. Since\n> the integral parameter values of pg-commands are in int, which the\n> exising function strtoint() is sufficient to read. So even [2] itself\n> doesn't need to utilize [1].\n\nIt sounds sensible from here to just use strtoint(), some strtol(),\nsome strtod() and call it a day as these are already available.\n\n> - wait_seconds = atoi(optarg);\n> + errno = 0;\n> + wait_seconds = strtoint(optarg, &endptr, 10);\n> + if (*endptr || errno == ERANGE || wait_seconds < 0)\n> + {\n> + pg_log_error(\"invalid timeout \\\"%s\\\"\", optarg);\n> + exit(1);\n> + }\n> [ ... ]\n> - killproc = atol(argv[++optind]);\n> + errno = 0;\n> + killproc = strtol(argv[++optind], &endptr, 10);\n> + if (*endptr || errno == ERANGE || killproc < 0)\n> + {\n> + pg_log_error(\"invalid process ID \\\"%s\\\"\", argv[optind]);\n> + exit(1);\n> + }\n\nEr, wait. We've actually allowed negative values for pg_ctl\n--timeout or the subcommand kill!?\n\n> case 'j':\n> - user_opts.jobs = atoi(optarg);\n> + errno = 0;\n> + user_opts.jobs = strtoint(optarg, &endptr, 10);\n> + /**/\n> + if (*endptr || errno == ERANGE)\n> + pg_fatal(\"invalid number of jobs %s\\n\", optarg);\n> + \n> break;\n\nThis one in pg_upgrade is incomplete. Perhaps the missing comment\nshould tell that negative job values are checked later on?\n--\nMichael",
"msg_date": "Fri, 9 Jul 2021 10:29:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "Thank you for the comments.\n\nAt Fri, 9 Jul 2021 10:29:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Jul 08, 2021 at 05:30:23PM +0900, Kyotaro Horiguchi wrote:\n> > [1] is trying to expose pg_strtoint16/32 to frontend, but I don't see\n> > much point in doing that in conjunction with [2] or this thread. Since\n> > the integral parameter values of pg-commands are in int, which the\n> > exising function strtoint() is sufficient to read. So even [2] itself\n> > doesn't need to utilize [1].\n> \n> It sounds sensible from here to just use strtoint(), some strtol(),\n> son strtod() and call it a day as these are already available.\n\nThanks.\n\n> > - wait_seconds = atoi(optarg);\n> > + errno = 0;\n> > + wait_seconds = strtoint(optarg, &endptr, 10);\n> > + if (*endptr || errno == ERANGE || wait_seconds < 0)\n> > + {\n> > + pg_log_error(\"invalid timeout \\\"%s\\\"\", optarg);\n> > + exit(1);\n> > + }\n> > [ ... ]\n> > - killproc = atol(argv[++optind]);\n> > + errno = 0;\n> > + killproc = strtol(argv[++optind], &endptr, 10);\n> > + if (*endptr || errno == ERANGE || killproc < 0)\n> > + {\n> > + pg_log_error(\"invalid process ID \\\"%s\\\"\", argv[optind]);\n> > + exit(1);\n> > + }\n> \n> Er, wait. We've actually allowed negative values for pg_ctl\n> --timeout or the subcommand kill!?\n\nFor killproc, a leading minus sign suggests that it is a command line\noption. Since pg_ctl doesn't have such an option, such values are\nrecognized as invalid *options*, even with the additional check. The\nadditional check is useless in that sense. My intention is just to\nmake the restriction explicit, so I won't feel sad even if it should be\nremoved.\n\n$ pg_ctl kill HUP -12345\npg_ctl: invalid option -- '1'\n\n--timeout accepts values less than 1, which cause the command to\nend with the timeout error before checking for the ending state. Not\ndestructive, but useless as a behavior. We have two choices here. 
One\nis simply inhibiting zero or negative timeouts, and another is\nallowing zero as timeout and giving it the same meaning as\n--no-wait. The former is the strictly right behavior, but the latter is\ncasual and convenient. I took the former way in this version.\n\n$ pg_ctl -w -t 0 start\npg_ctl: error: invalid timeout value \"0\", use --no-wait to finish without waiting\n\nThe same message is shown for non-decimal values, but that would not matter.\n\n> > case 'j':\n> > - user_opts.jobs = atoi(optarg);\n> > + errno = 0;\n> > + user_opts.jobs = strtoint(optarg, &endptr, 10);\n> > + /**/\n> > + if (*endptr || errno == ERANGE)\n> > + pg_fatal(\"invalid number of jobs %s\\n\", optarg);\n> > + \n> > break;\n> \n> This one in pg_upgrade is incomplete. Perhaps the missing comment\n> should tell that negative job values are checked later on?\n\nZero or negative job numbers mean non-parallel mode and don't lead to\nan error. If we don't need to preserve that behavior (I personally\ndon't think it is useful, nor that changing it breaks someone's existing\nusage.), it is sensible to do a range check here.\n\nI noticed that I overlooked PGCTLTIMEOUT.\n\nThe attached is:\n\n- disallows less-than-one numbers as jobs in pg_upgrade\n- disallows less-than-one timeouts in pg_ctl\n- uses strtoint for PGCTLTIMEOUT in pg_ctl\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Fri, 09 Jul 2021 16:50:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Fri, Jul 09, 2021 at 04:50:28PM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 9 Jul 2021 10:29:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n>> Er, wait. We've actually allowed negative values for pg_ctl\n>> --timeout or the subcommand kill!?\n>\n> --timeout accepts values less than 1, which values cause the command\n> ends with the timeout error before checking for the ending state. Not\n> destructive but useless as a behavior. We have two choices here. One\n> is simply inhibiting zero or negative timeouts, and another is\n> allowing zero as timeout and giving it the same meaning to\n> --no-wait. The former is strictly right behavior but the latter is\n> casual and convenient. I took the former way in this version.\n\nYeah, that's not useful.\n\n>> This one in pg_upgrade is incomplete. Perhaps the missing comment\n>> should tell that negative job values are checked later on?\n> \n> Zero or negative job numbers mean non-parallel mode and don't lead to\n> an error. If we don't need to preserve that behavior (I personally\n> don't think it is ether useful and/or breaks someone's existing\n> usage.), it is sensible to do range-check here.\n\nHmm. 
It would be good to preserve some compatibility here, but I can\nsee more benefits in being consistent between all the tools, and make\npeople aware that such commands need to be generated more carefully.\n\n> case 'j':\n> - opts.jobs = atoi(optarg);\n> - if (opts.jobs < 1)\n> + errno = 0;\n> + opts.jobs = strtoint(optarg, &endptr, 10);\n> + if (*endptr || errno == ERANGE || opts.jobs < 1)\n> {\n> pg_log_error(\"number of parallel jobs must be at least 1\");\n> exit(1);\n\nSpecifying a value that triggers ERANGE could be confusing for values\nhigher than INT_MAX with pg_amcheck --jobs:\n$ pg_amcheck --jobs=4000000000\npg_amcheck: error: number of parallel jobs must be at least 1\nI think that this should be reworded as \"invalid number of parallel\njobs \\\"%s\\\"\", or \"number of parallel jobs must be in range %d..%d\".\nPerhaps we could have a combination of both? Using the first message\nis useful with junk, non-numeric values or trailing characters. The\nsecond is useful to make users aware that the value is numeric, but\nnot quite right.\n\n> --- a/src/bin/pg_checksums/pg_checksums.c\n> case 'f':\n> - if (atoi(optarg) == 0)\n> + errno = 0;\n> + if (strtoint(optarg, &endptr, 10) == 0\n> + || *endptr || errno == ERANGE)\n> {\n> pg_log_error(\"invalid filenode specification, must be numeric: %s\", optarg);\n> exit(1);\n\nThe confusion is the same here with pg_checksums -f:\n$ ./pg_checksums -f 4000000000\npg_checksums: error: invalid filenode specification, must be numeric: 4000000000\nWe could say \"invalid filenode specification: \\\"%s\\\"\". 
Another idea to be\ncrystal-clear about the range requirements is to use this:\n\"filenode specification must be in range %d..%d\"\n\n> @@ -587,8 +602,10 @@ main(int argc, char **argv)\n> \n> case 8:\n> have_extra_float_digits = true;\n> - extra_float_digits = atoi(optarg);\n> - if (extra_float_digits < -15 || extra_float_digits > 3)\n> + errno = 0;\n> + extra_float_digits = strtoint(optarg, &endptr, 10);\n> + if (*endptr || errno == ERANGE ||\n> + extra_float_digits < -15 || extra_float_digits > 3)\n> {\n> pg_log_error(\"extra_float_digits must be in range -15..3\");\n> exit_nicely(1);\n\nShould we take this occasion to reduce the burden of translators and\nreword that as \"%s must be in range %d..%d\"? That could be a separate\npatch.\n\n> case 'p':\n> - if ((old_cluster.port = atoi(optarg)) <= 0)\n> - pg_fatal(\"invalid old port number\\n\");\n> + errno = 0;\n> + if ((old_cluster.port = strtoint(optarg, &endptr, 10)) <= 0 ||\n> + *endptr || errno == ERANGE)\n> + pg_fatal(\"invalid old port number %s\\n\", optarg);\n> break;\n\nYou may want to use colons here, or specify the port range:\n\"invalid old port number: %s\" or \"old port number must be in range\n%d..%d\".\n\n> case 'P':\n> - if ((new_cluster.port = atoi(optarg)) <= 0)\n> - pg_fatal(\"invalid new port number\\n\");\n> + errno = 0;\n> + if ((new_cluster.port = strtoint(optarg, &endptr, 10)) <= 0 ||\n> + *endptr || errno == ERANGE)\n> + pg_fatal(\"invalid new port number %s\\n\", optarg);\n> break;\n\nDitto.\n\n> + if (*endptr || errno == ERANGE || concurrentCons <= 0)\n> {\n> - pg_log_error(\"number of parallel jobs must be at least 1\");\n> + pg_log_error(\"number of parallel jobs must be at least 1: %s\", optarg);\n> exit(1);\n> }\n\nThis one is also confusing with optarg > INT_MAX.\n\n> + concurrentCons = strtoint(optarg, &endptr, 10);\n> + if (*endptr || errno == ERANGE || concurrentCons <= 0)\n> {\n> pg_log_error(\"number of parallel jobs must be at least 1\");\n> exit(1);\n> }\n> break;\n\nAnd ditto 
for all the ones of vacuumdb.\n--\nMichael",
"msg_date": "Tue, 13 Jul 2021 09:28:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "Thanks for the discussion.\n\nAt Tue, 13 Jul 2021 09:28:30 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Jul 09, 2021 at 04:50:28PM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 9 Jul 2021 10:29:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> >> Er, wait. We've actually allowed negative values for pg_ctl\n> >> --timeout or the subcommand kill!?\n> >\n> > --timeout accepts values less than 1, which values cause the command\n> > ends with the timeout error before checking for the ending state. Not\n> > destructive but useless as a behavior. We have two choices here. One\n> > is simply inhibiting zero or negative timeouts, and another is\n> > allowing zero as timeout and giving it the same meaning to\n> > --no-wait. The former is strictly right behavior but the latter is\n> > casual and convenient. I took the former way in this version.\n> \n> Yeah, that's not useful.\n\n^^; Ok, I'm fine with taking the second way. Changed it.\n\n\"-t 0\" means \"-W\" in the attached and that behavior is described in\nthe doc part. The environment variable PGCTLTIMEOUT accepts the same\nrange of values. I added a warning that notifies that -t 0 overrides\nexplicit -w .\n\n> $ pg_ctl -w -t 0 start\n> pg_ctl: WARNING: -w is ignored because timeout is set to 0\n> server starting\n\n\n> >> This one in pg_upgrade is incomplete. Perhaps the missing comment\n> >> should tell that negative job values are checked later on?\n> > \n> > Zero or negative job numbers mean non-parallel mode and don't lead to\n> > an error. If we don't need to preserve that behavior (I personally\n> > don't think it is ether useful and/or breaks someone's existing\n> > usage.), it is sensible to do range-check here.\n> \n> Hmm. 
It would be good to preserve some compatibility here, but I can\n> see more benefits in being consistent between all the tools, and make\n> people aware that such commands are not generated more carefully.\n\nAgreed.\n\n> > case 'j':\n> > - opts.jobs = atoi(optarg);\n> > - if (opts.jobs < 1)\n> > + errno = 0;\n> > + opts.jobs = strtoint(optarg, &endptr, 10);\n> > + if (*endptr || errno == ERANGE || opts.jobs < 1)\n> > {\n> > pg_log_error(\"number of parallel jobs must be at least 1\");\n> > exit(1);\n> \n> specifying a value that triggers ERANGE could be confusing for values\n> higher than INT_MAX with pg_amcheck --jobs:\n> $ pg_amcheck --jobs=4000000000\n> pg_amcheck: error: number of parallel jobs must be at least 1\n> I think that this should be reworded as \"invalid number of parallel\n> jobs \\\"$s\\\"\", or \"number of parallel jobs must be in range %d..%d\".\n> Perhaps we could have a combination of both? Using the first message\n> is useful with junk, non-numeric values or trailing characters. The\n> second is useful to make users aware that the value is numeric, but\n> not quite right.\n\nYeah, I noticed that, but ignored it as a kind of impossible case, or\nsomething-needless-to-say:p\n\n> \"invalid number of parallel jobs \\\"$s\\\"\"\n> \"number of parallel jobs must be in range %d..%d\"\n\nThe resulting combined message looks like this:\n\n> \"number of parallel jobs must be an integer in range 1..2147483647\"\n\nI don't think it's great that the message explicitly suggests a\nlimit like INT_MAX, which is far above the practical limit. 
So, (even\nthough I had avoided doing that before) in the attached, I changed my mind and\nsplit most of the errors into two messages to avoid suggesting such\nimpractical limits.\n\n$ pg_amcheck -j -1\n$ pg_amcheck -j 1x\n$ pg_amcheck -j 10000000000000x\npg_amcheck: error: number of parallel jobs must be an integer greater than zero: \"....\"\n$ pg_amcheck -j 10000000000000\npg_amcheck: error: number of parallel jobs out of range: \"10000000000000\"\n\nIf you feel it's too much or pointless to split that way, I'll be happy\nto change it to the \"%d..%d\" form.\n\n\nStill I used the \"%d..%d\" notation for some parameters that have a\nfinite range by design. They look like the following:\n(%d..%d doesn't work well for real numbers.)\n\n\"sampling rate must be a real number between 0.0 and 1.0: \\\"%s\\\"\"\n\nI'm not sure what to do for numWorkers of pg_dump/restore. The limit\nfor numWorkers is lowered on Windows to a quite low value, which would\nbe worth showing, but otherwise the limit is INT_MAX. The attached\nmakes pg_dump respond to -j 100 with the following error message which\ndoesn't suggest the finite limit 64 on Windows.\n\n$ pg_dump -j 100\npg_dump: error: number of parallel jobs out of range: \"100\"\n\n\n> > @@ -587,8 +602,10 @@ main(int argc, char **argv)\n> > \n> > case 8:\n> > have_extra_float_digits = true;\n> > - extra_float_digits = atoi(optarg);\n> > - if (extra_float_digits < -15 || extra_float_digits > 3)\n> > + errno = 0;\n> > + extra_float_digits = strtoint(optarg, &endptr, 10);\n> > + if (*endptr || errno == ERANGE ||\n> > + extra_float_digits < -15 || extra_float_digits > 3)\n> > {\n> > pg_log_error(\"extra_float_digits must be in range -15..3\");\n> > exit_nicely(1);\n> \n> Should we take this occasion to reduce the burden of translators and\n> reword that as \"%s must be in range %d..%d\"? 
That could be a separate\n> patch.\n\nThe first %s is not always an invariable symbol name so it could\nresult in concatenating several phrases into one, like this.\n\n pg_log..(\"%s must be in range %s..%s\", _(\"compression level\"), \"0\", \"9\"))\n\nIt is translatable at least into Japanese but I'm not sure about other\nlanguages.\n\nIn the attached, most of the messages are not in this shape since I\navoided to suggest substantially infinite limits, thus this doesn't\nfully work. I'll consider it if the current shape is found to be\nunacceptable. In that case I'll use the option names in the messages\ninstead of the natural names.\n\n> > case 'p':\n> > - if ((old_cluster.port = atoi(optarg)) <= 0)\n> > - pg_fatal(\"invalid old port number\\n\");\n> > + errno = 0;\n> > + if ((old_cluster.port = strtoint(optarg, &endptr, 10)) <= 0 ||\n> > + *endptr || errno == ERANGE)\n> > + pg_fatal(\"invalid old port number %s\\n\", optarg);\n> > break;\n> \n> You may want to use columns here, or specify the port range:\n> \"invalid old port number: %s\" or \"old port number must be in range\n> %d..%d\".\n\nI'm not sure whether the colons(?) are required, since pg_receivewal\ncurrently complains as \"invalid port number \\\"%s\\\"\" but I agree that\nit would be better if we had one.\n\nBy the way, in the attached version, the message is split into the\nfollowing combination. 
(\"an integer\" might be useless..)\n\npg_fatal(\"old port number must be an integer greater than zero: \\\"%s\\\"\\n\",\npg_fatal(\"old port number out of range: \\\"%s\\\"\\n\", optarg);\n\nAs the result.\n\n> > + if (*endptr || errno == ERANGE || concurrentCons <= 0)\n> > {\n> > - pg_log_error(\"number of parallel jobs must be at least 1\");\n> > + pg_log_error(\"number of parallel jobs must be at least 1: %s\", optarg);\n> > exit(1);\n> > }\n> \n> This one is also confusing with optorg > INT_MAX.\n\nThe attached version says just \"out of range\" in that case.\n\n\n- Is it worth avoiding suggesting a virtually infinite upper limit by\n splitting out \"out of range\" from an error message?\n\n- If not, I'll use the single message \"xxx must be in range\n 1..2147483647\" or \"xxx must be an integer in range 1..2147483647\".\n\n Do you think we need the parameter type \"an integer\" there?\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 14 Jul 2021 10:35:56 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On 2021-Jul-14, Kyotaro Horiguchi wrote:\n\n> > > pg_log_error(\"extra_float_digits must be in range -15..3\");\n> > > exit_nicely(1);\n> > \n> > Should we take this occasion to reduce the burden of translators and\n> > reword that as \"%s must be in range %d..%d\"? That could be a separate\n> > patch.\n\nYes, please, let's do it here.\n\n> The first %s is not always an invariable symbol name so it could\n> result in concatenating several phrases into one, like this.\n> \n> pg_log..(\"%s must be in range %s..%s\", _(\"compression level\"), \"0\", \"9\"))\n> \n> It is translatable at least into Japanese but I'm not sure about other\n> languages.\n\nNo, this doesn't work. When the first word is something that is\nnot to be translated (a literal parameter name), let's use a %s (but of\ncourse the values must be taken out of the phrase too). But when it is\na translatable phrase, it must be present a full phrase, no\nsubstitution:\n\n\tpg_log_error(\"%s must be in range %s..%s\", \"extra_float_digits\", \"-15\", \"3\");\n\tpg_log..(\"compression level must be in range %s..%s\", \"0\", \"9\"))\n\nI think the purity test is whether you want to use a _() around the\nargument, then you have to include it into the original message.\n\nThanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n",
"msg_date": "Wed, 14 Jul 2021 11:02:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 11:02:47AM -0400, Alvaro Herrera wrote:\n> On 2021-Jul-14, Kyotaro Horiguchi wrote:\n>>> Should we take this occasion to reduce the burden of translators and\n>>> reword that as \"%s must be in range %d..%d\"? That could be a separate\n>>> patch.\n> \n> Yes, please, let's do it here.\n\nOkay.\n\n> No, this doesn't work. When the first word is something that is\n> not to be translated (a literal parameter name), let's use a %s (but of\n> course the values must be taken out of the phrase too). But when it is\n> a translatable phrase, it must be present a full phrase, no\n> substitution:\n> \n> \tpg_log_error(\"%s must be in range %s..%s\", \"extra_float_digits\", \"-15\", \"3\");\n> \tpg_log..(\"compression level must be in range %s..%s\", \"0\", \"9\"))\n> \n> I think the purity test is whether you want to use a _() around the\n> argument, then you have to include it into the original message.\n\nAfter thinking about that, it seems to me that we don't have that much\ncontext to lose once we build those error messages to show the option\nname of the command. And the patch, as proposed, finishes with the\nsame error message patterns all over the place, which would be a\nrecipe for more inconsistencies in the future. I think that we should\ncentralize all that, say with a small-ish routine in a new file called\nsrc/fe_utils/option_parsing.c that uses strtoint() as follows:\nbool option_parse_int(const char *optarg,\n const char *optname,\n int min_range,\n int max_range,\n int *result);\n\nThen this routine may print two types of errors through\npg_log_error():\n- Incorrect range, using min_range/max_range.\n- Junk input.\nThe boolean status is here to let the caller do any custom exit()\nactions he wishes if there is one of those two failures. pg_dump has\nsome of that with exit_nicely(), for one.\n--\nMichael",
"msg_date": "Thu, 15 Jul 2021 16:19:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
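{
"msg_contents": "[Editor's note] Michael's proposed fe_utils routine could look roughly like the sketch below. This is an illustration of the suggested signature only, not committed PostgreSQL code; `pg_log_error()` is stood in by `fprintf(stderr, ...)` so the example is self-contained:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Sketch of the proposed helper: returns false on junk input or on a
 * value outside [min_range, max_range], printing one of the two error
 * styles discussed above; the caller decides how to exit.
 */
static bool
option_parse_int(const char *optarg, const char *optname,
                 int min_range, int max_range, int *result)
{
    char   *endptr;
    long    val;

    errno = 0;
    val = strtol(optarg, &endptr, 10);
    if (optarg == endptr || *endptr != '\0')
    {
        fprintf(stderr, "invalid value \"%s\" for option %s\n",
                optarg, optname);
        return false;
    }
    if (errno == ERANGE || val < min_range || val > max_range)
    {
        fprintf(stderr, "%s must be in range %d..%d\n",
                optname, min_range, max_range);
        return false;
    }
    *result = (int) val;
    return true;
}
```

A hypothetical caller, such as reindexdb's -j handling, would then reduce to `if (!option_parse_int(optarg, \"-j/--jobs\", 1, INT_MAX, &concurrentCons)) exit(1);`."
},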
{
"msg_contents": "At Thu, 15 Jul 2021 16:19:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Jul 14, 2021 at 11:02:47AM -0400, Alvaro Herrera wrote:\n> > No, this doesn't work. When the first word is something that is\n> > not to be translated (a literal parameter name), let's use a %s (but of\n> > course the values must be taken out of the phrase too). But when it is\n> > a translatable phrase, it must be present a full phrase, no\n> > substitution:\n> > \n> > \tpg_log_error(\"%s must be in range %s..%s\", \"extra_float_digits\", \"-15\", \"3\");\n> > \tpg_log..(\"compression level must be in range %s..%s\", \"0\", \"9\"))\n> > \n> > I think the purity test is whether you want to use a _() around the\n> > argument, then you have to include it into the original message.\n> \n> After thinking about that, it seems to me that we don't have that much\n> context to lose once we build those error messages to show the option\n> name of the command. And the patch, as proposed, finishes with the\n\nAgreed.\n\n> same error message patterns all over the place, which would be a\n> recipe for more inconsistencies in the future. I think that we should\n> centralize all that, say with a small-ish routine in a new file called\n> src/fe_utils/option_parsing.c that uses strtoint() as follows:\n> bool option_parse_int(const char *optarg,\n> const char *optname,\n> int min_range,\n> int max_range,\n> int *result);\n>\n> Then this routine may print two types of errors through\n> pg_log_error():\n> - Incorrect range, using min_range/max_range.\n> - Junk input.\n> The boolean status is here to let the caller do any custom exit()\n> actions he wishes if there one of those two failures. pg_dump has\n> some of that with exit_nicely(), for one.\n\nIt is substantially the same suggestion with [1] in the original\nthread. 
The original proposal in the old thread was\n\n> bool pg_strtoint64_range(arg, &result, min, max, &error_message)\n\nThe difference is that your suggestion makes the function output the\nmessage itself. I guess that the reason for the original proposal is\nthat a different style of message is required in several places.\n\nActually there are several irregular cases.\n\n1. Some \"bare\" options (that is, not preceded by a hyphen option), like\n the PID of pg_ctl kill, don't fit the format. \\pset parameters of\n psql are the same.\n\n2. pg_ctl and pg_upgrade use their own error reporting mechanisms.\n\n3. Parameters that take real numbers don't fit the scheme of specifying\n range borders. For example, boundary values may or may not be included\n in the range.\n\n4. Most of the errors are PG_LOG_ERROR, but a few are\n PG_LOG_FATAL.\n\nThat being said, most of the caller sites are satisfied by\nfixed-form messages.\n\nSo I changed the interface to the following to deal with the above issues:\n\n+extern optparse_result option_parse_int(int loglevel,\n+\t\t\t\t\t\t\t\t\t\tconst char *optarg, const char *optname,\n+\t\t\t\t\t\t\t\t\t\tint min_range, int max_range,\n+\t\t\t\t\t\t\t\t\t\tint *result);\n\nloglevel specifies the log level to use to emit error messages. If it\nis the newly added item PG_LOG_OMIT, the function never emits an error\nmessage. In addition to that, the return type is changed to an enum which\nindicates what trouble the given string has. The caller can print\narbitrary error messages consulting the value. (killproc in pg_ctl.c)\n\nOther points:\n\nI added two more similar functions, option_parse_long/double. The\nformer is a clone of _int. The latter doesn't perform explicit range\nchecks for the reason described above.\n\nMaybe we need to make pg_upgrade use the common logging features\ninstead of its own, but that is not included in this patch.\n\npgbench's -L option accepts out-of-range values for its internal\nvariable. 
As the added comment says, we could limit the value to the\nexact large number, but I limited it to 3600s since I can't imagine\npeople needing a larger latency limit than that.\n\nSimilarly, the -R option can take for example 1E-300, which\nleads to int64 overflow later. As with -L, I don't think people\nneed values smaller than 1E-5 or so for this parameter.\n\n\nThe attached patch needs more polish but should be enough to show what\nI have in mind.\n\nregards.\n\n1: https://www.postgresql.org/message-id/CAKJS1f94kkuB_53LcEf0HF%2BuxMiTCk5FtLx9gSsXcCByUKMz1g%40mail.gmail.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Wed, 21 Jul 2021 17:02:29 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
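The `option_parse_int` interface discussed in this message can be sketched roughly as follows. This is an illustrative, self-contained version only: the real helper lives in src/fe_utils/option_utils.c, reports errors through pg_log_error() rather than fprintf(), and its exact signature evolved over the thread.

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative sketch of a centralized integer-option parser: reject
 * junk input (trailing garbage, empty string) and enforce a range,
 * instead of silently accepting whatever atoi()/strtol() return.
 */
static bool
option_parse_int(const char *optarg, const char *optname,
				 int min_range, int max_range, int *result)
{
	char	   *endptr;
	long		val;

	errno = 0;
	val = strtol(optarg, &endptr, 10);

	/* Junk input: nothing parsed, or trailing garbage left over. */
	if (endptr == optarg || *endptr != '\0')
	{
		fprintf(stderr, "invalid value \"%s\" for option %s\n",
				optarg, optname);
		return false;
	}
	/* Out of the caller-specified range (or long overflow). */
	if (errno == ERANGE || val < min_range || val > max_range)
	{
		fprintf(stderr, "%s must be in range %d..%d\n",
				optname, min_range, max_range);
		return false;
	}
	if (result)
		*result = (int) val;
	return true;
}
```

A caller can then simply do `if (!option_parse_int(optarg, "-j/--jobs", 0, INT_MAX, &numWorkers)) exit(1);` and get consistent error messages everywhere.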
{
"msg_contents": "On Wed, Jul 21, 2021 at 05:02:29PM +0900, Kyotaro Horiguchi wrote:\n> The difference is your suggestion is making the function output the\n> message within. I guess that the reason for the original proposal is\n> different style of message is required in several places.\n\nThat's one step toward having as many frontend tools as possible\nuse the central logging APIs of src/common/.\n\n> 1. Some \"bare\" options (that is not preceded by a hyphen option) like\n> PID of pg_ctl kill doesn't fit the format. \\pset parameters of\n> pg_ctl is the same.\n\nYep. I was reviewing this one, but I ended up removing it.\nArgument 2 just below also crossed my mind.\n\n> 2. pg_ctl, pg_upgrade use their own error reporting mechanisms.\n\nYeah, for this reason I don't think that it is a good idea to switch\nthose areas to use the parsing of option_utils.c. Perhaps we should\nconsider switching pg_upgrade to have a better logging infra, but\nthere are also reasons behind what we have now. pg_ctl is out of\nscope as it needs to cover WIN32 event logging.\n\n> 3. parameters that take real numbers doesn't fit the scheme specifying\n> range borders. For example boundary values may or may not be included\n> in the range.\n\nThis concerns only pgbench, which I'd be fine to leave as-is.\n\n> 4. Most of the errors are PG_LOG_ERROR, but a few ones are\n> PG_LOG_FATAL.\n\nI would take it that pgbench is inconsistent with the rest. Note that\npg_dump uses fatal(), but that's just a wrapper around pg_log_error().\n\n> loglevel specifies the loglevel to use to emit error messages. If it\n> is the newly added item PG_LOG_OMIT, the function never emits an error\n> message. Addition to that, the return type is changed to an enum which\n> indicates what trouble the given string has. The caller can print\n> arbitrary error messages consulting the value. (killproc in pg_ctl.c)\n\nI am not much of a fan of that. 
If we do so, what's the point in having\na dependency on logging.c in option_utils.c at all? This OMIT option\nonly exists to bypass the specific logging needs where this gets\nadded. That does not seem like a well-adapted design to me in the long term,\nnor am I a fan of specific error codes for a code path that's just\ngoing to be used to parse command options.\n\n> I added two more similar functions option_parse_long/double. The\n> former is a clone of _int. The latter doesn't perform explicit range\n> checks for the reason described above.\n\nThese have a limited impact, so I would limit things to int32 for\nnow.\n\n> Maybe we need to make pg_upgrade use the common-logging features\n> instead of its own, but it is not included in this patch.\n\nMaybe. That would be good in the long term, though its case is very\nparticular.\n\n> The attached patch needs more polish but should be enough to tell what\n> I have in my mind.\n\nThis breaks some of the TAP tests of pgbench and pg_dump, at first\nglance.\n\nThe checks for the port value in pg_receivewal and pg_recvlogical are\nstrange to have. We don't care about that in any other tools.\n\nThe number of checks for --jobs and workers could be made more\nconsistent across the board, but I have left that out for now.\n\nHacking on that, I am finishing with the attached. It is less\nambitious, but still very useful, as it removes a dozen custom error\nmessages in favor of the two introduced in option_utils.c. On\ntop of that it reduces the code a bit:\n 15 files changed, 156 insertions(+), 169 deletions(-) \n\nWhat do you think?\n--\nMichael",
"msg_date": "Wed, 21 Jul 2021 20:49:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Wed, 21 Jul 2021 at 23:50, Michael Paquier <michael@paquier.xyz> wrote:\n> Hacking on that, I am finishing with the attached. It is less\n> ambitious, still very useful as it removes a dozen of custom error\n> messages in favor of the two ones introduced in option_utils.c. On\n> top of that this reduces a bit the code:\n> 15 files changed, 156 insertions(+), 169 deletions(-)\n>\n> What do you think?\n\nThis is just a driveby review, but I think that it's good to see no\nincrease in the number of lines of code to make these improvements.\nIt's also good to see the added consistency introduced by this patch.\n\nI didn't test the patch, so this is just from reading through.\n\nI wondered about the TAP tests here:\n\ncommand_fails_like(\n[ 'pg_dump', '-j', '-1' ],\nqr/\\Qpg_dump: error: -j\\/--jobs must be in range 0..2147483647\\E/,\n'pg_dump: invalid number of parallel jobs');\n\ncommand_fails_like(\n[ 'pg_restore', '-j', '-1', '-f -' ],\nqr/\\Qpg_restore: error: -j\\/--jobs must be in range 0..2147483647\\E/,\n'pg_restore: invalid number of parallel jobs');\n\nI see both of these are limited to 64 on windows. Won't those fail on Windows?\n\nI also wondered if it would be worth doing #define MAX_JOBS somewhere\naway from the option parsing code. This part is pretty ugly:\n\n/*\n* On Windows we can only have at most MAXIMUM_WAIT_OBJECTS\n* (= 64 usually) parallel jobs because that's the maximum\n* limit for the WaitForMultipleObjects() call.\n*/\nif (!option_parse_int(optarg, \"-j/--jobs\", 0,\n#ifdef WIN32\n MAXIMUM_WAIT_OBJECTS,\n#else\n INT_MAX,\n#endif\n &numWorkers))\nexit(1);\nbreak;\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 00:32:39 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 12:32:39AM +1200, David Rowley wrote:\n> I see both of these are limited to 64 on windows. Won't those fail on Windows?\n\nYes, thanks, they would. I would just cut the range numbers from the\nexpected output here. This does not matter in terms of coverage\neither.\n\n> I also wondered if it would be worth doing #define MAX_JOBS somewhere\n> away from the option parsing code. This part is pretty ugly:\n\nAgreed as well. pg_dump and pg_restore have their own idea of\nparallelism in parallel.{c,h}. What about putting MAX_JOBS in\nparallel.h then?\n--\nMichael",
"msg_date": "Wed, 21 Jul 2021 21:44:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
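The cleanup agreed on here amounts to hiding the WIN32 special case behind one macro in a header, so that option-parsing call sites stay readable. A minimal sketch of the idea (the macro name `PG_MAX_JOBS` and the exact header placement are illustrative; the thread settles only on putting it in pg_dump's parallel.h):

```c
#include <assert.h>
#include <limits.h>

/*
 * Sketch of the MAX_JOBS idea: on Windows, parallel pg_dump/pg_restore
 * workers are waited on with WaitForMultipleObjects(), which caps the
 * handle count at MAXIMUM_WAIT_OBJECTS (usually 64).  Elsewhere there
 * is no such limit, so INT_MAX is fine.
 */
#ifdef WIN32
#define PG_MAX_JOBS 64			/* stand-in for MAXIMUM_WAIT_OBJECTS */
#else
#define PG_MAX_JOBS INT_MAX
#endif
```

With this, the ugly inline `#ifdef` in the getopt loop collapses to a single `option_parse_int(optarg, "-j/--jobs", 0, PG_MAX_JOBS, &numWorkers)`-style call.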
{
"msg_contents": "On Thu, 22 Jul 2021 at 00:44, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 22, 2021 at 12:32:39AM +1200, David Rowley wrote:\n> > I see both of these are limited to 64 on windows. Won't those fail on Windows?\n>\n> Yes, thanks, they would. I would just cut the range numbers from the\n> expected output here. This does not matter in terms of coverage\n> either.\n\nSounds good.\n\n> x> I also wondered if it would be worth doing #define MAX_JOBS somewhere\n> > away from the option parsing code. This part is pretty ugly:\n>\n> Agreed as well. pg_dump and pg_restore have their own idea of\n> parallelism in parallel.{c.h}. What about putting MAX_JOBS in\n> parallel.h then?\n\nparallel.h looks ok to me.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Jul 2021 01:19:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 01:19:41AM +1200, David Rowley wrote:\n> On Thu, 22 Jul 2021 at 00:44, Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Thu, Jul 22, 2021 at 12:32:39AM +1200, David Rowley wrote:\n>> > I see both of these are limited to 64 on windows. Won't those fail on Windows?\n>>\n>> Yes, thanks, they would. I would just cut the range numbers from the\n>> expected output here. This does not matter in terms of coverage\n>> either.\n> \n> Sounds good.\n> \n>> x> I also wondered if it would be worth doing #define MAX_JOBS somewhere\n>> > away from the option parsing code. This part is pretty ugly:\n>>\n>> Agreed as well. pg_dump and pg_restore have their own idea of\n>> parallelism in parallel.{c.h}. What about putting MAX_JOBS in\n>> parallel.h then?\n> \n> parallel.h looks ok to me.\n\nOkay, done those parts as per the attached. While on it, I noticed an\nextra one for pg_dump --rows-per-insert. I am counting 25 translated\nstrings cut in total.\n\nAny objections to this first step?\n--\nMichael",
"msg_date": "Thu, 22 Jul 2021 14:32:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On 2021-Jul-21, Michael Paquier wrote:\n\n> +/*\n> + * option_parse_int\n> + *\n> + * Parse an integer for a given option. Returns true if the parsing\n> + * could be done with optionally *result holding the parsed value, and\n> + * false on failure.\n> + */\n\nMay I suggest for the second sentence something like \"If the parsing is\nsuccessful, returns true and stores the result in *result if that's\ngiven; if parsing fails, returns false\"\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Aprender sin pensar es inútil; pensar sin aprender, peligroso\" (Confucio)\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:42:00 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 09:42:00AM -0400, Alvaro Herrera wrote:\n> May I suggest for the second sentence something like \"If the parsing is\n> successful, returns true and stores the result in *result if that's\n> given; if parsing fails, returns false\"\n\nSounds fine to me. Thanks.\n--\nMichael",
"msg_date": "Fri, 23 Jul 2021 06:09:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 02:32:35PM +0900, Michael Paquier wrote:\n> Okay, done those parts as per the attached. While on it, I noticed an\n> extra one for pg_dump --rows-per-insert. I am counting 25 translated\n> strings cut in total.\n> \n> Any objections to this first step?\n\nI have looked at that over the last couple of days, and applied it\nafter some small fixes, including indentation. The int64 and float\nparts are extra types we could handle, but I have not yet looked at\nhow much benefit we'd get in those cases.\n--\nMichael",
"msg_date": "Sat, 24 Jul 2021 19:41:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Sat, Jul 24, 2021 at 07:41:12PM +0900, Michael Paquier wrote:\n> I have looked at that over the last couple of days, and applied it\n> after some small fixes, including an indentation.\n\nOne thing that we forgot here is the handling of trailing\nwhitespace. Attached is a small patch to adjust that, with one\npositive and one negative test.\n\n> The int64 and float\n> parts are extra types we could handle, but I have not looked yet at\n> how much benefits we'd get in those cases.\n\nI have looked at these two, but there is really little benefit, so for\nnow I am not planning to do more in this area. For float options,\npg_basebackup --max-rate could be one target on top of the three sets\nof options in pgbench, but it needs to handle units :(\n--\nMichael",
"msg_date": "Mon, 26 Jul 2021 15:01:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
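The trailing-whitespace adjustment mentioned in this message can be sketched as follows. strtol() already skips leading whitespace, so for symmetry the parser also skips trailing whitespace before declaring the remainder junk. This is an illustration only, not the committed option_utils.c code (the helper name `parse_int_trimmed` is made up here):

```c
#include <assert.h>
#include <ctype.h>
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Accept "  7 " but reject "7 x": strtol() handles the leading
 * whitespace for us, and we explicitly consume trailing whitespace
 * before checking that the whole string was used.
 */
static bool
parse_int_trimmed(const char *s, int *result)
{
	char	   *endptr;
	long		val;

	errno = 0;
	val = strtol(s, &endptr, 10);
	if (endptr == s || errno == ERANGE)
		return false;			/* nothing parsed, or overflow */
	while (isspace((unsigned char) *endptr))
		endptr++;				/* ignore trailing whitespace */
	if (*endptr != '\0')
		return false;			/* trailing garbage */
	*result = (int) val;
	return true;
}
```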
{
"msg_contents": "At Mon, 26 Jul 2021 15:01:35 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sat, Jul 24, 2021 at 07:41:12PM +0900, Michael Paquier wrote:\n> > I have looked at that over the last couple of days, and applied it\n> > after some small fixes, including an indentation.\n> \n> One thing that we forgot here is the handling of trailing\n> whitespaces. Attached is small patch to adjust that, with one\n> positive and one negative tests.\n> \n> > The int64 and float\n> > parts are extra types we could handle, but I have not looked yet at\n> > how much benefits we'd get in those cases.\n> \n> I have looked at these two but there is really less benefits, so for\n> now I am not planning to do more in this area. For float options,\n> pg_basebackup --max-rate could be one target on top of the three set\n> of options in pgbench, but it needs to handle units :(\n\nThanks for revising and committing! I'm fine with all of the recent\ndiscussion on the committed part. Though I don't think it applies to\n\"live\" command line options, making the trimming policy symmetric\nlooks good. I see the same done in several similar uses of strto[il].\n\nThe change in 020_pg_receivewal.pl results in a chain of four\nsuccessive failures, which is fine. But the last failure (#23) happens\nfor a somewhat dubious reason.\n\n> Use of uninitialized value in pattern match (m//) at t/020_pg_receivewal.pl line 114.\n> not ok 23 - one partial WAL segment is now completed\n\nIt might not be worth amending, but we don't need to use m/// there;\neq works fine.\n\n020_pg_receivewal.pl: 114\n-\tis($zlib_wals[0] =~ m/$partial_wals[0]/,\n+\tis($zlib_wals[0] eq $partial_wals[0],\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 26 Jul 2021 17:46:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 05:46:22PM +0900, Kyotaro Horiguchi wrote:\n> Thanks for revising and committing! I'm fine with all of the recent\n> discussions on the committed part. Though I don't think it works for\n> \"live\" command line options, making the omitting policy symmetric\n> looks good. I see the same done in several similar use of strto[il].\n\nOK, applied this one. So for now we are done here.\n\n> The change in 020_pg_receivewal.pl results in a chain of four\n> successive failures, which is fine. But the last failure (#23) happens\n> for a bit dubious reason.\n\nYes, I saw that as well. I'll address that separately.\n--\nMichael",
"msg_date": "Tue, 27 Jul 2021 10:47:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect usage of strtol, atoi for non-numeric junk inputs"
}
],
[
{
"msg_contents": "Hello,\n\nWe had a customer report in which `refresh materialized view \nCONCURRENTLY` failed with: `ERROR: column reference \"mv\" is ambiguous`\n\nThey're using `mv` as an alias for one column, and this is causing a \ncollision with an internal alias. They also made it reproducible like this:\n```\ncreate materialized view testmv as select 'asdas' mv; --ok\ncreate unique index on testmv (mv); --ok\nrefresh materialized view testmv; --ok\nrefresh materialized view CONCURRENTLY testmv; ---BAM!\n```\n\n```\nERROR: column reference \"mv\" is ambiguous\nLINE 1: ...alog.=) mv.mv AND newdata OPERATOR(pg_catalog.*=) mv) WHERE ...\n ^\nQUERY: CREATE TEMP TABLE pg_temp_4.pg_temp_218322_2 AS SELECT mv.ctid \nAS tid, newdata FROM public.testmv mv FULL JOIN pg_temp_4.pg_temp_218322 \nnewdata ON (newdata.mv OPERATOR(pg_catalog.=) mv.mv AND newdata \nOPERATOR(pg_catalog.*=) mv) WHERE newdata IS NULL OR mv IS NULL ORDER BY tid\n```\n\nThe corresponding code is in `matview.c`, in the function \n`refresh_by_match_merge`. By adding a prefix like `_pg_internal_` we \ncould make collisions pretty unlikely, without intrusive changes.\n\nThe attached patch makes this change for the aliases `mv`, `newdata` and \n`newdata2`.\n\nKind regards,\nMathis",
"msg_date": "Wed, 19 May 2021 14:03:01 +0200",
"msg_from": "Mathis Rudolf <mathis.rudolf@credativ.de>",
"msg_from_op": true,
"msg_subject": "Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, May 19, 2021 at 5:33 PM Mathis Rudolf <mathis.rudolf@credativ.de> wrote:\n>\n> Hello,\n>\n> we had a Customer-Report in which `refresh materialized view\n> CONCURRENTLY` failed with: `ERROR: column reference \"mv\" is ambiguous`\n>\n> They're using `mv` as an alias for one column and this is causing a\n> collision with an internal alias. They also made it reproducible like this:\n> ```\n> create materialized view testmv as select 'asdas' mv; --ok\n> create unique index on testmv (mv); --ok\n> refresh materialized view testmv; --ok\n> refresh materialized view CONCURRENTLY testmv; ---BAM!\n> ```\n>\n> ```\n> ERROR: column reference \"mv\" is ambiguous\n> LINE 1: ...alog.=) mv.mv AND newdata OPERATOR(pg_catalog.*=) mv) WHERE ...\n> ^\n> QUERY: CREATE TEMP TABLE pg_temp_4.pg_temp_218322_2 AS SELECT mv.ctid\n> AS tid, newdata FROM public.testmv mv FULL JOIN pg_temp_4.pg_temp_218322\n> newdata ON (newdata.mv OPERATOR(pg_catalog.=) mv.mv AND newdata\n> OPERATOR(pg_catalog.*=) mv) WHERE newdata IS NULL OR mv IS NULL ORDER BY tid\n> ```\n>\n> The corresponding Code is in `matview.c` in function\n> `refresh_by_match_merge`. With adding a prefix like `_pg_internal_` we\n> could make collisions pretty unlikely, without intrusive changes.\n>\n> The appended patch does this change for the aliases `mv`, `newdata` and\n> `newdata2`.\n\nI think it's better to have some random name, see below. We could\neither use \"OIDNewHeap\" or \"MyBackendId\" to make those column names\nunique and almost unguessable. So, something like \"pg_temp1_XXXX\",\n\"pg_temp2_XXXX\" or \"pg_temp3_XXXX\" and so on would be better IMO.\n\n snprintf(NewHeapName, sizeof(NewHeapName), \"pg_temp_%u\", OIDOldHeap);\n snprintf(namespaceName, sizeof(namespaceName), \"pg_temp_%d\", MyBackendId);\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 May 2021 18:06:03 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wednesday, 19.05.2021 at 18:06 +0530, Bharath Rupireddy wrote:\n> > The corresponding Code is in `matview.c` in function\n> > `refresh_by_match_merge`. With adding a prefix like `_pg_internal_`\n> > we\n> > could make collisions pretty unlikely, without intrusive changes.\n> > \n> > The appended patch does this change for the aliases `mv`, `newdata`\n> > and\n> > `newdata2`.\n> \n> I think it's better to have some random name, see below. We could\n> either use \"OIDNewHeap\" or \"MyBackendId\" to make those column names\n> unique and almost unguessable. So, something like \"pg_temp1_XXXX\",\n> \"pg_temp2_XXXX\" or \"pg_temp3_XXXX\" and so on would be better IMO.\n> \n> snprintf(NewHeapName, sizeof(NewHeapName), \"pg_temp_%u\",\n> OIDOldHeap);\n> snprintf(namespaceName, sizeof(namespaceName), \"pg_temp_%d\",\n> MyBackendId);\n\nHmm, it's an idea, but this can also lead to pretty random failures if\nan unlucky user's query-generating tool had the same idea as the\nbackend, no? Not sure if that's really better.\n\nWith the current implementation of REFRESH MATERIALIZED VIEW\nCONCURRENTLY we always have the problem of possible collisions here;\nyou'd never get out of this area without analyzing the whole query for\nsuch collisions. \n\n\"mv\" looks like a very common alias (I use it all the time when\ntesting or playing around with materialized views, so I'm wondering why\nI didn't see this issue myself already). So the risk of such a\ncollision here looks very high. We can try to lower this risk by choosing an\nalias name that is not so common. With a static alias, however, you get\na static error condition, not something that fails now and then.\n\n\n-- \nThanks,\n\tBernd\n\n\n\n\n",
"msg_date": "Thu, 20 May 2021 16:21:57 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Thu, May 20, 2021 at 7:52 PM Bernd Helmle <mailings@oopsware.de> wrote:\n>\n> Am Mittwoch, dem 19.05.2021 um 18:06 +0530 schrieb Bharath Rupireddy:\n> > > The corresponding Code is in `matview.c` in function\n> > > `refresh_by_match_merge`. With adding a prefix like `_pg_internal_`\n> > > we\n> > > could make collisions pretty unlikely, without intrusive changes.\n> > >\n> > > The appended patch does this change for the aliases `mv`, `newdata`\n> > > and\n> > > `newdata2`.\n> >\n> > I think it's better to have some random name, see below. We could\n> > either use \"OIDNewHeap\" or \"MyBackendId\" to make those column names\n> > unique and almost unguessable. So, something like \"pg_temp1_XXXX\",\n> > \"pg_temp2_XXXX\" or \"pg_temp3_XXXX\" and so on would be better IMO.\n> >\n> > snprintf(NewHeapName, sizeof(NewHeapName), \"pg_temp_%u\",\n> > OIDOldHeap);\n> > snprintf(namespaceName, sizeof(namespaceName), \"pg_temp_%d\",\n> > MyBackendId);\n>\n> Hmm, it's an idea, but this can also lead to pretty random failures if\n> you have an unlucky user who had the same idea in its generating query\n> tool than the backend, no? Not sure if that's really better.\n>\n> With the current implementation of REFRESH MATERIALIZED VIEW\n> CONCURRENTLY we always have the problem of possible collisions here,\n> you'd never get out of this area without analyzing the whole query for\n> such collisions.\n>\n> \"mv\" looks like a very common alias (i use it all over the time when\n> testing or playing around with materialized views, so i'm wondering why\n> i didn't see this issue already myself). So the risk here for such a\n> collision looks very high. We can try to lower this risk by choosing an\n> alias name, which is not so common. 
With a static alias however you get\n> a static error condition, not something that fails here and then.\n\nAnother idea is to use the random() function to generate the required\nnumber of uint32 random values (refresh_by_match_merge might need 3\nvalues to replace newdata, newdata2 and mv) and use names like\npg_temp_rmv_<<rand_no1>>, pg_temp_rmv_<<rand_no2>> and so on. This\nwould make the names unguessable. Note that we use this in\nchoose_dsm_implementation, dsm_impl_posix.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 May 2021 21:14:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Thu, May 20, 2021 at 09:14:45PM +0530, Bharath Rupireddy wrote:\n> On Thu, May 20, 2021 at 7:52 PM Bernd Helmle <mailings@oopsware.de> wrote:\n>> \"mv\" looks like a very common alias (i use it all over the time when\n>> testing or playing around with materialized views, so i'm wondering why\n>> i didn't see this issue already myself). So the risk here for such a\n>> collision looks very high. We can try to lower this risk by choosing an\n>> alias name, which is not so common. With a static alias however you get\n>> a static error condition, not something that fails here and then.\n> \n> Another idea is to use random() function to generate required number\n> of uint32 random values(refresh_by_match_merge might need 3 values to\n> replace newdata, newdata2 and mv) and use the names like\n> pg_temp_rmv_<<rand_no1>>, pg_temp_rmv_<<rand_no2>> and so on. This\n> would make the name unguessable. Note that we use this in\n> choose_dsm_implementation, dsm_impl_posix.\n\nI am not sure that I see the point of using random() here\nwhile the backend ID, or just the PID, would easily provide enough\nentropy for this internal alias. I agree that \"mv\" is a bad choice\nfor this alias name. One thing that comes to mind here is to use an\nalias similar to what we do for dropped attributes, say \n........pg.matview.%d........ where %d is the PID. This is very\nunlikely to cause conflicts.\n--\nMichael",
"msg_date": "Fri, 21 May 2021 09:38:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
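The naming scheme floated in this message can be sketched as below: derive the internal aliases from the backend PID (or backend ID) so that a user-chosen column name such as "mv" cannot realistically collide. The helper name `make_internal_alias` and the exact format are hypothetical; the real patch builds on make_temptable_name_n() in src/backend/commands/matview.c.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Build a collision-resistant internal alias by appending a per-backend
 * identifier (here a PID) and a sequence number to a fixed base.  A
 * user's own "mv" alias can no longer shadow the generated one.
 */
static void
make_internal_alias(char *buf, size_t buflen, const char *base,
					int pid, int n)
{
	snprintf(buf, buflen, "%s_%d_%d", base, pid, n);
}
```

In the backend, the pid argument would come from MyProcPid (or MyBackendId), giving aliases like pg_temp_rmv_12345_1 for the old matview, the new data, and the diff relation.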
{
"msg_contents": "On Fri, May 21, 2021 at 6:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, May 20, 2021 at 09:14:45PM +0530, Bharath Rupireddy wrote:\n> > On Thu, May 20, 2021 at 7:52 PM Bernd Helmle <mailings@oopsware.de> wrote:\n> >> \"mv\" looks like a very common alias (i use it all over the time when\n> >> testing or playing around with materialized views, so i'm wondering why\n> >> i didn't see this issue already myself). So the risk here for such a\n> >> collision looks very high. We can try to lower this risk by choosing an\n> >> alias name, which is not so common. With a static alias however you get\n> >> a static error condition, not something that fails here and then.\n> >\n> > Another idea is to use random() function to generate required number\n> > of uint32 random values(refresh_by_match_merge might need 3 values to\n> > replace newdata, newdata2 and mv) and use the names like\n> > pg_temp_rmv_<<rand_no1>>, pg_temp_rmv_<<rand_no2>> and so on. This\n> > would make the name unguessable. Note that we use this in\n> > choose_dsm_implementation, dsm_impl_posix.\n>\n> I am not sure that I see the point of using a random() number here\n> while the backend ID, or just the PID, would easily provide enough\n> entropy for this internal alias. I agree that \"mv\" is a bad choice\n> for this alias name. One thing that comes in mind here is to use an\n> alias similar to what we do for dropped attributes, say\n> ........pg.matview.%d........ where %d is the PID. This will very\n> unlikely cause conflicts.\n\nI agree that backend ID and/or PID is enough. I'm not fully convinced\nwith using random(). To make it more concrete, how about something\nlike pg.matview.%d.%d (MyBackendId, MyProcPid)? If the user still sees\nsome collisions, then IMHO, it's better to ensure that this kind of\ntable/alias names are not generated outside of the server.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 May 2021 15:56:31 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Fri, May 21, 2021 at 03:56:31PM +0530, Bharath Rupireddy wrote:\n> On Fri, May 21, 2021 at 6:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> I am not sure that I see the point of using a random() number here\n>> while the backend ID, or just the PID, would easily provide enough\n>> entropy for this internal alias. I agree that \"mv\" is a bad choice\n>> for this alias name. One thing that comes in mind here is to use an\n>> alias similar to what we do for dropped attributes, say\n>> ........pg.matview.%d........ where %d is the PID. This will very\n>> unlikely cause conflicts.\n> \n> I agree that backend ID and/or PID is enough. I'm not fully convinced\n> with using random(). To make it more concrete, how about something\n> like pg.matview.%d.%d (MyBackendId, MyProcPid)? If the user still sees\n> some collisions, then IMHO, it's better to ensure that this kind of\n> table/alias names are not generated outside of the server.\n\nThere is no need to have the PID if MyBackendId is enough, so after\nconsidering it I would just choose something like what I quoted above.\nDon't we also need to worry about the queries using newdata, newdata2\nand diff as aliases? Would you like to implement a patch doing\nsomething like that?\n--\nMichael",
"msg_date": "Tue, 1 Jun 2021 10:41:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 7:11 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, May 21, 2021 at 03:56:31PM +0530, Bharath Rupireddy wrote:\n> > On Fri, May 21, 2021 at 6:08 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> I am not sure that I see the point of using a random() number here\n> >> while the backend ID, or just the PID, would easily provide enough\n> >> entropy for this internal alias. I agree that \"mv\" is a bad choice\n> >> for this alias name. One thing that comes in mind here is to use an\n> >> alias similar to what we do for dropped attributes, say\n> >> ........pg.matview.%d........ where %d is the PID. This will very\n> >> unlikely cause conflicts.\n> >\n> > I agree that backend ID and/or PID is enough. I'm not fully convinced\n> > with using random(). To make it more concrete, how about something\n> > like pg.matview.%d.%d (MyBackendId, MyProcPid)? If the user still sees\n> > some collisions, then IMHO, it's better to ensure that this kind of\n> > table/alias names are not generated outside of the server.\n>\n> There is no need to have the PID if MyBackendId is enough, so after\n> considering it I would just choose something like what I quoted above.\n> Don't we need also to worry about the queries using newdata, newdata2\n> and diff as aliases? Would you like to implement a patch doing\n> something like that?\n\nSure. PSA v2 patch. We can't have \".\" as separator in the alias names,\nso I used \"_\" instead.\n\nI used MyProcPid which seems more random than MyBackendId (which is\njust a number like 1,2,3...). Even with this, someone could argue that\nthey can look at the backend PID, use it in the materialized view\nnames just to trick the server. I'm not sure if anyone would want to\ndo this.\n\nI used the existing function make_temptable_name_n to prepare the\nalias names. The advantage of this is that the code looks cleaner, but\nit leaks memory, 1KB string for each call of the function. 
This is\nalso true of the existing usage of the function. Now, we will have 5\nmake_temptable_name_n function calls leaking 5KB of memory. And we also\nhave quote_qualified_identifier leaking memory: 2 function calls, 2KB.\nSo, in total, these two functions will leak 7KB of memory (with the\npatch).\n\nShall I pfree the memory for all the strings returned by the functions\nmake_temptable_name_n and quote_qualified_identifier? The problem is\nthat pfree isn't cheap either.\nOr shall we leave it as is so that the memory will be freed by the context?\n\nNote I have not added tests for this, as the code is covered by the\nexisting tests.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Tue, 1 Jun 2021 13:13:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Tuesday, 01.06.2021 at 13:13 +0530, Bharath Rupireddy wrote:\n> I used MyProcPid which seems more random than MyBackendId (which is\n> just a number like 1,2,3...). Even with this, someone could argue\n> that\n> they can look at the backend PID, use it in the materialized view\n> names just to trick the server. I'm not sure if anyone would want to\n> do this.\n> \n> \n\nA generated query likely uses just an incremented value derived from\nsomewhere, and in my opinion 1,2,3 makes it more likely that you get a\nchance of collisions if you managed to get the same alias prefix\nsomehow. So +1 for MyProcPid...\n\n> I used the existing function make_temptable_name_n to prepare the\n> alias names. The advantage of this is that the code looks cleaner,\n> but\n> it leaks memory, 1KB string for each call of the function. This is\n> also true with the existing usage of the function. Now, we will have\n> 5\n> make_temptable_name_n function calls leaking 5KB memory. And we also\n> have quote_qualified_identifier leaking memory, 2 function calls,\n> 2KB.\n> So, in total, these two functions will leak 7KB of memory (with the\n> patch).\n> \n> Shall I pfree the memory for all the strings returned by the\n> functions\n> make_temptable_name_n and quote_qualified_identifier? The problem is\n> that pfree isn't cheaper.\n> Or shall we leave it as is so that the memory will be freed up by the\n> context?\n> \n\nAFAICS the memory context is deleted immediately after execution, so\nI'd assume it's okay.\n\n\n\n-- \nThanks,\n\tBernd\n\n\n\n\n",
"msg_date": "Tue, 01 Jun 2021 13:54:38 +0200",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 5:24 PM Bernd Helmle <mailings@oopsware.de> wrote:\n>\n> Am Dienstag, dem 01.06.2021 um 13:13 +0530 schrieb Bharath Rupireddy:\n> > I used MyProcPid which seems more random than MyBackendId (which is\n> > just a number like 1,2,3...). Even with this, someone could argue\n> > that\n> > they can look at the backend PID, use it in the materialized view\n> > names just to trick the server. I'm not sure if anyone would want to\n> > do this.\n> >\n>\n> A generated query likely uses just an incremented value derived from\n> somewhere and in my opinion 1,2,3 makes it more likely that you get a\n> chance for collisions if you managed to get the same alias prefix\n> somehow. So +1 with the MyProcPid...\n\nThanks.\n\n> > I used the existing function make_temptable_name_n to prepare the\n> > alias names. The advantage of this is that the code looks cleaner,\n> > but\n> > it leaks memory, 1KB string for each call of the function. This is\n> > also true with the existing usage of the function. Now, we will have\n> > 5\n> > make_temptable_name_n function calls leaking 5KB memory. And we also\n> > have quote_qualified_identifier leaking memory, 2 function calls,\n> > 2KB.\n> > So, in total, these two functions will leak 7KB of memory (with the\n> > patch).\n> >\n> > Shall I pfree the memory for all the strings returned by the\n> > functions\n> > make_temptable_name_n and quote_qualified_identifier? The problem is\n> > that pfree isn't cheaper.\n> > Or shall we leave it as is so that the memory will be freed up by the\n> > context?\n> >\n>\n> afaics the memory context is deleted after execution immediately, so\n> i'd assume it's okay.\n\nYes, the refresh operation happens in the \"PortalContext\", which gets\ndestroyed at the end of the query in PortalDrop.\n\nPSA v3 patch. I added a commit message and made some cosmetic adjustments.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Tue, 1 Jun 2021 19:31:51 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 2:02 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> PSA v3 patch. I added a commit message and made some cosmetic adjustments.\n\nReminds me of this fun topic in Lisp:\n\nhttps://en.wikipedia.org/wiki/Hygienic_macro#Strategies_used_in_languages_that_lack_hygienic_macros\n\nI wondered if we could find a way to make identifiers that regular\nqueries are prohibited from using, at least by documentation. You\ncould take advantage of the various constraints on unquoted\nidentifiers in the standard (for example, something involving $), but\nit does seem a shame to remove the ability for users to put absolutely\nanything except NUL in quoted identifiers. I do wonder if at least\nusing something like _$mv would be slightly more principled than\npg_mv_1234, since nothing says pg_XXX is reserved (except in some very\nspecific places like schema names), and the number on the end seems a\nbit cargo-cultish.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 12:30:55 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 12:30:55PM +1200, Thomas Munro wrote:\n> I wondered if we could find a way to make identifiers that regular\n> queries are prohibited from using, at least by documentation. You\n> could take advantage of the various constraints on unquoted\n> identifiers in the standard (for example, something involving $), but\n> it does seem a shame to remove the ability for users to put absolutely\n> anything except NUL in quoted identifiers. I do wonder if at least\n> using something like _$mv would be slightly more principled than\n> pg_mv_1234, since nothing says pg_XXX is reserved (except in some very\n> specific places like schema names), and the number on the end seems a\n> bit cargo-cultish.\n\nYeah, using an underscore at the beginning of the name would have the\nadvantage to mark the relation as an internal thing.\n\n+ \"(SELECT %s.tid FROM %s %s \"\n+ \"WHERE %s.tid IS NOT NULL \"\n+ \"AND %s.%s IS NULL)\",\nAnyway, I have another problem with the patch: readability. It\nbecomes really hard for one to guess to which object or alias portions\nof the internal queries refer to, especially with a total of five \ntemporary names lying around. I think that you should drop the\nbusiness with make_temptable_name_n(), and just append those extra\nunderscores and uses of MyProcPid directly in the query string. The\nsurroundings of quote_qualified_identifier() require two extra printf\ncalls, but that does not sound bad to me compared to the long-term\nmaintenance of those queries.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 10:03:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 6:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jun 02, 2021 at 12:30:55PM +1200, Thomas Munro wrote:\n> > I wondered if we could find a way to make identifiers that regular\n> > queries are prohibited from using, at least by documentation. You\n> > could take advantage of the various constraints on unquoted\n> > identifiers in the standard (for example, something involving $), but\n> > it does seem a shame to remove the ability for users to put absolutely\n> > anything except NUL in quoted identifiers. I do wonder if at least\n> > using something like _$mv would be slightly more principled than\n> > pg_mv_1234, since nothing says pg_XXX is reserved (except in some very\n> > specific places like schema names), and the number on the end seems a\n> > bit cargo-cultish.\n>\n> Yeah, using an underscore at the beginning of the name would have the\n> advantage to mark the relation as an internal thing.\n>\n> + \"(SELECT %s.tid FROM %s %s \"\n> + \"WHERE %s.tid IS NOT NULL \"\n> + \"AND %s.%s IS NULL)\",\n> Anyway, I have another problem with the patch: readability. It\n> becomes really hard for one to guess to which object or alias portions\n> of the internal queries refer to, especially with a total of five\n> temporary names lying around. I think that you should drop the\n> business with make_temptable_name_n(), and just append those extra\n> underscores and uses of MyProcPid directly in the query string. The\n> surroundings of quote_qualified_identifier() require two extra printf\n> calls, but that does not sound bad to me compared to the long-term\n> maintenance of those queries.\n\nThanks. PSA v4.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Wed, 2 Jun 2021 10:53:22 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 10:53:22AM +0530, Bharath Rupireddy wrote:\n> Thanks. PSA v4.\n\nThanks for the new version.\n\n+ MyProcPid, tempname, MyProcPid, MyProcPid,\n+ tempname, MyProcPid, MyProcPid, MyProcPid,\n+ MyProcPid, MyProcPid, MyProcPid);\nThis style is still a bit heavy-ish. Perhaps we should just come back\nto Thomas's suggestion and just use a prefix with _$ for all that.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 16:57:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 1:27 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jun 02, 2021 at 10:53:22AM +0530, Bharath Rupireddy wrote:\n> > Thanks. PSA v4.\n>\n> Thanks for the new version.\n>\n> + MyProcPid, tempname, MyProcPid, MyProcPid,\n> + tempname, MyProcPid, MyProcPid, MyProcPid,\n> + MyProcPid, MyProcPid, MyProcPid);\n> This style is still a bit heavy-ish. Perhaps we should just come back\n> to Thomas's suggestion and just use a prefix with _$ for all that.\n\nThanks.The changes with that approach are very minimal. PSA v5 and let\nme know if any more changes are needed.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Wed, 2 Jun 2021 15:44:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 03:44:45PM +0530, Bharath Rupireddy wrote:\n> Thanks.The changes with that approach are very minimal. PSA v5 and let\n> me know if any more changes are needed.\n\nSimple enough, so applied and back-patched. It took 8 years for \nsomebody to complain about the current aliases, so that should be\nenough to get us close to zero conflicts now. I have looked a bit to\nsee if anybody would use this naming convention, but could not find a\ntrace, FWIW.\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 15:56:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jun 02, 2021 at 03:44:45PM +0530, Bharath Rupireddy wrote:\n>> Thanks.The changes with that approach are very minimal. PSA v5 and let\n>> me know if any more changes are needed.\n\n> Simple enough, so applied and back-patched.\n\nI just came across this issue while preparing the release notes.\nISTM that people have expended a great deal of effort on a fundamentally\nunreliable solution, when a reliable one is easily available.\nThe originally complained-of issue was that a user-chosen column name\ncould collide with the query-chosen table name:\n\nERROR: column reference \"mv\" is ambiguous\nLINE 1: ...alog.=) mv.mv AND newdata OPERATOR(pg_catalog.*=) mv) WHERE ...\n\nThis is true, but it's self-inflicted damage, because all you have\nto do is write the query so that mv is clearly a table name:\n\n... mv.mv AND newdata.* OPERATOR(pg_catalog.*=) mv.*) WHERE ...\n\nAFAICT that works and generates the identical parse tree to the original\ncoding. The only place touched by the patch where it's a bit difficult to\nmake the syntax unambiguous this way is\n\n \"CREATE TEMP TABLE %s AS \"\n \"SELECT _$mv.ctid AS tid, _$newdata \"\n \"FROM %s _$mv FULL JOIN %s _$newdata ON (\",\n\nbecause newdata.* would ordinarily get expanded to multiple columns\nif it's at the top level of a SELECT list, and that's not what we want.\nHowever, that's easily fixed using the same hack as in ruleutils.c's\nget_variable: add a no-op cast to the table's rowtype. So this\nwould look like\n\n appendStringInfo(&querybuf,\n \"CREATE TEMP TABLE %s AS \"\n \"SELECT mv.ctid AS tid, newdata.*::%s \"\n \"FROM %s mv FULL JOIN %s newdata ON (\",\n diffname, matviewname, matviewname, tempname);\n\nGiven that it took this long to notice the problem at all, maybe\nthis is not a fix to cram in on the weekend before the release wrap.\nBut I don't see why we need to settle for \"mostly works\" when\n\"always works\" is barely any harder.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Aug 2021 10:48:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "I wrote:\n> I just came across this issue while preparing the release notes.\n> ISTM that people have expended a great deal of effort on a fundamentally\n> unreliable solution, when a reliable one is easily available.\n\nConcretely, I propose the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 06 Aug 2021 16:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "On Fri, Aug 06, 2021 at 10:48:40AM -0400, Tom Lane wrote:\n> AFAICT that works and generates the identical parse tree to the original\n> coding. The only place touched by the patch where it's a bit difficult to\n> make the syntax unambiguous this way is\n> \n> \"CREATE TEMP TABLE %s AS \"\n> \"SELECT _$mv.ctid AS tid, _$newdata \"\n> \"FROM %s _$mv FULL JOIN %s _$newdata ON (\",\n> \n> because newdata.* would ordinarily get expanded to multiple columns\n> if it's at the top level of a SELECT list, and that's not what we want.\n> However, that's easily fixed using the same hack as in ruleutils.c's\n> get_variable: add a no-op cast to the table's rowtype. So this\n> would look like\n> \n> appendStringInfo(&querybuf,\n> \"CREATE TEMP TABLE %s AS \"\n> \"SELECT mv.ctid AS tid, newdata.*::%s \"\n> \"FROM %s mv FULL JOIN %s newdata ON (\",\n> diffname, matviewname, matviewname, tempname);\n\nSmart piece. I haven't thought of that.\n\n> Given that it took this long to notice the problem at all, maybe\n> this is not a fix to cram in on the weekend before the release wrap.\n> But I don't see why we need to settle for \"mostly works\" when\n> \"always works\" is barely any harder.\n\nYes, I would vote to delay that for a couple of days. That's not\nworth taking a risk for.\n--\nMichael",
"msg_date": "Sat, 7 Aug 2021 10:40:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Aug 06, 2021 at 10:48:40AM -0400, Tom Lane wrote:\n>> Given that it took this long to notice the problem at all, maybe\n>> this is not a fix to cram in on the weekend before the release wrap.\n>> But I don't see why we need to settle for \"mostly works\" when\n>> \"always works\" is barely any harder.\n\n> Yes, I would vote to delay that for a couple of days. That's not\n> worth taking a risk for.\n\nI went ahead and created the patch, including test case, and it\nseems fine. So I'm leaning towards pushing that tomorrow. Mainly\nbecause I don't want to have to document \"we partially fixed this\"\nin this release set and then \"we really fixed it\" three months from\nnow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 06 Aug 2021 22:35:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Alias collision in `refresh materialized view concurrently`"
}
]
[
{
"msg_contents": "While playing around with the recent SSL testharness changes I wrote a test\nsuite for sslinfo as a side effect, which seemed valuable in its own right as\nwe currently have no coverage of this code. The small change needed to the\ntestharness is to support installing modules, which is broken out into 0001 for\neasier reading.\n\nI'll park this in the next commitfest for now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Wed, 19 May 2021 16:10:45 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "SSL Tests for sslinfo extension"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> In order to be able to test extensions with SSL connections, allow\n> configure_test_server_for_ssl to create any extensions passed as\n> comma separated list. Each extension is created in all the test\n> databases which may or may not be useful.\n\nWhy the comma-separated string, rather than an array reference,\ni.e. `extensions => [qw(foo bar baz)]`? Also, should it use `CREATE\nEXTENSION .. CASCADE`, in case the specified extensions depend on\nothers?\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n",
"msg_date": "Wed, 19 May 2021 18:01:58 +0100",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "\nOn 5/19/21 1:01 PM, Dagfinn Ilmari Mannsåker wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>\n>> In order to be able to test extensions with SSL connections, allow\n>> configure_test_server_for_ssl to create any extensions passed as\n>> comma separated list. Each extension is created in all the test\n>> databases which may or may not be useful.\n> Why the comma-separated string, rather than an array reference,\n> i.e. `extensions => [qw(foo bar baz)]`? Also, should it use `CREATE\n> EXTENSION .. CASCADE`, in case the specified extensions depend on\n> others?\n>\n\n\nAlso, instead of one line per db there should be an inner loop over the\ndb names.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 15:05:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "> On 19 May 2021, at 21:05, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> On 5/19/21 1:01 PM, Dagfinn Ilmari Mannsåker wrote:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>> \n>>> In order to be able to test extensions with SSL connections, allow\n>>> configure_test_server_for_ssl to create any extensions passed as\n>>> comma separated list. Each extension is created in all the test\n>>> databases which may or may not be useful.\n>> Why the comma-separated string, rather than an array reference,\n>> i.e. `extensions => [qw(foo bar baz)]`? \n\nNo real reason, I just haven't written Perl enough lately to \"think in Perl\".\nFixed in the attached.\n\n>> Also, should it use `CREATE\n>> EXTENSION .. CASCADE`, in case the specified extensions depend on\n>> others?\n\nGood point. Each extension will have to be in EXTRA_INSTALL as well of course,\nbut we should to CASCADE.\n\n> Also, instead of one line per db there should be an inner loop over the\n> db names.\n\nRight, I was lazily using the same approach as for CREATE DATABASE but when the\nlist is used it two places it should be a proper list. Fixed in the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 20 May 2021 20:40:48 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "On Thu, May 20, 2021 at 08:40:48PM +0200, Daniel Gustafsson wrote:\n> > On 19 May 2021, at 21:05, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> > On 5/19/21 1:01 PM, Dagfinn Ilmari Mannsåker wrote:\n> >> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> \n> >>> In order to be able to test extensions with SSL connections, allow\n> >>> configure_test_server_for_ssl to create any extensions passed as\n> >>> comma separated list. Each extension is created in all the test\n> >>> databases which may or may not be useful.\n> >> Why the comma-separated string, rather than an array reference,\n> >> i.e. `extensions => [qw(foo bar baz)]`? \n> \n> No real reason, I just haven't written Perl enough lately to \"think in Perl\".\n> Fixed in the attached.\n\nHmm. Adding internal dependencies between the tests should be avoided\nif we can. What would it take to move those TAP tests to\ncontrib/sslinfo instead? This is while keeping in mind that there was\na patch aimed at refactoring the SSL test suite for NSS.\n--\nMichael",
"msg_date": "Thu, 17 Jun 2021 16:29:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "> On 17 Jun 2021, at 09:29, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, May 20, 2021 at 08:40:48PM +0200, Daniel Gustafsson wrote:\n>>> On 19 May 2021, at 21:05, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> \n>>> On 5/19/21 1:01 PM, Dagfinn Ilmari Mannsåker wrote:\n>>>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>>> \n>>>>> In order to be able to test extensions with SSL connections, allow\n>>>>> configure_test_server_for_ssl to create any extensions passed as\n>>>>> comma separated list. Each extension is created in all the test\n>>>>> databases which may or may not be useful.\n>>>> Why the comma-separated string, rather than an array reference,\n>>>> i.e. `extensions => [qw(foo bar baz)]`? \n>> \n>> No real reason, I just haven't written Perl enough lately to \"think in Perl\".\n>> Fixed in the attached.\n> \n> Hmm. Adding internal dependencies between the tests should be avoided\n> if we can. What would it take to move those TAP tests to\n> contrib/sslinfo instead? This is while keeping in mind that there was\n> a patch aimed at refactoring the SSL test suite for NSS.\n\nIt would be quite invasive as we currently don't provide the SSLServer test\nharness outside of src/test/ssl, and given how tailored it is today I'm not\nsure doing so without a rewrite would be a good idea.\n\nA longer term solution would probably be to teach PostgresNode to provide an\ninstance set up for TLS in case the backend is compiled with TLS support, and\nuse that for things like sslinfo.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 23 Jun 2021 16:25:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 17 Jun 2021, at 09:29, Michael Paquier <michael@paquier.xyz> wrote:\n>> Hmm. Adding internal dependencies between the tests should be avoided\n>> if we can. What would it take to move those TAP tests to\n>> contrib/sslinfo instead? This is while keeping in mind that there was\n>> a patch aimed at refactoring the SSL test suite for NSS.\n\n> It would be quite invasive as we currently don't provide the SSLServer test\n> harness outside of src/test/ssl, and given how tailored it is today I'm not\n> sure doing so without a rewrite would be a good idea.\n\nI think testing sslinfo in src/test/ssl is fine, while putting its test\ninside contrib/ would be dangerous, because then the test would be run\nby default. src/test/ssl is not run by default because the server it\nstarts is potentially accessible by other local users, and AFAICS the\nsame has to be true for an sslinfo test.\n\nSo I don't have any problem with this structurally, but I do have a\nfew nitpicks:\n\n* I think the error message added in 0001 should complain about\nmissing password \"encryption\" not \"encoding\", no?\n\n* 0002 hasn't been updated for the great PostgresNode renaming.\n\n* 0002 needs to extend src/test/ssl/README to mention that\n\"make installcheck\" requires having installed contrib/sslinfo,\nanalogous to similar comments in (eg) src/test/recovery/README.\n\n* 0002 writes a temporary file in the source tree. This is bad;\nfor one thing I bet it fails under VPATH, but in any case there\nis no reason to risk it. Put it in the tmp_check directory instead\n(cf temp kdc files in src/test/kerberos/t/001_auth.pl). That's\nsafer and you needn't worry about cleaning it up.\n\n* Hmm ... now I notice that you borrowed the key-file-copying logic\nfrom the 001 and 002 tests, but it's just as bad practice there.\nWe should fix them too.\n\n* I ran a code-coverage check and it shows that this doesn't test\nssl_issuer_field() or any of the following functions in sslinfo.c.\nI think at least ssl_extension_info() is complicated enough to\ndeserve a test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Nov 2021 14:27:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "On Sat, Nov 27, 2021 at 02:27:19PM -0500, Tom Lane wrote:\n> I think testing sslinfo in src/test/ssl is fine, while putting its test\n> inside contrib/ would be dangerous, because then the test would be run\n> by default. src/test/ssl is not run by default because the server it\n> starts is potentially accessible by other local users, and AFAICS the\n> same has to be true for an sslinfo test.\n\nAh, indeed, good point. I completely forgot that we'd better control\nthis stuff with PG_TEST_EXTRA.\n--\nMichael",
"msg_date": "Sun, 28 Nov 2021 13:34:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "> On 27 Nov 2021, at 20:27, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I don't have any problem with this structurally, but I do have a\n> few nitpicks:\n\nThanks for reviewing!\n\n> * I think the error message added in 0001 should complain about\n> missing password \"encryption\" not \"encoding\", no?\n\nDoh, of course.\n\n> * 0002 hasn't been updated for the great PostgresNode renaming.\n\nFixed.\n\n> * 0002 needs to extend src/test/ssl/README to mention that\n> \"make installcheck\" requires having installed contrib/sslinfo,\n> analogous to similar comments in (eg) src/test/recovery/README.\n\nGood point, I copied over the wording from recovery/README and adapted for SSL\nsince I think it was well written as is. (Consistency is also a good benefit.)\n\n> * 0002 writes a temporary file in the source tree. This is bad;\n> for one thing I bet it fails under VPATH, but in any case there\n> is no reason to risk it. Put it in the tmp_check directory instead\n> (cf temp kdc files in src/test/kerberos/t/001_auth.pl). That's\n> safer and you needn't worry about cleaning it up.\n\nFixed, and see below.\n\n> * Hmm ... now I notice that you borrowed the key-file-copying logic\n> from the 001 and 002 tests, but it's just as bad practice there.\n> We should fix them too.\n\nWell spotted, I hadn't thought about that but in hindsight it's quite obviously\nbad. I've done this in a 0003 patch in this series which also comes with the\nIMO benefit of a tighter coupling between the key filename used in the test\nwith what's in the repo by removing the _tmp suffix. To avoid concatenating\nwith the long tmp_check path variable everywhere, I went with a lookup HASH to\nmake it easier on the eye and harder to mess up should we change tmp path at\nsome point. There might be ways which are more like modern Perl, but I wasn't\nable to think of one off the bat.\n\n> * I ran a code-coverage check and it shows that this doesn't test\n> ssl_issuer_field() or any of the following functions in sslinfo.c.\n> I think at least ssl_extension_info() is complicated enough to\n> deserve a test.\n\nAgreed. The attached v3 covers the issuer and extension function to at least\nsome degree. In order to reliably test the extension I added a new cert with a\nCA extension.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Mon, 29 Nov 2021 22:15:11 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> Agreed. The attached v3 covers the issuer and extension function to at least\n> some degree. In order to reliably test the extension I added a new cert with a\n> CA extension.\n\nI have two remaining trivial nitpicks, for which I attach an 0004\ndelta patch: the README change was fat-fingered slightly, and some\nof the commentary about the key file seems now obsolete.\n\nOtherwise I think it's good to go, so I marked it RFC.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 29 Nov 2021 17:50:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "> On 29 Nov 2021, at 23:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> Agreed. The attached v3 covers the issuer and extension function to at least\n>> some degree. In order to reliably test the extension I added a new cert with a\n>> CA extension.\n> \n> I have two remaining trivial nitpicks, for which I attach an 0004\n> delta patch: the README change was fat-fingered slightly, and some\n> of the commentary about the key file seems now obsolete.\n\nAh yes, thanks.\n\n> Otherwise I think it's good to go, so I marked it RFC.\n\nGreat! I'll take another look over it tomorrow and will go ahead with it then.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 30 Nov 2021 00:16:57 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: SSL Tests for sslinfo extension"
},
{
"msg_contents": "> On 30 Nov 2021, at 00:16, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> On 29 Nov 2021, at 23:50, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> Otherwise I think it's good to go, so I marked it RFC.\n> \n> Great! I'll take another look over it tomorrow and will go ahead with it then.\n\nI applied your nitpick diff and took it for another spin in CI, and pushed it.\nThanks for review!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 30 Nov 2021 11:52:24 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: SSL Tests for sslinfo extension"
}
]
[
{
"msg_contents": "Fwiw, if the PostgreSQL projects is considering moving the #postgresql\nIRC channel(s) elsewhere given [1,2], I'm a member of OFTC.net's network\noperations committee and would be happy to help.\n\n[1] https://gist.github.com/aaronmdjones/1a9a93ded5b7d162c3f58bdd66b8f491\n[2] https://fuchsnet.ch/freenode-resign-letter.txt\n\nChristoph\n\n\n",
"msg_date": "Wed, 19 May 2021 16:18:54 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Freenode woes"
},
{
"msg_contents": "On Wed, May 19, 2021 at 10:19 AM Christoph Berg <myon@debian.org> wrote:\n>\n> Fwiw, if the PostgreSQL projects is considering moving the #postgresql\n> IRC channel(s) elsewhere given [1,2], I'm a member of OFTC.net's network\n> operations committee and would be happy to help.\n>\n> [1] https://gist.github.com/aaronmdjones/1a9a93ded5b7d162c3f58bdd66b8f491\n> [2] https://fuchsnet.ch/freenode-resign-letter.txt\n>\n\nI've been wondering the same thing; given our relationship with SPI,\nOFTC seems like an option worthy of consideration.\nFor those unfamiliar, there is additional info about the network at\nhttps://www.oftc.net\n\n\nRobert Treat\nPostgreSQL Project SPI Liaison\nhttps://xzilla.net\n\n\n",
"msg_date": "Wed, 19 May 2021 16:27:45 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Freenode woes"
},
{
"msg_contents": "On 5/19/21 4:27 PM, Robert Treat wrote:\n> On Wed, May 19, 2021 at 10:19 AM Christoph Berg <myon@debian.org> wrote:\n>>\n>> Fwiw, if the PostgreSQL projects is considering moving the #postgresql\n>> IRC channel(s) elsewhere given [1,2], I'm a member of OFTC.net's network\n>> operations committee and would be happy to help.\n>>\n>> [1] https://gist.github.com/aaronmdjones/1a9a93ded5b7d162c3f58bdd66b8f491\n>> [2] https://fuchsnet.ch/freenode-resign-letter.txt\n>>\n> \n> I've been wondering the same thing; given our relationship with SPI,\n> OFTC seems like an option worthy of consideration.\n> For those unfamiliar, there is additional info about the network at\n> https://www.oftc.net\n\n\n+1 (at least so far for me...)\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Thu, 20 May 2021 08:24:11 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Freenode woes"
}
]
[
{
"msg_contents": "Hi, hackers.\n\nI've been playing with \"autoprepared\" patch, and have got isolation\n\"freeze-the-dead\" test stuck on first VACUUM FREEZE statement.\nAfter some research I found issue is reproduced with unmodified master\nbranch if extended protocol used. I've prepared ruby script for\ndemonstration (cause ruby-pg has simple interface to PQsendQueryParams).\n\nFurther investigation showed it happens due to portal is not dropped\ninside of exec_execute_message, and it is kept in third session till\nCOMMIT is called. Therefore heap page remains pinned, and VACUUM FREEZE\nbecame locked inside LockBufferForCleanup.\n\nIt seems that it is usually invisible to common users since either:\n- command is called as standalone and then transaction is closed\n immediately,\n- next PQsendQueryParams will initiate another unnamed portal using\n CreatePortal(\"\", true, true) and this action will drop previous\n one.\n\nBut \"freeze-the-dead\" remains locked since third session could not\nsend COMMIT until VACUUM FULL finished.\n\nI propose to add PortalDrop at the 'if (completed)' branch of\nexec_execute_message.\n\n--- a/src/backend/tcop/postgres.c\n+++ b/src/backend/tcop/postgres.c\n@@ -2209,6 +2209,8 @@ exec_execute_message(const char *portal_name, long \nmax_rows)\n\n if (completed)\n {\n+ PortalDrop(portal, false);\n+\n if (is_xact_command)\n {\n\nWith this change 'make check-world' runs without flaws (at least\non empty configure with enable-cassert and enable-tap-tests).\n\nThere is small chance applications exist which abuses seekable\nportals with 'execute' protocol message so not every completed\nportal can be safely dropped. But I believe there is some sane\nconditions that cover common case. For example, isn't empty name\ncheck is enough? Can client reset or seek portal with empty\nname?\n\nregards,\nSokolov Yura aka funny_falcon",
"msg_date": "Wed, 19 May 2021 19:18:27 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> I propose to add PortalDrop at the 'if (completed)' branch of\n> exec_execute_message.\n\nThis violates our wire protocol specification, which\nspecifically says\n\n If successfully created, a named portal object lasts till the end of\n the current transaction, unless explicitly destroyed. An unnamed\n portal is destroyed at the end of the transaction, or as soon as the\n next Bind statement specifying the unnamed portal as destination is\n issued. (Note that a simple Query message also destroys the unnamed\n portal.)\n\nI'm inclined to think that your complaint would be better handled\nby having the client send a portal-close command, if it's not\ngoing to do something else immediately.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 May 2021 14:23:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Tom Lane писал 2021-05-21 21:23:\n> Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n>> I propose to add PortalDrop at the 'if (completed)' branch of\n>> exec_execute_message.\n> \n> This violates our wire protocol specification, which\n> specifically says\n> \n> If successfully created, a named portal object lasts till the end \n> of\n> the current transaction, unless explicitly destroyed. An unnamed\n> portal is destroyed at the end of the transaction, or as soon as \n> the\n> next Bind statement specifying the unnamed portal as destination is\n> issued. (Note that a simple Query message also destroys the unnamed\n> portal.)\n> \n> I'm inclined to think that your complaint would be better handled\n> by having the client send a portal-close command, if it's not\n> going to do something else immediately.\n\nI thought about it as well. Then, if I understand correctly,\nPQsendQueryGuts and PQsendQueryInternal in pipeline mode should send\n\"close portal\" (CP) message after \"execute\" message, right?\n\nregards,\nSokolov Yura\n\n\n",
"msg_date": "Tue, 25 May 2021 03:58:43 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "On 2021-May-25, Yura Sokolov wrote:\n\n> Tom Lane писал 2021-05-21 21:23:\n> > Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> > > I propose to add PortalDrop at the 'if (completed)' branch of\n> > > exec_execute_message.\n> > \n> > This violates our wire protocol specification, which\n> > specifically says\n> > \n> > If successfully created, a named portal object lasts till the end of\n> > the current transaction, unless explicitly destroyed. An unnamed\n> > portal is destroyed at the end of the transaction, or as soon as the\n> > next Bind statement specifying the unnamed portal as destination is\n> > issued. (Note that a simple Query message also destroys the unnamed\n> > portal.)\n> > \n> > I'm inclined to think that your complaint would be better handled\n> > by having the client send a portal-close command, if it's not\n> > going to do something else immediately.\n> \n> I thought about it as well. Then, if I understand correctly,\n> PQsendQueryGuts and PQsendQueryInternal in pipeline mode should send\n> \"close portal\" (CP) message after \"execute\" message, right?\n\nI don't think they should do that. The portal remains open, and the\nlibpq interface does that. The portal gets closed at end of transaction\nwithout the need for any client message. I think if the client wanted\nto close the portal ahead of time, it would need a new libpq entry point\nto send the message to do that.\n\n(I didn't add a Close Portal message to PQsendQueryInternal in pipeline\nmode precisely because there is no such message in PQsendQueryGuts.\nI think it would be wrong to unconditionally add a Close Portal message\nto any of those places.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Wed, 26 May 2021 16:59:17 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> (I didn't add a Close Portal message to PQsendQueryInternal in pipeline\n> mode precisely because there is no such message in PQsendQueryGuts.\n> I think it would be wrong to unconditionally add a Close Portal message\n> to any of those places.)\n\nYeah, I'm not very comfortable with having libpq take it on itself\nto do that, either.\n\nLooking back at the original complaint, it seems like it'd be fair to\nwonder why we're still holding a page pin in a supposedly completed\nexecutor run. Maybe the right fix is somewhere in the executor\nscan logic.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 17:19:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Alvaro Herrera писал 2021-05-26 23:59:\n> On 2021-May-25, Yura Sokolov wrote:\n> \n>> Tom Lane писал 2021-05-21 21:23:\n>> > Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n>> > > I propose to add PortalDrop at the 'if (completed)' branch of\n>> > > exec_execute_message.\n>> >\n>> > This violates our wire protocol specification, which\n>> > specifically says\n>> >\n>> > If successfully created, a named portal object lasts till the end of\n>> > the current transaction, unless explicitly destroyed. An unnamed\n>> > portal is destroyed at the end of the transaction, or as soon as the\n>> > next Bind statement specifying the unnamed portal as destination is\n>> > issued. (Note that a simple Query message also destroys the unnamed\n>> > portal.)\n>> >\n>> > I'm inclined to think that your complaint would be better handled\n>> > by having the client send a portal-close command, if it's not\n>> > going to do something else immediately.\n>> \n>> I thought about it as well. Then, if I understand correctly,\n>> PQsendQueryGuts and PQsendQueryInternal in pipeline mode should send\n>> \"close portal\" (CP) message after \"execute\" message, right?\n> \n> I don't think they should do that. The portal remains open, and the\n> libpq interface does that. The portal gets closed at end of \n> transaction\n> without the need for any client message. I think if the client wanted\n> to close the portal ahead of time, it would need a new libpq entry \n> point\n> to send the message to do that.\n\n- PQsendQuery issues Query message, and exec_simple_query closes its\n portal.\n- people doesn't expect PQsendQueryParams to be different from\n PQsendQuery aside of parameter sending. The fact that the portal\n remains open is a significant, unexpected and undesired difference.\n- PQsendQueryGuts is used in PQsendQueryParams and PQsendQueryPrepared.\n It is always sends empty portal name and always \"send me all rows\"\n limit (zero). 
Both usages are certainly to just perform query and\n certainly no one expects portal remains open.\n\n> (I didn't add a Close Portal message to PQsendQueryInternal in pipeline\n> mode precisely because there is no such message in PQsendQueryGuts.\n\nBut PQsendQueryInternal should replicate behavior of PQsendQuery and\nnot PQsendQueryParams. Despite it has to use new protocol, it should\nbe indistinguishable to user, therefore portal should be closed.\n\n> I think it would be wrong to unconditionally add a Close Portal message\n> to any of those places.)\n\nWhy? If you foresee problems, please share your mind.\n\nregards,\nSokolov Yura aka funny_falcon\n\n\n",
"msg_date": "Thu, 27 May 2021 14:45:50 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Tom Lane wrote 2021-05-27 00:19:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> (I didn't add a Close Portal message to PQsendQueryInternal in \n>> pipeline\n>> mode precisely because there is no such message in PQsendQueryGuts.\n>> I think it would be wrong to unconditionally add a Close Portal \n>> message\n>> to any of those places.)\n> \n> Yeah, I'm not very comfortable with having libpq take it on itself\n> to do that, either.\n\nBut...\n\nTom Lane wrote 2021-05-21 21:23:\n> I'm inclined to think that your complaint would be better handled\n> by having the client send a portal-close command, if it's not\n> going to do something else immediately.\n\nAnd given PQsendQueryParams should not be different from\nPQsendQuery (aside of parameters sending) why shouldn't it close\nits portal immediately, like it happens in exec_simple_query ?\n\nI really doubt user of PQsendQueryPrepared is aware of portal as\nwell since it is also unnamed and also exhausted (because\nPQsendQueryGuts always sends \"send me all rows\" limit).\n\nAnd why PQsendQueryInternal should behave differently in pipelined\nand not pipelined mode? It closes portal in not pipelined mode,\nand will not close portal of last query in pipelined mode (inside\nof transaction).\n\n> Looking back at the original complaint, it seems like it'd be fair to\n> wonder why we're still holding a page pin in a supposedly completed\n> executor run. Maybe the right fix is somewhere in the executor\n> scan logic.\n\nPerhaps because query is simple and portal is created as seek-able?\n\n> \n> \t\t\tregards, tom lane\n\nregards\nYura Sokolov\n\n\n",
"msg_date": "Thu, 27 May 2021 14:54:11 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "On 2021-May-27, Yura Sokolov wrote:\n\n> Alvaro Herrera писал 2021-05-26 23:59:\n\n> > I don't think they should do that. The portal remains open, and the\n> > libpq interface does that. The portal gets closed at end of transaction\n> > without the need for any client message. I think if the client wanted\n> > to close the portal ahead of time, it would need a new libpq entry point\n> > to send the message to do that.\n> \n> - PQsendQuery issues Query message, and exec_simple_query closes its\n> portal.\n> - people doesn't expect PQsendQueryParams to be different from\n> PQsendQuery aside of parameter sending. The fact that the portal\n> remains open is a significant, unexpected and undesired difference.\n> - PQsendQueryGuts is used in PQsendQueryParams and PQsendQueryPrepared.\n> It is always sends empty portal name and always \"send me all rows\"\n> limit (zero). Both usages are certainly to just perform query and\n> certainly no one expects portal remains open.\n\nThinking about it some more, Yura's argument about PQsendQuery does make\nsense -- since what we're doing is replacing the use of a 'Q' message\njust because we can't use it when in pipeline mode, then it is\nreasonable to think that the replacement ought to have the same\nbehavior. Upon receipt of a 'Q' message, the portal is closed\nautomatically, and ISTM that that behavior should be preserved.\n\nThat change would not solve the problem he complains about, because IIUC\nhis framework is using PQsendQueryPrepared, which I'm not proposing to\nchange. It just removes the other discrepancy that was discussed in the\nthread.\n\nThe attached patch does it. Any opinions?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n\n\n",
"msg_date": "Mon, 7 Jun 2021 17:07:34 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> The attached patch does it. Any opinions?\n\nMy opinion is there's no patch here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Jun 2021 17:59:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "On 2021-Jun-07, Alvaro Herrera wrote:\n\n> The attached patch does it. Any opinions?\n\nEh, really attached.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"No es bueno caminar con un hombre muerto\"",
"msg_date": "Mon, 7 Jun 2021 18:08:33 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-07, Alvaro Herrera wrote:\n>> The attached patch does it. Any opinions?\n\n> Eh, really attached.\n\nNo particular objection. I'm not sure this will behave quite the\nsame as simple-Query in error cases, but probably it's close enough.\n\nI'm still wondering though why Yura is observing resources remaining\nheld by an executed-to-completion Portal. I think investigating that\nmight be more useful than tinkering with pipeline mode.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Jun 2021 18:15:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "I wrote:\n> I'm still wondering though why Yura is observing resources remaining\n> held by an executed-to-completion Portal. I think investigating that\n> might be more useful than tinkering with pipeline mode.\n\nI got a chance to look into this finally. The lens I've been looking\nat this through is \"why are we still holding any buffer pins when\nExecutorRun finishes?\". Normal table scan nodes won't do that.\n\nIt turns out that the problem is specific to SELECT FOR UPDATE, and\nit happens because nodeLockRows is not careful to shut down the\nEvalPlanQual mechanism it uses before returning NULL at the end of\na scan. If EPQ has been fired, it'll be holding a tuple slot\nreferencing whatever tuple it was last asked about. The attached\ntrivial patch seems to take care of the issue nicely, while adding\nlittle if any overhead. (A repeat call to EvalPlanQualEnd doesn't\ndo much.)\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 09 Jun 2021 13:25:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "I wrote:\n> It turns out that the problem is specific to SELECT FOR UPDATE, and\n> it happens because nodeLockRows is not careful to shut down the\n> EvalPlanQual mechanism it uses before returning NULL at the end of\n> a scan. If EPQ has been fired, it'll be holding a tuple slot\n> referencing whatever tuple it was last asked about. The attached\n> trivial patch seems to take care of the issue nicely, while adding\n> little if any overhead. (A repeat call to EvalPlanQualEnd doesn't\n> do much.)\n\nBTW, to be clear: I think Alvaro's change is also necessary.\nIf libpq is going to silently do something different in pipeline\nmode than it does in normal mode, it should strive to minimize\nthe semantic difference. exec_simple_query includes a PortalDrop,\nso we'd best do the same in pipeline mode. But this LockRows\nmisbehavior would be visible in other operating modes anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Jun 2021 15:07:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "On 2021-Jun-09, Tom Lane wrote:\n\n> I wrote:\n> > It turns out that the problem is specific to SELECT FOR UPDATE, and\n> > it happens because nodeLockRows is not careful to shut down the\n> > EvalPlanQual mechanism it uses before returning NULL at the end of\n> > a scan. If EPQ has been fired, it'll be holding a tuple slot\n> > referencing whatever tuple it was last asked about. The attached\n> > trivial patch seems to take care of the issue nicely, while adding\n> > little if any overhead. (A repeat call to EvalPlanQualEnd doesn't\n> > do much.)\n\nThanks for researching that -- good find.\n\n> BTW, to be clear: I think Alvaro's change is also necessary.\n> If libpq is going to silently do something different in pipeline\n> mode than it does in normal mode, it should strive to minimize\n> the semantic difference. exec_simple_query includes a PortalDrop,\n> so we'd best do the same in pipeline mode. But this LockRows\n> misbehavior would be visible in other operating modes anyway.\n\nAgreed. I'll get it pushed after the patch I'm currently looking at.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Wed, 9 Jun 2021 15:34:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Alvaro Herrera wrote 2021-06-08 00:07:\n> On 2021-May-27, Yura Sokolov wrote:\n> \n>> Alvaro Herrera писал 2021-05-26 23:59:\n> \n>> > I don't think they should do that. The portal remains open, and the\n>> > libpq interface does that. The portal gets closed at end of transaction\n>> > without the need for any client message. I think if the client wanted\n>> > to close the portal ahead of time, it would need a new libpq entry point\n>> > to send the message to do that.\n>> \n>> - PQsendQuery issues Query message, and exec_simple_query closes its\n>> portal.\n>> - people doesn't expect PQsendQueryParams to be different from\n>> PQsendQuery aside of parameter sending. The fact that the portal\n>> remains open is a significant, unexpected and undesired difference.\n>> - PQsendQueryGuts is used in PQsendQueryParams and \n>> PQsendQueryPrepared.\n>> It is always sends empty portal name and always \"send me all rows\"\n>> limit (zero). Both usages are certainly to just perform query and\n>> certainly no one expects portal remains open.\n> \n> Thinking about it some more, Yura's argument about PQsendQuery does \n> make\n> sense -- since what we're doing is replacing the use of a 'Q' message\n> just because we can't use it when in pipeline mode, then it is\n> reasonable to think that the replacement ought to have the same\n> behavior. Upon receipt of a 'Q' message, the portal is closed\n> automatically, and ISTM that that behavior should be preserved.\n> \n> That change would not solve the problem he complains about, because \n> IIUC\n> his framework is using PQsendQueryPrepared, which I'm not proposing to\n> change. It just removes the other discrepancy that was discussed in \n> the\n> thread.\n> \n> The attached patch does it. 
Any opinions?\n\nI'm propose to change PQsendQueryParams and PQsendQueryPrepared\n(through change of PQsendQueryGuts) since they both has semantic\n\"execute unnamed portal till the end and send me all rows\".\n\nExtended protocol were introduced by Tom Lane on 2003-05-05\nin 16503e6fa4a13051debe09698b6db9ce0d509af8\nThis commit already has Close ('C') message.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=16503e6fa4a13051debe09698b6db9ce0d509af8\n\nlibpq adoption of extended protocol were made by Tom month later\non 2003-06-23 in efc3a25bb02ada63158fe7006673518b005261ba\nand there is already no Close message in PQsendQueryParams.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=efc3a25bb02ada63158fe7006673518b005261ba\n\nI didn't found any relevant discussion in pgsql-hackers on May\nand June 2003.\n\nThis makes me think, Close message were intended to be used\nbut simply forgotten when libpq patch were made.\n\nTom, could I be right?\n\nregards,\nYura.\n\n\n",
"msg_date": "Fri, 11 Jun 2021 08:21:01 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> This makes me think, Close message were intended to be used\n> but simply forgotten when libpq patch were made.\n> Tom, could I be right?\n\nYou could argue all day about what the intentions were nearly twenty\nyears ago. But the facts on the ground are that we don't issue Close\nin those places, and changing it now would be a de facto protocol\nchange for applications. So I'm a hard -1 on these proposals.\n\n(Alvaro's proposed change isn't a protocol break, since pipeline\nmode hasn't shipped yet. It's trying to make some brand new code\nact more like old code, which seems like a fine idea.)\n\nI think that the actual problem here has been resolved in\ncommit bb4aed46a. Perhaps we should reconsider my decision not to\nback-patch that. Unlike a protocol change, that one could possibly\nbe sane to back-patch. I didn't think it was worth the trouble and\nrisk, but maybe there's a case for it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Jun 2021 09:38:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
},
{
"msg_contents": "On 2021-Jun-09, Tom Lane wrote:\n\n> BTW, to be clear: I think Alvaro's change is also necessary.\n> If libpq is going to silently do something different in pipeline\n> mode than it does in normal mode, it should strive to minimize\n> the semantic difference. exec_simple_query includes a PortalDrop,\n> so we'd best do the same in pipeline mode.\n\nPushed that patch, thanks.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n",
"msg_date": "Fri, 11 Jun 2021 16:21:29 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add PortalDrop in exec_execute_message"
}
] |
[
{
"msg_contents": "I would like to add a thread on pgsql-docs to the commitfest, but I\nfound that that cannot be done.\n\nWhat is the best way to proceed?\nSince we have a \"documentation\" section in the commitfest, it would\nbe useful to allow links to the -docs archives.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Wed, 19 May 2021 18:58:12 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "Greetings,\n\n* Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> I would like to add a thread on pgsql-docs to the commitfest, but I\n> found that that cannot be done.\n> \n> What is the best way to proceed?\n> Since we have a \"documentation\" section in the commitfest, it would\n> be useful to allow links to the -docs archives.\n\n... or get rid of the pgsql-docs mailing list, as has been suggested\nbefore.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 19 May 2021 13:01:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n>> Since we have a \"documentation\" section in the commitfest, it would\n>> be useful to allow links to the -docs archives.\n\n> ... or get rid of the pgsql-docs mailing list, as has been suggested\n> before.\n\nIIRC, the CF app also rejects threads on pgsql-bugs, which is even\nmore pointlessly annoying. Couldn't we just remove that restriction\naltogether, and allow anything posted to some pgsql list?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 May 2021 13:39:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "> On 19 May 2021, at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n>>> Since we have a \"documentation\" section in the commitfest, it would\n>>> be useful to allow links to the -docs archives.\n> \n>> ... or get rid of the pgsql-docs mailing list, as has been suggested\n>> before.\n> \n> IIRC, the CF app also rejects threads on pgsql-bugs, which is even\n> more pointlessly annoying. Couldn't we just remove that restriction\n> altogether, and allow anything posted to some pgsql list?\n\n+1. Regardless of the fate of any individual list I think this is the most\nsensible thing for the CF app.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 19 May 2021 19:53:54 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "\nOn 5/19/21 1:39 PM, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n>>> Since we have a \"documentation\" section in the commitfest, it would\n>>> be useful to allow links to the -docs archives.\n>> ... or get rid of the pgsql-docs mailing list, as has been suggested\n>> before.\n> IIRC, the CF app also rejects threads on pgsql-bugs, which is even\n> more pointlessly annoying. Couldn't we just remove that restriction\n> altogether, and allow anything posted to some pgsql list?\n>\n> \t\t\t\n\n\n\n+several\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 14:54:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Wed, May 19, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n> >> Since we have a \"documentation\" section in the commitfest, it would\n> >> be useful to allow links to the -docs archives.\n>\n> > ... or get rid of the pgsql-docs mailing list, as has been suggested\n> > before.\n>\n> IIRC, the CF app also rejects threads on pgsql-bugs, which is even\n> more pointlessly annoying. Couldn't we just remove that restriction\n> altogether, and allow anything posted to some pgsql list?\n\nIt's not technically rejecting anything, it's just explicitly looking\nin -hackers and doesn't even know the others exist :)\n\nChanging that to look globally can certainly be done. It takes a bit\nof work I think, as there are no API endpoints today that will do\nthat, but those could be added.\n\nBut just to be clear -- \"some pgsql list\" would include things like\npgsql-general, the pgadmin lists, the non-english regional lists, etc.\nThat may be fine, I just want to be sure everybody realizes that's\nwhat it means. Basically everything on\nhttps://www.postgresql.org/list/\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 19 May 2021 21:07:21 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Wed, May 19, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> IIRC, the CF app also rejects threads on pgsql-bugs, which is even\n>> more pointlessly annoying. Couldn't we just remove that restriction\n>> altogether, and allow anything posted to some pgsql list?\n\n> It's not technically rejecting anything, it's just explicitly looking\n> in -hackers and doesn't even know the others exist :)\n\n> Changing that to look globally can certainly be done. It takes a bit\n> of work I think, as there are no API endpoints today that will do\n> that, but those could be added.\n\nAh. Personally, I'd settle for it checking -hackers, -docs and -bugs.\nPerhaps there's some case for -general as well.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 May 2021 15:35:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "\nOn 5/19/21 3:07 PM, Magnus Hagander wrote:\n> On Wed, May 19, 2021 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Stephen Frost <sfrost@snowman.net> writes:\n>>> * Laurenz Albe (laurenz.albe@cybertec.at) wrote:\n>>>> Since we have a \"documentation\" section in the commitfest, it would\n>>>> be useful to allow links to the -docs archives.\n>>> ... or get rid of the pgsql-docs mailing list, as has been suggested\n>>> before.\n>> IIRC, the CF app also rejects threads on pgsql-bugs, which is even\n>> more pointlessly annoying. Couldn't we just remove that restriction\n>> altogether, and allow anything posted to some pgsql list?\n> It's not technically rejecting anything, it's just explicitly looking\n> in -hackers and doesn't even know the others exist :)\n>\n> Changing that to look globally can certainly be done. It takes a bit\n> of work I think, as there are no API endpoints today that will do\n> that, but those could be added.\n>\n> But just to be clear -- \"some pgsql list\" would include things like\n> pgsql-general, the pgadmin lists, the non-english regional lists, etc.\n> That may be fine, I just want to be sure everybody realizes that's\n> what it means. Basically everything on\n> https://www.postgresql.org/list/\n>\n\nIt's just a reference after all. So someone supplies a reference to an\nemail on an out of the way list. What's the evil that will occur? Not\nmuch really AFAICT.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 19 May 2021 15:39:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On 2021-May-19, Andrew Dunstan wrote:\n\n> It's just a reference after all. So someone supplies a reference to an\n> email on an out of the way list. What's the evil that will occur? Not\n> much really� AFAICT.\n\n... as long as it doesn't leak data from private lists ...\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Wed, 19 May 2021 17:08:14 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Wed, May 19, 2021 at 03:35:00PM -0400, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> Changing that to look globally can certainly be done. It takes a bit\n>> of work I think, as there are no API endpoints today that will do\n>> that, but those could be added.\n> \n> Ah. Personally, I'd settle for it checking -hackers, -docs and -bugs.\n> Perhaps there's some case for -general as well.\n\nFWIW, I have seen cases for -general in the past.\n--\nMichael",
"msg_date": "Thu, 20 May 2021 09:39:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Thu, May 20, 2021 at 8:39 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, May 19, 2021 at 03:35:00PM -0400, Tom Lane wrote:\n> > Magnus Hagander <magnus@hagander.net> writes:\n> >> Changing that to look globally can certainly be done. It takes a bit\n> >> of work I think, as there are no API endpoints today that will do\n> >> that, but those could be added.\n> >\n> > Ah. Personally, I'd settle for it checking -hackers, -docs and -bugs.\n> > Perhaps there's some case for -general as well.\n>\n> FWIW, I have seen cases for -general in the past.\n\n+1, I had the problem with -general not being usable multiple times.\n\n\n",
"msg_date": "Fri, 21 May 2021 11:15:58 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Thu, May 20, 2021 at 09:39:13AM +0900, Michael Paquier wrote:\n> On Wed, May 19, 2021 at 03:35:00PM -0400, Tom Lane wrote:\n> > Magnus Hagander <magnus@hagander.net> writes:\n> >> Changing that to look globally can certainly be done. It takes a bit\n> >> of work I think, as there are no API endpoints today that will do\n> >> that, but those could be added.\n> > \n> > Ah. Personally, I'd settle for it checking -hackers, -docs and -bugs.\n> > Perhaps there's some case for -general as well.\n> \n> FWIW, I have seen cases for -general in the past.\n\nI was under the impression that posting patches to -hackers meant an\nimplicit acknowledge that this code can be used by the Postgres project\nunder the Postgres license and the PGDG copyright. Is this the same for\nall lists, and/or does this need to be amended then somehow (or am I\ngetting this totally wrong)?\n\nI assume the point of cross-linking patches to the commitfest is to get\nthem into Postgres after all.\n\nAlso, I'd have expected that any meaningful patch surfacing on -general\nwould be cross-posted to -hackers anyway (less/not so for -bugs and\n-docs).\n\n\nMichael\n\n-- \nMichael Banck\nProjektleiter / Senior Berater\nTel.: +49 2166 9901-171\nFax: +49 2166 9901-100\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nGesch�ftsf�hrung: Dr. Michael Meskes, Sascha Heuer\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n",
"msg_date": "Sat, 22 May 2021 10:10:17 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-May-19, Andrew Dunstan wrote:\n>\n> > It's just a reference after all. So someone supplies a reference to an\n> > email on an out of the way list. What's the evil that will occur? Not\n> > much really AFAICT.\n\nWell, if you include all lists, the ability for you to findi things by\nthe \"most recent posts\" or by searching for anything other than a\nunique message id will likely become less useful. As long as you only\never search by message-id it won't make a difference.\n\n\n> ... as long as it doesn't leak data from private lists ...\n\nPrivate lists are archived at a completely different server, so there\nshould be no risk for that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 24 May 2021 11:47:05 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n> \n> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> \n>> On 2021-May-19, Andrew Dunstan wrote:\n>> \n>>> It's just a reference after all. So someone supplies a reference to an\n>>> email on an out of the way list. What's the evil that will occur? Not\n>>> much really AFAICT.\n> \n> Well, if you include all lists, the ability for you to findi things by\n> the \"most recent posts\" or by searching for anything other than a\n> unique message id will likely become less useful.\n\nThats a good case for restricting this to the smaller set of lists which will\ncover most submissions anyways. With a smaller set we could make the UX still\nwork without presenting an incredibly long list.\n\nOr, the most recent emails dropdown only cover a set of common lists but\na search will scan all lists?\n\n> As long as you only ever search by message-id it won't make a difference.\n\nWithout any supporting evidence to back it up, my gut feeling tells me the most\nrecent mails list is a good/simple way to lower the bar for submissions.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 24 May 2021 14:42:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "\nOn 5/24/21 8:42 AM, Daniel Gustafsson wrote:\n>> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n>>\n>> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> On 2021-May-19, Andrew Dunstan wrote:\n>>>\n>>>> It's just a reference after all. So someone supplies a reference to an\n>>>> email on an out of the way list. What's the evil that will occur? Not\n>>>> much really AFAICT.\n>> Well, if you include all lists, the ability for you to findi things by\n>> the \"most recent posts\" or by searching for anything other than a\n>> unique message id will likely become less useful.\n> Thats a good case for restricting this to the smaller set of lists which will\n> cover most submissions anyways. With a smaller set we could make the UX still\n> work without presenting an incredibly long list.\n>\n> Or, the most recent emails dropdown only cover a set of common lists but\n> a search will scan all lists?\n>\n>> As long as you only ever search by message-id it won't make a difference.\n> Without any supporting evidence to back it up, my gut feeling tells me the most\n> recent mails list is a good/simple way to lower the bar for submissions.\n>\n\nMaybe. I only ever do this by using an exact message-id, since that's\nwhat the web form specifically asks for :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 May 2021 10:18:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Mon, May 24, 2021 at 4:18 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 5/24/21 8:42 AM, Daniel Gustafsson wrote:\n> >> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n> >>\n> >> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>> On 2021-May-19, Andrew Dunstan wrote:\n> >>>\n> >>>> It's just a reference after all. So someone supplies a reference to an\n> >>>> email on an out of the way list. What's the evil that will occur? Not\n> >>>> much really AFAICT.\n> >> Well, if you include all lists, the ability for you to findi things by\n> >> the \"most recent posts\" or by searching for anything other than a\n> >> unique message id will likely become less useful.\n> > Thats a good case for restricting this to the smaller set of lists which will\n> > cover most submissions anyways. With a smaller set we could make the UX still\n> > work without presenting an incredibly long list.\n> >\n> > Or, the most recent emails dropdown only cover a set of common lists but\n> > a search will scan all lists?\n> >\n> >> As long as you only ever search by message-id it won't make a difference.\n> > Without any supporting evidence to back it up, my gut feeling tells me the most\n> > recent mails list is a good/simple way to lower the bar for submissions.\n> >\n>\n> Maybe. I only ever do this by using an exact message-id, since that's\n> what the web form specifically asks for :-)\n\nThe webform lets you either do a free text search, or pick from a\nlist, or enter a message-id, no?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 24 May 2021 16:55:23 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "\nOn 5/24/21 10:55 AM, Magnus Hagander wrote:\n> On Mon, May 24, 2021 at 4:18 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> On 5/24/21 8:42 AM, Daniel Gustafsson wrote:\n>>>> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n>>>>\n>>>> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>>> On 2021-May-19, Andrew Dunstan wrote:\n>>>>>\n>>>>>> It's just a reference after all. So someone supplies a reference to an\n>>>>>> email on an out of the way list. What's the evil that will occur? Not\n>>>>>> much really AFAICT.\n>>>> Well, if you include all lists, the ability for you to findi things by\n>>>> the \"most recent posts\" or by searching for anything other than a\n>>>> unique message id will likely become less useful.\n>>> Thats a good case for restricting this to the smaller set of lists which will\n>>> cover most submissions anyways. With a smaller set we could make the UX still\n>>> work without presenting an incredibly long list.\n>>>\n>>> Or, the most recent emails dropdown only cover a set of common lists but\n>>> a search will scan all lists?\n>>>\n>>>> As long as you only ever search by message-id it won't make a difference.\n>>> Without any supporting evidence to back it up, my gut feeling tells me the most\n>>> recent mails list is a good/simple way to lower the bar for submissions.\n>>>\n>> Maybe. I only ever do this by using an exact message-id, since that's\n>> what the web form specifically asks for :-)\n> The webform lets you either do a free text search, or pick from a\n> list, or enter a message-id, no?\n\n\n\nYes it does, but the text next to the field says \"Specify thread msgid:\".\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 May 2021 11:03:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 5/24/21 10:55 AM, Magnus Hagander wrote:\n> > On Mon, May 24, 2021 at 4:18 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> On 5/24/21 8:42 AM, Daniel Gustafsson wrote:\n> >>>> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n> >>>>\n> >>>> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>>>> On 2021-May-19, Andrew Dunstan wrote:\n> >>>>>\n> >>>>>> It's just a reference after all. So someone supplies a reference to an\n> >>>>>> email on an out of the way list. What's the evil that will occur? Not\n> >>>>>> much really AFAICT.\n> >>>> Well, if you include all lists, the ability for you to findi things by\n> >>>> the \"most recent posts\" or by searching for anything other than a\n> >>>> unique message id will likely become less useful.\n> >>> Thats a good case for restricting this to the smaller set of lists which will\n> >>> cover most submissions anyways. With a smaller set we could make the UX still\n> >>> work without presenting an incredibly long list.\n> >>>\n> >>> Or, the most recent emails dropdown only cover a set of common lists but\n> >>> a search will scan all lists?\n> >>>\n> >>>> As long as you only ever search by message-id it won't make a difference.\n> >>> Without any supporting evidence to back it up, my gut feeling tells me the most\n> >>> recent mails list is a good/simple way to lower the bar for submissions.\n> >>>\n> >> Maybe. I only ever do this by using an exact message-id, since that's\n> >> what the web form specifically asks for :-)\n> > The webform lets you either do a free text search, or pick from a\n> > list, or enter a message-id, no?\n>\n>\n>\n> Yes it does, but the text next to the field says \"Specify thread msgid:\".\n\nYes, I've always been confused by that form. 
I may have tried to\nenter some free text once but AFAIR I always use the specific\nmessage-id.\n\n\n",
"msg_date": "Tue, 25 May 2021 01:22:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
},
{
"msg_contents": "On Mon, May 24, 2021 at 7:21 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 11:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> > On 5/24/21 10:55 AM, Magnus Hagander wrote:\n> > > On Mon, May 24, 2021 at 4:18 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> > >>\n> > >> On 5/24/21 8:42 AM, Daniel Gustafsson wrote:\n> > >>>> On 24 May 2021, at 11:47, Magnus Hagander <magnus@hagander.net> wrote:\n> > >>>>\n> > >>>> On Wed, May 19, 2021 at 11:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >>>>> On 2021-May-19, Andrew Dunstan wrote:\n> > >>>>>\n> > >>>>>> It's just a reference after all. So someone supplies a reference to an\n> > >>>>>> email on an out of the way list. What's the evil that will occur? Not\n> > >>>>>> much really AFAICT.\n> > >>>> Well, if you include all lists, the ability for you to findi things by\n> > >>>> the \"most recent posts\" or by searching for anything other than a\n> > >>>> unique message id will likely become less useful.\n> > >>> Thats a good case for restricting this to the smaller set of lists which will\n> > >>> cover most submissions anyways. With a smaller set we could make the UX still\n> > >>> work without presenting an incredibly long list.\n> > >>>\n> > >>> Or, the most recent emails dropdown only cover a set of common lists but\n> > >>> a search will scan all lists?\n> > >>>\n> > >>>> As long as you only ever search by message-id it won't make a difference.\n> > >>> Without any supporting evidence to back it up, my gut feeling tells me the most\n> > >>> recent mails list is a good/simple way to lower the bar for submissions.\n> > >>>\n> > >> Maybe. 
I only ever do this by using an exact message-id, since that's\n> > >> what the web form specifically asks for :-)\n> > > The webform lets you either do a free text search, or pick from a\n> > > list, or enter a message-id, no?\n> >\n> >\n> >\n> > Yes it does, but the text next to the field says \"Specify thread msgid:\".\n>\n> Yes, I've always been confused by that form. I may have tried to\n> enter some free text once but AFAIR I always use the specific\n> message-id.\n\nThis is clearly in need of a better UX. Any suggestions on how would\nbe welcome. Would it be enough to just say \"Or specify... \"?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Wed, 26 May 2021 22:25:58 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Commitfest app vs. pgsql-docs"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile diving into a transformation of the tests of pg_upgrade to TAP,\nI am getting annoyed by the fact that regress.so is needed if you\nupgrade an older instance that holds the regression objects from the\nmain regression test suite. The buildfarm code is using a trick to\ncopy regress.so from the source code tree of the old instance into its\ninstallation. See in PGBuild/Modules/TestUpgradeXversion.pm:\n # at some stage we stopped installing regress.so\n copy \"$self->{pgsql}/src/test/regress/regress.so\",\n \"$installdir/lib/postgresql/regress.so\"\n unless (-e \"$installdir/lib/postgresql/regress.so\");\n\nThis creates a hard dependency with the source code of the old\ninstance if attempting to create an old instance based on a dump,\nwhich is what the buildfarm does, and something that I'd like to get\nsupport for in the TAP tests of pg_upgrade in the tree.\n\nCould it be possible to install regress.so at least in the same\nlocation as pg_regress? This would still require the test to either\nmove regress.so into a location from where the backend could load the\nlibrary, but at least the library could be accessible without a\ndependency to the source tree of the old instance upgrading from. To\nmake that really usable, this would require a backpatch, though..\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 20 May 2021 11:15:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Installation of regress.so?"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Could it be possible to install regress.so at least in the same\n> location as pg_regress?\n\nI don't think this is a great idea. Aside from the fact that\nwe'd be littering the install tree with a .so of no use to end\nusers, I'm failing to see how it really gets you anywhere unless\nyou want to further require regress.so from back versions to be\nloadable into the current server.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 May 2021 22:24:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "\nOn 5/19/21 10:24 PM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Could it be possible to install regress.so at least in the same\n>> location as pg_regress?\n> I don't think this is a great idea. Aside from the fact that\n> we'd be littering the install tree with a .so of no use to end\n> users, I'm failing to see how it really gets you anywhere unless\n> you want to further require regress.so from back versions to be\n> loadable into the current server.\n>\n> \t\t\t\n\n\n\nWe certainly shouldn't want that. But we do need it for the target\nunless we wipe out everything in the source that refers to it. However,\na given installation can be a source in one test and a target in another\n- currently we test upgrade to every live version from every known\nversion less than or equal to that version (currently crake knows about\nversions down to 9.2, but it could easily be taught more).\n\n\nI do agree that we should not install regress.so by default.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 May 2021 09:16:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/19/21 10:24 PM, Tom Lane wrote:\n>> Michael Paquier <michael@paquier.xyz> writes:\n>>> Could it be possible to install regress.so at least in the same\n>>> location as pg_regress?\n\n>> I don't think this is a great idea. ...\n\n> I do agree that we should not install regress.so by default.\n\nI'd be okay with it being some sort of non-default option.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 May 2021 09:30:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "On Thu, May 20, 2021 at 09:30:37AM -0400, Tom Lane wrote:\n> I'd be okay with it being some sort of non-default option.\n\nOkay. It would be possible to control that with an environment\nvariable. However I am wondering if it would not be more user-friendly\nfor automated environments if we had a configure switch to control\nwhen things related to the tests are installed or not. Say a\n--with-test-install?\n--\nMichael",
"msg_date": "Fri, 21 May 2021 08:58:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "\nOn 5/20/21 7:58 PM, Michael Paquier wrote:\n> On Thu, May 20, 2021 at 09:30:37AM -0400, Tom Lane wrote:\n>> I'd be okay with it being some sort of non-default option.\n> Okay. It would be possible to control that with an environment\n> variable. However I am wondering if it would not be more user-friendly\n> for automated environments if we had a configure switch to control\n> when things related to the tests are installed or not. Say a\n> --with-test-install?\n\n\nThat seems a bit tortured. Why should you have to make the decision at\nconfigure time? ISTM all you need is an extra make target that will\ninstall it for you, or a make variable that controls it in the existing\ninstall target.\n\n\ne.g. make INSTALL_REGRESS_LIB=1 install\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 May 2021 08:38:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> That seems a bit tortured. Why should you have to make the decision at\n> configure time? ISTM all you need is an extra make target that will\n> install it for you, or a make variable that controls it in the existing\n> install target.\n\nWorks fine for Unix, but what about Windows?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 21 May 2021 09:25:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "\nOn 5/21/21 9:25 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> That seems a bit tortured. Why should you have to make the decision at\n>> configure time? ISTM all you need is an extra make target that will\n>> install it for you, or a make variable that controls it in the existing\n>> install target.\n> Works fine for Unix, but what about Windows?\n>\n> \t\t\t\n\n\n\nGood point. One item on my TODO is to make the cross version test module\nwork in Windows ... currently it's Unix only.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 May 2021 09:46:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "\nOn 5/21/21 9:46 AM, Andrew Dunstan wrote:\n> On 5/21/21 9:25 AM, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> That seems a bit tortured. Why should you have to make the decision at\n>>> configure time? ISTM all you need is an extra make target that will\n>>> install it for you, or a make variable that controls it in the existing\n>>> install target.\n>> Works fine for Unix, but what about Windows?\n>>\n>> \t\t\t\n>\n>\n> Good point. One item on my TODO is to make the cross version test module\n> work in Windows ... currently it's Unix only.\n>\n>\n\nOn further investigation there doesn't actually seem to be anything to\ndo here: the MSVC install script installs everything, including\nregress.dll, and so does the EDB installer. Maybe we need to look at\nmodifying that, but there don't seem to have been any complaints over an\nextended period.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 May 2021 17:24:10 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-20 09:16:50 -0400, Andrew Dunstan wrote:\n> We certainly shouldn't want that. But we do need it for the target\n> unless we wipe out everything in the source that refers to it.\n\nIs there a reason not to go for the wipe? I don't think the type of\nfunctions we have in regress.so are necessarily ones we'd even expect to\nwork in the next version?\n\nHere's references to explicit files I see after an installcheck:\n\nSELECT oid::regproc, prosrc, probin FROM pg_proc WHERE probin IS NOT NULL AND probin NOT LIKE '$libdir%';\n┌───────────────────────────┬───────────────────────────┬────────────────────────────────────────────────────────────────────────────┐\n│ oid │ prosrc │ probin │\n├───────────────────────────┼───────────────────────────┼────────────────────────────────────────────────────────────────────────────┤\n│ check_primary_key │ check_primary_key │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/refint.so │\n│ check_foreign_key │ check_foreign_key │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/refint.so │\n│ autoinc │ autoinc │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/autoinc.so │\n│ trigger_return_old │ trigger_return_old │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ ttdummy │ ttdummy │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ set_ttdummy │ set_ttdummy │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ make_tuple_indirect │ make_tuple_indirect │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ test_atomic_ops │ test_atomic_ops │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ test_fdw_handler │ test_fdw_handler │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ test_support_func │ test_support_func │ 
/home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ test_opclass_options_func │ test_opclass_options_func │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ test_enc_conversion │ test_enc_conversion │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ binary_coercible │ binary_coercible │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ widget_in │ widget_in │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ widget_out │ widget_out │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ int44in │ int44in │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ int44out │ int44out │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ pt_in_widget │ pt_in_widget │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ overpaid │ overpaid │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ interpt_pp │ interpt_pp │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n│ reverse_name │ reverse_name │ /home/andres/build/postgres/dev-optimize/vpath/src/test/regress/regress.so │\n└───────────────────────────┴───────────────────────────┴────────────────────────────────────────────────────────────────────────────┘\n(21 rows)\n\nTesting the pg_upgrade path for these doesn't seem to add meaningful\ncoverage, and several seem likely to cause problems across versions?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 21 May 2021 14:43:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "\nOn 5/21/21 5:43 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2021-05-20 09:16:50 -0400, Andrew Dunstan wrote:\n>> We certainly shouldn't want that. But we do need it for the target\n>> unless we wipe out everything in the source that refers to it.\n> Is there a reason not to go for the wipe? I don't think the type of\n> functions we have in regress.so are necessarily ones we'd even expect to\n> work in the next version?\n>\n> Here's references to explicit files I see after an installcheck:\n>\n[...]\n> Testing the pg_upgrade path for these doesn't seem to add meaningful\n> coverage, and several seem likely to cause problems across versions?\n>\n\n\nPossibly.\n\nMy approach generally has been to upgrade as much as possible, only\nremoving things known to have issues.\n\nHowever, this discussion does raise some deeper points.\n\nThe first is that while we test that pg_upgrade passes we don't actually\ntest that everything is still working. So for example if an SQL function\nin a loaded module changed signature from one version to another we\nmight never discover it. So one area that needs development is some\npost-upgrade tests.\n\nSecond, we are treating the regression databases as a suitable base for\ntesting pg_upgrade. But they aren't designed for that, they are designed\nfor completely different purposes, and we're really just using them out\nof laziness because they are something we happen to have on hand. Maybe\nwe should develop a suitable purpose-designed upgrade database for\ntesting. There are things that we have found in the past that caused\nissues we didn't detect because they weren't covered in the upgraded\ndatabases.\n\nBoth of these seem like possibly good Summer of Code or intern projects.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 22 May 2021 10:49:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Second, we are treating the regression databases as a suitable base for\n> testing pg_upgrade. But they aren't designed for that, they are designed\n> for completely different purposes, and we're really just using them out\n> of laziness because they are something we happen to have on hand.\n\nYup, no doubt about that.\n\n> Maybe we should develop a suitable purpose-designed upgrade database for\n> testing. There are things that we have found in the past that caused\n> issues we didn't detect because they weren't covered in the upgraded\n> databases.\n\nI'm not sure what a \"purpose-designed upgrade database\" would look like,\nthough. However, it does seem like the setup would only need to create a\nbunch of objects (maybe some of them involving a create/alter sequence),\nwhich means it could run a great deal faster than the current method of\nrunning the core regression tests. And we could stop worrying about\nwhether the core tests leave an adequate set of objects behind. So yeah,\nI'm on board with this if we can find someone who wants to do the\ngruntwork.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 22 May 2021 11:01:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Installation of regress.so?"
}
] |
[
{
"msg_contents": "Currently we are using a custom/generic strategy to handle the data skew\nissue. However, it doesn't work well all the time. For example: SELECT *\nFROM t WHERE a between $1 and $2. We assume the selectivity is 0.0025,\nBut users may provide a large range every time. Per our current strategy,\na generic plan will be chosen, Index scan on A will be chosen. oops..\n\nI think Oracle's Adaptive Cursor sharing should work. First It calculate\nthe selectivity with the real bind values and generate/reuse different plan\nbased on the similarity of selectivity. The challenges I can think of now\nare:\na). How to define the similarity. b). How to adjust the similarity during\nthe\nreal run. for example, we say [1% ~ 10%] is similar. but we find\nselectivity 20%\nused the same plan as 10%. what should be done here.\n\n\nI am searching for the best place to invest in the optimizer aspect. and\nthe above idea should be the one I can think of now. Any thought?\n\n\nThanks\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\nCurrently we are using a custom/generic strategy to handle the data skewissue. However, it doesn't work well all the time. For example: SELECT * FROM t WHERE a between $1 and $2. We assume the selectivity is 0.0025, But users may provide a large range every time. Per our current strategy, a generic plan will be chosen, Index scan on A will be chosen. oops..I think Oracle's Adaptive Cursor sharing should work. First It calculatethe selectivity with the real bind values and generate/reuse different planbased on the similarity of selectivity. The challenges I can think of now are:a). How to define the similarity. b). How to adjust the similarity during thereal run. for example, we say [1% ~ 10%] is similar. but we find selectivity 20%used the same plan as 10%. what should be done here.I am searching for the best place to invest in the optimizer aspect. andthe above idea should be the one I can think of now. 
Any thought?Thanks-- Best RegardsAndy Fan (https://www.aliyun.com/)",
"msg_date": "Thu, 20 May 2021 11:43:39 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Adaptive Plan Sharing for PreparedStmt"
},
{
"msg_contents": "Hi,\n\nOn 5/20/21 5:43 AM, Andy Fan wrote:\n> Currently we are using a custom/generic strategy to handle the data skew\n> issue. However, it doesn't work well all the time. For example: SELECT *\n> FROM t WHERE a between $1 and $2. We assume the selectivity is 0.0025,\n> But users may provide a large range every time. Per our current strategy,\n> a generic plan will be chosen, Index scan on A will be chosen. oops..\n> \n\nYeah, the current logic is rather simple, which is however somewhat on \npurpose, as it makes the planning very cheap. But it also means there's \nvery little info to check/compare and so we may make mistakes.\n\n> I think Oracle's Adaptive Cursor sharing should work. First It calculate\n> the selectivity with the real bind values and generate/reuse different plan\n> based on the similarity of selectivity. The challenges I can think of \n> now are:\n> a). How to define the similarity. b). How to adjust the similarity \n> during the\n> real run. for example, we say [1% ~ 10%] is similar. but we find \n> selectivity 20%\n> used the same plan as 10%. what should be done here.\n> \n\nIMO the big question is how expensive this would be. Calculating the \nselectivities for real values (i.e. for each query) is not expensive, \nbut it's not free either. So even if we compare the selectivities in \nsome way and skip the actual query planning, it's still going to impact \nthe prepared statements.\n\nAlso, we currently don't have any mechanism to extract the selectivities \nfrom the whole query - not sure how complex that would be, as it may \ninvolve e.g. join selectivities.\n\n\nAs for how to define the similarity, I doubt there's a simple and \nsensible/reliable way to do that :-(\n\nI remember reading a paper about query planning in which the parameter \nspace was divided into regions with the same plan. In this case the \nparameters are selectivities for all the query operations. 
So what we \nmight do is this:\n\n1) Run the first N queries and extract the selectivities / plans.\n\n2) Build \"clusters\" of selectivities with the same plan.\n\n3) Before running a query, see if the selectivities fall into one of \nthe existing clusters. If yes, use the plan. If not, do regular \nplanning, add it to the data set and repeat (2).\n\nI have no idea how expensive this would be, and I assume the \"clusters\" \nmay have fairly complicated shapes (not simple convex regions).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 May 2021 11:02:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Adaptive Plan Sharing for PreparedStmt"
},
{
"msg_contents": "Hi,\n\nOn Thu, May 20, 2021 at 5:02 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> On 5/20/21 5:43 AM, Andy Fan wrote:\n> > Currently we are using a custom/generic strategy to handle the data skew\n> > issue. However, it doesn't work well all the time. For example: SELECT *\n> > FROM t WHERE a between $1 and $2. We assume the selectivity is 0.0025,\n> > But users may provide a large range every time. Per our current strategy,\n> > a generic plan will be chosen, Index scan on A will be chosen. oops..\n> >\n>\n> Yeah, the current logic is rather simple, which is however somewhat on\n> purpose, as it makes the planning very cheap. But it also means there's\n> very little info to check/compare and so we may make mistakes.\n>\n> > I think Oracle's Adaptive Cursor sharing should work. First It calculate\n> > the selectivity with the real bind values and generate/reuse different\n> plan\n> > based on the similarity of selectivity. The challenges I can think of\n> > now are:\n> > a). How to define the similarity. b). How to adjust the similarity\n> > during the\n> > real run. for example, we say [1% ~ 10%] is similar. but we find\n> > selectivity 20%\n> > used the same plan as 10%. what should be done here.\n> >\n>\n> IMO the big question is how expensive this would be. Calculating the\n> selectivities for real values (i.e. for each query) is not expensive,\n> but it's not free either. So even if we compare the selectivities in\n> some way and skip the actual query planning, it's still going to impact\n> the prepared statements.\n>\n\nThat's true if we need to do this every time. We may just need to do\nthis on some cases where the estimation is likely to be wrong, like a > $1;\nor\na between $1 and $2; In such cases, we just use the hard coded value\ncurrently.\n\n\n> Also, we currently don't have any mechanism to extract the selectivities\n> from the whole query - not sure how complex that would be, as it may\n> involve e.g. 
join selectivities.\n>\n> The idea in my mind is just checking the quals on base relations. like\nt1.a > $1.\nSo for something like t1.a + t2.a > $1 will be ignored.\n\n\n>\n> As for how to define the similarity, I doubt there's a simple and\n> sensible/reliable way to do that :-(\n>\n> I remember reading a paper about query planning in which the parameter\n> space was divided into regions with the same plan. In this case the\n> parameters are selectivities for all the query operations. So what we\n> might do is this:\n>\n> 1) Run the first N queries and extract the selectivities / plans.\n>\n> 2) Build \"clusters\" of selecitivies with the same plan.\n>\n> 3) Before running a query, see if it the selectivities fall into one of\n> the existing clusters. If yes, use the plan. If not, do regular\n> planning, add it to the data set and repeat (2).\n>\n> I have no idea how expensive would this be, and I assume the \"clusters\"\n> may have fairly complicated shapes (not simple convex regions).\n>\n>\nThanks for sharing this, we do have lots of things to do here. Your idea\nshould be a good place to start with.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 25 May 2021 19:02:14 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Adaptive Plan Sharing for PreparedStmt"
}
] |
[
{
"msg_contents": "Hi,\nI recently ran into a problem in one of our production postgresql cluster.\nI had noticed lock contention on procarray lock on standby, which causes\nWAL replay lag growth.\nTo reproduce this, you can do the following:\n\n1) set max_connections to big number, like 100000\n2) begin a transaction on primary\n3) start pgbench workload on primary and on standby\n\nAfter a while it will be possible to see KnownAssignedXidsGetAndSetXmin in\nperf top consuming abount 75 % of CPU.\n\n%%\n PerfTop: 1060 irqs/sec kernel: 0.0% exact: 0.0% [4000Hz cycles:u],\n (target_pid: 273361)\n-------------------------------------------------------------------------------\n\n 73.92% postgres [.] KnownAssignedXidsGetAndSetXmin\n 1.40% postgres [.] base_yyparse\n 0.96% postgres [.] LWLockAttemptLock\n 0.84% postgres [.] hash_search_with_hash_value\n 0.84% postgres [.] AtEOXact_GUC\n 0.72% postgres [.] ResetAllOptions\n 0.70% postgres [.] AllocSetAlloc\n 0.60% postgres [.] _bt_compare\n 0.55% postgres [.] core_yylex\n 0.42% libc-2.27.so [.] __strlen_avx2\n 0.23% postgres [.] LWLockRelease\n 0.19% postgres [.] MemoryContextAllocZeroAligned\n 0.18% postgres [.] expression_tree_walker.part.3\n 0.18% libc-2.27.so [.] __memmove_avx_unaligned_erms\n 0.17% postgres [.] PostgresMain\n 0.17% postgres [.] palloc\n 0.17% libc-2.27.so [.] _int_malloc\n 0.17% postgres [.] set_config_option\n 0.17% postgres [.] ScanKeywordLookup\n 0.16% postgres [.] _bt_checkpage\n\n%%\n\n\nWe have tried to fix this by using BitMapSet instead of boolean array\nKnownAssignedXidsValid, but this does not help too much.\n\nInstead, using a doubly linked list helps a little more, we got +1000 tps\non pgbench workload with patched postgresql. The general idea of this patch\nis that, instead of memorizing which elements in KnownAssignedXids are\nvalid, lets maintain a doubly linked list of them. 
This solution will work\nin exactly the same way, except that taking a snapshot on the replica is\nnow O(running transaction) instead of O(head - tail) which is significantly\nfaster under some workloads. The patch helps to reduce CPU usage of\nKnownAssignedXidsGetAndSetXmin to ~48% instead of ~74%, but does not\neliminate it from perf top.\n\nThe problem is better reproduced on PG13 since PG14 has some snapshot\noptimization.\n\nThanks!\n\nBest regards, reshke",
"msg_date": "Thu, 20 May 2021 13:52:47 +0500",
"msg_from": "=?UTF-8?B?0JrQuNGA0LjQu9C7INCg0LXRiNC60LU=?= <reshkekirill@gmail.com>",
"msg_from_op": true,
"msg_subject": "Slow standby snapshot"
},
{
"msg_contents": "sorry, forgot to add a patch to the letter\n\n\n\nчт, 20 мая 2021 г. в 13:52, Кирилл Решке <reshkekirill@gmail.com>:\n\n> Hi,\n> I recently ran into a problem in one of our production postgresql cluster.\n> I had noticed lock contention on procarray lock on standby, which causes\n> WAL replay lag growth.\n> To reproduce this, you can do the following:\n>\n> 1) set max_connections to big number, like 100000\n> 2) begin a transaction on primary\n> 3) start pgbench workload on primary and on standby\n>\n> After a while it will be possible to see KnownAssignedXidsGetAndSetXmin in\n> perf top consuming abount 75 % of CPU.\n>\n> %%\n> PerfTop: 1060 irqs/sec kernel: 0.0% exact: 0.0% [4000Hz cycles:u],\n> (target_pid: 273361)\n>\n> -------------------------------------------------------------------------------\n>\n> 73.92% postgres [.] KnownAssignedXidsGetAndSetXmin\n> 1.40% postgres [.] base_yyparse\n> 0.96% postgres [.] LWLockAttemptLock\n> 0.84% postgres [.] hash_search_with_hash_value\n> 0.84% postgres [.] AtEOXact_GUC\n> 0.72% postgres [.] ResetAllOptions\n> 0.70% postgres [.] AllocSetAlloc\n> 0.60% postgres [.] _bt_compare\n> 0.55% postgres [.] core_yylex\n> 0.42% libc-2.27.so [.] __strlen_avx2\n> 0.23% postgres [.] LWLockRelease\n> 0.19% postgres [.] MemoryContextAllocZeroAligned\n> 0.18% postgres [.] expression_tree_walker.part.3\n> 0.18% libc-2.27.so [.] __memmove_avx_unaligned_erms\n> 0.17% postgres [.] PostgresMain\n> 0.17% postgres [.] palloc\n> 0.17% libc-2.27.so [.] _int_malloc\n> 0.17% postgres [.] set_config_option\n> 0.17% postgres [.] ScanKeywordLookup\n> 0.16% postgres [.] _bt_checkpage\n>\n> %%\n>\n>\n> We have tried to fix this by using BitMapSet instead of boolean array\n> KnownAssignedXidsValid, but this does not help too much.\n>\n> Instead, using a doubly linked list helps a little more, we got +1000 tps\n> on pgbench workload with patched postgresql. 
The general idea of this patch\n> is that, instead of memorizing which elements in KnownAssignedXids are\n> valid, lets maintain a doubly linked list of them. This solution will work\n> in exactly the same way, except that taking a snapshot on the replica is\n> now O(running transaction) instead of O(head - tail) which is significantly\n> faster under some workloads. The patch helps to reduce CPU usage of\n> KnownAssignedXidsGetAndSetXmin to ~48% instead of ~74%, but does not\n> eliminate it from perf top.\n>\n> The problem is better reproduced on PG13 since PG14 has some snapshot\n> optimization.\n>\n> Thanks!\n>\n> Best regards, reshke\n>",
"msg_date": "Thu, 20 May 2021 14:16:39 +0500",
"msg_from": "Kirill Reshke <reshkekirill@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": ")Hello.\n\n> I recently ran into a problem in one of our production postgresql cluster.\n> I had noticed lock contention on procarray lock on standby, which causes WAL\n> replay lag growth.\n\nYes, I saw the same issue on my production cluster.\n\n> 1) set max_connections to big number, like 100000\n\nI made the tests with a more realistic value - 5000. It is valid value\nfor Amazon RDS for example (default is\nLEAST({DBInstanceClassMemory/9531392}, 5000)).\n\nThe test looks like this:\n\npgbench -i -s 10 -U postgres -d postgres\npgbench -b select-only -p 6543 -j 1 -c 50 -n -P 1 -T 18000 -U postgres postgres\npgbench -b simple-update -j 1 -c 50 -n -P 1 -T 18000 -U postgres postgres\nlong transaction on primary - begin;select txid_current();\nperf top -p <pid of some standby>\n\nSo, on postgres 14 (master) non-patched version looks like this:\n\n 5.13% postgres [.] KnownAssignedXidsGetAndSetXmin\n 4.61% postgres [.] pg_checksum_block\n 2.54% postgres [.] AllocSetAlloc\n 2.44% postgres [.] base_yyparse\n\nIt is too much to spend 5-6% of CPU running throw an array :) I think\nit should be fixed for both the 13 and 14 versions.\n\nThe patched version like this (was unable to notice\nKnownAssignedXidsGetAndSetXmin):\n\n 3.08% postgres [.] pg_checksum_block\n 2.89% postgres [.] AllocSetAlloc\n 2.66% postgres [.] base_yyparse\n 2.00% postgres [.] MemoryContextAllocZeroAligned\n\nOn postgres 13 non patched version looks even worse (definitely need\nto be fixed in my opinion):\n\n 26.44% postgres [.] KnownAssignedXidsGetAndSetXmin\n 2.17% postgres [.] base_yyparse\n 2.01% postgres [.] AllocSetAlloc\n 1.55% postgres [.] MemoryContextAllocZeroAligned\n\nBut your patch does not apply to REL_13_STABLE. 
Could you please\nprovide two versions?\n\nAlso, there are warnings while building with patch:\n\n procarray.c:4595:9: warning: ISO C90 forbids mixed\ndeclarations and code [-Wdeclaration-after-statement]\n 4595 | int prv = -1;\n | ^~~\n procarray.c: In function ‘KnownAssignedXidsGetOldestXmin’:\n procarray.c:5056:5: warning: variable ‘tail’ set but not used\n[-Wunused-but-set-variable]\n 5056 | tail;\n | ^~~~\n procarray.c:5067:38: warning: ‘i’ is used uninitialized in\nthis function [-Wuninitialized]\n 5067 | i = KnownAssignedXidsValidDLL[i].nxt;\n\n\nSome of them are clear errors, so, please recheck the code.\n\nAlso, maybe it is better to reduce the invasivity by using a more\nsimple approach. For example, use the first bit to mark xid as valid\nand the last 7 bit (128 values) as an optimistic offset to the next\nvalid xid (jump by 127 steps in the worse scenario).\nWhat do you think?\n\nAlso, it is a good idea to register the patch in the commitfest app\n(https://commitfest.postgresql.org/).\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Sun, 13 Jun 2021 20:12:13 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Kirill.\n\n> Also, maybe it is better to reduce the invasivity by using a more\n> simple approach. For example, use the first bit to mark xid as valid\n> and the last 7 bit (128 values) as an optimistic offset to the next\n> valid xid (jump by 127 steps in the worse scenario).\n> What do you think?\n\nI have tried such an approach but it looks like it is not effective,\nprobably because of CPU caching issues.\n\nI have looked again at your patch, it seems like it has a lot of\nissues at the moment:\n\n* error in KnownAssignedXidsGetOldestXmin, `i` is uninitialized, logic is wrong\n* error in compressing function\n(```KnownAssignedXidsValidDLL[compress_index].prv = prv;```, `prv` is\nnever updated)\n* probably other errors?\n* compilation warnings\n* looks a little complex logic with `KAX_DLL_ENTRY_INVALID`\n* variable\\methods placing is bad (see `KAX_E_INVALID` and others)\n* need to update documentation about KnownAssignedXidsValid, see ```To\nkeep individual deletions cheap, we need to allow gaps in the array```\nin procarray.c\n* formatting is broken\n\nDo you have plans to update it? If not - I could try to rewrite it.\n\nAlso, what about adding the patch to the commitfest?\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Sun, 11 Jul 2021 16:51:11 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello.\n\n> I have tried such an approach but looks like it is not effective,\n> probably because of CPU caching issues.\n\nIt was a mistake by me. I have repeated the approach and got good\nresults with small and a non-invasive patch.\n\nThe main idea is simple optimistic optimization - store offset to next\nvalid entry. So, in most cases, we could just skip all the gaps.\nOf course, it adds some additional impact for workloads without long\n(few seconds) transactions but it is almost not detectable (because of\nCPU caches).\n\n* TEST\n\nThe next testing setup was used:\n\nmax_connections=5000 (taken from real RDS installation)\npgbench -i -s 10 -U postgres -d postgres\n\n# emulate typical OLTP load\npgbench -b simple-update -j 1 -c 50 -n -P 1 -T 18000 -U postgres postgres\n\n#emulate typical cache load on replica\npgbench -b select-only -p 6543 -j 1 -c 50 -n -P 1 -T 18000 -U postgres postgres\n\n# emulate some typical long transactions up to 60 seconds on primary\necho \"\\set t random(0, 60)\n BEGIN;\n select txid_current();\n select pg_sleep(:t);\n COMMIT;\" > slow_transactions.bench\npgbench -f /home/nkey/pg/slow_transactions.bench -p 5432 -j 1 -c 10 -n\n-P 1 -T 18000 -U postgres postgres\n\n* RESULTS\n\n*REL_13_STABLE* - 23.02% vs 0.76%\n\nnon-patched:\n 23.02% postgres [.] KnownAssignedXidsGetAndSetXmin\n 2.56% postgres [.] base_yyparse\n 2.15% postgres [.] AllocSetAlloc\n 1.68% postgres [.] MemoryContextAllocZeroAligned\n 1.51% postgres [.] hash_search_with_hash_value\n 1.26% postgres [.] SearchCatCacheInternal\n 1.03% postgres [.] hash_bytes\n 0.92% postgres [.] pg_checksum_block\n 0.89% postgres [.] expression_tree_walker\n 0.81% postgres [.] core_yylex\n 0.69% postgres [.] palloc\n 0.68% [kernel] [k] do_syscall_64\n 0.59% postgres [.] _bt_compare\n 0.54% postgres [.] new_list\n\npatched:\n 3.09% postgres [.] base_yyparse\n 3.00% postgres [.] AllocSetAlloc\n 2.01% postgres [.] MemoryContextAllocZeroAligned\n 1.89% postgres [.] 
SearchCatCacheInternal\n 1.80% postgres [.] hash_search_with_hash_value\n 1.27% postgres [.] expression_tree_walker\n 1.27% postgres [.] pg_checksum_block\n 1.18% postgres [.] hash_bytes\n 1.10% postgres [.] core_yylex\n 0.96% [kernel] [k] do_syscall_64\n 0.86% postgres [.] palloc\n 0.84% postgres [.] _bt_compare\n 0.79% postgres [.] new_list\n 0.76% postgres [.] KnownAssignedXidsGetAndSetXmin\n\n*MASTER* - 6.16% vs ~0%\n(includes snapshot scalability optimization by Andres Freund (1))\n\nnon-patched:\n 6.16% postgres [.] KnownAssignedXidsGetAndSetXmin\n 3.05% postgres [.] AllocSetAlloc\n 2.59% postgres [.] base_yyparse\n 1.95% postgres [.] hash_search_with_hash_value\n 1.87% postgres [.] MemoryContextAllocZeroAligned\n 1.85% postgres [.] SearchCatCacheInternal\n 1.27% postgres [.] hash_bytes\n 1.16% postgres [.] expression_tree_walker\n 1.06% postgres [.] core_yylex\n 0.94% [kernel] [k] do_syscall_64\n\npatched:\n 3.35% postgres [.] base_yyparse\n 2.84% postgres [.] AllocSetAlloc\n 1.89% postgres [.] hash_search_with_hash_value\n 1.82% postgres [.] MemoryContextAllocZeroAligned\n 1.79% postgres [.] SearchCatCacheInternal\n 1.49% postgres [.] pg_checksum_block\n 1.26% postgres [.] hash_bytes\n 1.26% postgres [.] expression_tree_walker\n 1.08% postgres [.] core_yylex\n 1.04% [kernel] [k] do_syscall_64\n 0.81% postgres [.] palloc\n\nLooks like it is possible to get a significant TPS increase on a very\ntypical standby workload.\nCurrently, I have no environment to measure TPS accurately. Could you\nplease try it on yours?\n\nI have attached two versions of the patch - for master and REL_13_STABLE.\nAlso, I am going to add a patch to commitfest (2).\n\nThanks,\nMIchail.\n\n(1): https://commitfest.postgresql.org/29/2500/\n(2): https://commitfest.postgresql.org/34/3271/",
"msg_date": "Tue, 3 Aug 2021 00:07:23 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-03 00:07:23 +0300, Michail Nikolaev wrote:\n> The main idea is simple optimistic optimization - store offset to next\n> valid entry. So, in most cases, we could just skip all the gaps.\n> Of course, it adds some additional impact for workloads without long\n> (few seconds) transactions but it is almost not detectable (because of\n> CPU caches).\n\nI'm doubtful that that's really the right direction. For workloads that\nare replay heavy we already often can see the cost of maintaining the\nknown xids datastructures show up significantly - not surprising, given\nthe datastructure. And for standby workloads with active primaries the\ncost of searching through the array in all backends is noticeable as\nwell. I think this needs a bigger data structure redesign.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 2 Aug 2021 15:01:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\n> On 3 Aug 2021, at 03:01, Andres Freund <andres@anarazel.de> wrote:\n> \n> Hi,\n> \n> On 2021-08-03 00:07:23 +0300, Michail Nikolaev wrote:\n>> The main idea is simple optimistic optimization - store offset to next\n>> valid entry. So, in most cases, we could just skip all the gaps.\n>> Of course, it adds some additional impact for workloads without long\n>> (few seconds) transactions but it is almost not detectable (because of\n>> CPU caches).\n> \n> I'm doubtful that that's really the right direction. For workloads that\n> are replay heavy we already often can see the cost of maintaining the\n> known xids datastructures show up significantly - not surprising, given\n> the datastructure. And for standby workloads with active primaries the\n> cost of searching through the array in all backends is noticeable as\n> well. I think this needs a bigger data structure redesign.\n\nKnownAssignedXids implements simple membership test idea. What kind of redesign would you suggest? Proposed optimisation makes it close to optimal, but needs eventual compression.\n\nMaybe use a hashtable of running transactions? It will be slightly faster when adding\\removing single transactions. But much worse when doing KnownAssignedXidsRemove().\n\nMaybe use a tree? (AVL\\RB or something like that) It will be slightly better, because it does not need eventual compression like the existing array.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 3 Aug 2021 10:33:50 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-03 10:33:50 +0500, Andrey Borodin wrote:\n> > 3 авг. 2021 г., в 03:01, Andres Freund <andres@anarazel.de> написал(а):\n> > On 2021-08-03 00:07:23 +0300, Michail Nikolaev wrote:\n> >> The main idea is simple optimistic optimization - store offset to next\n> >> valid entry. So, in most cases, we could just skip all the gaps.\n> >> Of course, it adds some additional impact for workloads without long\n> >> (few seconds) transactions but it is almost not detectable (because of\n> >> CPU caches).\n> > \n> > I'm doubtful that that's really the right direction. For workloads that\n> > are replay heavy we already often can see the cost of maintaining the\n> > known xids datastructures show up significantly - not surprising, given\n> > the datastructure. And for standby workloads with active primaries the\n> > cost of searching through the array in all backends is noticeable as\n> > well. I think this needs a bigger data structure redesign.\n> \n> KnownAssignedXids implements simple membership test idea. What kind of\n> redesign would you suggest? Proposed optimisation makes it close to optimal,\n> but needs eventual compression.\n\nBinary searches are very ineffecient on modern CPUs (unpredictable memory\naccesses, unpredictable branches). We constantly need to do binary searches\nduring replay to remove xids from the array. I don't see how you can address\nthat with just the current datastructure.\n\n\n> Maybe use a hashtable of running transactions? It will be slightly faster\n> when adding\\removing single transactions. But much worse when doing\n> KnownAssignedXidsRemove().\n\nWhy would it be worse for KnownAssignedXidsRemove()? Were you intending to\nwrite KnownAssignedXidsGet[AndSetXmin]()?\n\n\n> Maybe use a tree? (AVL\\RB or something like that) It will be slightly better, because it does not need eventual compression like exiting array.\n\nI'm not entirely sure what datastructure would work best. 
I can see something\nlike a radix tree work well, or a skiplist style approach. Or a hashtable:\n\nI'm not sure that we need to care as much about the cost of\nKnownAssignedXidsGetAndSetXmin() - for one, the caching we now have makes that\nless frequent. But more importantly, it'd not be hard to maintain an\noccasionally (or opportunistically) maintained denser version for\nGetSnapshotData() - there's only a single writer, making the concurrency\nissues a lot simpler.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Aug 2021 10:35:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andres.\n\nThanks for your feedback.\n\n>> Maybe use a hashtable of running transactions? It will be slightly faster\n>> when adding\\removing single transactions. But much worse when doing\n>> KnownAssignedXidsRemove().\n> Why would it be worse for KnownAssignedXidsRemove()? Were you intending to\n> write KnownAssignedXidsGet[AndSetXmin]()?\n\nIt actually was a hashtable in 2010. It was replaced by Simon Riggs\nin 2871b4618af1acc85665eec0912c48f8341504c4.\n\n> I'm not sure that we need to care as much about the cost of\n> KnownAssignedXidsGetAndSetXmin() - for one, the caching we now have makes that\n> less frequent.\n\nIt is still about 5-7% of CPU for a typical workload, a considerable\namount for me. And a lot of systems still work on 12 and 13.\nThe proposed approach eliminates KnownAssignedXidsGetAndSetXmin from\nthe top by a small changeset.\n\n> But more importantly, it'd not be hard to maintain an\n> occasionally (or opportunistically) maintained denser version for\n> GetSnapshotData() - there's only a single writer, making the concurrency\n> issues a lot simpler.\n\nCould you please explain it in more detail?\nSingle writer and GetSnapshotData() already exclusively hold\nProcArrayLock at the moment of KnownAssignedXidsRemove,\nKnownAssignedXidsGetAndSetXmin, and sometimes KnownAssignedXidsAdd.\n\nBTW, the tiny thing we could also fix now is (comment from code):\n\n> (We could dispense with the spinlock if we were to\n> * create suitable memory access barrier primitives and use those instead.)\n> * The spinlock must be taken to read or write the head/tail pointers unless\n> * the caller holds ProcArrayLock exclusively.\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Tue, 3 Aug 2021 22:23:58 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2021-08-03 22:23:58 +0300, Michail Nikolaev wrote:\n> > I'm not sure that we need to care as much about the cost of\n> > KnownAssignedXidsGetAndSetXmin() - for one, the caching we now have makes that\n> > less frequent.\n> \n> It is still about 5-7% of CPU for a typical workload, a considerable\n> amount for me.\n\nI'm not saying we shouldn't optimize things. Just that it's less pressing. And\nwhat kind of price we're willing to optimize may have changed.\n\n\n> And a lot of systems still work on 12 and 13.\n\nI don't see us backporting performance improvements around this to 12 and 13,\nso I don't think that matters much... We've done that a few times, but usually\nwhen there's some accidentally quadratic behaviour or such.\n\n\n> > But more importantly, it'd not be hard to maintain an\n> > occasionally (or opportunistically) maintained denser version for\n> > GetSnapshotData() - there's only a single writer, making the concurrency\n> > issues a lot simpler.\n> \n> Could you please explain it in more detail?\n> Single writer and GetSnapshotData() already exclusively hold\n> ProcArrayLock at the moment of KnownAssignedXidsRemove,\n> KnownAssignedXidsGetAndSetXmin, and sometimes KnownAssignedXidsAdd.\n\nGetSnapshotData() only locks ProcArrayLock in shared mode.\n\nThe problem is that we don't want to add a lot of work\nKnownAssignedXidsAdd/Remove, because very often nobody will build a snapshot\nfor that moment and building a sorted, gap-free, linear array of xids isn't\ncheap. In my experience it's more common to be bottlenecked by replay CPU\nperformance than on replica snapshot building.\n\nDuring recovery, where there's only one writer to the procarray / known xids,\nit might not be hard to opportunistically build a dense version of the known\nxids whenever a snapshot is built.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Aug 2021 18:33:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andres.\n\nThanks for the feedback again.\n\n> The problem is that we don't want to add a lot of work\n> KnownAssignedXidsAdd/Remove, because very often nobody will build a snapshot\n> for that moment and building a sorted, gap-free, linear array of xids isn't\n> cheap. In my experience it's more common to be bottlenecked by replay CPU\n> performance than on replica snapshot building.\n\nYes, but my patch adds almost the smallest possible amount of work to\nKnownAssignedXidsAdd/Remove - a single write to the array by index.\nIt differs from the first version in this thread, which is based on linked lists.\nThe \"next valid offset\" is just an \"optimistic optimization\" - it means\n\"you could safely skip KnownAssignedXidsNext[i] entries while looking for the next\nvalid one\".\nBut KnownAssignedXidsNext is not updated by Add/Remove.\n\n> During recovery, where there's only one writer to the procarray / known xids,\n> it might not be hard to opportunistically build a dense version of the known\n> xids whenever a snapshot is built.\n\nAFAIU the patch does exactly that.\nDuring the first snapshot building, offsets to the next valid entry are\nset. So, a dense version is created on demand.\nAnd this version is reused (even partly, if something was removed) on\nthe next snapshot building.\n\n> I'm not entirely sure what datastructure would work best. I can see something\n> like a radix tree work well, or a skiplist style approach. Or a hashtable:\n\nWe could try to use some other structure (for example - a linked hash\nmap) - but the additional cost (memory management, links, hash\ncalculation) would probably significantly reduce performance.\nAnd it is a much harder step to perform.\n\nSo, I think the \"next valid offset\" optimization is a good trade-off for now:\n* KnownAssignedXidsAdd/Remove are almost not affected in their complexity\n* KnownAssignedXidsGetAndSetXmin is eliminated from the CPU top in the\ntypical read scenario - so, more TPS, less ProcArrayLock contention\n* it complements GetSnapshotData() scalability - now on standby\n* changes are small\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 00:45:17 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 12:45:17AM +0300, Michail Nikolaev wrote:\n> Thanks for the feedback again.\n\nFrom what I can see, there has been some feedback from Andres here,\nand the thread is idle for six weeks now, so I have marked this patch\nas RwF in the CF app.\n--\nMichael",
"msg_date": "Fri, 1 Oct 2021 15:40:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andres.\n\nCould you please clarify how to better deal with the situation?\n\nAccording to your previous letter, I think there was some\nmisunderstanding regarding the latest patch version (but I am not\nsure). Because, as far as I understand, the provided optimization (a lazily\ncalculated optional offset to the next valid xid) fits your\nwishes. It was described in the previous letter in more detail.\n\nAnd now it is not clear to me how to move forward :)\n\nThere is an option to try to find some better data structure (like\nsome tricky linked hash map) but it is going to be a huge change\nwithout any confidence of getting a more effective version (because\nthe provided changes already make the structure pretty effective).\n\nAnother option I see - use the optimization from the latest patch version\nand get a significant TPS increase (5-7%) for the typical standby read\nscenario. The patch is small and does not affect other scenarios in a\nnegative way. Probably I could make an additional set of\nperformance tests and provide some simulation to prove that the\npg_atomic_uint32-related code is correct (if required).\n\nOr just leave the issue and hope someone else will try to fix it in\nthe future :)\n\nThanks a lot,\nMichail.\n\n\n",
"msg_date": "Sat, 2 Oct 2021 14:38:20 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Sorry for such a late reply. I've been thinking about possible approaches.\nKnownAssignedXids over a hashtable was in fact implemented long before and rejected [0].\n\n> On 3 Aug 2021, at 22:35, Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2021-08-03 10:33:50 +0500, Andrey Borodin wrote:\n>>> On 3 Aug 2021, at 03:01, Andres Freund <andres@anarazel.de> wrote:\n>>> On 2021-08-03 00:07:23 +0300, Michail Nikolaev wrote:\n>>>> The main idea is simple optimistic optimization - store offset to next\n>>>> valid entry. So, in most cases, we could just skip all the gaps.\n>>>> Of course, it adds some additional impact for workloads without long\n>>>> (few seconds) transactions but it is almost not detectable (because of\n>>>> CPU caches).\n>>> \n>>> I'm doubtful that that's really the right direction. For workloads that\n>>> are replay heavy we already often can see the cost of maintaining the\n>>> known xids datastructures show up significantly - not surprising, given\n>>> the datastructure. And for standby workloads with active primaries the\n>>> cost of searching through the array in all backends is noticeable as\n>>> well. I think this needs a bigger data structure redesign.\n>> \n>> KnownAssignedXids implements simple membership test idea. What kind of\n>> redesign would you suggest? Proposed optimisation makes it close to optimal,\n>> but needs eventual compression.\n> \n> Binary searches are very inefficient on modern CPUs (unpredictable memory\n> accesses, unpredictable branches). We constantly need to do binary searches\n> during replay to remove xids from the array. I don't see how you can address\n> that with just the current datastructure.\nThe current patch addresses another problem. In the presence of an old enough transaction, enumeration of KnownAssignedXids with a shared lock prevents adding new transactions with an exclusive lock. And recovery effectively pauses.\n\nBinary searches can consume 10-15 cache misses, which is an unreasonable amount of memory waits. But that's a somewhat different problem.\nAlso, binsearch is not that expensive when we compress KnownAssignedXids often.\n\n>> Maybe use a hashtable of running transactions? It will be slightly faster\n>> when adding\\removing single transactions. But much worse when doing\n>> KnownAssignedXidsRemove().\n> \n> Why would it be worse for KnownAssignedXidsRemove()? Were you intending to\n> write KnownAssignedXidsGet[AndSetXmin]()?\nI was thinking about an inefficient KnownAssignedXidsRemovePreceding() in a hashtable. But, probably, this is not so frequent an operation.\n\n>> Maybe use a tree? (AVL\\RB or something like that) It will be slightly better, because it does not need eventual compression like the existing array.\n> \n> I'm not entirely sure what datastructure would work best. I can see something\n> like a radix tree work well, or a skiplist style approach. Or a hashtable:\n> \n> I'm not sure that we need to care as much about the cost of\n> KnownAssignedXidsGetAndSetXmin() - for one, the caching we now have makes that\n> less frequent. But more importantly, it'd not be hard to maintain an\n> occasionally (or opportunistically) maintained denser version for\n> GetSnapshotData() - there's only a single writer, making the concurrency\n> issues a lot simpler.\n\nI've been prototyping a radix tree for a while.\nHere every 4 xids are summarized by the minimum Xid and the number of underlying Xids. Of course, 4 is an arbitrary number; the summarization area must be of cacheline size.\n┌───────┐ \n│ 1 / 9 │ \n├───────┴────┐ \n│ └────┐ \n│ └────┐ \n│ └────┐ \n▼ └───▶ \n┌───────────────────────────────┐ \n│ 1 / 3 | 5 / 0 | 9 / 3 | D / 3 │ \n├───────┬───────┬────────┬──────┴────┐ \n│ └─┐ └───┐ └────┐ └─────┐ \n│ └─┐ └──┐ └────┐ └─────┐ \n│ └─┐ └──┐ └────┐ └────┐ \n▼ └─┐ └──┐ └───┐ └────┐ \n┌───────────────▼────────────┴─▶────────────┴──▶───────────┴───▶\n│ 1 | 2 | | 4 | | | | | 9 | | B | C | D | E | F | │\n└──────────────────────────────────────────────────────────────┘\nThe bottom layer is the current array (TransactionId *KnownAssignedXids).\nWhen we remove an Xid we touch a theoretical minimum of cachelines. I'd say 5-7 instead of 10-15 for binsearch (in case of millions of entries in KnownAssignedXids).\nEnumerating running Xids is not that difficult either: we will need to scan O(xip) memory, not the whole KnownAssignedXids array.\n\nBut the overall complexity of this approach does not seem good to me.\n\nAll in all, I think using the proposed \"KnownAssignedXidsNext\" patch solves a real problem, and the problem of binary searches should be addressed by compressing KnownAssignedXids more often.\n\nCurrently we do not compress the array:\n if (nelements < 4 * PROCARRAY_MAXPROCS || // It's not that big yet OR\n nelements < 2 * pArray->numKnownAssignedXids) // It contains less than a half of bloat\n return;\nFrom my POV the arbitrary number 4 is just too high.\n\nSummary: I think (the \"KnownAssignedXidsNext\" patch + compressing KnownAssignedXids more often) is better than a major KnownAssignedXids redesign.\n\n\nBest regards, Andrey Borodin.\n\n[0] https://github.com/postgres/postgres/commit/2871b4618af1acc85665eec0912c48f8341504c4\n\n",
"msg_date": "Sun, 7 Nov 2021 16:37:39 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andrey.\n\nThanks for your feedback.\n\n> Current patch addresses another problem. In presence of old enough transaction enumeration of KnownAssignedXids with shared lock prevents adding new transactions with exclusive lock. And recovery effectively pauses.\n\nActually, I see two problems here (caused by the presence of old long\ntransactions). The first one is lock contention which causes recovery\npauses. The second one - just high CPU usage on standby by\nKnownAssignedXidsGetAndSetXmin.\n\n> All in all, I think using proposed \"KnownAssignedXidsNext\" patch solves real problem and the problem of binary searches should be addressed by compressing KnownAssignedXids more often.\n\nI updated the patch a little. KnownAssignedXidsGetAndSetXmin now\ncauses fewer cache misses because some values are stored in variables\n(registers). I think it is better to not lean on the compiler here\nbecause of `volatile` args.\nAlso, I have added some comments.\n\nBest regards,\nMichail.",
"msg_date": "Wed, 10 Nov 2021 00:15:42 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, everyone.\n\nI made a performance test to make sure the patch solves real issues\nwithout performance regression.\nTests were made on 3 VMs - one for the primary, another - the standby, the last\none - pgbench. It is Azure Standard_D16ads_v5 - 16 vCPU, 64 GiB RAM,\nFast SSD.\n5000 was used as the number of connections (it is the max number of\nconnections for AWS - LEAST({DBInstanceClassMemory/9531392}, 5000)).\n\nSetup:\n primary:\n max_connections=5000\n listen_addresses='*'\n fsync=off\n standby:\n primary_conninfo = 'user=postgres host=10.0.0.4 port=5432\nsslmode=prefer sslcompression=0 gssencmode=prefer krbsrvname=postgres\ntarget_session_attrs=any'\n hot_standby_feedback = on\n max_connections=5000\n listen_addresses='*'\n fsync=off\n\n\nThe test was run the following way:\n\n# restart both standby and primary\n# init fresh DB\n./pgbench -h 10.0.0.4 -i -s 10 -U postgres -d postgres\n\n# warm up primary for 10 seconds\n./pgbench -h 10.0.0.4 -b simple-update -j 8 -c 16 -P 1 -T 10 -U\npostgres postgres\n\n# warm up standby for 10 seconds\n./pgbench -h 10.0.0.5 -b select-only -j 8 -c 16 -n -P 1 -T 10 -U\npostgres postgres\n\n# then, run at the same(!) time (in parallel):\n\n# simple-update on primary\n./pgbench -h 10.0.0.4 -b simple-update -j 8 -c 16 -P 1 -T 180 -U\npostgres postgres\n\n# simple-select on standby\n./pgbench -h 10.0.0.5 -b select-only -j 8 -c 16 -n -P 1 -T 180 -U\npostgres postgres\n\n# then, 60 seconds after test start - start a long transaction\non the master\n./psql -h 10.0.0.4 -c \"BEGIN; select txid_current();SELECT\npg_sleep(5);COMMIT;\" -U postgres postgres\n\nI made 3 runs for both the patched and vanilla versions (current\nmaster branch). One run of the patched version was retried because of\na significant difference in TPS (it is vCPU on a VM with neighbors,\nso, probably some isolation issue).\nThe result on the primary is about 23k-25k TPS for both versions.\n\nSo, the graphs show a significant reduction of TPS on the secondary\nwhile the long transaction is active (about 10%).\nThe patched version solves the issue without any noticeable regression\nin the case of short-only transactions.\nAlso, transactions could be much shorter to reduce CPU - a few seconds\nis enough.\n\nAlso, this is a `perf diff` between `with` and `without` long\ntransaction recordings.\n\nVanilla (+ 10.26% of KnownAssignedXidsGetAndSetXmin):\n 0.22% +10.26% postgres [.]\nKnownAssignedXidsGetAndSetXmin\n 3.39% +0.68% [kernel.kallsyms] [k]\n_raw_spin_unlock_irqrestore\n 2.66% -0.61% libc-2.31.so [.] 0x0000000000045dc1\n 3.77% -0.50% postgres [.] base_yyparse\n 3.43% -0.45% [kernel.kallsyms] [k] finish_task_switch\n 0.41% +0.36% postgres [.] pg_checksum_page\n 0.61% +0.31% [kernel.kallsyms] [k] copy_user_generic_string\n\nPatched (+ 0.22%):\n 2.26% -0.40% [kernel.kallsyms] [k] _raw_spin_unlock_irqrestore\n 0.78% +0.39% [kernel.kallsyms] [k] copy_user_generic_string\n 0.22% +0.26% postgres [.] KnownAssignedXidsGetAndSetXmin\n 0.23% +0.20% postgres [.] ScanKeywordLookup\n 3.77% +0.19% postgres [.] base_yyparse\n 0.64% +0.19% postgres [.] pg_checksum_page\n 3.63% -0.18% [kernel.kallsyms] [k] finish_task_switch\n\nIf someone knows any additional performance tests that need to be done\n- please share.\n\nBest regards,\nMichail.",
"msg_date": "Sun, 14 Nov 2021 15:09:43 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 12:16 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> I updated the patch a little. KnownAssignedXidsGetAndSetXmin now\n> causes fewer cache misses because some values are stored in variables\n> (registers). I think it is better to not lean on the compiler here\n> because of `volatile` args.\n> Also, I have added some comments.\n\nIt looks like KnownAssignedXidsNext doesn't have to be\npg_atomic_uint32. I see it only gets read with pg_atomic_read_u32()\nand written with pg_atomic_write_u32(). Existing code believes that\nread/write of 32-bit values is atomic. So, you can use just uint32\ninstead of pg_atomic_uint32.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 16 Nov 2021 05:00:08 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Alexander.\n\nThanks for your review.\n\n> It looks like KnownAssignedXidsNext doesn't have to be\n> pg_atomic_uint32. I see it only gets read with pg_atomic_read_u32()\n> and written with pg_atomic_write_u32(). Existing code believes that\n> read/write of 32-bit values is atomic. So, you can use just uint32\n> instead of pg_atomic_uint32.\n\nFixed. Looks better now, yeah.\n\nAlso, I added an additional (not depending on the KnownAssignedXidsNext\nfeature) commit replacing the spinlock with a memory barrier. It goes\nback to Simon Riggs and Tom Lane in 2010:\n\n> (We could dispense with the spinlock if we were to\n> create suitable memory access barrier primitives and use those instead.)\n\nNow it is possible to avoid an additional spinlock on each\nKnownAssignedXidsGetAndSetXmin. I have not measured the performance\nimpact of this particular change yet (and it is not easy to reliably\nmeasure an impact of less than 0.5%, probably), but I think it is worth\nadding. We need to protect only the head pointer because the tail is\nupdated only with exclusive ProcArrayLock. BTW should I provide a\nseparate patch for this?\n\nSo, now we have a pretty successful benchmark for the typical use-case\nand some additional investigation has been done. So, I think I’ll\nre-add the patch to the commitfest app.\n\nThanks,\nMichail",
"msg_date": "Sun, 21 Nov 2021 21:58:29 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "\n\n> 21 нояб. 2021 г., в 23:58, Michail Nikolaev <michail.nikolaev@gmail.com> написал(а):\n> \n> <v3-0001-memory-barrier-instead-of-spinlock.patch>\n\nWrite barrier must be issued after write, not before.\nDon't we need to issue read barrier too?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Mon, 22 Nov 2021 13:23:49 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andrey.\n\n> Write barrier must be issued after write, not before.\n> Don't we need to issue read barrier too?\n\nThe write barrier is issued after the changes to KnownAssignedXidsNext\nand KnownAssignedXidsValid arrays and before the update of\nheadKnownAssignedXids.\nSo, it seems to be correct. We make sure once the CPU sees changes of\nheadKnownAssignedXids - underlying arrays contain all the required\ndata.\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Mon, 22 Nov 2021 12:05:36 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "> On 22 Nov 2021, at 14:05, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n> \n>> Write barrier must be issued after write, not before.\n>> Don't we need to issue read barrier too?\n> \n> The write barrier is issued after the changes to KnownAssignedXidsNext\n> and KnownAssignedXidsValid arrays and before the update of\n> headKnownAssignedXids.\n> So, it seems to be correct. We make sure once the CPU sees changes of\n> headKnownAssignedXids - underlying arrays contain all the required\n> data.\n\nPatch on barrier seems too complicated to me right now. I’d propose to focus on KnowAssignedXidsNext patch: it’s clean, simple and effective.\n\nI’ve rebased the patch so that it does not depend on previous step. Please check out it’s current state, if you are OK with it - let’s mark the patch Ready for Committer. Just maybe slightly better commit message would make the patch easier to understand.\n\n\nThanks! Best regards, Andrey Borodin.",
"msg_date": "Sun, 20 Feb 2022 22:56:08 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andrey.\n\nThanks for your efforts.\n\n> Patch on barrier seems too complicated to me right now. I’d propose to focus on KnowAssignedXidsNext patch: it’s clean, simple and effective.\nI'll extract it into a separate patch later.\n\n> I’ve rebased the patch so that it does not depend on previous step. Please check out it’s current state, if you are OK with it - let’s mark the patch Ready for Committer. Just maybe slightly better commit message would make the patch easier to understand.\nEverything seems to be correct.\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Mon, 21 Feb 2022 10:12:56 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello.\n\nJust an updated commit message.\n\nThanks,\nMichail.",
"msg_date": "Fri, 1 Apr 2022 02:18:41 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Simon.\n\nSorry for calling you directly, but you know the subject better than\nanyone else. It is related to your work from 2010 [1] - replacing\nKnownAssignedXidsHash with the KnownAssignedXids array.\n\nI have added an additional optimization to the data structure you\nimplemented. Initially, it was motivated by the huge CPU usage (70%+)\nof KnownAssignedXidsGetAndSetXmin in the case of long (a few seconds)\ntransactions on the primary and high (a few thousand) max_connections in\nPostgres 11.\n\nAfter the snapshot scalability optimization by Andres Freund [2], it is\nnot so crucial, but it still provides a significant performance impact\n(+10% TPS) for a typical workload, see the benchmark [3].\n\nThe last patch version is here - [4].\n\nDoes such an optimisation look worth committing?\n\nThanks in advance,\nMichail.\n\n[1]: https://github.com/postgres/postgres/commit/2871b4618af1acc85665eec0912c48f8341504c4#diff-8879f0173be303070ab7931db7c757c96796d84402640b9e386a4150ed97b179\n[2]: https://commitfest.postgresql.org/29/2500/\n[3]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6\n[4]: https://www.postgresql.org/message-id/flat/CANtu0ogzo4MsR7My9%2BNhu3to5%3Dy7G9zSzUbxfWYOn9W5FfHjTA%40mail.gmail.com#341a3c3b033f69b260120b3173a66382\n\n\n",
"msg_date": "Sat, 2 Jul 2022 20:32:23 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "\n\n> On 1 Apr 2022, at 04:18, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n> \n> Hello.\n> \n> Just an updated commit message.\n\nI've looked into v5.\n\nIMO the purpose of KnownAssignedXidsNext would be slightly more obvious if it was named KnownAssignedXidsNextOffset.\nAlso please consider some editorialisation:\ns/high value/big number/g\nKnownAssignedXidsNext[] is updating while taking the snapshot. -> KnownAssignedXidsNext[] is updated during taking the snapshot.\nO(N) next call -> amortized O(N) on next call\n\nIs it safe on all platforms to do \"KnownAssignedXidsNext[prev] = n;\" while only holding shared lock? I think it is, per Alexander's comment, but maybe let's document it?\n\nThank you!\n\nThanks! Best regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 3 Jul 2022 14:42:51 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Andrey.\n\n> I've looked into v5.\nThanks!\n\nPatch is updated accordingly your remarks.\n\nBest regards,\nMichail.",
"msg_date": "Wed, 20 Jul 2022 00:12:39 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "\n\n> On 20 Jul 2022, at 02:12, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n> \n>> I've looked into v5.\n> Thanks!\n> \n> Patch is updated accordingly your remarks.\n\nThe patch seems Ready for Committer from my POV.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 26 Jul 2022 23:09:16 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "At Tue, 26 Jul 2022 23:09:16 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in \n> \n> \n> > On 20 Jul 2022, at 02:12, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n> > \n> >> I've looked into v5.\n> > Thanks!\n> > \n> > Patch is updated accordingly your remarks.\n> \n> The patch seems Ready for Committer from my POV.\n\n+ * is s updated during taking the snapshot. The KnownAssignedXidsNextOffset\n+ * contains not an offset to the next valid xid but a number which tends to\n+ * the offset to next valid xid. KnownAssignedXidsNextOffset[] values read\n\nIs there still a reason why the array stores the distance to the next\nvalid element instead of the index of the next valid element? It\nseems to me that that was intended to reduce the size of the\noffset array, but it is int32[], which is far wider than\nTOTAL_MAX_CACHED_SUBXIDS.\n\nIt seems to me storing the index itself is simpler and maybe faster by saving\nthe cycles needed to perform the addition.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 27 Jul 2022 16:08:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Wed, 27 Jul 2022 at 08:08, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 26 Jul 2022 23:09:16 +0500, Andrey Borodin <x4mmm@yandex-team.ru> wrote in\n> >\n> >\n> > > On 20 Jul 2022, at 02:12, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n> > >\n> > >> I've looked into v5.\n> > > Thanks!\n> > >\n> > > Patch is updated accordingly your remarks.\n> >\n> > The patch seems Ready for Committer from my POV.\n>\n> + * is s updated during taking the snapshot. The KnownAssignedXidsNextOffset\n> + * contains not an offset to the next valid xid but a number which tends to\n> + * the offset to next valid xid. KnownAssignedXidsNextOffset[] values read\n>\n> Is there still a reason why the array stores the distnace to the next\n> valid element instead of the index number of the next valid index? It\n> seems to me that that was in an intention to reduce the size of the\n> offset array but it is int32[] which is far wider than\n> TOTAL_MAX_CACHED_SUBXIDS.\n>\n> It seems to me storing the index itself is simpler and maybe faster by\n> the cycles to perform addition.\n\nI'm inclined to think this is all too much. All of this optimization\nmakes sense when the array spreads out horribly, but we should be just\navoiding that in the first place by compressing more often.\n\nThe original coded frequency of compression was just a guess and was\nnever tuned later.\n\nA simple patch like this seems to hit the main concern, aiming to keep\nthe array from spreading out and impacting snapshot performance for\nSELECTs, yet not doing it so often that the startup process has a\nhigher burden of work.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Fri, 29 Jul 2022 16:08:38 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello.\n\nThanks to everyone for the review.\n\n> It seems to me storing the index itself is simpler and maybe faster by\n> the cycles to perform addition.\n\nYes, first version used 1-byte for offset with maximum value of 255.\nAgreed, looks like there is no sense to store offsets now.\n\n> A simple patch like this seems to hit the main concern, aiming to keep\n> the array from spreading out and impacting snapshot performance for\n> SELECTs, yet not doing it so often that the startup process has a\n> higher burden of work.\n\nNice, I'll do performance testing for both versions and master branch\nas baseline.\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Fri, 29 Jul 2022 20:24:30 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Fri, 29 Jul 2022 at 18:24, Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n\n> > A simple patch like this seems to hit the main concern, aiming to keep\n> > the array from spreading out and impacting snapshot performance for\n> > SELECTs, yet not doing it so often that the startup process has a\n> > higher burden of work.\n>\n> Nice, I'll do performance testing for both versions and master branch\n> as baseline.\n\nThe objective of all patches is to touch the smallest number of\ncachelines when accessing the KnownAssignedXacts array.\n\nThe trade-off is to keep the array small with the minimum number of\ncompressions, so that normal snapshots don't feel the contention and\nso that the Startup process doesn't slow down because of the extra\ncompression work. The values I've chosen in the recent patch are just\nguesses at what we'll need to reduce it to, so there may be some\nbenefit in varying those numbers to see the effects.\n\nIt did also occur to me that we might need a separate process to\nperform the compressions, which we might be able to give to WALWriter.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 2 Aug 2022 11:47:53 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "\n\n> On 29 Jul 2022, at 20:08, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n> A simple patch like this seems to hit the main concern, aiming to keep\n> the array from spreading out and impacting snapshot performance for\n> SELECTs, yet not doing it so often that the startup process has a\n> higher burden of work.\n\nThe idea to compress more often seem viable. But this might make some other workloads pathological.\nSome KnownAssignedXids routines now can become quadratic in case of lots of subtransactions.\n\nKnownAssignedXidsRemoveTree() only compress with probability 1/8, but it is still O(N*N).\n\nIMO original patch (with next pointer) is much safer in terms of unexpected performance degradation.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 2 Aug 2022 16:32:39 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, 2 Aug 2022 at 12:32, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n\n> KnownAssignedXidsRemoveTree() only compress with probability 1/8, but it is still O(N*N).\n\nCurrently it is O(NlogS), not quite as bad as O(N^2).\n\nSince each xid in the tree is always stored to the right, it should be\npossible to make that significantly better by starting each binary\nsearch from the next element, rather than the start of the array.\nSomething like the attached might help, but we can probably make that\ncache conscious to improve things even more.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Tue, 2 Aug 2022 16:18:44 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "At Tue, 2 Aug 2022 16:18:44 +0100, Simon Riggs <simon.riggs@enterprisedb.com> wrote in \n> On Tue, 2 Aug 2022 at 12:32, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n> > KnownAssignedXidsRemoveTree() only compress with probability 1/8, but it is still O(N*N).\n> \n> Currently it is O(NlogS), not quite as bad as O(N^2).\n> \n> Since each xid in the tree is always stored to the right, it should be\n> possible to make that significantly better by starting each binary\n> search from the next element, rather than the start of the array.\n> Something like the attached might help, but we can probably make that\n> cache conscious to improve things even more.\n\nThe original complaint is KnownAssignedXidsGetAndSetXmin can get very\nslow for large max_connections. I'm not sure what was happening on the\nKAXidsArray at the time precisely, but if the array starts with a\nlarge number of invalid entries (I guess it is likely), and the\nvariable \"start\" were available to the function (that is, it were\nplaced in procArray), that strategy seems to work for this case. With\nthis strategy we can avoid compression if only the relatively narrow\nrange in the array is occupied.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 03 Aug 2022 10:04:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "\n\n> On 2 Aug 2022, at 20:18, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> \n> On Tue, 2 Aug 2022 at 12:32, Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> \n>> KnownAssignedXidsRemoveTree() only compress with probability 1/8, but it is still O(N*N).\n> \n> Currently it is O(NlogS), not quite as bad as O(N^2).\nConsider workload when we have a big number of simultaneously active xids. Number of calls to KnownAssignedXidsRemoveTree() is proportional to number of these xids.\nAnd the complexity of KnownAssignedXidsRemoveTree() is proportional to the number of these xids, because each call to KnownAssignedXidsRemoveTree() might evenly run compression (which will not compress much).\n\nCompression is not an answer to performance problems - because it might be burden itself. Instead we can make compression unneeded to make a snapshot's xids-in-progress list.\n\n\n> Since each xid in the tree is always stored to the right, it should be\n> possible to make that significantly better by starting each binary\n> search from the next element, rather than the start of the array.\n> Something like the attached might help, but we can probably make that\n> cache conscious to improve things even more.\n\nAs Kyotaro-san correctly mentioned - performance degradation happened in KnownAssignedXidsGetAndSetXmin() which does not do binary search.\n\n\n\n> On 3 Aug 2022, at 06:04, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> The original complaint is KnownAssignedXidsGetAndSetXmin can get very\n> slow for large max_connections. I'm not sure what was happening on the\n> KAXidsArray at the time precisely, but if the array starts with a\n> large number of invalid entries (I guess it is likely), and the\n> variable \"start\" were available to the function (that is, it were\n> placed in procArray), that strategy seems to work for this case. 
With\n> this strategy we can avoid compression if only the relatively narrow\n> range in the array is occupied.\n\nThis applies to only one workload - all transactions are very short. If we have a tiny fraction of mid or long transactions - this heuristic does not help anymore.\n\n\nThank you!\n\nBest regards, Andrey Borodin. \n\n\n\n\n\n",
"msg_date": "Wed, 3 Aug 2022 10:53:15 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, everyone.\n\n> It seems to me storing the index itself is simpler and maybe faster by\n> the cycles to perform addition.\nDone in v7.\n\n> Since each xid in the tree is always stored to the right, it should be\n> possible to make that significantly better by starting each binary\n> search from the next element, rather than the start of the array.\nAlso, looks like it is better to go with `tail = Max(start,\npArray->tailKnownAssignedXids)` (in v1-0001-TODO.patch)\n\nPerformance tests show Simon's approach solves the issue without\nsignificant difference in performance comparing to my version.\nI need some additional time to provide statistically significant best\ncoefficients (how often to go compression, minimum number of invalid\nxids to start compression).\n\nThanks,\nMichail.",
"msg_date": "Sun, 7 Aug 2022 22:28:36 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello everyone.\n\nTo find the best frequency for calling KnownAssignedXidsCompress in\nSimon's patch, I made a set of benchmarks. It looks like each 8th xid\nis a little bit often.\n\nSetup and method is the same as previous (1). 16-core machines,\nmax_connections = 5000. Tests were running for about a day, 220 runs\nin total (each version for 20 times, evenly distributed throughout the\nday).\n\nStaring from 60th second, 30 seconds-long transaction was started on primary.\n\nGraphs in attachment. So, looks like 64 is the best value here. It\ngives even a little bit more TPS than smaller values.\n\nYes, such benchmark does not cover all possible cases, but it is\nbetter to measure at least something when selecting constants :)\n\nIf someone has an idea of different benchmark scenarios - please share them.\n\nSo, updated version (with 64 and some commit message) in attachment too.\n\n[1]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6",
"msg_date": "Fri, 16 Sep 2022 19:08:24 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Fri, 16 Sept 2022 at 17:08, Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> Hello everyone.\n>\n> To find the best frequency for calling KnownAssignedXidsCompress in\n> Simon's patch, I made a set of benchmarks. It looks like each 8th xid\n> is a little bit often.\n>\n> Setup and method is the same as previous (1). 16-core machines,\n> max_connections = 5000. Tests were running for about a day, 220 runs\n> in total (each version for 20 times, evenly distributed throughout the\n> day).\n>\n> Staring from 60th second, 30 seconds-long transaction was started on primary.\n>\n> Graphs in attachment. So, looks like 64 is the best value here. It\n> gives even a little bit more TPS than smaller values.\n>\n> Yes, such benchmark does not cover all possible cases, but it is\n> better to measure at least something when selecting constants :)\n\nThis is very good and clear, thank you.\n\n\n> If someone has an idea of different benchmark scenarios - please share them.\n\n> So, updated version (with 64 and some commit message) in attachment too.\n>\n> [1]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6\n\nI've cleaned up the comments and used a #define for the constant.\n\nIMHO this is ready for commit. I will add it to the next CF.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sat, 17 Sep 2022 07:27:30 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Sat, Sep 17, 2022 at 6:27 PM Simon Riggs\n<simon.riggs@enterprisedb.com> wrote:\n> I've cleaned up the comments and used a #define for the constant.\n>\n> IMHO this is ready for commit. I will add it to the next CF.\n\nFYI This had many successful cfbot runs but today it blew up on\nWindows when the assertion TransactionIdPrecedesOrEquals(safeXid,\nsnap->xmin) failed:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5311549010083840/crashlog/crashlog-postgres.exe_1c40_2022-11-08_00-20-28-110.txt\n\n\n",
"msg_date": "Wed, 9 Nov 2022 11:42:36 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-09 11:42:36 +1300, Thomas Munro wrote:\n> On Sat, Sep 17, 2022 at 6:27 PM Simon Riggs\n> <simon.riggs@enterprisedb.com> wrote:\n> > I've cleaned up the comments and used a #define for the constant.\n> >\n> > IMHO this is ready for commit. I will add it to the next CF.\n> \n> FYI This had many successful cfbot runs but today it blew up on\n> Windows when the assertion TransactionIdPrecedesOrEquals(safeXid,\n> snap->xmin) failed:\n> \n> https://api.cirrus-ci.com/v1/artifact/task/5311549010083840/crashlog/crashlog-postgres.exe_1c40_2022-11-08_00-20-28-110.txt\n\nI don't immediately see how that could be connected to this patch - afaict\nthat crash wasn't during recovery, and the modified functions should only be\nactive during hot standby.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 8 Nov 2022 16:54:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 1:54 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-09 11:42:36 +1300, Thomas Munro wrote:\n> > On Sat, Sep 17, 2022 at 6:27 PM Simon Riggs\n> > <simon.riggs@enterprisedb.com> wrote:\n> > > I've cleaned up the comments and used a #define for the constant.\n> > >\n> > > IMHO this is ready for commit. I will add it to the next CF.\n> >\n> > FYI This had many successful cfbot runs but today it blew up on\n> > Windows when the assertion TransactionIdPrecedesOrEquals(safeXid,\n> > snap->xmin) failed:\n> >\n> > https://api.cirrus-ci.com/v1/artifact/task/5311549010083840/crashlog/crashlog-postgres.exe_1c40_2022-11-08_00-20-28-110.txt\n>\n> I don't immediately see how that could be connected to this patch - afaict\n> that crash wasn't during recovery, and the modified functions should only be\n> active during hot standby.\n\nIndeed, sorry for the noise.\n\n\n",
"msg_date": "Wed, 9 Nov 2022 14:05:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> I've cleaned up the comments and used a #define for the constant.\n> IMHO this is ready for commit. I will add it to the next CF.\n\nI looked at this a little. It's a simple enough patch, and if it\nsolves the problem then I sure like it better than the previous\nideas in this thread.\n\nHowever ... I tried to reproduce the original complaint, and\nfailed entirely. I do see KnownAssignedXidsGetAndSetXmin\neating a bit of time in the standby backends, but it's under 1%\nand doesn't seem to be rising over time. Perhaps we've already\napplied some optimization that ameliorates the problem? But\nI tested v13 as well as HEAD, and got the same results.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 17:53:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "BTW, while nosing around this code I came across this statement\n(procarray.c, about line 4550 in HEAD):\n\n * ... To add XIDs to the array, we just insert\n * them into slots to the right of the head pointer and then advance the head\n * pointer. This wouldn't require any lock at all, except that on machines\n * with weak memory ordering we need to be careful that other processors\n * see the array element changes before they see the head pointer change.\n * We handle this by using a spinlock to protect reads and writes of the\n * head/tail pointers. (We could dispense with the spinlock if we were to\n * create suitable memory access barrier primitives and use those instead.)\n * The spinlock must be taken to read or write the head/tail pointers unless\n * the caller holds ProcArrayLock exclusively.\n\nNowadays we've *got* those primitives. Can we get rid of\nknown_assigned_xids_lck, and if so would it make a meaningful\ndifference in this scenario?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 18:06:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, 15 Nov 2022 at 23:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> BTW, while nosing around this code I came across this statement\n> (procarray.c, about line 4550 in HEAD):\n>\n> * ... To add XIDs to the array, we just insert\n> * them into slots to the right of the head pointer and then advance the head\n> * pointer. This wouldn't require any lock at all, except that on machines\n> * with weak memory ordering we need to be careful that other processors\n> * see the array element changes before they see the head pointer change.\n> * We handle this by using a spinlock to protect reads and writes of the\n> * head/tail pointers. (We could dispense with the spinlock if we were to\n> * create suitable memory access barrier primitives and use those instead.)\n> * The spinlock must be taken to read or write the head/tail pointers unless\n> * the caller holds ProcArrayLock exclusively.\n>\n> Nowadays we've *got* those primitives. Can we get rid of\n> known_assigned_xids_lck, and if so would it make a meaningful\n> difference in this scenario?\n\nI think you could do that *as well*, since it does act as an overhead\nbut that is not related to the main issues:\n\n* COMMITs: xids are removed from the array by performing a binary\nsearch - this gets more and more expensive as the array gets wider\n* SNAPSHOTs: scanning the array for snapshots becomes more expensive\nas the array gets wider\n\nHence more frequent compression is effective at reducing the overhead.\nBut too frequent compression slows down the startup process, which\ncan't then keep up.\n\nSo we're just looking for an optimal frequency of compression for any\ngiven workload.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 15 Nov 2022 23:14:42 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 23:14:42 +0000, Simon Riggs wrote:\n> * COMMITs: xids are removed from the array by performing a binary\n> search - this gets more and more expensive as the array gets wider\n> * SNAPSHOTs: scanning the array for snapshots becomes more expensive\n> as the array gets wider\n>\n> Hence more frequent compression is effective at reducing the overhead.\n> But too frequent compression slows down the startup process, which\n> can't then keep up.\n\n> So we're just looking for an optimal frequency of compression for any\n> given workload.\n\nWhat about making the behaviour adaptive based on the amount of wasted effort\nduring those two operations, rather than just a hardcoded \"emptiness\" factor?\nIt's common that basically no snapshots are taken, and constantly compressing\nin that case is likely going to be wasted effort.\n\n\nThe heuristic the patch adds to KnownAssignedXidsRemoveTree() seems somewhat\nmisplaced to me. When called from ProcArrayApplyXidAssignment() we probably\nshould always compress - it'll only be issued when a substantial amount of\nsubxids have been assigned, so there'll be a bunch of cleanup work. It makes\nmore sense from ExpireTreeKnownAssignedTransactionIds(), since it will very\ncommonly called for individual xids - but even then, we afaict should take\ninto account how many xids we've just expired.\n\nI don't think the xids % KAX_COMPRESS_FREQUENCY == 0 filter is a good idea -\nif you have a workload with plenty subxids you might end up never compressing\nbecause xids divisible by KAX_COMPRESS_FREQUENCY will end up as a subxid\nmost/all of the time.\n\n\nRe cost of processing at COMMITs: We do a fresh binary search for each subxid\nright now. There's a lot of locality in the xids that can be expired. 
Perhaps\nwe could have a cache for the position of the latest value in\nKnownAssignedXidsSearch() and search linearly if the distance from the last\nlooked up value isn't large?\n\n\nGreetings,\n\nAndres\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:06:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-11-15 23:14:42 +0000, Simon Riggs wrote:\n>> Hence more frequent compression is effective at reducing the overhead.\n>> But too frequent compression slows down the startup process, which\n>> can't then keep up.\n>> So we're just looking for an optimal frequency of compression for any\n>> given workload.\n\n> What about making the behaviour adaptive based on the amount of wasted effort\n> during those two operations, rather than just a hardcoded \"emptiness\" factor?\n\nNot quite sure how we could do that, given that those things aren't even\nhappening in the same process. But yeah, it does feel like the proposed\napproach is only going to be optimal over a small range of conditions.\n\n> I don't think the xids % KAX_COMPRESS_FREQUENCY == 0 filter is a good idea -\n> if you have a workload with plenty subxids you might end up never compressing\n> because xids divisible by KAX_COMPRESS_FREQUENCY will end up as a subxid\n> most/all of the time.\n\nYeah, I didn't think that was too safe either. It'd be more reliable\nto use a static counter to skip all but every N'th compress attempt\n(something we could do inside KnownAssignedXidsCompress itself, instead\nof adding warts at the call sites).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 19:15:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> but that is not related to the main issues:\n\n> * COMMITs: xids are removed from the array by performing a binary\n> search - this gets more and more expensive as the array gets wider\n> * SNAPSHOTs: scanning the array for snapshots becomes more expensive\n> as the array gets wider\n\nRight. The case complained of in this thread is SNAPSHOT cost,\nsince that's what KnownAssignedXidsGetAndSetXmin is used for.\n\n> Hence more frequent compression is effective at reducing the overhead.\n> But too frequent compression slows down the startup process, which\n> can't then keep up.\n> So we're just looking for an optimal frequency of compression for any\n> given workload.\n\nHmm. I wonder if my inability to detect a problem is because the startup\nprocess does keep ahead of the workload on my machine, while it fails\nto do so on the OP's machine. I've only got a 16-CPU machine at hand,\nwhich probably limits the ability of the primary to saturate the standby's\nstartup process. If that's accurate, reducing the frequency of\ncompression attempts could be counterproductive in my workload range.\nIt would help the startup process when that is the bottleneck --- but\nthat wasn't what the OP complained of, so I'm not sure it helps him\neither.\n\nIt seems like maybe what we should do is just drop the \"nelements < 4 *\nPROCARRAY_MAXPROCS\" part of the existing heuristic, which is clearly\ndangerous with large max_connection settings, and in any case doesn't\nhave a clear connection to either the cost of scanning or the cost\nof compressing. Or we could replace that with a hardwired constant,\nlike \"nelements < 400\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 19:31:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 23:14:42 +0000, Simon Riggs wrote:\n> On Tue, 15 Nov 2022 at 23:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > BTW, while nosing around this code I came across this statement\n> > (procarray.c, about line 4550 in HEAD):\n> >\n> > * ... To add XIDs to the array, we just insert\n> > * them into slots to the right of the head pointer and then advance the head\n> > * pointer. This wouldn't require any lock at all, except that on machines\n> > * with weak memory ordering we need to be careful that other processors\n> > * see the array element changes before they see the head pointer change.\n> > * We handle this by using a spinlock to protect reads and writes of the\n> > * head/tail pointers. (We could dispense with the spinlock if we were to\n> > * create suitable memory access barrier primitives and use those instead.)\n> > * The spinlock must be taken to read or write the head/tail pointers unless\n> > * the caller holds ProcArrayLock exclusively.\n> >\n> > Nowadays we've *got* those primitives. Can we get rid of\n> > known_assigned_xids_lck, and if so would it make a meaningful\n> > difference in this scenario?\n\nForgot to reply to this part:\n\nI'm confused by the explanation of the semantics of the spinlock:\n\n \"The spinlock must be taken to read or write the head/tail pointers\n unless the caller holds ProcArrayLock exclusively.\"\n\nmakes it sound like it'd be valid to modify the KnownAssignedXids without\nholding ProcArrayLock exclusively. Doesn't that contradict only needing the\nspinlock because of memory ordering?\n\nAnd when would it be valid to do any modifications of KnownAssignedXids\nwithout holding ProcArrayLock exclusively? Concurrent readers of\nKnownAssignedXids will operate on a snapshot of head/tail (the spinlock is\nonly ever held to query them). 
Afaict any such modification would be racy,\nbecause subsequent modifications of KnownAssignedXids could overwrite contents\nof KnownAssignedXids that another backend is in the process of reading, based\non the stale snapshot of head/tail.\n\n\nTo me it sounds like known_assigned_xids_lck is pointless and the talk about\nmemory barriers a red herring, since all modifications have to happen with\nProcArrayLock held exclusively and all reads with ProcArrayLock held in share\nmode. It can't be legal to modify head/tail or the contents of the array\noutside of that. And lwlocks provide sufficient barrier semantics.\n\n\n\n> I think you could do that *as well*, since it does act as an overhead\n> but that is not related to the main issues\n\nI think it might be a bigger effect than one might immediately think. Because\nthe spinlock will typically be on the same cacheline as head/tail, and because\nevery spinlock acquisition requires the cacheline to be modified (and thus\nowned exclusively) by the current core, uses of head/tail will very commonly\nbe cache misses even in workloads without a lot of KAX activity.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:31:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 19:15:15 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-15 23:14:42 +0000, Simon Riggs wrote:\n> >> Hence more frequent compression is effective at reducing the overhead.\n> >> But too frequent compression slows down the startup process, which\n> >> can't then keep up.\n> >> So we're just looking for an optimal frequency of compression for any\n> >> given workload.\n> \n> > What about making the behaviour adaptive based on the amount of wasted effort\n> > during those two operations, rather than just a hardcoded \"emptiness\" factor?\n> \n> Not quite sure how we could do that, given that those things aren't even\n> happening in the same process.\n\nI'm not certain what the best approach is, but I don't think the\nnot-the-same-process part is a blocker.\n\n\nApproach 1:\n\nWe could have an atomic variable in ProcArrayStruct that counts the amount of\nwasted effort and have processes update it whenever they've wasted a\nmeaningful amount of effort. Something like counting the skipped elements in\nKnownAssignedXidsGetAndSetXmin in a function local static variable and\nupdating the shared counter whenever that reaches\n\n\n\nApproach 2:\n\nPerform conditional cleanup in non-startup processes - I think that'd actually\nbe ok, as long as ProcArrayLock is held exlusively. We could count the amount\nof skipped elements in KnownAssignedXidsGetAndSetXmin() in a local variable,\nand whenever that gets too high, conditionally acquire ProcArrayLock lock\nexlusively at the end of GetSnapshotData() and compress KAX. Reset the local\nvariable independent of getting the lock or not, to avoid causing a lot of\ncontention.\n\nThe nice part is that this would work even without the startup making\nprocess. 
The not-so-nice part is that it'd require a bit of code study to figure out\nwhether it's safe to modify KAX from outside the startup process.\n\n\n\n> But yeah, it does feel like the proposed\n> approach is only going to be optimal over a small range of conditions.\n\nIn particular, it doesn't adapt at all to workloads that don't replay all that\nmuch, but do compute a lot of snapshots.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 16:44:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> To me it sounds like known_assigned_xids_lck is pointless and the talk about\n> memory barriers a red herring, since all modifications have to happen with\n> ProcArrayLock held exlusively and all reads with ProcArrayLock held in share\n> mode. It can't be legal to modify head/tail or the contents of the array\n> outside of that. And lwlocks provide sufficient barrier semantics.\n\nNo ... RecordKnownAssignedTransactionIds calls KnownAssignedXidsAdd\nwith exclusive_lock = false, and in the typical case that will not\nacquire ProcArrayLock at all. Since there's only one writer, that\nseems safe enough, and I believe the commentary's claim that we\nreally just need to be sure the head-pointer update is seen\nafter the array updates.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 15 Nov 2022 19:46:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 19:46:26 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > To me it sounds like known_assigned_xids_lck is pointless and the talk about\n> > memory barriers a red herring, since all modifications have to happen with\n> > ProcArrayLock held exlusively and all reads with ProcArrayLock held in share\n> > mode. It can't be legal to modify head/tail or the contents of the array\n> > outside of that. And lwlocks provide sufficient barrier semantics.\n> \n> No ... RecordKnownAssignedTransactionIds calls KnownAssignedXidsAdd\n> with exclusive_lock = false, and in the typical case that will not\n> acquire ProcArrayLock at all. Since there's only one writer, that\n> seems safe enough, and I believe the commentary's claim that we\n> really just need to be sure the head-pointer update is seen\n> after the array updates.\n\nOh, right. I focussed to much on the part of the comment quoted in your email.\n\nI still think it's misleading for the comment to say that the tail can be\nmodified with the spinlock - I don't see how that'd ever be correct. Nor could\nhead be safely decreased with just the spinlock.\n\n\nToo bad, that seems to make the idea of compressing in other backends a\nnon-starter unfortunately :(. Although - are we really gaining that much by\navoiding ProcArrayLock in the RecordKnownAssignedTransactionIds() case? It\nonly happens when latestObservedXid is increased, and we'll remove them at\ncommit with the exclusive lock held. Afaict that's the only KAX access that\ndoesn't also require ProcArrayLock?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 17:12:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Wed, 16 Nov 2022 at 00:15, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-11-15 23:14:42 +0000, Simon Riggs wrote:\n> >> Hence more frequent compression is effective at reducing the overhead.\n> >> But too frequent compression slows down the startup process, which\n> >> can't then keep up.\n> >> So we're just looking for an optimal frequency of compression for any\n> >> given workload.\n>\n> > What about making the behaviour adaptive based on the amount of wasted effort\n> > during those two operations, rather than just a hardcoded \"emptiness\" factor?\n>\n> Not quite sure how we could do that, given that those things aren't even\n> happening in the same process. But yeah, it does feel like the proposed\n> approach is only going to be optimal over a small range of conditions.\n\nI have not been able to think of a simple way to autotune it.\n\n> > I don't think the xids % KAX_COMPRESS_FREQUENCY == 0 filter is a good idea -\n> > if you have a workload with plenty subxids you might end up never compressing\n> > because xids divisible by KAX_COMPRESS_FREQUENCY will end up as a subxid\n> > most/all of the time.\n>\n> Yeah, I didn't think that was too safe either.\n\n> It'd be more reliable\n> to use a static counter to skip all but every N'th compress attempt\n> (something we could do inside KnownAssignedXidsCompress itself, instead\n> of adding warts at the call sites).\n\nI was thinking exactly that myself, for the reason of keeping it all\ninside KnownAssignedXidsCompress().\n\n--\nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 16 Nov 2022 02:40:47 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello everyone.\n\n> However ... I tried to reproduce the original complaint, and\n> failed entirely. I do see KnownAssignedXidsGetAndSetXmin\n> eating a bit of time in the standby backends, but it's under 1%\n> and doesn't seem to be rising over time. Perhaps we've already\n> applied some optimization that ameliorates the problem? But\n> I tested v13 as well as HEAD, and got the same results.\n\n> Hmm. I wonder if my inability to detect a problem is because the startup\n> process does keep ahead of the workload on my machine, while it fails\n> to do so on the OP's machine. I've only got a 16-CPU machine at hand,\n> which probably limits the ability of the primary to saturate the standby's\n> startup process.\n\nYes, optimization by Andres Freund made things much better, but the\nimpact is still noticeable.\n\nI was also using 16CPU machine - but two of them (primary and standby).\n\nHere are the scripts I was using (1) for benchmark - maybe it could help.\n\n\n> Nowadays we've *got* those primitives. Can we get rid of\n> known_assigned_xids_lck, and if so would it make a meaningful\n> difference in this scenario?\n\nI was trying it already - but was unable to find real benefits for it.\nWIP patch in attachment.\n\nHm, I see I have sent it to list, but it absent in archives... Just\nquote from it:\n\n> First potential positive effect I could see is\n> (TransactionIdIsInProgress -> KnownAssignedXidsSearch) locking but\n> seems like it is not on standby hotpath.\n\n> Second one - locking for KnownAssignedXidsGetAndSetXmin (build\n> snapshot). But I was unable to measure impact. It wasn’t visible\n> separately in (3) test.\n\n> Maybe someone knows scenario causing known_assigned_xids_lck or\n> TransactionIdIsInProgress become bottleneck on standby?\n\nThe latest question is still actual :)\n\n> I think it might be a bigger effect than one might immediately think. 
Because\n> the spinlock will typically be on the same cacheline as head/tail, and because\n> every spinlock acquisition requires the cacheline to be modified (and thus\n> owned exclusively) by the current core, uses of head/tail will very commonly\n> be cache misses even in workloads without a lot of KAX activity.\n\nI was trying to find some way to practically achieve any noticeable\nimpact here, but without success.\n\n>> But yeah, it does feel like the proposed\n>> approach is only going to be optimal over a small range of conditions.\n\n> In particular, it doesn't adapt at all to workloads that don't replay all that\n> much, but do compute a lot of snapshots.\n\nThe approach (2) was optimized to avoid any additional work for anyone\nexcept the startup\nprocess (the approach with offsets to skip gaps while building the snapshot).\n\n\n[1]: https://gist.github.com/michail-nikolaev/e1dfc70bdd7cfd1b902523dbb3db2f28\n[2]: https://www.postgresql.org/message-id/flat/CANtu0ogzo4MsR7My9%2BNhu3to5%3Dy7G9zSzUbxfWYOn9W5FfHjTA%40mail.gmail.com#341a3c3b033f69b260120b3173a66382\n\n--\nMichail Nikolaev",
"msg_date": "Wed, 16 Nov 2022 15:23:46 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello.\n\nOn Wed, Nov 16, 2022 at 3:44 AM Andres Freund <andres@anarazel.de> wrote:\n> Approach 1:\n\n> We could have an atomic variable in ProcArrayStruct that counts the amount of\n> wasted effort and have processes update it whenever they've wasted a\n> meaningful amount of effort. Something like counting the skipped elements in\n> KnownAssignedXidsGetAndSetXmin in a function local static variable and\n> updating the shared counter whenever that reaches\n\nI made the WIP patch for that approach and some initial tests. It\nseems like it works pretty well.\nAt least it is better than previous ways for standbys without high\nread only load.\n\nBoth patch and graph in attachments. Strange numbers is a limit of\nwasted work to perform compression.\nI have used the same (1) testing script and configuration as before\n(two 16-CPU machines, long transaction on primary at 60th second,\nsimple-update and select-only for pgbench).\n\nIf such approach looks committable - I could do more careful\nperformance testing to find the best value for\nWASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS.\n\n[1]: https://gist.github.com/michail-nikolaev/e1dfc70bdd7cfd1b902523dbb3db2f28\n--\nMichail Nikolaev",
"msg_date": "Sun, 20 Nov 2022 16:45:13 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Oops, wrong image, this is correct one. But is 1-run tests, so it\nshows only basic correlation,",
"msg_date": "Sun, 20 Nov 2022 16:50:01 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Oh, seems like it is not my day :) The image fixed again.",
"msg_date": "Sun, 20 Nov 2022 16:55:29 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Sun, 20 Nov 2022 at 13:45, Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n\n> If such approach looks committable - I could do more careful\n> performance testing to find the best value for\n> WASTED_SNAPSHOT_WORK_LIMIT_TO_COMPRESS.\n\nNice patch.\n\nWe seem to have replaced one magic constant with another, so not sure\nif this is autotuning, but I like it much better than what we had\nbefore (i.e. better than my prev patch).\n\nFew thoughts\n\n1. I was surprised that you removed the limits on size and just had\nthe wasted work limit. If there is no read traffic that will mean we\nhardly ever compress, which means the removal of xids at commit will\nget slower over time. I would prefer that we forced compression on a\nregular basis, such as every time we process an XLOG_RUNNING_XACTS\nmessage (every 15s), as well as when we hit certain size limits.\n\n2. If there is lots of read traffic but no changes flowing, it would\nalso make sense to force compression when the startup process goes\nidle rather than wait for the work to be wasted first.\n\nQuick patch to add those two compression events also.\n\nThat should favour the smaller wasted work limits.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/",
"msg_date": "Sun, 20 Nov 2022 16:41:52 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> We seem to have replaced one magic constant with another, so not sure\n> if this is autotuning, but I like it much better than what we had\n> before (i.e. better than my prev patch).\n\nYeah, the magic constant is still magic, even if it looks like it's\nnot terribly sensitive to the exact value.\n\n> 1. I was surprised that you removed the limits on size and just had\n> the wasted work limit. If there is no read traffic that will mean we\n> hardly ever compress, which means the removal of xids at commit will\n> get slower over time. I would prefer that we forced compression on a\n> regular basis, such as every time we process an XLOG_RUNNING_XACTS\n> message (every 15s), as well as when we hit certain size limits.\n\n> 2. If there is lots of read traffic but no changes flowing, it would\n> also make sense to force compression when the startup process goes\n> idle rather than wait for the work to be wasted first.\n\nIf we do those things, do we need a wasted-work counter at all?\n\nI still suspect that 90% of the problem is the max_connections\ndependency in the existing heuristic, because of the fact that\nyou have to push max_connections to the moon before it becomes\na measurable problem. 
If we do\n\n- if (nelements < 4 * PROCARRAY_MAXPROCS ||\n- nelements < 2 * pArray->numKnownAssignedXids)\n+ if (nelements < 2 * pArray->numKnownAssignedXids)\n\nand then add the forced compressions you suggest, where\ndoes that put us?\n\nAlso, if we add more forced compressions, it seems like we should have\na short-circuit for a forced compression where there's nothing to do.\nSo more or less like\n\n nelements = head - tail;\n if (!force)\n {\n if (nelements < 2 * pArray->numKnownAssignedXids)\n return;\n }\n else\n {\n if (nelements == pArray->numKnownAssignedXids)\n return;\n }\n\nI'm also wondering why there's not an\n\n Assert(compress_index == pArray->numKnownAssignedXids);\n\nafter the loop, to make sure our numKnownAssignedXids tracking\nis sane.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Nov 2022 11:28:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, 22 Nov 2022 at 16:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > We seem to have replaced one magic constant with another, so not sure\n> > if this is autotuning, but I like it much better than what we had\n> > before (i.e. better than my prev patch).\n>\n> Yeah, the magic constant is still magic, even if it looks like it's\n> not terribly sensitive to the exact value.\n>\n> > 1. I was surprised that you removed the limits on size and just had\n> > the wasted work limit. If there is no read traffic that will mean we\n> > hardly ever compress, which means the removal of xids at commit will\n> > get slower over time. I would prefer that we forced compression on a\n> > regular basis, such as every time we process an XLOG_RUNNING_XACTS\n> > message (every 15s), as well as when we hit certain size limits.\n>\n> > 2. If there is lots of read traffic but no changes flowing, it would\n> > also make sense to force compression when the startup process goes\n> > idle rather than wait for the work to be wasted first.\n>\n> If we do those things, do we need a wasted-work counter at all?\n>\n> I still suspect that 90% of the problem is the max_connections\n> dependency in the existing heuristic, because of the fact that\n> you have to push max_connections to the moon before it becomes\n> a measurable problem. 
If we do\n>\n> - if (nelements < 4 * PROCARRAY_MAXPROCS ||\n> - nelements < 2 * pArray->numKnownAssignedXids)\n> + if (nelements < 2 * pArray->numKnownAssignedXids)\n>\n> and then add the forced compressions you suggest, where\n> does that put us?\n\nThe forced compressions I propose happen\n* when idle - since we have time to do it when that happens, which\nhappens often since most workloads are bursty\n* every 15s - since we already have the lock\nwhich is overall much less often than every 64 commits, as benchmarked\nby Michail.\nI didn't mean to imply that it superseded the wasted-work approach; it\nwas meant to be in addition to it.\n\nThe wasted work counter works well to respond to heavy read-only\ntraffic and also avoids wasted compressions for write-heavy workloads.\nSo I still like it the best.\n\n> Also, if we add more forced compressions, it seems like we should have\n> a short-circuit for a forced compression where there's nothing to do.\n> So more or less like\n>\n> nelements = head - tail;\n> if (!force)\n> {\n> if (nelements < 2 * pArray->numKnownAssignedXids)\n> return;\n> }\n> else\n> {\n> if (nelements == pArray->numKnownAssignedXids)\n> return;\n> }\n\n+1\n\n> I'm also wondering why there's not an\n>\n> Assert(compress_index == pArray->numKnownAssignedXids);\n>\n> after the loop, to make sure our numKnownAssignedXids tracking\n> is sane.\n\n+1\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 22 Nov 2022 16:40:01 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> On Tue, 22 Nov 2022 at 16:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we do those things, do we need a wasted-work counter at all?\n\n> The wasted work counter works well to respond to heavy read-only\n> traffic and also avoids wasted compressions for write-heavy workloads.\n> So I still like it the best.\n\nThis argument presumes that maintenance of the counter is free,\nwhich it surely is not. I don't know how bad contention on that\natomically-updated variable could get, but it seems like it could\nbe an issue when lots of processes are acquiring snapshots.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 22 Nov 2022 11:53:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, 22 Nov 2022 at 16:53, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > On Tue, 22 Nov 2022 at 16:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> If we do those things, do we need a wasted-work counter at all?\n>\n> > The wasted work counter works well to respond to heavy read-only\n> > traffic and also avoids wasted compressions for write-heavy workloads.\n> > So I still like it the best.\n>\n> This argument presumes that maintenance of the counter is free,\n> which it surely is not. I don't know how bad contention on that\n> atomically-updated variable could get, but it seems like it could\n> be an issue when lots of processes are acquiring snapshots.\n\nI understand. I was assuming that you and Andres liked that approach.\n\nIn the absence of that approach, falling back to a counter that\ncompresses every N xids would be best, in addition to the two new\nforced compression events.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 22 Nov 2022 17:06:40 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, everyone.\n\nI have tried to put it all together.\n\n> In the absence of that approach, falling back to a counter that\n> compresses every N xids would be best, in addition to the two new\n> forced compression events.\n\nDone.\n\n> Also, if we add more forced compressions, it seems like we should have\n> a short-circuit for a forced compression where there's nothing to do.\n\nDone.\n\n> I'm also wondering why there's not an\n>\n> Assert(compress_index == pArray->numKnownAssignedXids);\n>\n> after the loop, to make sure our numKnownAssignedXids tracking\n> is sane.\n\nDone.\n\n> * when idle - since we have time to do it when that happens, which\n> happens often since most workloads are bursty\n\nI have added getting of ProcArrayLock for this case.\nAlso, I have capped the frequency at once per second to avoid\ncontention with a heavy read load in the case of small,\nepisodic but regular WAL traffic (WakeupRecovery() every 100ms, for\nexample). Or is it useless?\n\n> It'd be more reliable\n> to use a static counter to skip all but every N'th compress attempt\n> (something we could do inside KnownAssignedXidsCompress itself, instead\n> of adding warts at the call sites).\n\nDone. I have added a “reason” enum for calling KnownAssignedXidsCompress\nto keep it as clean as possible.\nBut I am not sure that I was successful here.\n\nAlso, while we are still in this context, I think it would be good to add:\n* Simon's optimization (1) for KnownAssignedXidsRemoveTree (it is\nsimple and effective for some situations without any seen drawbacks)\n* Maybe my patch (2) for replacing known_assigned_xids_lck with a memory barrier?\n\nNew version attached. 
Running benchmarks now.\nPreliminary result in attachments (16CPU, 5000 max_connections, now 64\nactive connections instead of 16).\nAlso, interesting moment - with 64 connections, vanilla version is\nunable to recover its performance after 30-sec transaction on primary.\n\n[1]: https://www.postgresql.org/message-id/flat/CANbhV-Ey8HRYPvnvQnsZAteCfzN3VHVhZVKfWMYcnjMnSzs4dQ%40mail.gmail.com#05993cf2bc87e35e0dff38fec26b9805\n[2]: https://www.postgresql.org/message-id/flat/CANtu0oiPoSdQsjRd6Red5WMHi1E83d2%2B-bM9J6dtWR3c5Tap9g%40mail.gmail.com#cc4827dee902978f93278732435e8521\n\n--\nMichail Nikolaev",
"msg_date": "Wed, 23 Nov 2022 00:53:33 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, everyone.\n\nBenchmarks for the last version of the patch are ready.\n\nExecuted on two 16-CPU machines (AMD EPYC 7763), 5000 max_connections.\nStarting from the 60th second, a 30-second-long transaction was started\non the primary. The setup is the same as in (1), scripts here - (2).\n\nFor most of the tests, simple-update and simple-select pgbench\nscenarios were used on primary and standby.\nFor one of the tests - just “SELECT txid_current();” and “SELECT 1;”\nrespectively.\n\nThe name of each line is the KAX_COMPRESS_FREQUENCY value.\n\nFor 16 connections, 64, 128 and 256 are the best ones.\n\nFor 32 - 32, 64, 12, 256.\n\nFor 64 - a slightly tricky story. 128 and 256 are best, but\n1024-4096 can be faster for some small period of time, with continuing\ndegradation afterwards. Still not sure why. Three different run sets are in the\nattachment, one with the long transaction starting at the 20th second.\n\nFor 128 - anything < 1024 is good.\n\nFor the “txid_current+select 1” case - the same.\n\nAlso, in all cases, the patched version is better than current master.\nAnd for the master version (and some big values of KAX_COMPRESS_FREQUENCY)\nit is not possible for performance to recover; probably caches and\nlocking go into some bad but stable pattern.\n\nSo, I think it is better to go with 128 here.\n\n[1]: https://www.postgresql.org/message-id/flat/CANtu0ohzBFTYwdLtcanWo4%2B794WWUi7LY2rnbHyorJdE8_ZnGg%40mail.gmail.com#379c1be7b8134ada5a574078d51b64c6\n[2]: https://gist.github.com/michail-nikolaev/e1dfc70bdd7cfd1b902523dbb3db2f28\n\n--\nMichail Nikolaev.",
"msg_date": "Mon, 28 Nov 2022 10:19:31 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> writes:\n>> * when idle - since we have time to do it when that happens, which\n>> happens often since most workloads are bursty\n\n> I have added getting of ProcArrayLock for this case.\n\nThat seems like a fairly bad idea: it will add extra contention\non ProcArrayLock, and I see no real strong argument that the path\ncan't get traversed often enough for that to matter. It would\nlikely be better for KnownAssignedXidsCompress to obtain the lock\nfor itself, only after it knows there is something worth doing.\n(This ties back to previous discussion: the comment claiming it's\nsafe to read head/tail because we have the lock is misguided.\nIt's safe because we're the only process that changes them.\nSo we can make the heuristic decision before acquiring lock.)\n\nWhile you could use the \"reason\" code to decide whether you need\nto take the lock, it might be better to add a separate boolean\nargument specifying whether the caller already has the lock.\n\nBeyond that, I don't see any issues except cosmetic ones.\n\n> Also, I think while we still in the context, it is good to add:\n> * Simon's optimization (1) for KnownAssignedXidsRemoveTree (it is\n> simple and effective for some situations without any seen drawbacks)\n> * Maybe my patch (2) for replacing known_assigned_xids_lck with memory barrier?\n\nDoesn't seem like we have any hard evidence in favor of either of\nthose being worth doing. We especially haven't any evidence that\nthey'd still be worth doing after this patch. I'd be willing to\nmake the memory barrier change anyway, because that seems like\na simple change that can't hurt. I'm less enthused about the\npatch at (1) because of the complexity it adds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 28 Nov 2022 16:29:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "I wrote:\n> That seems like a fairly bad idea: it will add extra contention\n> on ProcArrayLock, and I see no real strong argument that the path\n> can't get traversed often enough for that to matter. It would\n> likely be better for KnownAssignedXidsCompress to obtain the lock\n> for itself, only after it knows there is something worth doing.\n\nSince we're running out of time in the current commitfest,\nI went ahead and changed that, and made the cosmetic fixes\nI wanted, and pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Nov 2022 15:46:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Hello, Tom.\n\n> Since we're running out of time in the current commitfest,\n> I went ahead and changed that, and made the cosmetic fixes\n> I wanted, and pushed.\n\nGreat, thanks!\n\nThe small thing I was thinking to add in KnownAssignedXidsCompress is\nthe assertion like\n\nAssert(MyBackendType == B_STARTUP);\n\nJust to make it more clear that locking is not the only thing required\nfor the call.\n\n> I'd be willing to\n> make the memory barrier change anyway, because that seems like\n> a simple change that can't hurt.\n\nI'm going to create a separate commit fest entry for it, ok?\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 00:22:25 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "Michail Nikolaev <michail.nikolaev@gmail.com> writes:\n> The small thing I was thinking to add in KnownAssignedXidsCompress is\n> the assertion like\n\n> Assert(MyBackendType == B_STARTUP);\n\nMmm ... given where the call sites are, we have got lots more problems\nthan this if some non-startup process reaches them. I'm not sure this\nis worth the trouble, but if it is, I'd put it in the callers.\n\n>> I'd be willing to\n>> make the memory barrier change anyway, because that seems like\n>> a simple change that can't hurt.\n\n> I'm going to create a separate commit fest entry for it, ok?\n\nRight, since I closed this one already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 29 Nov 2022 16:39:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
},
{
"msg_contents": "On Tue, 29 Nov 2022 at 20:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > That seems like a fairly bad idea: it will add extra contention\n> > on ProcArrayLock, and I see no real strong argument that the path\n> > can't get traversed often enough for that to matter. It would\n> > likely be better for KnownAssignedXidsCompress to obtain the lock\n> > for itself, only after it knows there is something worth doing.\n>\n> Since we're running out of time in the current commitfest,\n> I went ahead and changed that, and made the cosmetic fixes\n> I wanted, and pushed.\n\nThat is a complete patch from multiple angles; very happy here.\n\nThanks for a great job.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 30 Nov 2022 06:53:19 +0000",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Slow standby snapshot"
}
]
[
{
"msg_contents": "Hi all,\n\nRobert has mentioned https://nvd.nist.gov/vuln/detail/CVE-2021-3449 on\nthe -security list where a TLS server could crash with some crafted\nrenegociation message. We already disable SSL renegociation\ninitialization for some time now, per 48d23c72, but we don't prevent\nthe server from complying should the client wish to use renegociation.\nIn terms of robustness and because SSL renegociation had its set of\nflaws and issues for many years, it looks like it would be a good idea\nto disable renegociation on the backend (not the client as that may be\nused with older access points where renegociation is still used, per\nan argument from Andres).\n\nIn flavor, this is controlled in a way similar to\nSSL_OP_NO_COMPRESSION that we already enforce in the backend to\ndisable SSL compression. However, there are a couple of compatibility\ntweaks regarding this one:\n- SSL_OP_NO_RENEGOTIATION controls that. It is present in OpenSSL >=\n1.1.1 and has been backported in 1.1.0h (it is not present in older\nversions of 1.1.0).\n- In 1.0.2 and older versions, OpenSSL has an undocumented flag called\nSSL3_FLAGS_NO_RENEGOTIATE_CIPHERS, able to do the same as far as I\nunderstand.\n\nAttached is a patch to use SSL_OP_NO_RENEGOTIATION if it exists, and\nforce that in the backend. We could go further down, but using\nundocumented things looks rather unsafe here, to say the least. Could\nthere be a point in backpatching that, in light of what we have done in\n48d23c72 in the past, though?\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 20 May 2021 20:00:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Force disable of SSL renegociation in the server"
},
{
"msg_contents": "> On 20 May 2021, at 13:00, Michael Paquier <michael@paquier.xyz> wrote:\n\n> - SSL_OP_NO_RENEGOTIATION controls that. It is present in OpenSSL >=\n> 1.1.1 and has been backported in 1.1.0h (it is not present in older\n> versions of 1.1.0).\n\nFor OpenSSL 1.1.0 versions < 1.1.0h it will be silently accepted without\nactually doing anything, so we might want to combine it with the below.\n\n> - In 1.0.2 and older versions, OpenSSL has an undocumented flag called\n> SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS, able to do the same as far as I\n> understand.\n\nWell, it's documented in the changelog that it's undocumented (sigh..) along\nwith a note stating that it works like SSL_OP_NO_RENEGOTIATION. Skimming the\ncode it seems to ring true. For older OpenSSL versions there's also\nSSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION which controls renegotiation for an\nolder OpenSSL reneg bug. That applies to 0.9.8 versions which we don't\nsupport, but a malicious user can craft whatever they feel like so maybe we\nshould ensure it's off as well?\n\n> Could there be a point in backpatching that, in light of what we have done in\n> 48d23c72 in the past, though?\n\nI think there is merit to that idea, especially given the precedent.\n\n> Thoughts?\n\n+\t/* disallow SSL renegociation, option available since 1.1.0h */\ns/renegociation/renegotiation/\n\n+1 on disabling renegotiation, but I think it's worth considering using\nSSL3_FLAGS_NO_RENEGOTIATE_CIPHERS as well. One could also argue that extending\nthe comment with a note that it only applies to TLSv1.2 and lower could be\nhelpful to readers who aren't familiar with TLS protocol versions. TLSv1.3 did\naway with renegotiation.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 20 May 2021 14:15:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Force disable of SSL renegociation in the server"
},
{
"msg_contents": "On Thu, May 20, 2021 at 02:15:52PM +0200, Daniel Gustafsson wrote:\n> On 20 May 2021, at 13:00, Michael Paquier <michael@paquier.xyz> wrote:\n>> - SSL_OP_NO_RENEGOTIATION controls that. It is present in OpenSSL >=\n>> 1.1.1 and has been backported in 1.1.0h (it is not present in older\n>> versions of 1.1.0).\n> \n> For OpenSSL 1.1.0 versions < 1.1.0h it will be silently accepted without\n> actually doing anything, so we might want to combine it with the below.\n\nYeah, still that stresses me quite a bit. OpenSSL does not have a\ngood history with compatibility, and we are talking about something\nthat does not officially exist on the map.\n\n>> - In 1.0.2 and older versions, OpenSSL has an undocumented flag called\n>> SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS, able to do the same as far as I\n>> understand.\n> \n> Well, it's documented in the changelog that it's undocumented (sigh..) along\n> with a note stating that it works like SSL_OP_NO_RENEGOTIATION.\n\nI'd say that this is still part of the definition of undocumented.\nThere is no mention of it in their online documentation :)\n\n> Skimming the\n> code it seems to ring true. For older OpenSSL versions there's also\n> SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION which controls renegotiation for an\n> older OpenSSL reneg bug. That applies to 0.9.8 versions which we don't\n> support, but a malicious user can craft whatever they feel like so maybe we\n> should ensure it's off as well?\n\nIf I am getting it right by reading upstream, SSL_OP_NO_RENEGOTIATION\ntakes priority over that. 
Hence, if we force SSL_OP_NO_RENEGOTIATION,\nthen SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION has no effect anyway.\n\n> +\t/* disallow SSL renegociation, option available since 1.1.0h */\n> s/renegociation/renegotiation/\n\nArgh, French-ism here.\n\n> +1 on disabling renegotiation, but I think it's worth considering using\n> SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS as well.\n\nThis one can be set within ssl->s3->flags in the port information.\nStill, that's not completely feasible either, as some versions of\nOpenSSL hide a bunch of internal structures, and some\ndistributions patch the upstream code. At the end of the day, I think\nthat I would stick with simplicity and use SSL_OP_NO_RENEGOTIATION.\nIt is not our job to work around any decision OpenSSL has made poorly\nover the years, either. At least this part is officially documented :)\n\n> One could also argue that extending\n> the comment with a note that it only applies to TLSv1.2 and lower could be\n> helpful to readers who aren't familiar with TLS protocol versions. TLSv1.3 did\n> away with renegotiation.\n\nGood idea to mention that.\n--\nMichael",
"msg_date": "Fri, 21 May 2021 10:41:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force disable of SSL renegociation in the server"
},
{
"msg_contents": "On Fri, May 21, 2021 at 10:41:34AM +0900, Michael Paquier wrote:\n> This one can be set within ssl->s3->flags in the port information.\n> Still that's not completely feasable either as some versions of\n> OpenSSL hide the internals of a bunch of internal structures, and some\n> distributions patch the upstream code? At the end of the day, I think\n> that I would stick with simplicity and use SSL_OP_NO_RENEGOTIATION.\n> It is not our job to go around any decision OpenSSL has poorly done\n> either over the years. At least this part is officially documented :)\n\nI got to look at that in details, and the attached would be able to do\nthe job with OpenSSL 1.0.2 and older versions. The main idea is to\nset up SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS once the SSL object is\ncreated when opening the TLS connection to business. I have tested\nthat down to 0.9.8 on all supported branches with the protocols we\nsupport (heads up to ssl_min_protocol_version here), and that looks to\nwork as I'd expect.\n\nIt is not a good idea to rely on OPENSSL_VERSION_NUMBER for such\nversion checks as I am doing here, as we've been bitten with\ncompatibility with LibreSSL in the past. So this had better use a\ncheck based on HAVE_OPENSSL_INIT_SSL to make sure that 1.1.0 is the\nversion of OpenSSL used. Anyway, I really don't like using this\nundocumented option, and there is nothing that can be done with\nOpenSSL < 1.1.0h in the 1.1.0 series as the s3 part of the *SSL object\ngets hidden to the application, so it is not possible to set\nSSL3_FLAGS_NO_RENEGOTIATE_CIPHERS there. And so, I would like to\nstick with a backpatch here, only for the part of the patch involving\nbe_tls_init(). Full patch is attached for reference.\n\nWhile on it, I have added a comment about TLSv1.2 being the last\nprotocol supporting renegotiation.\n\nAny objections?\n--\nMichael",
"msg_date": "Mon, 24 May 2021 10:29:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force disable of SSL renegociation in the server"
},
{
"msg_contents": "> On 24 May 2021, at 03:29, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I got to look at that in details, and the attached would be able to do\n> the job with OpenSSL 1.0.2 and older versions. The main idea is to\n> set up SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS once the SSL object is\n> created when opening the TLS connection to business. I have tested\n> that down to 0.9.8 on all supported branches with the protocols we\n> support (heads up to ssl_min_protocol_version here), and that looks to\n> work as I'd expect.\n> \n> It is not a good idea to rely on OPENSSL_VERSION_NUMBER for such\n> version checks as I am doing here, as we've been bitten with\n> compatibility with LibreSSL in the past. So this had better use a\n> check based on HAVE_OPENSSL_INIT_SSL to make sure that 1.1.0 is the\n> version of OpenSSL used.\n\nI agree that a capability-based check is better than using the version numbers,\nas there is a collision risk between distributions (and even within OpenSSL, as\nNetBSD for example invented their own version etc).\n\n> Anyway, I really don't like using this undocumented option, and there is\n> nothing that can be done with OpenSSL < 1.1.0h in the 1.1.0 series as the s3\n> part of the *SSL object gets hidden to the application, so it is not possible\n> to set SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS there.\n\n1.1.0d killed what was left of SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS while keeping\nit defined, so there is also very little value in even attempting it there.\n\n+1 on the patch, LGTM.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 24 May 2021 11:09:38 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Force disable of SSL renegociation in the server"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:09:38AM +0200, Daniel Gustafsson wrote:\n> 1.1.0d killed what was left of SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS while keeping\n> it defined, so there is also very little value in even attempting it there.\n> \n> +1 on the patch, LGTM.\n\nThanks, applied.\n\nI was having a very hard time putting a T instead of a C to\nrenegotiation..\n--\nMichael",
"msg_date": "Tue, 25 May 2021 10:17:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Force disable of SSL renegociation in the server"
}
]
[
{
"msg_contents": "Hi\nLogicalIncreaseRestartDecodingForSlot() has a debug log to report a\nnew restart_lsn, but the corresponding function for catalog_xmin,\nLogicalIncreaseXminForSlot(), has none. Here's a patch to add the same.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 20 May 2021 17:43:32 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Thu, May 20, 2021 at 5:43 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi\n> LogicalIncreaseRestartDecodingForSlot() has a debug log to report a\n> new restart_lsn. But the corresponding function for catalog_xmin.\n> Here's a patch to add the same.\n>\n\nI think this can be useful. One minor comment:\n+ elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n+ (uint32) (current_lsn >> 32), (uint32) current_lsn);\n\nIsn't it better to use LSN_FORMAT_ARGS for current_lsn? Also, there\ndoesn't seem to be any urgency for adding this, so you can register it\nfor the next CF so that we can add this when the branch opens for\nPG-15.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 May 2021 11:26:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Fri, May 21, 2021 at 11:26 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Thu, May 20, 2021 at 5:43 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi\n> > LogicalIncreaseRestartDecodingForSlot() has a debug log to report a\n> > new restart_lsn. But the corresponding function for catalog_xmin.\n> > Here's a patch to add the same.\n> >\n>\n> I think this can be useful. One minor comment:\n> + elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n> + (uint32) (current_lsn >> 32), (uint32) current_lsn);\n>\n> Isn't it better to use LSN_FORMAT_ARGS for current_lsn?\n\n\nThanks for reminding me about that. :).\n\nAttached revised patch.\n\n\n> Also, there\n> doesn't seem to be any urgency for adding this, so you can register it\n> for the next CF so that we can add this when the branch opens for\n> PG-15.\n>\n\nIt's there in CF. I am fine with PG-15. It will be good to patch the\nback-branches to have this extra diagnostic information available.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Fri, 21 May 2021 14:30:00 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Fri, May 21, 2021 at 6:00 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n>\n>\n> On Fri, May 21, 2021 at 11:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, May 20, 2021 at 5:43 PM Ashutosh Bapat\n>> <ashutosh.bapat.oss@gmail.com> wrote:\n>> >\n>> > Hi\n>> > LogicalIncreaseRestartDecodingForSlot() has a debug log to report a\n>> > new restart_lsn. But the corresponding function for catalog_xmin.\n>> > Here's a patch to add the same.\n>> >\n>>\n>> I think this can be useful. One minor comment:\n>> + elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n>> + (uint32) (current_lsn >> 32), (uint32) current_lsn);\n>>\n>> Isn't it better to use LSN_FORMAT_ARGS for current_lsn?\n>\n>\n> Thanks for reminding me about that. :).\n>\n> Attached revised patch.\n>\n>>\n>> Also, there\n>> doesn't seem to be any urgency for adding this, so you can register it\n>> for the next CF so that we can add this when the branch opens for\n>> PG-15.\n>\n>\n> It's there in CF. I am fine with PG-15. It will be good to patch the back-branches to have this extra diagnostic information available.\n\nThe patch looks to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 5 Jul 2021 16:23:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 6:00 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On Fri, May 21, 2021 at 11:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, May 20, 2021 at 5:43 PM Ashutosh Bapat\n> >> <ashutosh.bapat.oss@gmail.com> wrote:\n> >> >\n> >> > Hi\n> >> > LogicalIncreaseRestartDecodingForSlot() has a debug log to report a\n> >> > new restart_lsn. But the corresponding function for catalog_xmin.\n> >> > Here's a patch to add the same.\n> >> >\n> >>\n> >> I think this can be useful. One minor comment:\n> >> + elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n> >> + (uint32) (current_lsn >> 32), (uint32) current_lsn);\n> >>\n> >> Isn't it better to use LSN_FORMAT_ARGS for current_lsn?\n> >\n> >\n> > Thanks for reminding me about that. :).\n> >\n> > Attached revised patch.\n> >\n> >>\n> >> Also, there\n> >> doesn't seem to be any urgency for adding this, so you can register it\n> >> for the next CF so that we can add this when the branch opens for\n> >> PG-15.\n> >\n> >\n> > It's there in CF. I am fine with PG-15. It will be good to patch the back-branches to have this extra diagnostic information available.\n>\n> The patch looks to me.\n>\n\nDo you or others have any opinion on whether this should be\nback-patched? I personally prefer it to be a HEAD-only patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 16:26:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "> On 8 Jul 2021, at 12:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Do you or others have any opinion on whether this should be\n> back-patched? I personally prefer it to be a HEAD-only patch.\n\n+1 for only applying this to HEAD. The restart_lsn debug elog has been there\nsince 2014 so there doesn’t seem to be any immediate rush.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 8 Jul 2021 13:14:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 8:14 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 8 Jul 2021, at 12:56, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Do you or others have any opinion on whether this should be\n> > back-patched? I personally prefer it to be a HEAD-only patch.\n>\n> +1 for only applying this to HEAD. The restart_lsn debug elog has been there\n> since 2014 so there doesn’t seem to be any immediate rush.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 8 Jul 2021 20:35:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, May 21, 2021 at 6:00 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > It's there in CF. I am fine with PG-15. It will be good to patch the back-branches to have this extra diagnostic information available.\n>\n> The patch looks to me.\n>\n\n{\n slot->candidate_catalog_xmin = xmin;\n slot->candidate_xmin_lsn = current_lsn;\n+ elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n+ LSN_FORMAT_ARGS(current_lsn));\n }\n SpinLockRelease(&slot->mutex);\n\nToday, again looking at this patch, I don't think doing elog inside\nspinlock is a good idea. We can either release spinlock before it or\nuse some variable to remember that we need to write such an elog and\ndo it after releasing the lock. What do you think? I have noticed that\na nearby function LogicalIncreaseRestartDecodingForSlot() logs similar\ninformation after releasing spinlock, so it is better to follow the\nsame here as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 08:38:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >\n> > On Fri, May 21, 2021 at 6:00 PM Ashutosh Bapat\n> > <ashutosh.bapat@enterprisedb.com> wrote:\n> > >\n> > > It's there in CF. I am fine with PG-15. It will be good to patch the\n> back-branches to have this extra diagnostic information available.\n> >\n> > The patch looks to me.\n> >\n>\n> {\n> slot->candidate_catalog_xmin = xmin;\n> slot->candidate_xmin_lsn = current_lsn;\n> + elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n> + LSN_FORMAT_ARGS(current_lsn));\n> }\n> SpinLockRelease(&slot->mutex);\n>\n> Today, again looking at this patch, I don't think doing elog inside\n> spinlock is a good idea. We can either release spinlock before it or\n> use some variable to remember that we need to write such an elog and\n> do it after releasing the lock. What do you think?\n\n\nThe elog will be effective only under DEBUG1, otherwise it will be almost a\nNOOP. I am wondering whether it's worth adding a bool assignment and move\nthe elog out only for DEBUG1. Anyway, will defer it to you.\n\n\n> I have noticed that\n> a nearby function LogicalIncreaseRestartDecodingForSlot() logs similar\n> information after releasing spinlock, so it is better to follow the\n> same here as well.\n>\n\nNow that you mention it, the code their looks rather suspicious :)\nWe acquire the spinlock at the beginning of the function but do not release\nit if (restart_lsn <= slot->data.restart_lsn) or if (current_lsn <=\nslot->data.confirmed_flush). I might be missing something there. But it\nlooks unrelated.\n\n-- \n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 12 Jul 2021 17:28:15 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 5:28 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> I have noticed that\n>> a nearby function LogicalIncreaseRestartDecodingForSlot() logs similar\n>> information after releasing spinlock, so it is better to follow the\n>> same here as well.\n>\n>\n> Now that you mention it, the code their looks rather suspicious :)\n> We acquire the spinlock at the beginning of the function but do not release it if (restart_lsn <= slot->data.restart_lsn) or if (current_lsn <= slot->data.confirmed_flush).\n>\n\nNote that we end else if with (current_lsn <=\nslot->data.confirmed_flush) and after that there is a new if. We\nrelease lock in both the if/else conditions, so the code is fine as it\nis.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 18:23:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 6:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jul 12, 2021 at 5:28 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> I have noticed that\n> >> a nearby function LogicalIncreaseRestartDecodingForSlot() logs similar\n> >> information after releasing spinlock, so it is better to follow the\n> >> same here as well.\n> >\n> >\n> > Now that you mention it, the code their looks rather suspicious :)\n> > We acquire the spinlock at the beginning of the function but do not\n> release it if (restart_lsn <= slot->data.restart_lsn) or if (current_lsn <=\n> slot->data.confirmed_flush).\n> >\n>\n> Note that we end else if with (current_lsn <=\n> slot->data.confirmed_flush) and after that there is a new if. We\n> release lock in both the if/else conditions, so the code is fine as it\n> is.\n>\n\nAh! I overlooked the closing else if (). Sorry for the noise.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Tue, 13 Jul 2021 10:00:49 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 5:28 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >\n>> > On Fri, May 21, 2021 at 6:00 PM Ashutosh Bapat\n>> > <ashutosh.bapat@enterprisedb.com> wrote:\n>> > >\n>> > > It's there in CF. I am fine with PG-15. It will be good to patch the back-branches to have this extra diagnostic information available.\n>> >\n>> > The patch looks to me.\n>> >\n>>\n>> {\n>> slot->candidate_catalog_xmin = xmin;\n>> slot->candidate_xmin_lsn = current_lsn;\n>> + elog(DEBUG1, \"got new catalog_xmin %u at %X/%X\", xmin,\n>> + LSN_FORMAT_ARGS(current_lsn));\n>> }\n>> SpinLockRelease(&slot->mutex);\n>>\n>> Today, again looking at this patch, I don't think doing elog inside\n>> spinlock is a good idea. We can either release spinlock before it or\n>> use some variable to remember that we need to write such an elog and\n>> do it after releasing the lock. What do you think?\n>\n>\n> The elog will be effective only under DEBUG1, otherwise it will be almost a NOOP. I am wondering whether it's worth adding a bool assignment and move the elog out only for DEBUG1.\n>\n\nIf you don't like adding another variable then probably we can release\nspinlock in each of if .. else loop. As mentioned previously, it\ndoesn't seem a good idea to use elog inside spinlock, so we either\nneed to find some way to avoid that or probably will drop this patch.\nDo let me know what you or others prefer here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 7 Aug 2021 11:28:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-12 17:28:15 +0530, Ashutosh Bapat wrote:\n> On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> > On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > Today, again looking at this patch, I don't think doing elog inside\n> > spinlock is a good idea. We can either release spinlock before it or\n> > use some variable to remember that we need to write such an elog and\n> > do it after releasing the lock. What do you think?\n> \n> \n> The elog will be effective only under DEBUG1, otherwise it will be almost a\n> NOOP. I am wondering whether it's worth adding a bool assignment and move\n> the elog out only for DEBUG1. Anyway, will defer it to you.\n\nIt's *definitely* not ok to do an elog() while holding a spinlock. Consider\nwhat happens if the elog tries to format the message and runs out of\nmemory. Or if elog detects the stack depth is too deep. An ERROR would be\nthrown, and we'd loose track of the held spinlock.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 6 Aug 2021 23:10:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Sat, Aug 7, 2021 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2021-07-12 17:28:15 +0530, Ashutosh Bapat wrote:\n> > On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > > On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <sawada.mshk@gmail.com\n> >\n> > > Today, again looking at this patch, I don't think doing elog inside\n> > > spinlock is a good idea. We can either release spinlock before it or\n> > > use some variable to remember that we need to write such an elog and\n> > > do it after releasing the lock. What do you think?\n> >\n> >\n> > The elog will be effective only under DEBUG1, otherwise it will be\n> almost a\n> > NOOP. I am wondering whether it's worth adding a bool assignment and move\n> > the elog out only for DEBUG1. Anyway, will defer it to you.\n>\n> It's *definitely* not ok to do an elog() while holding a spinlock. Consider\n> what happens if the elog tries to format the message and runs out of\n> memory. Or if elog detects the stack depth is too deep. An ERROR would be\n> thrown, and we'd loose track of the held spinlock.\n>\n\nThanks for the clarification.\n\nAmit,\nI will provide the patch changed accordingly.\n\n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 9 Aug 2021 11:14:15 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "Hi Amit and Andres,\nHere's updated patch\n\nOn Mon, Aug 9, 2021 at 11:14 AM Ashutosh Bapat <\nashutosh.bapat@enterprisedb.com> wrote:\n\n>\n>\n> On Sat, Aug 7, 2021 at 11:40 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>> Hi,\n>>\n>> On 2021-07-12 17:28:15 +0530, Ashutosh Bapat wrote:\n>> > On Mon, Jul 12, 2021 at 8:39 AM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> >\n>> > > On Mon, Jul 5, 2021 at 12:54 PM Masahiko Sawada <\n>> sawada.mshk@gmail.com>\n>> > > Today, again looking at this patch, I don't think doing elog inside\n>> > > spinlock is a good idea. We can either release spinlock before it or\n>> > > use some variable to remember that we need to write such an elog and\n>> > > do it after releasing the lock. What do you think?\n>> >\n>> >\n>> > The elog will be effective only under DEBUG1, otherwise it will be\n>> almost a\n>> > NOOP. I am wondering whether it's worth adding a bool assignment and\n>> move\n>> > the elog out only for DEBUG1. Anyway, will defer it to you.\n>>\n>> It's *definitely* not ok to do an elog() while holding a spinlock.\n>> Consider\n>> what happens if the elog tries to format the message and runs out of\n>> memory. Or if elog detects the stack depth is too deep. An ERROR would be\n>> thrown, and we'd loose track of the held spinlock.\n>>\n>\n> Thanks for the clarification.\n>\n> Amit,\n> I will provide the patch changed accordingly.\n>\n> --\n> Best Wishes,\n> Ashutosh\n>\n\n\n-- \n--\nBest Wishes,\nAshutosh",
"msg_date": "Tue, 17 Aug 2021 11:58:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 11:58 AM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> Hi Amit and Andres,\n> Here's updated patch\n>\n\nI think we can log the required information immediately after\nreleasing spinlock, otherwise, the other logs from\nLogicalConfirmReceivedLocation() might interleave. I have made that\nchange and slightly edit the comment and commit message. I am planning\nto push this tomorrow unless you or others have any comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 6 Sep 2021 15:24:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "Yeah, I agree. Sorry for missing that.\n\nThe updated patch looks good to me.\n\nOn Mon, Sep 6, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Aug 17, 2021 at 11:58 AM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > Hi Amit and Andres,\n> > Here's updated patch\n> >\n>\n> I think we can log the required information immediately after\n> releasing spinlock, otherwise, the other logs from\n> LogicalConfirmReceivedLocation() might interleave. I have made that\n> change and slightly edit the comment and commit message. I am planning\n> to push this tomorrow unless you or others have any comments.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\n\n-- \n--\nBest Wishes,\nAshutosh",
"msg_date": "Mon, 6 Sep 2021 17:29:16 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> Yeah, I agree. Sorry for missing that.\n>\n> The updated patch looks good to me.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Sep 2021 11:14:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "Thanks Amit.\n\nOn Tue, Sep 7, 2021 at 11:14 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > Yeah, I agree. Sorry for missing that.\n> >\n> > The updated patch looks good to me.\n> >\n>\n> Pushed!\n>\n> --\n> With Regards,\n> Amit Kapila.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 7 Sep 2021 18:12:28 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Tue, Sep 07, 2021 at 11:14:23AM +0530, Amit Kapila wrote:\n> On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > Yeah, I agree. Sorry for missing that.\n> >\n> > The updated patch looks good to me.\n> >\n> \n> Pushed!\n> \n\nThis patch is still on \"Needs review\"!\nShould we change it to Committed or is expected something else \nabout it?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n",
"msg_date": "Thu, 30 Sep 2021 11:45:34 -0500",
"msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 1:45 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Tue, Sep 07, 2021 at 11:14:23AM +0530, Amit Kapila wrote:\n> > On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat\n> > <ashutosh.bapat@enterprisedb.com> wrote:\n> > >\n> > > Yeah, I agree. Sorry for missing that.\n> > >\n> > > The updated patch looks good to me.\n> > >\n> >\n> > Pushed!\n> >\n>\n> This patch is still on \"Needs review\"!\n> Should we change it to Committed or is expected something else\n> about it?\n\nYes, the patch already gets committed (4c347885). So I also think we\nshould mark it as Committed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Oct 2021 10:06:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 6:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 1:45 AM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > On Tue, Sep 07, 2021 at 11:14:23AM +0530, Amit Kapila wrote:\n> > > On Mon, Sep 6, 2021 at 5:29 PM Ashutosh Bapat\n> > > <ashutosh.bapat@enterprisedb.com> wrote:\n> > > >\n> > > > Yeah, I agree. Sorry for missing that.\n> > > >\n> > > > The updated patch looks good to me.\n> > > >\n> > >\n> > > Pushed!\n> > >\n> >\n> > This patch is still on \"Needs review\"!\n> > Should we change it to Committed or is expected something else\n> > about it?\n>\n> Yes, the patch already gets committed (4c347885). So I also think we\n> should mark it as Committed.\n>\n\nRight, I have changed the status to Committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 07:55:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Diagnostic comment in LogicalIncreaseXminForSlot"
}
] |
[
{
"msg_contents": "Hi.\n\nThe attached patch allows pushing joins with function RTEs to PostgreSQL \ndata sources.\nThis makes executing queries like this\n\ncreate foreign table f_pgbench_accounts (aid int, bid int, abalance int, \nfiller char(84)) SERVER local_srv OPTIONS (table_name \n'pgbench_accounts');\nselect * from f_pgbench_accounts join unnest(array[1,2,3]) ON unnest = \naid;\n\nmore efficient.\n\nwith patch:\n\n# explain analyze select * from f_pgbench_accounts join \nunnest(array[1,2,3,4,5,6]) ON unnest = aid;\n QUERY PLAN\n------------------------------------------------------------------------------------------------\n Foreign Scan (cost=100.00..116.95 rows=7 width=356) (actual \ntime=2.282..2.287 rows=6 loops=1)\n Relations: (f_pgbench_accounts) INNER JOIN (FUNCTION RTE unnest)\n Planning Time: 0.487 ms\n Execution Time: 3.336 ms\n\nwithout patch:\n\n# explain analyze select * from f_pgbench_accounts join \nunnest(array[1,2,3,4,5,6]) ON unnest = aid;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=100.14..158.76 rows=7 width=356) (actual \ntime=2.263..1268.607 rows=6 loops=1)\n Hash Cond: (f_pgbench_accounts.aid = unnest.unnest)\n -> Foreign Scan on f_pgbench_accounts (cost=100.00..157.74 rows=217 \nwidth=352) (actual time=2.190..1205.938 rows=100000 loops=1)\n -> Hash (cost=0.06..0.06 rows=6 width=4) (actual time=0.041..0.043 \nrows=6 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Function Scan on unnest (cost=0.00..0.06 rows=6 width=4) \n(actual time=0.025..0.028 rows=6 loops=1)\n Planning Time: 0.389 ms\n Execution Time: 1269.627 ms\n\nSo far I don't know how to visualize actual function expression used in \nfunction RTE, as in postgresExplainForeignScan() es->rtable comes from \nqueryDesc->plannedstmt->rtable, and rte->functions is already 0.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Thu, 20 May 2021 20:43:42 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Function scan FDW pushdown"
},
{
"msg_contents": "Hi Alexander,\n\nOn Thu, May 20, 2021 at 11:13 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n>\n> Hi.\n>\n> The attached patch allows pushing joins with function RTEs to PostgreSQL\n> data sources.\n> This makes executing queries like this\n>\n> create foreign table f_pgbench_accounts (aid int, bid int, abalance int,\n> filler char(84)) SERVER local_srv OPTIONS (table_name\n> 'pgbench_accounts');\n> select * from f_pgbench_accounts join unnest(array[1,2,3]) ON unnest =\n> aid;\n>\n\nIt will be good to provide some practical examples where this is useful.\n\n\n\n> more efficient.\n>\n> with patch:\n>\n>\n> So far I don't know how to visualize actual function expression used in\n> function RTE, as in postgresExplainForeignScan() es->rtable comes from\n> queryDesc->plannedstmt->rtable, and rte->functions is already 0.\n\nThe actual function expression will be part of the Remote SQL of\nForeignScan node so no need to visualize it separately.\n\nThe patch will have problems when there are multiple foreign tables\nall on different servers or use different FDWs. In such a case the\nfunction scan's RelOptInfo will get the fpinfo based on the first\nforeign table the function scan is paired with during join planning.\nBut that may not be the best foreign table to join. We should be able\nto plan all the possible joins. Current infra to add one fpinfo per\nRelOptInfo won't help there. We need something better.\n\nThe patch targets only postgres FDW, how do you see this working with\nother FDWs?\n\nIf we come up with the right approach we could use it for 1. pushing\ndown queries with IN () clause 2. joining a small local table with a\nlarge foreign table by sending the local table rows down to the\nforeign server.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 15 Jun 2021 18:45:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Function scan FDW pushdown"
},
{
"msg_contents": "Ashutosh Bapat писал 2021-06-15 16:15:\n> Hi Alexander,\n\nHi.\n\nThe current version of the patch is based on asymetric partition-wise \njoin.\nCurrently it is applied after \nv19-0001-Asymmetric-partitionwise-join.patch from\non \nhttps://www.postgresql.org/message-id/792d60f4-37bc-e6ad-68ca-c2af5cbb2d9b@postgrespro.ru \n.\n\n>> So far I don't know how to visualize actual function expression used \n>> in\n>> function RTE, as in postgresExplainForeignScan() es->rtable comes from\n>> queryDesc->plannedstmt->rtable, and rte->functions is already 0.\n> \n> The actual function expression will be part of the Remote SQL of\n> ForeignScan node so no need to visualize it separately.\n\nWe still need to create tuple description for functions in \nget_tupdesc_for_join_scan_tuples(),\nso I had to remove setting newrte->functions to NIL in \nadd_rte_to_flat_rtable().\nWith rte->functions in place, there's no issues for explain.\n\n> \n> The patch will have problems when there are multiple foreign tables\n> all on different servers or use different FDWs. In such a case the\n> function scan's RelOptInfo will get the fpinfo based on the first\n> foreign table the function scan is paired with during join planning.\n> But that may not be the best foreign table to join. We should be able\n> to plan all the possible joins. Current infra to add one fpinfo per\n> RelOptInfo won't help there. We need something better.\n\nI suppose attached version of the patch is more mature.\n\n> \n> The patch targets only postgres FDW, how do you see this working with\n> other FDWs?\n\nNot now. We introduce necessary APIs for other FDWs, but implementing \nTryShippableJoinPaths()\ndoesn't seem straightforward.\n\n> \n> If we come up with the right approach we could use it for 1. pushing\n> down queries with IN () clause 2. joining a small local table with a\n> large foreign table by sending the local table rows down to the\n> foreign server.\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 04 Oct 2021 10:42:56 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Function scan FDW pushdown"
}
] |
[
{
"msg_contents": "I'm not too happy with this:\n\nregression=# create procedure p1(out x int) language plpgsql\nregression-# as 'begin x := 42; end';\nCREATE PROCEDURE\n\nregression=# call p1();\nERROR: procedure p1() does not exist\nLINE 1: call p1();\n ^\nHINT: No procedure matches the given name and argument types. You might need to add explicit type casts.\n\nregression=# call p1(null);\n x \n----\n 42\n(1 row)\n\nI can see that that makes some sense within plpgsql, where the CALL\nought to provide a plpgsql variable for each OUT argument. But it\nseems moderately insane for calls from SQL. It certainly fails\nto match the documentation [1], which says fairly explicitly that\nthe argument list items match the *input* arguments of the procedure,\nand further notes that plpgsql handles output arguments differently.\n\nI think we ought to fix this so that OUT-only arguments are ignored\nwhen calling from SQL not plpgsql. This is less than simple, since\nthe parser doesn't actually have any context that would let it know\nwhich one we're doing, but I think we could hack that up somehow.\n(The RawParseMode mechanism seems like one way we could pass the\ninfo, and there are probably others.)\n\nAlternatively, if we're going to stick with this behavior, we have\nto change the docs to explain it. Either way it seems like an\nopen item for v14. (For those who've forgotten, OUT-only procedure\narguments are a new thing in v14.)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/sql-call.html\n\n\n",
"msg_date": "Thu, 20 May 2021 13:53:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "CALL versus procedures with output-only arguments"
},
{
"msg_contents": "čt 20. 5. 2021 v 19:53 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> I'm not too happy with this:\n>\n> regression=# create procedure p1(out x int) language plpgsql\n> regression-# as 'begin x := 42; end';\n> CREATE PROCEDURE\n>\n> regression=# call p1();\n> ERROR: procedure p1() does not exist\n> LINE 1: call p1();\n> ^\n> HINT: No procedure matches the given name and argument types. You might\n> need to add explicit type casts.\n>\n> regression=# call p1(null);\n> x\n> ----\n> 42\n> (1 row)\n>\n> I can see that that makes some sense within plpgsql, where the CALL\n> ought to provide a plpgsql variable for each OUT argument. But it\n> seems moderately insane for calls from SQL. It certainly fails\n> to match the documentation [1], which says fairly explicitly that\n> the argument list items match the *input* arguments of the procedure,\n> and further notes that plpgsql handles output arguments differently.\n>\n> I think we ought to fix this so that OUT-only arguments are ignored\n> when calling from SQL not plpgsql. This is less than simple, since\n> the parser doesn't actually have any context that would let it know\n> which one we're doing, but I think we could hack that up somehow.\n> (The RawParseMode mechanism seems like one way we could pass the\n> info, and there are probably others.)\n>\n\n+1\n\nPavel\n\n\n> Alternatively, if we're going to stick with this behavior, we have\n> to change the docs to explain it. Either way it seems like an\n> open item for v14. (For those who've forgotten, OUT-only procedure\n> arguments are a new thing in v14.)\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/sql-call.html\n>\n>\n>\n",
"msg_date": "Thu, 20 May 2021 20:39:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "I wrote:\n> I think we ought to fix this so that OUT-only arguments are ignored\n> when calling from SQL not plpgsql.\n\nI'm working on a patch to make it act that way. I've got some issues\nyet to fix with named arguments (which seem rather undertested BTW,\nsince the patch is passing check-world even though I know it will\ncrash instantly on cases with CALL+named-args+out-only-args).\n\nBefore I spend too much time on it though, I wanted to mention that\nit includes undoing 2453ea142's decision to include OUT arguments\nin pg_proc.proargtypes for procedures (but not for any other kind of\nroutine). I thought that was a terrible decision and I'm very happy\nto revert it, but is anyone likely to complain loudly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 23 May 2021 20:01:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\nOn 5/23/21 8:01 PM, Tom Lane wrote:\n> I wrote:\n>> I think we ought to fix this so that OUT-only arguments are ignored\n>> when calling from SQL not plpgsql.\n> I'm working on a patch to make it act that way. I've got some issues\n> yet to fix with named arguments (which seem rather undertested BTW,\n> since the patch is passing check-world even though I know it will\n> crash instantly on cases with CALL+named-args+out-only-args).\n>\n> Before I spend too much time on it though, I wanted to mention that\n> it includes undoing 2453ea142's decision to include OUT arguments\n> in pg_proc.proargtypes for procedures (but not for any other kind of\n> routine). I thought that was a terrible decision and I'm very happy\n> to revert it, but is anyone likely to complain loudly?\n>\n> \t\t\t\n\n\nPossibly, Will take a look. IIRC we have based some other things on this.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 24 May 2021 08:22:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/23/21 8:01 PM, Tom Lane wrote:\n>> Before I spend too much time on it though, I wanted to mention that\n>> it includes undoing 2453ea142's decision to include OUT arguments\n>> in pg_proc.proargtypes for procedures (but not for any other kind of\n>> routine). I thought that was a terrible decision and I'm very happy\n>> to revert it, but is anyone likely to complain loudly?\n\n> Possibly, Will take a look. IIRC we have based some other things on this.\n\nThere's 9213462c5, which I *think* just needs to be reverted along\nwith much of 2453ea142. But I don't have a JDBC setup to check it\nwith.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 09:45:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "I wrote:\n>> I think we ought to fix this so that OUT-only arguments are ignored\n>> when calling from SQL not plpgsql.\n\nHere's a draft patch for that. The docs probably need some more\nfiddling, but I think the code is in good shape. (I'm unsure about\nthe JDBC compatibility issue, and would appreciate someone else\ntesting that.)\n\n> I'm working on a patch to make it act that way. I've got some issues\n> yet to fix with named arguments (which seem rather undertested BTW,\n> since the patch is passing check-world even though I know it will\n> crash instantly on cases with CALL+named-args+out-only-args).\n\nAfter I'd finished fixing that, I realized that HEAD is really pretty\nbroken for the case. For example\n\nregression=# CREATE PROCEDURE test_proc10(IN a int, OUT b int, IN c int) \nregression-# LANGUAGE plpgsql\nregression-# AS $$\nregression$# BEGIN\nregression$# RAISE NOTICE 'a: %, b: %, c: %', a, b, c;\nregression$# b := a - c;\nregression$# END;\nregression$# $$;\nCREATE PROCEDURE\nregression=# DO $$\nregression$# DECLARE _a int; _b int; _c int;\nregression$# BEGIN\nregression$# _a := 10; _b := 30; _c := 7;\nregression$# CALL test_proc10(a => _a, b => _b, c => _c);\nregression$# RAISE NOTICE '_a: %, _b: %, _c: %', _a, _b, _c;\nregression$# END$$;\nERROR: procedure test_proc10(a => integer, b => integer, c => integer) does not exist\nLINE 1: CALL test_proc10(a => _a, b => _b, c => _c)\n ^\nHINT: No procedure matches the given name and argument types. You might need to add explicit type casts.\nQUERY: CALL test_proc10(a => _a, b => _b, c => _c)\nCONTEXT: PL/pgSQL function inline_code_block line 5 at CALL\n\nSo even if you object to what I'm trying to do here, there is\nwork to be done.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 24 May 2021 16:44:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 24.05.21 02:01, Tom Lane wrote:\n> I wrote:\n>> I think we ought to fix this so that OUT-only arguments are ignored\n>> when calling from SQL not plpgsql.\n> \n> I'm working on a patch to make it act that way. I've got some issues\n> yet to fix with named arguments (which seem rather undertested BTW,\n> since the patch is passing check-world even though I know it will\n> crash instantly on cases with CALL+named-args+out-only-args).\n> \n> Before I spend too much time on it though, I wanted to mention that\n> it includes undoing 2453ea142's decision to include OUT arguments\n> in pg_proc.proargtypes for procedures (but not for any other kind of\n> routine). I thought that was a terrible decision and I'm very happy\n> to revert it, but is anyone likely to complain loudly?\n\nI don't understand why you want to change this. The argument resolution \nof CALL is specified in the SQL standard; we shouldn't just make up our \nown system.\n\n\n",
"msg_date": "Tue, 25 May 2021 13:21:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 24.05.21 02:01, Tom Lane wrote:\n>>> I think we ought to fix this so that OUT-only arguments are ignored\n>>> when calling from SQL not plpgsql.\n\n> I don't understand why you want to change this. The argument resolution \n> of CALL is specified in the SQL standard; we shouldn't just make up our \n> own system.\n\nI don't really see how you can argue that the existing behavior is\nmore spec-compliant than what I'm suggesting. What I read in the spec\n(SQL:2021 10.4 <routine invocation> SR 9) h) iii) 1)) is\n\n 1) If Pi is an output SQL parameter, then XAi shall be a <target\n specification>.\n\n(where <target specification> more or less reduces to \"variable\").\nNow, sure, that's what we've got in plpgsql, and I'm not proposing\nto change that. But in plain SQL, as of HEAD, you are supposed to\nwrite NULL, or a random literal, or indeed anything at all *except*\na variable. How is that more standard-compliant than not writing\nanything?\n\nAlso, one could argue that the behavior I'm suggesting is completely\nspec-compliant if one assumes that the OUT parameters have some sort\nof default, allowing them to be omitted from the call.\n\nMore generally, there are enough deviations from spec in what we do\nto perform ambiguous-call resolution that it seems rather silly to\nhang your hat on this particular point.\n\nNow as against that, we are giving up a whole lot of consistency.\nAs of HEAD:\n\n* The rules for what is a conflict of signatures are different\nfor functions and procedures.\n\n* The rules for how to identify a target routine in ALTER, DROP,\netc are different for functions and procedures. 
That's especially\nnasty in ALTER/DROP ROUTINE, where we don't have a syntax cue\nas to whether or not to ignore OUT parameters.\n\n* The rules for how to call functions and procedures with OUT\nparameters from SQL are different.\n\n* Client code that looks at pg_proc.proargtypes is almost certainly\ngoing to be broken.\n\nI don't like any of those side-effects, and I don't want to pay\nthose prices for what seems to me to be a bogus claim of improved\nspec compliance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 11:20:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 25.05.21 17:20, Tom Lane wrote:\n> I don't really see how you can argue that the existing behavior is\n> more spec-compliant than what I'm suggesting. What I read in the spec\n> (SQL:2021 10.4 <routine invocation> SR 9) h) iii) 1)) is\n> \n> 1) If Pi is an output SQL parameter, then XAi shall be a <target\n> specification>.\n> \n> (where <target specification> more or less reduces to \"variable\").\n> Now, sure, that's what we've got in plpgsql, and I'm not proposing\n> to change that. But in plain SQL, as of HEAD, you are supposed to\n> write NULL, or a random literal, or indeed anything at all *except*\n> a variable. How is that more standard-compliant than not writing\n> anything?\n\nI concede that the current implementation is not fully standards \ncompliant in this respect. Maybe we need to rethink how we can satisfy \nthis better. For example, in some other implementations, you write CALL \np(?), (where ? is the parameter placeholder), so it's sort of an output \nparameter. However, changing it so that the entire way the parameters \nare counted is different seems a much greater departure.\n\n> More generally, there are enough deviations from spec in what we do\n> to perform ambiguous-call resolution that it seems rather silly to\n> hang your hat on this particular point.\n\nI don't know what you mean by this. Some stuff is different in the \ndetails, but you *can* write conforming code if you avoid the extremely \ncomplicated cases. With your proposal, everything is always different, \nand we might as well remove the CALL statement and name it something \nelse because users migrating from other systems won't be able to use it \nproperly.\n\n> Now as against that, we are giving up a whole lot of consistency.\n> As of HEAD:\n> \n> * The rules for what is a conflict of signatures are different\n> for functions and procedures.\n\nBut that's the fault of the way it was done for functions. 
That doesn't \nmean we have to repeat it for procedures. I mean, sure it would be \nbetter if it were consistent. But SQL-standard syntax should behave in \nSQL standard ways. Creating, altering, and dropping procedures is meant \nto be portable between SQL implementations. If we change this in subtle \nways so that DROP PROCEDURE p(int, int) drops a different procedure in \ndifferent SQL implementations, that seems super-dangerous and annoying.\n\n\n",
"msg_date": "Tue, 25 May 2021 20:04:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "I wrote:\n> * The rules for how to identify a target routine in ALTER, DROP,\n> etc are different for functions and procedures. That's especially\n> nasty in ALTER/DROP ROUTINE, where we don't have a syntax cue\n> as to whether or not to ignore OUT parameters.\n\nJust to enlarge on that point a bit:\n\nregression=# create function foo(int, out int) language sql\nregression-# as 'select $1';\nCREATE FUNCTION\nregression=# create procedure foo(int, out int) language sql\nregression-# as 'select $1';\nCREATE PROCEDURE\n\nIMO this should have failed, but since it doesn't:\n\nregression=# drop routine foo(int, out int);\nDROP ROUTINE\n\nWhich object was dropped, and what is the argument for that one\nbeing the right one?\n\nExperimentation shows that in HEAD, what is dropped is the procedure,\nand indeed the DROP will fail if you try to use it on the function.\nThat is a compatibility break, because in previous versions this\nworked:\n\nregression=# create function foo(int, out int) language sql\nas 'select $1';\nCREATE FUNCTION\nregression=# drop routine foo(int, out int);\nDROP ROUTINE\n\nThe fact that you now have to be aware of these details to use\nALTER/DROP ROUTINE seems like a pretty serious loss of user\nfriendliness, as well as compatibility.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 14:20:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:20 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Just to enlarge on that point a bit:\n>\n> regression=# create function foo(int, out int) language sql\n> regression-# as 'select $1';\n> CREATE FUNCTION\n> regression=# create procedure foo(int, out int) language sql\n> regression-# as 'select $1';\n> CREATE PROCEDURE\n>\n> IMO this should have failed, but since it doesn't:\n>\n> regression=# drop routine foo(int, out int);\n> DROP ROUTINE\n>\n> Which object was dropped, and what is the argument for that one\n> being the right one?\n>\n> Experimentation shows that in HEAD, what is dropped is the procedure,\n> and indeed the DROP will fail if you try to use it on the function.\n> That is a compatibility break, because in previous versions this\n> worked:\n>\n> regression=# create function foo(int, out int) language sql\n> as 'select $1';\n> CREATE FUNCTION\n> regression=# drop routine foo(int, out int);\n> DROP ROUTINE\n>\n> The fact that you now have to be aware of these details to use\n> ALTER/DROP ROUTINE seems like a pretty serious loss of user\n> friendliness, as well as compatibility.\n\nI'm also concerned about the behavior here. I noticed it when this\ncommit went in, and it seemed concerning to me then, and it still\ndoes. Nevertheless, I'm not convinced that your proposal is an\nimprovement. Suppose we have foo(int, out int) and also foo(int).\nThen, if I understand correctly, under your proposal, foo(4) will call\nthe former within plpgsql code, because in that context the OUT\nparameters must be included, and the latter from SQL code, because in\nthat context they must be omitted. I suspect in practice what will\nhappen is that you'll end up with both interpretations even within the\nbody of a plpgsql function, because plpgsql functions tend to include\nSQL queries where, I presume, the SQL interpretation must apply. 
It\nseems that it will be very difficult for users to know which set of\nrules apply in which contexts.\n\nNow, that being said, the status quo is also pretty bad, because we\nhave one set of rules for functions and another for procedures. I\nbelieve that users will expect those to behave in similar ways, and\nwill be sad and surprised when they don't.\n\nBut on the third hand, Peter is also correct when he says that there's\nnot much use in implementing standard features with non-standard\nsemantics. The fact that we've chosen to make OUT parameters do some\nrandom thing that is not what other systems do is, indeed, not great\nfor migrations. So doubling down on that questionable choice is also\nnot great. In a green field I think we ought to go the other way and\nmake OUT parameters as consistent with the standard as we can, and\nhave that handling be the same for procedures and for functions, but\nit seems impossible to imagine making such a large compatibility break\nwith our own previous releases, however much the spec may dictate it.\n\nI don't see any really great choice here, but in some sense your\nproposal seems like the worst of all the options. It does not reverse\nthe patch's choice to treat functions and procedures differently, so\nusers will still have to deal with that inconsistency. But in addition\nthe handling of procedures will itself be inconsistent based on\ncontext.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 14:58:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 25.05.21 17:20, Tom Lane wrote:\n>> I don't really see how you can argue that the existing behavior is\n>> more spec-compliant than what I'm suggesting. What I read in the spec\n>> (SQL:2021 10.4 <routine invocation> SR 9) h) iii) 1)) is\n>> 1) If Pi is an output SQL parameter, then XAi shall be a <target\n>> specification>.\n\n> I concede that the current implementation is not fully standards \n> compliant in this respect. Maybe we need to rethink how we can satisfy \n> this better. For example, in some other implementations, you write CALL \n> p(?), (where ? is the parameter placeholder), so it's sort of an output \n> parameter. However, changing it so that the entire way the parameters \n> are counted is different seems a much greater departure.\n\nI'd expect to be able to write something like that in contexts where\nthere's a reasonable way to name an output parameter. Like, say,\nplpgsql. Or JDBC --- I think they already use a notation like that\nfor output parameters from functions, and transform it after the fact.\nAs things work in HEAD, they'll have to have a different special hack\nfor procedures than they do for functions. But none of this applies\nto bare-SQL CALL.\n\n>> More generally, there are enough deviations from spec in what we do\n>> to perform ambiguous-call resolution that it seems rather silly to\n>> hang your hat on this particular point.\n\n> I don't know what you mean by this.\n\nWell, let's take an example. If OUT parameters are part of the\nsignature, then I'm allowed to do this:\n\nregression=# create procedure p1(in x int, out y int) \nregression-# language sql as 'select $1';\nCREATE PROCEDURE\nregression=# create procedure p1(in x int, out y float8)\nlanguage sql as 'select $1';\nCREATE PROCEDURE\nregression=# call p1(42, null);\n y \n----\n 42\n(1 row)\n\nI'm surprised that that worked rather than throwing an ambiguity\nerror. 
I wonder which procedure it called, and where in the spec\nyou can find chapter and verse saying that that one and not the other\none is right.\n\nIt gets even sillier though, because experimentation shows that it\nwas the int one that was preferred:\n\nregression=# create or replace procedure p1(in x int, out y float8)\nlanguage sql as 'select $1+1';\nCREATE PROCEDURE\nregression=# call p1(42, null);\n y \n----\n 42\n(1 row)\n\nThat seems kind of backwards really, considering that float8 is\nfurther up the numeric hierarchy. But let's keep going:\n\nregression=# create procedure p1(in x int, out y text)\nlanguage sql as 'select $1+2';\nCREATE PROCEDURE\nregression=# call p1(42, null);\n y \n----\n 44\n(1 row)\n\nSo text is preferred to either int or float8. I know why that\nhappened: we have a preference for matching UNKNOWN to string types.\nBut I challenge you to provide any argument that this behavior is\nspec-compliant.\n\nMore generally, the point I'm trying to make is that our rules\nfor resolving an ambiguous function differ in a whole lot of\ndetails from what SQL says. That ship sailed a couple of\ndecades ago, so I'm not excited about adopting a fundamentally\nbad design in pursuit of trying to make one small detail of\nthat behavior slightly closer to SQL.\n\n[ thinks a bit ]\n\nA lot of what I'm exercised about here is not the question of\nhow many parameters we write in CALL, but the choice to redefine\nproargtypes (and thereby change what is considered the routine's\nsignature). With the infrastructure in the patch I proposed,\nit'd be possible to revert the signature changes and still\nwrite dummy output parameters in CALL -- we'd just make CALL\nset include_out_parameters=true all the time. I do not think that\nsolution is superior to what I did in the patch, but if we can't\nhave a meeting of the minds on CALL, doing that much would still\nbe an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 15:02:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm also concerned about the behavior here. I noticed it when this\n> commit went in, and it seemed concerning to me then, and it still\n> does. Nevertheless, I'm not convinced that your proposal is an\n> improvement. Suppose we have foo(int, out int) and also foo(int).\n> Then, if I understand correctly, under your proposal, foo(4) will call\n> the former within plpgsql code, because in that context the OUT\n> parameters must be included, and the latter from SQL code, because in\n> that context they must be emitted.\n\nNo, you misunderstand my proposal. The thing that I most urgently\nwant to do is to prevent that situation from ever arising, by not\nallowing those two procedures to coexist, just as you can't have\ntwo functions with such signatures.\n\nIf procedures are required to have distinct signatures when considering\ninput parameters only, then a fortiori they are distinct when also\nconsidering output parameters. So my proposal cannot make a CALL\nthat includes output parameters ambiguous if it was not before.\n\n> I don't see any really great choice here, but in some sense your\n> proposal seems like the worst of all the options. It does not reverse\n> the patch's choice to treat functions and procedures differently, so\n> users will still have to deal with that inconsistency.\n\nYou're definitely confused, because reversing that choice is *exactly*\nwhat I'm on about. The question of whether SQL-level CALL should act\ndifferently from plpgsql CALL is pretty secondary.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 15:10:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On Tue, May 25, 2021 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> No, you misunderstand my proposal. The thing that I most urgently\n> want to do is to prevent that situation from ever arising, by not\n> allowing those two procedures to coexist, just as you can't have\n> two functions with such signatures.\n>\n> If procedures are required to have distinct signatures when considering\n> input parameters only, then a fortiori they are distinct when also\n> considering output parameters. So my proposal cannot make a CALL\n> that includes output parameters ambiguous if it was not before.\n\nOh, OK.\n\nI'm not sure what I think about that yet. It certainly seems to make\nthings less confusing. But on the other hand, I think that the\nstandard - or some competing systems - may have cases where they\ndisambiguate calls based on output arguments only. Granted, if we\nprohibit that now, we can always change our minds and allow it later\nif we are sure we've got everything figured out, whereas if we don't\nprohibit now, backward compatibility will make it hard to prohibit it\nlater. But on the other hand I don't really fully understand Peter's\nthinking here, so I'm a little reluctant to jump to the conclusion\nthat he's lost the way.\n\n> > I don't see any really great choice here, but in some sense your\n> > proposal seems like the worst of all the options. It does not reverse\n> > the patch's choice to treat functions and procedures differently, so\n> > users will still have to deal with that inconsistency.\n>\n> You're definitely confused, because reversing that choice is *exactly*\n> what I'm on about. 
The question of whether SQL-level CALL should act\n> differently from plpgsql CALL is pretty secondary.\n\nI understood the reverse from the first post on the thread, so perhaps\nit is more that your thinking has developed than that I am confused.\n\nHowever, it's possible that I only think that because I'm confused.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 15:53:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, May 25, 2021 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> You're definitely confused, because reversing that choice is *exactly*\n>> what I'm on about. The question of whether SQL-level CALL should act\n>> differently from plpgsql CALL is pretty secondary.\n\n> I understood the reverse from the first post on the thread, so perhaps\n> it is more that your thinking has developed than that I am confused.\n\nYeah, the odd behavior of CALL is where I started from, but now I think\nthe main problem is with the signature (ie, allowing procedures with\nsignatures that differ only in OUT parameter positions). If we got\nrid of that choice then it'd be possible to document that you should\nonly ever write NULL for OUT-parameter positions, because the type\nof such an argument would never be significant for disambiguation.\n\nWe could consider going further and actually enforcing use of NULL,\nor inventing some other syntactic placeholder such as the '?' that\nPeter was speculating about. But I'm not sure that that adds much.\n\nRelevant to this is that my proposed patch gets rid of the existing\nbehavior that such arguments actually get evaluated. That would\nneed to be documented, unless we go with the placeholder approach.\nBut I've not spent time on the documentation yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 16:21:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 25.05.21 22:21, Tom Lane wrote:\n> Yeah, the odd behavior of CALL is where I started from, but now I think\n> the main problem is with the signature (ie, allowing procedures with\n> signatures that differ only in OUT parameter positions). If we got\n> rid of that choice then it'd be possible to document that you should\n> only ever write NULL for OUT-parameter positions, because the type\n> of such an argument would never be significant for disambiguation.\n\nAFAICT, your patch does not maintain the property that\n\n CREATE PROCEDURE p1(OUT int, OUT int)\n\ncorresponds to\n\n DROP PROCEDURE p1(int, int)\n\nwhich would be bad.\n\nI'm not opposed to reverting the feature if we can't find a good \nsolution in a hurry. The main value of this feature is for \nmigrations, so I want to be sure that whatever we settle on doesn't back \nus into a corner with respect to that.\n\nWe could perhaps also just disable the SQL-level calling until a better \nsolution arises. AFAICT, things work okay in PL/pgSQL, because OUT \nparameters are tied to a typed target there.\n\n\n",
"msg_date": "Wed, 26 May 2021 18:41:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> AFAICT, your patch does not maintain the property that\n> CREATE PROCEDURE p1(OUT int, OUT int)\n> corresponds to\n> DROP PROCEDURE p1(int, int)\n> which would be bad.\n\nWhy? If it actually works that way right now, I'd maintain\nstrenuously that it's broken. The latter should be referring\nto a procedure with two IN arguments. Even if the SQL spec\nallows fuzziness about that, we cannot afford to, because we\nhave a more generous view of overloading than the spec does.\n(As far as I could tell from looking at the spec yesterday,\nthey think that you aren't allowed to have two procedures\nwith the same name/schema and same number of arguments,\nregardless of the details of those arguments. Up with that\nI will not put.)\n\n> I'm not opposed to reverting the feature if we can't find a good \n> solution in a hurry.\n\nI'm not looking to revert the feature. I mainly want a saner catalog\nrepresentation, and less inconsistency in object naming (which is\ntightly tied to the first thing).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 May 2021 13:28:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Here's a stripped-down patch that drops the change in what should be\nin CALL argument lists, and just focuses on reverting the change in\npg_proc.proargtypes and the consequent mess for ALTER/DROP ROUTINE.\nI spent some more effort on the docs, too.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 29 May 2021 13:32:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 26.05.21 19:28, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> AFAICT, your patch does not maintain the property that\n>> CREATE PROCEDURE p1(OUT int, OUT int)\n>> corresponds to\n>> DROP PROCEDURE p1(int, int)\n>> which would be bad.\n> \n> Why? If it actually works that way right now, I'd maintain\n> strenuously that it's broken. The latter should be referring\n> to a procedure with two IN arguments. Even if the SQL spec\n> allows fuzziness about that, we cannot afford to, because we\n> have a more generous view of overloading than the spec does.\n\nThere is no fuzziness in the spec about this. See subclause <specific \nroutine designator>. It just talks about arguments, nothing about input \nor output arguments. I don't find any ambiguity there. I don't see why \nwe want to reinvent this here.\n\nIf I have two procedures\n\np1(IN int, IN int, OUT int, OUT int)\np1(OUT int, OUT int)\n\nthen a DROP, or ALTER, or GRANT, etc. on p1(int, int) should operate on \nthe second one in a spec-compliant implementation, but you propose to \nhave it operate on the first one. That kind of discrepancy would be \nreally bad to have. It would be very difficult for migration tools to \ncheck or handle this in a robust way.\n\n> (As far as I could tell from looking at the spec yesterday,\n> they think that you aren't allowed to have two procedures\n> with the same name/schema and same number of arguments,\n> regardless of the details of those arguments. Up with that\n> I will not put.)\n\nI don't see that.\n\n\n",
"msg_date": "Mon, 31 May 2021 20:59:02 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 26.05.21 19:28, Tom Lane wrote:\n>> Why? If it actually works that way right now, I'd maintain\n>> strenously that it's broken. The latter should be referring\n>> to a procedure with two IN arguments. Even if the SQL spec\n>> allows fuzziness about that, we cannot afford to, because we\n>> have a more generous view of overloading than the spec does.\n\n> There is no fuzziness in the spec about this. See subclause <specific \n> routine designator>. It just talks about arguments, nothing about input \n> or output arguments. I don't find any ambiguity there. I don't see why \n> we want to reinvent this here.\n\nI agree that the spec isn't ambiguous: it says that you should be able\nto uniquely identify a routine from the list of only its argument types,\nwithout distinguishing whether those arguments are IN or OUT or INOUT,\n*and* without distinguishing whether the routine is a procedure or\nfunction.\n\nHowever, that doesn't work for Postgres functions, nor for Postgres\nroutines (since those must include functions). I do not think that we\nshould confuse our users and effectively break ALTER/DROP ROUTINE in\norder to make it sort-of work for procedures. The are-we-exactly-\ncompatible-with-the-spec ship sailed a couple of decades ago.\n\n> If I have two procedures\n> p1(IN int, IN int, OUT int, OUT int)\n> p1(OUT int, OUT int)\n> then a DROP, or ALTER, or GRANT, etc. on p1(int, int) should operate on \n> the second one in a spec-compliant implementation, but you propose to \n> have it operate on the first one. That kind of discrepancy would be \n> really bad to have.\n\nWe already have that situation for functions. 
I think having procedures\nwork differently from functions is much worse than your complaint here;\nand I do not see why being spec-compliant for one case when we are not\nfor the other is a good situation to be in.\n\nWe could, perhaps, insist that ALTER/DROP include OUT parameters when\nit is being applied to a procedure, rather than treating them as being\neffectively noise words as we do now. I'd still want to revert the\ndefinition of proargtypes, which would have implications for which\nprocedure signatures are considered distinct --- but it looks to me\nlike we would still be allowing more combinations than the spec does.\n\n>> (As far as I could tell from looking at the spec yesterday,\n>> they think that you aren't allowed to have two procedures\n>> with the same name/schema and same number of arguments,\n>> regardless of the details of those arguments. Up with that\n>> I will not put.)\n\n> I don't see that.\n\nIt's under CREATE PROCEDURE. 11.60 <SQL-invoked routine> SR 20 says\n\n20) Case:\n\n a) If R is an SQL-invoked procedure, then S shall not include another\n SQL-invoked procedure whose <schema qualified routine name> is\n equivalent to RN and whose number of SQL parameters is PN.\n\nCase b) has different and laxer rules for what you can do with functions,\nbut it still looks like they'd forbid a lot of situations that we allow.\n\nI think that these restrictive overloading rules have a whole lot to do\nwith the fact that they feel that you don't need IN/OUT argument labeling\nto correctly identify a function or procedure. But, as I said, that ship\nsailed for us a long time ago.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 15:55:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
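The two readings in dispute can be sketched with the example pair from this exchange (parameter names and procedure bodies are illustrative, not tied to any particular server version):

```sql
-- Distinct under PG's input-only signature rules; colliding under the
-- spec's all-arguments view:
CREATE PROCEDURE p1(IN a int, IN b int) LANGUAGE sql AS 'SELECT 1';
CREATE PROCEDURE p1(OUT a int, OUT b int) LANGUAGE sql AS 'SELECT 1, 2';

-- Spec reading: the type list covers OUT parameters too, so this names
-- the second procedure.  Traditional PG reading: the list gives input
-- types only, so it names the first.
DROP PROCEDURE p1(int, int);
```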
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> I don't see that.\n\n> It's under CREATE PROCEDURE. 11.60 <SQL-invoked routine> SR 20 says\n\nOh... just noticed something else relevant to this discussion: SR 13\nin the same section saith\n\n 13) If R is an SQL-invoked function, then no <SQL parameter declaration>\n in NPL shall contain a <parameter mode>.\n\nIn other words, the spec does not have OUT or INOUT parameters for\nfunctions. So, again, their notion of what is sufficient to identify\na routine is based on a very different model than what we are using. \n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 16:25:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On Mon, 2021-05-31 at 15:55 -0400, Tom Lane wrote:\n> > If I have two procedures\n> > p1(IN int, IN int, OUT int, OUT int)\n> > p1(OUT int, OUT int)\n> > then a DROP, or ALTER, or GRANT, etc. on p1(int, int) should operate on \n> > the second one in a spec-compliant implementation, but you propose to \n> > have it operate on the first one. That kind of discrepancy would be \n> > really bad to have.\n> \n> We already have that situation for functions. I think having procedures\n> work differently from functions is much worse than your complaint here;\n> and I do not see why being spec-compliant for one case when we are not\n> for the other is a good situation to be in.\n\n+1\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Tue, 01 Jun 2021 03:28:06 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On Monday, May 31, 2021, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n\n> On Mon, 2021-05-31 at 15:55 -0400, Tom Lane wrote:\n> > > If I have two procedures\n> > > p1(IN int, IN int, OUT int, OUT int)\n> > > p1(OUT int, OUT int)\n> > > then a DROP, or ALTER, or GRANT, etc. on p1(int, int) should operate\n> on\n> > > the second one in a spec-compliant implementation, but you propose to\n> > > have it operate on the first one. That kind of discrepancy would be\n> > > really bad to have.\n> >\n> > We already have that situation for functions. I think having procedures\n> > work differently from functions is much worse than your complaint here;\n> > and I do not see why being spec-compliant for one case when we are not\n> > for the other is a good situation to be in.\n>\n> +1\n>\n\nWhen this discussion concludes a review of the compatibility sections of\nthe create/drop “routine” reference pages would be appreciated.\n\nI agree that being consistent with our long-standing function behavior is\nmore important than being standards compliant. FWIW this being DDL lessens\nany non-compliance reservations I may have.\n\nDavid J.\n\nOn Monday, May 31, 2021, Laurenz Albe <laurenz.albe@cybertec.at> wrote:On Mon, 2021-05-31 at 15:55 -0400, Tom Lane wrote:\n> > If I have two procedures\n> > p1(IN int, IN int, OUT int, OUT int)\n> > p1(OUT int, OUT int)\n> > then a DROP, or ALTER, or GRANT, etc. on p1(int, int) should operate on \n> > the second one in a spec-compliant implementation, but you propose to \n> > have it operate on the first one. That kind of discrepancy would be \n> > really bad to have.\n> \n> We already have that situation for functions. 
I think having procedures\n> work differently from functions is much worse than your complaint here;\n> and I do not see why being spec-compliant for one case when we are not\n> for the other is a good situation to be in.\n\n+1\nWhen this discussion concludes a review of the compatibility sections of the create/drop “routine” reference pages would be appreciated.I agree that being consistent with our long-standing function behavior is more important than being standards compliant. FWIW this being DDL lessens any non-compliance reservations I may have.David J.",
"msg_date": "Mon, 31 May 2021 18:48:07 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> When this discussion concludes a review of the compatibility sections of\n> the create/drop “routine” reference pages would be appreciated.\n\nGood idea, whichever answer we settle on. But it's notable that\nthe existing text gives no hint that the rules are different\nfor functions and procedures. That will need work if we leave\nthe code as it stands.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 13:46:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\nOn 5/31/21 4:25 PM, Tom Lane wrote:\n>\n> Oh... just noticed something else relevant to this discussion: SR 13\n> in the same section saith\n>\n> 13) If R is an SQL-invoked function, then no <SQL parameter declaration>\n> in NPL shall contain a <parameter mode>.\n>\n> In other words, the spec does not have OUT or INOUT parameters for\n> functions. So, again, their notion of what is sufficient to identify\n> a routine is based on a very different model than what we are using. \n>\n> \t\t\t\n\n\n\nHistorical note: this might have had its origin in Ada, where it was the\nrule. It's thus amusing that as of the 2012 revision Ada no longer has\nthis rule, and functions as well as procedures can have IN OUT and OUT\nparameters (although there the return value is separate from any OUT\nparameter). Ada probably dropped the rule because it was simply a\nhindrance rather than a help - certainly I remember finding that it\nforced somewhat unnatural expressions back when I was an Ada programmer\n(mid 90s). Maybe the SQL spec needs to catch up :-)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 1 Jun 2021 15:37:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 31.05.21 22:25, Tom Lane wrote:\n> I wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> I don't see that.\n> \n>> It's under CREATE PROCEDURE. 11.60 <SQL-invoked routine> SR 20 says\n> \n> Oh... just noticed something else relevant to this discussion: SR 13\n> in the same section saith\n> \n> 13) If R is an SQL-invoked function, then no <SQL parameter declaration>\n> in NPL shall contain a <parameter mode>.\n> \n> In other words, the spec does not have OUT or INOUT parameters for\n> functions. So, again, their notion of what is sufficient to identify\n> a routine is based on a very different model than what we are using.\n\nYeah, I figured that was known, but maybe it is good to point it out in \nthis thread.\n\nThe OUT and INOUT parameters for functions and how they affect \nsignatures was \"invented here\" for PostgreSQL.\n\nThe OUT and INOUT parameters for procedures is something that exists in \nthe standard and other implementations.\n\nUnfortunately, these two things are not consistent.\n\nSo now when we add OUT parameters for procedures in PostgreSQL, we are \nforced to make a choice: Do we choose consistency with precedent A or \nprecedent B? That's the point we disagree on, and I'm not sure how to \nresolve it.\n\nAnother dimension to this question of what things are consistent with is \nhow you reference versus how you invoke these things.\n\nIf you have a function f1(IN xt, OUT yt), you reference it as f1(xt) and \nyou invoke it as SELECT f1(xv).\n\nIf you have a procedure p1(IN xt, OUT yt), you invoke it as CALL \np1(something, something). 
So in my mind, it would also make sense to \nreference it as p1(something, something).\n\nSo while I understand the argument of\n\n- Function signatures should work consistently with procedure signatures.\n\nI find the arguments of\n\n- Procedure signatures should match the SQL standard, and\n- Signature for invoking should match signature for calling.\n\na more appealing combination.\n\nDoes that summarize the issue correctly?\n\n\n",
"msg_date": "Wed, 2 Jun 2021 00:54:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> So while I understand the argument of\n> - Function signatures should work consistently with procedure signatures.\n> I find the arguments of\n> - Procedure signatures should match the SQL standard, and\n> - Signature for invoking should match signature for calling.\n> a more appealing combination.\n> Does that summarize the issue correctly?\n\nWell, mumble ... I think you've left out a couple of significant\nproblems. The two things that I'm seriously unhappy about are:\n\n1. ALTER/DROP ROUTINE is basically broken. It does not work the\nsame as it did before for functions; as I showed upthread, there\nare cases that worked in prior versions and fail in HEAD. Moreover\nit's impossible to make it work in any remotely consistent fashion,\nbecause there are two incompatible standards for it to follow.\n\n2. I really do not like considering OUT arguments as part of a\nprocedure's unique signature, because that means that you can\nhave both of\n\tcreate procedure p1(IN x int, IN y int, OUT z int) ...\n\tcreate procedure p1(IN x int, IN y int, OUT z text) ...\nThe key problem with this is that it breaks the advice that\n\"you can just write NULL for the output argument(s)\". Sometimes\nyou'll have to write something else to select the procedure you\nwanted. That's not per the documentation, and it's also going\nto be a thorn in the side of client software that would like\nto use \"?\" or some other type-free syntax for OUT parameters.\n\nGiven the fact that the spec won't allow you to have two procedures\nwith the same number of parameters (never mind their types), there's\nno argument that this scenario needs to be allowed per spec. So\nI think we would be very well advised to prevent it. This is why\nI'm so hot about reverting the definition of proargtypes.\n\nIt's possible that we could revert proargtypes and still accommodate\nthe spec's definition for ALTER/DROP ROUTINE/PROCEDURE. 
I'm imagining\nsome rules along the line of:\n\n1. If arg list contains any parameter modes, then it must be PG\nsyntax, so interpret it according to our traditional rules.\n\n2. Otherwise, try to match the given arg types against *both*\nproargtypes and proallargtypes. If we get multiple matches,\ncomplain that the command is ambiguous. (In the case of DROP\nPROCEDURE, it's probably OK to consider only proallargtypes.)\n\nThis is just handwaving at this point, so it might need some\nrefinement, but perhaps it could lead to an acceptable compromise.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 19:24:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
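Point 2 above can be sketched concretely (procedure bodies elided; the exact error behavior shown in comments is hypothetical, following the argument in the message):

```sql
CREATE PROCEDURE p1(IN x int, IN y int, OUT z int) ...
CREATE PROCEDURE p1(IN x int, IN y int, OUT z text) ...

-- If OUT types are part of the unique signature, a bare NULL no
-- longer identifies one target:
CALL p1(1, 2, NULL);        -- ambiguous between the two procedures
CALL p1(1, 2, NULL::text);  -- the OUT argument must be explicitly typed
```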
{
"msg_contents": "I wrote:\n> It's possible that we could revert proargtypes and still accommodate\n> the spec's definition for ALTER/DROP ROUTINE/PROCEDURE. I'm imagining\n> some rules along the line of:\n> 1. If arg list contains any parameter modes, then it must be PG\n> syntax, so interpret it according to our traditional rules.\n> 2. Otherwise, try to match the given arg types against *both*\n> proargtypes and proallargtypes. If we get multiple matches,\n> complain that the command is ambiguous. (In the case of DROP\n> PROCEDURE, it's probably OK to consider only proallargtypes.)\n\nHmm, actually we could make step 2 a shade tighter: if a candidate\nroutine is a function, match against proargtypes. If it's a procedure,\nmatch against coalesce(proallargtypes, proargtypes). If we find\nmultiple matches, raise ambiguity error.\n\nThe cases where you get the error could be resolved by either\nusing traditional PG syntax, or (in most cases) by saying\nFUNCTION or PROCEDURE instead of ROUTINE.\n\nAn interesting point here is that if you did, say,\n create procedure p1(IN x int, IN y float8, OUT z int)\n create procedure p1(IN x int, OUT y float8, IN z int)\nthese would be allowed by my preferred catalog design (since\nproargtypes would be different), but their proallargtypes are\nthe same so you could not drop one using SQL-spec syntax.\nYou'd be forced into using traditional PG syntax. Since the\nspec would disallow the case anyway, I don't see an argument\nthat this is a problem for spec compliance.\n\nI'm not very sure offhand how thoroughly this approach\ncovers the expectations of the spec. There may be combinations\nof procedure/function signatures that the spec thinks should\nbe allowed but would be ambiguous according to these rules for\nDROP ROUTINE. But I believe that any such cases would be\npretty corner-ish, and we could get away with saying \"too bad\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 20:04:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
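A hypothetical catalog query showing what the tightened step 2 amounts to (OID 23 is int4; getting more than one row back would mean the designator p1(int, int) is ambiguous):

```sql
SELECT p.oid, p.prokind
FROM pg_proc p
WHERE p.proname = 'p1'
  AND CASE WHEN p.prokind = 'p'       -- procedure: all-arguments list
           THEN coalesce(p.proallargtypes, p.proargtypes::oid[])
           ELSE p.proargtypes::oid[]  -- function: input types only
      END = '{23,23}'::oid[];
```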
{
"msg_contents": "I wrote:\n> Hmm, actually we could make step 2 a shade tighter: if a candidate\n> routine is a function, match against proargtypes. If it's a procedure,\n> match against coalesce(proallargtypes, proargtypes). If we find\n> multiple matches, raise ambiguity error.\n\nWhere do we stand on this topic?\n\nI'm willing to have a go at implementing things that way, but\ntime's a-wasting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 14:29:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\nOn 6/3/21 2:29 PM, Tom Lane wrote:\n> I wrote:\n>> Hmm, actually we could make step 2 a shade tighter: if a candidate\n>> routine is a function, match against proargtypes. If it's a procedure,\n>> match against coalesce(proallargtypes, proargtypes). If we find\n>> multiple matches, raise ambiguity error.\n> Where do we stand on this topic?\n>\n> I'm willing to have a go at implementing things that way, but\n> time's a-wasting.\n>\n> \t\t\t\n\n\n\nSo AIUI your suggestion is that ALTER/DROP ROUTINE will look for an\nambiguity. If it doesn't find one it proceeds, otherwise it complains in\nwhich case the user will have to fall back to ALTER/DROP\nFUNCTION/PROCEDURE. Is that right? It seems a reasonable approach, and I\nwouldn't expect to find too many ambiguous cases in practice.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:46:45 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> So AIUI your suggestion is that ALTER/DROP ROUTINE will look for an\n> ambiguity. If it doesn't find one it proceeds, otherwise it complains in\n> which case the user will have to fall back to ALTER/DROP\n> FUNCTION/PROCEDURE. Is that right? It seems a reasonable approach, and I\n> wouldn't expect to find too many ambiguous cases in practice.\n\nYeah, I think that practical problems would be pretty rare. My impression\nis that users tend not to use function/procedure name overloading too much\nin the first place, and none of this affects you at all till you do.\n\nOnce you do, you'll possibly notice that PG's rules for which combinations\nof signatures are allowed are different from the spec's. I believe that\nwe're largely more generous than the spec, but there are a few cases where\nthis proposal isn't. An example is that (AFAICT) the spec allows having\nboth\n\tcreate procedure divide(x int, y int, OUT q int) ...\n\tcreate procedure divide(x int, y int, OUT q int, OUT r int) ...\nwhich I want to reject because they have the same input parameters.\nThis is perhaps annoying. But seeing that the spec won't allow you to\nalso have divide() procedures for other datatypes, I'm having a hard\ntime feeling that this is losing on the overloading-flexibility front.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 16:21:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\nOn 6/3/21 4:21 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> So AIUI your suggestion is that ALTER/DROP ROUTINE will look for an\n>> ambiguity. If it doesn't find one it proceeds, otherwise it complains in\n>> which case the user will have to fall back to ALTER/DROP\n>> FUNCTION/PROCEDURE. Is that right? It seems a reasonable approach, and I\n>> wouldn't expect to find too many ambiguous cases in practice.\n> Yeah, I think that practical problems would be pretty rare. My impression\n> is that users tend not to use function/procedure name overloading too much\n> in the first place, and none of this affects you at all till you do.\n>\n> Once you do, you'll possibly notice that PG's rules for which combinations\n> of signatures are allowed are different from the spec's. I believe that\n> we're largely more generous than the spec, but there are a few cases where\n> this proposal isn't. An example is that (AFAICT) the spec allows having\n> both\n> \tcreate procedure divide(x int, y int, OUT q int) ...\n> \tcreate procedure divide(x int, y int, OUT q int, OUT r int) ...\n> which I want to reject because they have the same input parameters.\n> This is perhaps annoying. But seeing that the spec won't allow you to\n> also have divide() procedures for other datatypes, I'm having a hard\n> time feeling that this is losing on the overloading-flexibility front.\n>\n> \t\t\t\n\n\n\nNot sure I follow the \"other datatypes\" bit. Are you saying the spec\nwon't let you have this?:\n\n create procedure divide(x int, y int, OUT q int);\n create procedure divide(x int, y int, OUT q float);\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:39:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Not sure I follow the \"other datatypes\" bit. Are you saying the spec\n> won't let you have this?:\n\n> create procedure divide(x int, y int, OUT q int);\n> create procedure divide(x int, y int, OUT q float);\n\nIn fact it won't, because the spec's rule is simply \"you can't have\ntwo procedures with the same name and same number of parameters\"\n(where they count OUT parameters, I believe). However the case\nI was considering was wanting to have\n\n\tcreate procedure divide(x int, y int, OUT q int) ...\n\tcreate procedure divide(x numeric, y numeric, OUT q numeric) ...\n\nwhich likewise falls foul of the spec's restriction, but which\nIMO must be allowed in Postgres.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 16:50:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 02.06.21 02:04, Tom Lane wrote:\n> I wrote:\n>> It's possible that we could revert proargtypes and still accommodate\n>> the spec's definition for ALTER/DROP ROUTINE/PROCEDURE. I'm imagining\n>> some rules along the line of:\n>> 1. If arg list contains any parameter modes, then it must be PG\n>> syntax, so interpret it according to our traditional rules.\n>> 2. Otherwise, try to match the given arg types against *both*\n>> proargtypes and proallargtypes. If we get multiple matches,\n>> complain that the command is ambiguous. (In the case of DROP\n>> PROCEDURE, it's probably OK to consider only proallargtypes.)\n> \n> Hmm, actually we could make step 2 a shade tighter: if a candidate\n> routine is a function, match against proargtypes. If it's a procedure,\n> match against coalesce(proallargtypes, proargtypes). If we find\n> multiple matches, raise ambiguity error.\n> \n> The cases where you get the error could be resolved by either\n> using traditional PG syntax, or (in most cases) by saying\n> FUNCTION or PROCEDURE instead of ROUTINE.\n\nI'm ok with this proposal.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 23:13:37 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "\nOn 6/3/21 4:50 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Not sure I follow the \"other datatypes\" bit. Are you saying the spec\n>> won't let you have this?:\n>> create procedure divide(x int, y int, OUT q int);\n>> create procedure divide(x int, y int, OUT q float);\n> In fact it won't, because the spec's rule is simply \"you can't have\n> two procedures with the same name and same number of parameters\"\n> (where they count OUT parameters, I believe). \n\n\nOh. That's a truly awful rule.\n\n\n\n> However the case\n> I was considering was wanting to have\n>\n> \tcreate procedure divide(x int, y int, OUT q int) ...\n> \tcreate procedure divide(x numeric, y numeric, OUT q numeric) ...\n>\n> which likewise falls foul of the spec's restriction, but which\n> IMO must be allowed in Postgres.\n>\n\n\nRight, we should certainly allow that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 17:22:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.06.21 02:04, Tom Lane wrote:\n>> Hmm, actually we could make step 2 a shade tighter: if a candidate\n>> routine is a function, match against proargtypes. If it's a procedure,\n>> match against coalesce(proallargtypes, proargtypes). If we find\n>> multiple matches, raise ambiguity error.\n\n> I'm ok with this proposal.\n\nCool. Do you want to try to implement it, or shall I?\n\nA question that maybe we should refer to the RMT is whether it's\ntoo late for this sort of redesign for v14. I dislike reverting\nthe OUT-procedure feature altogether in v14, but perhaps that's\nthe sanest way to proceed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 17:29:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 03.06.21 22:21, Tom Lane wrote:\n> Once you do, you'll possibly notice that PG's rules for which combinations\n> of signatures are allowed are different from the spec's. I believe that\n> we're largely more generous than the spec, but there are a few cases where\n> this proposal isn't. An example is that (AFAICT) the spec allows having\n> both\n> \tcreate procedure divide(x int, y int, OUT q int) ...\n> \tcreate procedure divide(x int, y int, OUT q int, OUT r int) ...\n> which I want to reject because they have the same input parameters.\n> This is perhaps annoying. But seeing that the spec won't allow you to\n> also have divide() procedures for other datatypes, I'm having a hard\n> time feeling that this is losing on the overloading-flexibility front.\n\nI'm okay with disallowing this. In my experience, overloading of \nprocedures is done even more rarely than of functions, so this probably \nwon't affect anything in practice.\n\n(I'm by no means suggesting this, but I could imagine a catalog \nrepresentation that allows this but still checks that you can't have \nmultiple candidates that differ only by the type of an OUT parameters. \nSay with some kind of bitmap or boolean array that indicates where the \nOUT parameters are. Then you can only have one candidate with a given \nnumber of arguments, but the above could be allowed. Again, I'm not \nsuggesting this, but it's a possibility in theory.)\n\n\n",
"msg_date": "Thu, 3 Jun 2021 23:41:38 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 03.06.21 22:21, Tom Lane wrote:\n>> An example is that (AFAICT) the spec allows having both\n>> \tcreate procedure divide(x int, y int, OUT q int) ...\n>> \tcreate procedure divide(x int, y int, OUT q int, OUT r int) ...\n>> which I want to reject because they have the same input parameters.\n\n> (I'm by no means suggesting this, but I could imagine a catalog \n> representation that allows this but still checks that you can't have \n> multiple candidates that differ only by the type of an OUT parameters. \n> Say with some kind of bitmap or boolean array that indicates where the \n> OUT parameters are. Then you can only have one candidate with a given \n> number of arguments, but the above could be allowed. Again, I'm not \n> suggesting this, but it's a possibility in theory.)\n\nWe could certainly do something like that in a green field. But one\nof the reasons I'm unhappy about the current design is that I'm convinced\nthat altering the definition of pg_proc.proargtypes will break client-side\ncode that's looking at the catalogs. I don't think we get to monkey with\nsuch fundamental bits of the catalog data without a really good reason.\nAllowing different OUT parameters for the same IN parameters doesn't seem\nto me to qualify, given that there are other reasons why that's dubious.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 17:50:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 03.06.21 23:29, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 02.06.21 02:04, Tom Lane wrote:\n>>> Hmm, actually we could make step 2 a shade tighter: if a candidate\n>>> routine is a function, match against proargtypes. If it's a procedure,\n>>> match against coalesce(proallargtypes, proargtypes). If we find\n>>> multiple matches, raise ambiguity error.\n> \n>> I'm ok with this proposal.\n> \n> Cool. Do you want to try to implement it, or shall I?\n> \n> A question that maybe we should refer to the RMT is whether it's\n> too late for this sort of redesign for v14. I dislike reverting\n> the OUT-procedure feature altogether in v14, but perhaps that's\n> the sanest way to proceed.\n\nI'll take a look at this. I'm not clear on the beta schedule, but the \nnext beta is probably still a few weeks away.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 21:35:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 02.06.21 02:04, Tom Lane wrote:\n>>> It's possible that we could revert proargtypes and still accommodate\n>>> the spec's definition for ALTER/DROP ROUTINE/PROCEDURE. I'm imagining\n>>> some rules along the line of:\n>>> 1. If arg list contains any parameter modes, then it must be PG\n>>> syntax, so interpret it according to our traditional rules.\n>>> 2. Otherwise, try to match the given arg types against *both*\n>>> proargtypes and proallargtypes. If we get multiple matches,\n>>> complain that the command is ambiguous. (In the case of DROP\n>>> PROCEDURE, it's probably OK to consider only proallargtypes.)\n\n>> Hmm, actually we could make step 2 a shade tighter: if a candidate\n>> routine is a function, match against proargtypes. If it's a procedure,\n>> match against coalesce(proallargtypes, proargtypes). If we find\n>> multiple matches, raise ambiguity error.\n\n> I'm ok with this proposal.\n\nI spent some time playing with this, and ran into a problem.\nGiven the example we discussed upthread:\n\nd1=# create procedure p1(int, int) language sql as 'select 1';\nCREATE PROCEDURE\nd1=# create procedure p1(out int, out int) language sql as 'select 1,2';\nCREATE PROCEDURE\n\nyou can uniquely refer to the first p1 by writing (IN int, IN int),\nand you can uniquely refer to the second p1 by writing an empty parameter\nlist or by writing (OUT int, OUT int). If you write just (int, int),\nyou get an ambiguity error as discussed.\n\nThe problem is that we have a lot of existing code that expects\np1(int, int) to work for the first p1. Notably, this scenario breaks\n\"pg_dump --clean\", which emits commands like \n\nDROP PROCEDURE public.p1(integer, integer);\nDROP PROCEDURE public.p1(OUT integer, OUT integer);\n\nIt would likely not be very hard to fix pg_dump to include explicit\nIN markers. 
I don't think this results in a compatibility problem\nfor existing dumps, since they won't be taken from databases in\nwhich there are procedures with OUT arguments.\n\nI'm concerned however about what other client code might get side-swiped.\nWe (or users) would not be likely to hit the ambiguity right away,\nso that sort of issue could go unnoticed for a long time.\n\nSo I'm unsure right now whether this is going to be an acceptable\nchange. I feel like it's still a better situation than what we\nhave in HEAD, but it's not as cost-free as I'd hoped.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Jun 2021 15:36:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "I wrote:\n> It would likely not be very hard to fix pg_dump to include explicit\n> IN markers. I don't think this results in a compatibility problem\n> for existing dumps, since they won't be taken from databases in\n> which there are procedures with OUT arguments.\n\nActually, all we have to do to fix pg_dump is to tweak ruleutils.c\n(although this has some effects on existing regression test outputs,\nof course). So maybe it's not as bad as all that.\n\nHere's a draft-quality patch to handle ALTER/DROP this way. I think\nthe code may be finished, but I've not looked at the docs at all.\n\n0001 is the same patch I posted earlier, 0002 is a delta to enable\nhandling ALTER/DROP per spec.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 04 Jun 2021 17:07:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 04.06.21 23:07, Tom Lane wrote:\n> I wrote:\n>> It would likely not be very hard to fix pg_dump to include explicit\n>> IN markers. I don't think this results in a compatibility problem\n>> for existing dumps, since they won't be taken from databases in\n>> which there are procedures with OUT arguments.\n> \n> Actually, all we have to do to fix pg_dump is to tweak ruleutils.c\n> (although this has some effects on existing regression test outputs,\n> of course). So maybe it's not as bad as all that.\n> \n> Here's a draft-quality patch to handle ALTER/DROP this way. I think\n> the code may be finished, but I've not looked at the docs at all.\n> \n> 0001 is the same patch I posted earlier, 0002 is a delta to enable\n> handling ALTER/DROP per spec.\n\nI checked these patches. They appear to match what was talked about. I \ndidn't find anything surprising. I couldn't apply the 0002 after \napplying 0001 to today's master, so I wasn't able to do more exploratory \ntesting. What are these patches based on? Are there are any more open \nissues to focus on?\n\nOne thing I was wondering is whether we should force CALL arguments in \ndirect SQL to be null rather than allowing arbitrary expressions. Since \nthere is more elaborate code now to process the CALL arguments, maybe it \nwould be easier than before to integrate that.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 21:54:33 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 04.06.21 23:07, Tom Lane wrote:\n>> 0001 is the same patch I posted earlier, 0002 is a delta to enable\n>> handling ALTER/DROP per spec.\n\n> I checked these patches. They appear to match what was talked about. I \n> didn't find anything surprising. I couldn't apply the 0002 after \n> applying 0001 to today's master, so I wasn't able to do more exploratory \n> testing. What are these patches based on? Are there are any more open \n> issues to focus on?\n\nHmm, these are atop HEAD from a week or so back. The cfbot seems to\nthink they still apply. In any case, I was about to spend some effort\non the docs, so I'll post an updated version soon (hopefully today).\n\n> One thing I was wondering is whether we should force CALL arguments in \n> direct SQL to be null rather than allowing arbitrary expressions. Since \n> there is more elaborate code now to process the CALL arguments, maybe it \n> would be easier than before to integrate that.\n\nYeah. We could possibly do that, but at first glance it seems like it\nwould be adding code for little purpose except nanny-ism.\n\nOne angle that maybe needs discussion is what about CALL in SQL-language\nfunctions. I see that's disallowed right now. If we're willing to keep\nit that way until somebody implements local variables a la SQL/PSM,\nthen we could transition smoothly to having the same definition as in\nplpgsql, where you MUST write a variable. If we wanted to open it up\nsooner, we'd have to plan on ending with a definition like \"write either\na variable, or NULL to discard the value\", so that enforcing\nmust-be-NULL in the interim would make sense to prevent future\nsurprises. But IMO that would be best done as a SQL-language-function\nspecific restriction.\n\nI suppose if you imagine that we might someday have variables in\ntop-level SQL, then the same argument would apply there. 
But we already\nguaranteed ourselves some conversion pain for that scenario with respect\nto INOUT parameters, so I doubt that locking down OUT parameters will\nhelp much.\n\nMy inclination is to not bother adding the restriction, but it's\nonly a mild preference.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Jun 2021 16:34:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "I wrote:\n> Hmm, these are atop HEAD from a week or so back. The cfbot seems to\n> think they still apply. In any case, I was about to spend some effort\n> on the docs, so I'll post an updated version soon (hopefully today).\n\nHere is said update (rolled up into one patch this time; maybe that will\navoid the apply problems you had).\n\nI noticed that there is one other loose end in the patch: should\nLookupFuncName() really be passing OBJECT_ROUTINE to\nLookupFuncNameInternal()? This matches its old behavior, in which\nno particular routine type restriction was applied; but I think that\nthe callers are nearly all expecting that only plain functions will\nbe returned. That's more of a possible pre-existing bug than it\nis the fault of the patch, but nonetheless this might be a good\ntime to resolve it.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 07 Jun 2021 19:10:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "On 08.06.21 01:10, Tom Lane wrote:\n> I wrote:\n>> Hmm, these are atop HEAD from a week or so back. The cfbot seems to\n>> think they still apply. In any case, I was about to spend some effort\n>> on the docs, so I'll post an updated version soon (hopefully today).\n> \n> Here is said update (rolled up into one patch this time; maybe that will\n> avoid the apply problems you had).\n\nThis patch looks good to me.\n\nA minor comment: You changed the docs in some places like this:\n\n- </itemizedlist></para>\n+ </itemizedlist>\n+ </para>\n\nThe original layout is required to avoid spurious whitespace in the \noutput (mainly affecting man pages).\n\n> I noticed that there is one other loose end in the patch: should\n> LookupFuncName() really be passing OBJECT_ROUTINE to\n> LookupFuncNameInternal()? This matches its old behavior, in which\n> no particular routine type restriction was applied; but I think that\n> the callers are nearly all expecting that only plain functions will\n> be returned. That's more of a possible pre-existing bug than it\n> is the fault of the patch, but nonetheless this might be a good\n> time to resolve it.\n\nIt appears that all uses of LookupFuncName() are lookups of internal \nsupport functions (with one exception in pltcl), so using \nOBJECT_FUNCTION would be okay.\n\nIt might be good to keep the argument order of LookupFuncNameInternal() \nconsistent with LookupFuncWithArgs() with respect to the new ObjectType \nargument.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 10:42:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 08.06.21 01:10, Tom Lane wrote:\n>> Here is said update (rolled up into one patch this time; maybe that will\n>> avoid the apply problems you had).\n\n> This patch looks good to me.\n\nThanks for reviewing!\n\n> A minor comment: You changed the docs in some places like this:\n> - </itemizedlist></para>\n> + </itemizedlist>\n> + </para>\n> The original layout is required to avoid spurious whitespace in the \n> output (mainly affecting man pages).\n\nUgh, that seems like a toolchain bug. We're certainly not consistent\nabout formatting things that way. But I'll refrain from changing these.\n\n>> I noticed that there is one other loose end in the patch: should\n>> LookupFuncName() really be passing OBJECT_ROUTINE to\n>> LookupFuncNameInternal()?\n\n> It appears that all uses of LookupFuncName() are lookups of internal \n> support functions (with one exception in pltcl), so using \n> OBJECT_FUNCTION would be okay.\n\nOK, I'll take a closer look at that.\n\n> It might be good to keep the argument order of LookupFuncNameInternal() \n> consistent with LookupFuncWithArgs() with respect to the new ObjectType \n> argument.\n\nGood point, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Jun 2021 09:57:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: CALL versus procedures with output-only arguments"
}
] |
[
{
"msg_contents": "\nWhile solving a problem with the Beta RPMs, I noticed that they export\nour perl test modules as capabilities like this:\n\n [andrew@f34 x86_64]$ rpm -q --provides -p\n postgresql14-devel-14-beta1_PGDG.fc34.x86_64.rpm | grep ^perl\n perl(PostgresNode)\n perl(PostgresVersion)\n perl(RecursiveCopy)\n perl(SimpleTee)\n perl(TestLib)\n\n\nI don't think we should be putting this stuff in a global namespace like\nthat. We should invent a namespace that's not likely to conflict with\nother people, like, say, 'PostgreSQL::Test' to put these modules. That\nwould require moving some code around and adjusting a bunch of scripts,\nbut it would not be difficult. Maybe something to be done post-14?\nMeanwhile I would suggest that RPM maintainers exclude both requires and\nprovides for these five names.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 20 May 2021 15:47:43 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Postgres perl module namespace"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> While solving a problem with the Beta RPMs, I noticed that they export\n> our perl test modules as capabilities like this:\n\n> [andrew@f34 x86_64]$ rpm -q --provides -p\n> postgresql14-devel-14-beta1_PGDG.fc34.x86_64.rpm | grep ^perl\n> perl(PostgresNode)\n> perl(PostgresVersion)\n> perl(RecursiveCopy)\n> perl(SimpleTee)\n> perl(TestLib)\n\n> I don't think we should be putting this stuff in a global namespace like\n> that. We should invent a namespace that's not likely to conflict with\n> other people, like, say, 'PostgreSQL::Test' to put these modules. That\n> would require moving some code around and adjusting a bunch of scripts,\n> but it would not be difficult. Maybe something to be done post-14?\n\nAgreed that we ought to namespace these better. It looks to me like most\nof these are several versions old. Given the lack of field complaints,\nI'm content to wait for v15 for a fix, rather than treating it as an open\nitem for v14.\n\n> Meanwhile I would suggest that RPM maintainers exclude both requires and\n> provides for these five names.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 20 May 2021 17:18:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Hi,\n\nOn Thu, 2021-05-20 at 15:47 -0400, Andrew Dunstan wrote:\n> Meanwhile I would suggest that RPM maintainers exclude both requires\n> and provides for these five names.\n\nDone, thanks. Will appear in next beta build.\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Fri, 21 May 2021 14:55:23 +0100",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 5/20/21 5:18 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> While solving a problem with the Beta RPMs, I noticed that they export\n>> our perl test modules as capabilities like this:\n>> [andrew@f34 x86_64]$ rpm -q --provides -p\n>> postgresql14-devel-14-beta1_PGDG.fc34.x86_64.rpm | grep ^perl\n>> perl(PostgresNode)\n>> perl(PostgresVersion)\n>> perl(RecursiveCopy)\n>> perl(SimpleTee)\n>> perl(TestLib)\n>> I don't think we should be putting this stuff in a global namespace like\n>> that. We should invent a namespace that's not likely to conflict with\n>> other people, like, say, 'PostgreSQL::Test' to put these modules. That\n>> would require moving some code around and adjusting a bunch of scripts,\n>> but it would not be difficult. Maybe something to be done post-14?\n> Agreed that we ought to namespace these better. It looks to me like most\n> of these are several versions old. Given the lack of field complaints,\n> I'm content to wait for v15 for a fix, rather than treating it as an open\n> item for v14.\n\n\n\nSo now the dev tree is open for v15 it's time to get back to this item.\nI will undertake to do the work, once we get the bike-shedding part done.\n\n\nI'll kick that off by suggesting we move all of these to the namespace\nPgTest, and rename TestLib to Utils, so instead of\n\n\n use TestLib;\n use PostgresNode;\n\n\nyou would say\n\n\n use PgTest::Utils;\n use PgTest::PostgresNode;\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:10:56 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I will undertake to do the work, once we get the bike-shedding part done.\n\n> I'll kick that off by suggesting we move all of these to the namespace\n> PgTest, and rename TestLib to Utils, so instead of\n> use TestLib;\n> use PostgresNode;\n> you would say\n> use PgTest::Utils;\n> use PgTest::PostgresNode;\n\nUsing both \"Pg\" and \"Postgres\" seems a bit inconsistent.\nMaybe \"PgTest::PgNode\"?\n\nMore generally, I've never thought that \"Node\" was a great name\nhere; it's a very vague term. While we're renaming, maybe we\ncould change it to \"PgTest::PgCluster\" or the like?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:40:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 2021-Aug-10, Andrew Dunstan wrote:\n\n> I'll kick that off by suggesting we move all of these to the namespace\n> PgTest, and rename TestLib to Utils, so [...] you would say\n> \n> use PgTest::Utils;\n> use PgTest::PostgresNode;\n\nWFM.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:41:35 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/10/21 10:40 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I will undertake to do the work, once we get the bike-shedding part done.\n>> I'll kick that off by suggesting we move all of these to the namespace\n>> PgTest, and rename TestLib to Utils, so instead of\n>> use TestLib;\n>> use PostgresNode;\n>> you would say\n>> use PgTest::Utils;\n>> use PgTest::PostgresNode;\n> Using both \"Pg\" and \"Postgres\" seems a bit inconsistent.\n> Maybe \"PgTest::PgNode\"?\n>\n> More generally, I've never thought that \"Node\" was a great name\n> here; it's a very vague term. While we're renaming, maybe we\n> could change it to \"PgTest::PgCluster\" or the like?\n>\n> \t\t\t\n\n\n\nI can live with that. I guess to be consistent we would also rename\nPostgresVersion to PgVersion\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 11:02:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 11:02:13AM -0400, Andrew Dunstan wrote:\n> I can live with that. I guess to be consistent we would also rename\n> PostgresVersion to PgVersion\n\nAre RewindTest.pm and SSLServer.pm things you are considering for a\nrenaming as well? It may be more consistent to put everything in the\nsame namespace if this move is done.\n--\nMichael",
"msg_date": "Wed, 11 Aug 2021 10:25:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\n\n> On Aug 10, 2021, at 7:10 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> use PgTest::Utils;\n> use PgTest::PostgresNode;\n\nChecking CPAN, it seems there are three older modules with names starting with \"Postgres\":\n\n\tPostgres\n\tPostgres::Handler\n\tPostgres::Handler::HTML\n\nIt would be confusing to combine official PostgreSQL modules with those third party ones, so perhaps we can claim the PostgreSQL namespace for official project modules. How about:\n\n\tPostgreSQL::Test::Cluster\n\tPostgreSQL::Test::Lib\n\tPostgreSQL::Test::Utils\n\nand then if we ever wanted to have official packages for non-test purposes, we could start another namespace under PostgreSQL. \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 18:37:12 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 9:37 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> > On Aug 10, 2021, at 7:10 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >\n> > use PgTest::Utils;\n> > use PgTest::PostgresNode;\n>\n> Checking CPAN, it seems there are three older modules with names starting with \"Postgres\":\n>\n> Postgres\n> Postgres::Handler\n> Postgres::Handler::HTML\n>\n> It would be confusing to combine official PostgreSQL modules with those third party ones, so perhaps we can claim the PostgreSQL namespace for official project modules. How about:\n>\n> PostgreSQL::Test::Cluster\n> PostgreSQL::Test::Lib\n> PostgreSQL::Test::Utils\n>\n> and then if we ever wanted to have official packages for non-test purposes, we could start another namespace under PostgreSQL.\n\nMaybe it's me but I would find that more confusing. Also, to actually\nclaim PostgreSQL namespace, we would have to actually publish them on\nCPAN right?\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:09:29 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/10/21 9:37 PM, Mark Dilger wrote:\n>\n>> On Aug 10, 2021, at 7:10 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> use PgTest::Utils;\n>> use PgTest::PostgresNode;\n> Checking CPAN, it seems there are three older modules with names starting with \"Postgres\":\n>\n> \tPostgres\n> \tPostgres::Handler\n> \tPostgres::Handler::HTML\n>\n> It would be confusing to combine official PostgreSQL modules with those third party ones, so perhaps we can claim the PostgreSQL namespace for official project modules. How about:\n>\n> \tPostgreSQL::Test::Cluster\n> \tPostgreSQL::Test::Lib\n> \tPostgreSQL::Test::Utils\n>\n> and then if we ever wanted to have official packages for non-test purposes, we could start another namespace under PostgreSQL. \n>\n\nIf we were publishing them on CPAN that would be reasonable. But we're\nnot, nor are we likely to, I believe. I'd rather not have to add two\nlevel of directory hierarchy for this, and this also seems a bit\nlong-winded:\n\n\n my $node = PostgreSQL::Test::Cluster->new('nodename');\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 22:11:22 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\n\n> On Aug 10, 2021, at 7:11 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> If we were publishing them on CPAN that would be reasonable. But we're\n> not, nor are we likely to, I believe.\n\nI'm now trying to understand the purpose of the renaming. I thought the problem was that RPM packagers wanted something that was unlikely to collide. Publishing on CPAN would be the way to claim the namespace.\n\nWhat's the purpose of this idea then? If there isn't one, I'd rather just keep the current names.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 19:13:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/10/21 9:25 PM, Michael Paquier wrote:\n> On Tue, Aug 10, 2021 at 11:02:13AM -0400, Andrew Dunstan wrote:\n>> I can live with that. I guess to be consistent we would also rename\n>> PostgresVersion to PgVersion\n> Are RewindTest.pm and SSLServer.pm things you are considering for a\n> renaming as well? It may be more consistent to put everything in the\n> same namespace if this move is done.\n\n\nIt could be very easily done. But I doubt these will make their way into\npackages, which is how this discussion started. There's good reason to\npackage src/test/perl, so you can use those modules to write TAP tests\nfor extensions. The same reasoning doesn't apply to SSLServer.pm and\nRewindTest.pm.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 22:22:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/10/21 10:13 PM, Mark Dilger wrote:\n>\n>> On Aug 10, 2021, at 7:11 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> If we were publishing them on CPAN that would be reasonable. But we're\n>> not, nor are we likely to, I believe.\n> I'm now trying to understand the purpose of the renaming. I thought the problem was that RPM packagers wanted something that was unlikely to collide. Publishing on CPAN would be the way to claim the namespace.\n>\n> What's the purpose of this idea then? If there isn't one, I'd rather just keep the current names.\n\n\n\nYes we want them to be in a namespace where they are unlikely to collide\nwith anything else. No, you don't have to publish on CPAN to achieve that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 22:26:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/10/21 10:26 PM, Andrew Dunstan wrote:\n> On 8/10/21 10:13 PM, Mark Dilger wrote:\n>>> On Aug 10, 2021, at 7:11 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>\n>>> If we were publishing them on CPAN that would be reasonable. But we're\n>>> not, nor are we likely to, I believe.\n>> I'm now trying to understand the purpose of the renaming. I thought the problem was that RPM packagers wanted something that was unlikely to collide. Publishing on CPAN would be the way to claim the namespace.\n>>\n>> What's the purpose of this idea then? If there isn't one, I'd rather just keep the current names.\n>\n>\n> Yes we want them to be in a namespace where they are unlikely to collide\n> with anything else. No, you don't have to publish on CPAN to achieve that.\n>\n\nIncidentally, not publishing on CPAN was a major reason given a few\nyears ago for using fairly lax perlcritic policies. If we publish these\non CPAN now some people at least might want to revisit that decision.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Aug 2021 22:33:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 2021-Aug-10, Andrew Dunstan wrote:\n\n> If we were publishing them on CPAN that would be reasonable. But we're\n> not, nor are we likely to, I believe. I'd rather not have to add two\n> level of directory hierarchy for this, and this also seems a bit\n> long-winded:\n> \n> my $node = PostgreSQL::Test::Cluster->new('nodename');\n\nI'll recast my vote to make this line be\n\n my $node = PgTest::Cluster->new('nodename');\n\nwhich seems succint enough. I could get behind PgTest::PgCluster too,\nbut having a second Pg there seems unnecessary.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Cada quien es cada cual y baja las escaleras como quiere\" (JMSerrat)\n\n\n",
"msg_date": "Wed, 11 Aug 2021 09:22:45 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I'll recast my vote to make this line be\n> my $node = PgTest::Cluster->new('nodename');\n> which seems succint enough. I could get behind PgTest::PgCluster too,\n> but having a second Pg there seems unnecessary.\n\nEither of those WFM.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 11 Aug 2021 09:30:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 8/11/21 9:30 AM, Tom Lane wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I'll recast my vote to make this line be\n>> my $node = PgTest::Cluster->new('nodename');\n>> which seems succint enough. I could get behind PgTest::PgCluster too,\n>> but having a second Pg there seems unnecessary.\n> Either of those WFM.\n>\n> \t\t\t\n\n\n\nOK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\nremainder don't care.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Aug 2021 15:03:37 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n> remainder don't care.\n\nI'd have gone with something starting with Postgres:: ... but I don't care much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Aug 2021 15:39:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Mon, Aug 23, 2021 at 03:39:15PM -0400, Robert Haas wrote:\n> On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n>> remainder don't care.\n> \n> I'd have gone with something starting with Postgres:: ... but I don't care much.\n\nPgTest seems like a better choice to me, as \"Postgres\" could be used\nfor other purposes than a testing framework, and the argument that\nmultiple paths makes things annoying for hackers is sensible.\n--\nMichael",
"msg_date": "Wed, 25 Aug 2021 14:48:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Wed, Aug 25, 2021 at 1:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Aug 23, 2021 at 03:39:15PM -0400, Robert Haas wrote:\n> > On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n> >> remainder don't care.\n> >\n> > I'd have gone with something starting with Postgres:: ... but I don't care much.\n>\n> PgTest seems like a better choice to me, as \"Postgres\" could be used\n> for other purposes than a testing framework, and the argument that\n> multiple paths makes things annoying for hackers is sensible.\n\nI mean, it's a hierarchical namespace. The idea is you do\nPostgres::Test or Postgres::<whatever> and other people using the\nPostgres database can use other parts of it. But again, not really\nworth arguing about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Aug 2021 10:08:54 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 8/25/21 10:08 AM, Robert Haas wrote:\n> On Wed, Aug 25, 2021 at 1:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Mon, Aug 23, 2021 at 03:39:15PM -0400, Robert Haas wrote:\n>>> On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n>>>> remainder don't care.\n>>> I'd have gone with something starting with Postgres:: ... but I don't care much.\n>> PgTest seems like a better choice to me, as \"Postgres\" could be used\n>> for other purposes than a testing framework, and the argument that\n>> multiple paths makes things annoying for hackers is sensible.\n> I mean, it's a hierarchical namespace. The idea is you do\n> Postgres::Test or Postgres::<whatever> and other people using the\n> Postgres database can use other parts of it. But again, not really\n> worth arguing about.\n>\n\n\nI think I have come around to this POV. Here's a patch. The worst of it\nis changes like this:\n\n- my $node2 = PostgresNode->new('replica');\n+ my $node2 = Postgres::Test::Cluster->new('replica');\n...\n- TestLib::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n+ Postgres::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n\nand I think that's not so bad.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 3 Sep 2021 15:34:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Fri, Sep 03, 2021 at 03:34:24PM -0400, Andrew Dunstan wrote:\n> On 8/25/21 10:08 AM, Robert Haas wrote:\n> > On Wed, Aug 25, 2021 at 1:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >> On Mon, Aug 23, 2021 at 03:39:15PM -0400, Robert Haas wrote:\n> >>> On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>>> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n> >>>> remainder don't care.\n> >>> I'd have gone with something starting with Postgres:: ... but I don't care much.\n> >> PgTest seems like a better choice to me, as \"Postgres\" could be used\n> >> for other purposes than a testing framework, and the argument that\n> >> multiple paths makes things annoying for hackers is sensible.\n> > I mean, it's a hierarchical namespace. The idea is you do\n> > Postgres::Test or Postgres::<whatever> and other people using the\n> > Postgres database can use other parts of it. But again, not really\n> > worth arguing about.\n> \n> I think I have come around to this POV. Here's a patch. The worst of it\n> is changes like this:\n> \n> - my $node2 = PostgresNode->new('replica');\n> + my $node2 = Postgres::Test::Cluster->new('replica');\n> ...\n> - TestLib::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n> + Postgres::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n\nplperl uses PostgreSQL:: as the first component of its Perl module namespace.\nWe shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\nthis change should not use Postgres::.\n\n\n",
"msg_date": "Fri, 3 Sep 2021 23:19:49 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 9/4/21 2:19 AM, Noah Misch wrote:\n> On Fri, Sep 03, 2021 at 03:34:24PM -0400, Andrew Dunstan wrote:\n>> On 8/25/21 10:08 AM, Robert Haas wrote:\n>>> On Wed, Aug 25, 2021 at 1:48 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>>> On Mon, Aug 23, 2021 at 03:39:15PM -0400, Robert Haas wrote:\n>>>>> On Mon, Aug 23, 2021 at 3:03 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>>>> OK, I count 3 in favor of changing to PgTest::Cluster, 1 against,\n>>>>>> remainder don't care.\n>>>>> I'd have gone with something starting with Postgres:: ... but I don't care much.\n>>>> PgTest seems like a better choice to me, as \"Postgres\" could be used\n>>>> for other purposes than a testing framework, and the argument that\n>>>> multiple paths makes things annoying for hackers is sensible.\n>>> I mean, it's a hierarchical namespace. The idea is you do\n>>> Postgres::Test or Postgres::<whatever> and other people using the\n>>> Postgres database can use other parts of it. But again, not really\n>>> worth arguing about.\n>> I think I have come around to this POV. Here's a patch. The worst of it\n>> is changes like this:\n>>\n>> - my $node2 = PostgresNode->new('replica');\n>> + my $node2 = Postgres::Test::Cluster->new('replica');\n>> ...\n>> - TestLib::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n>> + Postgres::Test::Utils::system_or_bail($tar, 'xf', $tblspc_tars[0], '-C', $repTsDir);\n> plperl uses PostgreSQL:: as the first component of its Perl module namespace.\n> We shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\n> this change should not use Postgres::.\n\n\nGood point. Here's the same thing using PostgreSQL::Test\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sat, 4 Sep 2021 09:58:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Sat, Sep 04, 2021 at 09:58:08AM -0400, Andrew Dunstan wrote:\n> On 9/4/21 2:19 AM, Noah Misch wrote:\n>> plperl uses PostgreSQL:: as the first component of its Perl module namespace.\n>> We shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\n>> this change should not use Postgres::. \n>\n> Good point. Here's the same thing using PostgreSQL::Test\n\nA minor point: this introduces PostgreSQL::Test::PostgresVersion.\nCould be this stripped down to PostgreSQL::Test::Version instead?\n--\nMichael",
"msg_date": "Mon, 6 Sep 2021 14:08:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Mon, Sep 06, 2021 at 02:08:45PM +0900, Michael Paquier wrote:\n> A minor point: this introduces PostgreSQL::Test::PostgresVersion.\n> Could be this stripped down to PostgreSQL::Test::Version instead?\n\nThis fails to apply since 5fcb23c, but the conflicts are simple enough\nto solve. Sorry about that :/\n--\nMichael",
"msg_date": "Tue, 7 Sep 2021 11:30:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 9/6/21 1:08 AM, Michael Paquier wrote:\n> On Sat, Sep 04, 2021 at 09:58:08AM -0400, Andrew Dunstan wrote:\n>> On 9/4/21 2:19 AM, Noah Misch wrote:\n>>> plperl uses PostgreSQL:: as the first component of its Perl module namespace.\n>>> We shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\n>>> this change should not use Postgres::. \n>> Good point. Here's the same thing using PostgreSQL::Test\n> A minor point: this introduces PostgreSQL::Test::PostgresVersion.\n> Could be this stripped down to PostgreSQL::Test::Version instead?\n\n\n\nThat name isn't very clear - what is it the version of, PostgreSQL or\nthe test?\n\nThere's nothing very test-specific about this module - it simply\nencapsulates a Postgres version string. So maybe it should just be\nPostgreSQL::Version.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 7 Sep 2021 07:43:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Sep 07, 2021 at 07:43:47AM -0400, Andrew Dunstan wrote:\n> On 9/6/21 1:08 AM, Michael Paquier wrote:\n> > On Sat, Sep 04, 2021 at 09:58:08AM -0400, Andrew Dunstan wrote:\n> >> On 9/4/21 2:19 AM, Noah Misch wrote:\n> >>> plperl uses PostgreSQL:: as the first component of its Perl module namespace.\n> >>> We shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\n> >>> this change should not use Postgres::. \n> >> Good point. Here's the same thing using PostgreSQL::Test\n> > A minor point: this introduces PostgreSQL::Test::PostgresVersion.\n> > Could be this stripped down to PostgreSQL::Test::Version instead?\n> \n> That name isn't very clear - what is it the version of, PostgreSQL or\n> the test?\n\nFair.\n\n> There's nothing very test-specific about this module - it simply\n> encapsulates a Postgres version string. So maybe it should just be\n> PostgreSQL::Version.\n\nCould be fine, but that name could be useful as a CPAN module. These modules\ndon't belong on CPAN, so I'd keep PostgreSQL::Test::PostgresVersion. There's\nonly one reference in the tree, so optimizing that particular name is less\nexciting.\n\n(I wondered about using PGXS:: as the namespace for all these modules, since\nit's short and \"pgxs\" is the closest thing to a name for the PostgreSQL build\nsystem. Overall, I didn't convince myself about it being an improvement.)\n\n\n",
"msg_date": "Tue, 7 Sep 2021 21:00:11 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\n\n> On Sep 7, 2021, at 9:00 PM, Noah Misch <noah@leadboat.com> wrote:\n> \n> I wondered about using PGXS:: as the namespace for all these modules\n\nThat immediately suggests perl modules wrapping C code, which is misleading for these. See `man perlxstut`\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 8 Sep 2021 07:15:24 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 9/7/21 7:43 AM, Andrew Dunstan wrote:\n> On 9/6/21 1:08 AM, Michael Paquier wrote:\n>> On Sat, Sep 04, 2021 at 09:58:08AM -0400, Andrew Dunstan wrote:\n>>> On 9/4/21 2:19 AM, Noah Misch wrote:\n>>>> plperl uses PostgreSQL:: as the first component of its Perl module namespace.\n>>>> We shouldn't use both PostgreSQL:: and Postgres:: in the same source tree, so\n>>>> this change should not use Postgres::. \n>>> Good point. Here's the same thing using PostgreSQL::Test\n>> A minor point: this introduces PostgreSQL::Test::PostgresVersion.\n>> Could be this stripped down to PostgreSQL::Test::Version instead?\n>\n>\n> That name isn't very clear - what is it the version of, PostgreSQL or\n> the test?\n>\n> There's nothing very test-specific about this module - it simply\n> encapsulates a Postgres version string. So maybe it should just be\n> PostgreSQL::Version.\n>\n>\n\n\nDiscussion has gone quiet and the tree is now relatively quiet, so now\nseems like a good time to do this. See attached patches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 19 Oct 2021 14:54:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Op 19-10-2021 om 20:54 schreef Andrew Dunstan:\n> \n> \n> \n> Discussion has gone quiet and the tree is now relatively quiet, so now\n> seems like a good time to do this. See attached patches.\n> \n\n > [0001-move-perl-test-modules-to-PostgreSQL-Test-namespace.patch ]\n > [0002-move-PostgreSQL-Test-PostgresVersion-up-in-the-names.patch]\n\n\nThose patches gave some complains about \nPostgreSQL/Test/PostgresVersion.pm being absent so I added this \ndeletion. I'm not sure that's correct but it enabled the build and \ncheck-world ran without errors.\n\n\nErik Rijkers",
"msg_date": "Tue, 19 Oct 2021 22:16:06 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 10:16:06PM +0200, Erik Rijkers wrote:\n>> [0001-move-perl-test-modules-to-PostgreSQL-Test-namespace.patch ]\n>> [0002-move-PostgreSQL-Test-PostgresVersion-up-in-the-names.patch]\n\nIt seems to me that the hardest part is sorted out with the naming and\npathing of the modules, so better to apply them sooner than later. \n\n> Those patches gave some complains about PostgreSQL/Test/PostgresVersion.pm\n> being absent so I added this deletion. I'm not sure that's correct but it\n> enabled the build and check-world ran without errors.\n\nYour change is incorrect, as we want to install PostgresVersion.pm.\nWhat's needed here is the following:\n{PostgresVersion.pm => PostgreSQL/Version.pm}\n\nAnd so the patch needs to be changed like that:\n- $(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/PostgresVersion.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n+ $(INSTALL_DATA) $(srcdir)/PostgreSQL/Version.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n[...]\n- rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n+ rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n--\nMichael",
"msg_date": "Wed, 20 Oct 2021 12:22:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 10/19/21 11:22 PM, Michael Paquier wrote:\n> On Tue, Oct 19, 2021 at 10:16:06PM +0200, Erik Rijkers wrote:\n>>> [0001-move-perl-test-modules-to-PostgreSQL-Test-namespace.patch ]\n>>> [0002-move-PostgreSQL-Test-PostgresVersion-up-in-the-names.patch]\n> It seems to me that the hardest part is sorted out with the naming and\n> pathing of the modules, so better to apply them sooner than later. \n\n\nYeah, my plan is to apply it today or tomorrow\n\n\n>\n>> Those patches gave some complains about PostgreSQL/Test/PostgresVersion.pm\n>> being absent so I added this deletion. I'm not sure that's correct but it\n>> enabled the build and check-world ran without errors.\n> Your change is incorrect, as we want to install PostgresVersion.pm.\n> What's needed here is the following:\n> {PostgresVersion.pm => PostgreSQL/Version.pm}\n>\n> And so the patch needs to be changed like that:\n> - $(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/PostgresVersion.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n> + $(INSTALL_DATA) $(srcdir)/PostgreSQL/Version.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n> [...]\n> - rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n> + rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n\nright. There are one or two other cosmetic changes too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Oct 2021 08:40:04 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 10/20/21 08:40, Andrew Dunstan wrote:\n> On 10/19/21 11:22 PM, Michael Paquier wrote:\n>> On Tue, Oct 19, 2021 at 10:16:06PM +0200, Erik Rijkers wrote:\n>>>> [0001-move-perl-test-modules-to-PostgreSQL-Test-namespace.patch ]\n>>>> [0002-move-PostgreSQL-Test-PostgresVersion-up-in-the-names.patch]\n>> It seems to me that the hardest part is sorted out with the naming and\n>> pathing of the modules, so better to apply them sooner than later. \n>\n> Yeah, my plan is to apply it today or tomorrow\n>\n>\n>>> Those patches gave some complains about PostgreSQL/Test/PostgresVersion.pm\n>>> being absent so I added this deletion. I'm not sure that's correct but it\n>>> enabled the build and check-world ran without errors.\n>> Your change is incorrect, as we want to install PostgresVersion.pm.\n>> What's needed here is the following:\n>> {PostgresVersion.pm => PostgreSQL/Version.pm}\n>>\n>> And so the patch needs to be changed like that:\n>> - $(INSTALL_DATA) $(srcdir)/PostgreSQL/Test/PostgresVersion.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n>> + $(INSTALL_DATA) $(srcdir)/PostgreSQL/Version.pm '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n>> [...]\n>> - rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Test/PostgresVersion.pm'\n>> + rm -f '$(DESTDIR)$(pgxsdir)/$(subdir)/PostgreSQL/Version.pm'\n> right. There are one or two other cosmetic changes too.\n>\n>\n\n\n... and pushed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 24 Oct 2021 10:46:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Sun, Oct 24, 2021 at 10:46:30AM -0400, Andrew Dunstan wrote:\n> ... and pushed.\n\nThanks!\n--\nMichael",
"msg_date": "Mon, 25 Oct 2021 17:12:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Hi,\n\nOn 2021-10-25 17:12:08 +0900, Michael Paquier wrote:\n> On Sun, Oct 24, 2021 at 10:46:30AM -0400, Andrew Dunstan wrote:\n> > ... and pushed.\n> \n> Thanks!\n\nI just, again, tried to backport a test as part of a bugfix. The\nrenaming between 14 and 15 makes that task almost comically harder. The\nonly way I see of dealing with that for the next 5 years is to just\nnever backpatch tests to < 15. Which seems like a bad outcome.\n\nI just read through the thread and didn't really see this aspect\ndiscussed - which I find surprising.\n\nExcept that it's *way* too late I would argue that this should just\nstraight up be reverted until that aspect is addressed. It's a\nmaintenance nightmare.\n\n- Andres\n\n\n",
"msg_date": "Mon, 18 Apr 2022 07:15:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I just, again, tried to backport a test as part of a bugfix. The\n> renaming between 14 and 15 makes that task almost comically harder. The\n> only way I see of dealing with that for the next 5 years is to just\n> never backpatch tests to < 15. Which seems like a bad outcome.\n\nYeah ...\n\n> Except that it's *way* too late I would argue that this should just\n> straight up be reverted until that aspect is addressed. It's a\n> maintenance nightmare.\n\nI'm not for that, but could it be sane to back-patch the renaming?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 10:26:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Hi,\n\nOn 2022-04-18 10:26:15 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I just, again, tried to backport a test as part of a bugfix. The\n> > renaming between 14 and 15 makes that task almost comically harder. The\n> > only way I see of dealing with that for the next 5 years is to just\n> > never backpatch tests to < 15. Which seems like a bad outcome.\n> \n> Yeah ...\n> \n> > Except that it's *way* too late I would argue that this should just\n> > straight up be reverted until that aspect is addressed. It's a\n> > maintenance nightmare.\n> \n> I'm not for that\n\nI'm not either, at this point...\n\n\n> but could it be sane to back-patch the renaming?\n\nThat might be the best. But it might not even suffice. There've been\nother global refactorings between 14 and 15. E.g. 201a76183e2.\n\nI wonder if we should just backpatch the current PostgreSQL module, but\nleave the old stuff around :/.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 18 Apr 2022 07:44:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 07:15:30AM -0700, Andres Freund wrote:\n> I just, again, tried to backport a test as part of a bugfix. The\n> renaming between 14 and 15 makes that task almost comically harder. The\n> only way I see of dealing with that for the next 5 years is to just\n> never backpatch tests to < 15. Which seems like a bad outcome.\n\nFor what it's worth, to back-patch TAP suite changes, I've been using this\nscript (works on a .p[lm] file or on a patch file):\n\n==== bin/tap15to14\n#! /bin/sh\n\n# This translates a PostgreSQL 15 TAP test into a PostgreSQL 14 TAP test\n\nsed -i~ '\n s/PostgreSQL::Test::Cluster/PostgresNode/g\n s/PostgreSQL::Test::Utils/TestLib/g\n s/PostgresNode->new/get_new_node/g\n' -- \"$@\"\n\ngrep -w subtest -- \"$@\"\n====\n\n> Except that it's *way* too late I would argue that this should just\n> straight up be reverted until that aspect is addressed. It's a\n> maintenance nightmare.\n\nI do feel PostgreSQL has been over-eager to do cosmetic refactoring. For me,\nthis particular one has been sort-of-tolerable.\n\n\n",
"msg_date": "Mon, 18 Apr 2022 08:52:24 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-18 Mo 11:52, Noah Misch wrote:\n> On Mon, Apr 18, 2022 at 07:15:30AM -0700, Andres Freund wrote:\n>> I just, again, tried to backport a test as part of a bugfix. The\n>> renaming between 14 and 15 makes that task almost comically harder. The\n>> only way I see of dealing with that for the next 5 years is to just\n>> never backpatch tests to < 15. Which seems like a bad outcome.\n\n\nI'm not sure how often we do things like that. But I don't agree it's\nimpossibly hard, although I can see it might be a bit annoying.\n\n\n> For what it's worth, to back-patch TAP suite changes, I've been using this\n> script (works on a .p[lm] file or on a patch file):\n>\n> ==== bin/tap15to14\n> #! /bin/sh\n>\n> # This translates a PostgreSQL 15 TAP test into a PostgreSQL 14 TAP test\n>\n> sed -i~ '\n> s/PostgreSQL::Test::Cluster/PostgresNode/g\n> s/PostgreSQL::Test::Utils/TestLib/g\n> s/PostgresNode->new/get_new_node/g\n> ' -- \"$@\"\n>\n> grep -w subtest -- \"$@\"\n> ====\n>\n\n\nYeah, that should take care of most of it.\n\n\n>> Except that it's *way* too late I would argue that this should just\n>> straight up be reverted until that aspect is addressed. It's a\n>> maintenance nightmare.\n> I do feel PostgreSQL has been over-eager to do cosmetic refactoring. For me,\n> this particular one has been sort-of-tolerable.\n\n\n\nThere were reasons beyond being purely cosmetic for all the changes.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:28:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-04-18 Mo 11:52, Noah Misch wrote:\n>> On Mon, Apr 18, 2022 at 07:15:30AM -0700, Andres Freund wrote:\n>>> I just, again, tried to backport a test as part of a bugfix. The\n>>> renaming between 14 and 15 makes that task almost comically harder. The\n>>> only way I see of dealing with that for the next 5 years is to just\n>>> never backpatch tests to < 15. Which seems like a bad outcome.\n\n> I'm not sure how often we do things like that. But I don't agree it's\n> impossibly hard, although I can see it might be a bit annoying.\n\nI think we back-patch test cases *all the time*. So I think Andres\nis quite right to be concerned about making that harder, although I'm\nnot sure that his estimate of the conversion difficulty is accurate.\nI plan to keep a copy of Noah's script and see if applying that to\nthe patch files alleviates the pain. In a few months we should have\na better idea of whether that's sufficient, or we want to go to the\nwork of back-patching the renaming.\n\nI doubt that just plopping the new Cluster.pm in alongside the old\nfile could work --- wouldn't the two modules need to share state\nsomehow?\n\nAnother thing that ought to be on the table is back-patching\n549ec201d (Replace Test::More plans with done_testing). Those\ntest counts are an even huger pain for back-patching than the\nrenaming, because the count is often different in each branch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:43:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-18 Mo 13:43, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-04-18 Mo 11:52, Noah Misch wrote:\n>>> On Mon, Apr 18, 2022 at 07:15:30AM -0700, Andres Freund wrote:\n>>>> I just, again, tried to backport a test as part of a bugfix. The\n>>>> renaming between 14 and 15 makes that task almost comically harder. The\n>>>> only way I see of dealing with that for the next 5 years is to just\n>>>> never backpatch tests to < 15. Which seems like a bad outcome.\n>> I'm not sure how often we do things like that. But I don't agree it's\n>> impossibly hard, although I can see it might be a bit annoying.\n> I think we back-patch test cases *all the time*. So I think Andres\n> is quite right to be concerned about making that harder, although I'm\n> not sure that his estimate of the conversion difficulty is accurate.\n> I plan to keep a copy of Noah's script and see if applying that to\n> the patch files alleviates the pain. In a few months we should have\n> a better idea of whether that's sufficient, or we want to go to the\n> work of back-patching the renaming.\n>\n> I doubt that just plopping the new Cluster.pm in alongside the old\n> file could work --- wouldn't the two modules need to share state\n> somehow?\n\n\nNo, I think we could probably just port the whole of src/test/PostreSQL\nback if required, and have it live alongside the old modules. Each TAP\ntest is a separate miracle - see comments elsewhere about port\nassignment in parallel TAP tests.\n\n\nBut that would mean we have some tests in the old flavor and some in the\nnew flavor in the back branches, which might get confusing.\n\n\n>\n> Another thing that ought to be on the table is back-patching\n> 549ec201d (Replace Test::More plans with done_testing). 
Those\n> test counts are an even huger pain for back-patching than the\n> renaming, because the count is often different in each branch.\n>\n> \t\t\t\n\n\n+1 for doing that\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:59:23 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> No, I think we could probably just port the whole of src/test/PostreSQL\n> back if required, and have it live alongside the old modules. Each TAP\n> test is a separate miracle - see comments elsewhere about port\n> assignment in parallel TAP tests.\n> But that would mean we have some tests in the old flavor and some in the\n> new flavor in the back branches, which might get confusing.\n\nThat works for back-patching entire new test scripts, but not for adding\nsome cases to an existing script, which I think is more common.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 18 Apr 2022 14:07:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-18 Mo 14:07, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> No, I think we could probably just port the whole of src/test/PostreSQL\n>> back if required, and have it live alongside the old modules. Each TAP\n>> test is a separate miracle - see comments elsewhere about port\n>> assignment in parallel TAP tests.\n>> But that would mean we have some tests in the old flavor and some in the\n>> new flavor in the back branches, which might get confusing.\n> That works for back-patching entire new test scripts, but not for adding\n> some cases to an existing script, which I think is more common.\n>\n> \t\t\t\n\n\nI think the only thing that should trip people up in those cases is the\nthe new/get_new_node thing. That's complicated by the fact that the old\nPostgresNode module has both new() and get_new_node(), although it\nadvises people not to use its new(). Probably the best way around that\nis a) rename it's new() and deal with any callers, and b) add a new\nnew(), which would be a wrapper around get_new_node(). I'll have a play\nwith that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 15:29:18 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\n\n> On Apr 18, 2022, at 10:59 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> No, I think we could probably just port the whole of src/test/PostreSQL\n> back if required, and have it live alongside the old modules. Each TAP\n> test is a separate miracle - see comments elsewhere about port\n> assignment in parallel TAP tests.\n\nI think $last_port_assigned would need to be shared between the two modules. This global safeguard is already a bit buggy, but not sharing it between modules would be far worse. Currently, if a node which has a port assigned is stopped, and a parallel test creates a new node, this global variable prevents the new node from getting the port already assigned to the old stopped node, except when port assignment wraps around. Without sharing the global, wrap-around need not happen for port collisions.\n\nOr am I reading the code wrong?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 12:46:09 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-18 Mo 15:46, Mark Dilger wrote:\n>\n>> On Apr 18, 2022, at 10:59 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> No, I think we could probably just port the whole of src/test/PostreSQL\n>> back if required, and have it live alongside the old modules. Each TAP\n>> test is a separate miracle - see comments elsewhere about port\n>> assignment in parallel TAP tests.\n> I think $last_port_assigned would need to be shared between the two modules. This global safeguard is already a bit buggy, but not sharing it between modules would be far worse. Currently, if a node which has a port assigned is stopped, and a parallel test creates a new node, this global variable prevents the new node from getting the port already assigned to the old stopped node, except when port assignment wraps around. Without sharing the global, wrap-around need not happen for port collisions.\n>\n> Or am I reading the code wrong?\n>\n\nI don't see anything at all in the current code that involves sharing\n$last_port_assigned (or anything else) between parallel tests. The only\nreason we don't get lots of collisions there is because each one starts\noff at a random port. If you want it shared to guarantee non-collision\nwe will need far more infrastructure, AFAICS, but that seems quite\nseparate from the present issue. I have some experience of managing that\n- the buildfarm code has some shared state, protected by bunch of locks.\n\nTo the best of my knowledge. each TAP test runs in its own process, a\nchild of prove. And that's just as well because we certainly wouldn't\nwant other package globals (like the node list) shared.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 16:19:31 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\n\n> On Apr 18, 2022, at 1:19 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> that seems quite separate from the present issue.\n\nThanks for the clarification. I agree, given your comments, that it is unrelated to this thread.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:22:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 01:59:23PM -0400, Andrew Dunstan wrote:\n> On 2022-04-18 Mo 13:43, Tom Lane wrote:\n>> I doubt that just plopping the new Cluster.pm in alongside the old\n>> file could work --- wouldn't the two modules need to share state\n>> somehow?\n> \n> No, I think we could probably just port the whole of src/test/PostreSQL\n> back if required, and have it live alongside the old modules. Each TAP\n> test is a separate miracle - see comments elsewhere about port\n> assignment in parallel TAP tests.\n\nDoesn't that mean doubling the maintenance cost if any of the internal\nmodule routines are changed? If the existing in-core TAP tests use\none module or the other exclusively, how do we make easily sure that\none and the other base modules are not broken? There are also\nout-of-tree TAP tests relying on those modules, though having\neverything in parallel would work.\n\n>> Another thing that ought to be on the table is back-patching\n>> 549ec201d (Replace Test::More plans with done_testing). Those\n>> test counts are an even huger pain for back-patching than the\n>> renaming, because the count is often different in each branch.\n> \n> +1 for doing that\n\nThe harcoded number of tests has been the most annoying part for me,\nto be honest, while the namespace change just requires a few seds and\nit is a matter of getting used to it. FWIW, I have a git script that\ndoes the same thing as Noah, but only for files part of the code tree,\nas of:\nfor file in $(git grep -l \"$OLDSTR\")\ndo\n sed -i \"s/$OLDSTR/$NEWSTR/g\" \"$file\"\ndone\n--\nMichael",
"msg_date": "Tue, 19 Apr 2022 11:43:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "> On 18 Apr 2022, at 19:59, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-04-18 Mo 13:43, Tom Lane wrote:\n\n>> Another thing that ought to be on the table is back-patching\n>> 549ec201d (Replace Test::More plans with done_testing). Those\n>> test counts are an even huger pain for back-patching than the\n>> renaming, because the count is often different in each branch.\t\t\t\n> \n> +1 for doing that\n\nTI'll get to work on that then.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 19 Apr 2022 09:43:03 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 2022-04-18 Mo 14:07, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> No, I think we could probably just port the whole of src/test/PostgreSQL\n>> back if required, and have it live alongside the old modules. Each TAP\n>> test is a separate miracle - see comments elsewhere about port\n>> assignment in parallel TAP tests.\n>> But that would mean we have some tests in the old flavor and some in the\n>> new flavor in the back branches, which might get confusing.\n> That works for back-patching entire new test scripts, but not for adding\n> some cases to an existing script, which I think is more common.\n>\n> \t\t\t\n\n\nI think I've come up with a better scheme that I hope will fix all or\nalmost all of the pain complained of in this thread. I should note that\nwe deliberately delayed making these changes until fairly early in the\nrelease 15 development cycle, and that was clearly a good decision.\n\nThe attached three patches basically implement the new naming scheme for\nthe back branches without doing away with the old scheme or doing a\nwholesale copy of the new modules.\n\nThe first simply implements a proper \"new\" constructor for PostgresNode,\njust like we have in PostgreSQL::Test::Cluster. It's not really essential\nbut it seems like a good idea. The second adds all the visible\nfunctionality of the PostgresNode and TestLib modules to the\nPostgreSQL::Test::Cluster and PostgreSQL::Test::Utils namespaces. The\nthird adds dummy packages so that any code doing 'use\nPostgreSQL::Test::Utils;' or 'use PostgreSQL::Test::Cluster;' will\nactually import the old modules. This last piece is where there might be\nsome extra work needed, to export the names so that using an unqualified\nfunction or variable, say, 'slurp_file(\"foo\");' will work. But in\ngeneral, modulo that issue, I believe things should Just Work (tm). You\nshould basically just be able to backpatch any new or modified TAP test\nwithout difficulty, sed script usage, etc.\n\nComments welcome.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 19 Apr 2022 11:36:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
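The "dummy packages" idea in the message above — registering the new module name so that loading it actually hands back the old module — can be sketched language-neutrally. The following is an illustrative Python stand-in for the Perl shim (the real patch is Perl and is not shown in this thread; all module and function names here are invented):

```python
# Python stand-in for the "dummy package" trick: the new module name is
# registered to resolve to the old module, so test code written against
# the new name runs on a branch that only ships the old one.
import importlib
import sys
import types

# Pretend this is the old TestLib module that really exists on the branch.
old = types.ModuleType("TestLib")
old.slurp_file = lambda path: "contents of " + path
sys.modules["TestLib"] = old

# The dummy package: the new name resolves to the very same old module.
sys.modules["PostgreSQL.Test.Utils"] = sys.modules["TestLib"]

# Code backpatched from the new-style branches now works unchanged.
utils = importlib.import_module("PostgreSQL.Test.Utils")
print(utils.slurp_file("foo"))  # -> contents of foo
```

The Perl version additionally has to re-export unqualified names (the `slurp_file("foo")` case Andrew mentions), which this sketch sidesteps by always going through the module object.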
{
"msg_contents": "Hi,\n\nOn 2022-04-19 11:36:44 -0400, Andrew Dunstan wrote:\n> The attached three patches basically implement the new naming scheme for\n> the back branches without doing away with the old scheme or doing a\n> wholesale copy of the new modules.\n\nThat sounds like a good plan!\n\nI don't know perl well enough to comment on the details, but it looks roughly\nsane to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 19 Apr 2022 10:15:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 2022-04-19 Tu 11:36, Andrew Dunstan wrote:\n> On 2022-04-18 Mo 14:07, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> No, I think we could probably just port the whole of src/test/PostgreSQL\n>>> back if required, and have it live alongside the old modules. Each TAP\n>>> test is a separate miracle - see comments elsewhere about port\n>>> assignment in parallel TAP tests.\n>>> But that would mean we have some tests in the old flavor and some in the\n>>> new flavor in the back branches, which might get confusing.\n>> That works for back-patching entire new test scripts, but not for adding\n>> some cases to an existing script, which I think is more common.\n>>\n>> \t\t\t\n>\n> I think I've come up with a better scheme that I hope will fix all or\n> almost all of the pain complained of in this thread. I should note that\n> we deliberately delayed making these changes until fairly early in the\n> release 15 development cycle, and that was clearly a good decision.\n>\n> The attached three patches basically implement the new naming scheme for\n> the back branches without doing away with the old scheme or doing a\n> wholesale copy of the new modules.\n>\n> The first simply implements a proper \"new\" constructor for PostgresNode,\n> just like we have in PostgreSQL::Test::Cluster. It's not really essential\n> but it seems like a good idea. The second adds all the visible\n> functionality of the PostgresNode and TestLib modules to the\n> PostgreSQL::Test::Cluster and PostgreSQL::Test::Utils namespaces. The\n> third adds dummy packages so that any code doing 'use\n> PostgreSQL::Test::Utils;' or 'use PostgreSQL::Test::Cluster;' will\n> actually import the old modules. This last piece is where there might be\n> some extra work needed, to export the names so that using an unqualified\n> function or variable, say, 'slurp_file(\"foo\");' will work. But in\n> general, modulo that issue, I believe things should Just Work (tm). You\n> should basically just be able to backpatch any new or modified TAP test\n> without difficulty, sed script usage, etc.\n>\n> Comments welcome.\n>\n>\n\nHere's a version with a fixed third patch that corrects a file misnaming\nand fixes the export issue referred to above. Passes my testing so far.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 19 Apr 2022 16:06:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 04:06:28PM -0400, Andrew Dunstan wrote:\n> Here's a version with a fixed third patch that corrects a file misnaming\n> and fixes the export issue referred to above. Passes my testing so far.\n\nWow. That's really cool. You are combining the best of both worlds\nhere to ease backpatching, as far as I understand what you wrote.\n\n+*generate_ascii_string = *TestLib::generate_ascii_string;\n+*slurp_dir = *TestLib::slurp_dir;\n+*slurp_file = *TestLib::slurp_file;\n\nI am not sure if it is possible and my perl-fu is limited in this\narea, but could a failure be enforced when loading this path if a new\nroutine added in TestLib.pm is forgotten in this list?\n--\nMichael",
"msg_date": "Wed, 20 Apr 2022 07:39:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-19 Tu 18:39, Michael Paquier wrote:\n> On Tue, Apr 19, 2022 at 04:06:28PM -0400, Andrew Dunstan wrote:\n>> Here's a version with a fixed third patch that corrects a file misnaming\n>> and fixes the export issue referred to above. Passes my testing so far.\n> Wow. That's really cool. You are combining the best of both worlds\n> here to ease backpatching, as far as I understand what you wrote.\n\n\nThanks.\n\n\n>\n> +*generate_ascii_string = *TestLib::generate_ascii_string;\n> +*slurp_dir = *TestLib::slurp_dir;\n> +*slurp_file = *TestLib::slurp_file;\n>\n> I am not sure if it is possible and my perl-fu is limited in this\n> area, but could a failure be enforced when loading this path if a new\n> routine added in TestLib.pm is forgotten in this list?\n\n\n\nNot very easily that I'm aware of, but maybe some superior perl wizard\nwill know better.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 19 Apr 2022 19:24:58 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n>> +*generate_ascii_string = *TestLib::generate_ascii_string;\n>> +*slurp_dir = *TestLib::slurp_dir;\n>> +*slurp_file = *TestLib::slurp_file;\n>>\n>> I am not sure if it is possible and my perl-fu is limited in this\n>> area, but could a failure be enforced when loading this path if a new\n>> routine added in TestLib.pm is forgotten in this list?\n> \n> Not very easily that I'm aware of, but maybe some superior perl wizard\n> will know better.\n\nOkay. Please do not consider this as a blocker. I was just wondering\nabout ways to ease more the error reports when it comes to\nback-patching, and this would move the error stack a bit earlier.\n--\nMichael",
"msg_date": "Wed, 20 Apr 2022 09:30:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-19 Tu 20:30, Michael Paquier wrote:\n> On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n>> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n>>> +*generate_ascii_string = *TestLib::generate_ascii_string;\n>>> +*slurp_dir = *TestLib::slurp_dir;\n>>> +*slurp_file = *TestLib::slurp_file;\n>>>\n>>> I am not sure if it is possible and my perl-fu is limited in this\n>>> area, but could a failure be enforced when loading this path if a new\n>>> routine added in TestLib.pm is forgotten in this list?\n>> Not very easily that I'm aware of, but maybe some superior perl wizard\n>> will know better.\n> Okay. Please do not consider this as a blocker. I was just wondering\n> about ways to ease more the error reports when it comes to\n> back-patching, and this would move the error stack a bit earlier.\n\n\n\nThere are a few other things that could make backpatching harder, and\nwhile they are not related to the namespace issue they do affect a bit\nhow that is managed.\n\n\nThe following variables are missing in various versions of TestLib:\n\n\nin version 13 and earlier: $is_msys2, $timeout_default\n\nin version 12 and earlier: $use_unix_sockets\n\n\nand the following functions are missing:\n\n\nin version 14 and earlier: pump_until\n\nin version 13 and earlier: dir_symlink\n\nin version 11 and earlier: run_command\n\nin version 10: check_mode_recursive, chmod_recursive, check_pg_config\n\n\n(Also in version 10 command_checks_all exists but isn't exported. I'm\ninclined just to remedy that along the way)\n\n\nTurning to PostgresNode, the class-wide function get_free_port is absent\nfrom version 10, and the following instance methods are absent from some\nor all of the back branches:\n\nadjust_conf, clean_node, command_fails_like, config_data, connect_fails,\nconnect_ok, corrupt_page_checksum, group_access, installed_command,\ninstall_path, interactive_psql, logrotate, set_recovery_mode,\nset_standby_mode, wait_for_log\n\nWe don't export or provide aliases for any of these instance methods in\nthese patches, but attempts to use them in backpatched code will fail\nwhere they are absent, so I thought it worth mentioning.\n\n\nBasically I propose just to remove any mention of the TestLib items and\nget_free_port from the export and alias lists for versions where they\nare absent. If backpatchers need a function they can backport it if\nnecessary.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 20 Apr 2022 15:56:17 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Wed, Apr 20, 2022 at 03:56:17PM -0400, Andrew Dunstan wrote:\n> Basically I propose just to remove any mention of the Testlib items and\n> get_free_port from the export and alias lists for versions where they\n> are absent. If backpatchers need a function they can backport it if\n> necessary.\n\nAgreed. I am fine to stick to that (I may have done that only once or\ntwice in the past years, so that does not happen a lot either IMO).\nThe patch in itself looks like an improvement in the right direction,\nso +1 from me.\n--\nMichael",
"msg_date": "Thu, 21 Apr 2022 13:11:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "\nOn 2022-04-21 Th 00:11, Michael Paquier wrote:\n> On Wed, Apr 20, 2022 at 03:56:17PM -0400, Andrew Dunstan wrote:\n>> Basically I propose just to remove any mention of the Testlib items and\n>> get_free_port from the export and alias lists for versions where they\n>> are absent. If backpatchers need a function they can backport it if\n>> necessary.\n> Agreed. I am fine to stick to that (I may have done that only once or\n> twice in the past years, so that does not happen a lot either IMO).\n> The patch in itself looks like an improvement in the right direction,\n> so +1 from me.\n\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 21 Apr 2022 09:42:44 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On 2022-04-21 09:42:44 -0400, Andrew Dunstan wrote:\n> On 2022-04-21 Th 00:11, Michael Paquier wrote:\n> > On Wed, Apr 20, 2022 at 03:56:17PM -0400, Andrew Dunstan wrote:\n> >> Basically I propose just to remove any mention of the Testlib items and\n> >> get_free_port from the export and alias lists for versions where they\n> >> are absent. If backpatchers need a function they can backport it if\n> >> necessary.\n> > Agreed. I am fine to stick to that (I may have done that only once or\n> > twice in the past years, so that does not happen a lot either IMO).\n> > The patch in itself looks like an improvement in the right direction,\n> > so +1 from me.\n\n> Thanks, pushed.\n\nThanks for working on this!\n\n\n",
"msg_date": "Fri, 22 Apr 2022 11:36:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n> > +*generate_ascii_string = *TestLib::generate_ascii_string;\n> > +*slurp_dir = *TestLib::slurp_dir;\n> > +*slurp_file = *TestLib::slurp_file;\n> >\n> > I am not sure if it is possible and my perl-fu is limited in this\n> > area, but could a failure be enforced when loading this path if a new\n> > routine added in TestLib.pm is forgotten in this list?\n> \n> Not very easily that I'm aware of, but maybe some superior perl wizard\n> will know better.\n\nOne can alias the symbol table, like https://metacpan.org/pod/Package::Alias\ndoes. I'm attaching what I plan to use. Today, check-world fails after\n\n sed -i 's/TestLib/PostgreSQL::Test::Utils/g; s/PostgresNode/PostgreSQL::Test::Cluster/g' **/*.pl\n\non REL_14_STABLE, because today's alias list is incomplete. With this change,\nthe same check-world passes.",
"msg_date": "Wed, 22 Jun 2022 00:21:44 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
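Noah's symbol-table aliasing answers Michael's earlier wish: a load-time failure when a routine added to TestLib.pm is missing from the alias list. As a language-neutral illustration of that enforcement (a Python stand-in for the Perl typeglob/Package::Alias approach; module and function names are invented for the sketch):

```python
# Sketch of "alias everything, and fail fast if the explicit list is
# incomplete" -- the property the explicit *name = *TestLib::name lines
# lacked. Names are illustrative, not PostgreSQL's actual test code.
import types

# Stand-in for the old TestLib module.
testlib = types.ModuleType("TestLib")
testlib.slurp_file = lambda path: "<contents of %s>" % path
testlib.slurp_dir = lambda path: []
testlib.generate_ascii_string = lambda lo, hi: "".join(map(chr, range(lo, hi + 1)))

# Explicit alias list, mirroring the hand-maintained Perl aliases.
ALIASES = ["slurp_file", "slurp_dir", "generate_ascii_string"]

utils = types.ModuleType("PostgreSQL.Test.Utils")
for name in ALIASES:
    setattr(utils, name, getattr(testlib, name))

# The enforcement Michael asked about: refuse to load if the old module
# grew a public routine that the alias list does not cover.
public = {n for n, v in vars(testlib).items()
          if callable(v) and not n.startswith("_")}
missing = public - set(ALIASES)
if missing:
    raise ImportError("aliases missing for: %s" % sorted(missing))
```

Aliasing the whole symbol table, as Package::Alias does, goes one step further by making the list itself unnecessary.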
{
"msg_contents": "\nOn 2022-06-22 We 03:21, Noah Misch wrote:\n> On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n>> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n>>> +*generate_ascii_string = *TestLib::generate_ascii_string;\n>>> +*slurp_dir = *TestLib::slurp_dir;\n>>> +*slurp_file = *TestLib::slurp_file;\n>>>\n>>> I am not sure if it is possible and my perl-fu is limited in this\n>>> area, but could a failure be enforced when loading this path if a new\n>>> routine added in TestLib.pm is forgotten in this list?\n>> Not very easily that I'm aware of, but maybe some superior perl wizard\n>> will know better.\n> One can alias the symbol table, like https://metacpan.org/pod/Package::Alias\n> does. I'm attaching what I plan to use. Today, check-world fails after\n>\n> sed -i 's/TestLib/PostgreSQL::Test::Utils/g; s/PostgresNode/PostgreSQL::Test::Cluster/g' **/*.pl\n>\n> on REL_14_STABLE, because today's alias list is incomplete. With this change,\n> the same check-world passes.\n\nNice. 30 years of writing perl and I'm still learning of nifty features.\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 22 Jun 2022 11:03:22 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 11:03:22AM -0400, Andrew Dunstan wrote:\n> On 2022-06-22 We 03:21, Noah Misch wrote:\n> > On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n> >> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n> >>> +*generate_ascii_string = *TestLib::generate_ascii_string;\n> >>> +*slurp_dir = *TestLib::slurp_dir;\n> >>> +*slurp_file = *TestLib::slurp_file;\n> >>>\n> >>> I am not sure if it is possible and my perl-fu is limited in this\n> >>> area, but could a failure be enforced when loading this path if a new\n> >>> routine added in TestLib.pm is forgotten in this list?\n> >> Not very easily that I'm aware of, but maybe some superior perl wizard\n> >> will know better.\n> > One can alias the symbol table, like https://metacpan.org/pod/Package::Alias\n> > does. I'm attaching what I plan to use. Today, check-world fails after\n> >\n> > sed -i 's/TestLib/PostgreSQL::Test::Utils/g; s/PostgresNode/PostgreSQL::Test::Cluster/g' **/*.pl\n> >\n> > on REL_14_STABLE, because today's alias list is incomplete. With this change,\n> > the same check-world passes.\n\nThe patch wasn't sufficient to make that experiment pass for REL_10_STABLE,\nwhere 017_shm.pl uses the %params argument of get_new_node(). The problem\ncall stack had PostgreSQL::Test::Cluster->get_new_node calling\nPostgreSQL::Test::Cluster->new, which needs v14- semantics. Here's a fixed\nversion, just changing the new() hack.\n\nI suspect v1 also misbehaved for non-core tests that subclass PostgresNode\n(via the approach from commit 54dacc7) or PostgreSQL::Test::Cluster. I expect\nthis version will work with subclasses written for v14- and with subclasses\nwritten for v15+. I didn't actually write dummy subclasses to test, and the\nrelevant permutations are numerous (e.g. whether or not the subclass overrides\nnew(), whether or not the subclass overrides get_new_node()).\n\n> Nice. 30 years of writing perl and I'm still learning of nifty features.\n\nThanks for reviewing.",
"msg_date": "Thu, 23 Jun 2022 22:45:40 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
},
{
"msg_contents": "On Thu, Jun 23, 2022 at 10:45:40PM -0700, Noah Misch wrote:\n> On Wed, Jun 22, 2022 at 11:03:22AM -0400, Andrew Dunstan wrote:\n> > On 2022-06-22 We 03:21, Noah Misch wrote:\n> > > On Tue, Apr 19, 2022 at 07:24:58PM -0400, Andrew Dunstan wrote:\n> > >> On 2022-04-19 Tu 18:39, Michael Paquier wrote:\n> > >>> +*generate_ascii_string = *TestLib::generate_ascii_string;\n> > >>> +*slurp_dir = *TestLib::slurp_dir;\n> > >>> +*slurp_file = *TestLib::slurp_file;\n> > >>>\n> > >>> I am not sure if it is possible and my perl-fu is limited in this\n> > >>> area, but could a failure be enforced when loading this path if a new\n> > >>> routine added in TestLib.pm is forgotten in this list?\n> > >> Not very easily that I'm aware of, but maybe some superior perl wizard\n> > >> will know better.\n> > > One can alias the symbol table, like https://metacpan.org/pod/Package::Alias\n> > > does. I'm attaching what I plan to use. Today, check-world fails after\n> > >\n> > > sed -i 's/TestLib/PostgreSQL::Test::Utils/g; s/PostgresNode/PostgreSQL::Test::Cluster/g' **/*.pl\n> > >\n> > > on REL_14_STABLE, because today's alias list is incomplete. With this change,\n> > > the same check-world passes.\n> \n> The patch wasn't sufficient to make that experiment pass for REL_10_STABLE,\n> where 017_shm.pl uses the %params argument of get_new_node(). The problem\n> call stack had PostgreSQL::Test::Cluster->get_new_code calling\n> PostgreSQL::Test::Cluster->new, which needs v14- semantics. Here's a fixed\n> version, just changing the new() hack.\n\nI pushed this, but it broke lapwing and wrasse. I will investigate.\n\n\n",
"msg_date": "Sat, 25 Jun 2022 10:15:33 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Postgres perl module namespace"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWhen reading the code, I noticed a possible issue with FDW batch insert.\r\nWhen I set batch_size > 65535 and insert more than 65535 rows into a foreign table,\r\nit will throw an error:\r\n\r\nFor example:\r\n\r\n------------------\r\nCREATE FOREIGN TABLE vzdalena_tabulka2(a int, b varchar)\r\n SERVER omega_db\r\n OPTIONS (table_name 'tabulka', batch_size '65536');\r\n\r\nINSERT INTO vzdalena_tabulka2 SELECT i, 'AHOJ' || i FROM generate_series(1,65536) g(i);\r\n\r\nERROR: number of parameters must be between 0 and 65535\r\nCONTEXT: remote SQL command: INSERT INTO public.tabulka(a, b) VALUES ($1, $2), ($3, $4)...\r\n------------------\r\n\r\nActually, I think if the (number of columns) * (number of rows) > 65535, then we can\r\nget this error. But I think almost no one will set such a large value, so adjusting the\r\nbatch_size automatically when doing INSERT seems an acceptable solution.\r\n\r\nIn postgresGetForeignModifyBatchSize(), if we find that the (num of params) * batch_size\r\nis greater than the limit (65535), then we set it to 65535 / (num of params).\r\n\r\nThoughts?\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Fri, 21 May 2021 02:48:05 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fdw batch insert error out when set batch_size > 65535"
},
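The adjustment houzj proposes can be sketched as follows. This is a minimal illustration of the arithmetic only, not postgres_fdw's actual C code; the constant name follows the PQ_QUERY_PARAM_MAX_LIMIT macro discussed later in the thread:

```python
PQ_QUERY_PARAM_MAX_LIMIT = 65535  # libpq's cap on bind parameters per query


def adjusted_batch_size(batch_size: int, num_params: int) -> int:
    """Clamp batch_size so (rows per batch) * (params per row) stays
    within the libpq limit; num_params is the number of inserted columns."""
    if num_params > 0 and batch_size * num_params > PQ_QUERY_PARAM_MAX_LIMIT:
        return PQ_QUERY_PARAM_MAX_LIMIT // num_params
    return batch_size


# The failing case from the report: 2 columns, batch_size 65536.
# 65536 * 2 exceeds 65535, so the batch shrinks to 65535 // 2 = 32767 rows.
print(adjusted_batch_size(65536, 2))
```

A batch of 32767 two-column rows binds 65534 parameters, just under the protocol limit, while any configured batch_size that already fits is returned unchanged.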
{
"msg_contents": "On Fri, May 21, 2021 at 8:18 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> When reading the code, I noticed some possible issue about fdw batch insert.\n> When I set batch_size > 65535 and insert more than 65535 rows into foreign table,\n> It will throw an error:\n>\n> For example:\n>\n> ------------------\n> CREATE FOREIGN TABLE vzdalena_tabulka2(a int, b varchar)\n> SERVER omega_db\n> OPTIONS (table_name 'tabulka', batch_size '65536');\n>\n> INSERT INTO vzdalena_tabulka2 SELECT i, 'AHOJ' || i FROM generate_series(1,65536) g(i);\n>\n> ERROR: number of parameters must be between 0 and 65535\n> CONTEXT: remote SQL command: INSERT INTO public.tabulka(a, b) VALUES ($1, $2), ($3, $4)...\n> ------------------\n>\n> Actually, I think if the (number of columns) * (number of rows) > 65535, then we can\n> get this error. But, I think almost no one will set such a large value, so I think adjust the\n> batch_size automatically when doing INSERT seems an acceptable solution.\n>\n> In the postgresGetForeignModifyBatchSize(), if we found the (num of param) * batch_size\n> Is greater than the limit(65535), then we set it to 65535 / (num of param).\n>\n> Thoughts ?\n\n+1 to limit batch_size for postgres_fdw and it's a good idea to have a\nmacro for the max params.\n\nFew comments:\n1) How about using macro in the error message, something like below?\n appendPQExpBuffer(errorMessage,\n libpq_gettext(\"number of parameters must be\nbetween 0 and %d\\n\"),\n PQ_MAX_PARAM_NUMBER);\n2) I think we can choose not to mention the 65535 because it's hard to\nmaintain if that's changed in libpq protocol sometime in future. How\nabout\n\"The final number of rows postgres_fdw inserts in a batch actually\ndepends on the number of columns and the provided batch_size value.\nThis is because of the limit the libpq protocol (which postgres_fdw\nuses to connect to a remote server) has on the number of query\nparameters that can be specified per query. For instance, if the\nnumber of columns * batch_size is more than the limit, then the libpq\nemits an error. But postgres_fdw adjusts the\nbatch_size to avoid this error.\"\ninstead of\n+ overrides an option specified for the server. Note if the batch size\n+ exceed the protocol limit (number of columns * batch_size > 65535),\n+ then the actual batch size will be less than the specified batch_size.\n3) I think \"postgres_fdw should insert in each insert operation\"\ndoesn't hold after this patch. We can reword it to \"postgres_fdw\ninserts in each insert operation\".\n This option specifies the number of rows\n<filename>postgres_fdw</filename>\n should insert in each insert operation. It can be specified for a\n4) How about naming the macro as PQ_QUERY_PARAM_MAX_LIMIT?\n5) We can use macro in code comments as well.\n+ * 65535, so set the batch_size to not exceed limit in a batch insert.\n6) I think both code and docs patches can be combined into a single patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 May 2021 11:11:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\nSent: Friday, May 21, 2021 1:42 PM\r\n> On Fri, May 21, 2021 at 8:18 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > Actually, I think if the (number of columns) * (number of rows) >\r\n> > 65535, then we can get this error. But, I think almost no one will set\r\n> > such a large value, so I think adjust the batch_size automatically when doing\r\n> INSERT seems an acceptable solution.\r\n> >\r\n> > In the postgresGetForeignModifyBatchSize(), if we found the (num of\r\n> > param) * batch_size Is greater than the limit(65535), then we set it to 65535 /\r\n> (num of param).\r\n> >\r\n> > Thoughts ?\r\n> \r\n> +1 to limit batch_size for postgres_fdw and it's a good idea to have a\r\n> macro for the max params.\r\n>\r\n> Few comments:\r\n\r\nThanks for the comments. \r\n\r\n> 1) How about using macro in the error message, something like below?\r\n> appendPQExpBuffer(errorMessage,\r\n> libpq_gettext(\"number of parameters must be\r\n> between 0 and %d\\n\"),\r\n> PQ_MAX_PARAM_NUMBER);\r\n\r\nChanged.\r\n\r\n> 2) I think we can choose not mention the 65535 because it's hard to maintain if\r\n> that's changed in libpq protocol sometime in future. How about \"The final\r\n> number of rows postgres_fdw inserts in a batch actually depends on the\r\n> number of columns and the provided batch_size value.\r\n> This is because of the limit the libpq protocol (which postgres_fdw uses to\r\n> connect to a remote server) has on the number of query parameters that can\r\n> be specified per query. For instance, if the number of columns * batch_size is\r\n> more than the limit, then the libpq emits an error. But postgres_fdw adjusts the\r\n> batch_size to avoid this error.\"\r\n> instead of\r\n> + overrides an option specified for the server. Note if the batch size\r\n> + exceed the protocol limit (number of columns * batch_size > 65535),\r\n> + then the actual batch size will be less than the specified batch_size.\r\n\r\nThanks, your description looks better. Changed.\r\n\r\n> 3) I think \"postgres_fdw should insert in each insert operation\"\r\n> doesn't hold after this patch. We can reword it to \"postgres_fdw inserts in\r\n> each insert operation\".\r\n> This option specifies the number of rows\r\n> <filename>postgres_fdw</filename>\r\n> should insert in each insert operation. It can be specified for a\r\n\r\nChanged.\r\n\r\n> 4) How about naming the macro as PQ_QUERY_PARAM_MAX_LIMIT?\r\n\r\nChanged.\r\n\r\n> 5) We can use macro in code comments as well.\r\n\r\nThanks, I reorganized the code comments.\r\n\r\n> + * 65535, so set the batch_size to not exceed limit in a batch insert.\r\n> 6) I think both code and docs patches can be combined into a single patch.\r\nCombined.\r\n\r\nAttaching V2 patch. Please consider it for further review.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Fri, 21 May 2021 07:49:55 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Fri, May 21, 2021 at 1:19 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> Attaching V2 patch. Please consider it for further review.\n\nThanks for the patch. Some more comments:\n\n1) Can fmstate->p_nums ever be 0 in postgresGetForeignModifyBatchSize?\nBy any chance, if it can, I think instead of an assert, we can have\nsomething like below, since we are using it in the division.\n if (fmstate->p_nums > 0 &&\n (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\n {\n batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\n }\nAlso, please remove the { and } for the above if condition, because\nfor 1 line statements we don't normally use { and }\n2) An empty line after the macro definition will be good.\n+#define PQ_QUERY_PARAM_MAX_LIMIT 65535\n extern int PQsendQuery(PGconn *conn, const char *query);\n3) I think we use:\n<filename>postgres_fdw</filename> not postgres_fdw\n<literal>batch_size</literal> not batch_size\nthe number of columns * <literal>batch_size</literal> not the number\nof columns * batch_size\n+ overrides an option specified for the server. Note the final number\n+ of rows postgres_fdw inserts in a batch actually depends on the\n+ number of columns and the provided batch_size value. This is because\n+ of the limit the libpq protocol (which postgres_fdw uses to connect\n+ to a remote server) has on the number of query parameters that can\n+ be specified per query. For instance, if the number of columns\n* batch_size\n+ is more than the limit, then the libpq emits an error. But postgres_fdw\n+ adjusts the batch_size to avoid this error.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 21 May 2021 14:33:17 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\nOn 5/21/21 5:03 AM, Bharath Rupireddy wrote:\n> On Fri, May 21, 2021 at 1:19 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n>> Attaching V2 patch. Please consider it for further review.\n> Thanks for the patch. Some more comments:\n>\n> 1) Can fmstate->p_nums ever be 0 in postgresGetForeignModifyBatchSize?\n> By any chance, if it can, I think instead of an assert, we can have\n> something like below, since we are using it in the division.\n> if (fmstate->p_nums > 0 &&\n> (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\n> {\n> batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\n> }\n> Also, please remove the { and } for the above if condition, because\n> for 1 line statements we don't normally use { and }\n\n\n\nWe used to filter that out in pgindent IIRC but we don't any more.\nIMNSHO there are cases when it makes the code more readable, especially\nif (as here) the condition spans more than one line. I also personally\ndislike having one branch of an \"if\" statement with braces and another\nwithout - it looks far better to my eyes to have all or none with\nbraces. But I realize that's a matter of taste, and there are plenty of\nexamples in the code running counter to my taste here.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 21 May 2021 09:33:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\nSent: Friday, May 21, 2021 5:03 PM\r\n> On Fri, May 21, 2021 at 1:19 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> > Attaching V2 patch. Please consider it for further review.\r\n> \r\n> Thanks for the patch. Some more comments:\r\n> \r\n> 1) Can fmstate->p_nums ever be 0 in postgresGetForeignModifyBatchSize?\r\n> By any chance, if it can, I think instead of an assert, we can have something like\r\n> below, since we are using it in the division.\r\n> if (fmstate->p_nums > 0 &&\r\n> (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\r\n> {\r\n> batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\r\n> }\r\n> Also, please remove the { and } for the above if condition, because for 1 line\r\n> statements we don't normally use { and }\r\n> 2) An empty line after the macro definition will be good.\r\n> +#define PQ_QUERY_PARAM_MAX_LIMIT 65535\r\n> extern int PQsendQuery(PGconn *conn, const char *query);\r\n> 3) I think we use:\r\n> <filename>postgres_fdw</filename> not postgres_fdw\r\n> <literal>batch_size</literal> not batch_size the number of columns *\r\n> <literal>batch_size</literal> not the number of columns * batch_size\r\n> + overrides an option specified for the server. Note the final number\r\n> + of rows postgres_fdw inserts in a batch actually depends on the\r\n> + number of columns and the provided batch_size value. This is because\r\n> + of the limit the libpq protocol (which postgres_fdw uses to connect\r\n> + to a remote server) has on the number of query parameters that can\r\n> + be specified per query. For instance, if the number of columns\r\n> * batch_size\r\n> + is more than the limit, then the libpq emits an error. But postgres_fdw\r\n> + adjusts the batch_size to avoid this error.\r\n\r\nThanks for the comments. I have addressed all comments to the v3 patch.\r\nBTW, is the batch_size issue here an Open Item of PG14?\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Tue, 25 May 2021 07:38:23 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:08 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n> Thanks for the comments. I have addressed all comments to the v3 patch.\n\nThanks! The patch basically looks good to me except that it is missing\na commit message. I think it can be added now.\n\n> BTW, Is the batch_size issue here an Open Item of PG14 ?\n\nIMO, the issue you found when setting batch_size to a too high value\nis an extreme case testing of the feature added by commit b663a4136\nthat introduced the batch_size parameter. So, it's a bug to me. I\nthink you can add it as a bug in the commitfest and let the committers\ntake the call.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 14:47:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:47 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 1:08 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> > Thanks for the comments. I have addressed all comments to the v3 patch.\n>\n> Thanks! The patch basically looks good to me except that it is missing\n> a commit message. I think it can be added now.\n\nWith v3 patch, I observed failure in postgres_fdw test cases with\ninsert query in prepared statements. Root cause is that in\npostgresGetForeignModifyBatchSize, fmstate can be null (see the\nexisting code which handles for fmstate null cases). I fixed this, and\nadded a commit message. PSA v4 patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 26 May 2021 12:27:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 5/26/21 8:57 AM, Bharath Rupireddy wrote:\n> On Tue, May 25, 2021 at 2:47 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Tue, May 25, 2021 at 1:08 PM houzj.fnst@fujitsu.com\n>> <houzj.fnst@fujitsu.com> wrote:\n>>> Thanks for the comments. I have addressed all comments to the v3 patch.\n>>\n>> Thanks! The patch basically looks good to me except that it is missing\n>> a commit message. I think it can be added now.\n> \n> With v3 patch, I observed failure in postgres_fdw test cases with\n> insert query in prepared statements. Root cause is that in\n> postgresGetForeignModifyBatchSize, fmstate can be null (see the\n> existing code which handles for fmstate null cases). I fixed this, and\n> added a commit message. PSA v4 patch.\n> \n\nThanks. In what situation is the fmstate NULL? If it is NULL, the \ncurrent code simply skips the line adjusting it. Doesn't that mean we \nmay not actually fix the bug in that case?\n\nAlso, I think it'd be keep the existing comment, probably as the first \nline of the new comment block.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 26 May 2021 15:06:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Wed, May 26, 2021 at 6:36 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/26/21 8:57 AM, Bharath Rupireddy wrote:\n> > On Tue, May 25, 2021 at 2:47 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Tue, May 25, 2021 at 1:08 PM houzj.fnst@fujitsu.com\n> >> <houzj.fnst@fujitsu.com> wrote:\n> >>> Thanks for the comments. I have addressed all comments to the v3 patch.\n> >>\n> >> Thanks! The patch basically looks good to me except that it is missing\n> >> a commit message. I think it can be added now.\n> >\n> > With v3 patch, I observed failure in postgres_fdw test cases with\n> > insert query in prepared statements. Root cause is that in\n> > postgresGetForeignModifyBatchSize, fmstate can be null (see the\n> > existing code which handles for fmstate null cases). I fixed this, and\n> > added a commit message. PSA v4 patch.\n> >\n>\n> Thanks. In what situation is the fmstate NULL? If it is NULL, the\n> current code simply skips the line adjusting it. Doesn't that mean we\n> may not actually fix the bug in that case?\n\nfmstate i.e. resultRelInfo->ri_FdwState is NULL for EXPLAIN without\nANALYZE cases, below comment says it and we can't get the bug because\nwe don't actually execute the insert statement. The bug occurs on the\nremote server when the insert query with those many query parameters\nis submitted to the remote server. I'm not sure if there are any other\ncases where it can be NULL.\n /*\n * In EXPLAIN without ANALYZE, ri_fdwstate is NULL, so we have to lookup\n * the option directly in server/table options. 
Otherwise just use the\n * value we determined earlier.\n */\n if (fmstate)\n batch_size = fmstate->batch_size;\n else\n batch_size = get_batch_size_option(resultRelInfo->ri_RelationDesc);\n\n> Also, I think it'd be keep the existing comment, probably as the first\n> line of the new comment block.\n\nDo you mean to say we need to retain \"/* Otherwise use the batch size\nspecified for server/table. */\"? If so, PSA v5.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 26 May 2021 19:25:44 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "From: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>\r\nSent: Wednesday, May 26, 2021 9:56 PM\r\n> On Wed, May 26, 2021 at 6:36 PM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> >\r\n> > On 5/26/21 8:57 AM, Bharath Rupireddy wrote:\r\n> > > On Tue, May 25, 2021 at 2:47 PM Bharath Rupireddy\r\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\r\n> > >>\r\n> > >> On Tue, May 25, 2021 at 1:08 PM houzj.fnst@fujitsu.com\r\n> > >> <houzj.fnst@fujitsu.com> wrote:\r\n> > >>> Thanks for the comments. I have addressed all comments to the v3\r\n> patch.\r\n> > >>\r\n> > >> Thanks! The patch basically looks good to me except that it is\r\n> > >> missing a commit message. I think it can be added now.\r\n> > >\r\n> > > With v3 patch, I observed failure in postgres_fdw test cases with\r\n> > > insert query in prepared statements. Root cause is that in\r\n> > > postgresGetForeignModifyBatchSize, fmstate can be null (see the\r\n> > > existing code which handles for fmstate null cases). I fixed this,\r\n> > > and added a commit message. PSA v4 patch.\r\n> > >\r\n> >\r\n> > Thanks. In what situation is the fmstate NULL? If it is NULL, the\r\n> > current code simply skips the line adjusting it. Doesn't that mean we\r\n> > may not actually fix the bug in that case?\r\n> \r\n> fmstate i.e. resultRelInfo->ri_FdwState is NULL for EXPLAIN without ANALYZE\r\n> cases, below comment says it and we can't get the bug because we don't\r\n> actually execute the insert statement. The bug occurs on the remote server\r\n> when the insert query with those many query parameters is submitted to the\r\n> remote server.\r\n\r\nAgreed.\r\nThe \"ri_FdwState\" is initialized in postgresBeginForeignInsert or postgresBeginForeignModify.\r\nI think the above functions are always invoked before getting the batch_size.\r\n\r\nOnly in EXPLAIN mode, it will not initialize the ri_FdwState.\r\n\r\n\t/*\r\n\t * Do nothing in EXPLAIN (no ANALYZE) case. 
resultRelInfo->ri_FdwState\r\n\t * stays NULL.\r\n\t */\r\n\tif (eflags & EXEC_FLAG_EXPLAIN_ONLY)\r\n\t\treturn;\r\n\r\nBest regards,\r\nhouzj\r\n \r\n\r\n",
"msg_date": "Thu, 27 May 2021 03:12:16 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Hi,\n\nI took at this patch today. I did some minor changes, mostly:\n\n1) change the code limiting batch_size from\n\n if (fmstate->p_nums > 0 &&\n (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\n {\n batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\n }\n\nto\n\n if (fmstate && fmstate->p_nums > 0)\n batch_size = Min(batch_size,\n PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums);\n\nwhich I think is somewhat clearer / more common patter.\n\n\n2) I've reworded the docs a bit, splitting the single para into two. I\nthink this makes it clearer.\n\n\nAttached is a patch doing this. Please check the commit message etc.\nBarring objections I'll get it committed in a couple days.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 30 May 2021 21:51:54 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Mon, May 31, 2021 at 1:21 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I took at this patch today. I did some minor changes, mostly:\n>\n> 1) change the code limiting batch_size from\n>\n> if (fmstate->p_nums > 0 &&\n> (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\n> {\n> batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\n> }\n>\n> to\n>\n> if (fmstate && fmstate->p_nums > 0)\n> batch_size = Min(batch_size,\n> PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums);\n>\n> which I think is somewhat clearer / more common patter.\n\nAgree, that looks pretty good.\n\n> 2) I've reworded the docs a bit, splitting the single para into two. I\n> think this makes it clearer.\n\nLGTM, except one thing that the batch_size description says \"should\ninsert in\", but it's not that the value entered for batch_size is\nalways honoured right? Because this patch might it.\n\n This option specifies the number of rows\n<filename>postgres_fdw</filename>\n should insert in each insert operation. It can be specified for a\n\nSo, I suggest to remove \"should\" and change it to:\n\n This option specifies the number of rows\n<filename>postgres_fdw</filename>\n inserts in each insert operation. It can be specified for a\n\n> Attached is a patch doing this. Please check the commit message etc.\n> Barring objections I'll get it committed in a couple days.\n\nOne minor comment:\nIn the commit message, Int16 is used\nThe FE/BE protocol identifies parameters with an Int16 index, which\nlimits the maximum number of parameters per query to 65535. With\n\nand in the code comments uint16 is used.\n+ * parameters in a batch is limited to 64k (uint16), so make sure we don't\n\nIsn't it uint16 in the commit message too? Also, can we use 64k in the\ncommit message instead of 65535?\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 31 May 2021 09:31:24 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\n\nOn 5/31/21 6:01 AM, Bharath Rupireddy wrote:\n> On Mon, May 31, 2021 at 1:21 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> I took at this patch today. I did some minor changes, mostly:\n>>\n>> 1) change the code limiting batch_size from\n>>\n>> if (fmstate->p_nums > 0 &&\n>> (batch_size * fmstate->p_nums > PQ_QUERY_PARAM_MAX_LIMIT))\n>> {\n>> batch_size = PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums;\n>> }\n>>\n>> to\n>>\n>> if (fmstate && fmstate->p_nums > 0)\n>> batch_size = Min(batch_size,\n>> PQ_QUERY_PARAM_MAX_LIMIT / fmstate->p_nums);\n>>\n>> which I think is somewhat clearer / more common patter.\n> \n> Agree, that looks pretty good.\n> \n>> 2) I've reworded the docs a bit, splitting the single para into two. I\n>> think this makes it clearer.\n> \n> LGTM, except one thing that the batch_size description says \"should\n> insert in\", but it's not that the value entered for batch_size is\n> always honoured right? Because this patch might it.\n> \n> This option specifies the number of rows\n> <filename>postgres_fdw</filename>\n> should insert in each insert operation. It can be specified for a\n> \n> So, I suggest to remove \"should\" and change it to:\n> \n> This option specifies the number of rows\n> <filename>postgres_fdw</filename>\n> inserts in each insert operation. It can be specified for a\n> \n\nI think the \"should\" indicates exactly that postgres_fdw may adjust the\nbatch size. Without it it sounds much more definitive, so I kept it.\n\n>> Attached is a patch doing this. Please check the commit message etc.\n>> Barring objections I'll get it committed in a couple days.\n> \n> One minor comment:\n> In the commit message, Int16 is used\n> The FE/BE protocol identifies parameters with an Int16 index, which\n> limits the maximum number of parameters per query to 65535. 
With\n> \n> and in the code comments uint16 is used.\n> + * parameters in a batch is limited to 64k (uint16), so make sure we don't\n> \n> Isn't it uint16 in the commit message too? Also, can we use 64k in the\n> commit message instead of 65535?\n> \n\nNo, the \"Int16\" refers to the FE/BE documentation, where we use Int16.\nBut in the C code we interpret it as uint16.\n\nI've added a simple regression test to postgres_fdw, testing that batch\nsizes > 65535 work fine, and pushed the fix.\n\n\nI've considered checking the value in postgres_fdw_validator and just\nrejecting anything over 65535, but I've decided against that. We'd still\nneed to adjust depending on number of columns anyway.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Jun 2021 20:34:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 12:04 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> No, the \"Int16\" refers to the FE/BE documentation, where we use Int16.\n> But in the C code we interpret it as uint16.\n\nHm. I see that in protocol.sgml Int16 is being used.\n\n> I've added a simple regression test to postgres_fdw, testing that batch\n> sizes > 65535 work fine, and pushed the fix.\n\nI was earlier thinking of adding one, but stopped because it might\nincrease the regression test execution time. It looks like that's true\n- with and without the test case it takes 17 sec and 4 sec\nrespectively on my dev system which is 4X slower. I'm not sure if this\nis okay.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 09:50:57 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>> I've added a simple regression test to postgres_fdw, testing that batch\n>> sizes > 65535 work fine, and pushed the fix.\n\n> I was earlier thinking of adding one, but stopped because it might\n> increase the regression test execution time. It looks like that's true\n> - with and without the test case it takes 17 sec and 4 sec\n> respectively on my dev system which is 4X slower. I'm not sure if this\n> is okay.\n\nThe cost, versus the odds of ever detecting a problem, doesn't\nseem like a good tradeoff.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Jun 2021 02:05:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "I wrote:\n> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>> I've added a simple regression test to postgres_fdw, testing that batch\n>>> sizes > 65535 work fine, and pushed the fix.\n\n>> I was earlier thinking of adding one, but stopped because it might\n>> increase the regression test execution time. It looks like that's true\n>> - with and without the test case it takes 17 sec and 4 sec\n>> respectively on my dev system which is 4X slower. I'm not sure if this\n>> is okay.\n\n> The cost, versus the odds of ever detecting a problem, doesn't\n> seem like a good tradeoff.\n\nI took a quick look and noted that on buildfarm member longfin\n(to take a random example that's sitting a few feet from me),\nthe time for contrib-install-check went from 34 seconds before\nthis patch to 40 seconds after. I find that completely\nunacceptable compared to the likely value of this test case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Jun 2021 02:28:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\n\nOn 6/9/21 8:28 AM, Tom Lane wrote:\n> I wrote:\n>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>>> I've added a simple regression test to postgres_fdw, testing that batch\n>>>> sizes > 65535 work fine, and pushed the fix.\n> \n>>> I was earlier thinking of adding one, but stopped because it might\n>>> increase the regression test execution time. It looks like that's true\n>>> - with and without the test case it takes 17 sec and 4 sec\n>>> respectively on my dev system which is 4X slower. I'm not sure if this\n>>> is okay.\n> \n>> The cost, versus the odds of ever detecting a problem, doesn't\n>> seem like a good tradeoff.\n> \n> I took a quick look and noted that on buildfarm member longfin\n> (to take a random example that's sitting a few feet from me),\n> the time for contrib-install-check went from 34 seconds before\n> this patch to 40 seconds after. I find that completely\n> unacceptable compared to the likely value of this test case.\n> \n\nNote that the problem here is [1] - we're creating a lot of slots \nreferencing the same tuple descriptor, which inflates the duration. \nThere's a fix in the other thread, which eliminates ~99% of the \noverhead. I plan to push that fix soon (a day or two).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Jun 2021 12:22:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\n\nOn 6/9/21 12:22 PM, Tomas Vondra wrote:\n> \n> \n> On 6/9/21 8:28 AM, Tom Lane wrote:\n>> I wrote:\n>>> Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> writes:\n>>>>> I've added a simple regression test to postgres_fdw, testing that \n>>>>> batch\n>>>>> sizes > 65535 work fine, and pushed the fix.\n>>\n>>>> I was earlier thinking of adding one, but stopped because it might\n>>>> increase the regression test execution time. It looks like that's true\n>>>> - with and without the test case it takes 17 sec and 4 sec\n>>>> respectively on my dev system which is 4X slower. I'm not sure if this\n>>>> is okay.\n>>\n>>> The cost, versus the odds of ever detecting a problem, doesn't\n>>> seem like a good tradeoff.\n>>\n>> I took a quick look and noted that on buildfarm member longfin\n>> (to take a random example that's sitting a few feet from me),\n>> the time for contrib-install-check went from 34 seconds before\n>> this patch to 40 seconds after. I find that completely\n>> unacceptable compared to the likely value of this test case.\n>>\n> \n> Note that the problem here is [1] - we're creating a lot of slots \n> referencing the same tuple descriptor, which inflates the duration. \n> There's a fix in the other thread, which eliminates ~99% of the \n> overhead. I plan to push that fix soon (a day or two).\n> \n\nForgot to add the link:\n\n[1] \nhttps://www.postgresql.org/message-id/ebbbcc7d-4286-8c28-0272-61b4753af761%40enterprisedb.com\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Jun 2021 12:23:45 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Note that the problem here is [1] - we're creating a lot of slots \n> referencing the same tuple descriptor, which inflates the duration. \n> There's a fix in the other thread, which eliminates ~99% of the \n> overhead. I plan to push that fix soon (a day or two).\n\nOh, okay, as long as there's a plan to bring the time back down.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Jun 2021 09:28:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 6/9/21 3:28 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Note that the problem here is [1] - we're creating a lot of slots\n>> referencing the same tuple descriptor, which inflates the duration.\n>> There's a fix in the other thread, which eliminates ~99% of the\n>> overhead. I plan to push that fix soon (a day or two).\n> \n> Oh, okay, as long as there's a plan to bring the time back down.\n> \n\nYeah. Sorry for not mentioning this in the original message about the \nnew regression test.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:05:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\n\nOn 6/9/21 4:05 PM, Tomas Vondra wrote:\n> On 6/9/21 3:28 PM, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> Note that the problem here is [1] - we're creating a lot of slots\n>>> referencing the same tuple descriptor, which inflates the duration.\n>>> There's a fix in the other thread, which eliminates ~99% of the\n>>> overhead. I plan to push that fix soon (a day or two).\n>>\n>> Oh, okay, as long as there's a plan to bring the time back down.\n>>\n> \n> Yeah. Sorry for not mentioning this in the original message about the\n> new regression test.\n> \n\nI've pushed a fix addressing the performance issue.\n\nThere's one caveat, though - for regular builds the slowdown is pretty\nmuch eliminated. But with valgrind it's still considerably slower. For\npostgres_fdw the \"make check\" used to take ~5 minutes for me, now it\ntakes >1h. And yes, this is entirely due to the new test case which is\ngenerating / inserting 70k rows. So maybe the test case is not worth it\nafter all, and we should get rid of it.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 12 Jun 2021 00:39:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> There's one caveat, though - for regular builds the slowdown is pretty\n> much eliminated. But with valgrind it's still considerably slower. For\n> postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n> takes >1h. And yes, this is entirely due to the new test case which is\n> generating / inserting 70k rows. So maybe the test case is not worth it\n> after all, and we should get rid of it.\n\nI bet the CLOBBER_CACHE animals won't like it much either.\n\nI suggest what we do is leave it in place for long enough to get\na round of reports from those slow animals, and then (assuming\nthose reports are positive) drop the test.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 11 Jun 2021 18:44:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 2021-Jun-12, Tomas Vondra wrote:\n\n> There's one caveat, though - for regular builds the slowdown is pretty\n> much eliminated. But with valgrind it's still considerably slower. For\n> postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n> takes >1h. And yes, this is entirely due to the new test case which is\n> generating / inserting 70k rows. So maybe the test case is not worth it\n> after all, and we should get rid of it.\n\nHmm, what if the table is made 1600 columns wide -- would inserting 41\nrows be sufficient to trigger the problem case? If it does, maybe it\nwould reduce the runtime for valgrind/cache-clobber animals enough that\nit's no longer a concern.\n\n-- \n�lvaro Herrera Valdivia, Chile\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n",
"msg_date": "Sat, 12 Jun 2021 20:40:40 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 6/13/21 2:40 AM, Alvaro Herrera wrote:\n> On 2021-Jun-12, Tomas Vondra wrote:\n> \n>> There's one caveat, though - for regular builds the slowdown is pretty\n>> much eliminated. But with valgrind it's still considerably slower. For\n>> postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n>> takes >1h. And yes, this is entirely due to the new test case which is\n>> generating / inserting 70k rows. So maybe the test case is not worth it\n>> after all, and we should get rid of it.\n> \n> Hmm, what if the table is made 1600 columns wide -- would inserting 41\n> rows be sufficient to trigger the problem case? If it does, maybe it\n> would reduce the runtime for valgrind/cache-clobber animals enough that\n> it's no longer a concern.\n> \n\nGood idea. I gave that a try, creating a table with 1500 columns and\ninserting 50 rows (so 75k parameters). See the attached patch.\n\nWhile this cuts the runtime about in half (to ~30 minutes on my laptop),\nthat's probably not enough - it's still about ~6x longer than it used to\ntake. All these timings are with valgrind.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 13 Jun 2021 15:54:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 6:10 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jun-12, Tomas Vondra wrote:\n>\n> > There's one caveat, though - for regular builds the slowdown is pretty\n> > much eliminated. But with valgrind it's still considerably slower. For\n> > postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n> > takes >1h. And yes, this is entirely due to the new test case which is\n> > generating / inserting 70k rows. So maybe the test case is not worth it\n> > after all, and we should get rid of it.\n>\n> Hmm, what if the table is made 1600 columns wide -- would inserting 41\n> rows be sufficient to trigger the problem case? If it does, maybe it\n> would reduce the runtime for valgrind/cache-clobber animals enough that\n> it's no longer a concern.\n\nYeah, that's a good idea. PSA patch that creates the table of 1600\ncolumns and inserts 41 rows into the foreign table. If the batch_size\nadjustment fix isn't there, we will hit the error. On my dev system,\npostgres_fdw contrib regression tests execution time: with and without\nthe attached patch 4.5 sec and 5.7 sec respectively.\n\nOn Sun, Jun 13, 2021 at 7:25 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Good idea. I gave that a try, creating a table with 1500 columns and\n> inserting 50 rows (so 75k parameters). See the attached patch.\n\nThanks for the patch. I also prepared a patch, just sharing. I'm okay\nif it's ignored.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Sun, 13 Jun 2021 20:55:23 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "\n\nOn 6/13/21 5:25 PM, Bharath Rupireddy wrote:\n> On Sun, Jun 13, 2021 at 6:10 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2021-Jun-12, Tomas Vondra wrote:\n>>\n>>> There's one caveat, though - for regular builds the slowdown is pretty\n>>> much eliminated. But with valgrind it's still considerably slower. For\n>>> postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n>>> takes >1h. And yes, this is entirely due to the new test case which is\n>>> generating / inserting 70k rows. So maybe the test case is not worth it\n>>> after all, and we should get rid of it.\n>>\n>> Hmm, what if the table is made 1600 columns wide -- would inserting 41\n>> rows be sufficient to trigger the problem case? If it does, maybe it\n>> would reduce the runtime for valgrind/cache-clobber animals enough that\n>> it's no longer a concern.\n> \n> Yeah, that's a good idea. PSA patch that creates the table of 1600\n> columns and inserts 41 rows into the foreign table. If the batch_size\n> adjustment fix isn't there, we will hit the error. On my dev system,\n> postgres_fdw contrib regression tests execution time: with and without\n> the attached patch 4.5 sec and 5.7 sec respectively.\n> \n\nBut we're discussing cases with valgrind and/or CLOBBER_CACHE_ALWAYS.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 13 Jun 2021 17:58:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Sun, Jun 13, 2021 at 9:28 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 6/13/21 5:25 PM, Bharath Rupireddy wrote:\n> > On Sun, Jun 13, 2021 at 6:10 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >>\n> >> On 2021-Jun-12, Tomas Vondra wrote:\n> >>\n> >>> There's one caveat, though - for regular builds the slowdown is pretty\n> >>> much eliminated. But with valgrind it's still considerably slower. For\n> >>> postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n> >>> takes >1h. And yes, this is entirely due to the new test case which is\n> >>> generating / inserting 70k rows. So maybe the test case is not worth it\n> >>> after all, and we should get rid of it.\n> >>\n> >> Hmm, what if the table is made 1600 columns wide -- would inserting 41\n> >> rows be sufficient to trigger the problem case? If it does, maybe it\n> >> would reduce the runtime for valgrind/cache-clobber animals enough that\n> >> it's no longer a concern.\n> >\n> > Yeah, that's a good idea. PSA patch that creates the table of 1600\n> > columns and inserts 41 rows into the foreign table. If the batch_size\n> > adjustment fix isn't there, we will hit the error. On my dev system,\n> > postgres_fdw contrib regression tests execution time: with and without\n> > the attached patch 4.5 sec and 5.7 sec respectively.\n> >\n>\n> But we're discussing cases with valgrind and/or CLOBBER_CACHE_ALWAYS.\n\nOkay. Here are the readings on my dev system:\n1) on master with the existing test case with inserting 70K rows:\n4263200 ms (71.05 min)\n2) with Tomas's patch with the test case modified with 1500 table\ncolumns and 50 rows, (majority of the time ~30min it took in SELECT\ncreate_batch_tables(1500); statement. 
I measured this time manually,\nlooking at the start and end times of the statement) - 6649312 ms (110.8\nmin)\n3) with my patch with the test case modified with 1600 table columns and\n41 rows: 4003007 ms (66.71 min)\n4) on master without the test case at all: 3770722 ms (62.84 min)\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Mon, 14 Jun 2021 17:33:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On Mon, Jun 14, 2021, 5:33 PM Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> Okay. Here are the readings on my dev system:\n> 1) on master with the existing test case with inserting 70K rows:\n> 4263200 ms (71.05 min)\n> 2) with Tomas's patch with the test case modified with 1500 table\n> columns and 50 rows, (majority of the time ~30min it took in SELECT\n> create_batch_tables(1500); statement. I measured this time manually\n> looking at the start and end time of the statement - 6649312 ms (110.8\n> min)\n> 3) with my patch with test case modified with 1600 table columns and\n> 41 rows: 4003007 ms (66.71 min)\n> 4) on master without the test case at all: 3770722 ms (62.84 min)\n>\n\nI forgot to mention one thing: I ran the above tests with\nCLOBBER_CACHE_ALWAYS.\n\nRegards,\nBharath Rupireddy.\n\n>\n\nOn Mon, Jun 14, 2021, 5:33 PM Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\nOkay. Here are the readings on my dev system:\n1) on master with the existing test case with inserting 70K rows:\n4263200 ms (71.05 min)\n2) with Tomas's patch with the test case modified with 1500 table\ncolumns and 50 rows, (majority of the time ~30min it took in SELECT\ncreate_batch_tables(1500); statement. I measured this time manually\nlooking at the start and end time of the statement - 6649312 ms (110.8\nmin)\n3) with my patch with test case modified with 1600 table columns and\n41 rows: 4003007 ms (66.71 min)\n4) on master without the test case at all: 3770722 ms (62.84 min)I forgot to mention one thing: I ran the above tests with CLOBBER_CACHE_ALWAYS.Regards,Bharath Rupireddy.",
"msg_date": "Mon, 14 Jun 2021 18:08:47 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Hi,\n\nOn 2021-06-11 18:44:28 -0400, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > There's one caveat, though - for regular builds the slowdown is pretty\n> > much eliminated. But with valgrind it's still considerably slower. For\n> > postgres_fdw the \"make check\" used to take ~5 minutes for me, now it\n> > takes >1h. And yes, this is entirely due to the new test case which is\n> > generating / inserting 70k rows. So maybe the test case is not worth it\n> > after all, and we should get rid of it.\n> \n> I bet the CLOBBER_CACHE animals won't like it much either.\n> \n> I suggest what we do is leave it in place for long enough to get\n> a round of reports from those slow animals, and then (assuming\n> those reports are positive) drop the test.\n\nI just encountered this test because it doesn't succeed on a 32bit system with\naddress sanitizer enabled - it runs out of memory. At that point there are\n\"just\" 29895 parameters parsed...\n\nIt's also the slowest step on skink (valgrind animal), taking nearly an hour.\n\nI think two years later is long enough to have some confidence in this being\nfixed?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 1 Jul 2023 23:09:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-06-11 18:44:28 -0400, Tom Lane wrote:\n>> I suggest what we do is leave it in place for long enough to get\n>> a round of reports from those slow animals, and then (assuming\n>> those reports are positive) drop the test.\n\n> I think two years later is long enough to have some confidence in this being\n> fixed?\n\n+1, time to drop it (in the back branches too).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Jul 2023 09:23:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 7/2/23 15:23, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2021-06-11 18:44:28 -0400, Tom Lane wrote:\n>>> I suggest what we do is leave it in place for long enough to get\n>>> a round of reports from those slow animals, and then (assuming\n>>> those reports are positive) drop the test.\n> \n>> I think two years later is long enough to have some confidence in this being\n>> fixed?\n> \n> +1, time to drop it (in the back branches too).\n> \n\nOK, will do (unless someone else wants to handle this) on Monday.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 2 Jul 2023 15:50:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
},
{
"msg_contents": "On 7/2/23 15:50, Tomas Vondra wrote:\n> On 7/2/23 15:23, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> On 2021-06-11 18:44:28 -0400, Tom Lane wrote:\n>>>> I suggest what we do is leave it in place for long enough to get\n>>>> a round of reports from those slow animals, and then (assuming\n>>>> those reports are positive) drop the test.\n>>\n>>> I think two years later is long enough to have some confidence in this being\n>>> fixed?\n>>\n>> +1, time to drop it (in the back branches too).\n>>\n> \n> OK, will do (unless someone else wants to handle this) on Monday.\n> \n\nFWIW I've removed the test from all branches where it was present.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 3 Jul 2023 18:48:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fdw batch insert error out when set batch_size > 65535"
}
] |
[
{
"msg_contents": "-hackers,\n\nI think commit 82ed7748b710e3ddce3f7ebc74af80fe4869492f created some confusion that should be cleaned up before release. I'd like some guidance on what the intended behavior is before I submit a patch for this, though:\n\n+ALTER SUBSCRIPTION mysubscription SET PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n+ALTER SUBSCRIPTION mysubscription ADD PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n+ALTER SUBSCRIPTION mysubscription DROP PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n+ERROR: unrecognized subscription parameter: \"copy_data\"\n+ALTER SUBSCRIPTION mysubscription SET (copy_data = false, refresh = false);\n+ERROR: unrecognized subscription parameter: \"copy_data\"\n\nFirst, it's quite odd to say that \"copy_data\" is unrecognized in the third and fourth ALTER commands when it was recognized just fine in the first two.\n\nMore than that, though, the docs in doc/src/sgml/ref/alter_subscription.sgml refer to this part of the grammar in the first three ALTER commands as a \"set_publication_option\", not as a \"subscription_parameter\", a term which is only used in the grammar for other forms of the ALTER command. Per the grammar in the docs, \"copy_data\" is not a valid set_publication_option, only \"refresh\" is.\n\nShould the first three ALTER commands fail with an error about \"copy_data\" being an invalid set_publication_option? Should they succeed, in which case the docs should mention that \"refresh\" is not the only valid set_publication_option?\n\nSomething else, perhaps?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Fri, 21 May 2021 13:19:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
},
{
"msg_contents": "On Sat, May 22, 2021 at 1:49 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n>\n> -hackers,\n>\n> I think commit 82ed7748b710e3ddce3f7ebc74af80fe4869492f created some confusion that should be cleaned up before release. I'd like some guidance on what the intended behavior is before I submit a patch for this, though:\n>\n> +ALTER SUBSCRIPTION mysubscription SET PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n> +ALTER SUBSCRIPTION mysubscription ADD PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n> +ALTER SUBSCRIPTION mysubscription DROP PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n> +ERROR: unrecognized subscription parameter: \"copy_data\"\n> +ALTER SUBSCRIPTION mysubscription SET (copy_data = false, refresh = false);\n> +ERROR: unrecognized subscription parameter: \"copy_data\"\n>\n> First, it's quite odd to say that \"copy_data\" is unrecognized in the third and fourth ALTER commands when it was recognized just fine in the first two.\n\nFor ALTER SUBSCRIPTION ... DROP PUBLICATION, copy_data option is not\nrequired actually, because it doesn't add new publications. If the\nconcern here is \"why refresh is allowed but not copy_data\", then the\nanswer is \"with the refresh option the updated publications can be\nrefreshed, this avoids users to run REFRESH PUBLICATION after DROP\nPUBLICATION\". So, disallowing copy_data makes sense to me.\n\nFor ALTER SUBSCRIPTION ... SET, allowed options are slot_name,\nsynchronous_commit, binary and streaming which are part of\npg_subscription catalog and will be used by apply/sync workers.\nWhereas copy_data and refresh are not part of pg_subscription catalog\nand are not used by apply/sync workers (directly), but by the backend.\nWe have ALTER SUBSCRIPTION .. 
REFRESH specifically for refresh and\ncopy_data options.\n\n> More than that, though, the docs in doc/src/sgml/ref/alter_subscription.sgml refer to this part of the grammar in the first three ALTER commands as a \"set_publication_option\", not as a \"subscription_parameter\", a term which is only used in the grammar for other forms of the ALTER command. Per the grammar in the docs, \"copy_data\" is not a valid set_publication_option, only \"refresh\" is.\n\nset_publication_option - options are refresh and copy_data (this\noption comes implicitly, please see the note \"Additionally, refresh\noptions as described under REFRESH PUBLICATION may be specified.\",\nunder refresh_option we have copy_data)\n\nsubscription_parameter - options are slot_name, synchronous_commit,\nbinary, and streaming. This is correct.\n\n> Should the first three ALTER commands fail with an error about \"copy_data\" being an invalid set_publication_option? Should they succeed, in which case the docs should mention that \"refresh\" is not the only valid set_publication_option?\n\nNo that's not correct. As I said above, set_publication_option options\nare both refresh and copy_data.\n\n> Something else, perhaps?\n\nUnless I misunderstood any of your concerns, I think the existing docs\nand the code looks correct to me.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 22 May 2021 11:09:10 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
},
{
"msg_contents": "\n\n> On May 21, 2021, at 10:39 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Sat, May 22, 2021 at 1:49 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> \n>> -hackers,\n>> \n>> I think commit 82ed7748b710e3ddce3f7ebc74af80fe4869492f created some confusion that should be cleaned up before release. I'd like some guidance on what the intended behavior is before I submit a patch for this, though:\n>> \n>> +ALTER SUBSCRIPTION mysubscription SET PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n>> +ALTER SUBSCRIPTION mysubscription ADD PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n>> +ALTER SUBSCRIPTION mysubscription DROP PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n>> +ERROR: unrecognized subscription parameter: \"copy_data\"\n>> +ALTER SUBSCRIPTION mysubscription SET (copy_data = false, refresh = false);\n>> +ERROR: unrecognized subscription parameter: \"copy_data\"\n>> \n>> First, it's quite odd to say that \"copy_data\" is unrecognized in the third and fourth ALTER commands when it was recognized just fine in the first two.\n> \n> For ALTER SUBSCRIPTION ... DROP PUBLICATION, copy_data option is not\n> required actually, because it doesn't add new publications. If the\n> concern here is \"why refresh is allowed but not copy_data\", then the\n> answer is \"with the refresh option the updated publications can be\n> refreshed, this avoids users to run REFRESH PUBLICATION after DROP\n> PUBLICATION\". So, disallowing copy_data makes sense to me.\n\nMy concern isn't that the code is doing the wrong thing, but that the docs and the error messages are confusing. 
This is particularly troubling given that having a single action which combines the dropping of one publication with the refreshing of other publications is not particularly intuitive.\n\nI agree that disallowing copy_data DROP PUBLICATION is a reasonable design choice, but I do not agree that this prohibition is intuitive. If I want to copy the data for a set of tables on a remote server, and only copy that data exactly once, I might be looking for an atomic action to do so. The docs are totally unclear on whether this is supported, so I might try:\n\n CREATE SUBSCRIPTION tempsub CONNECTION 'dbname=remotedb' PUBLICATION remotepub WITH (connect = false, enabled = false, slot_name = NONE, create_slot = false);\n ALTER SUBSCRIPTION tempsub DROP PUBLICATION remotepub WITH (refresh = true, copy_data = true);\n\nwith the intention that the data will be copied right before the publication is dropped. When I get an error that says 'unrecognized subscription parameter: \"copy_data\"', I'm likely to think I mistyped the parameter name, not that it is disallowed in this setting. If I then decide to just drop the publication (since my experiment didn't work) and try to do so using:\n\n ALTER SUBSCRIPTION tempsub DROP PUBLICATION remotepub WITH (refresh = false, copy_data = false);\n\nI seem to be playing by the rules, since I am explicitly not requesting \"copy_data\". That's what the \"false\" means. But again, the command complains that \"copy_data\" is unrecognized. At this point, I go back to the docs and it clearly says that \"copy_data\" is a supported parameter in this command. I'm totally confused.\n\nI think the docs should say that \"copy_data\" is not allowed for DROP PUBLICATION. I think no error should occur for copy_data = false. For copy_data = true, I think the error message should say that copy_data is disallowed during a DROP PUBLICATION, rather than saying that the parameter is unrecognized.\n\n> For ALTER SUBSCRIPTION ... 
SET, allowed options are slot_name,\n> synchronous_commit, binary and streaming which are part of\n> pg_subscription catalog and will be used by apply/sync workers.\n> Whereas copy_data and refresh are not part of pg_subscription catalog\n> and are not used by apply/sync workers (directly), but by the backend.\n> We have ALTER SUBSCRIPTION .. REFRESH specifically for refresh and\n> copy_data options.\n> \n>> More than that, though, the docs in doc/src/sgml/ref/alter_subscription.sgml refer to this part of the grammar in the first three ALTER commands as a \"set_publication_option\", not as a \"subscription_parameter\", a term which is only used in the grammar for other forms of the ALTER command. Per the grammar in the docs, \"copy_data\" is not a valid set_publication_option, only \"refresh\" is.\n> \n> set_publication_option - options are refresh and copy_data (this\n> option comes implicitly, please see the note \"Additionally, refresh\n> options as described under REFRESH PUBLICATION may be specified.\",\n> under refresh_option we have copy_data)\n> \n> subscription_parameter - options are slot_name, synchronous_commit,\n> binary, and streaming. This is correct.\n> \n>> Should the first three ALTER commands fail with an error about \"copy_data\" being an invalid set_publication_option? Should they succeed, in which case the docs should mention that \"refresh\" is not the only valid set_publication_option?\n> \n> No that's not correct. As I said above, set_publication_option options\n> are both refresh and copy_data.\n\nWell, not really. We're using the phrase \"set_publication_option\" for all three of SET PUBLICATION, ADD PUBLICATION, and DROP PUBLICATION. 
Since that's not really supported, we should use it only for the first two, and have a separate \"drop_publication_option\" for the third.\n\n>> Something else, perhaps?\n> \n> Unless I misunderstood any of your concerns, I think the existing docs\n> and the code looks correct to me.\n\nThanks for your response. The docs and error messages still don't look right to me.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sat, 22 May 2021 09:52:41 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
},
{
"msg_contents": "On Sat, May 22, 2021 at 10:22 PM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> My concern isn't that the code is doing the wrong thing, but that the docs and the error messages are confusing. This is particularly troubling given that having a single action which combines the dropping of one publication with the refreshing of other publications is not particularly intuitive.\n>\n> I agree that disallowing copy_data DROP PUBLICATION is a reasonable design choice, but I do not agree that this prohibition is intuitive. If I want to copy the data for a set of tables on a remote server, and only copy that data exactly once, I might be looking for an atomic action to do so. The docs are totally unclear on whether this is supported, so I might try:\n>\n> CREATE SUBSCRIPTION tempsub CONNECTION 'dbname=remotedb' PUBLICATION remotepub WITH (connect = false, enabled = false, slot_name = NONE, create_slot = false);\n> ALTER SUBSCRIPTION tempsub DROP PUBLICATION remotepub WITH (refresh = true, copy_data = true);\n>\n> with the intention that the data will be copied right before the publication is dropped. When I get an error that says 'unrecognized subscription parameter: \"copy_data\"', I'm likely to think I mistyped the parameter name, not that it is disallowed in this setting. If I then decide to just drop the publication (since my experiment didn't work) and try to do so using:\n>\n> ALTER SUBSCRIPTION tempsub DROP PUBLICATION remotepub WITH (refresh = false, copy_data = false);\n>\n> I seem to be playing by the rules, since I am explicitly not requesting \"copy_data\". That's what the \"false\" means. But again, the command complains that \"copy_data\" is unrecognized. At this point, I go back to the docs and it clearly says that \"copy_data\" is a supported parameter in this command. I'm totally confused.\n>\n> I think the docs should say that \"copy_data\" is not allowed for DROP PUBLICATION. I think no error should occur for copy_data = false. 
For copy_data = true, I think the error message should say that copy_data is disallowed during a DROP PUBLICATION, rather than saying that the parameter is unrecognized.\n\nThanks for the detailed explanation. I think there are two\npossibilities - unrecognised options and disallowed options. If a user\nenters an option 'blah_blah', then the error \"unrecognized\nsubscription parameter: \"blah_blah\"\" is meaningful. If a user enters\n'copy_data' for DROP PUBLICATION, then an error something like\n\"\"copy_data\" is not allowed for ALTER SUBSCRIPTION ... DROP\nPUBLICATION\" will be more meaningful. If this understanding is\ncorrect, I wonder we should also have similar change for:\n\npostgres=# ALTER SUBSCRIPTION testsub REFRESH PUBLICATION WITH (refresh=true);\nERROR: unrecognized subscription parameter: \"refresh\"\n\npostgres=# ALTER SUBSCRIPTION testsub REFRESH PUBLICATION WITH\n(synchronous_commit=' ');\nERROR: unrecognized subscription parameter: \"synchronous_commit\"\n\npostgres=# ALTER SUBSCRIPTION testsub SET (refresh=true);\nERROR: unrecognized subscription parameter: \"refresh\"\n\n> Well, not really. We're using the phrase \"set_publication_option\" for all three of SET PUBLICATION, ADD PUBLICATION, and DROP PUBLICATION. Since that's not really supported, we should use it only for the first two, and have a separate \"drop_publication_option\" for the third.\n\nThere's another thread [1], that tries to fix this, where the earlier\nsuggestion was to drop_publication_option, but later the agreement was\nto change the \"set_publication_option\" to \"publication_option\", and\nhad it for SET/ADD/DROP with a note like below. 
If that doesn't work,\nI suggest putting the thoughts there in that thread.\n- Additionally, refresh options as described\n- under <literal>REFRESH PUBLICATION</literal> may be specified.\n+ Additionally, refresh options as described under\n<literal>REFRESH PUBLICATION</literal>\n+ may be specified, except for <literal>DROP PUBLICATION</literal>.\n\n> Thanks for your response. The docs and error messages still don't look right to me.\n\nI think, for the docs part we can move the discussion to the thread\n[1], if you are okay, and have the error message discussion here.\n\n[1] - https://www.postgresql.org/message-id/flat/CALDaNm34qugTr5M0d1JgCgk2Qdo6LZ9UEbTBG%3DTBNV5QADPLWg%40mail.gmail.com\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 23 May 2021 11:10:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
},
{
"msg_contents": "\n\n> On May 22, 2021, at 10:40 PM, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> I think, for the docs part we can move the discussion to the thread\n> [1], if you are okay, and have the error message discussion here.\n> \n> [1] - https://www.postgresql.org/message-id/flat/CALDaNm34qugTr5M0d1JgCgk2Qdo6LZ9UEbTBG%3DTBNV5QADPLWg%40mail.gmail.com\n\nSure, and thanks for the link!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Sun, 23 May 2021 07:17:17 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
},
{
"msg_contents": "On 21.05.21 22:19, Mark Dilger wrote:\n> +ALTER SUBSCRIPTION mysubscription DROP PUBLICATION nosuchpub WITH (copy_data = false, refresh = false);\n> +ERROR: unrecognized subscription parameter: \"copy_data\"\n> +ALTER SUBSCRIPTION mysubscription SET (copy_data = false, refresh = false);\n> +ERROR: unrecognized subscription parameter: \"copy_data\"\n\nBetter wording might be something along the lines of \"subscription \nparameter %s not supported in this context\". I'm not sure how easy this \nwould be to implement, but with enough brute force it would surely be \npossible.\n\n\n",
"msg_date": "Tue, 25 May 2021 13:36:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing the docs for ALTER SUBSCRIPTION ... ADD/DROP PUBLICATION"
}
] |
[
{
"msg_contents": "Hello\n\nSome time ago I asked about \"Proposition for autoname columns\"\nhttps://www.postgresql.org/message-id/131355559.20201102170529%40yandex.ru\n\n\nNow I have another idea. How about table_name.**?\n\nwhich would be expanded to: table_name.id, table_name.name, table_name.qty etc.\n\n\nIn my original query I cannot just write:\nSELECT\n  acc_i.*,\n  acc_u.*\nFROM \"order_bt\" o\nLEFT JOIN acc_ready( 'Invoice', app_period(), o ) acc_i ON acc_i.ready\nLEFT JOIN acc_ready( 'Usage', app_period(), o ) acc_u ON acc_u.ready\n\nbecause I cannot then refer to the columns from the different tables; they have the same names =(\n\nSo I need to write:\nSELECT\n  acc_i.ready as acc_i_ready,\n  acc_i.acc_period as acc_i_period,\n  acc_i.consumed_period as acc_i_consumed_period,\n  acc_u.ready as acc_u_ready,\n  acc_u.acc_period as acc_u_period,\n  acc_u.consumed_period as acc_u_consumed_period,\nFROM \"order_bt\" o\nLEFT JOIN acc_ready( 'Invoice', app_period(), o ) acc_i ON acc_i.ready\nLEFT JOIN acc_ready( 'Usage', app_period(), o ) acc_u ON acc_u.ready\n\n\nIt would be cool if I could just write:\n\nSELECT\n  acc_i.**,\n  acc_u.**\nFROM \"order_bt\" o\nLEFT JOIN acc_ready( 'Invoice', app_period(), o ) acc_i ON acc_i.ready\nLEFT JOIN acc_ready( 'Usage', app_period(), o ) acc_u ON acc_u.ready\n\n\nWhat do you think about this proposition?\n-- \nBest regards,\nEugen Konkov\n\n\n\n",
"msg_date": "Sat, 22 May 2021 11:37:54 +0300",
"msg_from": "Eugen Konkov <kes-kes@yandex.ru>",
"msg_from_op": true,
"msg_subject": "Proposition for columns expanding: table_name.**"
}
] |
[
{
"msg_contents": "Our website links to:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc\n\nin order to display the latest build log for the docs.\n\nThis appears to have stopped working at some point. I don't know when,\nunfortunately, it was just brought to my attention now.\n\nHopefully this is something that can easily be brought back? I'm not\nsure exactly how useful the build log is, but if it's easy enough to\nrestore...\n\n(The link works fine if I put the exact date in, but the whole point\nof it is to have a static link on the website that goes to whatever is\nlatest at the particular moment)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sat, 22 May 2021 22:20:49 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Buildfarm latest links"
},
{
"msg_contents": "\nOn 5/22/21 4:20 PM, Magnus Hagander wrote:\n> Our website links to:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc\n>\n> in order to display the latest build log for the docs.\n>\n> This appears to have stopped working at some point. I don't know when,\n> unfortunately, it was just brought to my attention now.\n>\n> Hopefully this is something that can easily be brought back? I'm not\n> sure exactly how useful the build log is, but if it's easy enough to\n> restore...\n>\n> (The link works fine if I put the exact date in, but the whole point\n> of it is to have a static link on the website that goes to whatever is\n> latest at the particular moment)\n>\n\n\nOdd. the code for it is still there. I will investigate.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 22 May 2021 17:40:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Buildfarm latest links"
},
{
"msg_contents": "On Sat, May 22, 2021, 23:40 Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 5/22/21 4:20 PM, Magnus Hagander wrote:\n> > Our website links to:\n> >\n> >\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc\n> >\n> > in order to display the latest build log for the docs.\n> >\n> > This appears to have stopped working at some point. I don't know when,\n> > unfortunately, it was just brought to my attention now.\n> >\n> > Hopefully this is something that can easily be brought back? I'm not\n> > sure exactly how useful the build log is, but if it's easy enough to\n> > restore...\n> >\n> > (The link works fine if I put the exact date in, but the whole point\n> > of it is to have a static link on the website that goes to whatever is\n> > latest at the particular moment)\n> >\n>\n>\n> Odd. the code for it is still there. I will investigate.\n>\n\n\nWithout looking closely, I wonder if you break it by validating the dates\ntoo early or so? That seems to be a relatively recent commit...\n\n/Magnus",
"msg_date": "Sat, 22 May 2021 23:42:48 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Buildfarm latest links"
},
{
"msg_contents": "\nOn 5/22/21 5:42 PM, Magnus Hagander wrote:\n>\n>\n> On Sat, May 22, 2021, 23:40 Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n> On 5/22/21 4:20 PM, Magnus Hagander wrote:\n> > Our website links to:\n> >\n> >\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc>\n> >\n> > in order to display the latest build log for the docs.\n> >\n> > This appears to have stopped working at some point. I don't know\n> when,\n> > unfortunately, it was just brought to my attention now.\n> >\n> > Hopefully this is something that can easily be brought back? I'm not\n> > sure exactly how useful the build log is, but if it's easy enough to\n> > restore...\n> >\n> > (The link works fine if I put the exact date in, but the whole point\n> > of it is to have a static link on the website that goes to\n> whatever is\n> > latest at the particular moment)\n> >\n>\n>\n> Odd. the code for it is still there. I will investigate.\n>\n>\n>\n> Without looking closely, I wonder if you break it by validating the\n> dates too early or so? That seems to be a relatively recent commit... \n>\n>\n\nYeah. Should be fixed now.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 22 May 2021 17:50:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Buildfarm latest links"
},
{
"msg_contents": "On Sat, May 22, 2021 at 11:50 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>\n> On 5/22/21 5:42 PM, Magnus Hagander wrote:\n> >\n> >\n> > On Sat, May 22, 2021, 23:40 Andrew Dunstan <andrew@dunslane.net\n> > <mailto:andrew@dunslane.net>> wrote:\n> >\n> >\n> > On 5/22/21 4:20 PM, Magnus Hagander wrote:\n> > > Our website links to:\n> > >\n> > >\n> > https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc\n> > <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=guaibasaurus&dt=latest&stg=make-doc>\n> > >\n> > > in order to display the latest build log for the docs.\n> > >\n> > > This appears to have stopped working at some point. I don't know\n> > when,\n> > > unfortunately, it was just brought to my attention now.\n> > >\n> > > Hopefully this is something that can easily be brought back? I'm not\n> > > sure exactly how useful the build log is, but if it's easy enough to\n> > > restore...\n> > >\n> > > (The link works fine if I put the exact date in, but the whole point\n> > > of it is to have a static link on the website that goes to\n> > whatever is\n> > > latest at the particular moment)\n> > >\n> >\n> >\n> > Odd. the code for it is still there. I will investigate.\n> >\n> >\n> >\n> > Without looking closely, I wonder if you break it by validating the\n> > dates too early or so? That seems to be a relatively recent commit...\n> >\n> >\n>\n> Yeah. Should be fixed now.\n\nThanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Sun, 23 May 2021 12:36:32 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": true,
"msg_subject": "Re: Buildfarm latest links"
}
] |
[
{
"msg_contents": "It appears the development version of the release notes are not\naccessible. If you go from:\n\n https://www.postgresql.org/developer/testing/\nto\n https://www.postgresql.org/docs/devel/\nto\n https://www.postgresql.org/docs/devel/release.html\n\nit is fine, but once you ask for the PG 14 release notes it switches to:\n\n https://www.postgresql.org/docs/14/release-14.html\n --\nMagnus says this is because we have not branched PG 14 yet, and there is\nsome code that tries to find the most appropriate release notes. (The\nrest of the docs seem to match current git master.) Actually, once we\nbranch, the PG 14 release notes will be removed from master. Therefore,\nany changes I make to the release notes will not appear until each beta\nis released since we only build the release notes during packaging.\n\nAnyway, this means that the markup I added post-packaging to PG 14 is\ninaccessible, and will not appear until PG 14 beta 2. Here is the\nactual markup from my local build:\n\n https://momjian.us/pgsql_docs/release-14.html\n\nI apologize I was not able to get the markup done before PG 14 beta1 was\npackaged. If people want to suggest changes to the release notes, they\nwill have to build it from source or use that URL.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 22 May 2021 17:56:32 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Development version of release notes"
},
{
"msg_contents": "On Sat, May 22, 2021 at 05:56:32PM -0400, Bruce Momjian wrote:\n> Anyway, this means that the markup I added post-packaging to PG 14 is\n> inaccessible, and will not appear until PG 14 beta 2. Here is the\n> actual markup from my local build:\n> \n> https://momjian.us/pgsql_docs/release-14.html\n> \n> I apologize I was not able to get the markup done before PG 14 beta1 was\n> packaged. If people want to suggest changes to the release notes, they\n> will have to build it from source or use that URL.\n\nThinking some more, since we no longer have backbranch release notes in\nour master tree, there really is no way to display updated release notes\n_except_ when they are packaged in beta/RC/final releases and then\nposted to our website. My local URL should be used by anyone wanting\nto view the current version, if they don't want to build it themselves.\n\n-- \n  Bruce Momjian  <bruce@momjian.us>        https://momjian.us\n  EDB                                      https://enterprisedb.com\n\n  If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Sat, 22 May 2021 19:27:06 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Development version of release notes"
}
] |
[
{
"msg_contents": "Hi,\n\nI was just looking at RelOptInfo's partitioning fields and noticed\nthat all_partrels seems to be set in a couple of places but never\nactually referenced.\n\nThe field was added in c8434d64c - Allow partitionwise joins in more\ncases for PG13.\n\nMaybe it did something during the development of that patch but the\ncode didn't end up being committed?\n\nShould we get rid of it before it's too late for PG14?\n\nDavid\n\n\n",
"msg_date": "Sun, 23 May 2021 19:01:30 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "RelOptInfo.all_partrels does not seem to do very much"
},
{
"msg_contents": "On Sun, 23 May 2021 at 19:01, David Rowley <dgrowleyml@gmail.com> wrote:\n> I was just looking at RelOptInfo's partitioning fields and noticed\n> that all_partrels seems to be set in a couple of places but never\n> actually referenced.\n\nLooks like I misread the code. It is used. Apologies for the noise.\n\nDavid\n\n\n",
"msg_date": "Sun, 23 May 2021 22:03:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RelOptInfo.all_partrels does not seem to do very much"
}
] |
[
{
"msg_contents": "Hi\n\n\nDuring my review of a patch in the community,\nI've encountered failures of OSS HEAD'S make check-world in a continuous loop.\nI just repeated make check-world. Accordingly, this should be an existing issue.\nMake check-world fails once in about 20 times in my env. I'd like to report this.\n\nThe test itself ended with stderr messages below.\n\nNOTICE: database \"regression\" does not exist, skipping\nmake[2]: *** [check] Error 1\nmake[1]: *** [check-isolation-recurse] Error 2\nmake[1]: *** Waiting for unfinished jobs....\nmake: *** [check-world-src/test-recurse] Error 2\n\nAlso, I've gotten some logs left.\n* src/test/isolation/output_iso/regression.out\n\ntest detach-partition-concurrently-1 ... ok 682 ms\ntest detach-partition-concurrently-2 ... ok 321 ms\ntest detach-partition-concurrently-3 ... FAILED 1084 ms\ntest detach-partition-concurrently-4 ... ok 1078 ms\ntest fk-contention ... ok 77 ms\n\n* src/test/isolation/output_iso/regression.diffs\n\ndiff -U3 /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently-3.out /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\n--- /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently-3.out 2021-05-24 03:30:15.735488295 +0000\n+++ /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out 2021-05-24 04:46:48.851488295 +0000\n@@ -12,9 +12,9 @@\n pg_cancel_backend\n \n t \n-step s2detach: <... completed>\n-error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n step s1c: COMMIT;\n+step s2detach: <... 
completed>\n+error in steps s1c s2detach: ERROR: canceling statement due to user request\n step s1describe: SELECT 'd3_listp' AS root, * FROM pg_partition_tree('d3_listp')\n UNION ALL SELECT 'd3_listp1', * FROM pg_partition_tree('d3_listp1');\n root relid parentrelid isleaf level \n.\n\nThe steps I did :\n1 - ./configure --enable-cassert --enable-debug --enable-tap-tests --with-icu CFLAGS=-O0 --prefix=/where/I/put/binary\n2 - make -j2 2> make.log # no stderr output at this stage, of course\n3 - make check-world -j8 2> make_check_world.log\n\tFor the 1st RT, I succeeded. But, repeating the make check-world failed.\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Mon, 24 May 2021 06:37:07 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On 2021-May-24, osumi.takamichi@fujitsu.com wrote:\n\n> Also, I've gotten some logs left.\n> * src/test/isolation/output_iso/regression.out\n> \n> test detach-partition-concurrently-1 ... ok 682 ms\n> test detach-partition-concurrently-2 ... ok 321 ms\n> test detach-partition-concurrently-3 ... FAILED 1084 ms\n> test detach-partition-concurrently-4 ... ok 1078 ms\n> test fk-contention ... ok 77 ms\n> \n> * src/test/isolation/output_iso/regression.diffs\n> \n> diff -U3 /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently-3.out /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\n> --- /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently-3.out 2021-05-24 03:30:15.735488295 +0000\n> +++ /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out 2021-05-24 04:46:48.851488295 +0000\n> @@ -12,9 +12,9 @@\n> pg_cancel_backend\n> \n> t \n> -step s2detach: <... completed>\n> -error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n> step s1c: COMMIT;\n> +step s2detach: <... completed>\n> +error in steps s1c s2detach: ERROR: canceling statement due to user request\n\nUh, how annoying. If I understand correctly, I agree that this is a\ntiming issue: sometimes it is fast enough that the cancel is reported\ntogether with its own step, but other times it takes longer so it is\nreported with the next command of that session instead, s1c (commit).\n\nI suppose a fix would imply that the error report waits until after the\n\"cancel\" step is over, but I'm not sure how to do that.\n\nMaybe we can change the \"cancel\" query to something like\n\nSELECT pg_cancel_backend(pid), somehow_wait_for_detach_to_terminate() FROM d3_pid;\n\n... where maybe that function can check the \"state\" column in s3's\npg_stat_activity row? 
I'll give that a try.\n\n-- \nÁlvaro Herrera               39°49'30\"S 73°17'W\n\"That sort of implies that there are Emacs keystrokes which aren't obscure.\nI've been using it daily for 2 years now and have yet to discover any key\nsequence which makes any sense.\"                        (Paul Thomas)\n\n\n",
"msg_date": "Mon, 24 May 2021 14:07:12 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-May-24, osumi.takamichi@fujitsu.com wrote:\n>> t \n>> -step s2detach: <... completed>\n>> -error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n>> step s1c: COMMIT;\n>> +step s2detach: <... completed>\n>> +error in steps s1c s2detach: ERROR: canceling statement due to user request\n\n> Uh, how annoying. If I understand correctly, I agree that this is a\n> timing issue: sometimes it is fast enough that the cancel is reported\n> together with its own step, but other times it takes longer so it is\n> reported with the next command of that session instead, s1c (commit).\n\nYeah, we see such failures in the buildfarm with various isolation\ntests; some recent examples:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-05-23%2019%3A43%3A04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-05-08%2006%3A34%3A13\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-29%2009%3A43%3A04\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=gharial&dt=2021-04-22%2021%3A24%3A02\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-04-21%2010%3A38%3A32\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=fossa&dt=2021-04-08%2019%3A36%3A06\n\nI remember having tried to rewrite the isolation tester to eliminate\nthe race condition, without success (and I don't seem to have kept\nmy notes, which now I regret).\n\nHowever, the existing hazards seem to hit rarely enough to not be\nmuch of a problem. We might need to see if we can rejigger the\ntiming in this test to make it a little more stable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 14:21:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On Tuesday, May 25, 2021 3:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n> On 2021-May-24, osumi.takamichi@fujitsu.com wrote:\r\n> \r\n> > Also, I've gotten some logs left.\r\n> > * src/test/isolation/output_iso/regression.out\r\n> >\r\n> > test detach-partition-concurrently-1 ... ok 682 ms\r\n> > test detach-partition-concurrently-2 ... ok 321 ms\r\n> > test detach-partition-concurrently-3 ... FAILED 1084 ms\r\n> > test detach-partition-concurrently-4 ... ok 1078 ms\r\n> > test fk-contention ... ok 77 ms\r\n> >\r\n> > * src/test/isolation/output_iso/regression.diffs\r\n> >\r\n> > diff -U3\r\n> /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently\r\n> -3.out\r\n> /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-con\r\n> currently-3.out\r\n> > ---\r\n> /(where/I/put/PG)/src/test/isolation/expected/detach-partition-concurrently\r\n> -3.out 2021-05-24 03:30:15.735488295 +0000\r\n> > +++\r\n> /(where/I/put/PG)/src/test/isolation/output_iso/results/detach-partition-con\r\n> currently-3.out 2021-05-24 04:46:48.851488295 +0000\r\n> > @@ -12,9 +12,9 @@\r\n> > pg_cancel_backend\r\n> >\r\n> > t\r\n> > -step s2detach: <... completed>\r\n> > -error in steps s1cancel s2detach: ERROR: canceling statement due to\r\n> > user request step s1c: COMMIT;\r\n> > +step s2detach: <... completed>\r\n> > +error in steps s1c s2detach: ERROR: canceling statement due to user\r\n> > +request\r\n> \r\n> Uh, how annoying. 
If I understand correctly, I agree that this is a timing issue:\r\n> sometimes it is fast enough that the cancel is reported together with its own\r\n> step, but other times it takes longer so it is reported with the next command of\r\n> that session instead, s1c (commit).\r\n> \r\n> I suppose a fix would imply that the error report waits until after the \"cancel\"\r\n> step is over, but I'm not sure how to do that.\r\n> \r\n> Maybe we can change the \"cancel\" query to something like\r\n> \r\n> SELECT pg_cancel_backend(pid), somehow_wait_for_detach_to_terminate()\r\n> FROM d3_pid;\r\n> \r\n> ... where maybe that function can check the \"state\" column in s3's\r\n> pg_stat_activity row? I'll give that a try.\r\nThank you so much for addressing this issue.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 25 May 2021 00:42:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On Mon, May 24, 2021 at 02:07:12PM -0400, Alvaro Herrera wrote:\n> I suppose a fix would imply that the error report waits until after the\n> \"cancel\" step is over, but I'm not sure how to do that.\n> \n> Maybe we can change the \"cancel\" query to something like\n> \n> SELECT pg_cancel_backend(pid), somehow_wait_for_detach_to_terminate() FROM d3_pid;\n> \n> ... where maybe that function can check the \"state\" column in s3's\n> pg_stat_activity row? I'll give that a try.\n\nCouldn't you achieve that with a small PL function in a way similar to\nwhat 32a9c0b did, except that you track a different state in\npg_stat_activity for this PID?\n--\nMichael",
"msg_date": "Tue, 25 May 2021 09:46:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, May 24, 2021 at 02:07:12PM -0400, Alvaro Herrera wrote:\n>> Maybe we can change the \"cancel\" query to something like\n>> SELECT pg_cancel_backend(pid), somehow_wait_for_detach_to_terminate() FROM d3_pid;\n>> ... where maybe that function can check the \"state\" column in s3's\n>> pg_stat_activity row? I'll give that a try.\n\n> Couldn't you achieve that with a small PL function in a way similar to\n> what 32a9c0b did, except that you track a different state in\n> pg_stat_activity for this PID?\n\nThe number of subsequent fixes to 32a9c0b seem to argue against\nusing that as a model :-(\n\nThe experiments I did awhile ago are coming back to me now. I tried\na number of variations on this same theme, and none of them closed\nthe gap entirely. The fundamental problem is that it's possible\nfor backend A to complete its transaction, and for backend B (which\nis the isolationtester's monitoring session) to observe that A has\ncompleted its transaction, and for B to report that fact to the\nisolationtester, and for that report to arrive at the isolationtester\n*before A's query result does*. You need some bad luck for that\nto happen, like A losing the CPU right before it flushes its output\nbuffer to the client, but I was able to demonstrate it fairly\nrepeatably. (IIRC, the reason I was looking into this was that\nthe clobber-cache-always buildfarm critters were showing such\nfailures somewhat regularly.)\n\nIt doesn't really matter whether B's observation technique involves\nlocks (as now), or the pgstat activity table, or what. Conceivably,\nif we used the activity data, we could have A postpone updating its\nstate to \"idle\" until after it's flushed its buffer to the client.\nBut that would likely break things for other use-cases. 
Moreover\nit still guarantees nothing, really, because we're still at the\nmercy of the kernel as to when it will choose to deliver network\npackets.\n\nSo a completely bulletproof interlock seems out of reach.\nMaybe something like what Alvaro's thinking of will get the\nfailure rate down to an acceptable level for most developers.\nA simple \"pg_sleep(1)\" might have about the same effect for\nmuch less work, though.\n\nI do agree we need to do something. I've had two or three failures\nin those test cases in the past few days in my own manual check-world\nruns, which is orders of magnitude worse than the previous\nreliability.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 21:12:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On Mon, May 24, 2021 at 09:12:40PM -0400, Tom Lane wrote:\n> The experiments I did awhile ago are coming back to me now. I tried\n> a number of variations on this same theme, and none of them closed\n> the gap entirely. The fundamental problem is that it's possible\n> for backend A to complete its transaction, and for backend B (which\n> is the isolationtester's monitoring session) to observe that A has\n> completed its transaction, and for B to report that fact to the\n> isolationtester, and for that report to arrive at the isolationtester\n> *before A's query result does*. You need some bad luck for that\n> to happen, like A losing the CPU right before it flushes its output\n> buffer to the client, but I was able to demonstrate it fairly\n> repeatably.\n\n> So a completely bulletproof interlock seems out of reach.\n\nWhat if we had a standard that the step after the cancel shall send a query to\nthe backend that just received the cancel? Something like:\n\n--- a/src/test/isolation/specs/detach-partition-concurrently-3.spec\n+++ b/src/test/isolation/specs/detach-partition-concurrently-3.spec\n@@ -34,16 +34,18 @@ step \"s1describe\"\t{ SELECT 'd3_listp' AS root, * FROM pg_partition_tree('d3_list\n session \"s2\"\n step \"s2begin\"\t\t{ BEGIN; }\n step \"s2snitch\"\t\t{ INSERT INTO d3_pid SELECT pg_backend_pid(); }\n step \"s2detach\"\t\t{ ALTER TABLE d3_listp DETACH PARTITION d3_listp1 CONCURRENTLY; }\n+step \"s2noop\"\t\t{ UNLISTEN noop; }\n+# TODO follow every instance of s1cancel w/ s2noop\n step \"s2detach2\"\t{ ALTER TABLE d3_listp DETACH PARTITION d3_listp2 CONCURRENTLY; }\n step \"s2detachfinal\"\t{ ALTER TABLE d3_listp DETACH PARTITION d3_listp1 FINALIZE; }\n step \"s2drop\"\t\t{ DROP TABLE d3_listp1; }\n step \"s2commit\"\t\t{ COMMIT; }\n \n # Try various things while the partition is in \"being detached\" state, with\n # no session waiting.\n-permutation \"s2snitch\" \"s1b\" \"s1s\" \"s2detach\" \"s1cancel\" \"s1c\" \"s1describe\" 
\"s1alter\"\n+permutation \"s2snitch\" \"s1b\" \"s1s\" \"s2detach\" \"s1cancel\" \"s2noop\" \"s1c\" \"s1describe\" \"s1alter\"\n permutation \"s2snitch\" \"s1b\" \"s1s\" \"s2detach\" \"s1cancel\" \"s1insert\" \"s1c\"\n permutation \"s2snitch\" \"s1brr\" \"s1s\" \"s2detach\" \"s1cancel\" \"s1insert\" \"s1c\" \"s1spart\"\n permutation \"s2snitch\" \"s1b\" \"s1s\" \"s2detach\" \"s1cancel\" \"s1c\" \"s1insertpart\"\n \n\n\n",
"msg_date": "Mon, 24 May 2021 20:56:42 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "So I had a hard time reproducing the problem, until I realized that I\nneeded to limit the server to use only one CPU, and in addition run some\nother stuff concurrently in the same server in order to keep it busy.\nWith that, I see about one failure every 10 runs.\n\nSo I start the server as \"numactl -C0 postmaster\", then another terminal\nwith an infinite loop doing \"make -C src/test/regress installcheck-parallel\";\nand a third terminal doing this\n\nwhile [ $? == 0 ]; do ../../../src/test/isolation/pg_isolation_regress --inputdir=/pgsql/source/master/src/test/isolation --outputdir=output_iso --bindir='/pgsql/install/master/bin' detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 detach-partition-concurrently-3 ; done\n\nWith the test unpatched, I get about one failure in the set.\n\nOn 2021-May-24, Noah Misch wrote:\n\n> What if we had a standard that the step after the cancel shall send a query to\n> the backend that just received the cancel? Something like:\n\nHmm ... I don't understand why this fixes the problem, but it\ndrastically reduces the probability. Here's a complete patch. I got\nabout one failure in 1000 instead of 1 in 10. 
The new failure looks\nlike this:\n\ndiff -U3 /pgsql/source/master/src/test/isolation/expected/detach-partition-concurrently-3.out /home/alvherre/Code/pgsql-build/master/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\n--- /pgsql/source/master/src/test/isolation/expected/detach-partition-concurrently-3.out\t2021-05-25 11:12:42.333987835 -0400\n+++ /home/alvherre/Code/pgsql-build/master/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\t2021-05-25 11:19:03.714947775 -0400\n@@ -13,7 +13,7 @@\n \n t \n step s2detach: <... completed>\n-error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n+ERROR: canceling statement due to user request\n step s2noop: UNLISTEN noop;\n step s1c: COMMIT;\n step s1describe: SELECT 'd3_listp' AS root, * FROM pg_partition_tree('d3_listp')\n\n\nI find this a bit weird and I'm wondering if it could be an\nisolationtester bug -- why is it not attributing the error message to\nany steps?\n\nThe problem disappears completely if I add a sleep to the cancel query:\n\nstep \"s1cancel\" \t{ SELECT pg_cancel_backend(pid), pg_sleep(0.01) FROM d3_pid; }\n\nI suppose a 0.01 second sleep is not going to be sufficient to close the\nproblem in slower animals, but I hesitate to propose a much longer sleep\nbecause this test has 18 permutations so even a one second sleep adds\nquite a lot of (mostly useless) test runtime.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Tue, 25 May 2021 11:32:38 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> The problem disappears completely if I add a sleep to the cancel query:\n> step \"s1cancel\" \t{ SELECT pg_cancel_backend(pid), pg_sleep(0.01) FROM d3_pid; }\n> I suppose a 0.01 second sleep is not going to be sufficient to close the\n> problem in slower animals, but I hesitate to propose a much longer sleep\n> because this test has 18 permutations so even a one second sleep adds\n> quite a lot of (mostly useless) test runtime.\n\nYeah ... maybe 0.1 second is the right tradeoff?\n\nNote that on slow (like CCA) animals, the extra query required by\nNoah's suggestion is likely to take more than 0.1 second.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 11:37:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On 2021-May-25, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > The problem disappears completely if I add a sleep to the cancel query:\n> > step \"s1cancel\" \t{ SELECT pg_cancel_backend(pid), pg_sleep(0.01) FROM d3_pid; }\n> > I suppose a 0.01 second sleep is not going to be sufficient to close the\n> > problem in slower animals, but I hesitate to propose a much longer sleep\n> > because this test has 18 permutations so even a one second sleep adds\n> > quite a lot of (mostly useless) test runtime.\n> \n> Yeah ... maybe 0.1 second is the right tradeoff?\n\nPushed with a 0.1 sleep, and some commentary.\n\n> Note that on slow (like CCA) animals, the extra query required by\n> Noah's suggestion is likely to take more than 0.1 second.\n\nHmm, but the sleep is to compete with the cancelling of detach, not with\nthe noop query.\n\nI tried running the test under CCA here and it didn't fail, but of\ncourse that's not a guarantee of anything since it only completed one\niteration.\n\n-- \nÁlvaro Herrera              Valdivia, Chile\n\n\n",
"msg_date": "Tue, 25 May 2021 13:00:22 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
},
{
"msg_contents": "On Tue, May 25, 2021 at 11:32:38AM -0400, Alvaro Herrera wrote:\n> On 2021-May-24, Noah Misch wrote:\n> > What if we had a standard that the step after the cancel shall send a query to\n> > the backend that just received the cancel? Something like:\n> \n> Hmm ... I don't understand why this fixes the problem, but it\n> drastically reduces the probability.\n\nThis comment, from run_permutation(), explains:\n\n\t\t/*\n\t\t * Check whether the session that needs to perform the next step is\n\t\t * still blocked on an earlier step. If so, wait for it to finish.\n\t\t *\n\t\t * (In older versions of this tool, we allowed precisely one session\n\t\t * to be waiting at a time. If we reached a step that required that\n\t\t * session to execute the next command, we would declare the whole\n\t\t * permutation invalid, cancel everything, and move on to the next\n\t\t * one. Unfortunately, that made it impossible to test the deadlock\n\t\t * detector using this framework, unless the number of processes\n\t\t * involved in the deadlock was precisely two. We now assume that if\n\t\t * we reach a step that is still blocked, we need to wait for it to\n\t\t * unblock itself.)\n\t\t */\n\n> Here's a complete patch. I got\n> about one failure in 1000 instead of 1 in 10. The new failure looks\n> like this:\n> \n> diff -U3 /pgsql/source/master/src/test/isolation/expected/detach-partition-concurrently-3.out /home/alvherre/Code/pgsql-build/master/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\n> --- /pgsql/source/master/src/test/isolation/expected/detach-partition-concurrently-3.out\t2021-05-25 11:12:42.333987835 -0400\n> +++ /home/alvherre/Code/pgsql-build/master/src/test/isolation/output_iso/results/detach-partition-concurrently-3.out\t2021-05-25 11:19:03.714947775 -0400\n> @@ -13,7 +13,7 @@\n> \n> t \n> step s2detach: <... 
completed>\n> -error in steps s1cancel s2detach: ERROR: canceling statement due to user request\n\nI'm guessing this is:\nreport_multiple_error_messages(\"s1cancel\", 1, {\"s2detach\"})\n\n> +ERROR: canceling statement due to user request\n\nAnd this is:\nreport_multiple_error_messages(\"s2detach\", 0, {})\n\n> step s2noop: UNLISTEN noop;\n> step s1c: COMMIT;\n> step s1describe: SELECT 'd3_listp' AS root, * FROM pg_partition_tree('d3_listp')\n> \n> \n> I find this a bit weird and I'm wondering if it could be an\n> isolationtester bug -- why is it not attributing the error message to\n> any steps?\n\nI agree that looks like an isolationtester bug. isolationtester already\nprinted the tuples from s1cancel, so s1cancel should be considered finished.\nThis error message emission should never have any step name prefixing it.\n\nThanks,\nnm\n\n\n",
"msg_date": "Thu, 27 May 2021 22:52:33 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Test of a partition with an incomplete detach has a timing issue"
}
] |
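The remedy this thread converges on can be summarized as a one-line change to the cancel step of the isolation spec — a sketch based on the messages above (the 0.1-second value is the tradeoff Tom suggests and Álvaro reports pushing; the exact committed wording may differ):

```
step "s1cancel"	{ SELECT pg_cancel_backend(pid), pg_sleep(0.1) FROM d3_pid; }
```

The sleep gives the cancel signal time to reach s2's detach before the isolationtester reports the next step, so the error is attributed to s1cancel rather than racing ahead to s1c; as noted above, this narrows the window rather than closing it completely.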
[
{
"msg_contents": "Hi,\n\ndc7420c2 has removed RecentGlobalXmin, but there are still references\nto it in the code, and a set of FIXME references, like this one in\nautovacuum.c (three in total):\n/*\n * Start a transaction so we can access pg_database, and get a snapshot.\n * We don't have a use for the snapshot itself, but we're interested in\n * the secondary effect that it sets RecentGlobalXmin. (This is critical\n * for anything that reads heap pages, because HOT may decide to prune\n * them even if the process doesn't attempt to modify any tuples.)\n *\n * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n * not pushed/active does not reliably prevent HOT pruning (->xmin could\n * e.g. be cleared when cache invalidations are processed).\n */\n\nWouldn't it be better to clean up that?\nThanks,\n--\nMichael",
"msg_date": "Mon, 24 May 2021 15:47:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Remaining references to RecentGlobalXmin"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-24 15:47:48 +0900, Michael Paquier wrote:\n> dc7420c2 has removed RecentGlobalXmin, but there are still references\n> to it in the code, and a set of FIXME references, like this one in\n> autovacuum.c (three in total):\n> /*\n> * Start a transaction so we can access pg_database, and get a snapshot.\n> * We don't have a use for the snapshot itself, but we're interested in\n> * the secondary effect that it sets RecentGlobalXmin. (This is critical\n> * for anything that reads heap pages, because HOT may decide to prune\n> * them even if the process doesn't attempt to modify any tuples.)\n> *\n> * FIXME: This comment is inaccurate / the code buggy. A snapshot that is\n> * not pushed/active does not reliably prevent HOT pruning (->xmin could\n> * e.g. be cleared when cache invalidations are processed).\n> */\n> \n> Wouldn't it be better to clean up that?\n\nSure, but the real cleanup necessary isn't to remove the reference to\nRecentGlobalXmin nor specific to 14. It's that the code isn't right, and\nhasn't been for a long time.\nhttps://www.postgresql.org/message-id/20200407072418.ccvnyjbrktyi3rzc%40alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 26 May 2021 19:30:03 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Remaining references to RecentGlobalXmin"
}
] |
[
{
"msg_contents": "Hi all,\n\nIf a logical replication worker cannot apply the change on the\nsubscriber for some reason (e.g., missing table or violating a\nconstraint, etc.), logical replication stops until the problem is\nresolved. Ideally, we resolve the problem on the subscriber (e.g., by\ncreating the missing table or removing the conflicting data, etc.) but\noccasionally a problem cannot be fixed and it may be necessary to skip\nthe entire transaction in question. Currently, we have two ways to\nskip transactions: advancing the LSN of the replication origin on the\nsubscriber and advancing the LSN of the replication slot on the\npublisher. But both ways might not be able to skip exactly one\ntransaction in question and end up skipping other transactions too.\n\nI’d like to propose a way to skip the particular transaction on the\nsubscriber side. As the first step, a transaction can be specified to\nbe skipped by specifying remote XID on the subscriber. This feature\nwould need two sub-features: (1) a sub-feature for users to identify\nthe problem subscription and the problem transaction’s XID, and (2) a\nsub-feature to skip the particular transaction to apply.\n\nFor (1), I think the simplest way would be to put the details of the\nchange being applied in errcontext. For example, the following\nerrcontext shows the remote XID as well as the action name, the\nrelation name, and commit timestamp:\n\nERROR: duplicate key value violates unique constraint \"test_pkey\"\nDETAIL: Key (c)=(1) already exists.\nCONTEXT: during apply of \"INSERT\" for relation \"public.test\" in\ntransaction with xid 590 commit timestamp 2021-05-21\n14:32:02.134273+09\n\nThe user can identify which remote XID has a problem during applying\nthe change (XID=590 in this case). 
As another idea, we can have a\nstatistics view for logical replication workers, showing information\nof the last failure transaction.\n\nFor (2), what I'm thinking is to add a new action to ALTER\nSUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\nTRANSACTION 590. Also, we can have actions to reset it; ALTER\nSUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\nXID to a new column of pg_subscription or a new catalog, having the\nworker reread its subscription information. Once the worker skipped\nthe specified transaction, it resets the transaction to skip on the\ncatalog. The syntax allows users to specify one remote XID to skip. In\nthe future, it might be good if users can also specify multiple XIDs\n(a range of XIDs or a list of XIDs, etc).\n\nFeedback and comment are very welcome.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 May 2021 17:01:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> If a logical replication worker cannot apply the change on the\n> subscriber for some reason (e.g., missing table or violating a\n> constraint, etc.), logical replication stops until the problem is\n> resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> creating the missing table or removing the conflicting data, etc.) but\n> occasionally a problem cannot be fixed and it may be necessary to skip\n> the entire transaction in question. Currently, we have two ways to\n> skip transactions: advancing the LSN of the replication origin on the\n> subscriber and advancing the LSN of the replication slot on the\n> publisher. But both ways might not be able to skip exactly one\n> transaction in question and end up skipping other transactions too.\n>\n> I’d like to propose a way to skip the particular transaction on the\n> subscriber side. As the first step, a transaction can be specified to\n> be skipped by specifying remote XID on the subscriber. This feature\n> would need two sub-features: (1) a sub-feature for users to identify\n> the problem subscription and the problem transaction’s XID, and (2) a\n> sub-feature to skip the particular transaction to apply.\n>\n> For (1), I think the simplest way would be to put the details of the\n> change being applied in errcontext. For example, the following\n> errcontext shows the remote XID as well as the action name, the\n> relation name, and commit timestamp:\n>\n> ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> DETAIL: Key (c)=(1) already exists.\n> CONTEXT: during apply of \"INSERT\" for relation \"public.test\" in\n> transaction with xid 590 commit timestamp 2021-05-21\n> 14:32:02.134273+09\n>\n\nIn the above, the subscription name/id is not mentioned. 
I think you\nneed it for sub-feature-2.\n\n> The user can identify which remote XID has a problem during applying\n> the change (XID=590 in this case). As another idea, we can have a\n> statistics view for logical replication workers, showing information\n> of the last failure transaction.\n>\n\nIt might be good to display it in both places. Having subscriber-side\ninformation in the view might be helpful in other ways as well; for\nexample, we can use it to display the number of transactions processed\nby a particular subscriber.\n\nI think you need to consider a few more things here:\n(a) Say the error occurs after applying some part of the changes, then\njust skipping the remaining part won't be sufficient; we probably need\nto somehow roll back the applied changes (by rolling back the\ntransaction or in some other way).\n(b) How do you handle streamed transactions? It is possible that some\nof the streams are successful and the error occurs after that, say\nwhen writing to the stream file. Now, would you skip writing to the\nstream file, or will you write it, and then during apply, skip the\nentire transaction and remove the corresponding stream file?\n(c) There is also a possibility that the error occurs while applying\nthe changes of some subtransaction (this is only possible for\nstreaming xacts), so, in such cases, do we allow users to roll back the\nsubtransaction, or does the user have to roll back the entire\ntransaction? I am not sure, but maybe for very large transactions users\nmight just want to roll back the subtransaction.\n(d) How about prepared transactions? Do we need to roll back the\nprepared transaction if the user decides to skip such a transaction? 
We\nalready allow prepared transactions to be streamed to plugins and the\nwork for subscriber-side apply is in progress [1], so I think we need\nto consider this case as well.\n(e) Do we want to provide such a feature via output plugins as well?\nIf not, why?\n\n> For (2), what I'm thinking is to add a new action to ALTER\n> SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> XID to a new column of pg_subscription or a new catalog, having the\n> worker reread its subscription information. Once the worker skipped\n> the specified transaction, it resets the transaction to skip on the\n> catalog.\n>\n\nWhat if we fail while updating the reset information in the catalog?\nWill it be the responsibility of the user to reset such a transaction,\nor will we retry it after a restart of the worker? Now, say we give\nsuch a responsibility to the user and the user forgets to reset it;\nthen there is a possibility that after wraparound we will again skip\nthe transaction, which is not intended. And, if we want to retry it\nafter a restart of the worker, how will the worker remember the\nprevious failure?\n\nI think this will be a useful feature but we need to consider a few more things.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPsDysQA%3DJWXb6oGFr1npvqi1e7RzzXV-juCCxnbiwHvfA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 May 2021 16:21:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi all,\n>\n> If a logical replication worker cannot apply the change on the\n> subscriber for some reason (e.g., missing table or violating a\n> constraint, etc.), logical replication stops until the problem is\n> resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> creating the missing table or removing the conflicting data, etc.) but\n> occasionally a problem cannot be fixed and it may be necessary to skip\n> the entire transaction in question. Currently, we have two ways to\n> skip transactions: advancing the LSN of the replication origin on the\n> subscriber and advancing the LSN of the replication slot on the\n> publisher. But both ways might not be able to skip exactly one\n> transaction in question and end up skipping other transactions too.\n\nDoes it mean pg_replication_origin_advance() can't skip exactly one\ntxn? I'm not familiar with the function or never used it though, I was\njust searching for \"how to skip a single txn in postgres\" and ended up\nin [1]. Could you please give some more details on scenarios when we\ncan't skip exactly one txn? Is there any other way to advance the LSN,\nsomething like directly updating the pg_replication_slots catalog?\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-conflicts.html\n\n> I’d like to propose a way to skip the particular transaction on the\n> subscriber side. As the first step, a transaction can be specified to\n> be skipped by specifying remote XID on the subscriber. This feature\n> would need two sub-features: (1) a sub-feature for users to identify\n> the problem subscription and the problem transaction’s XID, and (2) a\n> sub-feature to skip the particular transaction to apply.\n>\n> For (1), I think the simplest way would be to put the details of the\n> change being applied in errcontext. 
For example, the following\n> errcontext shows the remote XID as well as the action name, the\n> relation name, and commit timestamp:\n>\n> ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> DETAIL: Key (c)=(1) already exists.\n> CONTEXT: during apply of \"INSERT\" for relation \"public.test\" in\n> transaction with xid 590 commit timestamp 2021-05-21\n> 14:32:02.134273+09\n>\n> The user can identify which remote XID has a problem during applying\n> the change (XID=590 in this case). As another idea, we can have a\n> statistics view for logical replication workers, showing information\n> of the last failure transaction.\n\nAgree with Amit on this. At times, it is difficult to look around in\nthe server logs, so it will be better to have it in both places.\n\n> For (2), what I'm thinking is to add a new action to ALTER\n> SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> XID to a new column of pg_subscription or a new catalog, having the\n> worker reread its subscription information. Once the worker skipped\n> the specified transaction, it resets the transaction to skip on the\n> catalog. The syntax allows users to specify one remote XID to skip. In\n> the future, it might be good if users can also specify multiple XIDs\n> (a range of XIDs or a list of XIDs, etc).\n\nWhat's it like skipping a txn with txn id? Is it that the particular\ntxn is forced to commit or abort or just skipping some of the code in\nthe apply worker? IIUC, the behavior of RESET SKIP TRANSACTION is just\nto forget the txn id specified in SET SKIP TRANSACTION right?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 11:19:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, May 24, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > If a logical replication worker cannot apply the change on the\n> > subscriber for some reason (e.g., missing table or violating a\n> > constraint, etc.), logical replication stops until the problem is\n> > resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> > creating the missing table or removing the conflicting data, etc.) but\n> > occasionally a problem cannot be fixed and it may be necessary to skip\n> > the entire transaction in question. Currently, we have two ways to\n> > skip transactions: advancing the LSN of the replication origin on the\n> > subscriber and advancing the LSN of the replication slot on the\n> > publisher. But both ways might not be able to skip exactly one\n> > transaction in question and end up skipping other transactions too.\n> >\n> > I’d like to propose a way to skip the particular transaction on the\n> > subscriber side. As the first step, a transaction can be specified to\n> > be skipped by specifying remote XID on the subscriber. This feature\n> > would need two sub-features: (1) a sub-feature for users to identify\n> > the problem subscription and the problem transaction’s XID, and (2) a\n> > sub-feature to skip the particular transaction to apply.\n> >\n> > For (1), I think the simplest way would be to put the details of the\n> > change being applied in errcontext. 
For example, the following\n> > errcontext shows the remote XID as well as the action name, the\n> > relation name, and commit timestamp:\n> >\n> > ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> > DETAIL: Key (c)=(1) already exists.\n> > CONTEXT: during apply of \"INSERT\" for relation \"public.test\" in\n> > transaction with xid 590 commit timestamp 2021-05-21\n> > 14:32:02.134273+09\n> >\n>\n> In the above, the subscription name/id is not mentioned. I think you\n> need it for sub-feature-2.\n\nAgreed.\n\n>\n> > The user can identify which remote XID has a problem during applying\n> > the change (XID=590 in this case). As another idea, we can have a\n> > statistics view for logical replication workers, showing information\n> > of the last failure transaction.\n> >\n>\n> It might be good to display at both places. Having subscriber-side\n> information in the view might be helpful in other ways as well like we\n> can use it to display the number of transactions processed by a\n> particular subscriber.\n\nYes. I think we can report that information to the stats collector. It\nneeds to live on even after the worker exits.\n\n>\n> I think you need to consider few more things here:\n> (a) Say the error occurs after applying some part of changes, then\n> just skipping the remaining part won't be sufficient, we probably need\n> to someway rollback the applied changes (by rolling back the\n> transaction or in some other way).\n\nAfter more thought, it might be better that setting and resetting\nthe XID to skip requires disabling the subscription. This would not be\na restriction for users since logical replication is likely to already\nbe stopped (and possibly repeatedly restarting and stopping) due to an\nerror. Setting and resetting the XID modifies the system catalog, so it\nis a crash-safe change and survives server restarts. When a logical\nreplication worker starts, it checks the XID. 
If the worker\nreceives changes associated with the transaction with the specified\nXID, it can ignore the entire transaction.\n\n> (b) How do you handle streamed transactions? It is possible that some\n> of the streams are successful and the error occurs after that, say\n> when writing to the stream file. Now, would you skip writing to stream\n> file or will you write it, and then during apply, you will skip the\n> entire transaction and remove the corresponding stream file.\n\nI think streamed transactions can be handled in the same way described in (a).\n\n> (c) There is also a possibility that the error occurs while applying\n> the changes of some subtransaction (this is only possible for\n> streaming xacts), so, in such cases, do we allow users to rollback the\n> subtransaction or user has to rollback the entire transaction. I am\n> not sure but maybe for very large transactions users might just want\n> to rollback the subtransaction.\n\nIf the user specifies XID of a subtransaction, it would be better to\nskip only the subtransaction. If specifies top transaction XID, it\nwould be better to skip the entire transaction. What do you think?\n\n> (d) How about prepared transactions? Do we need to rollback the\n> prepared transaction if user decides to skip such a transaction? We\n> already allow prepared transactions to be streamed to plugins and the\n> work for subscriber-side apply is in progress [1], so I think we need\n> to consider this case as well.\n\nIf a transaction replicated from the subscriber could be prepared on\nthe subscriber, it would be guaranteed to be able to be either\ncommitted or rolled back. Given that this feature is to skip a problem\ntransaction, I think it should not do anything for transactions that\nare already prepared on the subscriber.\n\n> (e) Do we want to provide such a feature via output plugins as well,\n> if not, why?\n\nYou mean to specify an XID to skip on the publisher side? 
Since I've been considering this feature as a way to resume logical\nreplication that has hit a problem, I hadn't thought of that idea, but\nit could be a good one. Do you have any use cases? If we specified the\nXID on the publisher, multiple subscribers would skip that\ntransaction.\n\n>\n> > For (2), what I'm thinking is to add a new action to ALTER\n> > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > XID to a new column of pg_subscription or a new catalog, having the\n> > worker reread its subscription information. Once the worker skipped\n> > the specified transaction, it resets the transaction to skip on the\n> > catalog.\n> >\n>\n> What if we fail while updating the reset information in the catalog?\n> Will it be the responsibility of the user to reset such a transaction\n> or we will retry it after restart of worker? Now, say, we give such a\n> responsibility to the user and the user forgets to reset it then there\n> is a possibility that after wraparound we will again skip the\n> transaction which is not intended. And, if we want to retry it after\n> restart of worker, how will the worker remember the previous failure?\n\nAs described above, setting and resetting the XID to skip is implemented\nas a normal system catalog change, so it's crash-safe and persisted. I\nthink that the worker can either remove the XID or mark it as done\nonce it has skipped the specified transaction so that it won't skip the\nsame XID again after wraparound. Also, it might be better if we reset\nthe XID also when a subscription field such as subconninfo is changed\nbecause it could imply the worker will connect to another publisher\nhaving a different XID space.\n\nWe also need to handle the cases where the user specifies an old XID\nor an XID whose transaction is already prepared on the subscriber. 
I\nthink the worker can reset the XID with a warning when it finds out\nthat the XID seems no longer valid or it cannot skip the specified\nXID. For example, in the former case, it can do that when the first\nreceived transaction’s XID is newer than the specified XID. In the\nlatter case, it can do that when it receives the commit/rollback\nprepared message of the specified XID.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 May 2021 15:55:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:49 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Hi all,\n> >\n> > If a logical replication worker cannot apply the change on the\n> > subscriber for some reason (e.g., missing table or violating a\n> > constraint, etc.), logical replication stops until the problem is\n> > resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> > creating the missing table or removing the conflicting data, etc.) but\n> > occasionally a problem cannot be fixed and it may be necessary to skip\n> > the entire transaction in question. Currently, we have two ways to\n> > skip transactions: advancing the LSN of the replication origin on the\n> > subscriber and advancing the LSN of the replication slot on the\n> > publisher. But both ways might not be able to skip exactly one\n> > transaction in question and end up skipping other transactions too.\n>\n> Does it mean pg_replication_origin_advance() can't skip exactly one\n> txn? I'm not familiar with the function or never used it though, I was\n> just searching for \"how to skip a single txn in postgres\" and ended up\n> in [1]. Could you please give some more details on scenarios when we\n> can't skip exactly one txn? Is there any other way to advance the LSN,\n> something like directly updating the pg_replication_slots catalog?\n\nSorry, it's not impossible. Although the user mistakenly skips more\nthan one transaction by specifying a wrong LSN it's always possible to\nskip an exact one transaction.\n\n>\n> [1] - https://www.postgresql.org/docs/devel/logical-replication-conflicts.html\n>\n> > I’d like to propose a way to skip the particular transaction on the\n> > subscriber side. As the first step, a transaction can be specified to\n> > be skipped by specifying remote XID on the subscriber. 
This feature\n> > would need two sub-features: (1) a sub-feature for users to identify\n> > the problem subscription and the problem transaction’s XID, and (2) a\n> > sub-feature to skip the particular transaction to apply.\n> >\n> > For (1), I think the simplest way would be to put the details of the\n> > change being applied in errcontext. For example, the following\n> > errcontext shows the remote XID as well as the action name, the\n> > relation name, and commit timestamp:\n> >\n> > ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> > DETAIL: Key (c)=(1) already exists.\n> > CONTEXT: during apply of \"INSERT\" for relation \"public.test\" in\n> > transaction with xid 590 commit timestamp 2021-05-21\n> > 14:32:02.134273+09\n> >\n> > The user can identify which remote XID has a problem during applying\n> > the change (XID=590 in this case). As another idea, we can have a\n> > statistics view for logical replication workers, showing information\n> > of the last failure transaction.\n>\n> Agree with Amit on this. At times, it is difficult to look around in\n> the server logs, so it will be better to have it in both places.\n>\n> > For (2), what I'm thinking is to add a new action to ALTER\n> > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > XID to a new column of pg_subscription or a new catalog, having the\n> > worker reread its subscription information. Once the worker skipped\n> > the specified transaction, it resets the transaction to skip on the\n> > catalog. The syntax allows users to specify one remote XID to skip. In\n> > the future, it might be good if users can also specify multiple XIDs\n> > (a range of XIDs or a list of XIDs, etc).\n>\n> What's it like skipping a txn with txn id? 
Is it that the particular\n> txn is forced to commit or abort or just skipping some of the code in\n> the apply worker?\n\nWhat I'm thinking is to ignore the entire transaction with the\nspecified XID. IOW, logical replication workers don't even start the\ntransaction and ignore all changes associated with the XID.\n\n> IIUC, the behavior of RESET SKIP TRANSACTION is just\n> to forget the txn id specified in SET SKIP TRANSACTION right?\n\nRight. I proposed this RESET command for users to cancel the skipping behavior.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 May 2021 17:13:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 2:49 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Hi all,\n> > >\n> > > If a logical replication worker cannot apply the change on the\n> > > subscriber for some reason (e.g., missing table or violating a\n> > > constraint, etc.), logical replication stops until the problem is\n> > > resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> > > creating the missing table or removing the conflicting data, etc.) but\n> > > occasionally a problem cannot be fixed and it may be necessary to skip\n> > > the entire transaction in question. Currently, we have two ways to\n> > > skip transactions: advancing the LSN of the replication origin on the\n> > > subscriber and advancing the LSN of the replication slot on the\n> > > publisher. But both ways might not be able to skip exactly one\n> > > transaction in question and end up skipping other transactions too.\n> >\n> > Does it mean pg_replication_origin_advance() can't skip exactly one\n> > txn? I'm not familiar with the function or never used it though, I was\n> > just searching for \"how to skip a single txn in postgres\" and ended up\n> > in [1]. Could you please give some more details on scenarios when we\n> > can't skip exactly one txn? Is there any other way to advance the LSN,\n> > something like directly updating the pg_replication_slots catalog?\n>\n> Sorry, it's not impossible. Although the user mistakenly skips more\n> than one transaction by specifying a wrong LSN it's always possible to\n> skip an exact one transaction.\n\nIIUC, if the user specifies the \"correct LSN\", then it's possible to\nskip exact txn for which the sync workers are unable to apply changes,\nright?\n\nHow can the user get the LSN (which we call \"correct LSN\")? 
Is it from\npg_replication_slots? Or some other way?\n\nIf the user can somehow get the \"correct LSN\", can't the exact txn be\nskipped using it with any of the existing ways, either using\npg_replication_origin_advance or any other way?\n\nIf there's no way to get the \"correct LSN\", then why can't we just\nprint that LSN in the error context and/or in the new statistics view\nfor logical replication workers, so that any of the existing ways can\nbe used to skip exactly one txn?\n\nIIUC, the feature proposed here guards against users specifying a\nwrong LSN. If I'm right, what is the guarantee that users don't\nspecify the wrong txn id? Why can't we tell the users when a wrong LSN\nis specified that \"currently, an apply worker is failing to apply the\nLSN XXXX, and you specified LSN YYYY, are you sure this is\nintentional?\"\n\nPlease correct me if I'm missing anything.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 15:51:09 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, May 25, 2021 at 7:21 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 1:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 2:49 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Hi all,\n> > > >\n> > > > If a logical replication worker cannot apply the change on the\n> > > > subscriber for some reason (e.g., missing table or violating a\n> > > > constraint, etc.), logical replication stops until the problem is\n> > > > resolved. Ideally, we resolve the problem on the subscriber (e.g., by\n> > > > creating the missing table or removing the conflicting data, etc.) but\n> > > > occasionally a problem cannot be fixed and it may be necessary to skip\n> > > > the entire transaction in question. Currently, we have two ways to\n> > > > skip transactions: advancing the LSN of the replication origin on the\n> > > > subscriber and advancing the LSN of the replication slot on the\n> > > > publisher. But both ways might not be able to skip exactly one\n> > > > transaction in question and end up skipping other transactions too.\n> > >\n> > > Does it mean pg_replication_origin_advance() can't skip exactly one\n> > > txn? I'm not familiar with the function or never used it though, I was\n> > > just searching for \"how to skip a single txn in postgres\" and ended up\n> > > in [1]. Could you please give some more details on scenarios when we\n> > > can't skip exactly one txn? Is there any other way to advance the LSN,\n> > > something like directly updating the pg_replication_slots catalog?\n> >\n> > Sorry, it's not impossible. 
Although the user mistakenly skips more\n> > than one transaction by specifying a wrong LSN it's always possible to\n> > skip an exact one transaction.\n>\n> IIUC, if the user specifies the \"correct LSN\", then it's possible to\n> skip exact txn for which the sync workers are unable to apply changes,\n> right?\n>\n> How can the user get the LSN (which we call \"correct LSN\")? Is it from\n> pg_replication_slots? Or some other way?\n>\n> If the user somehow can get the \"correct LSN\", can't the exact txn be\n> skipped using it with any of the existing ways, either using\n> pg_replication_origin_advance or any other ways?\n\nOne possible way I know is to copy the logical replication slot used\nby the subscriber and peek at the changes to identify the correct LSN\n(maybe there is another handy way, though). For example, suppose that\ntwo transactions insert tuples as follows on the publisher:\n\nTX-A: BEGIN;\nTX-A: INSERT INTO test VALUES (1);\nTX-B: BEGIN;\nTX-B: INSERT INTO test VALUES (10);\nTX-B: COMMIT;\nTX-A: INSERT INTO test VALUES (2);\nTX-A: COMMIT;\n\nAnd suppose further that the insertion with value = 10 (by TX-B)\ncannot be applied on the subscriber due to a unique constraint\nviolation. If we copy the slot by\npg_copy_logical_replication_slot('test_sub', 'copy_slot', true,\n'test_decoding'), we can peek at those changes with LSN as follows:\n\n=# select * from pg_logical_slot_peek_changes('copy_slot', null, null) order by lsn;\n lsn | xid | data\n-----------+-----+------------------------------------------\n 0/1911548 | 736 | BEGIN 736\n 0/1911548 | 736 | table public.test: INSERT: c[integer]:1\n 0/1911588 | 737 | BEGIN 737\n 0/1911588 | 737 | table public.test: INSERT: c[integer]:10\n 0/19115F8 | 737 | COMMIT 737\n 0/1911630 | 736 | table public.test: INSERT: c[integer]:2\n 0/19116A0 | 736 | COMMIT 736\n(7 rows)\n\nIn this case, '0/19115F8' is the correct LSN to specify. 
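To put the inspection steps together, here is only an illustrative sketch using the names from this example ('test_sub', 'copy_slot', LSN '0/19115F8'); the replication origin name 'pg_16395' is hypothetical, since on a real system the origin for a subscription is named 'pg_' followed by the subscription's OID and should be looked up in pg_replication_origin first:

```sql
-- On the publisher: make a temporary copy of the subscription's slot with
-- the test_decoding plugin, then peek at its pending changes in the same
-- session (a temporary slot is dropped automatically when the session ends).
SELECT pg_copy_logical_replication_slot('test_sub', 'copy_slot',
                                        true, 'test_decoding');
SELECT lsn, xid, data
  FROM pg_logical_slot_peek_changes('copy_slot', NULL, NULL)
 ORDER BY lsn;

-- On the subscriber: with the subscription disabled, advance the
-- subscription's replication origin to the problem transaction's
-- COMMIT LSN, then re-enable the subscription.
ALTER SUBSCRIPTION test_sub DISABLE;
SELECT pg_replication_origin_advance('pg_16395', '0/19115F8');
ALTER SUBSCRIPTION test_sub ENABLE;
```

Only transactions committed after the advanced position are then applied, so in the example above the transaction with xid = 737 would be skipped while xid = 736 would still be replicated.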
We can\nadvance the replication origin to '0/19115F8' by\npg_replication_origin_advance() so that logical replication streams\ntransactions committed after '0/19115F8'. After logical replication\nrestarts, it skips the transaction with xid = 737 but replicates the\ntransaction with xid = 736.\n\n> If there's no way to get the \"correct LSN\", then why can't we just\n> print that LSN in the error context and/or in the new statistics view\n> for logical replication workers, so that any of the existing ways can\n> be used to skip exactly one txn?\n\nI think specifying an XID to the subscription is more understandable for users.\n\n>\n> IIUC, the feature proposed here guards against the users specifying\n> wrong LSN. If I'm right, what is the guarantee that users don't\n> specify the wrong txn id? Why can't we tell the users when a wrong LSN\n> is specified that \"currently, an apply worker is failing to apply the\n> LSN XXXX, and you specified LSN YYYY, are you sure this is\n> intentional?\"\n\nWith the initial idea, specifying the correct XID is the user's\nresponsibility. If they specify an old XID, the worker invalidates it\nand raises a warning saying \"the worker invalidated the specified XID\nas it's too old\". As the second idea, if we store the last failed XID\nsomewhere (e.g., a system catalog), the user can just specify to skip\nthat transaction. That is, instead of specifying the XID they could do\nsomething like \"ALTER SUBSCRIPTION test_sub RESOLVE CONFLICT BY SKIP\".\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 May 2021 21:41:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:26 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I think you need to consider few more things here:\n> > (a) Say the error occurs after applying some part of changes, then\n> > just skipping the remaining part won't be sufficient, we probably need\n> > to someway rollback the applied changes (by rolling back the\n> > transaction or in some other way).\n>\n> After more thought, it might be better to that setting and resetting\n> the XID to skip requires disabling the subscription.\n>\n\nIt might be better if it doesn't require disabling the subscription\nbecause it would be more steps for the user to disable/enable it. It\nis not clear to me what exactly you want to gain by disabling the\nsubscription in this case.\n\n> This would not be\n> a restriction for users since logical replication is likely to already\n> stop (and possibly repeating restarting and stopping) due to an error.\n> Setting and resetting the XID modifies the system catalog so it's a\n> crash-safe change and survives beyond the server restarts. When a\n> logical replication worker starts, it checks the XID. If the worker\n> receives changes associated with the transaction with the specified\n> XID, it can ignore the entire transaction.\n>\n> > (b) How do you handle streamed transactions? It is possible that some\n> > of the streams are successful and the error occurs after that, say\n> > when writing to the stream file. 
Now, would you skip writing to stream\n> > file or will you write it, and then during apply, you will skip the\n> > entire transaction and remove the corresponding stream file.\n>\n> I think streamed transactions can be handled in the same way described in (a).\n>\n> > (c) There is also a possibility that the error occurs while applying\n> > the changes of some subtransaction (this is only possible for\n> > streaming xacts), so, in such cases, do we allow users to rollback the\n> > subtransaction or user has to rollback the entire transaction. I am\n> > not sure but maybe for very large transactions users might just want\n> > to rollback the subtransaction.\n>\n> If the user specifies XID of a subtransaction, it would be better to\n> skip only the subtransaction. If specifies top transaction XID, it\n> would be better to skip the entire transaction. What do you think?\n>\n\nmakes sense.\n\n> > (d) How about prepared transactions? Do we need to rollback the\n> > prepared transaction if user decides to skip such a transaction? We\n> > already allow prepared transactions to be streamed to plugins and the\n> > work for subscriber-side apply is in progress [1], so I think we need\n> > to consider this case as well.\n>\n> If a transaction replicated from the subscriber could be prepared on\n> the subscriber, it would be guaranteed to be able to be either\n> committed or rolled back. Given that this feature is to skip a problem\n> transaction, I think it should not do anything for transactions that\n> are already prepared on the subscriber.\n>\n\nmakes sense, but I think we need to reset the XID in such a case.\n\n> > (e) Do we want to provide such a feature via output plugins as well,\n> > if not, why?\n>\n> You mean to specify an XID to skip on the publisher side? Since I've\n> been considering this feature as a way to resume the logical\n> replication having a problem I've not thought of that idea but It\n> would be a good idea. Do you have any use cases?\n>\n\nNo. 
On again thinking about this, I think we can leave this for now.\n\n> If we specified the\n> XID on the publisher, multiple subscribers would skip that\n> transaction.\n>\n> >\n> > > For (2), what I'm thinking is to add a new action to ALTER\n> > > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > > TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> > > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > > XID to a new column of pg_subscription or a new catalog, having the\n> > > worker reread its subscription information. Once the worker skipped\n> > > the specified transaction, it resets the transaction to skip on the\n> > > catalog.\n> > >\n> >\n> > What if we fail while updating the reset information in the catalog?\n> > Will it be the responsibility of the user to reset such a transaction\n> > or we will retry it after restart of worker? Now, say, we give such a\n> > responsibility to the user and the user forgets to reset it then there\n> > is a possibility that after wraparound we will again skip the\n> > transaction which is not intended. And, if we want to retry it after\n> > restart of worker, how will the worker remember the previous failure?\n>\n> As described above, setting and resetting XID to skip is implemented\n> as a normal system catalog change, so it's crash-safe and persisted. I\n> think that the worker can either removes the XID or mark it as done\n> once it skipped the specified transaction so that it won't skip the\n> same XID again after wraparound.\n>\n\nIt all depends on when exactly you want to update the catalog\ninformation. 
Say after skipping commit of the XID, we do update the\ncorresponding LSN to be communicated as already processed to the\nsubscriber and then get the error while updating the catalog\ninformation, then next time we might not know whether to update the\ncatalog for skipped XID.\n\n> Also, it might be better if we reset\n> the XID also when a subscription field such as subconninfo is changed\n> because it could imply the worker will connect to another publisher\n> having a different XID space.\n>\n> We also need to handle the cases where the user specifies an old XID\n> or XID whose transaction is already prepared on the subscriber. I\n> think the worker can reset the XID with a warning when it finds out\n> that the XID seems no longer valid or it cannot skip the specified\n> XID. For example in the former case, it can do that when the first\n> received transaction’s XID is newer than the specified XID.\n>\n\nBut how can we guarantee that older XID can't be received later? Is\nthere a guarantee that we receive the transactions on subscriber in\nXID order?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 12:13:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, May 25, 2021 at 6:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 7:21 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > If there's no way to get the \"correct LSN\", then why can't we just\n> > print that LSN in the error context and/or in the new statistics view\n> > for logical replication workers, so that any of the existing ways can\n> > be used to skip exactly one txn?\n>\n> I think specifying XID to the subscription is more understandable for users.\n>\n\nI agree with you that specifying XID could be easier and\nunderstandable for users. I was thinking and studying a bit about what\nother systems do in this regard. Why don't we try to provide conflict\nresolution methods for users? The idea could be that either the\nconflicts can be resolved automatically or manually. In the case of\nmanual resolution, users can use the existing methods or the XID stuff\nyou are proposing here and in case of automatic resolution, the\nin-built or corresponding user-defined functions will be invoked for\nconflict resolution. There are more details to figure out in the\nautomatic resolution scheme but I see a lot of value in doing the\nsame.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 14:41:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, May 26, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 12:26 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 24, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I think you need to consider few more things here:\n> > > (a) Say the error occurs after applying some part of changes, then\n> > > just skipping the remaining part won't be sufficient, we probably need\n> > > to someway rollback the applied changes (by rolling back the\n> > > transaction or in some other way).\n> >\n> > After more thought, it might be better to that setting and resetting\n> > the XID to skip requires disabling the subscription.\n> >\n>\n> It might be better if it doesn't require disabling the subscription\n> because it would be more steps for the user to disable/enable it. It\n> is not clear to me what exactly you want to gain by disabling the\n> subscription in this case.\n\nThe situation I’m considered is where the user specifies the XID while\nthe worker is applying the changes of the transaction with that XID.\nIn this case, I think we need to somehow rollback the changes applied\nso far. Perhaps we can either rollback the transaction and ignore the\nremaining changes or restart and ignore the entire transaction from\nthe beginning. Also, we need to handle the case where the user resets\nthe XID after the worker skips to write some stream files. I thought\nthose parts could be complicated but it might be not after more\nthought.\n\n>\n> > This would not be\n> > a restriction for users since logical replication is likely to already\n> > stop (and possibly repeating restarting and stopping) due to an error.\n> > Setting and resetting the XID modifies the system catalog so it's a\n> > crash-safe change and survives beyond the server restarts. 
When a\n> > logical replication worker starts, it checks the XID. If the worker\n> > receives changes associated with the transaction with the specified\n> > XID, it can ignore the entire transaction.\n> >\n> > > (b) How do you handle streamed transactions? It is possible that some\n> > > of the streams are successful and the error occurs after that, say\n> > > when writing to the stream file. Now, would you skip writing to stream\n> > > file or will you write it, and then during apply, you will skip the\n> > > entire transaction and remove the corresponding stream file.\n> >\n> > I think streamed transactions can be handled in the same way described in (a).\n\nIf setting and resetting the XID can be performed while the worker is\nrunning, we would need to write stream files even if we’re receiving\nchanges that are associated with the specified XID. Since it could\nhappen that the user resets the XID after we processed some of the\nstreamed changes, we would need to decide whether or not to skip the\ntransaction when starting to apply changes.\n\n> >\n> > > (c) There is also a possibility that the error occurs while applying\n> > > the changes of some subtransaction (this is only possible for\n> > > streaming xacts), so, in such cases, do we allow users to rollback the\n> > > subtransaction or user has to rollback the entire transaction. I am\n> > > not sure but maybe for very large transactions users might just want\n> > > to rollback the subtransaction.\n> >\n> > If the user specifies XID of a subtransaction, it would be better to\n> > skip only the subtransaction. If specifies top transaction XID, it\n> > would be better to skip the entire transaction. What do you think?\n> >\n>\n> makes sense.\n>\n> > > (d) How about prepared transactions? Do we need to rollback the\n> > > prepared transaction if user decides to skip such a transaction? 
We\n> > > already allow prepared transactions to be streamed to plugins and the\n> > > work for subscriber-side apply is in progress [1], so I think we need\n> > > to consider this case as well.\n> >\n> > If a transaction replicated from the subscriber could be prepared on\n> > the subscriber, it would be guaranteed to be able to be either\n> > committed or rolled back. Given that this feature is to skip a problem\n> > transaction, I think it should not do anything for transactions that\n> > are already prepared on the subscriber.\n> >\n>\n> makes sense, but I think we need to reset the XID in such a case.\n\nAgreed.\n\n>\n> > > (e) Do we want to provide such a feature via output plugins as well,\n> > > if not, why?\n> >\n> > You mean to specify an XID to skip on the publisher side? Since I've\n> > been considering this feature as a way to resume the logical\n> > replication having a problem I've not thought of that idea but It\n> > would be a good idea. Do you have any use cases?\n> >\n>\n> No. On again thinking about this, I think we can leave this for now.\n>\n> > If we specified the\n> > XID on the publisher, multiple subscribers would skip that\n> > transaction.\n> >\n> > >\n> > > > For (2), what I'm thinking is to add a new action to ALTER\n> > > > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > > > TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> > > > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > > > XID to a new column of pg_subscription or a new catalog, having the\n> > > > worker reread its subscription information. Once the worker skipped\n> > > > the specified transaction, it resets the transaction to skip on the\n> > > > catalog.\n> > > >\n> > >\n> > > What if we fail while updating the reset information in the catalog?\n> > > Will it be the responsibility of the user to reset such a transaction\n> > > or we will retry it after restart of worker? 
Now, say, we give such a\n> > > responsibility to the user and the user forgets to reset it then there\n> > > is a possibility that after wraparound we will again skip the\n> > > transaction which is not intended. And, if we want to retry it after\n> > > restart of worker, how will the worker remember the previous failure?\n> >\n> > As described above, setting and resetting XID to skip is implemented\n> > as a normal system catalog change, so it's crash-safe and persisted. I\n> > think that the worker can either removes the XID or mark it as done\n> > once it skipped the specified transaction so that it won't skip the\n> > same XID again after wraparound.\n> >\n>\n> It all depends on when exactly you want to update the catalog\n> information. Say after skipping commit of the XID, we do update the\n> corresponding LSN to be communicated as already processed to the\n> subscriber and then get the error while updating the catalog\n> information then next time we might not know whether to update the\n> catalog for skipped XID.\n>\n> > Also, it might be better if we reset\n> > the XID also when a subscription field such as subconninfo is changed\n> > because it could imply the worker will connect to another publisher\n> > having a different XID space.\n> >\n> > We also need to handle the cases where the user specifies an old XID\n> > or XID whose transaction is already prepared on the subscriber. I\n> > think the worker can reset the XID with a warning when it finds out\n> > that the XID seems no longer valid or it cannot skip the specified\n> > XID. For example in the former case, it can do that when the first\n> > received transaction’s XID is newer than the specified XID.\n> >\n>\n> But how can we guarantee that older XID can't be received later? 
Is\n> there a guarantee that we receive the transactions on subscriber in\n> XID order.\n\nConsidering the above two comments, it might be better to provide a\nway to skip the transaction that is already known to be conflicted\nrather than allowing users to specify the arbitrary XID.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 27 May 2021 13:25:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 9:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 12:26 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, May 24, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I think you need to consider few more things here:\n> > > > (a) Say the error occurs after applying some part of changes, then\n> > > > just skipping the remaining part won't be sufficient, we probably need\n> > > > to someway rollback the applied changes (by rolling back the\n> > > > transaction or in some other way).\n> > >\n> > > After more thought, it might be better to that setting and resetting\n> > > the XID to skip requires disabling the subscription.\n> > >\n> >\n> > It might be better if it doesn't require disabling the subscription\n> > because it would be more steps for the user to disable/enable it. It\n> > is not clear to me what exactly you want to gain by disabling the\n> > subscription in this case.\n>\n> The situation I’m considered is where the user specifies the XID while\n> the worker is applying the changes of the transaction with that XID.\n> In this case, I think we need to somehow rollback the changes applied\n> so far. Perhaps we can either rollback the transaction and ignore the\n> remaining changes or restart and ignore the entire transaction from\n> the beginning.\n>\n\nIf we follow your suggestion of only allowing XIDs that have been\nknown to have conflicts then probably we don't need to worry about\nrollbacks.\n\n> > > >\n> > > > > For (2), what I'm thinking is to add a new action to ALTER\n> > > > > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > > > > TRANSACTION 590. 
Also, we can have actions to reset it; ALTER\n> > > > > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > > > > XID to a new column of pg_subscription or a new catalog, having the\n> > > > > worker reread its subscription information. Once the worker skipped\n> > > > > the specified transaction, it resets the transaction to skip on the\n> > > > > catalog.\n> > > > >\n> > > >\n> > > > What if we fail while updating the reset information in the catalog?\n> > > > Will it be the responsibility of the user to reset such a transaction\n> > > > or we will retry it after restart of worker? Now, say, we give such a\n> > > > responsibility to the user and the user forgets to reset it then there\n> > > > is a possibility that after wraparound we will again skip the\n> > > > transaction which is not intended. And, if we want to retry it after\n> > > > restart of worker, how will the worker remember the previous failure?\n> > >\n> > > As described above, setting and resetting XID to skip is implemented\n> > > as a normal system catalog change, so it's crash-safe and persisted. I\n> > > think that the worker can either removes the XID or mark it as done\n> > > once it skipped the specified transaction so that it won't skip the\n> > > same XID again after wraparound.\n> > >\n> >\n> > It all depends on when exactly you want to update the catalog\n> > information. 
Say after skipping commit of the XID, we do update the\n> > corresponding LSN to be communicated as already processed to the\n> > subscriber and then get the error while updating the catalog\n> > information then next time we might not know whether to update the\n> > catalog for skipped XID.\n> >\n> > > Also, it might be better if we reset\n> > > the XID also when a subscription field such as subconninfo is changed\n> > > because it could imply the worker will connect to another publisher\n> > > having a different XID space.\n> > >\n> > > We also need to handle the cases where the user specifies an old XID\n> > > or XID whose transaction is already prepared on the subscriber. I\n> > > think the worker can reset the XID with a warning when it finds out\n> > > that the XID seems no longer valid or it cannot skip the specified\n> > > XID. For example in the former case, it can do that when the first\n> > > received transaction’s XID is newer than the specified XID.\n> > >\n> >\n> > But how can we guarantee that older XID can't be received later? Is\n> > there a guarantee that we receive the transactions on subscriber in\n> > XID order.\n>\n> Considering the above two comments, it might be better to provide a\n> way to skip the transaction that is already known to be conflicted\n> rather than allowing users to specify the arbitrary XID.\n>\n\nOkay, that makes sense but still not sure how you will identify if we\nneed to reset the XID in case of failure doing that in the previous\nattempt. Also, I am thinking that instead of a stat view, do we need\nto consider having a system table (pg_replication_conflicts or\nsomething like that) for this because what if stats information is\nlost (say either due to crash or due to UDP packet loss), can we rely\non stats view for this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 May 2021 11:18:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, May 26, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 6:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 7:21 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > If there's no way to get the \"correct LSN\", then why can't we just\n> > > print that LSN in the error context and/or in the new statistics view\n> > > for logical replication workers, so that any of the existing ways can\n> > > be used to skip exactly one txn?\n> >\n> > I think specifying XID to the subscription is more understandable for users.\n> >\n>\n> I agree with you that specifying XID could be easier and\n> understandable for users. I was thinking and studying a bit about what\n> other systems do in this regard. Why don't we try to provide conflict\n> resolution methods for users? The idea could be that either the\n> conflicts can be resolved automatically or manually. In the case of\n> manual resolution, users can use the existing methods or the XID stuff\n> you are proposing here and in case of automatic resolution, the\n> in-built or corresponding user-defined functions will be invoked for\n> conflict resolution. There are more details to figure out in the\n> automatic resolution scheme but I see a lot of value in doing the\n> same.\n\nYeah, I also see a lot of value in automatic conflict resolution. But\nmaybe we can have both ways? For example, in case where the user wants\nto resolve conflicts in different ways or a conflict that cannot be\nresolved by automatic resolution (not sure there is in practice\nthough), the manual resolution would also have value.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 27 May 2021 15:30:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 9:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 26, 2021 at 3:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, May 25, 2021 at 12:26 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 24, 2021 at 7:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, May 24, 2021 at 1:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > I think you need to consider few more things here:\n> > > > > (a) Say the error occurs after applying some part of changes, then\n> > > > > just skipping the remaining part won't be sufficient, we probably need\n> > > > > to someway rollback the applied changes (by rolling back the\n> > > > > transaction or in some other way).\n> > > >\n> > > > After more thought, it might be better to that setting and resetting\n> > > > the XID to skip requires disabling the subscription.\n> > > >\n> > >\n> > > It might be better if it doesn't require disabling the subscription\n> > > because it would be more steps for the user to disable/enable it. It\n> > > is not clear to me what exactly you want to gain by disabling the\n> > > subscription in this case.\n> >\n> > The situation I’m considered is where the user specifies the XID while\n> > the worker is applying the changes of the transaction with that XID.\n> > In this case, I think we need to somehow rollback the changes applied\n> > so far. 
Perhaps we can either rollback the transaction and ignore the\n> > remaining changes or restart and ignore the entire transaction from\n> > the beginning.\n> >\n>\n> If we follow your suggestion of only allowing XIDs that have been\n> known to have conflicts then probably we don't need to worry about\n> rollbacks.\n>\n> > > > >\n> > > > > > For (2), what I'm thinking is to add a new action to ALTER\n> > > > > > SUBSCRIPTION command like ALTER SUBSCRIPTION test_sub SET SKIP\n> > > > > > TRANSACTION 590. Also, we can have actions to reset it; ALTER\n> > > > > > SUBSCRIPTION test_sub RESET SKIP TRANSACTION. Those commands add the\n> > > > > > XID to a new column of pg_subscription or a new catalog, having the\n> > > > > > worker reread its subscription information. Once the worker skipped\n> > > > > > the specified transaction, it resets the transaction to skip on the\n> > > > > > catalog.\n> > > > > >\n> > > > >\n> > > > > What if we fail while updating the reset information in the catalog?\n> > > > > Will it be the responsibility of the user to reset such a transaction\n> > > > > or we will retry it after restart of worker? Now, say, we give such a\n> > > > > responsibility to the user and the user forgets to reset it then there\n> > > > > is a possibility that after wraparound we will again skip the\n> > > > > transaction which is not intended. And, if we want to retry it after\n> > > > > restart of worker, how will the worker remember the previous failure?\n> > > >\n> > > > As described above, setting and resetting XID to skip is implemented\n> > > > as a normal system catalog change, so it's crash-safe and persisted. I\n> > > > think that the worker can either removes the XID or mark it as done\n> > > > once it skipped the specified transaction so that it won't skip the\n> > > > same XID again after wraparound.\n> > > >\n> > >\n> > > It all depends on when exactly you want to update the catalog\n> > > information. 
Say after skipping commit of the XID, we do update the\n> > > corresponding LSN to be communicated as already processed to the\n> > > subscriber and then get the error while updating the catalog\n> > > information then next time we might not know whether to update the\n> > > catalog for skipped XID.\n> > >\n> > > > Also, it might be better if we reset\n> > > > the XID also when a subscription field such as subconninfo is changed\n> > > > because it could imply the worker will connect to another publisher\n> > > > having a different XID space.\n> > > >\n> > > > We also need to handle the cases where the user specifies an old XID\n> > > > or XID whose transaction is already prepared on the subscriber. I\n> > > > think the worker can reset the XID with a warning when it finds out\n> > > > that the XID seems no longer valid or it cannot skip the specified\n> > > > XID. For example in the former case, it can do that when the first\n> > > > received transaction’s XID is newer than the specified XID.\n> > > >\n> > >\n> > > But how can we guarantee that older XID can't be received later? Is\n> > > there a guarantee that we receive the transactions on subscriber in\n> > > XID order.\n> >\n> > Considering the above two comments, it might be better to provide a\n> > way to skip the transaction that is already known to be conflicted\n> > rather than allowing users to specify the arbitrary XID.\n> >\n>\n> Okay, that makes sense but still not sure how will you identify if we\n> need to reset XID in case of failure doing that in the previous\n> attempt.\n\nIt's just an idea, but couldn't we record the failed transaction's XID as\nwell as its commit LSN? The sequence I'm thinking of is:\n\n1. the worker records the XID and commit LSN of the failed transaction\nto a catalog.\n2. the user specifies how to resolve that conflict transaction\n(currently only 'skip' is supported) and writes to the catalog.\n3. the worker does the resolution method according to the catalog. 
If\nthe worker didn't start to apply those changes, it can skip the entire\ntransaction. If it did, it rolls back the transaction and ignores the\nremaining changes.\n\nThe worker needs neither to reset the information of the last failed\ntransaction nor to mark the conflicted transaction as resolved. The\nworker will ignore that information when checking the catalog if the\ncommit LSN has already been passed.\n\n> Also, I am thinking that instead of a stat view, do we need\n> to consider having a system table (pg_replication_conflicts or\n> something like that) for this because what if stats information is\n> lost (say either due to crash or due to udp packet loss), can we rely\n> on stats view for this?\n\nYeah, it seems better to use a catalog.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 27 May 2021 17:15:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 1:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Okay, that makes sense but still not sure how will you identify if we\n> > need to reset XID in case of failure doing that in the previous\n> > attempt.\n>\n> It's a just idea but we can record the failed transaction with XID as\n> well as its commit LSN passed? The sequence I'm thinking is,\n>\n> 1. the worker records the XID and commit LSN of the failed transaction\n> to a catalog.\n>\n\nWhen will you record this info? I am not sure if we can try to update\nthis when an error has occurred. We can think of using try..catch in\napply worker and then record it in catch on error but would that be\nadvisable? One random thought that occurred to me is to that apply\nworker notifies such information to the launcher (or maybe another\nprocess) which will log this information.\n\n> 2. the user specifies how to resolve that conflict transaction\n> (currently only 'skip' is supported) and writes to the catalog.\n> 3. the worker does the resolution method according to the catalog. If\n> the worker didn't start to apply those changes, it can skip the entire\n> transaction. If did, it rollbacks the transaction and ignores the\n> remaining.\n>\n> The worker needs neither to reset information of the last failed\n> transaction nor to mark the conflicted transaction as resolved. The\n> worker will ignore that information when checking the catalog if the\n> commit LSN is passed.\n>\n\nSo won't this require us to check the required info in the catalog\nbefore applying each transaction? If so, that might be overhead, maybe\nwe can build some cache of the highest commitLSN that can be consulted\nrather than the catalog table. 
I think we need to think about when to\nremove rows for which conflict has been resolved as we can't let that\ninformation grow infinitely.\n\n> > Also, I am thinking that instead of a stat view, do we need\n> > to consider having a system table (pg_replication_conflicts or\n> > something like that) for this because what if stats information is\n> > lost (say either due to crash or due to udp packet loss), can we rely\n> > on stats view for this?\n>\n> Yeah, it seems better to use a catalog.\n>\n\nOkay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 May 2021 15:34:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I agree with you that specifying XID could be easier and\n> > understandable for users. I was thinking and studying a bit about what\n> > other systems do in this regard. Why don't we try to provide conflict\n> > resolution methods for users? The idea could be that either the\n> > conflicts can be resolved automatically or manually. In the case of\n> > manual resolution, users can use the existing methods or the XID stuff\n> > you are proposing here and in case of automatic resolution, the\n> > in-built or corresponding user-defined functions will be invoked for\n> > conflict resolution. There are more details to figure out in the\n> > automatic resolution scheme but I see a lot of value in doing the\n> > same.\n>\n> Yeah, I also see a lot of value in automatic conflict resolution. But\n> maybe we can have both ways? For example, in case where the user wants\n> to resolve conflicts in different ways or a conflict that cannot be\n> resolved by automatic resolution (not sure there is in practice\n> though), the manual resolution would also have value.\n>\n\nRight, that is exactly what I was saying. So, even if both can be done\nas separate patches, we should try to design the manual resolution in\na way that can be extended for an automatic resolution system. I think\nwe can try to have some initial idea/design/POC for an automatic\nresolution as well to ensure that the manual resolution scheme can be\nfurther extended.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 27 May 2021 15:56:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 7:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 26, 2021 at 6:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I agree with you that specifying XID could be easier and\n> > > understandable for users. I was thinking and studying a bit about what\n> > > other systems do in this regard. Why don't we try to provide conflict\n> > > resolution methods for users? The idea could be that either the\n> > > conflicts can be resolved automatically or manually. In the case of\n> > > manual resolution, users can use the existing methods or the XID stuff\n> > > you are proposing here and in case of automatic resolution, the\n> > > in-built or corresponding user-defined functions will be invoked for\n> > > conflict resolution. There are more details to figure out in the\n> > > automatic resolution scheme but I see a lot of value in doing the\n> > > same.\n> >\n> > Yeah, I also see a lot of value in automatic conflict resolution. But\n> > maybe we can have both ways? For example, in case where the user wants\n> > to resolve conflicts in different ways or a conflict that cannot be\n> > resolved by automatic resolution (not sure there is in practice\n> > though), the manual resolution would also have value.\n> >\n>\n> Right, that is exactly what I was saying. So, even if both can be done\n> as separate patches, we should try to design the manual resolution in\n> a way that can be extended for an automatic resolution system. I think\n> we can try to have some initial idea/design/POC for an automatic\n> resolution as well to ensure that the manual resolution scheme can be\n> further extended.\n\nTotally agreed.\n\nBut perhaps we might want to note that the conflict resolution we're\ntalking about is to resolve conflicts at the row or column level. 
It\ndoesn't necessarily raise an ERROR and the granularity of resolution\nis per record or column. For example, if a DELETE and an UPDATE\nprocess the same tuple (searched by PK), the UPDATE may not find the\ntuple and be ignored due to the tuple having been already deleted. In\nthis case, no ERROR will occur (i.e., the UPDATE will be ignored), but the\nuser may want to do another conflict resolution. On the other hand,\nthe feature proposed here assumes that an error has already occurred\nand logical replication has already been stopped. It resolves this by\nskipping the entire transaction.\n\nIIUC the conflict resolution can be thought of as a combination of\ntypes of conflicts and the resolution that can be applied to them. For\nexample, if there is a conflict between INSERT and INSERT and the\nlatter INSERT violates the unique constraint, an ERROR is raised. So\nDBA can resolve it manually. But there is another way to automatically\nresolve it by selecting the tuple having a newer timestamp. On the\nother hand, in the DELETE and UPDATE conflict described above, it's\npossible to automatically ignore the fact that the UPDATE could update\nthe tuple. Or we can even generate an ERROR so that DBA can resolve it\nmanually. DBA can manually resolve the conflict in various ways:\nskipping the entire transaction from the origin, choosing the tuple\nhaving a newer/older timestamp, etc.\n\nIn that sense, we can think of the feature proposed here as a feature\nthat provides a way to resolve the conflict that would originally\ncause an ERROR by skipping the entire transaction. If we add a\nsolution that raises an ERROR for conflicts that don't originally\nraise an ERROR (like DELETE and UPDATE conflict) in the future, we\nwill be able to manually skip each transaction for all types of\nconflicts.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 29 May 2021 11:32:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, May 27, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 1:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 27, 2021 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Okay, that makes sense but still not sure how will you identify if we\n> > > need to reset XID in case of failure doing that in the previous\n> > > attempt.\n> >\n> > It's a just idea but we can record the failed transaction with XID as\n> > well as its commit LSN passed? The sequence I'm thinking is,\n> >\n> > 1. the worker records the XID and commit LSN of the failed transaction\n> > to a catalog.\n> >\n>\n> When will you record this info? I am not sure if we can try to update\n> this when an error has occurred. We can think of using try..catch in\n> apply worker and then record it in catch on error but would that be\n> advisable? One random thought that occurred to me is to that apply\n> worker notifies such information to the launcher (or maybe another\n> process) which will log this information.\n\nYeah, I was concerned about that too and had the same idea. The\ninformation still could not be written if the server crashes before\nthe launcher writes it. But I think that's acceptable.\n\n>\n> > 2. the user specifies how to resolve that conflict transaction\n> > (currently only 'skip' is supported) and writes to the catalog.\n> > 3. the worker does the resolution method according to the catalog. If\n> > the worker didn't start to apply those changes, it can skip the entire\n> > transaction. If did, it rollbacks the transaction and ignores the\n> > remaining.\n> >\n> > The worker needs neither to reset information of the last failed\n> > transaction nor to mark the conflicted transaction as resolved. 
The\n> > worker will ignore that information when checking the catalog if the\n> > commit LSN is passed.\n> >\n>\n> So won't this require us to check the required info in the catalog\n> before applying each transaction? If so, that might be overhead, maybe\n> we can build some cache of the highest commitLSN that can be consulted\n> rather than the catalog table.\n\nI think workers can cache that information when they start, and\ninvalidate and reload the cache when the catalog gets updated.\nSpecifying the XID to skip will update the catalog, invalidating the\ncache.\n\n> I think we need to think about when to\n> remove rows for which conflict has been resolved as we can't let that\n> information grow infinitely.\n\nI guess we can update catalog tuples in place when another conflict\nhappens next time. The catalog tuple should be fixed size. The\nalready-resolved conflict will have the commit LSN older than its\nreplication origin's LSN.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 29 May 2021 11:56:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, May 29, 2021 at 8:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 27, 2021 at 1:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > 1. the worker records the XID and commit LSN of the failed transaction\n> > > to a catalog.\n> > >\n> >\n> > When will you record this info? I am not sure if we can try to update\n> > this when an error has occurred. We can think of using try..catch in\n> > apply worker and then record it in catch on error but would that be\n> > advisable? One random thought that occurred to me is to that apply\n> > worker notifies such information to the launcher (or maybe another\n> > process) which will log this information.\n>\n> Yeah, I was concerned about that too and had the same idea. The\n> information still could not be written if the server crashes before\n> the launcher writes it. But I think it's an acceptable.\n>\n\nTrue, because even if the launcher restarts, the apply worker will\nerror out again and resend the information. I guess we can have an\nerror queue where apply workers can add their information and the\nlauncher will then process those. If we do that, then we need to\nprobably define what we want to do if the queue gets full, either\napply worker nudge launcher and wait or it can just throw an error and\ncontinue. If you have any better ideas to share this information then\nwe can consider those as well.\n\n> >\n> > > 2. the user specifies how to resolve that conflict transaction\n> > > (currently only 'skip' is supported) and writes to the catalog.\n> > > 3. the worker does the resolution method according to the catalog. If\n> > > the worker didn't start to apply those changes, it can skip the entire\n> > > transaction. 
If did, it rollbacks the transaction and ignores the\n> > > remaining.\n> > >\n> > > The worker needs neither to reset information of the last failed\n> > > transaction nor to mark the conflicted transaction as resolved. The\n> > > worker will ignore that information when checking the catalog if the\n> > > commit LSN is passed.\n> > >\n> >\n> > So won't this require us to check the required info in the catalog\n> > before applying each transaction? If so, that might be overhead, maybe\n> > we can build some cache of the highest commitLSN that can be consulted\n> > rather than the catalog table.\n>\n> I think workers can cache that information when starts and invalidates\n> and reload the cache when the catalog gets updated. Specifying to\n> skip XID will update the catalog, invalidating the cache.\n>\n> > I think we need to think about when to\n> > remove rows for which conflict has been resolved as we can't let that\n> > information grow infinitely.\n>\n> I guess we can update catalog tuples in place when another conflict\n> happens next time. The catalog tuple should be fixed size. The\n> already-resolved conflict will have the commit LSN older than its\n> replication origin's LSN.\n>\n\nOkay, but I have a slight concern that we will keep xid in the system\nwhich might no longer be valid. So, we will keep this info\nabout subscribers around till one performs drop subscription,\nhopefully, that doesn't lead to too many rows. This will be okay as\nper the current design but say tomorrow we decide to parallelize the\napply for a subscription then there could be multiple errors\ncorresponding to a subscription and in that case, such a design might\nappear quite limiting. One possibility could be that when the launcher\nis periodically checking for new error messages, it can clean up the\nconflicts catalog as well, or maybe autovacuum does this periodically\nas it does for stats (via pgstat_vacuum_stat).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 29 May 2021 12:24:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, May 29, 2021 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, May 29, 2021 at 8:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 27, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, May 27, 2021 at 1:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > 1. the worker records the XID and commit LSN of the failed transaction\n> > > > to a catalog.\n> > > >\n> > >\n> > > When will you record this info? I am not sure if we can try to update\n> > > this when an error has occurred. We can think of using try..catch in\n> > > apply worker and then record it in catch on error but would that be\n> > > advisable? One random thought that occurred to me is to that apply\n> > > worker notifies such information to the launcher (or maybe another\n> > > process) which will log this information.\n> >\n> > Yeah, I was concerned about that too and had the same idea. The\n> > information still could not be written if the server crashes before\n> > the launcher writes it. But I think it's an acceptable.\n> >\n>\n> True, because even if the launcher restarts, the apply worker will\n> error out again and resend the information. I guess we can have an\n> error queue where apply workers can add their information and the\n> launcher will then process those. If we do that, then we need to\n> probably define what we want to do if the queue gets full, either\n> apply worker nudge launcher and wait or it can just throw an error and\n> continue. If you have any better ideas to share this information then\n> we can consider those as well.\n\n+1 for using error queue. Maybe we need to avoid queuing the same\nerror more than once to avoid the catalog from being updated\nfrequently?\n\n>\n> > >\n> > > > 2. the user specifies how to resolve that conflict transaction\n> > > > (currently only 'skip' is supported) and writes to the catalog.\n> > > > 3. 
the worker does the resolution method according to the catalog. If\n> > > > the worker didn't start to apply those changes, it can skip the entire\n> > > > transaction. If did, it rollbacks the transaction and ignores the\n> > > > remaining.\n> > > >\n> > > > The worker needs neither to reset information of the last failed\n> > > > transaction nor to mark the conflicted transaction as resolved. The\n> > > > worker will ignore that information when checking the catalog if the\n> > > > commit LSN is passed.\n> > > >\n> > >\n> > > So won't this require us to check the required info in the catalog\n> > > before applying each transaction? If so, that might be overhead, maybe\n> > > we can build some cache of the highest commitLSN that can be consulted\n> > > rather than the catalog table.\n> >\n> > I think workers can cache that information when starts and invalidates\n> > and reload the cache when the catalog gets updated. Specifying to\n> > skip XID will update the catalog, invalidating the cache.\n> >\n> > > I think we need to think about when to\n> > > remove rows for which conflict has been resolved as we can't let that\n> > > information grow infinitely.\n> >\n> > I guess we can update catalog tuples in place when another conflict\n> > happens next time. The catalog tuple should be fixed size. The\n> > already-resolved conflict will have the commit LSN older than its\n> > replication origin's LSN.\n> >\n>\n> Okay, but I have a slight concern that we will keep xid in the system\n> which might have been no longer valid. So, we will keep this info\n> about subscribers around till one performs drop subscription,\n> hopefully, that doesn't lead to too many rows. This will be okay as\n> per the current design but say tomorrow we decide to parallelize the\n> apply for a subscription then there could be multiple errors\n> corresponding to a subscription and in that case, such a design might\n> appear quite limiting. 
One possibility could be that when the launcher\n> is periodically checking for new error messages, it can clean up the\n> conflicts catalog as well, or maybe autovacuum does this periodically\n> as it does for stats (via pgstat_vacuum_stat).\n\nYeah, it's better to have a way to cleanup no longer valid entries in\nthe catalog in the case where the worker failed to remove it. I prefer\nthe former idea so far, so I'll implement it in a PoC patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 31 May 2021 16:09:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, May 31, 2021 at 12:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, May 29, 2021 at 3:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > > > 1. the worker records the XID and commit LSN of the failed transaction\n> > > > > to a catalog.\n> > > > >\n> > > >\n> > > > When will you record this info? I am not sure if we can try to update\n> > > > this when an error has occurred. We can think of using try..catch in\n> > > > apply worker and then record it in catch on error but would that be\n> > > > advisable? One random thought that occurred to me is to that apply\n> > > > worker notifies such information to the launcher (or maybe another\n> > > > process) which will log this information.\n> > >\n> > > Yeah, I was concerned about that too and had the same idea. The\n> > > information still could not be written if the server crashes before\n> > > the launcher writes it. But I think it's an acceptable.\n> > >\n> >\n> > True, because even if the launcher restarts, the apply worker will\n> > error out again and resend the information. I guess we can have an\n> > error queue where apply workers can add their information and the\n> > launcher will then process those. If we do that, then we need to\n> > probably define what we want to do if the queue gets full, either\n> > apply worker nudge launcher and wait or it can just throw an error and\n> > continue. If you have any better ideas to share this information then\n> > we can consider those as well.\n>\n> +1 for using error queue. Maybe we need to avoid queuing the same\n> error more than once to avoid the catalog from being updated\n> frequently?\n>\n\nYes, I think it is important because after logging the subscription\nmay still error again unless the user does something to skip or\nresolve the conflict. 
I guess you need to check for the existence of\nthe error in the systable and/or in the queue.\n\n> >\n> > >\n> > > I guess we can update catalog tuples in place when another conflict\n> > > happens next time. The catalog tuple should be fixed size. The\n> > > already-resolved conflict will have the commit LSN older than its\n> > > replication origin's LSN.\n> > >\n> >\n> > Okay, but I have a slight concern that we will keep xid in the system\n> > which might have been no longer valid. So, we will keep this info\n> > about subscribers around till one performs drop subscription,\n> > hopefully, that doesn't lead to too many rows. This will be okay as\n> > per the current design but say tomorrow we decide to parallelize the\n> > apply for a subscription then there could be multiple errors\n> > corresponding to a subscription and in that case, such a design might\n> > appear quite limiting. One possibility could be that when the launcher\n> > is periodically checking for new error messages, it can clean up the\n> > conflicts catalog as well, or maybe autovacuum does this periodically\n> > as it does for stats (via pgstat_vacuum_stat).\n>\n> Yeah, it's better to have a way to cleanup no longer valid entries in\n> the catalog in the case where the worker failed to remove it. I prefer\n> the former idea so far,\n>\n\nWhich idea do you refer to here as former (cleaning up by launcher)?\n\n> so I'll implement it in a PoC patch.\n>\n\nOkay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 31 May 2021 17:10:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 27.05.21 12:04, Amit Kapila wrote:\n>>> Also, I am thinking that instead of a stat view, do we need\n>>> to consider having a system table (pg_replication_conflicts or\n>>> something like that) for this because what if stats information is\n>>> lost (say either due to crash or due to udp packet loss), can we rely\n>>> on stats view for this?\n>> Yeah, it seems better to use a catalog.\n>>\n> Okay.\n\nCould you store it in shared memory? You don't need it to be crash safe, \nsince the subscription will just run into the same error again after \nrestart. You just don't want it to be lost, like with the statistics \ncollector.\n\n\n",
"msg_date": "Mon, 31 May 2021 21:25:55 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 12:55 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 27.05.21 12:04, Amit Kapila wrote:\n> >>> Also, I am thinking that instead of a stat view, do we need\n> >>> to consider having a system table (pg_replication_conflicts or\n> >>> something like that) for this because what if stats information is\n> >>> lost (say either due to crash or due to udp packet loss), can we rely\n> >>> on stats view for this?\n> >> Yeah, it seems better to use a catalog.\n> >>\n> > Okay.\n>\n> Could you store it shared memory? You don't need it to be crash safe,\n> since the subscription will just run into the same error again after\n> restart. You just don't want it to be lost, like with the statistics\n> collector.\n>\n\nBut, won't that be costly in cases where we have errors in the\nprocessing of very large transactions? Subscription has to process all\nthe data before it gets an error. I think we can even imagine this\nfeature to be extended to use commitLSN as a skip candidate in which\ncase we can even avoid getting the data of that transaction from the\npublisher. So if this information is persistent, the user can even set\nthe skip identifier after the restart before the publisher can send\nall the data.\n\nAlso, I think we can't assume after the restart we will get the same\nerror because the user can perform some operations after the restart\nand before we try to apply the same transaction. It might be that the\nuser wanted to see all the errors before the user can set the skip\nidentifier (and or method).\n\nI think the XID (or say another identifier like commitLSN) which we\nwant to use for skipping the transaction as specified by the user has\nto be stored in the catalog because otherwise, after the restart we\nwon't remember it and the user won't know that he needs to set it\nagain. 
Now, say we have multiple skip identifiers (XIDs, commitLSN,\n..), isn't it better to store all conflict-related information in a\nseparate catalog like pg_subscription_conflict or something like that.\nI think it might be also better to later extend it for auto conflict\nresolution where the user can specify auto conflict resolution info\nfor a subscription. Is it better to store all such information in\npg_subscription or have a separate catalog? It is possible that even\nif we have a separate catalog for conflict info, we might not want to\nstore error info there.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Jun 2021 09:31:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 12:55 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 27.05.21 12:04, Amit Kapila wrote:\n> > >>> Also, I am thinking that instead of a stat view, do we need\n> > >>> to consider having a system table (pg_replication_conflicts or\n> > >>> something like that) for this because what if stats information is\n> > >>> lost (say either due to crash or due to udp packet loss), can we rely\n> > >>> on stats view for this?\n> > >> Yeah, it seems better to use a catalog.\n> > >>\n> > > Okay.\n> >\n> > Could you store it shared memory? You don't need it to be crash safe,\n> > since the subscription will just run into the same error again after\n> > restart. You just don't want it to be lost, like with the statistics\n> > collector.\n> >\n>\n> But, won't that be costly in cases where we have errors in the\n> processing of very large transactions? Subscription has to process all\n> the data before it gets an error.\n\nI had the same concern. Particularly, the approach we currently\ndiscussed is to skip the transaction based on the information written\nby the worker rather than require the user to specify the XID.\nTherefore, we will always require the worker to process the same large\ntransaction after the restart in order to skip the transaction.\n\n> I think we can even imagine this\n> feature to be extended to use commitLSN as a skip candidate in which\n> case we can even avoid getting the data of that transaction from the\n> publisher. So if this information is persistent, the user can even set\n> the skip identifier after the restart before the publisher can send\n> all the data.\n\nAnother possible benefit of writing it to a catalog is that we can\nreplicate it to the physical standbys. 
If we have failover slots in\nthe future, the physical standby server also can resolve the conflict\nwithout processing a possibly large transaction.\n\n> I think the XID (or say another identifier like commitLSN) which we\n> want to use for skipping the transaction as specified by the user has\n> to be stored in the catalog because otherwise, after the restart we\n> won't remember it and the user won't know that he needs to set it\n> again. Now, say we have multiple skip identifiers (XIDs, commitLSN,\n> ..), isn't it better to store all conflict-related information in a\n> separate catalog like pg_subscription_conflict or something like that.\n> I think it might be also better to later extend it for auto conflict\n> resolution where the user can specify auto conflict resolution info\n> for a subscription. Is it better to store all such information in\n> pg_subscription or have a separate catalog? It is possible that even\n> if we have a separate catalog for conflict info, we might not want to\n> store error info there.\n\nJust to be clear, we need to store only the conflict-related\ninformation that cannot be resolved without manual intervention,\nright? That is, conflicts cause an error, exiting the workers. In\ngeneral, replication conflicts include also conflicts that don’t cause\nan error. I think that those conflicts don’t necessarily need to be\nstored in the catalog and don’t require manual intervention.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Jun 2021 13:37:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 10:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 12:55 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 27.05.21 12:04, Amit Kapila wrote:\n> > > >>> Also, I am thinking that instead of a stat view, do we need\n> > > >>> to consider having a system table (pg_replication_conflicts or\n> > > >>> something like that) for this because what if stats information is\n> > > >>> lost (say either due to crash or due to udp packet loss), can we rely\n> > > >>> on stats view for this?\n> > > >> Yeah, it seems better to use a catalog.\n> > > >>\n> > > > Okay.\n> > >\n> > > Could you store it shared memory? You don't need it to be crash safe,\n> > > since the subscription will just run into the same error again after\n> > > restart. You just don't want it to be lost, like with the statistics\n> > > collector.\n> > >\n> >\n> > But, won't that be costly in cases where we have errors in the\n> > processing of very large transactions? Subscription has to process all\n> > the data before it gets an error.\n>\n> I had the same concern. Particularly, the approach we currently\n> discussed is to skip the transaction based on the information written\n> by the worker rather than require the user to specify the XID.\n>\n\nYeah, but I was imagining that the user still needs to specify\nsomething to indicate that we need to skip it, otherwise, we might try\nto skip a transaction that the user wants to resolve by itself rather\nthan expecting us to skip it. 
Another point is if we don't store this\ninformation in a persistent way then how will we restrict a user to\nspecify some random XID which is not even errored after restart.\n\n> Therefore, we will always require the worker to process the same large\n> transaction after the restart in order to skip the transaction.\n>\n> > I think we can even imagine this\n> > feature to be extended to use commitLSN as a skip candidate in which\n> > case we can even avoid getting the data of that transaction from the\n> > publisher. So if this information is persistent, the user can even set\n> > the skip identifier after the restart before the publisher can send\n> > all the data.\n>\n> Another possible benefit of writing it to a catalog is that we can\n> replicate it to the physical standbys. If we have failover slots in\n> the future, the physical standby server also can resolve the conflict\n> without processing a possibly large transaction.\n>\n\nmakes sense.\n\n> > I think the XID (or say another identifier like commitLSN) which we\n> > want to use for skipping the transaction as specified by the user has\n> > to be stored in the catalog because otherwise, after the restart we\n> > won't remember it and the user won't know that he needs to set it\n> > again. Now, say we have multiple skip identifiers (XIDs, commitLSN,\n> > ..), isn't it better to store all conflict-related information in a\n> > separate catalog like pg_subscription_conflict or something like that.\n> > I think it might be also better to later extend it for auto conflict\n> > resolution where the user can specify auto conflict resolution info\n> > for a subscription. Is it better to store all such information in\n> > pg_subscription or have a separate catalog? 
It is possible that even\n> > if we have a separate catalog for conflict info, we might not want to\n> > store error info there.\n>\n> Just to be clear, we need to store only the conflict-related\n> information that cannot be resolved without manual intervention,\n> right? That is, conflicts cause an error, exiting the workers. In\n> general, replication conflicts include also conflicts that don’t cause\n> an error. I think that those conflicts don’t necessarily need to be\n> stored in the catalog and don’t require manual intervention.\n>\n\nYeah, I think we want to record the error cases but which other\nconflicts you are talking about here which doesn't lead to any sort of\nerror?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Jun 2021 10:58:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 10:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 1, 2021 at 12:55 AM Peter Eisentraut\n> > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > >\n> > > > On 27.05.21 12:04, Amit Kapila wrote:\n> > > > >>> Also, I am thinking that instead of a stat view, do we need\n> > > > >>> to consider having a system table (pg_replication_conflicts or\n> > > > >>> something like that) for this because what if stats information is\n> > > > >>> lost (say either due to crash or due to udp packet loss), can we rely\n> > > > >>> on stats view for this?\n> > > > >> Yeah, it seems better to use a catalog.\n> > > > >>\n> > > > > Okay.\n> > > >\n> > > > Could you store it shared memory? You don't need it to be crash safe,\n> > > > since the subscription will just run into the same error again after\n> > > > restart. You just don't want it to be lost, like with the statistics\n> > > > collector.\n> > > >\n> > >\n> > > But, won't that be costly in cases where we have errors in the\n> > > processing of very large transactions? Subscription has to process all\n> > > the data before it gets an error.\n> >\n> > I had the same concern. Particularly, the approach we currently\n> > discussed is to skip the transaction based on the information written\n> > by the worker rather than require the user to specify the XID.\n> >\n>\n> Yeah, but I was imagining that the user still needs to specify\n> something to indicate that we need to skip it, otherwise, we might try\n> to skip a transaction that the user wants to resolve by itself rather\n> than expecting us to skip it.\n\nYeah, currently what I'm thinking is that the worker writes the\nconflict that caused an error somewhere. 
If the user wants to resolve\nit manually they can specify the resolution method to the stopped\nsubscription. Until the user specifies the method and the worker\nresolves it or some fields of the subscription such as subconninfo are\nupdated, the conflict is not resolved and the information lasts.\n\n>\n> > > I think the XID (or say another identifier like commitLSN) which we\n> > > want to use for skipping the transaction as specified by the user has\n> > > to be stored in the catalog because otherwise, after the restart we\n> > > won't remember it and the user won't know that he needs to set it\n> > > again. Now, say we have multiple skip identifiers (XIDs, commitLSN,\n> > > ..), isn't it better to store all conflict-related information in a\n> > > separate catalog like pg_subscription_conflict or something like that.\n> > > I think it might be also better to later extend it for auto conflict\n> > > resolution where the user can specify auto conflict resolution info\n> > > for a subscription. Is it better to store all such information in\n> > > pg_subscription or have a separate catalog? It is possible that even\n> > > if we have a separate catalog for conflict info, we might not want to\n> > > store error info there.\n> >\n> > Just to be clear, we need to store only the conflict-related\n> > information that cannot be resolved without manual intervention,\n> > right? That is, conflicts cause an error, exiting the workers. In\n> > general, replication conflicts include also conflicts that don’t cause\n> > an error. 
I think that those conflicts don’t necessarily need to be\n> > stored in the catalog and don’t require manual intervention.\n> >\n>\n> Yeah, I think we want to record the error cases but which other\n> conflicts you are talking about here which doesn't lead to any sort of\n> error?\n\nFor example, I think it's one type of replication conflict that two\nupdates that arrived via logical replication or from the client update\nthe same record (e.g., having the same primary key) at the same time.\nIn that case an error doesn't happen and we always choose the update\nthat arrived later. But there are other possible resolution methods\nsuch as choosing the one that arrived former, using the one having a\nnewer commit timestamp, using something like priority of the node, and\neven raising an error so that the user manually resolves it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 1 Jun 2021 16:53:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 1:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 2:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 10:07 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 1, 2021 at 1:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jun 1, 2021 at 12:55 AM Peter Eisentraut\n> > > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > > >\n> > > > > On 27.05.21 12:04, Amit Kapila wrote:\n> > > > > >>> Also, I am thinking that instead of a stat view, do we need\n> > > > > >>> to consider having a system table (pg_replication_conflicts or\n> > > > > >>> something like that) for this because what if stats information is\n> > > > > >>> lost (say either due to crash or due to udp packet loss), can we rely\n> > > > > >>> on stats view for this?\n> > > > > >> Yeah, it seems better to use a catalog.\n> > > > > >>\n> > > > > > Okay.\n> > > > >\n> > > > > Could you store it shared memory? You don't need it to be crash safe,\n> > > > > since the subscription will just run into the same error again after\n> > > > > restart. You just don't want it to be lost, like with the statistics\n> > > > > collector.\n> > > > >\n> > > >\n> > > > But, won't that be costly in cases where we have errors in the\n> > > > processing of very large transactions? Subscription has to process all\n> > > > the data before it gets an error.\n> > >\n> > > I had the same concern. 
Particularly, the approach we currently\n> > > discussed is to skip the transaction based on the information written\n> > > by the worker rather than require the user to specify the XID.\n> > >\n> >\n> > Yeah, but I was imagining that the user still needs to specify\n> > something to indicate that we need to skip it, otherwise, we might try\n> > to skip a transaction that the user wants to resolve by itself rather\n> > than expecting us to skip it.\n>\n> Yeah, currently what I'm thinking is that the worker writes the\n> conflict that caused an error somewhere. If the user wants to resolve\n> it manually they can specify the resolution method to the stopped\n> subscription. Until the user specifies the method and the worker\n> resolves it or some fields of the subscription such as subconninfo are\n> updated, the conflict is not resolved and the information lasts.\n>\n\nI think we can work out such details, but tinkering with\nsubconninfo was not what I had in mind.\n\n> >\n> > > > I think the XID (or say another identifier like commitLSN) which we\n> > > > want to use for skipping the transaction as specified by the user has\n> > > > to be stored in the catalog because otherwise, after the restart we\n> > > > won't remember it and the user won't know that he needs to set it\n> > > > again. Now, say we have multiple skip identifiers (XIDs, commitLSN,\n> > > > ..), isn't it better to store all conflict-related information in a\n> > > > separate catalog like pg_subscription_conflict or something like that.\n> > > > I think it might be also better to later extend it for auto conflict\n> > > > resolution where the user can specify auto conflict resolution info\n> > > > for a subscription. Is it better to store all such information in\n> > > > pg_subscription or have a separate catalog? 
It is possible that even\n> > > > if we have a separate catalog for conflict info, we might not want to\n> > > > store error info there.\n> > >\n> > > Just to be clear, we need to store only the conflict-related\n> > > information that cannot be resolved without manual intervention,\n> > > right? That is, conflicts cause an error, exiting the workers. In\n> > > general, replication conflicts include also conflicts that don’t cause\n> > > an error. I think that those conflicts don’t necessarily need to be\n> > > stored in the catalog and don’t require manual intervention.\n> > >\n> >\n> > Yeah, I think we want to record the error cases but which other\n> > conflicts you are talking about here which doesn't lead to any sort of\n> > error?\n>\n> For example, I think it's one type of replication conflict that two\n> updates that arrived via logical replication or from the client update\n> the same record (e.g., having the same primary key) at the same time.\n> In that case an error doesn't happen and we always choose the update\n> that arrived later.\n>\n\nI think we choose whichever is earlier as we first try to find the\ntuple in local rel and if not found then we silently ignore the\nupdate/delete operation.\n\n> But there are other possible resolution methods\n> such as choosing the one that arrived former, using the one having a\n> newer commit timestamp, using something like priority of the node, and\n> even raising an error so that the user manually resolves it.\n>\n\nAgreed. I think we need to log only the ones which lead to error.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Jun 2021 15:17:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 01.06.21 06:01, Amit Kapila wrote:\n> But, won't that be costly in cases where we have errors in the\n> processing of very large transactions? Subscription has to process all\n> the data before it gets an error. I think we can even imagine this\n> feature to be extended to use commitLSN as a skip candidate in which\n> case we can even avoid getting the data of that transaction from the\n> publisher. So if this information is persistent, the user can even set\n> the skip identifier after the restart before the publisher can send\n> all the data.\n\nAt least in current practice, skipping parts of the logical replication \nstream on the subscriber is a rare, emergency-level operation when \nsomething that shouldn't have happened happened. So it doesn't really \nmatter how costly it is. It's not going to be more costly than the \nerror happening in the first place. All you'd need is one shared memory \nslot per subscription to store a xid to skip.\n\nWe will also want some proper conflict handling at some point. But I \nthink what is being discussed here is meant to be a repair tool, not a \npolicy tool, and I'm afraid it might get over-engineered.\n\n\n\n",
"msg_date": "Tue, 1 Jun 2021 17:35:44 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 9:05 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 01.06.21 06:01, Amit Kapila wrote:\n> > But, won't that be costly in cases where we have errors in the\n> > processing of very large transactions? Subscription has to process all\n> > the data before it gets an error. I think we can even imagine this\n> > feature to be extended to use commitLSN as a skip candidate in which\n> > case we can even avoid getting the data of that transaction from the\n> > publisher. So if this information is persistent, the user can even set\n> > the skip identifier after the restart before the publisher can send\n> > all the data.\n>\n> At least in current practice, skipping parts of the logical replication\n> stream on the subscriber is a rare, emergency-level operation when\n> something that shouldn't have happened happened. So it doesn't really\n> matter how costly it is. It's not going to be more costly than the\n> error happening in the first place. All you'd need is one shared memory\n> slot per subscription to store a xid to skip.\n>\n\nLeaving aside the performance point, how can we do by just storing\nskip identifier (XID/commitLSN) in shared_memory? How will the apply\nworker know about that information after restart? Do you expect the\nuser to set it again, if so, I think users might not like that? Also,\nhow will we prohibit users to give some identifier other than for\nfailed transactions, and if users provide that what should be our\naction? Without that, if users provide XID of some in-progress\ntransaction, we might need to do more work (rollback) than just\nskipping it.\n\n> We will also want some proper conflict handling at some point. 
But I\n> think what is being discussed here is meant to be a repair tool, not a\n> policy tool, and I'm afraid it might get over-engineered.\n>\n\nI got your point but I am also a bit skeptical that handling all\nboundary cases might become tricky if we go with a simple shared\nmemory technique but OTOH if we can handle all such cases then it is\nfine.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 11:37:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 9:05 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 01.06.21 06:01, Amit Kapila wrote:\n> > > But, won't that be costly in cases where we have errors in the\n> > > processing of very large transactions? Subscription has to process all\n> > > the data before it gets an error. I think we can even imagine this\n> > > feature to be extended to use commitLSN as a skip candidate in which\n> > > case we can even avoid getting the data of that transaction from the\n> > > publisher. So if this information is persistent, the user can even set\n> > > the skip identifier after the restart before the publisher can send\n> > > all the data.\n> >\n> > At least in current practice, skipping parts of the logical replication\n> > stream on the subscriber is a rare, emergency-level operation when\n> > something that shouldn't have happened happened. So it doesn't really\n> > matter how costly it is. It's not going to be more costly than the\n> > error happening in the first place. All you'd need is one shared memory\n> > slot per subscription to store a xid to skip.\n> >\n>\n> Leaving aside the performance point, how can we do by just storing\n> skip identifier (XID/commitLSN) in shared_memory? How will the apply\n> worker know about that information after restart? Do you expect the\n> user to set it again, if so, I think users might not like that? Also,\n> how will we prohibit users to give some identifier other than for\n> failed transactions, and if users provide that what should be our\n> action? Without that, if users provide XID of some in-progress\n> transaction, we might need to do more work (rollback) than just\n> skipping it.\n\nI think the simplest solution would be to have a fixed-size array on\nthe shared memory to store information of skipping transactions on the\nparticular subscription. 
Given that this feature is meant to be a\nrepair tool in emergency cases, 32 or 64 entries seem enough. That\ninformation should be visible to users via a system view, and each\nentry is cleared once the worker has skipped the transaction. Also, we\nwould need to clear the entry if the meta information of the\nsubscription such as conninfo and slot name has been changed. The\nworker reads that information at least when starting logical\nreplication. The worker receives changes from the publication and\nchecks if the transaction should be skipped when starting to apply those\nchanges. If so, the worker skips applying all changes of the\ntransaction and removes stream files if they exist.\n\nRegarding how to check whether the XID specified by the user\nis valid, I guess it’s not easy to do that since XIDs sent from the\npublisher are in random order. Considering the use case of this tool,\nthe likely situation is that logical replication gets stuck due to a\nproblem transaction and the worker repeatedly restarts and raises an\nerror. So I guess it would also be a good idea to let the user\nskip the first transaction (or first N transactions) after\nthe subscription starts logical replication. It’s less flexible but\nseems enough to solve such a situation and doesn’t have such a problem\nof validating the XID. If functionality letting the\nsubscriber know the oldest XID that could possibly be sent is useful\nfor other purposes as well, it would be a good idea to implement it, but\nI’m not sure about other use cases.\n\nAnyway, it seems to me that we need to consider the user interface\nfirst, especially how and what the user specifies the transaction to\nskip. My current feeling is that specifying XID is intuitive and\nflexible but the user needs to have 2 steps: checks XID and then\nspecifies it, and there is a risk that the user mistakenly specifies a\nwrong XID. 
On the other hand, the idea of specifying to skip the first\ntransaction doesn’t require the user to check and specify XID but is\nless flexible, and “the first” transaction might be ambiguous for the\nuser.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 15 Jun 2021 09:43:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 6:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 9:05 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 01.06.21 06:01, Amit Kapila wrote:\n> > > > But, won't that be costly in cases where we have errors in the\n> > > > processing of very large transactions? Subscription has to process all\n> > > > the data before it gets an error. I think we can even imagine this\n> > > > feature to be extended to use commitLSN as a skip candidate in which\n> > > > case we can even avoid getting the data of that transaction from the\n> > > > publisher. So if this information is persistent, the user can even set\n> > > > the skip identifier after the restart before the publisher can send\n> > > > all the data.\n> > >\n> > > At least in current practice, skipping parts of the logical replication\n> > > stream on the subscriber is a rare, emergency-level operation when\n> > > something that shouldn't have happened happened. So it doesn't really\n> > > matter how costly it is. It's not going to be more costly than the\n> > > error happening in the first place. All you'd need is one shared memory\n> > > slot per subscription to store a xid to skip.\n> > >\n> >\n> > Leaving aside the performance point, how can we do by just storing\n> > skip identifier (XID/commitLSN) in shared_memory? How will the apply\n> > worker know about that information after restart? Do you expect the\n> > user to set it again, if so, I think users might not like that? Also,\n> > how will we prohibit users to give some identifier other than for\n> > failed transactions, and if users provide that what should be our\n> > action? 
Without that, if users provide XID of some in-progress\n> > transaction, we might need to do more work (rollback) than just\n> > skipping it.\n>\n> I think the simplest solution would be to have a fixed-size array on\n> the shared memory to store information of skipping transactions on the\n> particular subscription. Given that this feature is meant to be a\n> repair tool in emergency cases, 32 or 64 entries seem enough.\n>\n\nIIUC, here you are talking about xids specified by the user to skip?\nIf so, then how will you get that information after the restart, and\nwhy you need 32 or 64 entries for it?\n\n>\n> Anyway, it seems to me that we need to consider the user interface\n> first, especially how and what the user specifies the transaction to\n> skip. My current feeling is that specifying XID is intuitive and\n> flexible but the user needs to have 2 steps: checks XID and then\n> specifies it, and there is a risk that the user mistakenly specifies a\n> wrong XID. On the other hand, the idea of specifying to skip the first\n> transaction doesn’t require the user to check and specify XID but is\n> less flexible, and “the first” transaction might be ambiguous for the\n> user.\n>\n\nI see your point in allowing to specify First N transactions but OTOH,\nI am slightly afraid that it might lead to skipping some useful\ntransactions which will make replica out-of-sync. BTW, is there any\ndata point for the user to check how many transactions it can skip?\nNormally, we won't be able to proceed till we resolve/skip the\ntransaction that is generating an error. One possibility could be that\nwe provide some *superuser* functions like\npg_logical_replication_skip_xact()/pg_logical_replication_reset_skip_xact()\nwhich takes subscription name/id and xid as input parameters. Then, I\nthink we can store this information in ReplicationState and probably\ntry to map to originid from subscription name/id to retrieve that\ninfo. 
We can probably document that the effects of these functions\nwon't last after the restart. Now, if this function is used by super\nusers then we can probably trust that they provide the XIDs that we\ncan trust to be skipped but OTOH making a restriction to allow these\nfunctions to be used by superusers might restrict the usage of this\nrepair tool.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Jun 2021 14:35:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 16, 2021 at 6:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 15, 2021 at 6:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 3:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 1, 2021 at 9:05 PM Peter Eisentraut\n> > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > >\n> > > > On 01.06.21 06:01, Amit Kapila wrote:\n> > > > > But, won't that be costly in cases where we have errors in the\n> > > > > processing of very large transactions? Subscription has to process all\n> > > > > the data before it gets an error. I think we can even imagine this\n> > > > > feature to be extended to use commitLSN as a skip candidate in which\n> > > > > case we can even avoid getting the data of that transaction from the\n> > > > > publisher. So if this information is persistent, the user can even set\n> > > > > the skip identifier after the restart before the publisher can send\n> > > > > all the data.\n> > > >\n> > > > At least in current practice, skipping parts of the logical replication\n> > > > stream on the subscriber is a rare, emergency-level operation when\n> > > > something that shouldn't have happened happened. So it doesn't really\n> > > > matter how costly it is. It's not going to be more costly than the\n> > > > error happening in the first place. All you'd need is one shared memory\n> > > > slot per subscription to store a xid to skip.\n> > > >\n> > >\n> > > Leaving aside the performance point, how can we do by just storing\n> > > skip identifier (XID/commitLSN) in shared_memory? How will the apply\n> > > worker know about that information after restart? Do you expect the\n> > > user to set it again, if so, I think users might not like that? Also,\n> > > how will we prohibit users to give some identifier other than for\n> > > failed transactions, and if users provide that what should be our\n> > > action? 
Without that, if users provide XID of some in-progress\n> > > transaction, we might need to do more work (rollback) than just\n> > > skipping it.\n> >\n> > I think the simplest solution would be to have a fixed-size array on\n> > the shared memory to store information of skipping transactions on the\n> > particular subscription. Given that this feature is meant to be a\n> > repair tool in emergency cases, 32 or 64 entries seem enough.\n> >\n>\n> IIUC, here you are talking about xids specified by the user to skip?\n\nYes. I think we need to store pairs of subid and xid.\n\n> If so, then how will you get that information after the restart, and\n> why you need 32 or 64 entries for it?\n\nThat information doesn't last after the restart. I think the\nsituation in which a DBA uses this tool would be that they fix the\nsubscription on the spot. Once the subscription has skipped the\ntransaction, the entry holding that information is cleared. So I’m thinking\nthat we don’t need to hold many entries and it does not necessarily\nneed to be durable. I think your below idea of storing that information in\nReplicationState seems better to me.\n\n>\n> >\n> > Anyway, it seems to me that we need to consider the user interface\n> > first, especially how and what the user specifies the transaction to\n> > skip. My current feeling is that specifying XID is intuitive and\n> > flexible but the user needs to have 2 steps: checks XID and then\n> > specifies it, and there is a risk that the user mistakenly specifies a\n> > wrong XID. 
On the other hand, the idea of specifying to skip the first\n> > transaction doesn’t require the user to check and specify XID but is\n> > less flexible, and “the first” transaction might be ambiguous for the\n> > user.\n> >\n>\n> I see your point in allowing to specify First N transactions but OTOH,\n> I am slightly afraid that it might lead to skipping some useful\n> transactions which will make replica out-of-sync.\n\nAgreed.\n\nIt might be better to skip only the first transaction.\n\n> BTW, is there any\n> data point for the user to check how many transactions it can skip?\n> Normally, we won't be able to proceed till we resolve/skip the\n> transaction that is generating an error. One possibility could be that\n> we provide some *superuser* functions like\n> pg_logical_replication_skip_xact()/pg_logical_replication_reset_skip_xact()\n> which takes subscription name/id and xid as input parameters. Then, I\n> think we can store this information in ReplicationState and probably\n> try to map to originid from subscription name/id to retrieve that\n> info. We can probably document that the effects of these functions\n> won't last after the restart.\n\nReplicationState seems a reasonable place to store that information.\n\n> Now, if this function is used by super\n> users then we can probably trust that they provide the XIDs that we\n> can trust to be skipped but OTOH making a restriction to allow these\n> functions to be used by superusers might restrict the usage of this\n> repair tool.\n\nIf we specify the subscription id or name, maybe we can allow also the\nowner of subscription to do that operation?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 17 Jun 2021 15:24:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 3:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > Now, if this function is used by super\n> > users then we can probably trust that they provide the XIDs that we\n> > can trust to be skipped but OTOH making a restriction to allow these\n> > functions to be used by superusers might restrict the usage of this\n> > repair tool.\n>\n> If we specify the subscription id or name, maybe we can allow also the\n> owner of subscription to do that operation?\n\nAh, the owner of the subscription must be superuser.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 17 Jun 2021 18:20:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jun 17, 2021 at 6:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 3:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > Now, if this function is used by super\n> > > users then we can probably trust that they provide the XIDs that we\n> > > can trust to be skipped but OTOH making a restriction to allow these\n> > > functions to be used by superusers might restrict the usage of this\n> > > repair tool.\n> >\n> > If we specify the subscription id or name, maybe we can allow also the\n> > owner of subscription to do that operation?\n>\n> Ah, the owner of the subscription must be superuser.\n\nI've attached PoC patches.\n\n0001 patch introduces the ability to skip transactions on the\nsubscriber side. We can specify XID to the subscription by like ALTER\nSUBSCRIPTION test_sub SET SKIP TRANSACTION 100. The implementation\nseems straightforward except for setting origin state. After skipping\nthe transaction we have to update the session origin state so that we\ncan start streaming the transaction next to the one that we just\nskipped in case of the server crash or restarting the apply worker. We\nset origin state to the commit WAL record. However, since we skip all\nchanges we don’t write any WAL even if we call CommitTransaction() at\nthe end of the skipped transaction. So the patch sets the origin state\nto the transaction that updates the pg_subscription system catalog to\nreset the skip XID. I think we need a discussion of this part.\n\nWith 0002 and 0003 patches, we report the error information in server\nlogs and the stats view, respectively. 
0002 patch adds errcontext for\nmessages that happened during applying the changes:\n\nERROR: duplicate key value violates unique constraint \"hoge_pkey\"\nDETAIL: Key (c)=(1) already exists.\nCONTEXT: during apply of \"INSERT\" for relation \"public.hoge\" in\ntransaction with xid 736 committs 2021-06-27 12:12:30.053887+09\n\n0003 patch adds pg_stat_logical_replication_error statistics view\ndiscussed on another thread[1]. The apply worker sends the error\ninformation to the stats collector if an error happens during applying\nchanges. We can check those errors as follow:\n\npostgres(1:25250)=# select * from pg_stat_logical_replication_error;\n subname | relid | action | xid | last_failure\n----------+-------+--------+-----+-------------------------------\n test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n(1 row)\n\nI added only columns required for the skipping transaction feature to\nthe view for now.\n\nPlease note that those patches are meant to evaluate the concept we've\ndiscussed so far. Those don't have the doc update yet.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 28 Jun 2021 13:42:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 10:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 6:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 3:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > Now, if this function is used by super\n> > > > users then we can probably trust that they provide the XIDs that we\n> > > > can trust to be skipped but OTOH making a restriction to allow these\n> > > > functions to be used by superusers might restrict the usage of this\n> > > > repair tool.\n> > >\n> > > If we specify the subscription id or name, maybe we can allow also the\n> > > owner of subscription to do that operation?\n> >\n> > Ah, the owner of the subscription must be superuser.\n>\n> I've attached PoC patches.\n>\n> 0001 patch introduces the ability to skip transactions on the\n> subscriber side. We can specify XID to the subscription by like ALTER\n> SUBSCRIPTION test_sub SET SKIP TRANSACTION 100. The implementation\n> seems straightforward except for setting origin state. After skipping\n> the transaction we have to update the session origin state so that we\n> can start streaming the transaction next to the one that we just\n> skipped in case of the server crash or restarting the apply worker. We\n> set origin state to the commit WAL record. However, since we skip all\n> changes we don’t write any WAL even if we call CommitTransaction() at\n> the end of the skipped transaction. So the patch sets the origin state\n> to the transaction that updates the pg_subscription system catalog to\n> reset the skip XID. I think we need a discussion of this part.\n>\n\nIIUC, for streaming transactions you are allowing stream file to be\ncreated and then remove it at stream_commit/stream_abort time, is that\nright? If so, in which cases are you imagining the files to be\ncreated, is it in the case of relation message\n(LOGICAL_REP_MSG_RELATION)? 
Assuming the previous two statements are\ncorrect, this will skip the relation message as well, as part of the\nremoval of stream files, which might lead to a problem because the\npublisher won't know that we have skipped the relation message and it\nwon't send it again. This can cause problems while processing the next\nmessages.\n\n> With 0002 and 0003 patches, we report the error information in server\n> logs and the stats view, respectively. 0002 patch adds errcontext for\n> messages that happened during applying the changes:\n>\n> ERROR: duplicate key value violates unique constraint \"hoge_pkey\"\n> DETAIL: Key (c)=(1) already exists.\n> CONTEXT: during apply of \"INSERT\" for relation \"public.hoge\" in\n> transaction with xid 736 committs 2021-06-27 12:12:30.053887+09\n>\n> 0003 patch adds pg_stat_logical_replication_error statistics view\n> discussed on another thread[1]. The apply worker sends the error\n> information to the stats collector if an error happens during applying\n> changes. We can check those errors as follow:\n>\n> postgres(1:25250)=# select * from pg_stat_logical_replication_error;\n> subname | relid | action | xid | last_failure\n> ----------+-------+--------+-----+-------------------------------\n> test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n> (1 row)\n>\n> I added only columns required for the skipping transaction feature to\n> the view for now.\n>\n\nIsn't it better to add an error message if possible?\n\n> Please note that those patches are meant to evaluate the concept we've\n> discussed so far. Those don't have the doc update yet.\n>\n\nI think your patch is along the lines of what we have discussed. It would\nbe good if you can update the docs and add a few tests.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 30 Jun 2021 16:35:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 4:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 28, 2021 at 10:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > 0003 patch adds pg_stat_logical_replication_error statistics view\n> > discussed on another thread[1]. The apply worker sends the error\n> > information to the stats collector if an error happens during applying\n> > changes. We can check those errors as follow:\n> >\n> > postgres(1:25250)=# select * from pg_stat_logical_replication_error;\n> > subname | relid | action | xid | last_failure\n> > ----------+-------+--------+-----+-------------------------------\n> > test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n> > (1 row)\n> >\n> > I added only columns required for the skipping transaction feature to\n> > the view for now.\n> >\n>\n> Isn't it better to add an error message if possible?\n>\n\nDon't we want to clear stats at drop subscription as well? We do drop\ndatabase stats in dropdb via pgstat_drop_database, so I think we need\nto clear subscription stats at the time of drop subscription.\n\nIn the 0003 patch, if I am reading it correctly then the patch is not\ndoing anything for tablesync worker. It is not clear to me at this\nstage what exactly we want to do about it? Do we want to just ignore\nerrors from tablesync worker and let the system behave as it is\nwithout this feature? If we want to do anything then I think the way\nto skip the initial table sync would be to behave like the user has\ngiven 'copy_data' option as false.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 09:26:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 30, 2021 at 8:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 28, 2021 at 10:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 6:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Jun 17, 2021 at 3:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > > Now, if this function is used by super\n> > > > > users then we can probably trust that they provide the XIDs that we\n> > > > > can trust to be skipped but OTOH making a restriction to allow these\n> > > > > functions to be used by superusers might restrict the usage of this\n> > > > > repair tool.\n> > > >\n> > > > If we specify the subscription id or name, maybe we can allow also the\n> > > > owner of subscription to do that operation?\n> > >\n> > > Ah, the owner of the subscription must be superuser.\n> >\n> > I've attached PoC patches.\n> >\n> > 0001 patch introduces the ability to skip transactions on the\n> > subscriber side. We can specify XID to the subscription by like ALTER\n> > SUBSCRIPTION test_sub SET SKIP TRANSACTION 100. The implementation\n> > seems straightforward except for setting origin state. After skipping\n> > the transaction we have to update the session origin state so that we\n> > can start streaming the transaction next to the one that we just\n> > skipped in case of the server crash or restarting the apply worker. We\n> > set origin state to the commit WAL record. However, since we skip all\n> > changes we don’t write any WAL even if we call CommitTransaction() at\n> > the end of the skipped transaction. So the patch sets the origin state\n> > to the transaction that updates the pg_subscription system catalog to\n> > reset the skip XID. 
I think we need a discussion of this part.\n> >\n>\n> IIUC, for streaming transactions you are allowing stream file to be\n> created and then remove it at stream_commit/stream_abort time, is that\n> right?\n\nRight.\n\n> If so, in which cases are you imagining the files to be\n> created, is it in the case of relation message\n> (LOGICAL_REP_MSG_RELATION)? Assuming the previous two statements are\n> correct, this will skip the relation message as well as part of the\n> removal of stream files which might lead to a problem because the\n> publisher won't know that we have skipped the relation message and it\n> won't send it again. This can cause problems while processing the next\n> messages.\n\nGood point. In the current patch, we skip all streamed changes at\nstream_commit/abort but it should apply changes while skipping only\ndata-modification changes as we do for non-stream changes.\n\n>\n> > With 0002 and 0003 patches, we report the error information in server\n> > logs and the stats view, respectively. 0002 patch adds errcontext for\n> > messages that happened during applying the changes:\n> >\n> > ERROR: duplicate key value violates unique constraint \"hoge_pkey\"\n> > DETAIL: Key (c)=(1) already exists.\n> > CONTEXT: during apply of \"INSERT\" for relation \"public.hoge\" in\n> > transaction with xid 736 committs 2021-06-27 12:12:30.053887+09\n> >\n> > 0003 patch adds pg_stat_logical_replication_error statistics view\n> > discussed on another thread[1]. The apply worker sends the error\n> > information to the stats collector if an error happens during applying\n> > changes. 
We can check those errors as follow:\n> >\n> > postgres(1:25250)=# select * from pg_stat_logical_replication_error;\n> > subname | relid | action | xid | last_failure\n> > ----------+-------+--------+-----+-------------------------------\n> > test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n> > (1 row)\n> >\n> > I added only columns required for the skipping transaction feature to\n> > the view for now.\n> >\n>\n> Isn't it better to add an error message if possible?\n>\n> > Please note that those patches are meant to evaluate the concept we've\n> > discussed so far. Those don't have the doc update yet.\n> >\n>\n> I think your patch is on the lines of what we have discussed. It would\n> be good if you can update docs and add few tests.\n\nOkay. I'll incorporate the above suggestions in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 1 Jul 2021 16:53:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 1:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 8:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > If so, in which cases are you imagining the files to be\n> > created, is it in the case of relation message\n> > (LOGICAL_REP_MSG_RELATION)? Assuming the previous two statements are\n> > correct, this will skip the relation message as well as part of the\n> > removal of stream files which might lead to a problem because the\n> > publisher won't know that we have skipped the relation message and it\n> > won't send it again. This can cause problems while processing the next\n> > messages.\n>\n> Good point. In the current patch, we skip all streamed changes at\n> stream_commit/abort but it should apply changes while skipping only\n> data-modification changes as we do for non-stream changes.\n>\n\nRight.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 16:50:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 30, 2021 at 4:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jun 28, 2021 at 10:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > 0003 patch adds pg_stat_logical_replication_error statistics view\n> > > discussed on another thread[1]. The apply worker sends the error\n> > > information to the stats collector if an error happens during applying\n> > > changes. We can check those errors as follow:\n> > >\n> > > postgres(1:25250)=# select * from pg_stat_logical_replication_error;\n> > > subname | relid | action | xid | last_failure\n> > > ----------+-------+--------+-----+-------------------------------\n> > > test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n> > > (1 row)\n> > >\n> > > I added only columns required for the skipping transaction feature to\n> > > the view for now.\n> > >\n> >\n> > Isn't it better to add an error message if possible?\n> >\n>\n> Don't we want to clear stats at drop subscription as well? We do drop\n> database stats in dropdb via pgstat_drop_database, so I think we need\n> to clear subscription stats at the time of drop subscription.\n\nYes, it needs to be cleared. In the 0003 patch, pgstat_vacuum_stat()\nsends the message to clear the stats. I think it's better to have\npgstat_vacuum_stat() do that job similar to dropping replication slot\nstatistics rather than relying on the single message send at DROP\nSUBSCRIPTION. I've considered doing both: sending the message at DROP\nSUBSCRIPTION and periodical checking by pgstat_vacuum_stat(), but\ndropping subscription not setting a replication slot is able to\nrollback. So we need to send it only at commit time. 
Given that we\ndon’t necessarily need the stats to be updated immediately, I think\nit’s reasonable to go with only a way of pgstat_vacuum_stat().\n\n> In the 0003 patch, if I am reading it correctly then the patch is not\n> doing anything for tablesync worker. It is not clear to me at this\n> stage what exactly we want to do about it? Do we want to just ignore\n> errors from tablesync worker and let the system behave as it is\n> without this feature? If we want to do anything then I think the way\n> to skip the initial table sync would be to behave like the user has\n> given 'copy_data' option as false.\n\nIt might be better to have also sync workers report errors, even if\nSKIP TRANSACTION feature doesn’t support anything for initial table\nsynchronization. From the user perspective, The initial table\nsynchronization is also the part of logical replication operations. If\nwe report only error information of applying logical changes, it could\nconfuse users.\n\nBut I’m not sure about the way to skip the initial table\nsynchronization. Once we set `copy_data` to false, all table\nsynchronizations are disabled. Some of them might have been able to\nsynchronize successfully. It might be useful if the user can disable\nthe table initialization for the particular tables.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 1 Jul 2021 22:00:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 1, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Don't we want to clear stats at drop subscription as well? We do drop\n> > database stats in dropdb via pgstat_drop_database, so I think we need\n> > to clear subscription stats at the time of drop subscription.\n>\n> Yes, it needs to be cleared. In the 0003 patch, pgstat_vacuum_stat()\n> sends the message to clear the stats. I think it's better to have\n> pgstat_vacuum_stat() do that job similar to dropping replication slot\n> statistics rather than relying on the single message send at DROP\n> SUBSCRIPTION. I've considered doing both: sending the message at DROP\n> SUBSCRIPTION and periodical checking by pgstat_vacuum_stat(), but\n> dropping subscription not setting a replication slot is able to\n> rollback. So we need to send it only at commit time. Given that we\n> don’t necessarily need the stats to be updated immediately, I think\n> it’s reasonable to go with only a way of pgstat_vacuum_stat().\n>\n\nOkay, that makes sense. Can we consider sending the multiple ids in\none message as we do for relations or functions in\npgstat_vacuum_stat()? That will reduce some message traffic. BTW, do\nwe have some way to avoid wrapping around the OID before we clean up\nvia pgstat_vacuum_stat()?\n\n\n> > In the 0003 patch, if I am reading it correctly then the patch is not\n> > doing anything for tablesync worker. It is not clear to me at this\n> > stage what exactly we want to do about it? Do we want to just ignore\n> > errors from tablesync worker and let the system behave as it is\n> > without this feature? 
If we want to do anything then I think the way\n> > to skip the initial table sync would be to behave like the user has\n> > given 'copy_data' option as false.\n>\n> It might be better to have also sync workers report errors, even if\n> SKIP TRANSACTION feature doesn’t support anything for initial table\n> synchronization. From the user perspective, The initial table\n> synchronization is also the part of logical replication operations. If\n> we report only error information of applying logical changes, it could\n> confuse users.\n>\n> But I’m not sure about the way to skip the initial table\n> synchronization. Once we set `copy_data` to false, all table\n> synchronizations are disabled. Some of them might have been able to\n> synchronize successfully. It might be useful if the user can disable\n> the table initialization for the particular tables.\n>\n\nTrue but I guess the user can wait for all the tablesyncs to either\nfinish or get an error corresponding to the table sync. After that, it\ncan use 'copy_data' as false. This is not a very good method but I\ndon't see any other option. I guess whatever is the case logging\nerrors from tablesyncs is anyway not a bad idea.\n\nInstead of using the syntax \"ALTER SUBSCRIPTION name SET SKIP\nTRANSACTION Iconst\", isn't it better to use it as a subscription\noption like Mark has done for his patch (disable_on_error)?\n\nI am slightly nervous about this way of allowing the user to skip the\nerrors because if it is not used carefully then it can easily lead to\ninconsistent data on the subscriber. I agree that as only superusers\nwill be allowed to use this option and we can document clearly the\nside-effects, the risk could be reduced but is that sufficient? 
It is\nnot that we don't have any other tool which allows users to make their\ndata inconsistent (one recent example is functions\n(heap_force_kill/heap_force_freeze) in pg_surgery module) if not used\ncarefully but it might be better to not expose such tools.\n\nOTOH, if we use the error infrastructure of this patch and allow users\nto just disable the subscription on error as was proposed by Mark then\nthat can't lead to any inconsistency.\n\nWhat do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Jul 2021 15:16:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 1, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Instead of using the syntax \"ALTER SUBSCRIPTION name SET SKIP\n> TRANSACTION Iconst\", isn't it better to use it as a subscription\n> option like Mark has done for his patch (disable_on_error)?\n>\n> I am slightly nervous about this way of allowing the user to skip the\n> errors because if it is not used carefully then it can easily lead to\n> inconsistent data on the subscriber. I agree that as only superusers\n> will be allowed to use this option and we can document clearly the\n> side-effects, the risk could be reduced but is that sufficient?\n>\n\nI see that users can create a similar effect by using\npg_replication_origin_advance() and it is mentioned in the docs that\ncareless use of this function can lead to inconsistently replicated\ndata. So, this new way doesn't seem to be any more dangerous than what\nwe already have.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 5 Jul 2021 15:54:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Hi,\nHave a few notes about pg_stat_logical_replication_error from the DBA point\nof view (which will use this view in the future).\n1. As I understand it, this view might contain many errors related to\ndifferent subscriptions. It is better to name\n\"pg_stat_logical_replication_errors\" using the plural form (like this done\nfor stat views for tables, indexes, functions). Also, I'd like to suggest\nthinking twice about the view name (and function used in view DDL) -\n\"pg_stat_logical_replication_error\" contains very common \"logical\nreplication\" words, but the view contains errors related to subscriptions\nonly. In the future there could be other kinds of errors related to logical\nreplication, but not related to subscriptions - what will you do?\n2. Add a field with database name or id - it helps to quickly understand to\nwhich database the subscription belongs.\n3. Add a counter field with total number of errors - it helps to calculate\nerrors rates and aggregations (sum), and don't lose information about\nerrors between view checks.\n4. Add text of last error (if it will not be too expensive).\n5. 
Rename the \"action\" field to \"command\", as I know this is right from\nterminology point of view.\n\nFinally, the view might seems like this:\n\npostgres(1:25250)=# select * from pg_stat_logical_replication_errors;\nsubname | datid | relid | command | xid | total | last_failure |\nlast_failure_text\n----------+--------+-------+---------+-----+-------+-------------------------------+---------------------------\nsub_1 | 12345 | 16384 | INSERT | 736 | 145 | 2021-06-27 12:12:45.142675+09\n| something goes wrong...\nsub_2 | 12346 | 16458 | UPDATE | 845 | 12 | 2021-06-27 12:16:01.458752+09 |\nhmm, something goes wrong\n\nRegards, Alexey\n\nOn Mon, Jul 5, 2021 at 2:59 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Thu, Jun 17, 2021 at 6:20 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 3:24 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > >\n> > > > Now, if this function is used by super\n> > > > users then we can probably trust that they provide the XIDs that we\n> > > > can trust to be skipped but OTOH making a restriction to allow these\n> > > > functions to be used by superusers might restrict the usage of this\n> > > > repair tool.\n> > >\n> > > If we specify the subscription id or name, maybe we can allow also the\n> > > owner of subscription to do that operation?\n> >\n> > Ah, the owner of the subscription must be superuser.\n>\n> I've attached PoC patches.\n>\n> 0001 patch introduces the ability to skip transactions on the\n> subscriber side. We can specify XID to the subscription by like ALTER\n> SUBSCRIPTION test_sub SET SKIP TRANSACTION 100. The implementation\n> seems straightforward except for setting origin state. After skipping\n> the transaction we have to update the session origin state so that we\n> can start streaming the transaction next to the one that we just\n> skipped in case of the server crash or restarting the apply worker. We\n> set origin state to the commit WAL record. 
However, since we skip all\n> changes we don’t write any WAL even if we call CommitTransaction() at\n> the end of the skipped transaction. So the patch sets the origin state\n> to the transaction that updates the pg_subscription system catalog to\n> reset the skip XID. I think we need a discussion of this part.\n>\n> With 0002 and 0003 patches, we report the error information in server\n> logs and the stats view, respectively. 0002 patch adds errcontext for\n> messages that happened during applying the changes:\n>\n> ERROR: duplicate key value violates unique constraint \"hoge_pkey\"\n> DETAIL: Key (c)=(1) already exists.\n> CONTEXT: during apply of \"INSERT\" for relation \"public.hoge\" in\n> transaction with xid 736 committs 2021-06-27 12:12:30.053887+09\n>\n> 0003 patch adds pg_stat_logical_replication_error statistics view\n> discussed on another thread[1]. The apply worker sends the error\n> information to the stats collector if an error happens during applying\n> changes. We can check those errors as follow:\n>\n> postgres(1:25250)=# select * from pg_stat_logical_replication_error;\n> subname | relid | action | xid | last_failure\n> ----------+-------+--------+-----+-------------------------------\n> test_sub | 16384 | INSERT | 736 | 2021-06-27 12:12:45.142675+09\n> (1 row)\n>\n> I added only columns required for the skipping transaction feature to\n> the view for now.\n>\n> Please note that those patches are meant to evaluate the concept we've\n> discussed so far. Those don't have the doc update yet.\n>\n> Regards,\n>\n> [1]\n> https://www.postgresql.org/message-id/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com\n>\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\n\n-- \nС уважением Алексей В. Лесовский",
"msg_date": "Mon, 5 Jul 2021 15:33:42 +0500",
"msg_from": "Alexey Lesovsky <lesovsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 7:33 PM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n>\n> Hi,\n> Have a few notes about pg_stat_logical_replication_error from the DBA point of view (which will use this view in the future).\n\nThank you for the comments!\n\n> 1. As I understand it, this view might contain many errors related to different subscriptions. It is better to name \"pg_stat_logical_replication_errors\" using the plural form (like this done for stat views for tables, indexes, functions).\n\nAgreed.\n\n> Also, I'd like to suggest thinking twice about the view name (and function used in view DDL) - \"pg_stat_logical_replication_error\" contains very common \"logical replication\" words, but the view contains errors related to subscriptions only. In the future there could be other kinds of errors related to logical replication, but not related to subscriptions - what will you do?\n\nIs pg_stat_subscription_errors or\npg_stat_logical_replication_apply_errors better?\n\n> 2. Add a field with database name or id - it helps to quickly understand to which database the subscription belongs.\n\nAgreed.\n\n> 3. Add a counter field with total number of errors - it helps to calculate errors rates and aggregations (sum), and don't lose information about errors between view checks.\n\nDo you mean to increment the error count if the error (command, xid,\nand relid) is the same as the previous one? or to have the total\nnumber of errors per subscription? And what can we infer from the\nerror rates and aggregations?\n\n> 4. Add text of last error (if it will not be too expensive).\n\nAgreed.\n\n> 5. Rename the \"action\" field to \"command\", as I know this is right from terminology point of view.\n\nOkay.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 6 Jul 2021 14:58:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 5, 2021 at 6:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 1, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Don't we want to clear stats at drop subscription as well? We do drop\n> > > database stats in dropdb via pgstat_drop_database, so I think we need\n> > > to clear subscription stats at the time of drop subscription.\n> >\n> > Yes, it needs to be cleared. In the 0003 patch, pgstat_vacuum_stat()\n> > sends the message to clear the stats. I think it's better to have\n> > pgstat_vacuum_stat() do that job similar to dropping replication slot\n> > statistics rather than relying on the single message send at DROP\n> > SUBSCRIPTION. I've considered doing both: sending the message at DROP\n> > SUBSCRIPTION and periodical checking by pgstat_vacuum_stat(), but\n> > dropping subscription not setting a replication slot is able to\n> > rollback. So we need to send it only at commit time. Given that we\n> > don’t necessarily need the stats to be updated immediately, I think\n> > it’s reasonable to go with only a way of pgstat_vacuum_stat().\n> >\n>\n> Okay, that makes sense. Can we consider sending the multiple ids in\n> one message as we do for relations or functions in\n> pgstat_vacuum_stat()? That will reduce some message traffic.\n\nYes. Since subscriptions are objects that are not frequently created\nand dropped I prioritized not to increase the message type. But if we\ndo that for subscriptions, is it better to do that for replication\nslots as well? 
It seems to me that the lifetime of subscriptions and\nreplication slots are similar.\n\n> BTW, do\n> we have some way to avoid wrapping around the OID before we clean up\n> via pgstat_vacuum_stat()?\n\nAs far as I know there is not.\n\n>\n>\n> > > In the 0003 patch, if I am reading it correctly then the patch is not\n> > > doing anything for tablesync worker. It is not clear to me at this\n> > > stage what exactly we want to do about it? Do we want to just ignore\n> > > errors from tablesync worker and let the system behave as it is\n> > > without this feature? If we want to do anything then I think the way\n> > > to skip the initial table sync would be to behave like the user has\n> > > given 'copy_data' option as false.\n> >\n> > It might be better to have also sync workers report errors, even if\n> > SKIP TRANSACTION feature doesn’t support anything for initial table\n> > synchronization. From the user perspective, The initial table\n> > synchronization is also the part of logical replication operations. If\n> > we report only error information of applying logical changes, it could\n> > confuse users.\n> >\n> > But I’m not sure about the way to skip the initial table\n> > synchronization. Once we set `copy_data` to false, all table\n> > synchronizations are disabled. Some of them might have been able to\n> > synchronize successfully. It might be useful if the user can disable\n> > the table initialization for the particular tables.\n> >\n>\n> True but I guess the user can wait for all the tablesyncs to either\n> finish or get an error corresponding to the table sync. After that, it\n> can use 'copy_data' as false. This is not a very good method but I\n> don't see any other option. 
I guess whatever is the case logging\n> errors from tablesyncs is anyway not a bad idea.\n>\n> Instead of using the syntax \"ALTER SUBSCRIPTION name SET SKIP\n> TRANSACTION Iconst\", isn't it better to use it as a subscription\n> option like Mark has done for his patch (disable_on_error)?\n\nAccording to the doc, ALTER SUBSCRIPTION ... SET is used to alter\nparameters originally set by CREATE SUBSCRIPTION. Therefore, we can\nspecify a subset of parameters that can be specified by CREATE\nSUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\nbe specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\ncannot be done. Are you concerned about adding a syntax to ALTER\nSUBSCRIPTION?\n\n>\n> I am slightly nervous about this way of allowing the user to skip the\n> errors because if it is not used carefully then it can easily lead to\n> inconsistent data on the subscriber. I agree that as only superusers\n> will be allowed to use this option and we can document clearly the\n> side-effects, the risk could be reduced but is that sufficient? It is\n> not that we don't have any other tool which allows users to make their\n> data inconsistent (one recent example is functions\n> (heap_force_kill/heap_force_freeze) in pg_surgery module) if not used\n> carefully but it might be better to not expose such tools.\n>\n> OTOH, if we use the error infrastructure of this patch and allow users\n> to just disable the subscription on error as was proposed by Mark then\n> that can't lead to any inconsistency.\n>\n> What do you think?\n\nAs you mentioned in another mail, what we can do with this feature is\nthe same as pg_replication_origin_advance(). Like there is a risk that\nthe user specifies a wrong LSN to pg_replication_origin_advance(),\nthere is a similar risk at this feature.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 6 Jul 2021 15:59:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 11:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 5, 2021 at 7:33 PM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> >\n> > Hi,\n> > Have a few notes about pg_stat_logical_replication_error from the DBA point of view (which will use this view in the future).\n>\n> Thank you for the comments!\n>\n> > 1. As I understand it, this view might contain many errors related to different subscriptions. It is better to name \"pg_stat_logical_replication_errors\" using the plural form (like this done for stat views for tables, indexes, functions).\n>\n> Agreed.\n>\n> > Also, I'd like to suggest thinking twice about the view name (and function used in view DDL) - \"pg_stat_logical_replication_error\" contains very common \"logical replication\" words, but the view contains errors related to subscriptions only. In the future there could be other kinds of errors related to logical replication, but not related to subscriptions - what will you do?\n>\n> Is pg_stat_subscription_errors or\n> pg_stat_logical_replication_apply_errors better?\n>\n\nFew more to consider: pg_stat_apply_failures,\npg_stat_subscription_failures, pg_stat_apply_conflicts,\npg_stat_subscription_conflicts.\n\n> > 2. Add a field with database name or id - it helps to quickly understand to which database the subscription belongs.\n>\n> Agreed.\n>\n> > 3. Add a counter field with total number of errors - it helps to calculate errors rates and aggregations (sum), and don't lose information about errors between view checks.\n>\n> Do you mean to increment the error count if the error (command, xid,\n> and relid) is the same as the previous one? 
or to have the total\n> number of errors per subscription?\n>\n\nI would prefer the total number of errors per subscription.\n\n> And what can we infer from the\n> error rates and aggregations?\n>\n\nSay, if we add a column like failure_type/conflict_type as well and\none would be interested in knowing how many conflicts are due to\nprimary key conflicts vs. update/delete conflicts.\n\nYou might want to consider keeping this view patch before the skip_xid\npatch in your patch series as this will be base for the skip_xid\npatch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 14:29:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 12:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 5, 2021 at 6:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 1, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Don't we want to clear stats at drop subscription as well? We do drop\n> > > > database stats in dropdb via pgstat_drop_database, so I think we need\n> > > > to clear subscription stats at the time of drop subscription.\n> > >\n> > > Yes, it needs to be cleared. In the 0003 patch, pgstat_vacuum_stat()\n> > > sends the message to clear the stats. I think it's better to have\n> > > pgstat_vacuum_stat() do that job similar to dropping replication slot\n> > > statistics rather than relying on the single message send at DROP\n> > > SUBSCRIPTION. I've considered doing both: sending the message at DROP\n> > > SUBSCRIPTION and periodical checking by pgstat_vacuum_stat(), but\n> > > dropping subscription not setting a replication slot is able to\n> > > rollback. So we need to send it only at commit time. Given that we\n> > > don’t necessarily need the stats to be updated immediately, I think\n> > > it’s reasonable to go with only a way of pgstat_vacuum_stat().\n> > >\n> >\n> > Okay, that makes sense. Can we consider sending the multiple ids in\n> > one message as we do for relations or functions in\n> > pgstat_vacuum_stat()? That will reduce some message traffic.\n>\n> Yes. Since subscriptions are objects that are not frequently created\n> and dropped I prioritized not to increase the message type. But if we\n> do that for subscriptions, is it better to do that for replication\n> slots as well? It seems to me that the lifetime of subscriptions and\n> replication slots are similar.\n>\n\nYeah, I think it makes sense to do for both, we can work on slots\npatch separately. 
I don't see a reason why we shouldn't send a single\nmessage for multiple clear/drop entries.\n\n> >\n> > True but I guess the user can wait for all the tablesyncs to either\n> > finish or get an error corresponding to the table sync. After that, it\n> > can use 'copy_data' as false. This is not a very good method but I\n> > don't see any other option. I guess whatever is the case logging\n> > errors from tablesyncs is anyway not a bad idea.\n> >\n> > Instead of using the syntax \"ALTER SUBSCRIPTION name SET SKIP\n> > TRANSACTION Iconst\", isn't it better to use it as a subscription\n> > option like Mark has done for his patch (disable_on_error)?\n>\n> According to the doc, ALTER SUBSCRIPTION ... SET is used to alter\n> parameters originally set by CREATE SUBSCRIPTION. Therefore, we can\n> specify a subset of parameters that can be specified by CREATE\n> SUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\n> be specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\n> cannot be done. Are you concerned about adding a syntax to ALTER\n> SUBSCRIPTION?\n>\n\nBoth for additional syntax and consistency with disable_on_error.\nIsn't it just a current implementation that Alter only allows to\nchange parameters supported by Create? Is there a reason why we can't\nallow Alter to set/change some parameters not supported by Create?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 15:03:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 10:58 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> > Also, I'd like to suggest thinking twice about the view name (and\n> function used in view DDL) - \"pg_stat_logical_replication_error\" contains\n> very common \"logical replication\" words, but the view contains errors\n> related to subscriptions only. In the future there could be other kinds of\n> errors related to logical replication, but not related to subscriptions -\n> what will you do?\n>\n\n> Is pg_stat_subscription_errors or\n> pg_stat_logical_replication_apply_errors better?\n>\n\nIt seems to me 'pg_stat_subscription_conflicts' proposed by Amit Kapila is\nthe most suitable, because it directly says about conflicts occurring on\nthe subscription side. The name 'pg_stat_subscription_errors' is also good,\nespecially in case of further extension if some kind of similar errors will\nbe tracked.\n\n\n> > 3. Add a counter field with total number of errors - it helps to\n> calculate errors rates and aggregations (sum), and don't lose information\n> about errors between view checks.\n>\n> Do you mean to increment the error count if the error (command, xid,\n> and relid) is the same as the previous one? or to have the total\n> number of errors per subscription? And what can we infer from the\n> error rates and aggregations?\n>\n\nTo be honest, I hurried up when I wrote the first email, and read only\nabout stats view. Later, I read the starting email about the patch and\nrethought this note.\n\nAs I understand, when the conflict occurs, replication stops (until\nconflict is resolved), an error appears in the stats view. Now, no new\nerrors can occur in the blocked subscription. Hence, there are impossible\nsituations when many errors (like spikes) have occurred and a user didn't\nsee that. If I am correct in my assumption, there is no need for counters.\nThey are necessary only when errors might occur too frequently (like\npg_stat_database.deadlocks). 
But if this is possible, I would prefer the\ntotal number of errors per subscription, as also proposed by Amit.\n\nUnder \"error rates and aggregations\" I also mean in the context of when a\nhigh number of errors occurred in a short period of time. If a user can\nread the \"total errors\" counter and keep this metric in his monitoring\nsystem, he will be able to calculate rates over time using functions in the\nmonitoring system. This is extremely useful.\n\nI also would like to clarify, when conflict is resolved - the error record\nis cleared or kept in the view? If it is cleared, the error counter is\nrequired (because we don't want to lose all history of errors). If it is\nkept - the flag telling about the error is resolved is needed (or set xid\nto NULL). I mean when the user is watching the view, he should be able to\nidentify if the error has already been resolved or not.\n\n--\nRegards, Alexey\n\n\nOn Tue, Jul 6, 2021 at 10:58 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Jul 5, 2021 at 7:33 PM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> >\n> > Hi,\n> > Have a few notes about pg_stat_logical_replication_error from the DBA\n> point of view (which will use this view in the future).\n>\n> Thank you for the comments!\n>\n> > 1. As I understand it, this view might contain many errors related to\n> different subscriptions. It is better to name\n> \"pg_stat_logical_replication_errors\" using the plural form (like this done\n> for stat views for tables, indexes, functions).\n>\n> Agreed.\n>\n> > Also, I'd like to suggest thinking twice about the view name (and\n> function used in view DDL) - \"pg_stat_logical_replication_error\" contains\n> very common \"logical replication\" words, but the view contains errors\n> related to subscriptions only. 
In the future there could be other kinds of\n> errors related to logical replication, but not related to subscriptions -\n> what will you do?\n>\n> Is pg_stat_subscription_errors or\n> pg_stat_logical_replication_apply_errors better?\n>\n> > 2. Add a field with database name or id - it helps to quickly understand\n> to which database the subscription belongs.\n>\n> Agreed.\n>\n> > 3. Add a counter field with total number of errors - it helps to\n> calculate errors rates and aggregations (sum), and don't lose information\n> about errors between view checks.\n>\n> Do you mean to increment the error count if the error (command, xid,\n> and relid) is the same as the previous one? or to have the total\n> number of errors per subscription? And what can we infer from the\n> error rates and aggregations?\n>\n> > 4. Add text of last error (if it will not be too expensive).\n>\n> Agreed.\n>\n> > 5. Rename the \"action\" field to \"command\", as I know this is right from\n> terminology point of view.\n>\n> Okay.\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\n\n-- \nС уважением Алексей В. Лесовский",
"msg_date": "Tue, 6 Jul 2021 15:13:32 +0500",
"msg_from": "Alexey Lesovsky <lesovsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 12:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 5, 2021 at 6:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 1, 2021 at 6:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jul 1, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Don't we want to clear stats at drop subscription as well? We do drop\n> > > > > database stats in dropdb via pgstat_drop_database, so I think we need\n> > > > > to clear subscription stats at the time of drop subscription.\n> > > >\n> > > > Yes, it needs to be cleared. In the 0003 patch, pgstat_vacuum_stat()\n> > > > sends the message to clear the stats. I think it's better to have\n> > > > pgstat_vacuum_stat() do that job similar to dropping replication slot\n> > > > statistics rather than relying on the single message send at DROP\n> > > > SUBSCRIPTION. I've considered doing both: sending the message at DROP\n> > > > SUBSCRIPTION and periodical checking by pgstat_vacuum_stat(), but\n> > > > dropping subscription not setting a replication slot is able to\n> > > > rollback. So we need to send it only at commit time. Given that we\n> > > > don’t necessarily need the stats to be updated immediately, I think\n> > > > it’s reasonable to go with only a way of pgstat_vacuum_stat().\n> > > >\n> > >\n> > > Okay, that makes sense. Can we consider sending the multiple ids in\n> > > one message as we do for relations or functions in\n> > > pgstat_vacuum_stat()? That will reduce some message traffic.\n> >\n> > Yes. Since subscriptions are objects that are not frequently created\n> > and dropped I prioritized not to increase the message type. But if we\n> > do that for subscriptions, is it better to do that for replication\n> > slots as well? 
It seems to me that the lifetime of subscriptions and\n> > replication slots are similar.\n> >\n>\n> Yeah, I think it makes sense to do for both, we can work on slots\n> patch separately. I don't see a reason why we shouldn't send a single\n> message for multiple clear/drop entries.\n\n+1\n\n>\n> > >\n> > > True but I guess the user can wait for all the tablesyncs to either\n> > > finish or get an error corresponding to the table sync. After that, it\n> > > can use 'copy_data' as false. This is not a very good method but I\n> > > don't see any other option. I guess whatever is the case logging\n> > > errors from tablesyncs is anyway not a bad idea.\n> > >\n> > > Instead of using the syntax \"ALTER SUBSCRIPTION name SET SKIP\n> > > TRANSACTION Iconst\", isn't it better to use it as a subscription\n> > > option like Mark has done for his patch (disable_on_error)?\n> >\n> > According to the doc, ALTER SUBSCRIPTION ... SET is used to alter\n> > parameters originally set by CREATE SUBSCRIPTION. Therefore, we can\n> > specify a subset of parameters that can be specified by CREATE\n> > SUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\n> > be specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\n> > cannot be done. Are you concerned about adding a syntax to ALTER\n> > SUBSCRIPTION?\n> >\n>\n> Both for additional syntax and consistency with disable_on_error.\n> Isn't it just a current implementation that Alter only allows to\n> change parameters supported by Create? Is there a reason why we can't\n> allow Alter to set/change some parameters not supported by Create?\n\nI think there is no reason for that but looking at ALTER TABLE I\nthought there is such a policy. I thought the skipping transaction\nfeature is somewhat different from disable_on_error feature. The\nformer seems a feature to deal with a problem on the spot whereas the\nlatter seems a setting of a subscription. 
Anyway, if we use the\nsubscription option, we can reset the XID by setting 0? Or do we need\nALTER SUBSCRIPTION RESET?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 7 Jul 2021 15:17:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jul 7, 2021 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > According to the doc, ALTER SUBSCRIPTION ... SET is used to alter\n> > > parameters originally set by CREATE SUBSCRIPTION. Therefore, we can\n> > > specify a subset of parameters that can be specified by CREATE\n> > > SUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\n> > > be specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\n> > > cannot be done. Are you concerned about adding a syntax to ALTER\n> > > SUBSCRIPTION?\n> > >\n> >\n> > Both for additional syntax and consistency with disable_on_error.\n> > Isn't it just a current implementation that Alter only allows to\n> > change parameters supported by Create? Is there a reason why we can't\n> > allow Alter to set/change some parameters not supported by Create?\n>\n> I think there is not reason for that but looking at ALTER TABLE I\n> thought there is such a policy.\n>\n\nIf we are looking for precedent then I think we allow to set\nconfiguration parameters via Alter Database but not via Create\nDatabase. Does that address your concern?\n\n> I thought the skipping transaction\n> feature is somewhat different from disable_on_error feature. The\n> former seems a feature to deal with a problem on the spot whereas the\n> latter seems a setting of a subscription. Anyway, if we use the\n> subscription option, we can reset the XID by setting 0? Or do we need\n> ALTER SUBSCRIPTION RESET?\n\nThe other commands like Alter Table, Alter Database, etc, which\nprovides a way to Set some parameter/option, have a Reset variant. I\nthink it would be good to have it for Alter Subscription as well but\nwe might want to allow other parameters to be reset by that as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 14:58:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 8, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 7, 2021 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 6, 2021 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > According to the doc, ALTER SUBSCRIPTION ... SET is used to alter\n> > > > parameters originally set by CREATE SUBSCRIPTION. Therefore, we can\n> > > > specify a subset of parameters that can be specified by CREATE\n> > > > SUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\n> > > > be specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\n> > > > cannot be done. Are you concerned about adding a syntax to ALTER\n> > > > SUBSCRIPTION?\n> > > >\n> > >\n> > > Both for additional syntax and consistency with disable_on_error.\n> > > Isn't it just a current implementation that Alter only allows to\n> > > change parameters supported by Create? Is there a reason why we can't\n> > > allow Alter to set/change some parameters not supported by Create?\n> >\n> > I think there is not reason for that but looking at ALTER TABLE I\n> > thought there is such a policy.\n> >\n>\n> If we are looking for precedent then I think we allow to set\n> configuration parameters via Alter Database but not via Create\n> Database. Does that address your concern?\n\nThank you for the info! But it seems like CREATE DATABASE doesn't\nsupport SET in the first place. Also interestingly, ALTER SUBSCRIPTION\nsupports both ENABLE/DISABLE and SET (enabled = on/off). I’m not sure\nfrom the point of view of consistency with other CREATE, ALTER\ncommands, and disable_on_error but it might be better to avoid adding\nadditional syntax.\n\n>\n> > I thought the skipping transaction\n> > feature is somewhat different from disable_on_error feature. The\n> > former seems a feature to deal with a problem on the spot whereas the\n> > latter seems a setting of a subscription. 
Anyway, if we use the\n> > subscription option, we can reset the XID by setting 0? Or do we need\n> > ALTER SUBSCRIPTION RESET?\n>\n> The other commands like Alter Table, Alter Database, etc, which\n> provides a way to Set some parameter/option, have a Reset variant. I\n> think it would be good to have it for Alter Subscription as well but\n> we might want to allow other parameters to be reset by that as well.\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 9 Jul 2021 09:27:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 6, 2021 at 7:13 PM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 10:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> > Also, I'd like to suggest thinking twice about the view name (and function used in view DDL) - \"pg_stat_logical_replication_error\" contains very common \"logical replication\" words, but the view contains errors related to subscriptions only. In the future there could be other kinds of errors related to logical replication, but not related to subscriptions - what will you do?\n>>\n>>\n>> Is pg_stat_subscription_errors or\n>> pg_stat_logical_replication_apply_errors better?\n>\n>\n> It seems to me 'pg_stat_subscription_conflicts' proposed by Amit Kapila is the most suitable, because it directly says about conflicts occurring on the subscription side. The name 'pg_stat_subscription_errors' is also good, especially in case of further extension if some kind of similar errors will be tracked.\n\nI personally prefer pg_stat_subscription_errors since\npg_stat_subscription_conflicts could be used for conflict resolution\nfeatures in the future. This stats view I'm proposing is meant to\nfocus on errors that happened during applying logical changes. So\nusing the term 'errors' seems to make sense to me.\n\n>\n>>\n>> > 3. Add a counter field with total number of errors - it helps to calculate errors rates and aggregations (sum), and don't lose information about errors between view checks.\n>>\n>> Do you mean to increment the error count if the error (command, xid,\n>> and relid) is the same as the previous one? or to have the total\n>> number of errors per subscription? And what can we infer from the\n>> error rates and aggregations?\n>\n>\n> To be honest, I hurried up when I wrote the first email, and read only about stats view. 
Later, I read the starting email about the patch and rethought this note.\n>\n> As I understand, when the conflict occurs, replication stops (until conflict is resolved), an error appears in the stats view. Now, no new errors can occur in the blocked subscription. Hence, there are impossible situations when many errors (like spikes) have occurred and a user didn't see that. If I am correct in my assumption, there is no need for counters. They are necessary only when errors might occur too frequently (like pg_stat_database.deadlocks). But if this is possible, I would prefer the total number of errors per subscription, as also proposed by Amit.\n\nYeah, the total number of errors seems better.\n\n>\n> Under \"error rates and aggregations\" I also mean in the context of when a high number of errors occured in a short period of time. If a user can read the \"total errors\" counter and keep this metric in his monitoring system, he will be able to calculate rates over time using functions in the monitoring system. This is extremely useful.\n\nThanks for your explanation. Agreed. But the rate depends on\nwal_retrieve_retry_interval so is not likely to be high in practice.\n\n> I also would like to clarify, when conflict is resolved - the error record is cleared or kept in the view? If it is cleared, the error counter is required (because we don't want to lose all history of errors). If it is kept - the flag telling about the error is resolved is needed (or set xid to NULL). I mean when the user is watching the view, he should be able to identify if the error has already been resolved or not.\n\nWith the current patch, once the conflict is resolved by skipping the\ntransaction in question, its entry on the stats view is cleared. 
As\nyou suggested, if we have the total error counts in that view, it\nwould be good to keep the count and clear other fields such as xid,\nlast_failure, and command etc.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 9 Jul 2021 09:42:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jul 9, 2021 at 5:43 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Tue, Jul 6, 2021 at 7:13 PM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> >\n> > On Tue, Jul 6, 2021 at 10:58 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> > Also, I'd like to suggest thinking twice about the view name (and\n> function used in view DDL) - \"pg_stat_logical_replication_error\" contains\n> very common \"logical replication\" words, but the view contains errors\n> related to subscriptions only. In the future there could be other kinds of\n> errors related to logical replication, but not related to subscriptions -\n> what will you do?\n> >>\n> >>\n> >> Is pg_stat_subscription_errors or\n> >> pg_stat_logical_replication_apply_errors better?\n> >\n> >\n> > It seems to me 'pg_stat_subscription_conflicts' proposed by Amit Kapila\n> is the most suitable, because it directly says about conflicts occurring on\n> the subscription side. The name 'pg_stat_subscription_errors' is also good,\n> especially in case of further extension if some kind of similar errors will\n> be tracked.\n>\n> I personally prefer pg_stat_subscription_errors since\n> pg_stat_subscription_conflicts could be used for conflict resolution\n> features in the future. This stats view I'm proposing is meant to\n> focus on errors that happened during applying logical changes. So\n> using the term 'errors' seems to make sense to me.\n>\n\nAgreed\n\n\n> >\n> >>\n> >> > 3. Add a counter field with total number of errors - it helps to\n> calculate errors rates and aggregations (sum), and don't lose information\n> about errors between view checks.\n> >>\n> >> Do you mean to increment the error count if the error (command, xid,\n> >> and relid) is the same as the previous one? or to have the total\n> >> number of errors per subscription? 
And what can we infer from the\n> >> error rates and aggregations?\n> >\n> >\n> > To be honest, I hurried up when I wrote the first email, and read only\n> about stats view. Later, I read the starting email about the patch and\n> rethought this note.\n> >\n> > As I understand, when the conflict occurs, replication stops (until\n> conflict is resolved), an error appears in the stats view. Now, no new\n> errors can occur in the blocked subscription. Hence, there are impossible\n> situations when many errors (like spikes) have occurred and a user didn't\n> see that. If I am correct in my assumption, there is no need for counters.\n> They are necessary only when errors might occur too frequently (like\n> pg_stat_database.deadlocks). But if this is possible, I would prefer the\n> total number of errors per subscription, as also proposed by Amit.\n>\n> Yeah, the total number of errors seems better.\n>\n\nAgreed\n\n\n> >\n> > Under \"error rates and aggregations\" I also mean in the context of when\n> a high number of errors occured in a short period of time. If a user can\n> read the \"total errors\" counter and keep this metric in his monitoring\n> system, he will be able to calculate rates over time using functions in the\n> monitoring system. This is extremely useful.\n>\n> Thanks for your explanation. Agreed. But the rate depends on\n> wal_retrieve_retry_interval so is not likely to be high in practice.\n>\n\nAgreed\n\n\n> > I also would like to clarify, when conflict is resolved - the error\n> record is cleared or kept in the view? If it is cleared, the error counter\n> is required (because we don't want to lose all history of errors). If it is\n> kept - the flag telling about the error is resolved is needed (or set xid\n> to NULL). 
I mean when the user is watching the view, he should be able to\n> identify if the error has already been resolved or not.\n>\n> With the current patch, once the conflict is resolved by skipping the\n> transaction in question, its entry on the stats view is cleared. As\n> you suggested, if we have the total error counts in that view, it\n> would be good to keep the count and clear other fields such as xid,\n> last_failure, and command etc.\n>\n\nOk, looks nice. But I am curious how this will work in the case when there\nare two (or more) errors in the same subscription, but different relations?\nAfter resolution all these records are kept or they will be merged into a\nsingle record (because subscription was the same for all errors)?\n\n\n> Regards,\n>\n> --\n> Masahiko Sawada\n> EDB: https://www.enterprisedb.com/\n>\n\n\n-- \nRegards, Alexey Lesovsky",
"msg_date": "Fri, 9 Jul 2021 08:32:19 +0500",
"msg_from": "Alexey Lesovsky <lesovsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
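The "total errors" counter discussed above is useful precisely because a monitoring system can turn successive samples of a monotonic counter into a rate. A minimal sketch of that calculation, assuming the counter is scraped periodically into a hypothetical `subscription_error_samples` table (neither the table nor the `total_errors` column is part of the patch):

```sql
-- Per-interval error rate from successive samples of the proposed
-- per-subscription total-errors counter.  subscription_error_samples
-- is a hypothetical table filled by an external scraper.
SELECT subid,
       (total_errors - lag(total_errors) OVER w)
         / NULLIF(EXTRACT(EPOCH FROM sampled_at - lag(sampled_at) OVER w), 0)
           AS errors_per_second
FROM subscription_error_samples
WINDOW w AS (PARTITION BY subid ORDER BY sampled_at);
```

As noted in the thread, the rate is bounded in practice by wal_retrieve_retry_interval, so this mostly serves aggregation and alerting rather than spike detection.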
{
"msg_contents": "On Fri, Jul 9, 2021 at 5:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 8, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 7, 2021 at 11:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 6, 2021 at 6:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > According to the doc, ALTER SUBSCRIPTION ... SET is used to alter\n> > > > > parameters originally set by CREATE SUBSCRIPTION. Therefore, we can\n> > > > > specify a subset of parameters that can be specified by CREATE\n> > > > > SUBSCRIPTION. It makes sense to me for 'disable_on_error' since it can\n> > > > > be specified by CREATE SUBSCRIPTION. Whereas SKIP TRANSACTION stuff\n> > > > > cannot be done. Are you concerned about adding a syntax to ALTER\n> > > > > SUBSCRIPTION?\n> > > > >\n> > > >\n> > > > Both for additional syntax and consistency with disable_on_error.\n> > > > Isn't it just a current implementation that Alter only allows to\n> > > > change parameters supported by Create? Is there a reason why we can't\n> > > > allow Alter to set/change some parameters not supported by Create?\n> > >\n> > > I think there is not reason for that but looking at ALTER TABLE I\n> > > thought there is such a policy.\n> > >\n> >\n> > If we are looking for precedent then I think we allow to set\n> > configuration parameters via Alter Database but not via Create\n> > Database. Does that address your concern?\n>\n> Thank you for the info! But it seems like CREATE DATABASE doesn't\n> support SET in the first place. 
Also interestingly, ALTER SUBSCRIPTION\n> support both ENABLE/DISABLE and SET (enabled = on/off).\n>\n\nI think that is redundant but not sure if there is any reason behind doing so.\n\n> I’m not sure\n> from the point of view of consistency with other CREATE, ALTER\n> commands, and disable_on_error but it might be better to avoid adding\n> additional syntax.\n>\n\nIf we can avoid introducing new syntax that in itself is a good reason\nto introduce it as an option.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:01:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
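For concreteness, the two shapes being weighed in this subthread would look roughly as follows; both `skip_xid` and the dedicated SKIP grammar are placeholders from the discussion, not settled syntax:

```sql
-- Option-based form: reuses the existing ALTER SUBSCRIPTION ... SET
-- machinery, no new grammar needed (parameter name is hypothetical).
ALTER SUBSCRIPTION mysub SET (skip_xid = 1234);

-- Dedicated-syntax form: would require new productions in the
-- ALTER SUBSCRIPTION grammar (again, hypothetical).
ALTER SUBSCRIPTION mysub SKIP TRANSACTION 1234;
```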
{
"msg_contents": "On Fri, Jul 9, 2021 at 9:02 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n>\n> On Fri, Jul 9, 2021 at 5:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> > I also would like to clarify, when conflict is resolved - the error record is cleared or kept in the view? If it is cleared, the error counter is required (because we don't want to lose all history of errors). If it is kept - the flag telling about the error is resolved is needed (or set xid to NULL). I mean when the user is watching the view, he should be able to identify if the error has already been resolved or not.\n>>\n>> With the current patch, once the conflict is resolved by skipping the\n>> transaction in question, its entry on the stats view is cleared. As\n>> you suggested, if we have the total error counts in that view, it\n>> would be good to keep the count and clear other fields such as xid,\n>> last_failure, and command etc.\n>\n>\n> Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n>\n\nWe can't proceed unless the first error is resolved, so there\nshouldn't be multiple unresolved errors. However, there is an\nexception to it which is during initial table sync and I think the\nview should have separate rows for each table sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:06:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> >\n> > Ok, looks nice. But I am curious how this will work in the case when\n> there are two (or more) errors in the same subscription, but different\n> relations?\n> >\n>\n> We can't proceed unless the first error is resolved, so there\n> shouldn't be multiple unresolved errors.\n>\n\nOk. I thought multiple errors are possible when many tables are initialized\nusing parallel workers (with max_sync_workers_per_subscription > 1).\n\n-- \nRegards, Alexey",
"msg_date": "Mon, 12 Jul 2021 09:07:18 +0500",
"msg_from": "Alexey Lesovsky <lesovsky@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> >\n>> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n>> >\n>>\n>> We can't proceed unless the first error is resolved, so there\n>> shouldn't be multiple unresolved errors.\n>\n>\n> Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n>\n\nYeah, that is possible but that covers under the second condition\nmentioned by me and in such cases I think we should have separate rows\nfor each tablesync. Is that right, Sawada-san or do you have something\nelse in mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 09:45:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> >\n> >> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n> >> >\n> >>\n> >> We can't proceed unless the first error is resolved, so there\n> >> shouldn't be multiple unresolved errors.\n> >\n> >\n> > Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n> >\n>\n> Yeah, that is possible but that covers under the second condition\n> mentioned by me and in such cases I think we should have separate rows\n> for each tablesync. Is that right, Sawada-san or do you have something\n> else in mind?\n\nYeah, I agree to have separate rows for each table sync. The table\nshould not be processed by both the table sync worker and the apply\nworker at a time so the pair of subscription OID and relation OID will\nbe unique. I think that we have a boolean column in the view,\nindicating whether the error entry is reported by the table sync\nworker or the apply worker, or maybe we also can have the action\ncolumn show \"TABLE SYNC\" if the error is reported by the table sync\nworker.\n\nWhen it comes to removing the subscription errors in\npgstat_vacuum_stat(), I think we need to seq scan on the hash table\nand send the messages to purge the subscription error entries.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 12 Jul 2021 14:42:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 12, 2021 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >>\n> > >> >\n> > >> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n> > >> >\n> > >>\n> > >> We can't proceed unless the first error is resolved, so there\n> > >> shouldn't be multiple unresolved errors.\n> > >\n> > >\n> > > Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n> > >\n> >\n> > Yeah, that is possible but that covers under the second condition\n> > mentioned by me and in such cases I think we should have separate rows\n> > for each tablesync. Is that right, Sawada-san or do you have something\n> > else in mind?\n>\n> Yeah, I agree to have separate rows for each table sync. The table\n> should not be processed by both the table sync worker and the apply\n> worker at a time so the pair of subscription OID and relation OID will\n> be unique. I think that we have a boolean column in the view,\n> indicating whether the error entry is reported by the table sync\n> worker or the apply worker, or maybe we also can have the action\n> column show \"TABLE SYNC\" if the error is reported by the table sync\n> worker.\n>\n\nOr similar to backend_type (text) in pg_stat_activity, we can have\nsomething like error_source (text) which will display apply worker or\ntablesync worker? I think if we have this column then even if there is\na chance that both apply and sync worker operates on the same\nrelation, we can identify it via this column.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 12 Jul 2021 17:21:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
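On the proposed view, the `error_source` column suggested above would let a monitoring query separate the two worker types, much like `backend_type` does in `pg_stat_activity`. A sketch against the working (uncommitted) view and column names from this thread:

```sql
-- Distinguish tablesync failures from apply failures per relation.
-- The view name and columns are the names under discussion here,
-- not committed ones.
SELECT subname,
       relid::regclass AS relation,
       error_source,        -- e.g. 'apply worker' or 'tablesync worker'
       error_count,
       last_failure
FROM pg_stat_subscription_errors
ORDER BY subname, relation;
```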
{
"msg_contents": "On Mon, Jul 12, 2021 at 8:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >>\n> > > >> >\n> > > >> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n> > > >> >\n> > > >>\n> > > >> We can't proceed unless the first error is resolved, so there\n> > > >> shouldn't be multiple unresolved errors.\n> > > >\n> > > >\n> > > > Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n> > > >\n> > >\n> > > Yeah, that is possible but that covers under the second condition\n> > > mentioned by me and in such cases I think we should have separate rows\n> > > for each tablesync. Is that right, Sawada-san or do you have something\n> > > else in mind?\n> >\n> > Yeah, I agree to have separate rows for each table sync. The table\n> > should not be processed by both the table sync worker and the apply\n> > worker at a time so the pair of subscription OID and relation OID will\n> > be unique. I think that we have a boolean column in the view,\n> > indicating whether the error entry is reported by the table sync\n> > worker or the apply worker, or maybe we also can have the action\n> > column show \"TABLE SYNC\" if the error is reported by the table sync\n> > worker.\n> >\n>\n> Or similar to backend_type (text) in pg_stat_activity, we can have\n> something like error_source (text) which will display apply worker or\n> tablesync worker? 
I think if we have this column then even if there is\n> a chance that both apply and sync worker operates on the same\n> relation, we can identify it via this column.\n\nSounds good. I'll incorporate this in the next version patch that I'm\nplanning to submit this week.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 14 Jul 2021 17:14:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jul 14, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 12, 2021 at 8:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 12, 2021 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >>\n> > > > >> >\n> > > > >> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n> > > > >> >\n> > > > >>\n> > > > >> We can't proceed unless the first error is resolved, so there\n> > > > >> shouldn't be multiple unresolved errors.\n> > > > >\n> > > > >\n> > > > > Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n> > > > >\n> > > >\n> > > > Yeah, that is possible but that covers under the second condition\n> > > > mentioned by me and in such cases I think we should have separate rows\n> > > > for each tablesync. Is that right, Sawada-san or do you have something\n> > > > else in mind?\n> > >\n> > > Yeah, I agree to have separate rows for each table sync. The table\n> > > should not be processed by both the table sync worker and the apply\n> > > worker at a time so the pair of subscription OID and relation OID will\n> > > be unique. 
I think that we have a boolean column in the view,\n> > > indicating whether the error entry is reported by the table sync\n> > > worker or the apply worker, or maybe we also can have the action\n> > > column show \"TABLE SYNC\" if the error is reported by the table sync\n> > > worker.\n> > >\n> >\n> > Or similar to backend_type (text) in pg_stat_activity, we can have\n> > something like error_source (text) which will display apply worker or\n> > tablesync worker? I think if we have this column then even if there is\n> > a chance that both apply and sync worker operates on the same\n> > relation, we can identify it via this column.\n>\n> Sounds good. I'll incorporate this in the next version patch that I'm\n> planning to submit this week.\n\nSorry, I could not make it this week. I'll submit them early next week.\nWhile updating the patch I thought we need to have more design\ndiscussion on two points of clearing error details after the error is\nresolved:\n\n1. How to clear apply worker errors. IIUC we've discussed that once\nthe apply worker skipped the transaction we leave the error entry\nitself but clear its fields except for some fields such as failure\ncounts. But given that the stats messages could be lost, how can we\nensure to clear those error details? For table sync workers’ error, we\ncan have autovacuum workers periodically check entries of\npg_subscription_rel and clear the error entry if the table sync worker\ncompletes table sync (i.e., checking if srsubstate = ‘r’). But there\nis no such information for the apply workers and subscriptions. In\naddition to sending the message clearing the error details just after\nskipping the transaction, I thought that we can have apply workers\npe
Unlike the apply\nworker error, the number of table sync worker errors could be very\nlarge, for example, if a subscriber subscribes to many tables. If we\nleave those errors in the stats view, it uses more memory space and\ncould affect writing and reading stats file performance. If such left\ntable sync error entries are not helpful in practice I think we can\nremove them rather than clear some fields. What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 17 Jul 2021 00:02:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
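The cleanup idea above keys off `pg_subscription_rel`, an existing catalog that tracks per-relation sync state; `srsubstate = 'r'` means the initial table sync has finished (ready). Roughly, the entries whose stale tablesync errors could be purged are:

```sql
-- Relations whose initial sync has completed ('r' = ready).  Stale
-- tablesync error entries for these (srsubid, srrelid) pairs would be
-- the purge candidates in the scheme sketched above.
SELECT srsubid, srrelid::regclass AS relation, srsubstate
FROM pg_subscription_rel
WHERE srsubstate = 'r';
```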
{
"msg_contents": "On Fri, Jul 16, 2021 at 8:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 14, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Sounds good. I'll incorporate this in the next version patch that I'm\n> > planning to submit this week.\n>\n> Sorry, I could not make it this week. I'll submit them early next week.\n>\n\nNo problem.\n\n> While updating the patch I thought we need to have more design\n> discussion on two points of clearing error details after the error is\n> resolved:\n>\n> 1. How to clear apply worker errors. IIUC we've discussed that once\n> the apply worker skipped the transaction we leave the error entry\n> itself but clear its fields except for some fields such as failure\n> counts. But given that the stats messages could be lost, how can we\n> ensure to clear those error details? For table sync workers’ error, we\n> can have autovacuum workers periodically check entires of\n> pg_subscription_rel and clear the error entry if the table sync worker\n> completes table sync (i.g., checking if srsubstate = ‘r’). But there\n> is no such information for the apply workers and subscriptions.\n>\n\nBut won't the corresponding subscription (pg_subscription) have the\nXID as InvalidTransactionid once the xid is skipped or at least a\ndifferent XID then we would have in pg_stat view? Can we use that to\nreset entry via vacuum?\n\n> In\n> addition to sending the message clearing the error details just after\n> skipping the transaction, I thought that we can have apply workers\n> periodically send the message clearing the error details but it seems\n> not good.\n>\n\nYeah, such things should be a last resort.\n\n> 2. Do we really want to leave the table sync worker even after the\n> error is resolved and the table sync completes? Unlike the apply\n> worker error, the number of table sync worker errors could be very\n> large, for example, if a subscriber subscribes to many tables. 
If we\n> leave those errors in the stats view, it uses more memory space and\n> could affect writing and reading stats file performance. If such left\n> table sync error entries are not helpful in practice I think we can\n> remove them rather than clear some fields. What do you think?\n>\n\nSounds reasonable to me. One might think to update the subscription\nerror count by including table_sync errors but not sure if that is\nhelpful and even if that is helpful, we can extend it later.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:52:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 2:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 16, 2021 at 8:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 14, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Sounds good. I'll incorporate this in the next version patch that I'm\n> > > planning to submit this week.\n> >\n> > Sorry, I could not make it this week. I'll submit them early next week.\n> >\n>\n> No problem.\n>\n> > While updating the patch I thought we need to have more design\n> > discussion on two points of clearing error details after the error is\n> > resolved:\n> >\n> > 1. How to clear apply worker errors. IIUC we've discussed that once\n> > the apply worker skipped the transaction we leave the error entry\n> > itself but clear its fields except for some fields such as failure\n> > counts. But given that the stats messages could be lost, how can we\n> > ensure to clear those error details? For table sync workers’ error, we\n> > can have autovacuum workers periodically check entires of\n> > pg_subscription_rel and clear the error entry if the table sync worker\n> > completes table sync (i.g., checking if srsubstate = ‘r’). But there\n> > is no such information for the apply workers and subscriptions.\n> >\n>\n> But won't the corresponding subscription (pg_subscription) have the\n> XID as InvalidTransactionid once the xid is skipped or at least a\n> different XID then we would have in pg_stat view? Can we use that to\n> reset entry via vacuum?\n\nI think the XID is InvalidTransaction until the user specifies it. So\nI think we cannot know whether we're before skipping or after skipping\nonly by the transaction ID. 
No?\n\n>\n> > In\n> > addition to sending the message clearing the error details just after\n> > skipping the transaction, I thought that we can have apply workers\n> > periodically send the message clearing the error details but it seems\n> > not good.\n> >\n>\n> Yeah, such things should be a last resort.\n>\n> > 2. Do we really want to leave the table sync worker even after the\n> > error is resolved and the table sync completes? Unlike the apply\n> > worker error, the number of table sync worker errors could be very\n> > large, for example, if a subscriber subscribes to many tables. If we\n> > leave those errors in the stats view, it uses more memory space and\n> > could affect writing and reading stats file performance. If such left\n> > table sync error entries are not helpful in practice I think we can\n> > remove them rather than clear some fields. What do you think?\n> >\n>\n> Sounds reasonable to me. One might think to update the subscription\n> error count by including table_sync errors but not sure if that is\n> helpful and even if that is helpful, we can extend it later.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 19 Jul 2021 15:35:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jul 17, 2021 at 12:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 14, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 12, 2021 at 8:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Jul 12, 2021 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jul 12, 2021 at 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Jul 12, 2021 at 9:37 AM Alexey Lesovsky <lesovsky@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Jul 12, 2021 at 8:36 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >>\n> > > > > >> >\n> > > > > >> > Ok, looks nice. But I am curious how this will work in the case when there are two (or more) errors in the same subscription, but different relations?\n> > > > > >> >\n> > > > > >>\n> > > > > >> We can't proceed unless the first error is resolved, so there\n> > > > > >> shouldn't be multiple unresolved errors.\n> > > > > >\n> > > > > >\n> > > > > > Ok. I thought multiple errors are possible when many tables are initialized using parallel workers (with max_sync_workers_per_subscription > 1).\n> > > > > >\n> > > > >\n> > > > > Yeah, that is possible but that covers under the second condition\n> > > > > mentioned by me and in such cases I think we should have separate rows\n> > > > > for each tablesync. Is that right, Sawada-san or do you have something\n> > > > > else in mind?\n> > > >\n> > > > Yeah, I agree to have separate rows for each table sync. The table\n> > > > should not be processed by both the table sync worker and the apply\n> > > > worker at a time so the pair of subscription OID and relation OID will\n> > > > be unique. 
I think that we have a boolean column in the view,\n> > > > indicating whether the error entry is reported by the table sync\n> > > > worker or the apply worker, or maybe we also can have the action\n> > > > column show \"TABLE SYNC\" if the error is reported by the table sync\n> > > > worker.\n> > > >\n> > >\n> > > Or similar to backend_type (text) in pg_stat_activity, we can have\n> > > something like error_source (text) which will display apply worker or\n> > > tablesync worker? I think if we have this column then even if there is\n> > > a chance that both apply and sync worker operates on the same\n> > > relation, we can identify it via this column.\n> >\n> > Sounds good. I'll incorporate this in the next version patch that I'm\n> > planning to submit this week.\n>\n> Sorry, I could not make it this week. I'll submit them early next week.\n> While updating the patch I thought we need to have more design\n> discussion on two points of clearing error details after the error is\n> resolved:\n>\n> 1. How to clear apply worker errors. IIUC we've discussed that once\n> the apply worker skipped the transaction we leave the error entry\n> itself but clear its fields except for some fields such as failure\n> counts. But given that the stats messages could be lost, how can we\n> ensure to clear those error details? For table sync workers’ error, we\n> can have autovacuum workers periodically check entires of\n> pg_subscription_rel and clear the error entry if the table sync worker\n> completes table sync (i.g., checking if srsubstate = ‘r’). But there\n> is no such information for the apply workers and subscriptions. 
In\n> addition to sending the message clearing the error details just after\n> skipping the transaction, I thought that we can have apply workers\n> periodically send the message clearing the error details but it seems\n> not good.\n\nI think that the motivation behind the idea of leaving error entries\nand clearing some of their fields is that users can check if the error\nis successfully resolved and the worker is working fine. But we can\ncheck it also in another way, for example, checking\npg_stat_subscription view. So is it worth considering leaving the\napply worker errors as they are?\n\n>\n> 2. Do we really want to leave the table sync worker even after the\n> error is resolved and the table sync completes? 
"msg_date": "Mon, 19 Jul 2021 15:39:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 12:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jul 17, 2021 at 12:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > 1. How to clear apply worker errors. IIUC we've discussed that once\n> > the apply worker skipped the transaction we leave the error entry\n> > itself but clear its fields except for some fields such as failure\n> > counts. But given that the stats messages could be lost, how can we\n> > ensure to clear those error details? For table sync workers’ error, we\n> > can have autovacuum workers periodically check entires of\n> > pg_subscription_rel and clear the error entry if the table sync worker\n> > completes table sync (i.g., checking if srsubstate = ‘r’). But there\n> > is no such information for the apply workers and subscriptions. In\n> > addition to sending the message clearing the error details just after\n> > skipping the transaction, I thought that we can have apply workers\n> > periodically send the message clearing the error details but it seems\n> > not good.\n>\n> I think that the motivation behind the idea of leaving error entries\n> and clearing theirs some fields is that users can check if the error\n> is successfully resolved and the worker is working find. But we can\n> check it also in another way, for example, checking\n> pg_stat_subscription view. So is it worth considering leaving the\n> apply worker errors as they are?\n>\n\nI think so. Basically, we will send the clear message after skipping\nthe xact but I think it is fine if that message is lost. At worst, it\nwill be displayed as the last error details. If there is another error\nit will be overwritten, or probably we should have a function *_reset()\nwhich allows the user to reset a particular subscription's error info.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 19 Jul 2021 14:17:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached the updated version patch that incorporated all comments\r\n> I got so far except for the clearing error details part I mentioned\r\n> above. After getting a consensus on those parts, I'll incorporate the\r\n> idea into the patches.\r\n\r\nHi Sawada-san,\r\n\r\nI am interested in this feature.\r\nAfter having a look at the patch, I have a few questions about it.\r\n(Sorry in advance if I missed something)\r\n\r\n1) In 0002 patch, it introduces a new view called pg_stat_subscription_errors.\r\n Since it won't be cleaned automatically after we resolve the conflict, do we\r\n need a reset function to clean the statistics in it ? Maybe something\r\n similar to pg_stat_reset_replication_slot which clean the\r\n pg_stat_replication_slots.\r\n\r\n2) For 0003 patch, When I am faced with a conflict, I set skip_xid = xxx, and\r\n then I resolve the conflict. If I reset skip_xid after resolving the\r\n conflict, will the change(which cause the conflict before) be applied again ?\r\n\r\n3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\r\n assigned, and then will the change be skipped when the xid is assigned in\r\n the future even if it doesn't cause any conflicts ?\r\n\r\nBesides, It might be better to add some description of patch in each patch's\r\ncommit message which will make it easier for new reviewers to follow.\r\n\r\n\r\nBest regards,\r\nHouzj\r\n",
"msg_date": "Mon, 19 Jul 2021 11:38:54 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 5:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 19, 2021 at 12:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Jul 17, 2021 at 12:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > 1. How to clear apply worker errors. IIUC we've discussed that once\n> > > the apply worker skipped the transaction we leave the error entry\n> > > itself but clear its fields except for some fields such as failure\n> > > counts. But given that the stats messages could be lost, how can we\n> > > ensure to clear those error details? For table sync workers’ error, we\n> > > can have autovacuum workers periodically check entires of\n> > > pg_subscription_rel and clear the error entry if the table sync worker\n> > > completes table sync (i.g., checking if srsubstate = ‘r’). But there\n> > > is no such information for the apply workers and subscriptions. In\n> > > addition to sending the message clearing the error details just after\n> > > skipping the transaction, I thought that we can have apply workers\n> > > periodically send the message clearing the error details but it seems\n> > > not good.\n> >\n> > I think that the motivation behind the idea of leaving error entries\n> > and clearing theirs some fields is that users can check if the error\n> > is successfully resolved and the worker is working find. But we can\n> > check it also in another way, for example, checking\n> > pg_stat_subscription view. So is it worth considering leaving the\n> > apply worker errors as they are?\n> >\n>\n> I think so. Basically, we will send the clear message after skipping\n> the exact but I think it is fine if that message is lost. At worst, it\n> will be displayed as the last error details. If there is another error\n> it will be overwritten or probably we should have a function *_reset()\n> which allows the user to reset a particular subscription's error info.\n\nThat makes sense. 
I'll incorporate this idea in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 20 Jul 2021 10:08:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached the updated version patch that incorporated all comments\n> > I got so far except for the clearing error details part I mentioned\n> > above. After getting a consensus on those parts, I'll incorporate the\n> > idea into the patches.\n>\n> Hi Sawada-san,\n>\n> I am interested in this feature.\n> After having a look at the patch, I have a few questions about it.\n\nThank you for having a look at the patches!\n\n>\n> 1) In 0002 patch, it introduces a new view called pg_stat_subscription_errors.\n> Since it won't be cleaned automatically after we resolve the conflict, do we\n> need a reset function to clean the statistics in it ? Maybe something\n> similar to pg_stat_reset_replication_slot which clean the\n> pg_stat_replication_slots.\n\nAgreed. As Amit also mentioned, providing a reset function to clean\nthe statistics seems a good idea. If the message clearing the stats\nthat is sent after skipping the transaction gets lost, the user is\nable to reset those stats manually.\n\n>\n> 2) For 0003 patch, When I am faced with a conflict, I set skip_xid = xxx, and\n> then I resolve the conflict. If I reset skip_xid after resolving the\n> conflict, will the change(which cause the conflict before) be applied again ?\n\nThe apply worker checks skip_xid when it reads the subscription.\nTherefore, if you reset skip_xid before the apply worker restarts and\nskips the transaction, the change is applied. But if you reset\nskip_xid after the apply worker skips transaction, the change is\nalready skipped and your resetting skip_xid has no effect.\n\n>\n> 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\n> assigned, and then will the change be skipped when the xid is assigned in\n> the future even if it doesn't cause any conflicts ?\n\nYes. 
Currently, setting a correct xid is the user's responsibility. I\nthink it would be better to disable it or emit WARNING/ERROR when the\nuser mistakenly set the wrong xid if we find out a convenient way to\ndetect that.\n\n>\n> Besides, It might be better to add some description of patch in each patch's\n> commit message which will make it easier for new reviewers to follow.\n\nI'll add commit messages in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 20 Jul 2021 10:25:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 6:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\n> > assigned, and then will the change be skipped when the xid is assigned in\n> > the future even if it doesn't cause any conflicts ?\n>\n> Yes. Currently, setting a correct xid is the user's responsibility. I\n> think it would be better to disable it or emit WARNING/ERROR when the\n> user mistakenly set the wrong xid if we find out a convenient way to\n> detect that.\n>\n\nI think in this regard we should clearly document how this can be\nmisused by users. I see that you have mentioned about skip_xid but\nmaybe we can add more on how it could lead to skipping a\nnon-conflicting XID and can lead to an inconsistent replica. As\ndiscussed earlier as well, users can anyway do similar harm by using\npg_replication_slot_advance(). I think if possible we might want to\ngive some examples as well where it would be helpful for users to use\nthis functionality.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Jul 2021 14:57:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On July 20, 2021 9:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> > > I've attached the updated version patch that incorporated all\r\n> > > comments I got so far except for the clearing error details part I\r\n> > > mentioned above. After getting a consensus on those parts, I'll\r\n> > > incorporate the idea into the patches.\r\n> >\r\n> > 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\r\n> > assigned, and then will the change be skipped when the xid is assigned in\r\n> > the future even if it doesn't cause any conflicts ?\r\n> \r\n> Yes. Currently, setting a correct xid is the user's responsibility. I think it would\r\n> be better to disable it or emit WARNING/ERROR when the user mistakenly set\r\n> the wrong xid if we find out a convenient way to detect that.\r\n\r\nThanks for the explanation. As Amit suggested, it seems we can document the\r\nrisk of misusing skip_xid. 
Besides, I found some minor things in the patch.\r\n\r\n1) In 0002 patch\r\n\r\n+ */\r\n+static void\r\n+pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\r\n+{\r\n+\tif (subscriptionErrHash != NULL)\r\n+\t\treturn;\r\n+\r\n\r\n+static void\r\n+pgstat_recv_subscription_error(PgStat_MsgSubscriptionErr *msg, int len)\r\n+{\r\n\r\nthe second parameter \"len\" seems not used in the function\r\npgstat_recv_subscription_purge() and pgstat_recv_subscription_error().\r\n\r\n\r\n2) in 0003 patch\r\n\r\n * Helper function for apply_handle_commit and apply_handle_stream_commit.\r\n */\r\n static void\r\n-apply_handle_commit_internal(StringInfo s, LogicalRepCommitData *commit_data)\r\n+apply_handle_commit_internal(LogicalRepCommitData *commit_data)\r\n {\r\n\r\nThis looks like a separate change which removes an unused parameter in existing\r\ncode, maybe we can get this committed first ?\r\n\r\nBest regards,\r\nHouzj\r\n",
"msg_date": "Thu, 22 Jul 2021 11:53:26 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 8:53 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On July 20, 2021 9:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > wrote:\n> > > > I've attached the updated version patch that incorporated all\n> > > > comments I got so far except for the clearing error details part I\n> > > > mentioned above. After getting a consensus on those parts, I'll\n> > > > incorporate the idea into the patches.\n> > >\n> > > 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\n> > > assigned, and then will the change be skipped when the xid is assigned in\n> > > the future even if it doesn't cause any conflicts ?\n> >\n> > Yes. Currently, setting a correct xid is the user's responsibility. I think it would\n> > be better to disable it or emit WARNING/ERROR when the user mistakenly set\n> > the wrong xid if we find out a convenient way to detect that.\n>\n> Thanks for the explanation. As Amit suggested, it seems we can document the\n> risk of misusing skip_xid. Besides, I found some minor things in the patch.\n>\n> 1) In 0002 patch\n>\n> + */\n> +static void\n> +pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> +{\n> + if (subscriptionErrHash != NULL)\n> + return;\n> +\n>\n> +static void\n> +pgstat_recv_subscription_error(PgStat_MsgSubscriptionErr *msg, int len)\n> +{\n>\n> the second paramater \"len\" seems not used in the function\n> pgstat_recv_subscription_purge() and pgstat_recv_subscription_error().\n>\n\n'len' is not used at all in not only functions the patch added but\nalso other pgstat_recv_* functions. Can we remove all of them in a\nseparate patch? 'len' in pgstat_recv_* functions has never been used\nsince the stats collector code is introduced. 
It seems that it\nwas introduced by mistake in the first commit, and the pgstat_recv_*\nfunctions added later followed it in defining ‘len’ without ever\nusing it.\n\n>\n> 2) in 0003 patch\n>\n> * Helper function for apply_handle_commit and apply_handle_stream_commit.\n> */\n> static void\n> -apply_handle_commit_internal(StringInfo s, LogicalRepCommitData *commit_data)\n> +apply_handle_commit_internal(LogicalRepCommitData *commit_data)\n> {\n>\n> This looks like a separate change which remove unused paramater in existing\n> code, maybe we can get this committed first ?\n\nYeah, it seems to be introduced by commit 0926e96c493. I've attached\nthe patch for that.\n\nAlso, I've attached the updated version patches. This version of the patch\nhas pg_stat_reset_subscription_error() SQL function and sends a clear\nmessage after skipping the transaction. 0004 patch includes the\nskipping transaction feature and introduces RESET to ALTER\nSUBSCRIPTION. It would be better to separate them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 26 Jul 2021 11:58:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 22, 2021 at 8:53 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On July 20, 2021 9:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > > wrote:\n> > > > > I've attached the updated version patch that incorporated all\n> > > > > comments I got so far except for the clearing error details part I\n> > > > > mentioned above. After getting a consensus on those parts, I'll\n> > > > > incorporate the idea into the patches.\n> > > >\n> > > > 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\n> > > > assigned, and then will the change be skipped when the xid is assigned in\n> > > > the future even if it doesn't cause any conflicts ?\n> > >\n> > > Yes. Currently, setting a correct xid is the user's responsibility. I think it would\n> > > be better to disable it or emit WARNING/ERROR when the user mistakenly set\n> > > the wrong xid if we find out a convenient way to detect that.\n> >\n> > Thanks for the explanation. As Amit suggested, it seems we can document the\n> > risk of misusing skip_xid. Besides, I found some minor things in the patch.\n> >\n> > 1) In 0002 patch\n> >\n> > + */\n> > +static void\n> > +pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> > +{\n> > + if (subscriptionErrHash != NULL)\n> > + return;\n> > +\n> >\n> > +static void\n> > +pgstat_recv_subscription_error(PgStat_MsgSubscriptionErr *msg, int len)\n> > +{\n> >\n> > the second paramater \"len\" seems not used in the function\n> > pgstat_recv_subscription_purge() and pgstat_recv_subscription_error().\n> >\n>\n> 'len' is not used at all in not only functions the patch added but\n> also other pgstat_recv_* functions. 
Can we remove all of them in a\n> separate patch? 'len' in pgstat_recv_* functions has never been used\n> since the stats collector code is introduced. It seems like that it\n> was mistakenly introduced in the first commit and other pgstat_recv_*\n> functions were added that followed it to define ‘len’ but didn’t also\n> use it at all.\n>\n> >\n> > 2) in 0003 patch\n> >\n> > * Helper function for apply_handle_commit and apply_handle_stream_commit.\n> > */\n> > static void\n> > -apply_handle_commit_internal(StringInfo s, LogicalRepCommitData *commit_data)\n> > +apply_handle_commit_internal(LogicalRepCommitData *commit_data)\n> > {\n> >\n> > This looks like a separate change which remove unused paramater in existing\n> > code, maybe we can get this committed first ?\n>\n> Yeah, it seems to be introduced by commit 0926e96c493. I've attached\n> the patch for that.\n>\n> Also, I've attached the updated version patches. This version patch\n> has pg_stat_reset_subscription_error() SQL function and sends a clear\n> message after skipping the transaction. 0004 patch includes the\n> skipping transaction feature and introducing RESET to ALTER\n> SUBSCRIPTION. It would be better to separate them.\n>\n\nI've attached the new version patches that fix cfbot failure.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 29 Jul 2021 14:04:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 2:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 26, 2021 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 22, 2021 at 8:53 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On July 20, 2021 9:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > On Mon, Jul 19, 2021 at 8:38 PM houzj.fnst@fujitsu.com\n> > > > <houzj.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > On July 19, 2021 2:40 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> > > > wrote:\n> > > > > > I've attached the updated version patch that incorporated all\n> > > > > > comments I got so far except for the clearing error details part I\n> > > > > > mentioned above. After getting a consensus on those parts, I'll\n> > > > > > incorporate the idea into the patches.\n> > > > >\n> > > > > 3) For 0003 patch, if user set skip_xid to a wrong xid which have not been\n> > > > > assigned, and then will the change be skipped when the xid is assigned in\n> > > > > the future even if it doesn't cause any conflicts ?\n> > > >\n> > > > Yes. Currently, setting a correct xid is the user's responsibility. I think it would\n> > > > be better to disable it or emit WARNING/ERROR when the user mistakenly set\n> > > > the wrong xid if we find out a convenient way to detect that.\n> > >\n> > > Thanks for the explanation. As Amit suggested, it seems we can document the\n> > > risk of misusing skip_xid. 
Besides, I found some minor things in the patch.\n> > >\n> > > 1) In 0002 patch\n> > >\n> > > + */\n> > > +static void\n> > > +pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> > > +{\n> > > + if (subscriptionErrHash != NULL)\n> > > + return;\n> > > +\n> > >\n> > > +static void\n> > > +pgstat_recv_subscription_error(PgStat_MsgSubscriptionErr *msg, int len)\n> > > +{\n> > >\n> > > the second paramater \"len\" seems not used in the function\n> > > pgstat_recv_subscription_purge() and pgstat_recv_subscription_error().\n> > >\n> >\n> > 'len' is not used at all in not only functions the patch added but\n> > also other pgstat_recv_* functions. Can we remove all of them in a\n> > separate patch? 'len' in pgstat_recv_* functions has never been used\n> > since the stats collector code is introduced. It seems like that it\n> > was mistakenly introduced in the first commit and other pgstat_recv_*\n> > functions were added that followed it to define ‘len’ but didn’t also\n> > use it at all.\n> >\n> > >\n> > > 2) in 0003 patch\n> > >\n> > > * Helper function for apply_handle_commit and apply_handle_stream_commit.\n> > > */\n> > > static void\n> > > -apply_handle_commit_internal(StringInfo s, LogicalRepCommitData *commit_data)\n> > > +apply_handle_commit_internal(LogicalRepCommitData *commit_data)\n> > > {\n> > >\n> > > This looks like a separate change which remove unused paramater in existing\n> > > code, maybe we can get this committed first ?\n> >\n> > Yeah, it seems to be introduced by commit 0926e96c493. I've attached\n> > the patch for that.\n> >\n> > Also, I've attached the updated version patches. This version patch\n> > has pg_stat_reset_subscription_error() SQL function and sends a clear\n> > message after skipping the transaction. 0004 patch includes the\n> > skipping transaction feature and introducing RESET to ALTER\n> > SUBSCRIPTION. 
It would be better to separate them.\n> >\n>\n> I've attached the new version patches that fix cfbot failure.\n\nSorry I've attached wrong ones. Reattached the correct version patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 29 Jul 2021 14:47:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 29, 2021 at 2:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > Yeah, it seems to be introduced by commit 0926e96c493. I've attached\n> > > the patch for that.\n> > >\n> > > Also, I've attached the updated version patches. This version patch\n> > > has pg_stat_reset_subscription_error() SQL function and sends a clear\n> > > message after skipping the transaction. 0004 patch includes the\n> > > skipping transaction feature and introducing RESET to ALTER\n> > > SUBSCRIPTION. It would be better to separate them.\n> > >\n\n+1, to separate out the reset part.\n\n> >\n> > I've attached the new version patches that fix cfbot failure.\n>\n> Sorry I've attached wrong ones. Reattached the correct version patches.\n>\n\nPushed the 0001* patch that removes the unused parameter.\n\nFew comments on v4-0001-Add-errcontext-to-errors-of-the-applying-logical-\n===========================================================\n1.\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -78,6 +78,7 @@\n #include \"partitioning/partbounds.h\"\n #include \"partitioning/partdesc.h\"\n #include \"pgstat.h\"\n+#include \"replication/logicalworker.h\"\n #include \"rewrite/rewriteDefine.h\"\n #include \"rewrite/rewriteHandler.h\"\n #include \"rewrite/rewriteManip.h\"\n@@ -1899,6 +1900,9 @@ ExecuteTruncateGuts(List *explicit_rels,\n continue;\n }\n\n+ /* Set logical replication error callback info if necessary */\n+ set_logicalrep_error_context_rel(rel);\n+\n /*\n * Build the lists of foreign tables belonging to each foreign server\n * and pass each list to the foreign data wrapper's callback function,\n@@ -2006,6 +2010,9 @@ ExecuteTruncateGuts(List *explicit_rels,\n pgstat_count_truncate(rel);\n }\n\n+ /* Reset logical replication error callback info */\n+ reset_logicalrep_error_context_rel();\n+\n\nSetting up logical rep error context in a 
generic function looks a bit\nodd to me. Do we really need to set up error context here? I\nunderstand we can't do this in caller but anyway I think we are not\nsending this to logical replication view as well, so not sure we need\nto do it here.\n\n2.\n+/* Struct for saving and restoring apply information */\n+typedef struct ApplyErrCallbackArg\n+{\n+ LogicalRepMsgType command; /* 0 if invalid */\n+\n+ /* Local relation information */\n+ char *nspname; /* used for error context */\n+ char *relname; /* used for error context */\n+\n+ TransactionId remote_xid;\n+ TimestampTz committs;\n+} ApplyErrCallbackArg;\n+static ApplyErrCallbackArg apply_error_callback_arg =\n+{\n+ .command = 0,\n+ .relname = NULL,\n+ .nspname = NULL,\n+ .remote_xid = InvalidTransactionId,\n+ .committs = 0,\n+};\n+\n\nBetter to have a space between the above two declarations.\n\n3. commit message:\nThis commit adds the error context to errors happening during applying\nlogical replication changes, showing the command, the relation\nrelation, transaction ID, and commit timestamp in the server log.\n\n'relation' is mentioned twice.\n\nThe patch is not getting applied probably due to yesterday's commit in\nthis area.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 09:22:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On July 29, 2021 1:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Sorry I've attached wrong ones. Reattached the correct version patches.\r\n\r\nHi,\r\n\r\nI had some comments on the new version patches.\r\n\r\n1)\r\n\r\n- relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));\r\n- relstate->relid = subrel->srrelid;\r\n+ relstate = (SubscriptionRelState *) hash_search(htab, (void *) &subrel->srrelid,\r\n+ HASH_ENTER, NULL);\r\n\r\nI found the new version patch changes the List type 'relstate' to hash table type\r\n'relstate'. Will this bring significant performance improvements ?\r\n\r\n2)\r\n+ * PgStat_StatSubRelErrEntry represents a error happened during logical\r\n\r\na error => an error\r\n\r\n3)\r\n+CREATE VIEW pg_stat_subscription_errors AS\r\n+ SELECT\r\n+ d.datname,\r\n+ sr.subid,\r\n+ s.subname,\r\n\r\nIt seems the 'subid' column is not mentioned in the document of the\r\npg_stat_subscription_errors view.\r\n\r\n\r\n4)\r\n+\r\n+ if (fread(&nrels, 1, sizeof(long), fpin) != sizeof(long))\r\n+ {\r\n ...\r\n+ for (int i = 0; i < nrels; i++)\r\n\r\nthe type of i (int) seems different from the type of 'nrels' (long); it might be\r\nbetter to use the same type.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Fri, 30 Jul 2021 06:46:54 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 12:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 29, 2021 at 2:04 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > Yeah, it seems to be introduced by commit 0926e96c493. I've attached\n> > > > the patch for that.\n> > > >\n> > > > Also, I've attached the updated version patches. This version patch\n> > > > has pg_stat_reset_subscription_error() SQL function and sends a clear\n> > > > message after skipping the transaction. 0004 patch includes the\n> > > > skipping transaction feature and introducing RESET to ALTER\n> > > > SUBSCRIPTION. It would be better to separate them.\n> > > >\n>\n> +1, to separate out the reset part.\n\nOkay, I'll do that.\n\n>\n> > >\n> > > I've attached the new version patches that fix cfbot failure.\n> >\n> > Sorry I've attached wrong ones. Reattached the correct version patches.\n> >\n>\n> Pushed the 0001* patch that removes the unused parameter.\n\nThanks!\n\n>\n> Few comments on v4-0001-Add-errcontext-to-errors-of-the-applying-logical-\n> ===========================================================\n\nThank you for the comments!\n\n> 1.\n> --- a/src/backend/commands/tablecmds.c\n> +++ b/src/backend/commands/tablecmds.c\n> @@ -78,6 +78,7 @@\n> #include \"partitioning/partbounds.h\"\n> #include \"partitioning/partdesc.h\"\n> #include \"pgstat.h\"\n> +#include \"replication/logicalworker.h\"\n> #include \"rewrite/rewriteDefine.h\"\n> #include \"rewrite/rewriteHandler.h\"\n> #include \"rewrite/rewriteManip.h\"\n> @@ -1899,6 +1900,9 @@ ExecuteTruncateGuts(List *explicit_rels,\n> continue;\n> }\n>\n> + /* Set logical replication error callback info if necessary */\n> + set_logicalrep_error_context_rel(rel);\n> +\n> /*\n> * Build the lists of foreign tables belonging to each foreign server\n> * and pass each list to the foreign data wrapper's callback function,\n> @@ -2006,6 
+2010,9 @@ ExecuteTruncateGuts(List *explicit_rels,\n> pgstat_count_truncate(rel);\n> }\n>\n> + /* Reset logical replication error callback info */\n> + reset_logicalrep_error_context_rel();\n> +\n>\n> Setting up logical rep error context in a generic function looks a bit\n> odd to me. Do we really need to set up error context here? I\n> understand we can't do this in caller but anyway I think we are not\n> sending this to logical replication view as well, so not sure we need\n> to do it here.\n\nYeah, I'm not convinced of this part yet. I wanted to show relid also\nin truncate cases but I came up with only this idea.\n\nIf an error happens during truncating the table (in\nExecuteTruncateGuts()), relid set by\nset_logicalrep_error_context_rel() is actually sent to the view. If we\ndon’t have it, the view always shows relid as NULL in truncate cases.\nOn the other hand, it doesn’t cover all cases. For example, it doesn’t\ncover an error that the target table doesn’t exist on the subscriber,\nwhich happens when opening the target table. Anyway, in most cases,\neven if relid is NULL, the error message in the view helps users to\nknow which relation the error happened on. What do you think?\n\n>\n> 2.\n> +/* Struct for saving and restoring apply information */\n> +typedef struct ApplyErrCallbackArg\n> +{\n> + LogicalRepMsgType command; /* 0 if invalid */\n> +\n> + /* Local relation information */\n> + char *nspname; /* used for error context */\n> + char *relname; /* used for error context */\n> +\n> + TransactionId remote_xid;\n> + TimestampTz committs;\n> +} ApplyErrCallbackArg;\n> +static ApplyErrCallbackArg apply_error_callback_arg =\n> +{\n> + .command = 0,\n> + .relname = NULL,\n> + .nspname = NULL,\n> + .remote_xid = InvalidTransactionId,\n> + .committs = 0,\n> +};\n> +\n>\n> Better to have a space between the above two declarations.\n\nWill fix.\n\n>\n> 3. 
commit message:\n> This commit adds the error context to errors happening during applying\n> logical replication changes, showing the command, the relation\n> relation, transaction ID, and commit timestamp in the server log.\n>\n> 'relation' is mentioned twice.\n\nWill fix.\n\n>\n> The patch is not getting applied probably due to yesterday's commit in\n> this area.\n\nOkay. I'll rebase the patches to the current HEAD.\n\nI'm incorporating all comments from you and Houzj, and will submit the\nnew patch soon.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 Aug 2021 11:15:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 3:47 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On July 29, 2021 1:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Sorry I've attached wrong ones. Reattached the correct version patches.\n>\n> Hi,\n>\n> I had some comments on the new version patches.\n\nThank you for the comments!\n\n>\n> 1)\n>\n> - relstate = (SubscriptionRelState *) palloc(sizeof(SubscriptionRelState));\n> - relstate->relid = subrel->srrelid;\n> + relstate = (SubscriptionRelState *) hash_search(htab, (void *) &subrel->srrelid,\n> + HASH_ENTER, NULL);\n>\n> I found the new version patch changes the List type 'relstate' to hash table type\n> 'relstate'. Will this bring significant performance improvements ?\n\nFor pgstat_vacuum_stat() purposes, I think it's better to use a hash\ntable to avoid O(N) lookup. But it might not be good to change the\ntype of the return value of GetSubscriptionNotReadyRelations() since\nthis returned value is used by other functions to iterate over\nelements. The list iteration is faster than the hash table’s one. It\nwould be better to change it so that pgstat_vacuum_stat() constructs a\nhash table for its own purpose.\n\n>\n> 2)\n> + * PgStat_StatSubRelErrEntry represents a error happened during logical\n>\n> a error => an error\n\nWill fix.\n\n>\n> 3)\n> +CREATE VIEW pg_stat_subscription_errors AS\n> + SELECT\n> + d.datname,\n> + sr.subid,\n> + s.subname,\n>\n> It seems the 'subid' column is not mentioned in the document of the\n> pg_stat_subscription_errors view.\n\nWill fix.\n\n>\n>\n> 4)\n> +\n> + if (fread(&nrels, 1, sizeof(long), fpin) != sizeof(long))\n> + {\n> ...\n> + for (int i = 0; i < nrels; i++)\n>\n> the type of i(int) seems different of the type or 'nrels'(long), it might be\n> better to use the same type.\n\nWill fix.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 2 Aug 2021 11:36:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 7:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 30, 2021 at 12:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Setting up logical rep error context in a generic function looks a bit\n> > odd to me. Do we really need to set up error context here? I\n> > understand we can't do this in caller but anyway I think we are not\n> > sending this to logical replication view as well, so not sure we need\n> > to do it here.\n>\n> Yeah, I'm not convinced of this part yet. I wanted to show relid also\n> in truncate cases but I came up with only this idea.\n>\n> If an error happens during truncating the table (in\n> ExecuteTruncateGuts()), relid set by\n> set_logicalrep_error_context_rel() is actually sent to the view. If we\n> don’t have it, the view always shows relid as NULL in truncate cases.\n> On the other hand, it doesn’t cover all cases. For example, it doesn’t\n> cover an error that the target table doesn’t exist on the subscriber,\n> which happens when opening the target table. Anyway, in most cases,\n> even if relid is NULL, the error message in the view helps users to\n> know which relation the error happened on. What do you think?\n>\n\nYeah, I also think at this stage error message is sufficient in such cases.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 Aug 2021 08:51:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 2, 2021 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 7:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 30, 2021 at 12:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Setting up logical rep error context in a generic function looks a bit\n> > > odd to me. Do we really need to set up error context here? I\n> > > understand we can't do this in caller but anyway I think we are not\n> > > sending this to logical replication view as well, so not sure we need\n> > > to do it here.\n> >\n> > Yeah, I'm not convinced of this part yet. I wanted to show relid also\n> > in truncate cases but I came up with only this idea.\n> >\n> > If an error happens during truncating the table (in\n> > ExecuteTruncateGuts()), relid set by\n> > set_logicalrep_error_context_rel() is actually sent to the view. If we\n> > don’t have it, the view always shows relid as NULL in truncate cases.\n> > On the other hand, it doesn’t cover all cases. For example, it doesn’t\n> > cover an error that the target table doesn’t exist on the subscriber,\n> > which happens when opening the target table. Anyway, in most cases,\n> > even if relid is NULL, the error message in the view helps users to\n> > know which relation the error happened on. What do you think?\n> >\n>\n> Yeah, I also think at this stage error message is sufficient in such cases.\n\nI've attached new patches that incorporate all comments I got so far.\nPlease review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 3 Aug 2021 15:49:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 12:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 2, 2021 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Aug 2, 2021 at 7:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 30, 2021 at 12:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Setting up logical rep error context in a generic function looks a bit\n> > > > odd to me. Do we really need to set up error context here? I\n> > > > understand we can't do this in caller but anyway I think we are not\n> > > > sending this to logical replication view as well, so not sure we need\n> > > > to do it here.\n> > >\n> > > Yeah, I'm not convinced of this part yet. I wanted to show relid also\n> > > in truncate cases but I came up with only this idea.\n> > >\n> > > If an error happens during truncating the table (in\n> > > ExecuteTruncateGuts()), relid set by\n> > > set_logicalrep_error_context_rel() is actually sent to the view. If we\n> > > don’t have it, the view always shows relid as NULL in truncate cases.\n> > > On the other hand, it doesn’t cover all cases. For example, it doesn’t\n> > > cover an error that the target table doesn’t exist on the subscriber,\n> > > which happens when opening the target table. Anyway, in most cases,\n> > > even if relid is NULL, the error message in the view helps users to\n> > > know which relation the error happened on. 
What do you think?\n> > >\n> >\n> > Yeah, I also think at this stage error message is sufficient in such cases.\n>\n> I've attached new patches that incorporate all comments I got so far.\n> Please review them.\n\nI had a look at the first patch, couple of minor comments:\n1) Should we include this in typedefs.lst\n+/* Struct for saving and restoring apply information */\n+typedef struct ApplyErrCallbackArg\n+{\n+ LogicalRepMsgType command; /* 0 if invalid */\n+\n+ /* Local relation information */\n+ char *nspname;\n\n2) We can keep the case statement in the same order as in the\nLogicalRepMsgType enum, this will help in easily identifying if any\nenum gets missed.\n+ case LOGICAL_REP_MSG_RELATION:\n+ return \"RELATION\";\n+ case LOGICAL_REP_MSG_TYPE:\n+ return \"TYPE\";\n+ case LOGICAL_REP_MSG_ORIGIN:\n+ return \"ORIGIN\";\n+ case LOGICAL_REP_MSG_MESSAGE:\n+ return \"MESSAGE\";\n+ case LOGICAL_REP_MSG_STREAM_START:\n+ return \"STREAM START\";\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Aug 2021 16:24:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, August 3, 2021 2:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached new patches that incorporate all comments I got so far.\r\n> Please review them.\r\n\r\nHi,\r\n\r\nI had a few comments for the 0003 patch.\r\n\r\n1).\r\n- This clause alters parameters originally set by\r\n- <xref linkend=\"sql-createsubscription\"/>. See there for more\r\n- information. The parameters that can be altered\r\n- are <literal>slot_name</literal>,\r\n- <literal>synchronous_commit</literal>,\r\n- <literal>binary</literal>, and\r\n- <literal>streaming</literal>.\r\n+ This clause sets or resets a subscription option. The parameters that can be\r\n+ set are the parameters originally set by <xref linkend=\"sql-createsubscription\"/>:\r\n+ <literal>slot_name</literal>, <literal>synchronous_commit</literal>,\r\n+ <literal>binary</literal>, <literal>streaming</literal>.\r\n+ </para>\r\n+ <para>\r\n+ The parameters that can be reset are: <literal>streaming</literal>,\r\n+ <literal>binary</literal>, <literal>synchronous_commit</literal>.\r\n\r\nMaybe the doc looks better like the following ?\r\n\r\n+ This clause alters parameters originally set by\r\n+ <xref linkend=\"sql-createsubscription\"/>. See there for more\r\n+ information. 
The parameters that can be set\r\n+ are <literal>slot_name</literal>,\r\n+ <literal>synchronous_commit</literal>,\r\n+ <literal>binary</literal>, and\r\n+ <literal>streaming</literal>.\r\n+ </para>\r\n+ <para>\r\n+ The parameters that can be reset are: <literal>streaming</literal>,\r\n+ <literal>binary</literal>, <literal>synchronous_commit</literal>.\r\n\r\n2).\r\n- opts->create_slot = defGetBoolean(defel);\r\n+ if (!is_reset)\r\n+ opts->create_slot = defGetBoolean(defel);\r\n }\r\n\r\nSince we only support RESET streaming/binary/synchronous_commit, it\r\nmight be unnecessary to add the check 'if (!is_reset)' for other\r\noption.\r\n\r\n3).\r\ntypedef struct AlterSubscriptionStmt\r\n{\r\n NodeTag type;\r\n AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_OPTIONS, etc */\r\n\r\nSince the patch change the remove the enum value\r\n'ALTER_SUBSCRIPTION_OPTIONS', it'd better to change the comment here\r\nas well.\r\n\r\nBest regards,\r\nhouzj\r\n",
"msg_date": "Wed, 4 Aug 2021 04:02:52 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 3, 2021 at 7:54 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Aug 3, 2021 at 12:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Aug 2, 2021 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 2, 2021 at 7:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jul 30, 2021 at 12:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Jul 29, 2021 at 11:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > Setting up logical rep error context in a generic function looks a bit\n> > > > > odd to me. Do we really need to set up error context here? I\n> > > > > understand we can't do this in caller but anyway I think we are not\n> > > > > sending this to logical replication view as well, so not sure we need\n> > > > > to do it here.\n> > > >\n> > > > Yeah, I'm not convinced of this part yet. I wanted to show relid also\n> > > > in truncate cases but I came up with only this idea.\n> > > >\n> > > > If an error happens during truncating the table (in\n> > > > ExecuteTruncateGuts()), relid set by\n> > > > set_logicalrep_error_context_rel() is actually sent to the view. If we\n> > > > don’t have it, the view always shows relid as NULL in truncate cases.\n> > > > On the other hand, it doesn’t cover all cases. For example, it doesn’t\n> > > > cover an error that the target table doesn’t exist on the subscriber,\n> > > > which happens when opening the target table. Anyway, in most cases,\n> > > > even if relid is NULL, the error message in the view helps users to\n> > > > know which relation the error happened on. 
What do you think?\n> > > >\n> > >\n> > > Yeah, I also think at this stage error message is sufficient in such cases.\n> >\n> > I've attached new patches that incorporate all comments I got so far.\n> > Please review them.\n>\n> I had a look at the first patch, couple of minor comments:\n> 1) Should we include this in typedefs.lst\n> +/* Struct for saving and restoring apply information */\n> +typedef struct ApplyErrCallbackArg\n> +{\n> + LogicalRepMsgType command; /* 0 if invalid */\n> +\n> + /* Local relation information */\n> + char *nspname;\n>\n> 2) We can keep the case statement in the same order as in the\n> LogicalRepMsgType enum, this will help in easily identifying if any\n> enum gets missed.\n> + case LOGICAL_REP_MSG_RELATION:\n> + return \"RELATION\";\n> + case LOGICAL_REP_MSG_TYPE:\n> + return \"TYPE\";\n> + case LOGICAL_REP_MSG_ORIGIN:\n> + return \"ORIGIN\";\n> + case LOGICAL_REP_MSG_MESSAGE:\n> + return \"MESSAGE\";\n> + case LOGICAL_REP_MSG_STREAM_START:\n> + return \"STREAM START\";\n>\n\nThank you for reviewing the patch!\n\nI agreed with all comments and will fix them in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 4 Aug 2021 20:43:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 4, 2021 at 1:02 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, August 3, 2021 2:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached new patches that incorporate all comments I got so far.\n> > Please review them.\n>\n> Hi,\n>\n> I had a few comments for the 0003 patch.\n\nThanks for reviewing the patch!\n\n>\n> 1).\n> - This clause alters parameters originally set by\n> - <xref linkend=\"sql-createsubscription\"/>. See there for more\n> - information. The parameters that can be altered\n> - are <literal>slot_name</literal>,\n> - <literal>synchronous_commit</literal>,\n> - <literal>binary</literal>, and\n> - <literal>streaming</literal>.\n> + This clause sets or resets a subscription option. The parameters that can be\n> + set are the parameters originally set by <xref linkend=\"sql-createsubscription\"/>:\n> + <literal>slot_name</literal>, <literal>synchronous_commit</literal>,\n> + <literal>binary</literal>, <literal>streaming</literal>.\n> + </para>\n> + <para>\n> + The parameters that can be reset are: <literal>streaming</literal>,\n> + <literal>binary</literal>, <literal>synchronous_commit</literal>.\n>\n> Maybe the doc looks better like the following ?\n>\n> + This clause alters parameters originally set by\n> + <xref linkend=\"sql-createsubscription\"/>. See there for more\n> + information. 
The parameters that can be set\n> + are <literal>slot_name</literal>,\n> + <literal>synchronous_commit</literal>,\n> + <literal>binary</literal>, and\n> + <literal>streaming</literal>.\n> + </para>\n> + <para>\n> + The parameters that can be reset are: <literal>streaming</literal>,\n> + <literal>binary</literal>, <literal>synchronous_commit</literal>.\n\nAgreed.\n\n>\n> 2).\n> - opts->create_slot = defGetBoolean(defel);\n> + if (!is_reset)\n> + opts->create_slot = defGetBoolean(defel);\n> }\n>\n> Since we only support RESET streaming/binary/synchronous_commit, it\n> might be unnecessary to add the check 'if (!is_reset)' for other\n> option.\n\nGood point.\n\n>\n> 3).\n> typedef struct AlterSubscriptionStmt\n> {\n> NodeTag type;\n> AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_OPTIONS, etc */\n>\n> Since the patch change the remove the enum value\n> 'ALTER_SUBSCRIPTION_OPTIONS', it'd better to change the comment here\n> as well.\n\nAgreed.\n\nI'll incorporate those comments in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 4 Aug 2021 20:46:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, August 3, 2021 3:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached new patches that incorporate all comments I got so far.\r\n> Please review them.\r\nHi, I had a chance to look at the patch-set during my other development.\r\nJust let me share some minor cosmetic things.\r\n\r\n\r\n[1] unnatural wording ? in v5-0002.\r\n+ * create tells whether to create the new subscription entry if it is not\r\n+ * create tells whether to create the new subscription relation entry if it is\r\n\r\nI'm not sure if this wording is correct or not.\r\nYou meant just \"tells whether to create ....\" ?,\r\nalthough we already have 1 other \"create tells\" in HEAD.\r\n\r\n[2] typo \"kep\" in v05-0002.\r\n\r\nI think you meant \"kept\" in below sentence.\r\n\r\n+/*\r\n+ * Subscription error statistics kep in the stats collector. One entry represents\r\n+ * an error that happened during logical replication, reported by the apply worker\r\n+ * (subrelid is InvalidOid) or by the table sync worker (subrelid is a valid OID).\r\n\r\n[3] typo \"lotigcal\" in the v05-0004 commit message.\r\n\r\nIf incoming change violates any constraint, lotigcal replication stops\r\nuntil it's resolved. This commit introduces another way to skip the\r\ntransaction in question.\r\n\r\nIt should be \"logical\".\r\n\r\n[4] warning of doc build\r\n\r\nI've gotten an output like below during my process of make html.\r\nCould you please check this ?\r\n\r\nLink element has no content and no Endterm. Nothing to show in the link to monitoring-pg-stat-subscription-errors\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 5 Aug 2021 08:58:35 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, August 4, 2021 8:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I'll incorporate those comments in the next version patch.\r\nHi, when are you going to make and share the updated v6 ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 10 Aug 2021 04:52:13 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 5, 2021 at 5:58 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, August 3, 2021 3:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached new patches that incorporate all comments I got so far.\n> > Please review them.\n> Hi, I had a chance to look at the patch-set during my other development.\n> Just let me share some minor cosmetic things.\n\nThank you for reviewing the patches!\n\n>\n>\n> [1] unnatural wording ? in v5-0002.\n> + * create tells whether to create the new subscription entry if it is not\n> + * create tells whether to create the new subscription relation entry if it is\n>\n> I'm not sure if this wording is correct or not.\n> You meant just \"tells whether to create ....\" ?,\n> although we already have 1 other \"create tells\" in HEAD.\n\ncreate here means the function argument of\npgstat_get_subscription_entry() and\npgstat_get_subscription_error_entry(). That is, the function argument\n'create' tells whether to create the new entry if not found. I\nsingle-quoted the 'create' to avoid confusion.\n\n>\n> [2] typo \"kep\" in v05-0002.\n>\n> I think you meant \"kept\" in below sentence.\n>\n> +/*\n> + * Subscription error statistics kep in the stats collector. One entry represents\n> + * an error that happened during logical replication, reported by the apply worker\n> + * (subrelid is InvalidOid) or by the table sync worker (subrelid is a valid OID).\n\nFixed.\n\n>\n> [3] typo \"lotigcal\" in the v05-0004 commit message.\n>\n> If incoming change violates any constraint, lotigcal replication stops\n> until it's resolved. This commit introduces another way to skip the\n> transaction in question.\n>\n> It should be \"logical\".\n\nFixed.\n\n>\n> [4] warning of doc build\n>\n> I've gotten an output like below during my process of make html.\n> Could you please check this ?\n>\n> Link element has no content and no Endterm. 
Nothing to show in the link to monitoring-pg-stat-subscription-errors\n\nFixed.\n\nI've attached the latest patches that incorporated all comments I got\nso far. Please review them.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 10 Aug 2021 14:07:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the latest patches that incorporated all comments I got\n> so far. Please review them.\n>\n\nI am not able to apply the latest patch\n(v6-0001-Add-errcontext-to-errors-happening-during-applyin) on HEAD,\ngetting the below error:\npatching file src/backend/replication/logical/worker.c\nHunk #11 succeeded at 1195 (offset 50 lines).\nHunk #12 succeeded at 1253 (offset 50 lines).\nHunk #13 succeeded at 1277 (offset 50 lines).\nHunk #14 succeeded at 1305 (offset 50 lines).\nHunk #15 succeeded at 1330 (offset 50 lines).\nHunk #16 succeeded at 1362 (offset 50 lines).\nHunk #17 succeeded at 1508 (offset 50 lines).\nHunk #18 succeeded at 1524 (offset 50 lines).\nHunk #19 succeeded at 1645 (offset 50 lines).\nHunk #20 succeeded at 1671 (offset 50 lines).\nHunk #21 succeeded at 1772 (offset 50 lines).\nHunk #22 succeeded at 1828 (offset 50 lines).\nHunk #23 succeeded at 1934 (offset 50 lines).\nHunk #24 succeeded at 1962 (offset 50 lines).\nHunk #25 succeeded at 2399 (offset 50 lines).\nHunk #26 FAILED at 2405.\nHunk #27 succeeded at 3730 (offset 54 lines).\n1 out of 27 hunks FAILED -- saving rejects to file\nsrc/backend/replication/logical/worker.c.rej\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 11:59:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 11:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the latest patches that incorporated all comments I got\n> > so far. Please review them.\n> >\n>\n> I am not able to apply the latest patch\n> (v6-0001-Add-errcontext-to-errors-happening-during-applyin) on HEAD,\n> getting the below error:\n>\n\nFew comments on v6-0001-Add-errcontext-to-errors-happening-during-applyin\n==============================================================\n\n1. While applying DML operations, we are setting up the error context\nmultiple times due to which the context information is not\nappropriate. The first is set in apply_dispatch and then during\nprocessing, we set another error callback slot_store_error_callback in\nslot_store_data and slot_modify_data. When I forced one of the errors\nin slot_store_data(), it displays the below information in CONTEXT\nwhich doesn't make much sense.\n\n2021-08-10 15:16:39.887 IST [6784] ERROR: incorrect binary data\nformat in logical replication column 1\n2021-08-10 15:16:39.887 IST [6784] CONTEXT: processing remote data\nfor replication target relation \"public.test1\" column \"id\"\n during apply of \"INSERT\" for relation \"public.test1\" in\ntransaction with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n\n2.\nI think we can slightly change the new context information as below:\nBefore\nduring apply of \"INSERT\" for relation \"public.test1\" in transaction\nwith xid 740 committs 2021-08-10 14:44:38.058174+05:30\nAfter\nduring apply of \"INSERT\" for relation \"public.test1\" in transaction id\n740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\n\n\n3.\n+/* Struct for saving and restoring apply information */\n+typedef struct ApplyErrCallbackArg\n+{\n+ LogicalRepMsgType command; /* 0 if invalid */\n+\n+ /* Local relation information */\n+ char *nspname;\n+ char 
*relname;\n\n...\n...\n\n+\n+static ApplyErrCallbackArg apply_error_callback_arg =\n+{\n+ .command = 0,\n+ .relname = NULL,\n+ .nspname = NULL,\n\nLet's initialize the struct members in the order they are declared.\nThe order of relname and nspname should be another way.\n\n4.\n+\n+ TransactionId remote_xid;\n+ TimestampTz committs;\n+} ApplyErrCallbackArg;\n\nIt might be better to add a comment like \"remote xact information\"\nabove these structure members.\n\n5.\n+static void\n+apply_error_callback(void *arg)\n+{\n+ StringInfoData buf;\n+\n+ if (apply_error_callback_arg.command == 0)\n+ return;\n+\n+ initStringInfo(&buf);\n\nAt the end of this call, it is better to free this (pfree(buf.data))\n\n6. In the commit message, you might want to indicate that this\nadditional information can be used by the future patch to skip the\nconflicting transaction.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 15:48:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the latest patches that incorporated all comments I got\n> > so far. Please review them.\n> >\n>\n> I am not able to apply the latest patch\n> (v6-0001-Add-errcontext-to-errors-happening-during-applyin) on HEAD,\n> getting the below error:\n> patching file src/backend/replication/logical/worker.c\n> Hunk #11 succeeded at 1195 (offset 50 lines).\n> Hunk #12 succeeded at 1253 (offset 50 lines).\n> Hunk #13 succeeded at 1277 (offset 50 lines).\n> Hunk #14 succeeded at 1305 (offset 50 lines).\n> Hunk #15 succeeded at 1330 (offset 50 lines).\n> Hunk #16 succeeded at 1362 (offset 50 lines).\n> Hunk #17 succeeded at 1508 (offset 50 lines).\n> Hunk #18 succeeded at 1524 (offset 50 lines).\n> Hunk #19 succeeded at 1645 (offset 50 lines).\n> Hunk #20 succeeded at 1671 (offset 50 lines).\n> Hunk #21 succeeded at 1772 (offset 50 lines).\n> Hunk #22 succeeded at 1828 (offset 50 lines).\n> Hunk #23 succeeded at 1934 (offset 50 lines).\n> Hunk #24 succeeded at 1962 (offset 50 lines).\n> Hunk #25 succeeded at 2399 (offset 50 lines).\n> Hunk #26 FAILED at 2405.\n> Hunk #27 succeeded at 3730 (offset 54 lines).\n> 1 out of 27 hunks FAILED -- saving rejects to file\n> src/backend/replication/logical/worker.c.rej\n>\n\nSorry, I forgot to rebase the patches to the current HEAD. Since\nstream_prepare is introduced, I'll add some tests to the patches. I’ll\nsubmit the new patches tomorrow that also incorporates your comments\non v6-0001 patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 10 Aug 2021 20:08:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the latest patches that incorporated all comments I got\n> so far. Please review them.\n>\n\nSome initial review comments on the v6-0001 patch:\n\n\nsrc/backend/replication/logical/proto.c:\n(1)\n\n+ TimestampTz committs;\n\nI think it looks better to name \"committs\" as \"commit_ts\", and also is\nmore consistent with naming for other member \"remote_xid\".\n\nsrc/backend/replication/logical/worker.c:\n(2)\nTo be consistent with all other function headers, should start\nsentence with capital: \"get\" -> \"Get\"\n\n+ * get string representing LogicalRepMsgType.\n\n(3) It looks a bit cumbersome and repetitive to set/update the members\nof apply_error_callback_arg in numerous places.\n\nI suggest making the \"set_apply_error_context...\" and\n\"reset_apply_error_context...\" functions as \"static inline void\"\nfunctions (moving them to the top part of the source file, and\nremoving the existing function declarations for these).\n\nAlso, can add something similar to below:\n\nstatic inline void\nset_apply_error_callback_xid(TransactionId xid)\n{\n apply_error_callback_arg.remote_xid = xid;\n}\n\nstatic inline void\nset_apply_error_callback_xid_info(TransactionId xid, TimestampTz commit_ts)\n{\n apply_error_callback_arg.remote_xid = xid;\n apply_error_callback_arg.commit_ts = commit_ts;\n}\n\nso that instances of, for example:\n\n apply_error_callback_arg.remote_xid = prepare_data.xid;\n apply_error_callback_arg.committs = prepare_data.commit_time;\n\ncan be:\n\n set_apply_error_callback_tx_info(prepare_data.xid, prepare_data.commit_time);\n\n(4) The apply_error_callback() function is missing a function header/comment.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 10 Aug 2021 23:27:14 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 11:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Aug 10, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached the latest patches that incorporated all comments I got\n> > > so far. Please review them.\n> > >\n> >\n> > I am not able to apply the latest patch\n> > (v6-0001-Add-errcontext-to-errors-happening-during-applyin) on HEAD,\n> > getting the below error:\n> >\n>\n> Few comments on v6-0001-Add-errcontext-to-errors-happening-during-applyin\n\nThank you for the comments!\n\n> ==============================================================\n>\n> 1. While applying DML operations, we are setting up the error context\n> multiple times due to which the context information is not\n> appropriate. The first is set in apply_dispatch and then during\n> processing, we set another error callback slot_store_error_callback in\n> slot_store_data and slot_modify_data. When I forced one of the errors\n> in slot_store_data(), it displays the below information in CONTEXT\n> which doesn't make much sense.\n>\n> 2021-08-10 15:16:39.887 IST [6784] ERROR: incorrect binary data\n> format in logical replication column 1\n> 2021-08-10 15:16:39.887 IST [6784] CONTEXT: processing remote data\n> for replication target relation \"public.test1\" column \"id\"\n> during apply of \"INSERT\" for relation \"public.test1\" in\n> transaction with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n\nYes, but we cannot change the error context message depending on other\nerror context messages. So it seems hard to construct a complete\nsentence in the context message that is okay in terms of English\ngrammar. 
Is the following message better?\n\nCONTEXT: processing remote data for replication target relation\n\"public.test1\" column “id\"\n applying \"INSERT\" for relation \"public.test1” in transaction\nwith xid 740 committs 2021-08-10 14:44:38.058174+05:30\n\n>\n> 2.\n> I think we can slightly change the new context information as below:\n> Before\n> during apply of \"INSERT\" for relation \"public.test1\" in transaction\n> with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n> After\n> during apply of \"INSERT\" for relation \"public.test1\" in transaction id\n> 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\n\nFixed.\n\n>\n> 3.\n> +/* Struct for saving and restoring apply information */\n> +typedef struct ApplyErrCallbackArg\n> +{\n> + LogicalRepMsgType command; /* 0 if invalid */\n> +\n> + /* Local relation information */\n> + char *nspname;\n> + char *relname;\n>\n> ...\n> ...\n>\n> +\n> +static ApplyErrCallbackArg apply_error_callback_arg =\n> +{\n> + .command = 0,\n> + .relname = NULL,\n> + .nspname = NULL,\n>\n> Let's initialize the struct members in the order they are declared.\n> The order of relname and nspname should be another way.\n\nFixed.\n\n> 4.\n> +\n> + TransactionId remote_xid;\n> + TimestampTz committs;\n> +} ApplyErrCallbackArg;\n>\n> It might be better to add a comment like \"remote xact information\"\n> above these structure members.\n\nFixed.\n\n>\n> 5.\n> +static void\n> +apply_error_callback(void *arg)\n> +{\n> + StringInfoData buf;\n> +\n> + if (apply_error_callback_arg.command == 0)\n> + return;\n> +\n> + initStringInfo(&buf);\n>\n> At the end of this call, it is better to free this (pfree(buf.data))\n\nFixed.\n\n>\n> 6. In the commit message, you might want to indicate that this\n> additional information can be used by the future patch to skip the\n> conflicting transaction.\n\nFixed.\n\nI've attached the new patches. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 11 Aug 2021 14:48:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 10:27 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the latest patches that incorporated all comments I got\n> > so far. Please review them.\n> >\n>\n> Some initial review comments on the v6-0001 patch:\n\nThanks for reviewing the patch!\n\n>\n>\n> src/backend/replication/logical/proto.c:\n> (1)\n>\n> + TimestampTz committs;\n>\n> I think it looks better to name \"committs\" as \"commit_ts\", and also is\n> more consistent with naming for other member \"remote_xid\".\n\nFixed.\n\n>\n> src/backend/replication/logical/worker.c:\n> (2)\n> To be consistent with all other function headers, should start\n> sentence with capital: \"get\" -> \"Get\"\n>\n> + * get string representing LogicalRepMsgType.\n\nFixed\n\n>\n> (3) It looks a bit cumbersome and repetitive to set/update the members\n> of apply_error_callback_arg in numerous places.\n>\n> I suggest making the \"set_apply_error_context...\" and\n> \"reset_apply_error_context...\" functions as \"static inline void\"\n> functions (moving them to the top part of the source file, and\n> removing the existing function declarations for these).\n>\n> Also, can add something similar to below:\n>\n> static inline void\n> set_apply_error_callback_xid(TransactionId xid)\n> {\n> apply_error_callback_arg.remote_xid = xid;\n> }\n>\n> static inline void\n> set_apply_error_callback_xid_info(TransactionId xid, TimestampTz commit_ts)\n> {\n> apply_error_callback_arg.remote_xid = xid;\n> apply_error_callback_arg.commit_ts = commit_ts;\n> }\n>\n> so that instances of, for example:\n>\n> apply_error_callback_arg.remote_xid = prepare_data.xid;\n> apply_error_callback_arg.committs = prepare_data.commit_time;\n>\n> can be:\n>\n> set_apply_error_callback_tx_info(prepare_data.xid, prepare_data.commit_time);\n\nOkay. 
I've added set_apply_error_callback_xact() function to set\ntransaction information to apply error callback. Also, I inlined those\nhelper functions since we call them every change.\n\n>\n> (4) The apply_error_callback() function is missing a function header/comment.\n\nAdded.\n\nThe fixes for the above comments are incorporated in the v7 patch I\njust submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoALAq_0q_Zz2K0tO%3DkuUj8aBrDdMJXbey1P6t4w8snpQQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 11 Aug 2021 14:52:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 11:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > ==============================================================\n> >\n> > 1. While applying DML operations, we are setting up the error context\n> > multiple times due to which the context information is not\n> > appropriate. The first is set in apply_dispatch and then during\n> > processing, we set another error callback slot_store_error_callback in\n> > slot_store_data and slot_modify_data. When I forced one of the errors\n> > in slot_store_data(), it displays the below information in CONTEXT\n> > which doesn't make much sense.\n> >\n> > 2021-08-10 15:16:39.887 IST [6784] ERROR: incorrect binary data\n> > format in logical replication column 1\n> > 2021-08-10 15:16:39.887 IST [6784] CONTEXT: processing remote data\n> > for replication target relation \"public.test1\" column \"id\"\n> > during apply of \"INSERT\" for relation \"public.test1\" in\n> > transaction with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n>\n> Yes, but we cannot change the error context message depending on other\n> error context messages. So it seems hard to construct a complete\n> sentence in the context message that is okay in terms of English\n> grammar. Is the following message better?\n>\n> CONTEXT: processing remote data for replication target relation\n> \"public.test1\" column “id\"\n> applying \"INSERT\" for relation \"public.test1” in transaction\n> with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n>\n\nI don't like the proposed text. How about if we combine both and have\nsomething like: \"processing remote data during \"UPDATE\" for\nreplication target relation \"public.test1\" column \"id\" in transaction\nid 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\"? 
For\nthis, I think we need to remove slot_store_error_callback and\nadd/change the ApplyErrCallbackArg to include the additional required\ninformation in that callback.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 13:48:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> I've attached the new patches. Please review them.\n\nPlease note that newly added tap tests fail due to known assertion\nfailure in pgstats that I reported here[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoCCAa%2BJ1-udHRo5-Hbtv%3DD38WdZDAaXZGDbQQ_Vg_d3bQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 11 Aug 2021 17:33:51 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 5:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 11, 2021 at 11:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 10, 2021 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > ==============================================================\n> > >\n> > > 1. While applying DML operations, we are setting up the error context\n> > > multiple times due to which the context information is not\n> > > appropriate. The first is set in apply_dispatch and then during\n> > > processing, we set another error callback slot_store_error_callback in\n> > > slot_store_data and slot_modify_data. When I forced one of the errors\n> > > in slot_store_data(), it displays the below information in CONTEXT\n> > > which doesn't make much sense.\n> > >\n> > > 2021-08-10 15:16:39.887 IST [6784] ERROR: incorrect binary data\n> > > format in logical replication column 1\n> > > 2021-08-10 15:16:39.887 IST [6784] CONTEXT: processing remote data\n> > > for replication target relation \"public.test1\" column \"id\"\n> > > during apply of \"INSERT\" for relation \"public.test1\" in\n> > > transaction with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n> >\n> > Yes, but we cannot change the error context message depending on other\n> > error context messages. So it seems hard to construct a complete\n> > sentence in the context message that is okay in terms of English\n> > grammar. Is the following message better?\n> >\n> > CONTEXT: processing remote data for replication target relation\n> > \"public.test1\" column “id\"\n> > applying \"INSERT\" for relation \"public.test1” in transaction\n> > with xid 740 committs 2021-08-10 14:44:38.058174+05:30\n> >\n>\n> I don't like the proposed text. 
How about if we combine both and have\n> something like: \"processing remote data during \"UPDATE\" for\n> replication target relation \"public.test1\" column \"id\" in transaction\n> id 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\"? For\n> this, I think we need to remove slot_store_error_callback and\n> add/change the ApplyErrCallbackArg to include the additional required\n> information in that callback.\n\nOh, I've never thought about that. That's a good idea.\n\nI've attached the updated patches. FYI I've included the patch\n(v8-0005) that fixes the assertion failure during shared fileset\ncleanup to make cfbot tests happy.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 12 Aug 2021 14:53:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated patches. FYI I've included the patch\n> (v8-0005) that fixes the assertion failure during shared fileset\n> cleanup to make cfbot tests happy.\n>\n\nA minor comment on the 0001 patch: In the message I think that using\n\"ID\" would look better than lowercase \"id\" and AFAICS it's more\nconsistent with existing messages.\n\n+ appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 12 Aug 2021 17:51:24 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 1:21 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated patches. FYI I've included the patch\n> > (v8-0005) that fixes the assertion failure during shared fileset\n> > cleanup to make cfbot tests happy.\n> >\n>\n> A minor comment on the 0001 patch: In the message I think that using\n> \"ID\" would look better than lowercase \"id\" and AFAICS it's more\n> consistent with existing messages.\n>\n> + appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n>\n\nYou have a point but I think in this case it might look a bit odd as\nwe have another field 'commit timestamp' after that which is\nlowercase.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 12 Aug 2021 16:48:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > A minor comment on the 0001 patch: In the message I think that using\n> > \"ID\" would look better than lowercase \"id\" and AFAICS it's more\n> > consistent with existing messages.\n> >\n> > + appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n> >\n>\n> You have a point but I think in this case it might look a bit odd as\n> we have another field 'commit timestamp' after that which is\n> lowercase.\n>\n\nI did a quick search and I couldn't find any other messages in the\nPostgres code that use \"transaction id\", but I could find some that\nuse \"transaction ID\" and \"transaction identifier\".\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 12 Aug 2021 22:11:41 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 5:41 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > A minor comment on the 0001 patch: In the message I think that using\n> > > \"ID\" would look better than lowercase \"id\" and AFAICS it's more\n> > > consistent with existing messages.\n> > >\n> > > + appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n> > >\n> >\n> > You have a point but I think in this case it might look a bit odd as\n> > we have another field 'commit timestamp' after that which is\n> > lowercase.\n> >\n>\n> I did a quick search and I couldn't find any other messages in the\n> Postgres code that use \"transaction id\", but I could find some that\n> use \"transaction ID\" and \"transaction identifier\".\n>\n\nOkay, but that doesn't mean using it here is bad. I am personally fine\nwith a message containing something like \"... in transaction\nid 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\" but I\nwon't mind if you and or others find some other way convenient. Any\nopinion from others?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Aug 2021 09:36:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 2:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 5:41 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Thu, Aug 12, 2021 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > A minor comment on the 0001 patch: In the message I think that using\n> > > > \"ID\" would look better than lowercase \"id\" and AFAICS it's more\n> > > > consistent with existing messages.\n> > > >\n> > > > + appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n> > > >\n> > >\n> > > You have a point but I think in this case it might look a bit odd as\n> > > we have another field 'commit timestamp' after that which is\n> > > lowercase.\n> > >\n> >\n> > I did a quick search and I couldn't find any other messages in the\n> > Postgres code that use \"transaction id\", but I could find some that\n> > use \"transaction ID\" and \"transaction identifier\".\n> >\n>\n> Okay, but that doesn't mean using it here is bad. I am personally fine\n> with a message containing something like \"... in transaction\n> id 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\" but I\n> won't mind if you and or others find some other way convenient. Any\n> opinion from others?\n>\n\nJust to be clear, all I was saying is that I thought using uppercase\n\"ID\" looked better in the message, and was more consistent with\nexisting logged messages, than using lowercase \"id\".\ni.e. my suggestion was a trivial change:\n\nBEFORE:\n+ appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\nAFTER:\n+ appendStringInfo(&buf, _(\" in transaction ID %u with commit timestamp %s\"),\n\nBut it was just a suggestion. Maybe others feel differently.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 13 Aug 2021 16:18:14 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Aug 13, 2021 at 1:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 5:41 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Thu, Aug 12, 2021 at 9:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > A minor comment on the 0001 patch: In the message I think that using\n> > > > \"ID\" would look better than lowercase \"id\" and AFAICS it's more\n> > > > consistent with existing messages.\n> > > >\n> > > > + appendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\n> > > >\n> > >\n> > > You have a point but I think in this case it might look a bit odd as\n> > > we have another field 'commit timestamp' after that which is\n> > > lowercase.\n> > >\n> >\n> > I did a quick search and I couldn't find any other messages in the\n> > Postgres code that use \"transaction id\", but I could find some that\n> > use \"transaction ID\" and \"transaction identifier\".\n> >\n>\n> Okay, but that doesn't mean using it here is bad. I am personally fine\n> with a message containing something like \"... in transaction\n> id 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\" but I\n> won't mind if you and or others find some other way convenient. 
Any\n> opinion from others?\n\nI don't have a strong opinion on this but in terms of consistency we\noften use like \"transaction %u\" in messages when showing XID value,\nrather than \"transaction [id|ID|identifier]\":\n\n$ git grep -i \"errmsg.*transaction %u\" src/backend/\nsrc/backend/access/transam/commit_ts.c: errmsg(\"cannot\nretrieve commit timestamp for transaction %u\", xid)));\nsrc/backend/access/transam/slru.c: errmsg(\"could not\naccess status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: errmsg(\"could not\naccess status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: errmsg(\"could\nnot access status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: (errmsg(\"could\nnot access status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: errmsg(\"could\nnot access status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: (errmsg(\"could\nnot access status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: errmsg(\"could not\naccess status of transaction %u\", xid),\nsrc/backend/access/transam/slru.c: errmsg(\"could not\naccess status of transaction %u\", xid),\nsrc/backend/access/transam/twophase.c:\n(errmsg(\"recovering prepared transaction %u from shared memory\",\nxid)));\nsrc/backend/access/transam/twophase.c:\n(errmsg(\"removing stale two-phase state file for transaction %u\",\nsrc/backend/access/transam/twophase.c:\n(errmsg(\"removing stale two-phase state from memory for transaction\n%u\",\nsrc/backend/access/transam/twophase.c:\n(errmsg(\"removing future two-phase state file for transaction %u\",\nsrc/backend/access/transam/twophase.c:\n(errmsg(\"removing future two-phase state from memory for transaction\n%u\",\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"corrupted two-phase state file for transaction %u\",\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"corrupted two-phase state in memory for transaction %u\",\nsrc/backend/access/transam/xlog.c: 
(errmsg(\"recovery\nstopping before commit of transaction %u, time %s\",\nsrc/backend/access/transam/xlog.c: (errmsg(\"recovery\nstopping before abort of transaction %u, time %s\",\nsrc/backend/access/transam/xlog.c:\n(errmsg(\"recovery stopping after commit of transaction %u, time %s\",\nsrc/backend/access/transam/xlog.c:\n(errmsg(\"recovery stopping after abort of transaction %u, time %s\",\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"transaction %u not found in stream XID hash table\",\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"transaction %u not found in stream XID hash table\",\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"transaction %u not found in stream XID hash table\",\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"transaction %u not found in stream XID hash table\",\n\n$ git grep -i \"errmsg.*transaction identifier\" src/backend/\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"transaction identifier \\\"%s\\\" is too long\",\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"transaction identifier \\\"%s\\\" is already in use\",\n\n$ git grep -i \"errmsg.*transaction id\" src/backend/\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"transaction identifier \\\"%s\\\" is too long\",\nsrc/backend/access/transam/twophase.c:\nerrmsg(\"transaction identifier \\\"%s\\\" is already in use\",\nsrc/backend/access/transam/varsup.c:\n(errmsg_internal(\"transaction ID wrap limit is %u, limited by database\nwith OID %u\",\nsrc/backend/access/transam/xlog.c: (errmsg_internal(\"next\ntransaction ID: \" UINT64_FORMAT \"; next OID: %u\",\nsrc/backend/access/transam/xlog.c: (errmsg_internal(\"oldest\nunfrozen transaction ID: %u, in database %u\",\nsrc/backend/access/transam/xlog.c: (errmsg(\"invalid next\ntransaction ID\")));\nsrc/backend/replication/logical/snapbuild.c:\n(errmsg_plural(\"exported logical decoding snapshot: \\\"%s\\\" with %u\ntransaction 
ID\",\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"invalid transaction ID in streamed replication\ntransaction\")));\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"invalid transaction ID in streamed replication\ntransaction\")));\nsrc/backend/replication/logical/worker.c:\nerrmsg_internal(\"invalid two-phase transaction ID\")));\nsrc/backend/utils/adt/xid8funcs.c: errmsg(\"transaction\nID %s is in the future\",\n\nTherefore, perhaps a message like \"... in transaction 740 with commit\ntimestamp 2021-08-10 14:44:38.058174+05:30\" is better in terms of\nconsistency with other messages?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 16 Aug 2021 05:24:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 6:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Therefore, perhaps a message like \"... in transaction 740 with commit\n> timestamp 2021-08-10 14:44:38.058174+05:30\" is better in terms of\n> consistency with other messages?\n>\n\nYes, I think that would be more consistent.\n\nOn another note, for the 0001 patch, the elog ERROR at the bottom of\nthe logicalrep_message_type() function seems to assume that the\nunrecognized \"action\" is a printable character (with its use of %c)\nand also that the character is meaningful to the user in some way.\nBut given that the compiler normally warns of an unhandled enum value\nwhen switching on an enum, such an error would most likely be when\naction is some int value that wouldn't be meaningful to the user (as\nit wouldn't be one of the LogicalRepMsgType enum values).\nI therefore think it would be better to use %d in that ERROR:\n\ni.e.\n\n+ elog(ERROR, \"invalid logical replication message type %d\", action);\n\nSimilar comments apply to the apply_dispatch() function (and I realise\nit used %c before your patch).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 16 Aug 2021 13:03:24 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 1:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Aug 13, 2021 at 1:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Okay, but that doesn't mean using it here is bad. I am personally fine\n> > with a message containing something like \"... in transaction\n> > id 740 with commit timestamp 2021-08-10 14:44:38.058174+05:30\" but I\n> > won't mind if you and or others find some other way convenient. Any\n> > opinion from others?\n>\n> I don't have a strong opinion on this but in terms of consistency we\n> often use like \"transaction %u\" in messages when showing XID value,\n> rather than \"transaction [id|ID|identifier]\":\n>\n..\n>\n> Therefore, perhaps a message like \"... in transaction 740 with commit\n> timestamp 2021-08-10 14:44:38.058174+05:30\" is better in terms of\n> consistency with other messages?\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 16 Aug 2021 10:51:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached the updated patches. FYI I've included the patch\r\n> (v8-0005) that fixes the assertion failure during shared fileset cleanup to make\r\n> cfbot tests happy.\r\n\r\nHi,\r\n\r\nThanks for the new patches.\r\nI have a few comments on the v8-0001 patch.\r\n\r\n1)\r\n+\r\n+\tif (TransactionIdIsNormal(errarg->remote_xid))\r\n+\t\tappendStringInfo(&buf, _(\" in transaction id %u with commit timestamp %s\"),\r\n+\t\t\t\t\t\t errarg->remote_xid,\r\n+\t\t\t\t\t\t errarg->commit_ts == 0\r\n+\t\t\t\t\t\t ? \"(unset)\"\r\n+\t\t\t\t\t\t : timestamptz_to_str(errarg->commit_ts));\r\n+\r\n+\terrcontext(\"%s\", buf.data);\r\n\r\nI think we can output the timestamp in a separate check which can be more\r\nconsistent with the other code style in apply_error_callback()\r\n(ie)\r\n+\tif (errarg->commit_ts != 0)\r\n+\t\tappendStringInfo(&buf, _(\" with commit timestamp %s\"),\r\n+\t\t\t\t\t\ttimestamptz_to_str(errarg->commit_ts));\r\n\r\n\r\n2)\r\n+/*\r\n+ * Get string representing LogicalRepMsgType.\r\n+ */\r\n+char *\r\n+logicalrep_message_type(LogicalRepMsgType action)\r\n+{\r\n...\r\n+\r\n+\telog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\r\n+}\r\n\r\nSome old compilers might complain that the function doesn't have a return value\r\nat the end of the function, maybe we can code like the following:\r\n\r\n+char *\r\n+logicalrep_message_type(LogicalRepMsgType action)\r\n+{\r\n+\tswitch (action)\r\n+\t{\r\n+\t\tcase LOGICAL_REP_MSG_BEGIN:\r\n+\t\t\treturn \"BEGIN\";\r\n...\r\n+\t\tdefault:\r\n+\t\t\telog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\r\n+\t}\r\n+\treturn NULL;\t\t\t\t/* keep compiler quiet */\r\n+}\r\n\r\n\r\n3)\r\nDo we need to invoke set_apply_error_context_xact() in the function\r\napply_handle_stream_prepare() to save the xid and timestamp ?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Mon, 16 Aug 2021 06:59:36 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Monday, August 16, 2021 3:00 PM Hou, Zhijie wrote:\r\n> On Thu, Aug 12, 2021 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > I've attached the updated patches. FYI I've included the patch\r\n> > (v8-0005) that fixes the assertion failure during shared fileset\r\n> > cleanup to make cfbot tests happy.\r\n> \r\n> Hi,\r\n> \r\n> Thanks for the new patches.\r\n> I have a few comments on the v8-0001 patch.\r\n> 3)\r\n> Do we need to invoke set_apply_error_context_xact() in the function\r\n> apply_handle_stream_prepare() to save the xid and timestamp ?\r\n\r\nSorry, this comment wasn't correct, please ignore it.\r\nHere is another comment:\r\n\r\n+char *\r\n+logicalrep_message_type(LogicalRepMsgType action)\r\n+{\r\n...\r\n+\t\tcase LOGICAL_REP_MSG_STREAM_END:\r\n+\t\t\treturn \"STREAM END\";\r\n...\r\n\r\nI think most of the existing code uses \"STREAM STOP\" to describe the\r\nLOGICAL_REP_MSG_STREAM_END message, is it better to return \"STREAM STOP\" in\r\nfunction logicalrep_message_type() too ?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Mon, 16 Aug 2021 07:53:52 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 5:54 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Here is another comment:\n>\n> +char *\n> +logicalrep_message_type(LogicalRepMsgType action)\n> +{\n> ...\n> + case LOGICAL_REP_MSG_STREAM_END:\n> + return \"STREAM END\";\n> ...\n>\n> I think most of the existing code uses \"STREAM STOP\" to describe the\n> LOGICAL_REP_MSG_STREAM_END message, is it better to return \"STREAM STOP\" in\n> function logicalrep_message_type() too ?\n>\n\n+1\nI think you're right, it should be \"STREAM STOP\" in that case.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 16 Aug 2021 18:30:46 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 12, 2021 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated patches. FYI I've included the patch\n> (v8-0005) that fixes the assertion failure during shared fileset\n> cleanup to make cfbot tests happy.\n>\n\nAnother comment on the 0001 patch: as there is now a mix of setting\n\"apply_error_callback_arg\" members directly and also through inline\nfunctions, it might look better to have it done consistently with\nfunctions having prototypes something like the following:\n\nstatic inline void set_apply_error_context_rel(LogicalRepRelMapEntry *rel);\nstatic inline void reset_apply_error_context_rel(void);\nstatic inline void set_apply_error_context_attnum(int remote_attnum);\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 17 Aug 2021 13:00:07 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 3:59 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thu, Aug 12, 2021 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached the updated patches. FYI I've included the patch\n> > (v8-0005) that fixes the assertion failure during shared fileset cleanup to make\n> > cfbot tests happy.\n>\n> Hi,\n>\n> Thanks for the new patches.\n> I have a few comments on the v8-0001 patch.\n\nThank you for the comments!\n\n>\n>\n> 2)\n> +/*\n> + * Get string representing LogicalRepMsgType.\n> + */\n> +char *\n> +logicalrep_message_type(LogicalRepMsgType action)\n> +{\n> ...\n> +\n> + elog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\n> +}\n>\n> Some old compilers might complain that the function doesn't have a return value\n> at the end of the function, maybe we can code like the following:\n>\n> +char *\n> +logicalrep_message_type(LogicalRepMsgType action)\n> +{\n> + switch (action)\n> + {\n> + case LOGICAL_REP_MSG_BEGIN:\n> + return \"BEGIN\";\n> ...\n> + default:\n> + elog(ERROR, \"invalid logical replication message type \\\"%c\\\"\", action);\n> + }\n> + return NULL; /* keep compiler quiet */\n> +}\n\nFixed.\n\n>\n>\n> 3)\n> Do we need to invoke set_apply_error_context_xact() in the function\n> apply_handle_stream_prepare() to save the xid and timestamp ?\n\nYes. I think that v8-0001 patch already set xid and timestamp just\nafter parsing stream_prepare message. You meant it's not necessary?\n\nI'll submit the updated patches soon.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 17 Aug 2021 14:00:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 5:30 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 16, 2021 at 5:54 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > Here is another comment:\n> >\n> > +char *\n> > +logicalrep_message_type(LogicalRepMsgType action)\n> > +{\n> > ...\n> > + case LOGICAL_REP_MSG_STREAM_END:\n> > + return \"STREAM END\";\n> > ...\n> >\n> > I think most the existing code use \"STREAM STOP\" to describe the\n> > LOGICAL_REP_MSG_STREAM_END message, is it better to return \"STREAM STOP\" in\n> > function logicalrep_message_type() too ?\n> >\n>\n> +1\n> I think you're right, it should be \"STREAM STOP\" in that case.\n\nIt's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\nplaces such as elog messages, a callback name, and source code\ncomments. As far as I have found there are two places where we’re\nusing \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\ndoc/src/sgml/protocol.sgml. Isn't it better to fix these\ninconsistencies in the first place? I think “STREAM STOP” would be\nmore appropriate.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 17 Aug 2021 14:16:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Aug 16, 2021 at 5:30 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Aug 16, 2021 at 5:54 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > Here is another comment:\n> > >\n> > > +char *\n> > > +logicalrep_message_type(LogicalRepMsgType action)\n> > > +{\n> > > ...\n> > > + case LOGICAL_REP_MSG_STREAM_END:\n> > > + return \"STREAM END\";\n> > > ...\n> > >\n> > > I think most the existing code use \"STREAM STOP\" to describe the\n> > > LOGICAL_REP_MSG_STREAM_END message, is it better to return \"STREAM STOP\" in\n> > > function logicalrep_message_type() too ?\n> > >\n> >\n> > +1\n> > I think you're right, it should be \"STREAM STOP\" in that case.\n>\n> It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> places such as elog messages, a callback name, and source code\n> comments. As far as I have found there are two places where we’re\n> using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> inconsistencies in the first place? I think “STREAM STOP” would be\n> more appropriate.\n>\n\nI think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\nseems to be a bit better because of the value 'E' we use for it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Aug 2021 11:05:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, August 12, 2021 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>\r\n> I've attached the updated patches. FYI I've included the patch\r\n> (v8-0005) that fixes the assertion failure during shared fileset\r\n> cleanup to make cfbot tests happy.\r\n\r\n\r\nHi\r\n\r\nThanks for your patch. I met a problem when using it. The log is not what I expected in some cases, but in streaming mode, they work well.\r\n\r\nFor example:\r\n------publisher------\r\ncreate table test (a int primary key, b varchar);\r\ncreate publication pub for table test;\r\n\r\n------subscriber------\r\ncreate table test (a int primary key, b varchar);\r\ninsert into test values (10000);\r\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub with(streaming=on);\r\n\r\n------publisher------\r\ninsert into test values (10000);\r\n\r\nSubscriber log:\r\n2021-08-17 14:24:43.415 CST [3630341] ERROR: duplicate key value violates unique constraint \"test_pkey\"\r\n2021-08-17 14:24:43.415 CST [3630341] DETAIL: Key (a)=(10000) already exists.\r\n\r\nIt didn't give more context info generated by apply_error_callback function.\r\n\r\nIn streaming mode(which worked as I expected):\r\n------publisher------\r\nINSERT INTO test SELECT i, md5(i::text) FROM generate_series(1, 10000) s(i);\r\n\r\nSubscriber log:\r\n2021-08-17 14:26:26.521 CST [3630510] ERROR: duplicate key value violates unique constraint \"test_pkey\"\r\n2021-08-17 14:26:26.521 CST [3630510] DETAIL: Key (a)=(10000) already exists.\r\n2021-08-17 14:26:26.521 CST [3630510] CONTEXT: processing remote data during \"INSERT\" for replication target relation \"public.test\" in transaction id 710 with commit timestamp 2021-08-17 14:26:26.403214+08\r\n\r\nI looked into it briefly and thought it was related to some code in\r\napply_dispatch function. It set callback when apply_error_callback_arg.command\r\nis 0, and reset the callback back at the end of the function. But\r\napply_error_callback_arg.command was not reset to 0, so it won't set callback\r\nwhen calling apply_dispatch function next time.\r\n\r\nI tried to fix it with the following change, thoughts?\r\n\r\n@@ -2455,7 +2455,10 @@ apply_dispatch(StringInfo s)\r\n\r\n /* Pop the error context stack */\r\n if (set_callback)\r\n+ {\r\n error_context_stack = errcallback.previous;\r\n+ apply_error_callback_arg.command = 0;\r\n+ }\r\n }\r\n\r\nBesides, if we make the changes like this, do we still need to reset\r\napply_error_callback_arg.command in reset_apply_error_context_info function?\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Tue, 17 Aug 2021 08:21:41 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 17, 2021 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Aug 16, 2021 at 5:30 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Mon, Aug 16, 2021 at 5:54 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > Here is another comment:\n> > > >\n> > > > +char *\n> > > > +logicalrep_message_type(LogicalRepMsgType action)\n> > > > +{\n> > > > ...\n> > > > + case LOGICAL_REP_MSG_STREAM_END:\n> > > > + return \"STREAM END\";\n> > > > ...\n> > > >\n> > > > I think most the existing code use \"STREAM STOP\" to describe the\n> > > > LOGICAL_REP_MSG_STREAM_END message, is it better to return \"STREAM STOP\" in\n> > > > function logicalrep_message_type() too ?\n> > > >\n> > >\n> > > +1\n> > > I think you're right, it should be \"STREAM STOP\" in that case.\n> >\n> > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > places such as elog messages, a callback name, and source code\n> > comments. As far as I have found there are two places where we’re\n> > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > inconsistencies in the first place? I think “STREAM STOP” would be\n> > more appropriate.\n> >\n>\n> I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> seems to be a bit better because of the value 'E' we use for it.\n\nBut I think we don't care about the actual value of\nLOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n'E'?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 18 Aug 2021 10:23:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > places such as elog messages, a callback name, and source code\n> > > comments. As far as I have found there are two places where we’re\n> > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > more appropriate.\n> > >\n> >\n> > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > seems to be a bit better because of the value 'E' we use for it.\n>\n> But I think we don't care about the actual value of\n> LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> 'E'?\n>\n\nTrue, but here we are trying to be consistent with other enum values\nwhere we try to use the first letter of the last word (which is E in\nthis case). I can see there are other cases where we are not\nconsistent so it won't be a big deal if we won't be consistent here. I\nam neutral on this one, so, if you feel using STREAM_STOP would be\nbetter from a code readability perspective then that is fine.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Aug 2021 08:32:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > > places such as elog messages, a callback name, and source code\n> > > > comments. As far as I have found there are two places where we’re\n> > > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > > more appropriate.\n> > > >\n> > >\n> > > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > > seems to be a bit better because of the value 'E' we use for it.\n> >\n> > But I think we don't care about the actual value of\n> > LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> > 'E'?\n> >\n>\n> True, but here we are trying to be consistent with other enum values\n> where we try to use the first letter of the last word (which is E in\n> this case). I can see there are other cases where we are not\n> consistent so it won't be a big deal if we won't be consistent here. I\n> am neutral on this one, so, if you feel using STREAM_STOP would be\n> better from a code readability perspective then that is fine.\n\nIn addition of a code readability, there is a description in the doc\nthat mentions \"Stream End\" but we describe \"Stream Stop\" in the later\ndescription, which seems a bug in the doc to me:\n\nThe following messages (Stream Start, Stream End, Stream Commit, and\nStream Abort) are available since protocol version 2.\n\n</para>\n\n(snip)\n\n<varlistentry>\n<term>\nStream Stop\n</term>\n<listitem>\n\nPerhaps it's better to hear other opinions too, but I've attached the\npatch. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 18 Aug 2021 13:29:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > > > places such as elog messages, a callback name, and source code\n> > > > > comments. As far as I have found there are two places where we’re\n> > > > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > > > more appropriate.\n> > > > >\n> > > >\n> > > > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > > > seems to be a bit better because of the value 'E' we use for it.\n> > >\n> > > But I think we don't care about the actual value of\n> > > LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> > > 'E'?\n> > >\n> >\n> > True, but here we are trying to be consistent with other enum values\n> > where we try to use the first letter of the last word (which is E in\n> > this case). I can see there are other cases where we are not\n> > consistent so it won't be a big deal if we won't be consistent here. I\n> > am neutral on this one, so, if you feel using STREAM_STOP would be\n> > better from a code readability perspective then that is fine.\n>\n> In addition of a code readability, there is a description in the doc\n> that mentions \"Stream End\" but we describe \"Stream Stop\" in the later\n> description, which seems a bug in the doc to me:\n>\n\nDoc changes looks good to me. But, I have question for code change:\n\n--- a/src/include/replication/logicalproto.h\n+++ b/src/include/replication/logicalproto.h\n@@ -65,7 +65,7 @@ typedef enum LogicalRepMsgType\n LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',\n LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',\n LOGICAL_REP_MSG_STREAM_START = 'S',\n- LOGICAL_REP_MSG_STREAM_END = 'E',\n+ LOGICAL_REP_MSG_STREAM_STOP = 'E',\n LOGICAL_REP_MSG_STREAM_COMMIT = 'c',\n\nAs this is changing the enum name and if any extension (logical\nreplication extension) has started using it then they would require a\nchange. As this is the latest change in PG-14, so it might be okay but\nOTOH, as this is just a code readability change, shall we do it only\nfor PG-15?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 18 Aug 2021 11:44:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tues, Aug 17, 2021 1:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Mon, Aug 16, 2021 at 3:59 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> > 3)\r\n> > Do we need to invoke set_apply_error_context_xact() in the function\r\n> > apply_handle_stream_prepare() to save the xid and timestamp ?\r\n> \r\n> Yes. I think that v8-0001 patch already set xid and timestamp just after parsing\r\n> stream_prepare message. You meant it's not necessary?\r\n\r\nSorry, I thought of something wrong, please ignore the above comment.\r\n\r\n> \r\n> I'll submit the updated patches soon.\r\n\r\nI was thinking about the place to set the errcallback.callback.\r\n\r\napply_dispatch(StringInfo s)\r\n {\r\n \tLogicalRepMsgType action = pq_getmsgbyte(s);\r\n+\tErrorContextCallback errcallback;\r\n+\tbool\t\tset_callback = false;\r\n+\r\n+\t/*\r\n+\t * Push apply error context callback if not yet. Other fields will be\r\n+\t * filled during applying the change. Since this function can be called\r\n+\t * recursively when applying spooled changes, we set the callback only\r\n+\t * once.\r\n+\t */\r\n+\tif (apply_error_callback_arg.command == 0)\r\n+\t{\r\n+\t\terrcallback.callback = apply_error_callback;\r\n+\t\terrcallback.previous = error_context_stack;\r\n+\t\terror_context_stack = &errcallback;\r\n+\t\tset_callback = true;\r\n+\t}\r\n...\r\n+\t/* Pop the error context stack */\r\n+\tif (set_callback)\r\n+\t\terror_context_stack = errcallback.previous;\r\n\r\nIt seems we can put the above code in the function LogicalRepApplyLoop()\r\naround invoking apply_dispatch(), and in that approach we don't need to worry\r\nabout the recursively case. What do you think ?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 18 Aug 2021 06:33:15 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 3:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Aug 18, 2021 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > > > > places such as elog messages, a callback name, and source code\n> > > > > > comments. As far as I have found there are two places where we’re\n> > > > > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > > > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > > > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > > > > more appropriate.\n> > > > > >\n> > > > >\n> > > > > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > > > > seems to be a bit better because of the value 'E' we use for it.\n> > > >\n> > > > But I think we don't care about the actual value of\n> > > > LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> > > > 'E'?\n> > > >\n> > >\n> > > True, but here we are trying to be consistent with other enum values\n> > > where we try to use the first letter of the last word (which is E in\n> > > this case). I can see there are other cases where we are not\n> > > consistent so it won't be a big deal if we won't be consistent here. I\n> > > am neutral on this one, so, if you feel using STREAM_STOP would be\n> > > better from a code readability perspective then that is fine.\n> >\n> > In addition of a code readability, there is a description in the doc\n> > that mentions \"Stream End\" but we describe \"Stream Stop\" in the later\n> > description, which seems a bug in the doc to me:\n> >\n>\n> Doc changes looks good to me. But, I have question for code change:\n>\n> --- a/src/include/replication/logicalproto.h\n> +++ b/src/include/replication/logicalproto.h\n> @@ -65,7 +65,7 @@ typedef enum LogicalRepMsgType\n> LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',\n> LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',\n> LOGICAL_REP_MSG_STREAM_START = 'S',\n> - LOGICAL_REP_MSG_STREAM_END = 'E',\n> + LOGICAL_REP_MSG_STREAM_STOP = 'E',\n> LOGICAL_REP_MSG_STREAM_COMMIT = 'c',\n>\n> As this is changing the enum name and if any extension (logical\n> replication extension) has started using it then they would require a\n> change. As this is the latest change in PG-14, so it might be okay but\n> OTOH, as this is just a code readability change, shall we do it only\n> for PG-15?\n\nI think that the doc changes could be backpatched to PG14 but I think\nwe should do the code change only for PG15.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 18 Aug 2021 15:41:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 2:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Wed, Aug 18, 2021 at 3:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Aug 18, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > In addition of a code readability, there is a description in the doc\r\n> > > that mentions \"Stream End\" but we describe \"Stream Stop\" in the\r\n> > > later description, which seems a bug in the doc to me:\r\n> > >\r\n> >\r\n> > Doc changes looks good to me. But, I have question for code change:\r\n> >\r\n> > --- a/src/include/replication/logicalproto.h\r\n> > +++ b/src/include/replication/logicalproto.h\r\n> > @@ -65,7 +65,7 @@ typedef enum LogicalRepMsgType\r\n> > LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',\r\n> > LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',\r\n> > LOGICAL_REP_MSG_STREAM_START = 'S',\r\n> > - LOGICAL_REP_MSG_STREAM_END = 'E',\r\n> > + LOGICAL_REP_MSG_STREAM_STOP = 'E',\r\n> > LOGICAL_REP_MSG_STREAM_COMMIT = 'c',\r\n> >\r\n> > As this is changing the enum name and if any extension (logical\r\n> > replication extension) has started using it then they would require a\r\n> > change. As this is the latest change in PG-14, so it might be okay but\r\n> > OTOH, as this is just a code readability change, shall we do it only\r\n> > for PG-15?\r\n> \r\n> I think that the doc changes could be backpatched to PG14 but I think we\r\n> should do the code change only for PG15.\r\n\r\n+1, and the patch looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 18 Aug 2021 07:19:07 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 3:33 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tues, Aug 17, 2021 1:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Mon, Aug 16, 2021 at 3:59 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > > 3)\n> > > Do we need to invoke set_apply_error_context_xact() in the function\n> > > apply_handle_stream_prepare() to save the xid and timestamp ?\n> >\n> > Yes. I think that v8-0001 patch already set xid and timestamp just after parsing\n> > stream_prepare message. You meant it's not necessary?\n>\n> Sorry, I thought of something wrong, please ignore the above comment.\n>\n> >\n> > I'll submit the updated patches soon.\n>\n> I was thinking about the place to set the errcallback.callback.\n>\n> apply_dispatch(StringInfo s)\n> {\n> LogicalRepMsgType action = pq_getmsgbyte(s);\n> + ErrorContextCallback errcallback;\n> + bool set_callback = false;\n> +\n> + /*\n> + * Push apply error context callback if not yet. Other fields will be\n> + * filled during applying the change. Since this function can be called\n> + * recursively when applying spooled changes, we set the callback only\n> + * once.\n> + */\n> + if (apply_error_callback_arg.command == 0)\n> + {\n> + errcallback.callback = apply_error_callback;\n> + errcallback.previous = error_context_stack;\n> + error_context_stack = &errcallback;\n> + set_callback = true;\n> + }\n> ...\n> + /* Pop the error context stack */\n> + if (set_callback)\n> + error_context_stack = errcallback.previous;\n>\n> It seems we can put the above code in the function LogicalRepApplyLoop()\n> around invoking apply_dispatch(), and in that approach we don't need to worry\n> about the recursively case. What do you think ?\n\nThank you for the comment!\n\nI think you're right. Maybe we can set the callback before entering to\nthe main loop and pop it after breaking from it. It would also fix the\nproblem reported by Tang[1]. But one thing we need to note that since\nwe want to reset apply_error_callback_arg.command at the end of\napply_dispatch() (otherwise we could end up setting the apply error\ncontext to an irrelevant error such as network error), when\napply_dispatch() is called recursively probably we need to save the\napply_error_callback_arg.command before setting the new command and\nthen revert back to the saved command. Is that right?\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/OS0PR01MB6113E5BC24922A2D05D16051FBFE9%40OS0PR01MB6113.jpnprd01.prod.outlook.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 18 Aug 2021 17:39:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 5:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 3:33 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Aug 17, 2021 1:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > On Mon, Aug 16, 2021 at 3:59 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n> > > > 3)\n> > > > Do we need to invoke set_apply_error_context_xact() in the function\n> > > > apply_handle_stream_prepare() to save the xid and timestamp ?\n> > >\n> > > Yes. I think that v8-0001 patch already set xid and timestamp just after parsing\n> > > stream_prepare message. You meant it's not necessary?\n> >\n> > Sorry, I thought of something wrong, please ignore the above comment.\n> >\n> > >\n> > > I'll submit the updated patches soon.\n> >\n> > I was thinking about the place to set the errcallback.callback.\n> >\n> > apply_dispatch(StringInfo s)\n> > {\n> > LogicalRepMsgType action = pq_getmsgbyte(s);\n> > + ErrorContextCallback errcallback;\n> > + bool set_callback = false;\n> > +\n> > + /*\n> > + * Push apply error context callback if not yet. Other fields will be\n> > + * filled during applying the change. Since this function can be called\n> > + * recursively when applying spooled changes, we set the callback only\n> > + * once.\n> > + */\n> > + if (apply_error_callback_arg.command == 0)\n> > + {\n> > + errcallback.callback = apply_error_callback;\n> > + errcallback.previous = error_context_stack;\n> > + error_context_stack = &errcallback;\n> > + set_callback = true;\n> > + }\n> > ...\n> > + /* Pop the error context stack */\n> > + if (set_callback)\n> > + error_context_stack = errcallback.previous;\n> >\n> > It seems we can put the above code in the function LogicalRepApplyLoop()\n> > around invoking apply_dispatch(), and in that approach we don't need to worry\n> > about the recursively case. What do you think ?\n>\n> Thank you for the comment!\n>\n> I think you're right. Maybe we can set the callback before entering to\n> the main loop and pop it after breaking from it. It would also fix the\n> problem reported by Tang[1]. But one thing we need to note that since\n> we want to reset apply_error_callback_arg.command at the end of\n> apply_dispatch() (otherwise we could end up setting the apply error\n> context to an irrelevant error such as network error), when\n> apply_dispatch() is called recursively probably we need to save the\n> apply_error_callback_arg.command before setting the new command and\n> then revert back to the saved command. Is that right?\n\nI've attached the updated version patches that incorporated all\ncomments I got so far unless I'm missing something. Please review\nthem.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 19 Aug 2021 10:47:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 5:21 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Thursday, August 12, 2021 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated patches. FYI I've included the patch\n> > (v8-0005) that fixes the assertion failure during shared fileset\n> > cleanup to make cfbot tests happy.\n>\n>\n> Hi\n>\n> Thanks for your patch. I met a problem when using it. The log is not what I expected in some cases, but in streaming mode, they work well.\n>\n> For example:\n> ------publisher------\n> create table test (a int primary key, b varchar);\n> create publication pub for table test;\n>\n> ------subscriber------\n> create table test (a int primary key, b varchar);\n> insert into test values (10000);\n> create subscription sub connection 'dbname=postgres port=5432' publication pub with(streaming=on);\n>\n> ------publisher------\n> insert into test values (10000);\n>\n> Subscriber log:\n> 2021-08-17 14:24:43.415 CST [3630341] ERROR: duplicate key value violates unique constraint \"test_pkey\"\n> 2021-08-17 14:24:43.415 CST [3630341] DETAIL: Key (a)=(10000) already exists.\n>\n> It didn't give more context info generated by apply_error_callback function.\n\nThank you for reporting the issue! This issue must be fixed in the\nlatest (v9) patches I've just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoCH4Jwn_NkJhvS6W5bZJKSaAYnC9inXqMJc6dLLvhvTQg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 10:53:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 18, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 3:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 18, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 18, 2021 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > > > > > places such as elog messages, a callback name, and source code\n> > > > > > > comments. As far as I have found there are two places where we’re\n> > > > > > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > > > > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > > > > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > > > > > more appropriate.\n> > > > > > >\n> > > > > >\n> > > > > > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > > > > > seems to be a bit better because of the value 'E' we use for it.\n> > > > >\n> > > > > But I think we don't care about the actual value of\n> > > > > LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> > > > > 'E'?\n> > > > >\n> > > >\n> > > > True, but here we are trying to be consistent with other enum values\n> > > > where we try to use the first letter of the last word (which is E in\n> > > > this case). I can see there are other cases where we are not\n> > > > consistent so it won't be a big deal if we won't be consistent here. I\n> > > > am neutral on this one, so, if you feel using STREAM_STOP would be\n> > > > better from a code readability perspective then that is fine.\n> > >\n> > > In addition of a code readability, there is a description in the doc\n> > > that mentions \"Stream End\" but we describe \"Stream Stop\" in the later\n> > > description, which seems a bug in the doc to me:\n> > >\n> >\n> > Doc changes looks good to me. But, I have question for code change:\n> >\n> > --- a/src/include/replication/logicalproto.h\n> > +++ b/src/include/replication/logicalproto.h\n> > @@ -65,7 +65,7 @@ typedef enum LogicalRepMsgType\n> > LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',\n> > LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',\n> > LOGICAL_REP_MSG_STREAM_START = 'S',\n> > - LOGICAL_REP_MSG_STREAM_END = 'E',\n> > + LOGICAL_REP_MSG_STREAM_STOP = 'E',\n> > LOGICAL_REP_MSG_STREAM_COMMIT = 'c',\n> >\n> > As this is changing the enum name and if any extension (logical\n> > replication extension) has started using it then they would require a\n> > change. As this is the latest change in PG-14, so it might be okay but\n> > OTOH, as this is just a code readability change, shall we do it only\n> > for PG-15?\n>\n> I think that the doc changes could be backpatched to PG14 but I think\n> we should do the code change only for PG15.\n>\n\nOkay, done that way!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Aug 2021 10:36:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated version patches that incorporated all\n> comments I got so far unless I'm missing something. Please review\n> them.\n>\n\nThe comments I made on Aug 16 and Aug 17 for the v8-0001 patch don't\nseem to be addressed in the v9-0001 patch (if you disagree with them\nthat's fine, but best to say so and why).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Aug 2021 15:18:00 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 16, 2021 at 8:33 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 16, 2021 at 6:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Therefore, perhaps a message like \"... in transaction 740 with commit\n> > timestamp 2021-08-10 14:44:38.058174+05:30\" is better in terms of\n> > consistency with other messages?\n> >\n>\n> Yes, I think that would be more consistent.\n>\n> On another note, for the 0001 patch, the elog ERROR at the bottom of\n> the logicalrep_message_type() function seems to assume that the\n> unrecognized \"action\" is a printable character (with its use of %c)\n> and also that the character is meaningful to the user in some way.\n> But given that the compiler normally warns of an unhandled enum value\n> when switching on an enum, such an error would most likely be when\n> action is some int value that wouldn't be meaningful to the user (as\n> it wouldn't be one of the LogicalRepMsgType enum values).\n> I therefore think it would be better to use %d in that ERROR:\n>\n> i.e.\n>\n> + elog(ERROR, \"invalid logical replication message type %d\", action);\n>\n> Similar comments apply to the apply_dispatch() function (and I realise\n> it used %c before your patch).\n>\n\nThe action in apply_dispatch is always a single byte so not sure why\nwe need %d here. Also, if it is used as %c before the patch then I\nthink it is better not to change it in this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Aug 2021 12:21:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 2:18 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Aug 19, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated version patches that incorporated all\n> > comments I got so far unless I'm missing something. Please review\n> > them.\n> >\n>\n> The comments I made on Aug 16 and Aug 17 for the v8-0001 patch don't\n> seem to be addressed in the v9-0001 patch (if you disagree with them\n> that's fine, but best to say so and why).\n\nOops, sorry about that. I had just missed those comments. Let's\ndiscuss them and I'll incorporate those comments in the v10 patch if\nwe agree with the changes.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 16:09:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 3:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 16, 2021 at 8:33 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Aug 16, 2021 at 6:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Therefore, perhaps a message like \"... in transaction 740 with commit\n> > > timestamp 2021-08-10 14:44:38.058174+05:30\" is better in terms of\n> > > consistency with other messages?\n> > >\n> >\n> > Yes, I think that would be more consistent.\n> >\n> > On another note, for the 0001 patch, the elog ERROR at the bottom of\n> > the logicalrep_message_type() function seems to assume that the\n> > unrecognized \"action\" is a printable character (with its use of %c)\n> > and also that the character is meaningful to the user in some way.\n> > But given that the compiler normally warns of an unhandled enum value\n> > when switching on an enum, such an error would most likely be when\n> > action is some int value that wouldn't be meaningful to the user (as\n> > it wouldn't be one of the LogicalRepMsgType enum values).\n> > I therefore think it would be better to use %d in that ERROR:\n> >\n> > i.e.\n> >\n> > + elog(ERROR, \"invalid logical replication message type %d\", action);\n> >\n> > Similar comments apply to the apply_dispatch() function (and I realise\n> > it used %c before your patch).\n> >\n>\n> The action in apply_dispatch is always a single byte so not sure why\n> we need %d here. Also, if it is used as %c before the patch then I\n> think it is better not to change it in this patch.\n\nYes, I agree that it's better no to change it in this patch since %c\nis used before the patch. Also I can see some error messages in\nwalsender.c also use %c. If we conclude that it should use %d instead\nof %c, we can change all of them as another patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 16:10:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 17, 2021 at 12:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Aug 12, 2021 at 3:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated patches. FYI I've included the patch\n> > (v8-0005) that fixes the assertion failure during shared fileset\n> > cleanup to make cfbot tests happy.\n> >\n>\n\nThank you for the comment!\n\n> Another comment on the 0001 patch: as there is now a mix of setting\n> \"apply_error_callback_arg\" members directly and also through inline\n> functions, it might look better to have it done consistently with\n> functions having prototypes something like the following:\n>\n> static inline void set_apply_error_context_rel(LogicalRepRelMapEntry *rel);\n> static inline void reset_apply_error_context_rel(void);\n> static inline void set_apply_error_context_attnum(int remote_attnum);\n\nIt might look consistent, but if we do that, we will end up needing\nfunctions every field to update when we add new fields to the struct\nin the future?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 16:16:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 4:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> The action in apply_dispatch is always a single byte so not sure why\n> we need %d here. Also, if it is used as %c before the patch then I\n> think it is better not to change it in this patch.\n>\n\nAs I explained before, the point is that all the known message types\nare handled in the switch statement cases (and you will get a compiler\nwarning if you miss one of the enum values in the switch cases).\nSo anything NOT handled in the switch, will be some OTHER value (and\nnote that any \"int\" value can be assigned to an enum).\nWho says its value will be a printable character (%c) in this case?\nAnd even if it is printable, will it help?\nI think in this case it would be better to know the exact value of the\nbyte (\"%d\" or \"0x%x\" etc.), not the character equivalent.\nI'm OK if it's done as a separate patch.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 19 Aug 2021 17:29:50 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 2:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 18, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Aug 18, 2021 at 3:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 18, 2021 at 10:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Aug 18, 2021 at 12:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Aug 18, 2021 at 6:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Aug 17, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > > It's right that we use \"STREAM STOP\" rather than \"STREAM END\" in many\n> > > > > > > > places such as elog messages, a callback name, and source code\n> > > > > > > > comments. As far as I have found there are two places where we’re\n> > > > > > > > using \"STREAM STOP\": LOGICAL_REP_MSG_STREAM_END and a description in\n> > > > > > > > doc/src/sgml/protocol.sgml. Isn't it better to fix these\n> > > > > > > > inconsistencies in the first place? I think “STREAM STOP” would be\n> > > > > > > > more appropriate.\n> > > > > > > >\n> > > > > > >\n> > > > > > > I think keeping STREAM_END in the enum 'LOGICAL_REP_MSG_STREAM_END'\n> > > > > > > seems to be a bit better because of the value 'E' we use for it.\n> > > > > >\n> > > > > > But I think we don't care about the actual value of\n> > > > > > LOGICAL_REP_MSG_STREAM_END since we use the enum value rather than\n> > > > > > 'E'?\n> > > > > >\n> > > > >\n> > > > > True, but here we are trying to be consistent with other enum values\n> > > > > where we try to use the first letter of the last word (which is E in\n> > > > > this case). I can see there are other cases where we are not\n> > > > > consistent so it won't be a big deal if we won't be consistent here. 
I\n> > > > > am neutral on this one, so, if you feel using STREAM_STOP would be\n> > > > > better from a code readability perspective then that is fine.\n> > > >\n> > > > In addition of a code readability, there is a description in the doc\n> > > > that mentions \"Stream End\" but we describe \"Stream Stop\" in the later\n> > > > description, which seems a bug in the doc to me:\n> > > >\n> > >\n> > > Doc changes looks good to me. But, I have question for code change:\n> > >\n> > > --- a/src/include/replication/logicalproto.h\n> > > +++ b/src/include/replication/logicalproto.h\n> > > @@ -65,7 +65,7 @@ typedef enum LogicalRepMsgType\n> > > LOGICAL_REP_MSG_COMMIT_PREPARED = 'K',\n> > > LOGICAL_REP_MSG_ROLLBACK_PREPARED = 'r',\n> > > LOGICAL_REP_MSG_STREAM_START = 'S',\n> > > - LOGICAL_REP_MSG_STREAM_END = 'E',\n> > > + LOGICAL_REP_MSG_STREAM_STOP = 'E',\n> > > LOGICAL_REP_MSG_STREAM_COMMIT = 'c',\n> > >\n> > > As this is changing the enum name and if any extension (logical\n> > > replication extension) has started using it then they would require a\n> > > change. As this is the latest change in PG-14, so it might be okay but\n> > > OTOH, as this is just a code readability change, shall we do it only\n> > > for PG-15?\n> >\n> > I think that the doc changes could be backpatched to PG14 but I think\n> > we should do the code change only for PG15.\n> >\n>\n> Okay, done that way!\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 16:50:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 9:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached the updated version patches that incorporated all comments I\r\n> got so far unless I'm missing something. Please review them.\r\n\r\nThanks for the new version patches.\r\nThe v9-0001 patch looks good to me and I will start to review other patches.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 19 Aug 2021 09:04:22 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 17, 2021 at 12:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n>\n> > Another comment on the 0001 patch: as there is now a mix of setting\n> > \"apply_error_callback_arg\" members directly and also through inline\n> > functions, it might look better to have it done consistently with\n> > functions having prototypes something like the following:\n> >\n> > static inline void set_apply_error_context_rel(LogicalRepRelMapEntry *rel);\n> > static inline void reset_apply_error_context_rel(void);\n> > static inline void set_apply_error_context_attnum(int remote_attnum);\n>\n> It might look consistent, but if we do that, we will end up needing\n> functions every field to update when we add new fields to the struct\n> in the future?\n>\n\nYeah, I also think it is too much, but we can add comments where ever\nwe set the information for error callback. I see it is missing when\nthe patch is setting remote_attnum, see similar other places and add\ncomments if already not there.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 19 Aug 2021 17:44:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 19, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 17, 2021 at 12:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> >\n> > > Another comment on the 0001 patch: as there is now a mix of setting\n> > > \"apply_error_callback_arg\" members directly and also through inline\n> > > functions, it might look better to have it done consistently with\n> > > functions having prototypes something like the following:\n> > >\n> > > static inline void set_apply_error_context_rel(LogicalRepRelMapEntry *rel);\n> > > static inline void reset_apply_error_context_rel(void);\n> > > static inline void set_apply_error_context_attnum(int remote_attnum);\n> >\n> > It might look consistent, but if we do that, we will end up needing\n> > functions every field to update when we add new fields to the struct\n> > in the future?\n> >\n>\n> Yeah, I also think it is too much, but we can add comments where ever\n> we set the information for error callback. I see it is missing when\n> the patch is setting remote_attnum, see similar other places and add\n> comments if already not there.\n\nAgred. Will add comments in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 19 Aug 2021 22:09:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "> On Thursday, August 19, 2021 9:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n>\r\n> Thank you for reporting the issue! This issue must be fixed in the\r\n> latest (v9) patches I've just submitted[1].\r\n> \r\n\r\nThanks for your patch.\r\nI've confirmed the issue is fixed as you said.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Fri, 20 Aug 2021 09:14:34 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Aug 20, 2021 at 6:14 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> > On Thursday, August 19, 2021 9:53 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for reporting the issue! This issue must be fixed in the\n> > latest (v9) patches I've just submitted[1].\n> >\n>\n> Thanks for your patch.\n> I've confirmed the issue is fixed as you said.\n\nThanks for your confirmation!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 23 Aug 2021 11:53:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 19, 2021 at 10:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Aug 19, 2021 at 9:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 19, 2021 at 12:47 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 17, 2021 at 12:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > >\n> > >\n> > > > Another comment on the 0001 patch: as there is now a mix of setting\n> > > > \"apply_error_callback_arg\" members directly and also through inline\n> > > > functions, it might look better to have it done consistently with\n> > > > functions having prototypes something like the following:\n> > > >\n> > > > static inline void set_apply_error_context_rel(LogicalRepRelMapEntry *rel);\n> > > > static inline void reset_apply_error_context_rel(void);\n> > > > static inline void set_apply_error_context_attnum(int remote_attnum);\n> > >\n> > > It might look consistent, but if we do that, we will end up needing\n> > > functions every field to update when we add new fields to the struct\n> > > in the future?\n> > >\n> >\n> > Yeah, I also think it is too much, but we can add comments where ever\n> > we set the information for error callback. I see it is missing when\n> > the patch is setting remote_attnum, see similar other places and add\n> > comments if already not there.\n>\n> Agred. Will add comments in the next version patch.\n\nI've attached updated patches. Please review them.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 23 Aug 2021 12:09:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, August 23, 2021 11:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached updated patches. Please review them.\r\n> \r\n\r\nI tested v10-0001 patch in both streaming and no-streaming more. All tests works well.\r\n\r\nI also tried two-phase commit feature, the error context was set as expected, \r\nbut please allow me to propose a fix suggestion on the error description:\r\n\r\nCONTEXT: processing remote data during \"INSERT\" for replication target relation\r\n\"public.test\" in transaction 714 with commit timestamp 2021-08-24\r\n13:20:22.480532+08\r\n\r\nIt said \"commit timestamp\", but for 2pc feature, the timestamp could be \"prepare timestamp\" or \"rollback timestamp\", too.\r\nCould we make some change to make the error log more comprehensive?\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Tue, 24 Aug 2021 06:13:55 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 24, 2021 at 11:44 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Monday, August 23, 2021 11:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches. Please review them.\n> >\n>\n> I tested v10-0001 patch in both streaming and no-streaming more. All tests works well.\n>\n> I also tried two-phase commit feature, the error context was set as expected,\n> but please allow me to propose a fix suggestion on the error description:\n>\n> CONTEXT: processing remote data during \"INSERT\" for replication target relation\n> \"public.test\" in transaction 714 with commit timestamp 2021-08-24\n> 13:20:22.480532+08\n>\n> It said \"commit timestamp\", but for 2pc feature, the timestamp could be \"prepare timestamp\" or \"rollback timestamp\", too.\n> Could we make some change to make the error log more comprehensive?\n>\n\nI think we can write something like: (processing remote data during\n\"INSERT\" for replication target relation \"public.test\" in transaction\n714 at 2021-08-24 13:20:22.480532+08). Basically replacing \"with\ncommit timestamp\" with \"at\". This is similar to what we do\ntest_decoding module for transaction timestamp. The other idea could\nbe we print the exact operation like commit/prepare/rollback which is\nalso possible because we have that information while setting context\ninfo but that might add a bit more complexity which I don't think is\nworth it.\n\nOne more point about the v10-0001* patch: From the commit message\n\"Add logical changes details to errcontext of apply worker errors.\",\nit appears that the context will be added only for the apply worker\nbut won't it get added for tablesync worker as well during its sync\nphase (when it tries to catch up with apply worker)?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 24 Aug 2021 18:35:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Aug 24, 2021 at 10:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 24, 2021 at 11:44 AM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > On Monday, August 23, 2021 11:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached updated patches. Please review them.\n> > >\n> >\n> > I tested v10-0001 patch in both streaming and no-streaming more. All tests works well.\n> >\n> > I also tried two-phase commit feature, the error context was set as expected,\n> > but please allow me to propose a fix suggestion on the error description:\n\nThank you for the suggestion!\n\n> >\n> > CONTEXT: processing remote data during \"INSERT\" for replication target relation\n> > \"public.test\" in transaction 714 with commit timestamp 2021-08-24\n> > 13:20:22.480532+08\n> >\n> > It said \"commit timestamp\", but for 2pc feature, the timestamp could be \"prepare timestamp\" or \"rollback timestamp\", too.\n> > Could we make some change to make the error log more comprehensive?\n> >\n>\n> I think we can write something like: (processing remote data during\n> \"INSERT\" for replication target relation \"public.test\" in transaction\n> 714 at 2021-08-24 13:20:22.480532+08). Basically replacing \"with\n> commit timestamp\" with \"at\". 
This is similar to what we do\n> test_decoding module for transaction timestamp.\n\n+1\n\n> The other idea could\n> be we print the exact operation like commit/prepare/rollback which is\n> also possible because we have that information while setting context\n> info but that might add a bit more complexity which I don't think is\n> worth it.\n\nAgreed.\n\nI replaced \"with commit timestamp\" with \"at\" and rename 'commit_ts'\nfield name to 'ts'.\n\n>\n> One more point about the v10-0001* patch: From the commit message\n> \"Add logical changes details to errcontext of apply worker errors.\",\n> it appears that the context will be added only for the apply worker\n> but won't it get added for tablesync worker as well during its sync\n> phase (when it tries to catch up with apply worker)?\n\nRight. I've updated the message.\n\nAttached updated version patches. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 25 Aug 2021 13:22:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, August 25, 2021 12:22 PM Masahiko Sawada <sawada.mshk@gmail.com>wrote:\r\n> \r\n> Attached updated version patches. Please review them.\r\n> \r\n\r\nThanks for your new patch. The v11-0001 patch LGTM.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Wed, 25 Aug 2021 05:40:39 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 25, 2021 at 2:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Attached updated version patches. Please review them.\n>\n\nRegarding the v11-0001 patch, it looks OK to me, but I do have one point:\nIn apply_dispatch(), wouldn't it be better to NOT move the error\nreporting for an invalid message type into the switch as the default\ncase - because then, if you add a new message type, you won't get a\ncompiler warning (when warnings are enabled) for a missing switch\ncase, which is a handy way to alert you that the new message type\nneeds to be added as a case to the switch.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 26 Aug 2021 11:45:18 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 7:15 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Aug 25, 2021 at 2:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Attached updated version patches. Please review them.\n> >\n>\n> Regarding the v11-0001 patch, it looks OK to me, but I do have one point:\n> In apply_dispatch(), wouldn't it be better to NOT move the error\n> reporting for an invalid message type into the switch as the default\n> case - because then, if you add a new message type, you won't get a\n> compiler warning (when warnings are enabled) for a missing switch\n> case, which is a handy way to alert you that the new message type\n> needs to be added as a case to the switch.\n>\n\nDo you have any suggestions on how to achieve that without adding some\nadditional variable? I think it is not a very hard requirement as we\ndon't follow the same at other places in code.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 Aug 2021 09:21:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 7:15 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Wed, Aug 25, 2021 at 2:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Attached updated version patches. Please review them.\n> > >\n> >\n> > Regarding the v11-0001 patch, it looks OK to me, but I do have one point:\n> > In apply_dispatch(), wouldn't it be better to NOT move the error\n> > reporting for an invalid message type into the switch as the default\n> > case - because then, if you add a new message type, you won't get a\n> > compiler warning (when warnings are enabled) for a missing switch\n> > case, which is a handy way to alert you that the new message type\n> > needs to be added as a case to the switch.\n> >\n>\n> Do you have any suggestions on how to achieve that without adding some\n> additional variable? I think it is not a very hard requirement as we\n> don't follow the same at other places in code.\n\nYeah, I agree that it's a handy way to detect missing a switch case\nbut I think that we don't necessarily need it in this case. Because\nthere are many places in the code where doing similar things and when\nit comes to apply_dispatch() it's the entry function to handle the\nincoming message so it will be unlikely that we miss adding a switch\ncase until the patch gets committed. If we don't move it, we would end\nup either adding the code resetting the\napply_error_callback_arg.command to every message type, adding a flag\nindicating the message is handled and checking later, or having a big\nif statement checking if the incoming message type is valid etc.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 26 Aug 2021 13:20:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 1:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Do you have any suggestions on how to achieve that without adding some\n> additional variable? I think it is not a very hard requirement as we\n> don't follow the same at other places in code.\n>\n\nSorry, forget my suggestion, I see it's not easy to achieve it and\nstill execute the non-error-case code after the switch.\n(you'd have to use a variable set in the default case, defeating the\npurpose, or have the switch in a separate function with return for\neach case)\n\nSo the 0001 patch LGTM.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 26 Aug 2021 15:11:20 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Aug 25, 2021 12:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Attached updated version patches. Please review them.\r\n\r\nThe v11-0001 patch LGTM.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 26 Aug 2021 05:49:36 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 9:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Yeah, I agree that it's a handy way to detect missing a switch case\n> but I think that we don't necessarily need it in this case. Because\n> there are many places in the code where doing similar things and when\n> it comes to apply_dispatch() it's the entry function to handle the\n> incoming message so it will be unlikely that we miss adding a switch\n> case until the patch gets committed. If we don't move it, we would end\n> up either adding the code resetting the\n> apply_error_callback_arg.command to every message type, adding a flag\n> indicating the message is handled and checking later, or having a big\n> if statement checking if the incoming message type is valid etc.\n>\n\nI was reviewing and making minor edits to your v11-0001* patch and\nnoticed that the below parts of the code could be improved:\n1.\n+ if (errarg->rel)\n+ appendStringInfo(&buf, _(\" for replication target relation \\\"%s.%s\\\"\"),\n+ errarg->rel->remoterel.nspname,\n+ errarg->rel->remoterel.relname);\n+\n+ if (errarg->remote_attnum >= 0)\n+ appendStringInfo(&buf, _(\" column \\\"%s\\\"\"),\n+ errarg->rel->remoterel.attnames[errarg->remote_attnum]);\n\nIsn't it better if 'remote_attnum' check is inside if (errargrel)\ncheck? It will be weird to print column information without rel\ninformation and in the current code, we don't set remote_attnum\nwithout rel. The other possibility could be to have an Assert for rel\nin 'remote_attnum' check.\n\n2.\n+ /* Reset relation for error callback */\n+ apply_error_callback_arg.rel = NULL;\n+\n logicalrep_rel_close(rel, NoLock);\n\n end_replication_step();\n\nIsn't it better to reset relation info as the last thing in\napply_handle_insert/update/delete as you do for a few other\nparameters? 
There is very little chance of error from those two\nfunctions but still, it will be good if they ever throw an error and\nit might be clear for future edits in this function that this needs to\nbe set as the last thing in these functions.\n\nNote - I can take care of the above points based on whatever we agree\nwith, you don't need to send a new version for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 Aug 2021 11:39:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 11:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 9:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Yeah, I agree that it's a handy way to detect missing a switch case\n> > but I think that we don't necessarily need it in this case. Because\n> > there are many places in the code where doing similar things and when\n> > it comes to apply_dispatch() it's the entry function to handle the\n> > incoming message so it will be unlikely that we miss adding a switch\n> > case until the patch gets committed. If we don't move it, we would end\n> > up either adding the code resetting the\n> > apply_error_callback_arg.command to every message type, adding a flag\n> > indicating the message is handled and checking later, or having a big\n> > if statement checking if the incoming message type is valid etc.\n> >\n>\n> I was reviewing and making minor edits to your v11-0001* patch and\n> noticed that the below parts of the code could be improved:\n> 1.\n> + if (errarg->rel)\n> + appendStringInfo(&buf, _(\" for replication target relation \\\"%s.%s\\\"\"),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname);\n> +\n> + if (errarg->remote_attnum >= 0)\n> + appendStringInfo(&buf, _(\" column \\\"%s\\\"\"),\n> + errarg->rel->remoterel.attnames[errarg->remote_attnum]);\n>\n> Isn't it better if 'remote_attnum' check is inside if (errargrel)\n> check? It will be weird to print column information without rel\n> information and in the current code, we don't set remote_attnum\n> without rel. 
The other possibility could be to have an Assert for rel\n> in 'remote_attnum' check.\n>\n> 2.\n> + /* Reset relation for error callback */\n> + apply_error_callback_arg.rel = NULL;\n> +\n> logicalrep_rel_close(rel, NoLock);\n>\n> end_replication_step();\n>\n> Isn't it better to reset relation info as the last thing in\n> apply_handle_insert/update/delete as you do for a few other\n> parameters? There is very little chance of error from those two\n> functions but still, it will be good if they ever throw an error and\n> it might be clear for future edits in this function that this needs to\n> be set as the last thing in these functions.\n>\n\nI see that resetting it before logicalrep_rel_close has an advantage\nthat we might not accidentally access some information after close\nwhich is not there in rel. I am not sure if that is the reason you\nhave in mind for resetting it before close.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 26 Aug 2021 12:00:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 9:50 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 12:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Yeah, I agree that it's a handy way to detect missing a switch case\n> > but I think that we don't necessarily need it in this case. Because\n> > there are many places in the code where doing similar things and when\n> > it comes to apply_dispatch() it's the entry function to handle the\n> > incoming message so it will be unlikely that we miss adding a switch\n> > case until the patch gets committed. If we don't move it, we would end\n> > up either adding the code resetting the\n> > apply_error_callback_arg.command to every message type, adding a flag\n> > indicating the message is handled and checking later, or having a big\n> > if statement checking if the incoming message type is valid etc.\n> >\n>\n> I was reviewing and making minor edits to your v11-0001* patch and\n> noticed that the below parts of the code could be improved:\n\nThank you for the comments!\n\n> 1.\n> + if (errarg->rel)\n> + appendStringInfo(&buf, _(\" for replication target relation \\\"%s.%s\\\"\"),\n> + errarg->rel->remoterel.nspname,\n> + errarg->rel->remoterel.relname);\n> +\n> + if (errarg->remote_attnum >= 0)\n> + appendStringInfo(&buf, _(\" column \\\"%s\\\"\"),\n> + errarg->rel->remoterel.attnames[errarg->remote_attnum]);\n>\n> Isn't it better if 'remote_attnum' check is inside if (errargrel)\n> check? It will be weird to print column information without rel\n> information and in the current code, we don't set remote_attnum\n> without rel. 
The other possibility could be to have an Assert for rel\n> in 'remote_attnum' check.\n\nAgreed to check 'remote_attnum' inside \"if(errargrel)\".\n\n>\n> 2.\n> + /* Reset relation for error callback */\n> + apply_error_callback_arg.rel = NULL;\n> +\n> logicalrep_rel_close(rel, NoLock);\n>\n> end_replication_step();\n>\n> Isn't it better to reset relation info as the last thing in\n> apply_handle_insert/update/delete as you do for a few other\n> parameters? There is very little chance of error from those two\n> functions but still, it will be good if they ever throw an error and\n> it might be clear for future edits in this function that this needs to\n> be set as the last thing in these functions.\n\nOn Thu, Aug 26, 2021 at 3:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I see that resetting it before logicalrep_rel_close has an advantage\n> that we might not accidentally access some information after close\n> which is not there in rel. I am not sure if that is the reason you\n> have in mind for resetting it before close.\n\nYes, that's why I reset the apply_error_callback_arg.rel before\nlogicalrep_rel_close(), not at the end of the function.\n\nSince the callback function refers to apply_error_callback_arg.rel it\nstill needs to be valid when an error occurs. Moving it to the end of\nthe function is no problem for now, but if we always reset relation\ninfo as the last thing, I think that we cannot allow adding changes\nbetween setting relation info and the end of the function (i.g.,\nresetting relation info) that could lead to invalidate fields of\napply_error_callback_arg.rel (e.g, freeing a string value etc).\n\n> Note - I can take care of the above points based on whatever we agree\n> with, you don't need to send a new version for this.\n\nThanks!\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 26 Aug 2021 20:12:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 4:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > 1.\n> > + if (errarg->rel)\n> > + appendStringInfo(&buf, _(\" for replication target relation \\\"%s.%s\\\"\"),\n> > + errarg->rel->remoterel.nspname,\n> > + errarg->rel->remoterel.relname);\n> > +\n> > + if (errarg->remote_attnum >= 0)\n> > + appendStringInfo(&buf, _(\" column \\\"%s\\\"\"),\n> > + errarg->rel->remoterel.attnames[errarg->remote_attnum]);\n> >\n> > Isn't it better if 'remote_attnum' check is inside if (errargrel)\n> > check? It will be weird to print column information without rel\n> > information and in the current code, we don't set remote_attnum\n> > without rel. The other possibility could be to have an Assert for rel\n> > in 'remote_attnum' check.\n>\n> Agreed to check 'remote_attnum' inside \"if(errargrel)\".\n>\n\nOkay, changed accordingly. Additionally, I have changed the code which\nsets timestamp to (unset) when it is 0 so that it won't display the\ntimestamp in that case. I have made few other cosmetic changes in the\nattached patch. See and let me know what you think of it?\n\nNote - I have just attached the first patch here, once this is\ncommitted we can focus on others.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 26 Aug 2021 17:40:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 9:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 4:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > 1.\n> > > + if (errarg->rel)\n> > > + appendStringInfo(&buf, _(\" for replication target relation \\\"%s.%s\\\"\"),\n> > > + errarg->rel->remoterel.nspname,\n> > > + errarg->rel->remoterel.relname);\n> > > +\n> > > + if (errarg->remote_attnum >= 0)\n> > > + appendStringInfo(&buf, _(\" column \\\"%s\\\"\"),\n> > > + errarg->rel->remoterel.attnames[errarg->remote_attnum]);\n> > >\n> > > Isn't it better if 'remote_attnum' check is inside if (errargrel)\n> > > check? It will be weird to print column information without rel\n> > > information and in the current code, we don't set remote_attnum\n> > > without rel. The other possibility could be to have an Assert for rel\n> > > in 'remote_attnum' check.\n> >\n> > Agreed to check 'remote_attnum' inside \"if(errargrel)\".\n> >\n>\n> Okay, changed accordingly. Additionally, I have changed the code which\n> sets timestamp to (unset) when it is 0 so that it won't display the\n> timestamp in that case. I have made few other cosmetic changes in the\n> attached patch. See and let me know what you think of it?\n\nThank you for the patch!\n\nAgreed with these changes. The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 26 Aug 2021 21:53:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Aug 26, 2021 at 6:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 9:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Okay, changed accordingly. Additionally, I have changed the code which\n> > sets timestamp to (unset) when it is 0 so that it won't display the\n> > timestamp in that case. I have made few other cosmetic changes in the\n> > attached patch. See and let me know what you think of it?\n>\n> Thank you for the patch!\n>\n> Agreed with these changes. The patch looks good to me.\n>\n\nPushed, feel free to rebase and send the remaining patch set.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 27 Aug 2021 10:06:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Aug 27, 2021 at 1:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 26, 2021 at 6:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 9:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Okay, changed accordingly. Additionally, I have changed the code which\n> > > sets timestamp to (unset) when it is 0 so that it won't display the\n> > > timestamp in that case. I have made few other cosmetic changes in the\n> > > attached patch. See and let me know what you think of it?\n> >\n> > Thank you for the patch!\n> >\n> > Agreed with these changes. The patch looks good to me.\n> >\n>\n> Pushed, feel free to rebase and send the remaining patch set.\n\nThanks!\n\nI'll post the updated version patch on Monday.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 27 Aug 2021 20:03:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Aug 27, 2021 at 8:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Aug 27, 2021 at 1:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 26, 2021 at 6:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Aug 26, 2021 at 9:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Okay, changed accordingly. Additionally, I have changed the code which\n> > > > sets timestamp to (unset) when it is 0 so that it won't display the\n> > > > timestamp in that case. I have made few other cosmetic changes in the\n> > > > attached patch. See and let me know what you think of it?\n> > >\n> > > Thank you for the patch!\n> > >\n> > > Agreed with these changes. The patch looks good to me.\n> > >\n> >\n> > Pushed, feel free to rebase and send the remaining patch set.\n>\n> Thanks!\n>\n> I'll post the updated version patch on Monday.\n\nI've attached rebased patches. 0004 patch is not the scope of this\npatch. It's borrowed from another thread[1] to fix the assertion\nfailure for newly added tests. Please review them.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAFiTN-v-zFpmm7Ze1Dai5LJjhhNYXvK8QABs35X89WY0HDG4Ww%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 30 Aug 2021 16:06:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> I've attached rebased patches. 0004 patch is not the scope of this\n> patch. It's borrowed from another thread[1] to fix the assertion\n> failure for newly added tests. Please review them.\n>\n\nI have some initial feedback on the v12-0001 patch.\nMost of these are suggested improvements to wording, and some typo fixes.\n\n\n(0) Patch comment\n\nSuggestion to improve the patch comment:\n\nBEFORE:\nAdd pg_stat_subscription_errors statistics view.\n\nThis commits adds new system view pg_stat_logical_replication_error,\nshowing errors happening during applying logical replication changes\nas well as during performing initial table synchronization.\n\nThe subscription error entries are removed by autovacuum workers when\nthe table synchronization competed in table sync worker cases and when\ndropping the subscription in apply worker cases.\n\nIt also adds SQL function pg_stat_reset_subscription_error() to\nreset the single subscription error.\n\nAFTER:\nAdd a subscription errors statistics view \"pg_stat_subscription_errors\".\n\nThis commits adds a new system view pg_stat_logical_replication_errors,\nthat records information about any errors which occur during application\nof logical replication changes as well as during performing initial table\nsynchronization.\n\nThe subscription error entries are removed by autovacuum workers after\ntable synchronization completes in table sync worker cases and after\ndropping the subscription in apply worker cases.\n\nIt also adds an SQL function pg_stat_reset_subscription_error() to\nreset a single subscription error.\n\n\n\ndoc/src/sgml/monitoring.sgml:\n\n(1)\nBEFORE:\n+ <entry>One row per error that happened on subscription, showing\ninformation about\n+ the subscription errors.\nAFTER:\n+ <entry>One row per error that occurred on subscription,\nproviding information about\n+ each subscription error.\n\n(2)\nBEFORE:\n+ The 
<structname>pg_stat_subscription_errors</structname> view will\ncontain one\nAFTER:\n+ The <structname>pg_stat_subscription_errors</structname> view contains one\n\n\n(3)\nBEFORE:\n+ Name of the database in which the subscription is created.\nAFTER:\n+ Name of the database in which the subscription was created.\n\n\n(4)\nBEFORE:\n+ OID of the relation that the worker is processing when the\n+ error happened.\nAFTER:\n+ OID of the relation that the worker was processing when the\n+ error occurred.\n\n\n(5)\nBEFORE:\n+ Name of command being applied when the error happened. This\n+ field is always NULL if the error is reported by\n+ <literal>tablesync</literal> worker.\nAFTER:\n+ Name of command being applied when the error occurred. This\n+ field is always NULL if the error is reported by a\n+ <literal>tablesync</literal> worker.\n\n(6)\nBEFORE:\n+ Transaction ID of publisher node being applied when the error\n+ happened. This field is always NULL if the error is reported\n+ by <literal>tablesync</literal> worker.\nAFTER:\n+ Transaction ID of the publisher node being applied when the error\n+ happened. 
This field is always NULL if the error is reported\n+ by a <literal>tablesync</literal> worker.\n\n(7)\nBEFORE:\n+ Type of worker reported the error: <literal>apply</literal> or\n+ <literal>tablesync</literal>.\nAFTER:\n+ Type of worker reporting the error: <literal>apply</literal> or\n+ <literal>tablesync</literal>.\n\n\n(8)\nBEFORE:\n+ Number of times error happened on the worker.\nAFTER:\n+ Number of times the error occurred in the worker.\n\n[or \"Number of times the worker reported the error\" ?]\n\n\n(9)\nBEFORE:\n+ Time at which the last error happened.\nAFTER:\n+ Time at which the last error occurred.\n\n(10)\nBEFORE:\n+ Error message which is reported last failure time.\nAFTER:\n+ Error message which was reported at the last failure time.\n\nMaybe this should just say \"Last reported error message\" ?\n\n\n(11)\nYou shouldn't call hash_get_num_entries() on a NULL pointer.\n\nSuggest swapping lines, as shown below:\n\nBEFORE:\n+ int32 nerrors = hash_get_num_entries(subent->suberrors);\n+\n+ /* Skip this subscription if not have any errors */\n+ if (subent->suberrors == NULL)\n+ continue;\nAFTER:\n+ int32 nerrors;\n+\n+ /* Skip this subscription if not have any errors */\n+ if (subent->suberrors == NULL)\n+ continue;\n+ nerrors = hash_get_num_entries(subent->suberrors);\n\n\n(12)\nTypo: legnth -> length\n\n+ * contains the fixed-legnth error message string which is\n\n\n\nsrc/backend/postmaster/pgstat.c\n\n(13)\n\"Subscription stat entries\" hashtable is created in two different\nplaces, one with HASH_CONTEXT and the other without. 
Is this\nintentional?\nShouldn't there be a single function for creating this?\n\n\n(14)\n+ * PgStat_MsgSubscriptionPurge Sent by the autovacuum purge the subscriptions.\n\nSeems to be missing a word, is it meant to say \"Sent by the autovacuum\nto purge the subscriptions.\" ?\n\n(15)\n+ * PgStat_MsgSubscriptionErrPurge Sent by the autovacuum purge the subscription\n+ * errors.\n\nSeems to be missing a word, is it meant to say \"Sent by the autovacuum\nto purge the subscription errors.\" ?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Sep 2021 13:06:34 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> I've attached rebased patches. 0004 patch is not the scope of this\n> patch. It's borrowed from another thread[1] to fix the assertion\n> failure for newly added tests. Please review them.\n>\n\nI have a few comments on the v12-0002 patch:\n\n(1) Patch comment\n\nHas a typo and could be expressed a bit better.\n\nSuggestion:\n\nBEFORE:\nRESET command is reuiqred by follow-up commit introducing to a new\nparameter skip_xid to reset.\nAFTER:\nThe RESET parameter for ALTER SUBSCRIPTION is required by the\nfollow-up commit that introduces a new resettable subscription\nparameter \"skip_xid\".\n\n\ndoc/src/sgml/ref/alter_subscription.sgml\n\n(2)\nI don't think \"RESET\" is sufficiently described in\nalter_subscription.sgml. Just putting it under \"SET\" and changing\n\"altered\" to \"set\" doesn't explain what resetting does. It should say\nsomething about setting the parameter back to its original (default)\nvalue.\n\n\n(3)\ncase ALTER_SUBSCRIPTION_RESET_OPTIONS\n\nSome comments here would be helpful e.g. Reset the specified\nparameters back to their default values.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Sep 2021 15:55:37 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "From Mon, Aug 30, 2021 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached rebased patches. 0004 patch is not the scope of this patch. It's\r\n> borrowed from another thread[1] to fix the assertion failure for newly added\r\n> tests. Please review them.\r\n\r\nHi,\r\n\r\nI reviewed the v12-0001 patch, here are some comments:\r\n\r\n1)\r\n--- a/src/backend/utils/error/elog.c\r\n+++ b/src/backend/utils/error/elog.c\r\n@@ -1441,7 +1441,6 @@ getinternalerrposition(void)\r\n \treturn edata->internalpos;\r\n }\r\n \r\n-\r\n\r\nThis looks like an accidental change in elog.c.\r\n\r\n2)\r\n\r\n+\tTupleDescInitEntry(tupdesc, (AttrNumber) 10, \"stats_reset\",\r\n+\t\t\t\t\t TIMESTAMPTZOID, -1, 0);\r\n\r\nThe document doesn't mention the column \"stats_reset\".\r\n\r\n3)\r\n\r\n+typedef struct PgStat_StatSubErrEntry\r\n+{\r\n+\tOid\t\t\tsubrelid;\t\t/* InvalidOid if the apply worker, otherwise\r\n+\t\t\t\t\t\t\t\t * the table sync worker. hash table key. */\r\n\r\nFrom the comments of subrelid, I think one subscription only has one apply\r\nworker error entry, right? If so, I was wondering whether we can move the apply\r\nerror entry to PgStat_StatSubEntry. In that approach, we don't need to build an\r\ninner hash table when there are no table sync error entries.\r\n\r\n4)\r\nIs it possible to add some test cases to test subscription error entry deletion?\r\n\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Thu, 2 Sep 2021 08:40:56 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "From Mon, Aug 30, 2021 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached rebased patches. 0004 patch is not the scope of this \r\n> patch. It's borrowed from another thread[1] to fix the assertion \r\n> failure for newly added tests. Please review them.\r\n\r\nHi,\r\n\r\nI reviewed the 0002 patch and have a suggestion for it.\r\n\r\n+\t\t\t\tif (IsSet(opts.specified_opts, SUBOPT_SYNCHRONOUS_COMMIT))\r\n+\t\t\t\t{\r\n+\t\t\t\t\tvalues[Anum_pg_subscription_subsynccommit - 1] =\r\n+\t\t\t\t\t\tCStringGetTextDatum(\"off\");\r\n+\t\t\t\t\treplaces[Anum_pg_subscription_subsynccommit - 1] = true;\r\n+\t\t\t\t}\r\n\r\nCurrently, the patch sets the default value outside of parse_subscription_options(),\r\nbut I think it might be more standard to set the value in\r\nparse_subscription_options(). Like:\r\n\r\n\t\t\tif (!is_reset)\r\n\t\t\t{\r\n\t\t\t\t...\r\n+\t\t\t}\r\n+\t\t\telse\r\n+\t\t\t\topts->synchronous_commit = \"off\";\r\n\r\nAnd then, we can set the value like:\r\n\r\n\t\t\t\t\tvalues[Anum_pg_subscription_subsynccommit - 1] =\r\n\t\t\t\t\t\tCStringGetTextDatum(opts.synchronous_commit);\r\n\r\n\r\nBesides, instead of adding a switch case like the following:\r\n+\t\tcase ALTER_SUBSCRIPTION_RESET_OPTIONS:\r\n+\t\t\t{\r\n\r\nWe can add a bool flag (isReset) in AlterSubscriptionStmt and check the flag\r\nwhen invoking parse_subscription_options(). In this approach, the code can be\r\nshorter.\r\n\r\nAttached is a diff file based on v12-0002 which changes the code as\r\nsuggested above.\r\n\r\nBest regards,\r\nHou zj",
"msg_date": "Thu, 2 Sep 2021 11:37:04 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached rebased patches. 0004 patch is not the scope of this\n> patch. It's borrowed from another thread[1] to fix the assertion\n> failure for newly added tests. Please review them.\n>\n\nSome initial comments for the v12-0003 patch:\n\n(1) Patch comment\n\"This commit introduces another way to skip the transaction in question.\"\n\nI think it should further explain: \"This commit introduces another way\nto skip the transaction in question, other than manually updating the\nsubscriber's database or using pg_replication_origin_advance().\"\n\n\ndoc/src/sgml/logical-replication.sgml\n(2)\n\nSuggested minor update:\n\nBEFORE:\n+ transaction that conflicts with the existing data. When a conflict produce\n+ an error, it is shown in\n<structname>pg_stat_subscription_errors</structname>\n+ view as follows:\nAFTER:\n+ transaction that conflicts with the existing data. When a conflict produces\n+ an error, it is recorded in the\n<structname>pg_stat_subscription_errors</structname>\n+ view as follows:\n\n(3)\n+ found from those outputs (transaction ID 740 in the above case).\nThe transaction\n\nShouldn't it be transaction ID 716?\n\n(4)\n+ can be skipped by setting <replaceable>skip_xid</replaceable> to\nthe subscription\n\nIs it better to say here ... \"on the subscription\" ?\n\n(5)\nJust skipping a transaction could make a subscriber inconsistent, right?\n\nWould it be better as follows?\n\nBEFORE:\n+ In either way, those should be used as a last resort. They skip the whole\n+ transaction including changes that may not violate any constraint and easily\n+ make subscriber inconsistent if a user specifies the wrong transaction ID or\n+ the position of origin.\n\nAFTER:\n+ Either way, those transaction skipping methods should be used as a\nlast resort.\n+ They skip the whole transaction, including changes that may not violate any\n+ constraint. 
They may easily make the subscriber inconsistent,\nespecially if a\n+ user specifies the wrong transaction ID or the position of origin.\n\n(6)\nThe grammar is not great in the following description, so here's a\nsuggested improvement:\n\nBEFORE:\n+ incoming change or by skipping the whole transaction. This option\n+ specifies transaction ID that logical replication worker skips to\n+ apply. The logical replication worker skips all data modification\n\nAFTER:\n+ incoming changes or by skipping the whole transaction. This option\n+ specifies the ID of the transaction whose application is to\nbe skipped\n+ by the logical replication worker. The logical replication worker\n+ skips all data modification\n\n\nsrc/backend/postmaster/pgstat.c\n(7)\nBEFORE:\n+ * Tell the collector about clear the error of subscription.\nAFTER:\n+ * Tell the collector to clear the subscription error.\n\n\nsrc/backend/replication/logical/worker.c\n(8)\n+ * subscription is invalidated and* MySubscription->skipxid gets\nchanged or reset.\n\nThere is a \"*\" after \"and\".\n\n(9)\nDo these lines really need to be moved up?\n\n+ /* extract XID of the top-level transaction */\n+ stream_xid = logicalrep_read_stream_start(s, &first_segment);\n+\n\nsrc/backend/postmaster/pgstat.c\n(10)\n\n+ bool m_clear; /* clear all fields except for last_failure and\n+ * last_errmsg */\n\nI think it should say: clear all fields except for last_failure,\nlast_errmsg and stat_reset_timestamp.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Sep 2021 22:03:36 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "> On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> I've attached rebased patches.\n\nThanks for these patches, Sawada-san!\n\nThe first patch in your series, v12-0001, seems useful to me even before committing any of the rest. I would like to integrate the new pg_stat_subscription_errors view it creates into regression tests for other logical replication features under development.\n\nIn particular, it can be hard to write TAP tests that need to wait for subscriptions to catch up or fail. With your view committed, a new PostgresNode function to wait for catchup or for failure can be added, and then developers of different projects can all use that. I am attaching a version of such a function, plus some tests of your patch (since it does not appear to have any). Would you mind reviewing these and giving comments or including them in your next patch version?\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 2 Sep 2021 12:33:52 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "\n\n> On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> I've attached rebased patches. \n\nHere are some review comments:\n\nFor the v12-0002 patch:\n\nThe documentation changes for ALTER SUBSCRIPTION .. RESET look strange to me. You grouped SET and RESET together, much like sql-altertable.html has them grouped, but I don't think it flows naturally here, as the two commands do not support the same set of parameters. It might look better if you documented these separately. It might also be good to order the parameters the same, so that the differences can more quickly be seen.\n\nFor the v12-0003 patch:\n\nI believe this feature is needed, but it also seems like a very powerful foot-gun. Can we do anything to make it less likely that users will hurt themselves with this tool?\n\nI am thinking back to support calls I have attended. When a production system is down, there is often some hesitancy to perform ad-hoc operations on the database, but once the decision has been made to do so, people try to get the whole process done as quickly as possible. If multiple transactions on the publisher fail on the subscriber, they will do so in series, not in parallel. The process of clearing these errors will amount to copying the xid of each failed transaction to the ALTER SUBSCRIPTION ... SET (skip_xid = xxx) command and running it, then the next, then the next, .... Perhaps the first couple times through the process, the customer will look to see that the failure is of the same type and on the same table, but after a short time they will likely just script something to clear the rest as quickly as possible. In the heat of the moment, they may not include a check of the failure message, but merely a grep of the failing xid.\n\nIf the user could instead clear all failed transactions of the same type, that might make it less likely that they unthinkingly also skip subsequent errors of some different type. 
Perhaps something like ALTER SUBSCRIPTION ... SET (skip_failures = 'duplicate key value violates unique constraint \"test_pkey\"')? This is arguably a different feature request, and not something your patch is required to address, but I wonder how much we should limit people shooting themselves in the foot? If we built something like this using your skip_xid feature, rather than instead of your skip_xid feature, would your feature need to be modified?\n\nThe docs could use some rewording, too:\n\n+ If incoming data violates any constraints the logical replication\n+ will stop until it is resolved. \n\nIn my experience, logical replication doesn't stop, but instead goes into an infinite loop of retries.\n\n+ The resolution can be done either\n+ by changing data on the subscriber so that it doesn't conflict with\n+ incoming change or by skipping the whole transaction.\n\nI'm having trouble thinking of an example conflict where skipping a transaction would be better than writing a BEFORE INSERT trigger on the conflicting table which suppresses or redirects conflicting rows somewhere else. Particularly for larger transactions containing multiple statements, suppressing the conflicting rows using a trigger would be less messy than skipping the transaction. I think your patch adds a useful tool to the toolkit, but maybe we should mention more alternatives in the docs? Something like, \"changing the data on the subscriber so that it doesn't conflict with incoming changes, or dropping the conflicting constraint or unique index, or writing a trigger on the subscriber to suppress or redirect conflicting incoming changes, or as a last resort, by skipping the whole transaction\"?\n\nPerhaps I'm reading your phrase \"changing the data on the subscriber\" too narrowly. To me, that means running DML (either a DELETE or an UPDATE) on the existing data in the table where the conflict arises. 
These other options are DDL and do not easily come to mind when I read that phrase.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 2 Sep 2021 13:44:58 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached rebased patches. 0004 patch is not the scope of this\n> patch. It's borrowed from another thread[1] to fix the assertion\n> failure for newly added tests. Please review them.\n>\n\nBTW, these patches need rebasing (broken by recent commits, patches\n0001, 0003 and 0004 no longer apply, and it's failing in the cfbot).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 3 Sep 2021 16:46:07 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 3, 2021 at 2:15 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached rebased patches.\n> For the v12-0003 patch:\n>\n> I believe this feature is needed, but it also seems like a very powerful foot-gun. Can we do anything to make it less likely that users will hurt themselves with this tool?\n>\n\nThis won't do any more harm than what users can already do via\npg_replication_slot_advance, and the same is documented as well; see\n[1]. This will be allowed only to superusers. Its effect will be\ndocumented with a precautionary note to use it only when the other\navailable ways can't be used. Any better ideas?\n\n> I am thinking back to support calls I have attended. When a production system is down, there is often some hesitancy to perform ad-hoc operations on the database, but once the decision has been made to do so, people try to get the whole process done as quickly as possible. If multiple transactions on the publisher fail on the subscriber, they will do so in series, not in parallel.\n>\n\nThe subscriber will know only one transaction failure at a time; till\nthat is resolved, the apply won't move ahead, and it won't even know\nwhether there are other transactions that are going to fail in the\nfuture.\n\n>\n> If the user could instead clear all failed transactions of the same type, that might make it less likely that they unthinkingly also skip subsequent errors of some different type. Perhaps something like ALTER SUBSCRIPTION ... SET (skip_failures = 'duplicate key value violates unique constraint \"test_pkey\"')?\n>\n\nI think if we want, we can allow skipping a particular error via\nskip_error_code instead of via the error message, but I'm not sure\nwhether it would be better to skip a particular operation of the\ntransaction rather than the entire transaction. 
Normally, from the atomicity standpoint, a transaction can be either\ncommitted or rolled back but not partially applied, so I think it\nwould be preferable to skip the entire transaction rather than\nskipping it partially.\n\n> This is arguably a different feature request, and not something your patch is required to address, but I wonder how much we should limit people shooting themselves in the foot? If we built something like this using your skip_xid feature, rather than instead of your skip_xid feature, would your feature need to be modified?\n>\n\nSawada-San can answer better, but I don't see any problem building any\nsuch feature on top of what is currently proposed.\n\n>\n> I'm having trouble thinking of an example conflict where skipping a transaction would be better than writing a BEFORE INSERT trigger on the conflicting table which suppresses or redirects conflicting rows somewhere else. Particularly for larger transactions containing multiple statements, suppressing the conflicting rows using a trigger would be less messy than skipping the transaction. I think your patch adds a useful tool to the toolkit, but maybe we should mention more alternatives in the docs? Something like, \"changing the data on the subscriber so that it doesn't conflict with incoming changes, or dropping the conflicting constraint or unique index, or writing a trigger on the subscriber to suppress or redirect conflicting incoming changes, or as a last resort, by skipping the whole transaction\"?\n>\n\n+1 for extending the docs as per this suggestion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Sep 2021 08:54:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Sep 4, 2021 at 8:54 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 3, 2021 at 2:15 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > > On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached rebased patches.\n> > For the v12-0003 patch:\n> >\n> > I believe this feature is needed, but it also seems like a very powerful foot-gun. Can we do anything to make it less likely that users will hurt themselves with this tool?\n> >\n>\n> This won't do any more harm than currently, users can do via\n> pg_replication_slot_advance and the same is documented as well, see\n> [1].\n>\n\nSorry, forgot to give the link.\n\n[1] - https://www.postgresql.org/docs/devel/logical-replication-conflicts.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Sep 2021 09:08:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 12:06 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I've attached rebased patches. 0004 patch is not the scope of this\n> > patch. It's borrowed from another thread[1] to fix the assertion\n> > failure for newly added tests. Please review them.\n> >\n>\n> I have some initial feedback on the v12-0001 patch.\n> Most of these are suggested improvements to wording, and some typo fixes.\n\nThank you for the comments!\n\n>\n>\n> (0) Patch comment\n>\n> Suggestion to improve the patch comment:\n>\n> BEFORE:\n> Add pg_stat_subscription_errors statistics view.\n>\n> This commits adds new system view pg_stat_logical_replication_error,\n\nOops, I realized that it should be pg_stat_subscription_errors.\n\n> showing errors happening during applying logical replication changes\n> as well as during performing initial table synchronization.\n>\n> The subscription error entries are removed by autovacuum workers when\n> the table synchronization competed in table sync worker cases and when\n> dropping the subscription in apply worker cases.\n>\n> It also adds SQL function pg_stat_reset_subscription_error() to\n> reset the single subscription error.\n>\n> AFTER:\n> Add a subscription errors statistics view \"pg_stat_subscription_errors\".\n>\n> This commits adds a new system view pg_stat_logical_replication_errors,\n> that records information about any errors which occur during application\n> of logical replication changes as well as during performing initial table\n> synchronization.\n\nI think that views don't have any data so \"show information\" seems\nappropriate to me here. 
Thoughts?\n\n>\n> The subscription error entries are removed by autovacuum workers after\n> table synchronization completes in table sync worker cases and after\n> dropping the subscription in apply worker cases.\n>\n> It also adds an SQL function pg_stat_reset_subscription_error() to\n> reset a single subscription error.\n>\n>\n>\n> doc/src/sgml/monitoring.sgml:\n>\n> (1)\n> BEFORE:\n> + <entry>One row per error that happened on subscription, showing\n> information about\n> + the subscription errors.\n> AFTER:\n> + <entry>One row per error that occurred on subscription,\n> providing information about\n> + each subscription error.\n\nFixed.\n\n>\n> (2)\n> BEFORE:\n> + The <structname>pg_stat_subscription_errors</structname> view will\n> contain one\n> AFTER:\n> + The <structname>pg_stat_subscription_errors</structname> view contains one\n>\n\nI think that descriptions of other statistics views also say \"XXX view\nwill contain ...\".\n\n>\n> (3)\n> BEFORE:\n> + Name of the database in which the subscription is created.\n> AFTER:\n> + Name of the database in which the subscription was created.\n\nFixed.\n\n>\n> (4)\n> BEFORE:\n> + OID of the relation that the worker is processing when the\n> + error happened.\n> AFTER:\n> + OID of the relation that the worker was processing when the\n> + error occurred.\n>\n\nFixed.\n\n>\n> (5)\n> BEFORE:\n> + Name of command being applied when the error happened. This\n> + field is always NULL if the error is reported by\n> + <literal>tablesync</literal> worker.\n> AFTER:\n> + Name of command being applied when the error occurred. This\n> + field is always NULL if the error is reported by a\n> + <literal>tablesync</literal> worker.\n\nFixed.\n\n> (6)\n> BEFORE:\n> + Transaction ID of publisher node being applied when the error\n> + happened. This field is always NULL if the error is reported\n> + by <literal>tablesync</literal> worker.\n> AFTER:\n> + Transaction ID of the publisher node being applied when the error\n> + happened. 
This field is always NULL if the error is reported\n> + by a <literal>tablesync</literal> worker.\n\nFixed.\n\n> (7)\n> BEFORE:\n> + Type of worker reported the error: <literal>apply</literal> or\n> + <literal>tablesync</literal>.\n> AFTER:\n> + Type of worker reporting the error: <literal>apply</literal> or\n> + <literal>tablesync</literal>.\n\nFixed.\n\n>\n> (8)\n> BEFORE:\n> + Number of times error happened on the worker.\n> AFTER:\n> + Number of times the error occurred in the worker.\n>\n> [or \"Number of times the worker reported the error\" ?]\n\nI prefer \"Number of times the error occurred in the worker.\"\n\n>\n> (9)\n> BEFORE:\n> + Time at which the last error happened.\n> AFTER:\n> + Time at which the last error occurred.\n\nFixed.\n\n>\n> (10)\n> BEFORE:\n> + Error message which is reported last failure time.\n> AFTER:\n> + Error message which was reported at the last failure time.\n>\n> Maybe this should just say \"Last reported error message\" ?\n\nFixed.\n\n>\n>\n> (11)\n> You shouldn't call hash_get_num_entries() on a NULL pointer.\n>\n> Suggest swappling lines, as shown below:\n>\n> BEFORE:\n> + int32 nerrors = hash_get_num_entries(subent->suberrors);\n> +\n> + /* Skip this subscription if not have any errors */\n> + if (subent->suberrors == NULL)\n> + continue;\n> AFTER:\n> + int32 nerrors;\n> +\n> + /* Skip this subscription if not have any errors */\n> + if (subent->suberrors == NULL)\n> + continue;\n> + nerrors = hash_get_num_entries(subent->suberrors);\n\nRight. Fixed.\n\n>\n>\n> (12)\n> Typo: legnth -> length\n>\n> + * contains the fixed-legnth error message string which is\n\nFixed.\n\n>\n>\n> src/backend/postmaster/pgstat.c\n>\n> (13)\n> \"Subscription stat entries\" hashtable is created in two different\n> places, one with HASH_CONTEXT and the other without. Is this\n> intentional?\n> Shouldn't there be a single function for creating this?\n\nYes, it's intentional. 
It's consistent with hash tables for other statistics.\n\n>\n>\n> (14)\n> + * PgStat_MsgSubscriptionPurge Sent by the autovacuum purge the subscriptions.\n>\n> Seems to be missing a word, is it meant to say \"Sent by the autovacuum\n> to purge the subscriptions.\" ?\n\nYes, fixed.\n\n>\n> (15)\n> + * PgStat_MsgSubscriptionErrPurge Sent by the autovacuum purge the subscription\n> + * errors.\n>\n> Seems to be missing a word, is it meant to say \"Sent by the autovacuum\n> to purge the subscription errors.\" ?\n\nThanks, fixed.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:41:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 2:55 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I've attached rebased patches. 0004 patch is not the scope of this\n> > patch. It's borrowed from another thread[1] to fix the assertion\n> > failure for newly added tests. Please review them.\n> >\n>\n> I have a few comments on the v12-0002 patch:\n\nThank you for the comments!\n\n>\n> (1) Patch comment\n>\n> Has a typo and could be expressed a bit better.\n>\n> Suggestion:\n>\n> BEFORE:\n> RESET command is reuiqred by follow-up commit introducing to a new\n> parameter skip_xid to reset.\n> AFTER:\n> The RESET parameter for ALTER SUBSCRIPTION is required by the\n> follow-up commit that introduces a new resettable subscription\n> parameter \"skip_xid\".\n\nFixed.\n\n>\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> (2)\n> I don't think \"RESET\" is sufficiently described in\n> alter_subscription.sgml. Just putting it under \"SET\" and changing\n> \"altered\" to \"set\" doesn't explain what resetting does. It should say\n> something about setting the parameter back to its original (default)\n> value.\n\nDoesn't \"RESET\" normally mean to change the parameter back to its default value?\n\n>\n>\n> (3)\n> case ALTER_SUBSCRIPTION_RESET_OPTIONS\n>\n> Some comments here would be helpful e.g. Reset the specified\n> parameters back to their default values.\n\nOkay, added.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:41:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 9:03 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached rebased patches. 0004 patch is not the scope of this\n> > patch. It's borrowed from another thread[1] to fix the assertion\n> > failure for newly added tests. Please review them.\n> >\n>\n\nThank you for the comments!\n\n> Some initial comments for the v12-0003 patch:\n>\n> (1) Patch comment\n> \"This commit introduces another way to skip the transaction in question.\"\n>\n> I think it should further explain: \"This commit introduces another way\n> to skip the transaction in question, other than manually updating the\n> subscriber's database or using pg_replication_origin_advance().\"\n\nUpdated.\n\n>\n>\n> doc/src/sgml/logical-replication.sgml\n> (2)\n>\n> Suggested minor update:\n>\n> BEFORE:\n> + transaction that conflicts with the existing data. When a conflict produce\n> + an error, it is shown in\n> <structname>pg_stat_subscription_errors</structname>\n> + view as follows:\n> AFTER:\n> + transaction that conflicts with the existing data. When a conflict produces\n> + an error, it is recorded in the\n> <structname>pg_stat_subscription_errors</structname>\n> + view as follows:\n\nFixed.\n\n>\n> (3)\n> + found from those outputs (transaction ID 740 in the above case).\n> The transaction\n>\n> Shouldn't it be transaction ID 716?\n\nRight, fixed.\n\n>\n> (4)\n> + can be skipped by setting <replaceable>skip_xid</replaceable> to\n> the subscription\n>\n> Is it better to say here ... \"on the subscription\" ?\n\nOkay, fixed.\n\n>\n> (5)\n> Just skipping a transaction could make a subscriber inconsistent, right?\n>\n> Would it be better as follows?\n>\n> BEFORE:\n> + In either way, those should be used as a last resort. 
They skip the whole\n> + transaction including changes that may not violate any constraint and easily\n> + make subscriber inconsistent if a user specifies the wrong transaction ID or\n> + the position of origin.\n>\n> AFTER:\n> + Either way, those transaction skipping methods should be used as a\n> last resort.\n> + They skip the whole transaction, including changes that may not violate any\n> + constraint. They may easily make the subscriber inconsistent,\n> especially if a\n> + user specifies the wrong transaction ID or the position of origin.\n\nAgreed, fixed.\n\n>\n> (6)\n> The grammar is not great in the following description, so here's a\n> suggested improvement:\n>\n> BEFORE:\n> + incoming change or by skipping the whole transaction. This option\n> + specifies transaction ID that logical replication worker skips to\n> + apply. The logical replication worker skips all data modification\n>\n> AFTER:\n> + incoming changes or by skipping the whole transaction. This option\n> + specifies the ID of the transaction whose application is to\n> be skipped\n> + by the logical replication worker. 
The logical replication worker\n> + skips all data modification\n\nFixed.\n\n>\n>\n> src/backend/postmaster/pgstat.c\n> (7)\n> BEFORE:\n> + * Tell the collector about clear the error of subscription.\n> AFTER:\n> + * Tell the collector to clear the subscription error.\n\nFixed.\n\n>\n>\n> src/backend/replication/logical/worker.c\n> (8)\n> + * subscription is invalidated and* MySubscription->skipxid gets\n> changed or reset.\n>\n> There is a \"*\" after \"and\".\n\nFixed.\n\n>\n> (9)\n> Do these lines really need to be moved up?\n>\n> + /* extract XID of the top-level transaction */\n> + stream_xid = logicalrep_read_stream_start(s, &first_segment);\n> +\n\nI had missed reverting this change; fixed.\n\n>\n> src/backend/postmaster/pgstat.c\n> (10)\n>\n> + bool m_clear; /* clear all fields except for last_failure and\n> + * last_errmsg */\n>\n> I think it should say: clear all fields except for last_failure,\n> last_errmsg and stat_reset_timestamp.\n\nFixed.\n\nThose comments, including your comments on the v12-0001 and v12-0002\npatches, are incorporated into my local branch. I'll submit the updated\npatches after incorporating all other comments.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:42:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 5:41 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Mon, Aug 30, 2021 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached rebased patches. 0004 patch is not the scope of this patch. It's\n> > borrowed from another thread[1] to fix the assertion failure for newly added\n> > tests. Please review them.\n>\n> Hi,\n>\n> I reviewed the v12-0001 patch, here are some comments:\n\nThank you for the comments!\n\n>\n> 1)\n> --- a/src/backend/utils/error/elog.c\n> +++ b/src/backend/utils/error/elog.c\n> @@ -1441,7 +1441,6 @@ getinternalerrposition(void)\n> return edata->internalpos;\n> }\n>\n> -\n>\n> It seems a miss change in elog.c\n\nFixed.\n\n>\n> 2)\n>\n> + TupleDescInitEntry(tupdesc, (AttrNumber) 10, \"stats_reset\",\n> + TIMESTAMPTZOID, -1, 0);\n>\n> The document doesn't mention the column \"stats_reset\".\n\nAdded.\n\n> 3)\n>\n> +typedef struct PgStat_StatSubErrEntry\n> +{\n> + Oid subrelid; /* InvalidOid if the apply worker, otherwise\n> + * the table sync worker. hash table key. */\n>\n> From the comments of subrelid, I think one subscription only have one apply\n> worker error entry, right ? If so, I was thinking can we move the the apply\n> error entry to PgStat_StatSubEntry. In that approach, we don't need to build a\n> inner hash table when there are no table sync error entry.\n\nI wanted to avoid having unnecessary error entry fields when there is\nno apply worker error but there is a table sync worker error. But\nafter more thought, the apply worker is more likely to raise an error\nthan table sync workers. 
So it might be better to have both a\nPgStat_StatSubErrEntry for the apply worker error and a hash table for\ntable sync worker errors in PgStat_StatSubEntry.\n\n>\n> 4)\n> Is it possible to add some testcases to test the subscription error entry delete ?\n\nDo you mean the tests checking if the subscription error entry is\ndeleted after DROP SUBSCRIPTION?\n\nThose comments are incorporated into my local branch. I'll submit the\nupdated patches after incorporating other comments.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:57:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 2, 2021 at 8:37 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Mon, Aug 30, 2021 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached rebased patches. 0004 patch is not the scope of this\n> > patch. It's borrowed from another thread[1] to fix the assertion\n> > failure for newly added tests. Please review them.\n>\n> Hi,\n>\n> I reviewed the 0002 patch and have a suggestion for it.\n>\n> + if (IsSet(opts.specified_opts, SUBOPT_SYNCHRONOUS_COMMIT))\n> + {\n> + values[Anum_pg_subscription_subsynccommit - 1] =\n> + CStringGetTextDatum(\"off\");\n> + replaces[Anum_pg_subscription_subsynccommit - 1] = true;\n> + }\n>\n> Currently, the patch set the default value out of parse_subscription_options(),\n> but I think It might be more standard to set the value in\n> parse_subscription_options(). Like:\n>\n> if (!is_reset)\n> {\n> ...\n> + }\n> + else\n> + opts->synchronous_commit = \"off\";\n>\n> And then, we can set the value like:\n>\n> values[Anum_pg_subscription_subsynccommit - 1] =\n> CStringGetTextDatum(opts.synchronous_commit);\n\nYou're right. Fixed.\n\n>\n>\n> Besides, instead of adding a switch case like the following:\n> + case ALTER_SUBSCRIPTION_RESET_OPTIONS:\n> + {\n>\n> We can add a bool flag(isReset) in AlterSubscriptionStmt and check the flag\n> when invoking parse_subscription_options(). 
In this approach, the code can be\n> shorter.\n>\n> Attach a diff file based on the v12-0002 which change the code like the above\n> suggestion.\n\nThank you for the patch!\n\n@@ -3672,11 +3671,12 @@ typedef enum AlterSubscriptionType\n typedef struct AlterSubscriptionStmt\n {\n NodeTag type;\n- AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_SET_OPTIONS, etc */\n+ AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_OPTIONS, etc */\n char *subname; /* Name of the subscription */\n char *conninfo; /* Connection string to publisher */\n List *publication; /* One or more publication to\nsubscribe to */\n List *options; /* List of DefElem nodes */\n+ bool isReset; /* true if RESET option */\n } AlterSubscriptionStmt;\n\nIt's unnatural to me that AlterSubscriptionStmt has an isReset flag\nin spite of having 'kind' indicating the command. How about having the\nRESET command use the same logic as SET, as you suggested, while having\nboth ALTER_SUBSCRIPTION_SET_OPTIONS and\nALTER_SUBSCRIPTION_RESET_OPTIONS?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:57:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 3, 2021 at 3:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached rebased patches. 0004 patch is not the scope of this\n> > patch. It's borrowed from another thread[1] to fix the assertion\n> > failure for newly added tests. Please review them.\n> >\n>\n> BTW, these patches need rebasing (broken by recent commits, patches\n> 0001, 0003 and 0004 no longer apply, and it's failing in the cfbot).\n\nThanks! I'll submit the updated patches early this week.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 5 Sep 2021 22:58:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "From Sun, Sep 5, 2021 9:58 PM Masahiko Sawada <sawada.mshk@gmail.com>:\r\n> On Thu, Sep 2, 2021 at 8:37 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > From Mon, Aug 30, 2021 3:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > > I've attached rebased patches. 0004 patch is not the scope of this\r\n> > > patch. It's borrowed from another thread[1] to fix the assertion\r\n> > > failure for newly added tests. Please review them.\r\n> >\r\n> > Hi,\r\n> >\r\n> > I reviewed the 0002 patch and have a suggestion for it.\r\n> @@ -3672,11 +3671,12 @@ typedef enum AlterSubscriptionType typedef\r\n> struct AlterSubscriptionStmt {\r\n> NodeTag type;\r\n> - AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_SET_OPTIONS,\r\n> etc */\r\n> + AlterSubscriptionType kind; /* ALTER_SUBSCRIPTION_OPTIONS, etc\r\n> + */\r\n> char *subname; /* Name of the subscription */\r\n> char *conninfo; /* Connection string to publisher */\r\n> List *publication; /* One or more publication to\r\n> subscribe to */\r\n> List *options; /* List of DefElem nodes */\r\n> + bool isReset; /* true if RESET option */\r\n> } AlterSubscriptionStmt;\r\n> \r\n> It's unnatural to me that AlterSubscriptionStmt has isReset flag even in spite of\r\n> having 'kind' indicating the command. How about having RESET comand use\r\n> the same logic of SET as you suggested while having both\r\n> ALTER_SUBSCRIPTION_SET_OPTIONS and\r\n> ALTER_SUBSCRIPTION_RESET_OPTIONS?\r\n\r\nYes, I agree with you it will look more natural with ALTER_SUBSCRIPTION_RESET_OPTIONS.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Mon, 6 Sep 2021 01:26:54 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Sep 4, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 3, 2021 at 2:15 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > > On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached rebased patches.\n> > For the v12-0003 patch:\n> >\n> > I believe this feature is needed, but it also seems like a very powerful foot-gun. Can we do anything to make it less likely that users will hurt themselves with this tool?\n> >\n>\n> This won't do any more harm than currently, users can do via\n> pg_replication_slot_advance and the same is documented as well, see\n> [1]. This will be allowed to only superusers. Its effect will be\n> documented with a precautionary note to use it only when the other\n> available ways can't be used.\n\nRight.\n\n>\n> > I am thinking back to support calls I have attended. When a production system is down, there is often some hesitancy to perform ad-hoc operations on the database, but once the decision has been made to do so, people try to get the whole process done as quickly as possible. If multiple transactions on the publisher fail on the subscriber, they will do so in series, not in parallel.\n> >\n>\n> The subscriber will know only one transaction failure at a time, till\n> that is resolved, the apply won't move ahead and it won't know even if\n> there are other transactions that are going to fail in the future.\n>\n> >\n> > If the user could instead clear all failed transactions of the same type, that might make it less likely that they unthinkingly also skip subsequent errors of some different type. Perhaps something like ALTER SUBSCRIPTION ... 
SET (skip_failures = 'duplicate key value violates unique constraint \"test_pkey\"')?\n> >\n>\n> I think if we want we can allow to skip particular error via\n> skip_error_code instead of via error message but not sure if it would\n> be better to skip a particular operation of the transaction rather\n> than the entire transaction. Normally from the atomicity purpose the\n> transaction can be either committed or rolled-back but not partially\n> done so I think it would be preferable to skip the entire transaction\n> rather than skipping it partially.\n\nI think the suggestion by Mark is to skip the entire transaction if\nthe kind of error matches the specified error.\n\nI think my proposed feature is meant to be a tool to cover situations\nwhere something that should not happen has happened, rather than\nconflict resolution. If users fall into a difficult situation where\nthey need to skip a lot of transactions with this skip_xid feature,\nthey should rebuild the logical replication from scratch. It seems to\nme that skipping all transactions that failed due to the same type of\nfailure is problematic, for example, if the user forgets to reset it.\nIf we want to skip the particular operation that failed due to the\nspecified error, we should have a proper conflict resolution feature\nthat can handle various types of conflicts by various types of\nresolution methods, as other RDBMSs do.\n\n>\n> > This is arguably a different feature request, and not something your patch is required to address, but I wonder how much we should limit people shooting themselves in the foot? 
If we built something like this using your skip_xid feature, rather than instead of your skip_xid feature, would your feature need to be modified?\n> >\n>\n> Sawada-San can answer better but I don't see any problem building any\n> such feature on top of what is currently proposed.\n\nIf the feature you proposed is to skip the entire transaction, I also\ndon't see any problem building the feature on top of my patch. The\npatch adds the mechanism to skip the entire transaction so what we\nneed to do for that feature is to extend how to trigger the skipping\nbehavior.\n\n>\n> >\n> > I'm having trouble thinking of an example conflict where skipping a transaction would be better than writing a BEFORE INSERT trigger on the conflicting table which suppresses or redirects conflicting rows somewhere else. Particularly for larger transactions containing multiple statements, suppressing the conflicting rows using a trigger would be less messy than skipping the transaction. I think your patch adds a useful tool to the toolkit, but maybe we should mention more alternatives in the docs? Something like, \"changing the data on the subscriber so that it doesn't conflict with incoming changes, or dropping the conflicting constraint or unique index, or writing a trigger on the subscriber to suppress or redirect conflicting incoming changes, or as a last resort, by skipping the whole transaction\"?\n> >\n>\n> +1 for extending the docs as per this suggestion.\n\nAgreed. I'll add such description to the doc.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 6 Sep 2021 14:49:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Sep 5, 2021 at 10:58 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 3, 2021 at 3:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Aug 30, 2021 at 5:07 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached rebased patches. 0004 patch is not the scope of this\n> > > patch. It's borrowed from another thread[1] to fix the assertion\n> > > failure for newly added tests. Please review them.\n> > >\n> >\n> > BTW, these patches need rebasing (broken by recent commits, patches\n> > 0001, 0003 and 0004 no longer apply, and it's failing in the cfbot).\n>\n> Thanks! I'll submit the updated patches early this week.\n>\n\nSorry for the late response. I've attached the updated patches that\nincorporate all comments unless I missed something. Please review\nthem.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 9 Sep 2021 23:32:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 12:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Sorry for the late response. I've attached the updated patches that\n> incorporate all comments unless I missed something. Please review\n> them.\n>\n\nHere's some review comments for the v13-0001 patch:\n\ndoc/src/sgml/monitoring.sgml\n\n(1)\nThere's an extra space in the following line, before \"processing\":\n\n+ OID of the relation that the worker was processing when the\n\n(2) Suggested wording update:\nBEFORE:\n+ field is always NULL if the error is reported by\nAFTER:\n+ field is always NULL if the error is reported by the\n\n(3) Suggested wording update:\nBEFORE:\n+ by <literal>tablesync</literal> worker.\nAFTER:\n+ by the <literal>tablesync</literal> worker.\n\n(4)\nMissing \".\" at end of following description (inconsistent with other doc):\n\n+ Time at which these statistics were last reset\n\n(5) Suggested wording update:\nBEFORE:\n+ can be granted EXECUTE to run the function.\nAFTER:\n+ can be granted EXECUTE privilege to run the function.\n\n\nsrc/backend/postmaster/pgstat.c\n\n(6) Suggested wording update:\nBEFORE:\n+ * for this relation already completes or the table is no\nAFTER:\n+ * for this relation already completed or the table is no\n\n\n(7)\nIn the code below, since errmsg.m_nentries only ever gets incremented\nby the 1st IF condition, it's probably best to include the 2nd IF\nblock within the 1st IF condition. Then can avoid checking\n\"errmsg.m_nentries\" each loop iteration.\n\n+ if (hash_search(not_ready_rels_htab, (void *) &(errent->relid),\n+ HASH_FIND, NULL) == NULL)\n+ errmsg.m_relids[errmsg.m_nentries++] = errent->relid;\n+\n+ /*\n+ * If the message is full, send it out and reinitialize to\n+ * empty\n+ */\n+ if (errmsg.m_nentries >= PGSTAT_NUM_SUBSCRIPTIONERRPURGE)\n+ {\n+ len = offsetof(PgStat_MsgSubscriptionErrPurge, m_relids[0])\n+ + errmsg.m_nentries * sizeof(Oid);\n+\n+ pgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\n+ pgstat_send(&errmsg, len);\n+ errmsg.m_nentries = 0;\n+ }\n\n\n(8)\n+ * Tell the collector about reset the subscription error.\n\nIs this meant to say \"Tell the collector to reset the subscription error.\" ?\n\n\n(9)\nI think the following:\n\n+ len = offsetof(PgStat_MsgSubscriptionErr, m_errmsg[0]) + strlen(errmsg);\n\nshould be:\n\n+ len = offsetof(PgStat_MsgSubscriptionErr, m_errmsg[0]) + strlen(errmsg) + 1;\n\nto account for the \\0 terminator.\n\n(10)\nI don't think that using the following Assert is really correct here,\nbecause PgStat_MsgSubscriptionErr is not setup to have the maximum\nnumber of m_errmsg[] entries to fill up to PGSTAT_MAX_MSG_SIZE (as are\nsome of the other pgstat structs):\n\n+ Assert(len < PGSTAT_MAX_MSG_SIZE);\n\n(the max size of all of the pgstat structs is statically asserted anyway)\n\nIt would be correct to do the following instead:\n\n+ Assert(strlen(errmsg) < PGSTAT_SUBSCRIPTIONERR_MSGLEN);\n\nThe overflow is guarded by the strlcpy() in any case.\n\n(11)\nWould be better to write:\n\n+ rc = fwrite(&nerrors, sizeof(nerrors), 1, fpout);\n\ninstead of:\n\n+ rc = fwrite(&nerrors, sizeof(int32), 1, fpout);\n\n\n(12)\nWould be better to write:\n\n+ if (fread(&nerrors, 1, sizeof(nerrors), fpin) != sizeof(nerrors))\n\ninstead of:\n\n+ if (fread(&nerrors, 1, sizeof(int32), fpin) != sizeof(int32))\n\n\nsrc/include/pgstat.h\n\n(13)\nBEFORE:\n+ * update/reset the error happening during logical\nAFTER:\n+ * update/reset the error occurring during logical\n\n(14)\nTypo: replicatoin -> replication\n\n+ * an error that occurred during application of logical replicatoin or\n\n\n(15) Suggested wording update:\nBEFORE:\n+ * there is no table sync error, where is the common case in practice.\nAFTER:\n+ * there is no table sync error, which is the common case in practice.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 10 Sep 2021 21:46:31 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 12:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Sorry for the late response. I've attached the updated patches that\n> incorporate all comments unless I missed something. Please review\n> them.\n>\n\nA few review comments for the v13-0002 patch:\n\n(1)\nI suggest a small update to the patch comment:\n\nBEFORE:\nALTER SUBSCRIPTION ... RESET command resets subscription\nparameters. The parameters that can be set are streaming, binary,\nsynchronous_commit.\nAFTER:\nALTER SUBSCRIPTION ... RESET command resets subscription\nparameters to their default value. The parameters that can be reset\nare streaming, binary, and synchronous_commit.\n\n\n(2)\nIn the documentation, the RESETable parameters should be listed in the\nsame way and order as for SET:\n\nBEFORE:\n+ <para>\n+ The parameters that can be reset are: <literal>streaming</literal>,\n+ <literal>binary</literal>, <literal>synchronous_commit</literal>.\n+ </para>\nAFTER:\n+ <para>\n+ The parameters that can be reset are\n<literal>synchronous_commit</literal>,\n+ <literal>binary</literal>, and <literal>streaming</literal>.\n+ </para>\n\n\nAlso I'm thinking it would be beneficial to say before this:\n\nRESET is used to set parameters back to their default value.\n\n(3)\nI notice that if you try to reset the slot_name, you get the following message:\n\npostgres=# alter subscription sub reset (slot_name);\nERROR: unrecognized subscription parameter: \"slot_name\"\n\nThis is a bit misleading, because slot_name IS a subscription\nparameter, just not resettable.\nIt would be better if it said something like: ERROR: not a resettable\nsubscription parameter: \"slot_name\"\n\nHowever, it seems that this is also an existing issue with SET (e.g.\nfor \"refresh\" or \"two_phase\")\npostgres=# alter subscription sub set (refresh=true);\nERROR: unrecognized subscription parameter: \"refresh\"\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 Sep 2021 19:06:06 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 10, 2021 at 8:46 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Fri, Sep 10, 2021 at 12:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Sorry for the late response. I've attached the updated patches that\n> > incorporate all comments unless I missed something. Please review\n> > them.\n> >\n>\n> Here's some review comments for the v13-0001 patch:\n\nThank you for the comments!\n\n>\n> doc/src/sgml/monitoring.sgml\n>\n> (1)\n> There's an extra space in the following line, before \"processing\":\n>\n> + OID of the relation that the worker was processing when the\n\nFixed.\n\n>\n> (2) Suggested wording update:\n> BEFORE:\n> + field is always NULL if the error is reported by\n> AFTER:\n> + field is always NULL if the error is reported by the\n\nFixed.\n\n>\n> (3) Suggested wording update:\n> BEFORE:\n> + by <literal>tablesync</literal> worker.\n> AFTER:\n> + by the <literal>tablesync</literal> worker.\n\nFixed.\n\n>\n> (4)\n> Missing \".\" at end of following description (inconsistent with other doc):\n>\n> + Time at which these statistics were last reset\n\nFixed.\n\n>\n> (5) Suggested wording update:\n> BEFORE:\n> + can be granted EXECUTE to run the function.\n> AFTER:\n> + can be granted EXECUTE privilege to run the function.\n\nSince descriptions of other stats reset functions don't use \"EXECUTE\nprivilege\" so I think it'd be better to leave it for consistency.\n\n>\n>\n> src/backend/postmaster/pgstat.c\n>\n> (6) Suggested wording update:\n> BEFORE:\n> + * for this relation already completes or the table is no\n> AFTER:\n> + * for this relation already completed or the table is no\n\nFixed.\n\n>\n>\n> (7)\n> In the code below, since errmsg.m_nentries only ever gets incremented\n> by the 1st IF condition, it's probably best to include the 2nd IF\n> block within the 1st IF condition. Then can avoid checking\n> \"errmsg.m_nentries\" each loop iteration.\n>\n> + if (hash_search(not_ready_rels_htab, (void *) &(errent->relid),\n> + HASH_FIND, NULL) == NULL)\n> + errmsg.m_relids[errmsg.m_nentries++] = errent->relid;\n> +\n> + /*\n> + * If the message is full, send it out and reinitialize to\n> + * empty\n> + */\n> + if (errmsg.m_nentries >= PGSTAT_NUM_SUBSCRIPTIONERRPURGE)\n> + {\n> + len = offsetof(PgStat_MsgSubscriptionErrPurge, m_relids[0])\n> + + errmsg.m_nentries * sizeof(Oid);\n> +\n> + pgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\n> + pgstat_send(&errmsg, len);\n> + errmsg.m_nentries = 0;\n> + }\n\nAgreed. Instead of including the 2nd if block within the 1st if block,\nI changed the 1st if condition to check the opposite condition and\ncontinued the loop if it's true (i.g., the table is still under table\nsynchronization).\n\n>\n>\n> (8)\n> + * Tell the collector about reset the subscription error.\n>\n> Is this meant to say \"Tell the collector to reset the subscription error.\" ?\n\nYes, fixed.\n\n>\n>\n> (9)\n> I think the following:\n>\n> + len = offsetof(PgStat_MsgSubscriptionErr, m_errmsg[0]) + strlen(errmsg);\n>\n> should be:\n>\n> + len = offsetof(PgStat_MsgSubscriptionErr, m_errmsg[0]) + strlen(errmsg) + 1;\n>\n> to account for the \\0 terminator.\n\nFixed.\n\n>\n> (10)\n> I don't think that using the following Assert is really correct here,\n> because PgStat_MsgSubscriptionErr is not setup to have the maximum\n> number of m_errmsg[] entries to fill up to PGSTAT_MAX_MSG_SIZE (as are\n> some of the other pgstat structs):\n>\n> + Assert(len < PGSTAT_MAX_MSG_SIZE);\n>\n> (the max size of all of the pgstat structs is statically asserted anyway)\n>\n> It would be correct to do the following instead:\n>\n> + Assert(strlen(errmsg) < PGSTAT_SUBSCRIPTIONERR_MSGLEN);\n>\n> The overflow is guarded by the strlcpy() in any case.\n\nAgreed. Fixed.\n\n>\n> (11)\n> Would be better to write:\n>\n> + rc = fwrite(&nerrors, sizeof(nerrors), 1, fpout);\n>\n> instead of:\n>\n> + rc = fwrite(&nerrors, sizeof(int32), 1, fpout);\n>\n>\n> (12)\n> Would be better to write:\n>\n> + if (fread(&nerrors, 1, sizeof(nerrors), fpin) != sizeof(nerrors))\n>\n> instead of:\n>\n> + if (fread(&nerrors, 1, sizeof(int32), fpin) != sizeof(int32))\n>\n>\n\nAgreed.\n\n> src/include/pgstat.h\n>\n> (13)\n> BEFORE:\n> + * update/reset the error happening during logical\n> AFTER:\n> + * update/reset the error occurring during logical\n>\n\nFixed.\n\n> (14)\n> Typo: replicatoin -> replication\n>\n> + * an error that occurred during application of logical replicatoin or\n>\n\nFixed.\n\n>\n> (15) Suggested wording update:\n> BEFORE:\n> + * there is no table sync error, where is the common case in practice.\n> AFTER:\n> + * there is no table sync error, which is the common case in practice.\n>\n\nFixed.\n\nI'll submit the updated patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 14 Sep 2021 00:43:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "From Thur, Sep 9, 2021 10:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Sorry for the late response. I've attached the updated patches that incorporate\r\n> all comments unless I missed something. Please review them.\r\n\r\nThanks for the new version patches.\r\nHere are some comments for the v13-0001 patch.\r\n\r\n1)\r\n\r\n+\t\t\t\t\tpgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\r\n+\t\t\t\t\tpgstat_send(&errmsg, len);\r\n+\t\t\t\t\terrmsg.m_nentries = 0;\r\n+\t\t\t\t}\r\n\r\nIt seems we can invoke pgstat_setheader once before the loop like the\r\nfollowing:\r\n\r\n+\t\t\terrmsg.m_nentries = 0;\r\n+\t\t\terrmsg.m_subid = subent->subid;\r\n+\t\t\tpgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\r\n\r\n2)\r\n+\t\t\t\t\tpgstat_setheader(&submsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\r\n+\t\t\t\t\tpgstat_send(&submsg, len);\r\n\r\nSame as 1), we can invoke pgstat_setheader once before the loop like:\r\n+\t\tsubmsg.m_nentries = 0;\r\n+\t\tpgstat_setheader(&submsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\r\n\r\n\r\n3)\r\n\r\n+/* ----------\r\n+ * PgStat_MsgSubscriptionErrPurge\tSent by the autovacuum to purge the subscription\r\n+ *\t\t\t\t\t\t\t\t\terrors.\r\n\r\nThe comments said it's sent by autovacuum, would the manual vacuum also send\r\nthis message ?\r\n\r\n\r\n4)\r\n+\r\n+\tpgstat_send(&msg, offsetof(PgStat_MsgSubscriptionErr, m_reset) + sizeof(bool));\r\n+}\r\n\r\nDoes it look cleaner that we use the offset of m_relid here like the following ?\r\n\r\npgstat_send(&msg, offsetof(PgStat_MsgSubscriptionErr, m_relid));\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Tue, 14 Sep 2021 02:26:57 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Sorry for the late reply. I was on vacation.\n\nOn Tue, Sep 14, 2021 at 11:27 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> From Thur, Sep 9, 2021 10:33 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Sorry for the late response. I've attached the updated patches that incorporate\n> > all comments unless I missed something. Please review them.\n>\n> Thanks for the new version patches.\n> Here are some comments for the v13-0001 patch.\n\nThank you for the comments!\n\n>\n> 1)\n>\n> + pgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\n> + pgstat_send(&errmsg, len);\n> + errmsg.m_nentries = 0;\n> + }\n>\n> It seems we can invoke pgstat_setheader once before the loop like the\n> following:\n>\n> + errmsg.m_nentries = 0;\n> + errmsg.m_subid = subent->subid;\n> + pgstat_setheader(&errmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE);\n>\n> 2)\n> + pgstat_setheader(&submsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\n> + pgstat_send(&submsg, len);\n>\n> Same as 1), we can invoke pgstat_setheader once before the loop like:\n> + submsg.m_nentries = 0;\n> + pgstat_setheader(&submsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\n>\n\nBut if we do that, we set the header even if there is no message to\nsend, right? Looking at other similar code in pgstat_vacuum_stat(), we\nset the header just before sending the message. So I'd like to leave\nthem since it's cleaner.\n\n>\n> 3)\n>\n> +/* ----------\n> + * PgStat_MsgSubscriptionErrPurge Sent by the autovacuum to purge the subscription\n> + * errors.\n>\n> The comments said it's sent by autovacuum, would the manual vacuum also send\n> this message ?\n\nRight. Fixed.\n\n>\n>\n> 4)\n> +\n> + pgstat_send(&msg, offsetof(PgStat_MsgSubscriptionErr, m_reset) + sizeof(bool));\n> +}\n>\n> Does it look cleaner that we use the offset of m_relid here like the following ?\n>\n> pgstat_send(&msg, offsetof(PgStat_MsgSubscriptionErr, m_relid));\n\nThank you for the suggestion. After more thought, it was a bit odd to\nuse PgStat_MsgSubscriptionErr to both report and reset the stats by\nsending the part or the full struct. So in the latest version, I've\nadded a new message struct type to reset the subscription error\nstatistics.\n\nI've attached the updated version patches. Please review them.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 21 Sep 2021 13:53:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Hi,\n\nOn Fri, Sep 3, 2021 at 4:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n>\n>\n> > On Aug 30, 2021, at 12:06 AM, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached rebased patches.\n>\n> Thanks for these patches, Sawada-san!\n\nSorry for the very late response.\n\nThank you for the suggestions and the patch!\n\n>\n> The first patch in your series, v12-0001, seems useful to me even before committing any of the rest. I would like to integrate the new pg_stat_subscription_errors view it creates into regression tests for other logical replication features under development.\n>\n> In particular, it can be hard to write TAP tests that need to wait for subscriptions to catch up or fail. With your view committed, a new PostgresNode function to wait for catchup or for failure can be added, and then developers of different projects can all use that.\n\nI like the idea of creating a common function that waits for the\nsubscription to be ready (i.e., all relations are either in 'r' or 's'\nstate). There are many places where we wait for all subscription\nrelations to be ready in existing tap tests. We would be able to\nreplace those codes with the function. But I'm not sure that it's\nuseful to have a function that waits for the subscriptions to either\nbe ready or raise an error. In tap tests, I think that if we wait for\nthe subscription to raise an error, we should wait only for the error\nbut not for the subscription to be ready. Thoughts?\n\n> I am attaching a version of such a function, plus some tests of your patch (since it does not appear to have any). Would you mind reviewing these and giving comments or including them in your next patch version?\n>\n\nI've looked at the patch and here are some comments:\n\n+\n+-- no errors should be reported\n+SELECT * FROM pg_stat_subscription_errors;\n+\n\n+\n+-- Test that the subscription errors view exists, and has the right columns\n+-- If we expected any rows to exist, we would need to filter out unstable\n+-- columns. But since there should be no errors, we just select them all.\n+select * from pg_stat_subscription_errors;\n\nThe patch adds checks of pg_stat_subscription_errors in order to test\nif the subscription doesn't have any error. But since the subscription\nerrors are updated in an asynchronous manner, we cannot say the\nsubscription is working fine by checking the view only once.\n\n---\nThe newly added tap tests by 025_errors.pl have two subscribers raise\na table sync error, which seems very similar to the tests that\n024_skip_xact.pl adds. So I'm not sure we need this tap test as a\nseparate tap test file.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 24 Sep 2021 10:31:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated version patches. Please review them.\n>\n\nSome comments on the v14-0001 patch:\n\n(1)\nPatch comment\n\nThe existing patch comment doesn't read well. I suggest the following updates:\n\nBEFORE:\nAdd pg_stat_subscription_errors statistics view.\n\nThis commits adds new system view pg_stat_logical_replication_error,\nshowing errors happening during applying logical replication changes\nas well as during performing initial table synchronization.\n\nThe subscription error entries are removed by autovacuum workers when\nthe table synchronization competed in table sync worker cases and when\ndropping the subscription in apply worker cases.\n\nIt also adds SQL function pg_stat_reset_subscription_error() to\nreset the single subscription error.\n\nAFTER:\nAdd a subscription errors statistics view \"pg_stat_subscription_errors\".\n\nThis commit adds a new system view pg_stat_logical_replication_errors,\nthat shows information about any errors which occur during application\nof logical replication changes as well as during performing initial table\nsynchronization.\n\nThe subscription error entries are removed by autovacuum workers after\ntable synchronization completes in table sync worker cases and after\ndropping the subscription in apply worker cases.\n\nIt also adds an SQL function pg_stat_reset_subscription_error() to\nreset a single subscription error.\n\n\nsrc/backend/postmaster/pgstat.c\n(2)\nIn pgstat_read_db_statsfile_timestamp(), you've added the following\ncode for case 'S':\n\n+ case 'S':\n+ {\n+ PgStat_StatSubEntry subbuf;\n+ PgStat_StatSubErrEntry errbuf;\n+ int32 nerrors;\n+\n+ if (fread(&subbuf, 1, sizeof(PgStat_StatSubEntry), fpin)\n+ != sizeof(PgStat_StatSubEntry))\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"corrupted statistics file \\\"%s\\\"\",\n+ statfile)));\n+ FreeFile(fpin);\n+ return false;\n+ }\n+\n+ if (fread(&nerrors, 1, sizeof(nerrors), fpin) != sizeof(nerrors))\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"corrupted statistics file \\\"%s\\\"\",\n+ statfile)));\n+ goto done;\n+ }\n+\n+ for (int i = 0; i < nerrors; i++)\n+ {\n+ if (fread(&errbuf, 1, sizeof(PgStat_StatSubErrEntry), fpin) !=\n+ sizeof(PgStat_StatSubErrEntry))\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"corrupted statistics file \\\"%s\\\"\",\n+ statfile)));\n+ goto done;\n+ }\n+ }\n+ }\n+\n+ break;\n+\n\nWhy in the 2nd and 3rd instances of calling fread() and detecting a\ncorrupted statistics file, does it:\n goto done;\ninstead of:\n FreeFile(fpin);\n return false;\n\n?\n(so ends up returning true for these instances)\n\nIt looks like a mistake, but if it's intentional then comments need to\nbe added to explain it.\n\n(3)\nIn pgstat_get_subscription_error_entry(), there seems to be a bad comment.\n\nShouldn't:\n\n+ /* Return the apply error worker */\n+ return &(subent->apply_error);\n\nbe:\n\n+ /* Return the apply worker error */\n+ return &(subent->apply_error);\n\n\nsrc/tools/pgindent/typedefs.list\n(4)\n\n\"PgStat_MsgSubscriptionErrReset\" is missing from the list.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Sep 2021 14:38:10 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated version patches. Please review them.\n>\n\nA few review comments for the v14-0002 patch:\n\n(1)\nI suggest a small update to the patch comment:\n\nBEFORE:\nALTER SUBSCRIPTION ... RESET command resets subscription\nparameters. The parameters that can be set are streaming, binary,\nsynchronous_commit.\n\nAFTER:\nALTER SUBSCRIPTION ... RESET command resets subscription\nparameters to their default value. The parameters that can be reset\nare streaming, binary, and synchronous_commit.\n\n(2)\nIn the documentation, the RESETable parameters should be listed in the\nsame way and order as for SET:\n\nBEFORE:\n+ <para>\n+ The parameters that can be reset are: <literal>streaming</literal>,\n+ <literal>binary</literal>, <literal>synchronous_commit</literal>.\n+ </para>\nAFTER:\n+ <para>\n+ The parameters that can be reset are\n<literal>synchronous_commit</literal>,\n+ <literal>binary</literal>, and <literal>streaming</literal>.\n+ </para>\n\nAlso, I'm thinking it would be beneficial to say the following before this:\n\n RESET is used to set parameters back to their default value.\n\n(3)\nI notice that if you try to reset the slot_name, you get the following message:\n postgres=# alter subscription sub reset (slot_name);\n ERROR: unrecognized subscription parameter: \"slot_name\"\n\nThis is a bit misleading, because \"slot_name\" actually IS a\nsubscription parameter, just not resettable.\nIt would be better in this case if it said something like:\n ERROR: not a resettable subscription parameter: \"slot_name\"\n\nHowever, it seems that this is also an existing issue with SET (e.g.\nfor \"refresh\" or \"two_phase\"):\n postgres=# alter subscription sub set (refresh=true);\n ERROR: unrecognized subscription parameter: \"refresh\"\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 24 Sep 2021 18:27:17 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, September 21, 2021 12:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached the updated version patches. Please review them.\r\n\r\nThanks for updating the patch,\r\nhere are a few comments on the v14-0001 patch.\r\n\r\n1)\r\n+\t\t\t\thash_ctl.keysize = sizeof(Oid);\r\n+\t\t\t\thash_ctl.entrysize = sizeof(SubscriptionRelState);\r\n+\t\t\t\tnot_ready_rels_htab = hash_create(\"not ready relations in subscription\",\r\n+\t\t\t\t\t\t\t\t\t\t\t\t 64,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t &hash_ctl,\r\n+\t\t\t\t\t\t\t\t\t\t\t\t HASH_ELEM | HASH_BLOBS);\r\n+\r\n\r\nISTM we can pass list_length(not_ready_rels_list) as the nelem to hash_create.\r\n\r\n2)\r\n\r\n+\t/*\r\n+\t * Search for all the dead subscriptions and error entries in stats\r\n+\t * hashtable and tell the stats collector to drop them.\r\n+\t */\r\n+\tif (subscriptionHash)\r\n+\t{\r\n...\r\n+\t\tHTAB\t *htab;\r\n+\r\n\r\nIt seems we already delacre a \"HTAB *htab;\" in function pgstat_vacuum_stat(),\r\ncan we use the existing htab here ?\r\n\r\n\r\n3)\r\n\r\n \tPGSTAT_MTYPE_RESETREPLSLOTCOUNTER,\r\n+\tPGSTAT_MTYPE_SUBSCRIPTIONERR,\r\n+\tPGSTAT_MTYPE_SUBSCRIPTIONERRRESET,\r\n+\tPGSTAT_MTYPE_SUBSCRIPTIONERRPURGE,\r\n+\tPGSTAT_MTYPE_SUBSCRIPTIONPURGE,\r\n \tPGSTAT_MTYPE_AUTOVAC_START,\r\n\r\nCan we append these values at the end of the Enum struct which won't affect the\r\nother Enum values.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Fri, 24 Sep 2021 08:53:10 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Sep 21, 2021 at 10:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the updated version patches. Please review them.\n>\n\nReview comments for v14-0001-Add-pg_stat_subscription_errors-statistics-view\n==============================================================\n1.\n<entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>command</structfield> <type>text</type>\n+ </para>\n+ <para>\n+ Name of command being applied when the error occurred. This\n+ field is always NULL if the error is reported by the\n+ <literal>tablesync</literal> worker.\n+ </para></entry>\n+ </row>\n..\n..\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>xid</structfield> <type>xid</type>\n+ </para>\n+ <para>\n+ Transaction ID of the publisher node being applied when the error\n+ occurred. This field is always NULL if the error is reported\n+ by the <literal>tablesync</literal> worker.\n+ </para></entry>\n\nShouldn't we display command and transaction id even for table sync\nworker if it occurs during sync phase (syncing with apply worker\nposition)\n\n2.\n+ /*\n+ * The number of not-ready relations can be high for example right\n+ * after creating a subscription, so we load the list of\n+ * SubscriptionRelState into the hash table for faster lookups.\n+ */\n\nI am not sure this optimization of converting to not-ready relations\nlist to hash table is worth it. Are we expecting thousands of\nrelations per subscription? I think that will be a rare case even if\nit is there.\n\n3.\n+static void\n+pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n+{\n+ if (subscriptionHash == NULL)\n+ return;\n+\n+ for (int i = 0; i < msg->m_nentries; i++)\n+ {\n+ PgStat_StatSubEntry *subent;\n+\n+ subent = pgstat_get_subscription_entry(msg->m_subids[i], false);\n+\n+ /*\n+ * Nothing to do if the subscription entry is not found. This could\n+ * happen when the subscription is dropped and the message for\n+ * dropping subscription entry arrived before the message for\n+ * reporting the error.\n+ */\n+ if (subent == NULL)\n\nIs the above comment true even during the purge? I can think of this\nduring normal processing but not during the purge.\n\n4.\n+typedef struct PgStat_MsgSubscriptionErr\n+{\n+ PgStat_MsgHdr m_hdr;\n+\n+ /*\n+ * m_subid and m_subrelid are used to determine the subscription and the\n+ * reporter of this error. m_subrelid is InvalidOid if reported by the\n+ * apply worker, otherwise by the table sync worker. In table sync worker\n+ * case, m_subrelid must be the same as m_relid.\n+ */\n+ Oid m_subid;\n+ Oid m_subrelid;\n+\n+ /* Error information */\n+ Oid m_relid;\n\nIs m_subrelid is used only to distinguish the type of worker? I think\nit could be InvalidOid during the syncing phase in the table sync\nworker.\n\n5.\n+/*\n+ * Subscription error statistics kept in the stats collector, representing\n+ * an error that occurred during application of logical replication or\n\nThe part of the message \" ... application of logical replication ...\"\nsounds a little unclear. Shall we instead write: \" ... application of\nlogical message ...\"?\n\n6.\n+typedef struct PgStat_StatSubEntry\n+{\n+ Oid subid; /* hash table key */\n+\n+ /*\n+ * Statistics of errors that occurred during logical replication. While\n+ * having the hash table for table sync errors we have a separate\n+ * statistics value for apply error (apply_error), because we can avoid\n+ * building a nested hash table for table sync errors in the case where\n+ * there is no table sync error, which is the common case in practice.\n+ *\n\nThe above comment is not clear to me. Why do you need to have a\nseparate hash table for table sync errors? And what makes it avoid\nbuilding nested hash table?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 24 Sep 2021 16:31:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Sep 21, 2021 at 10:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated version patches. Please review them.\n> >\n>\n> Review comments for v14-0001-Add-pg_stat_subscription_errors-statistics-view\n> ==============================================================\n> 1.\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>command</structfield> <type>text</type>\n> + </para>\n> + <para>\n> + Name of command being applied when the error occurred. This\n> + field is always NULL if the error is reported by the\n> + <literal>tablesync</literal> worker.\n> + </para></entry>\n> + </row>\n> ..\n> ..\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>xid</structfield> <type>xid</type>\n> + </para>\n> + <para>\n> + Transaction ID of the publisher node being applied when the error\n> + occurred. This field is always NULL if the error is reported\n> + by the <literal>tablesync</literal> worker.\n> + </para></entry>\n>\n> Shouldn't we display command and transaction id even for table sync\n> worker if it occurs during sync phase (syncing with apply worker\n> position)\n\nRight. I'll fix it.\n\n>\n> 2.\n> + /*\n> + * The number of not-ready relations can be high for example right\n> + * after creating a subscription, so we load the list of\n> + * SubscriptionRelState into the hash table for faster lookups.\n> + */\n>\n> I am not sure this optimization of converting to not-ready relations\n> list to hash table is worth it. Are we expecting thousands of\n> relations per subscription? I think that will be a rare case even if\n> it is there.\n\nYeah, it seems overkill. I'll use the simple list. 
If this becomes a\nproblem, we can add such optimization later.\n\n>\n> 3.\n> +static void\n> +pgstat_recv_subscription_purge(PgStat_MsgSubscriptionPurge *msg, int len)\n> +{\n> + if (subscriptionHash == NULL)\n> + return;\n> +\n> + for (int i = 0; i < msg->m_nentries; i++)\n> + {\n> + PgStat_StatSubEntry *subent;\n> +\n> + subent = pgstat_get_subscription_entry(msg->m_subids[i], false);\n> +\n> + /*\n> + * Nothing to do if the subscription entry is not found. This could\n> + * happen when the subscription is dropped and the message for\n> + * dropping subscription entry arrived before the message for\n> + * reporting the error.\n> + */\n> + if (subent == NULL)\n>\n> Is the above comment true even during the purge? I can think of this\n> during normal processing but not during the purge.\n\nRight, the comment is not true during the purge. Since subent could be\nNULL if concurrent autovacuum workers do pgstat_vacuum_stat() I'll\nchange the comment.\n\n>\n> 4.\n> +typedef struct PgStat_MsgSubscriptionErr\n> +{\n> + PgStat_MsgHdr m_hdr;\n> +\n> + /*\n> + * m_subid and m_subrelid are used to determine the subscription and the\n> + * reporter of this error. m_subrelid is InvalidOid if reported by the\n> + * apply worker, otherwise by the table sync worker. In table sync worker\n> + * case, m_subrelid must be the same as m_relid.\n> + */\n> + Oid m_subid;\n> + Oid m_subrelid;\n> +\n> + /* Error information */\n> + Oid m_relid;\n>\n> Is m_subrelid is used only to distinguish the type of worker? I think\n> it could be InvalidOid during the syncing phase in the table sync\n> worker.\n\nRight. I'll fix it.\n\n>\n> 5.\n> +/*\n> + * Subscription error statistics kept in the stats collector, representing\n> + * an error that occurred during application of logical replication or\n>\n> The part of the message \" ... application of logical replication ...\"\n> sounds a little unclear. Shall we instead write: \" ... 
application of\n> logical message ...\"?\n\nWill fix.\n\n>\n> 6.\n> +typedef struct PgStat_StatSubEntry\n> +{\n> + Oid subid; /* hash table key */\n> +\n> + /*\n> + * Statistics of errors that occurred during logical replication. While\n> + * having the hash table for table sync errors we have a separate\n> + * statistics value for apply error (apply_error), because we can avoid\n> + * building a nested hash table for table sync errors in the case where\n> + * there is no table sync error, which is the common case in practice.\n> + *\n>\n> The above comment is not clear to me. Why do you need to have a\n> separate hash table for table sync errors? And what makes it avoid\n> building nested hash table?\n\nIn the previous patch, a subscription stats entry\n(PgStat_StatSubEntry) had one hash table that had error entries of\nboth apply and table sync. Since a subscription can have one apply\nworker and multiple table sync workers it makes sense to me to have\nthe subscription entry have a hash table for them. The reason why we\nhave one error entry for an apply error and a hash table for table\nsync errors is that there is the common case where an apply error\nhappens whereas any table sync error doesn’t. With this optimization,\nif the subscription has only apply error, since we can store it into\napply_error field, we can avoid building a hash table for sync errors.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 24 Sep 2021 22:13:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 5:27 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Sep 21, 2021 at 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated version patches. Please review them.\n> >\n>\n> A few review comments for the v14-0002 patch:\n\nThank you for the comments!\n\n>\n> (1)\n> I suggest a small update to the patch comment:\n>\n> BEFORE:\n> ALTER SUBSCRIPTION ... RESET command resets subscription\n> parameters. The parameters that can be set are streaming, binary,\n> synchronous_commit.\n>\n> AFTER:\n> ALTER SUBSCRIPTION ... RESET command resets subscription\n> parameters to their default value. The parameters that can be reset\n> are streaming, binary, and synchronous_commit.\n>\n> (2)\n> In the documentation, the RESETable parameters should be listed in the\n> same way and order as for SET:\n>\n> BEFORE:\n> + <para>\n> + The parameters that can be reset are: <literal>streaming</literal>,\n> + <literal>binary</literal>, <literal>synchronous_commit</literal>.\n> + </para>\n> AFTER:\n> + <para>\n> + The parameters that can be reset are\n> <literal>synchronous_commit</literal>,\n> + <literal>binary</literal>, and <literal>streaming</literal>.\n> + </para>\n>\n> Also, I'm thinking it would be beneficial to say the following before this:\n>\n> RESET is used to set parameters back to their default value.\n>\n\nI agreed with all of the above comments. 
I'll incorporate them into\nthe next version patch that I'm going to submit next Monday.\n\n> (3)\n> I notice that if you try to reset the slot_name, you get the following message:\n> postgres=# alter subscription sub reset (slot_name);\n> ERROR: unrecognized subscription parameter: \"slot_name\"\n>\n> This is a bit misleading, because \"slot_name\" actually IS a\n> subscription parameter, just not resettable.\n> It would be better in this case if it said something like:\n> ERROR: not a resettable subscription parameter: \"slot_name\"\n>\n> However, it seems that this is also an existing issue with SET (e.g.\n> for \"refresh\" or \"two_phase\"):\n> postgres=# alter subscription sub set (refresh=true);\n> ERROR: unrecognized subscription parameter: \"refresh\"\n\nGood point. Maybe we can improve it in a separate patch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 24 Sep 2021 22:16:09 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 5:53 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, September 21, 2021 12:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the updated version patches. Please review them.\n>\n> Thanks for updating the patch,\n> here are a few comments on the v14-0001 patch.\n\nThank you for the comments!\n\n>\n> 1)\n> + hash_ctl.keysize = sizeof(Oid);\n> + hash_ctl.entrysize = sizeof(SubscriptionRelState);\n> + not_ready_rels_htab = hash_create(\"not ready relations in subscription\",\n> + 64,\n> + &hash_ctl,\n> + HASH_ELEM | HASH_BLOBS);\n> +\n>\n> ISTM we can pass list_length(not_ready_rels_list) as the nelem to hash_create.\n\nAs Amit pointed out, it seems not necessary to build a temporary hash\ntable for this purpose.\n\n>\n> 2)\n>\n> + /*\n> + * Search for all the dead subscriptions and error entries in stats\n> + * hashtable and tell the stats collector to drop them.\n> + */\n> + if (subscriptionHash)\n> + {\n> ...\n> + HTAB *htab;\n> +\n>\n> It seems we already declare a \"HTAB *htab;\" in function pgstat_vacuum_stat(),\n> can we use the existing htab here ?\n\nRight. Will remove it.\n\n>\n>\n> 3)\n>\n> PGSTAT_MTYPE_RESETREPLSLOTCOUNTER,\n> + PGSTAT_MTYPE_SUBSCRIPTIONERR,\n> + PGSTAT_MTYPE_SUBSCRIPTIONERRRESET,\n> + PGSTAT_MTYPE_SUBSCRIPTIONERRPURGE,\n> + PGSTAT_MTYPE_SUBSCRIPTIONPURGE,\n> PGSTAT_MTYPE_AUTOVAC_START,\n>\n> Can we append these values at the end of the Enum struct which won't affect the\n> other Enum values.\n\nYes, I'll move them to the end of the Enum struct.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 24 Sep 2021 22:18:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 6:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 24, 2021 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > 6.\n> > +typedef struct PgStat_StatSubEntry\n> > +{\n> > + Oid subid; /* hash table key */\n> > +\n> > + /*\n> > + * Statistics of errors that occurred during logical replication. While\n> > + * having the hash table for table sync errors we have a separate\n> > + * statistics value for apply error (apply_error), because we can avoid\n> > + * building a nested hash table for table sync errors in the case where\n> > + * there is no table sync error, which is the common case in practice.\n> > + *\n> >\n> > The above comment is not clear to me. Why do you need to have a\n> > separate hash table for table sync errors? And what makes it avoid\n> > building nested hash table?\n>\n> In the previous patch, a subscription stats entry\n> (PgStat_StatSubEntry) had one hash table that had error entries of\n> both apply and table sync. Since a subscription can have one apply\n> worker and multiple table sync workers it makes sense to me to have\n> the subscription entry have a hash table for them.\n>\n\nSure, but each tablesync worker must have a separate relid. Why can't\nwe have a single hash table for both apply and table sync workers\nwhich are hashed by sub_id + rel_id? For apply worker, the rel_id will\nalways be zero (InvalidOId) and tablesync workers will have a unique\nOID for rel_id, so we should be able to uniquely identify each of\napply and table sync workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 25 Sep 2021 12:53:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 24, 2021 at 6:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 24, 2021 at 8:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > 6.\n> > > +typedef struct PgStat_StatSubEntry\n> > > +{\n> > > + Oid subid; /* hash table key */\n> > > +\n> > > + /*\n> > > + * Statistics of errors that occurred during logical replication. While\n> > > + * having the hash table for table sync errors we have a separate\n> > > + * statistics value for apply error (apply_error), because we can avoid\n> > > + * building a nested hash table for table sync errors in the case where\n> > > + * there is no table sync error, which is the common case in practice.\n> > > + *\n> > >\n> > > The above comment is not clear to me. Why do you need to have a\n> > > separate hash table for table sync errors? And what makes it avoid\n> > > building nested hash table?\n> >\n> > In the previous patch, a subscription stats entry\n> > (PgStat_StatSubEntry) had one hash table that had error entries of\n> > both apply and table sync. Since a subscription can have one apply\n> > worker and multiple table sync workers it makes sense to me to have\n> > the subscription entry have a hash table for them.\n> >\n>\n> Sure, but each tablesync worker must have a separate relid. Why can't\n> we have a single hash table for both apply and table sync workers\n> which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> always be zero (InvalidOId) and tablesync workers will have a unique\n> OID for rel_id, so we should be able to uniquely identify each of\n> apply and table sync workers.\n\nWhat I imagined is to extend the subscription statistics, for\ninstance, transaction stats[1]. 
By having a hash table for\nsubscriptions, we can store those statistics into an entry of the hash\ntable and we can think of subscription errors as also statistics of\nthe subscription. So we can have another hash table for errors in an\nentry of the subscription hash table. For example, the subscription\nentry struct will be something like:\n\ntypedef struct PgStat_StatSubEntry\n{\n Oid subid; /* hash key */\n\n HTAB *errors; /* apply and table sync errors */\n\n /* transaction stats of subscription */\n PgStat_Counter xact_commit;\n PgStat_Counter xact_commit_bytes;\n PgStat_Counter xact_error;\n PgStat_Counter xact_error_bytes;\n PgStat_Counter xact_abort;\n PgStat_Counter xact_abort_bytes;\n PgStat_Counter failure_count;\n} PgStat_StatSubEntry;\n\nWhen a subscription is dropped, we can easily drop the subscription\nentry along with those statistics including the errors from the hash\ntable.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 27 Sep 2021 09:50:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 6:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Sure, but each tablesync worker must have a separate relid. Why can't\n> > we have a single hash table for both apply and table sync workers\n> > which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> > always be zero (InvalidOId) and tablesync workers will have a unique\n> > OID for rel_id, so we should be able to uniquely identify each of\n> > apply and table sync workers.\n>\n> What I imagined is to extend the subscription statistics, for\n> instance, transaction stats[1]. By having a hash table for\n> subscriptions, we can store those statistics into an entry of the hash\n> table and we can think of subscription errors as also statistics of\n> the subscription. So we can have another hash table for errors in an\n> entry of the subscription hash table. For example, the subscription\n> entry struct will be something like:\n>\n> typedef struct PgStat_StatSubEntry\n> {\n> Oid subid; /* hash key */\n>\n> HTAB *errors; /* apply and table sync errors */\n>\n> /* transaction stats of subscription */\n> PgStat_Counter xact_commit;\n> PgStat_Counter xact_commit_bytes;\n> PgStat_Counter xact_error;\n> PgStat_Counter xact_error_bytes;\n> PgStat_Counter xact_abort;\n> PgStat_Counter xact_abort_bytes;\n> PgStat_Counter failure_count;\n> } PgStat_StatSubEntry;\n>\n\nI think these additional stats will be displayed via\npg_stat_subscription, right? If so, the current stats of that view are\nall in-memory and are per LogicalRepWorker which means that for those\nstats also we will have different entries for apply and table sync\nworker. 
If this understanding is correct, won't it be better to\nrepresent this as below?\n\ntypedef struct PgStat_StatSubWorkerEntry\n{\n /* hash key */\n Oid subid;\n Oid relid\n\n /* worker stats which includes xact stats */\n PgStat_SubWorkerStats worker_stats\n\n /* error stats */\n PgStat_StatSubErrEntry worker_error_stats;\n} PgStat_StatSubWorkerEntry;\n\n\ntypedef struct PgStat_SubWorkerStats\n{\n /* define existing stats here */\n....\n\n /* transaction stats of subscription */\n PgStat_Counter xact_commit;\n PgStat_Counter xact_commit_bytes;\n PgStat_Counter xact_error;\n PgStat_Counter xact_error_bytes;\n PgStat_Counter xact_abort;\n PgStat_Counter xact_abort_bytes;\n} PgStat_SubWorkerStats;\n\nNow, at drop subscription, we do need to find and remove all the subid\n+ relid entries.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 08:54:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Sep 24, 2021 at 7:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Sep 3, 2021 at 4:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>\n> > I am attaching a version of such a function, plus some tests of your patch (since it does not appear to have any). Would you mind reviewing these and giving comments or including them in your next patch version?\n> >\n>\n> I've looked at the patch and here are some comments:\n>\n> +\n> +-- no errors should be reported\n> +SELECT * FROM pg_stat_subscription_errors;\n> +\n>\n> +\n> +-- Test that the subscription errors view exists, and has the right columns\n> +-- If we expected any rows to exist, we would need to filter out unstable\n> +-- columns. But since there should be no errors, we just select them all.\n> +select * from pg_stat_subscription_errors;\n>\n> The patch adds checks of pg_stat_subscription_errors in order to test\n> if the subscription doesn't have any error. But since the subscription\n> errors are updated in an asynchronous manner, we cannot say the\n> subscription is working fine by checking the view only once.\n>\n\nOne question I have here is, can we reliably write few tests just for\nthe new view patch? Right now, it has no test, having a few tests will\nbe better. Here, because the apply worker will keep on failing till we\nstop it or resolve the conflict, can we rely on that fact? The idea\nis that even if one of the entry is missed by stats collector, a new\none (probably the same one) will be issued and we can wait till we see\none error in view. We can add additional PostgresNode.pm\ninfrastructure once the main patch is committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 09:15:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 6:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Sure, but each tablesync worker must have a separate relid. Why can't\n> > > we have a single hash table for both apply and table sync workers\n> > > which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> > > always be zero (InvalidOId) and tablesync workers will have a unique\n> > > OID for rel_id, so we should be able to uniquely identify each of\n> > > apply and table sync workers.\n> >\n> > What I imagined is to extend the subscription statistics, for\n> > instance, transaction stats[1]. By having a hash table for\n> > subscriptions, we can store those statistics into an entry of the hash\n> > table and we can think of subscription errors as also statistics of\n> > the subscription. So we can have another hash table for errors in an\n> > entry of the subscription hash table. For example, the subscription\n> > entry struct will be something like:\n> >\n> > typedef struct PgStat_StatSubEntry\n> > {\n> > Oid subid; /* hash key */\n> >\n> > HTAB *errors; /* apply and table sync errors */\n> >\n> > /* transaction stats of subscription */\n> > PgStat_Counter xact_commit;\n> > PgStat_Counter xact_commit_bytes;\n> > PgStat_Counter xact_error;\n> > PgStat_Counter xact_error_bytes;\n> > PgStat_Counter xact_abort;\n> > PgStat_Counter xact_abort_bytes;\n> > PgStat_Counter failure_count;\n> > } PgStat_StatSubEntry;\n> >\n>\n> I think these additional stats will be displayed via\n> pg_stat_subscription, right? If so, the current stats of that view are\n> all in-memory and are per LogicalRepWorker which means that for those\n> stats also we will have different entries for apply and table sync\n> worker. 
If this understanding is correct, won't it be better to\n> represent this as below?\n\nI was thinking that we have a different stats view for example\npg_stat_subscription_xacts that has entries per subscription. But your\nidea seems better to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 27 Sep 2021 12:50:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Sep 27, 2021 at 6:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Sure, but each tablesync worker must have a separate relid. Why can't\n> > > > we have a single hash table for both apply and table sync workers\n> > > > which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> > > > always be zero (InvalidOId) and tablesync workers will have a unique\n> > > > OID for rel_id, so we should be able to uniquely identify each of\n> > > > apply and table sync workers.\n> > >\n> > > What I imagined is to extend the subscription statistics, for\n> > > instance, transaction stats[1]. By having a hash table for\n> > > subscriptions, we can store those statistics into an entry of the hash\n> > > table and we can think of subscription errors as also statistics of\n> > > the subscription. So we can have another hash table for errors in an\n> > > entry of the subscription hash table. For example, the subscription\n> > > entry struct will be something like:\n> > >\n> > > typedef struct PgStat_StatSubEntry\n> > > {\n> > > Oid subid; /* hash key */\n> > >\n> > > HTAB *errors; /* apply and table sync errors */\n> > >\n> > > /* transaction stats of subscription */\n> > > PgStat_Counter xact_commit;\n> > > PgStat_Counter xact_commit_bytes;\n> > > PgStat_Counter xact_error;\n> > > PgStat_Counter xact_error_bytes;\n> > > PgStat_Counter xact_abort;\n> > > PgStat_Counter xact_abort_bytes;\n> > > PgStat_Counter failure_count;\n> > > } PgStat_StatSubEntry;\n> > >\n> >\n> > I think these additional stats will be displayed via\n> > pg_stat_subscription, right? 
If so, the current stats of that view are\n> > all in-memory and are per LogicalRepWorker which means that for those\n> > stats also we will have different entries for apply and table sync\n> > worker. If this understanding is correct, won't it be better to\n> > represent this as below?\n>\n> I was thinking that we have a different stats view for example\n> pg_stat_subscription_xacts that has entries per subscription. But your\n> idea seems better to me.\n\nI mean that showing statistics (including transaction statistics and\nerrors) per logical replication worker seems better to me, no matter\nwhat view shows these statistics. I'll change the patch in that way.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 27 Sep 2021 14:31:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Sep 24, 2021 at 7:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Sep 3, 2021 at 4:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> >\n> > > I am attaching a version of such a function, plus some tests of your patch (since it does not appear to have any). Would you mind reviewing these and giving comments or including them in your next patch version?\n> > >\n> >\n> > I've looked at the patch and here are some comments:\n> >\n> > +\n> > +-- no errors should be reported\n> > +SELECT * FROM pg_stat_subscription_errors;\n> > +\n> >\n> > +\n> > +-- Test that the subscription errors view exists, and has the right columns\n> > +-- If we expected any rows to exist, we would need to filter out unstable\n> > +-- columns. But since there should be no errors, we just select them all.\n> > +select * from pg_stat_subscription_errors;\n> >\n> > The patch adds checks of pg_stat_subscription_errors in order to test\n> > if the subscription doesn't have any error. But since the subscription\n> > errors are updated in an asynchronous manner, we cannot say the\n> > subscription is working fine by checking the view only once.\n> >\n>\n> One question I have here is, can we reliably write few tests just for\n> the new view patch? Right now, it has no test, having a few tests will\n> be better. Here, because the apply worker will keep on failing till we\n> stop it or resolve the conflict, can we rely on that fact? The idea\n> is that even if one of the entry is missed by stats collector, a new\n> one (probably the same one) will be issued and we can wait till we see\n> one error in view. We can add additional PostgresNode.pm\n> infrastructure once the main patch is committed.\n\nYes, the new tests added by 0003 patch (skip_xid patch) use that fact.\nAfter the error is shown in the view, we fetch the XID from the view\nto specify as skip_xid. 
The tests just for the\npg_stat_subscription_errors view will be a subset of these tests. So\nprobably we can add it in 0001 patch and 0003 patch can extend the\ntests so that it tests skip_xid option.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 27 Sep 2021 14:49:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 11:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 12:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Sep 24, 2021 at 7:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Sep 3, 2021 at 4:33 AM Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> > >\n> > > > I am attaching a version of such a function, plus some tests of your patch (since it does not appear to have any). Would you mind reviewing these and giving comments or including them in your next patch version?\n> > > >\n> > >\n> > > I've looked at the patch and here are some comments:\n> > >\n> > > +\n> > > +-- no errors should be reported\n> > > +SELECT * FROM pg_stat_subscription_errors;\n> > > +\n> > >\n> > > +\n> > > +-- Test that the subscription errors view exists, and has the right columns\n> > > +-- If we expected any rows to exist, we would need to filter out unstable\n> > > +-- columns. But since there should be no errors, we just select them all.\n> > > +select * from pg_stat_subscription_errors;\n> > >\n> > > The patch adds checks of pg_stat_subscription_errors in order to test\n> > > if the subscription doesn't have any error. But since the subscription\n> > > errors are updated in an asynchronous manner, we cannot say the\n> > > subscription is working fine by checking the view only once.\n> > >\n> >\n> > One question I have here is, can we reliably write few tests just for\n> > the new view patch? Right now, it has no test, having a few tests will\n> > be better. Here, because the apply worker will keep on failing till we\n> > stop it or resolve the conflict, can we rely on that fact? The idea\n> > is that even if one of the entry is missed by stats collector, a new\n> > one (probably the same one) will be issued and we can wait till we see\n> > one error in view. 
We can add additional PostgresNode.pm\n> > infrastructure once the main patch is committed.\n>\n> Yes, the new tests added by 0003 patch (skip_xid patch) use that fact.\n> After the error is shown in the view, we fetch the XID from the view\n> to specify as skip_xid. The tests just for the\n> pg_stat_subscription_errors view will be a subset of these tests. So\n> probably we can add it in 0001 patch and 0003 patch can extend the\n> tests so that it tests skip_xid option.\n>\n\nThis makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 11:24:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 11:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 27, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 27, 2021 at 6:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > Sure, but each tablesync worker must have a separate relid. Why can't\n> > > > > we have a single hash table for both apply and table sync workers\n> > > > > which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> > > > > always be zero (InvalidOId) and tablesync workers will have a unique\n> > > > > OID for rel_id, so we should be able to uniquely identify each of\n> > > > > apply and table sync workers.\n> > > >\n> > > > What I imagined is to extend the subscription statistics, for\n> > > > instance, transaction stats[1]. By having a hash table for\n> > > > subscriptions, we can store those statistics into an entry of the hash\n> > > > table and we can think of subscription errors as also statistics of\n> > > > the subscription. So we can have another hash table for errors in an\n> > > > entry of the subscription hash table. 
For example, the subscription\n> > > > entry struct will be something like:\n> > > >\n> > > > typedef struct PgStat_StatSubEntry\n> > > > {\n> > > > Oid subid; /* hash key */\n> > > >\n> > > > HTAB *errors; /* apply and table sync errors */\n> > > >\n> > > > /* transaction stats of subscription */\n> > > > PgStat_Counter xact_commit;\n> > > > PgStat_Counter xact_commit_bytes;\n> > > > PgStat_Counter xact_error;\n> > > > PgStat_Counter xact_error_bytes;\n> > > > PgStat_Counter xact_abort;\n> > > > PgStat_Counter xact_abort_bytes;\n> > > > PgStat_Counter failure_count;\n> > > > } PgStat_StatSubEntry;\n> > > >\n> > >\n> > > I think these additional stats will be displayed via\n> > > pg_stat_subscription, right? If so, the current stats of that view are\n> > > all in-memory and are per LogicalRepWorker which means that for those\n> > > stats also we will have different entries for apply and table sync\n> > > worker. If this understanding is correct, won't it be better to\n> > > represent this as below?\n> >\n> > I was thinking that we have a different stats view for example\n> > pg_stat_subscription_xacts that has entries per subscription. But your\n> > idea seems better to me.\n>\n> I mean that showing statistics (including transaction statistics and\n> errors) per logical replication worker seems better to me, no matter\n> what view shows these statistics. I'll change the patch in that way.\n>\n\nSounds good.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Sep 2021 11:25:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Sep 27, 2021 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Sep 27, 2021 at 11:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Sep 27, 2021 at 12:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Sep 27, 2021 at 12:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Sep 27, 2021 at 6:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Sat, Sep 25, 2021 at 4:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > Sure, but each tablesync worker must have a separate relid. Why can't\n> > > > > > we have a single hash table for both apply and table sync workers\n> > > > > > which are hashed by sub_id + rel_id? For apply worker, the rel_id will\n> > > > > > always be zero (InvalidOId) and tablesync workers will have a unique\n> > > > > > OID for rel_id, so we should be able to uniquely identify each of\n> > > > > > apply and table sync workers.\n> > > > >\n> > > > > What I imagined is to extend the subscription statistics, for\n> > > > > instance, transaction stats[1]. By having a hash table for\n> > > > > subscriptions, we can store those statistics into an entry of the hash\n> > > > > table and we can think of subscription errors as also statistics of\n> > > > > the subscription. So we can have another hash table for errors in an\n> > > > > entry of the subscription hash table. For example, the subscription\n> > > > > entry struct will be something like:\n> > > > >\n> > > > > typedef struct PgStat_StatSubEntry\n> > > > > {\n> > > > > Oid subid; /* hash key */\n> > > > >\n> > > > > HTAB *errors; /* apply and table sync errors */\n> > > > >\n> > > > > /* transaction stats of subscription */\n> > > > > PgStat_Counter xact_commit;\n> > > > > PgStat_Counter xact_commit_bytes;\n> > > > > PgStat_Counter xact_error;\n> > > > > PgStat_Counter xact_error_bytes;\n> > > > > PgStat_Counter xact_abort;\n> > > > > PgStat_Counter xact_abort_bytes;\n> > > > > PgStat_Counter failure_count;\n> > > > > } PgStat_StatSubEntry;\n> > > > >\n> > > >\n> > > > I think these additional stats will be displayed via\n> > > > pg_stat_subscription, right? If so, the current stats of that view are\n> > > > all in-memory and are per LogicalRepWorker which means that for those\n> > > > stats also we will have different entries for apply and table sync\n> > > > worker. If this understanding is correct, won't it be better to\n> > > > represent this as below?\n> > >\n> > > I was thinking that we have a different stats view for example\n> > > pg_stat_subscription_xacts that has entries per subscription. But your\n> > > idea seems better to me.\n> >\n> > I mean that showing statistics (including transaction statistics and\n> > errors) per logical replication worker seems better to me, no matter\n> > what view shows these statistics. I'll change the patch in that way.\n> >\n>\n\nI've attached updated patches that incorporate all comments I got so\nfar. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 30 Sep 2021 14:45:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 30.09.21 07:45, Masahiko Sawada wrote:\n> I've attached updated patches that incorporate all comments I got so\n> far. Please review them.\n\nI'm uneasy about the way the xids-to-be-skipped are presented as \nsubscriptions options, similar to settings such as \"binary\". I see how \nthat is convenient, but it's not really the same thing, in how you use \nit, is it? Even if we share some details internally, I feel that there \nshould be a separate syntax somehow.\n\nAlso, what happens when you forget to reset the xid after it has passed? \n Will it get skipped again after wraparound?\n\n\n",
"msg_date": "Thu, 30 Sep 2021 22:05:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 30.09.21 07:45, Masahiko Sawada wrote:\n> > I've attached updated patches that incorporate all comments I got so\n> > far. Please review them.\n>\n> I'm uneasy about the way the xids-to-be-skipped are presented as\n> subscriptions options, similar to settings such as \"binary\". I see how\n> that is convenient, but it's not really the same thing, in how you use\n> it, is it? Even if we share some details internally, I feel that there\n> should be a separate syntax somehow.\n\nSince I was thinking that ALTER SUBSCRIPTION ... SET is used to alter\nparameters originally set by CREATE SUBSCRIPTION, in the first several\nversion patches it added a separate syntax for this feature like ALTER\nSUBSCRIPTION ... SET SKIP TRANSACTION xxx. But Amit was concerned\nabout an additional syntax and consistency with disable_on_error[1]\nwhich is proposed by Mark Diliger[2], so I’ve changed it to a\nsubscription option. I tried to find a policy of that by checking the\nexisting syntaxes but I could not find, and interestingly when it\ncomes to ALTER SUBSCRIPTION syntax, we support both ENABLE/DISABLE\nsyntax and SET (enabled = on/off) option.\n\n> Also, what happens when you forget to reset the xid after it has passed?\n> Will it get skipped again after wraparound?\n\nYes. Currently it's a user's responsibility. We thoroughly documented\nthe risk of this feature and thus it should be used as a last resort\nsince it may easily make the subscriber inconsistent, especially if a\nuser specifies the wrong transaction ID.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1LjrU8x%2Bx%3DbFazVD10pgOVy0PEE8mpz3nQhDG%2BmmU8ivQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/DB35438F-9356-4841-89A0-412709EBD3AB%40enterprisedb.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Oct 2021 10:00:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 6:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > Also, what happens when you forget to reset the xid after it has passed?\n> > Will it get skipped again after wraparound?\n>\n> Yes.\n>\n\nAren't we resetting the skip_xid once we skip that transaction in\nstop_skipping_changes()? If so, it shouldn't be possible to skip it\nagain after the wraparound. Am I missing something?\n\nNow, if the user has wrongly set some XID which we can't skip as that\nis already in past or something like that then I think it is the\nuser's problem and that's why it can be done only by super users. I\nthink we have even thought of protecting that via cross-checking with\nthe information in view but as the view data is lossy, we can't rely\non that. I think users can even set some valid XID that never has any\nerror and we will still skip it which is what can be done today also\nby pg_replication_origin_advance(). I am not sure if we can do much\nabout such scenarios except to carefully document them.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 11:20:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 6:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 30.09.21 07:45, Masahiko Sawada wrote:\n> > > I've attached updated patches that incorporate all comments I got so\n> > > far. Please review them.\n> >\n> > I'm uneasy about the way the xids-to-be-skipped are presented as\n> > subscriptions options, similar to settings such as \"binary\". I see how\n> > that is convenient, but it's not really the same thing, in how you use\n> > it, is it? Even if we share some details internally, I feel that there\n> > should be a separate syntax somehow.\n>\n> Since I was thinking that ALTER SUBSCRIPTION ... SET is used to alter\n> parameters originally set by CREATE SUBSCRIPTION, in the first several\n> version patches it added a separate syntax for this feature like ALTER\n> SUBSCRIPTION ... SET SKIP TRANSACTION xxx. But Amit was concerned\n> about an additional syntax and consistency with disable_on_error[1]\n> which is proposed by Mark Diliger[2], so I’ve changed it to a\n> subscription option.\n>\n\nYeah, the basic idea is that this is not the only option we will\nsupport for taking actions on error/conflict. For example, we might\nwant to disable subscriptions or allow skipping transactions based on\nXID, LSN, etc. So, developing separate syntax for each of the options\ndoesn't seem like a good idea. However considering Peter's point, how\nabout something like:\n\nAlter Subscription <sub_name> On Error ( subscription_parameter [=\nvalue] [, ... ] );\nOR\nAlter Subscription <sub_name> On Conflict ( subscription_parameter [=\nvalue] [, ... ] );\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 1 Oct 2021 14:02:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 2:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 6:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > Also, what happens when you forget to reset the xid after it has passed?\n> > > Will it get skipped again after wraparound?\n> >\n> > Yes.\n> >\n>\n> Aren't we resetting the skip_xid once we skip that transaction in\n> stop_skipping_changes()? If so, it shouldn't be possible to skip it\n> again after the wraparound. Am I missing something?\n\nOops, I'd misunderstood the question. Yes, Amit is right. Once we skip\nthe transaction, skip_xid is automatically reset. So users don't need\nto reset it manually after skipping the transaction. Sorry for the\nconfusion.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Oct 2021 18:11:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 1, 2021 at 5:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 6:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 30.09.21 07:45, Masahiko Sawada wrote:\n> > > > I've attached updated patches that incorporate all comments I got so\n> > > > far. Please review them.\n> > >\n> > > I'm uneasy about the way the xids-to-be-skipped are presented as\n> > > subscriptions options, similar to settings such as \"binary\". I see how\n> > > that is convenient, but it's not really the same thing, in how you use\n> > > it, is it? Even if we share some details internally, I feel that there\n> > > should be a separate syntax somehow.\n> >\n> > Since I was thinking that ALTER SUBSCRIPTION ... SET is used to alter\n> > parameters originally set by CREATE SUBSCRIPTION, in the first several\n> > version patches it added a separate syntax for this feature like ALTER\n> > SUBSCRIPTION ... SET SKIP TRANSACTION xxx. But Amit was concerned\n> > about an additional syntax and consistency with disable_on_error[1]\n> > which is proposed by Mark Diliger[2], so I’ve changed it to a\n> > subscription option.\n> >\n>\n> Yeah, the basic idea is that this is not the only option we will\n> support for taking actions on error/conflict. For example, we might\n> want to disable subscriptions or allow skipping transactions based on\n> XID, LSN, etc.\n\nI guess disabling subscriptions on error/conflict and skipping the\nparticular transactions are somewhat different types of functions.\nDisabling subscriptions on error/conflict seems likes a setting\nparameter of subscriptions. The users might want to specify this\noption at creation time. Whereas, skipping the particular transaction\nis a repair function that the user might want to use on the spot in\ncase of a failure. I’m concerned a bit that combining these functions\nto one syntax could confuse the users.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Oct 2021 09:31:06 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 6:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 1, 2021 at 5:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 1, 2021 at 6:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 1, 2021 at 5:05 AM Peter Eisentraut\n> > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > >\n> > > > On 30.09.21 07:45, Masahiko Sawada wrote:\n> > > > > I've attached updated patches that incorporate all comments I got so\n> > > > > far. Please review them.\n> > > >\n> > > > I'm uneasy about the way the xids-to-be-skipped are presented as\n> > > > subscriptions options, similar to settings such as \"binary\". I see how\n> > > > that is convenient, but it's not really the same thing, in how you use\n> > > > it, is it? Even if we share some details internally, I feel that there\n> > > > should be a separate syntax somehow.\n> > >\n> > > Since I was thinking that ALTER SUBSCRIPTION ... SET is used to alter\n> > > parameters originally set by CREATE SUBSCRIPTION, in the first several\n> > > version patches it added a separate syntax for this feature like ALTER\n> > > SUBSCRIPTION ... SET SKIP TRANSACTION xxx. But Amit was concerned\n> > > about an additional syntax and consistency with disable_on_error[1]\n> > > which is proposed by Mark Diliger[2], so I’ve changed it to a\n> > > subscription option.\n> > >\n> >\n> > Yeah, the basic idea is that this is not the only option we will\n> > support for taking actions on error/conflict. For example, we might\n> > want to disable subscriptions or allow skipping transactions based on\n> > XID, LSN, etc.\n>\n> I guess disabling subscriptions on error/conflict and skipping the\n> particular transactions are somewhat different types of functions.\n> Disabling subscriptions on error/conflict seems likes a setting\n> parameter of subscriptions. The users might want to specify this\n> option at creation time.\n>\n\nOkay, but they can still specify it by using \"On Error\" syntax.\n\n> Whereas, skipping the particular transaction\n> is a repair function that the user might want to use on the spot in\n> case of a failure. I’m concerned a bit that combining these functions\n> to one syntax could confuse the users.\n>\n\nFair enough, I was mainly trying to combine the syntax for all actions\nthat we can take \"On Error\". We can allow to set them either at Create\nSubscription or Alter Subscription time.\n\nI think here the main point is that does this addresses Peter's\nconcern for this Patch to use a separate syntax? Peter E., can you\nplease confirm? Do let us know if you have something else going in\nyour mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Oct 2021 11:01:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 4, 2021 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think here the main point is that does this addresses Peter's\n> concern for this Patch to use a separate syntax? Peter E., can you\n> please confirm? Do let us know if you have something else going in\n> your mind?\n>\n\nPeter's concern seemed to be that the use of a subscription option,\nthough convenient, isn't an intuitive natural fit for providing this\nfeature (i.e. ability to skip a transaction by xid). I tend to have\nthat feeling about using a subscription option for this feature. I'm\nnot sure what possible alternative syntax he had in mind and currently\ncan't really think of a good one myself that fits the purpose.\n\nI think that the 1st and 2nd patch are useful in their own right, but\ncouldn't this feature (i.e. the 3rd patch) be provided instead as an\nadditional Replication Management function (see 9.27.6)?\ne.g. pg_replication_skip_xid\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 6 Oct 2021 13:18:39 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, September 30, 2021 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached updated patches that incorporate all comments I got so far. Please\r\n> review them.\r\nHi\r\n\r\n\r\nSorry, if I misunderstand something but\r\ndid someone check what happens when we\r\nexecute ALTER SUBSCRIPTION ... RESET (streaming)\r\nin the middle of one txn which has several streaming of data to the sub,\r\nespecially after some part of txn has been already streamed.\r\nMy intention of this is something like *if* we can find an actual harm of this,\r\nI wanted to suggest the necessity of a safeguard or some measure into the patch.\r\n\r\nAn example)\r\n\r\nSet the logical_decoding_work_mem = 64kB on the pub.\r\nand create a table and subscription with streaming = true.\r\nIn addition, log_min_messages = DEBUG1 on the sub\r\nis helpful to check the LOG on the sub in stream_open_file().\r\n\r\n<Session 1> connect to the publisher\r\n\r\nBEGIN;\r\nINSERT INTO tab VALUES (generate_series(1, 1000)); -- this exceeds the memory limit\r\nSELECT * FROM pg_stat_replication_slots; -- check the actual streaming bytes&counts just in case\r\n\r\n<Session 2> connect to the subscriber\r\n-- after checking some logs of \"open file .... for streamed changes\" on the sub\r\nALTER SUBSCRIPTION mysub RESET (streaming)\r\n\r\n<Session 1>\r\nINSERT INTO tab VALUES (generate_series(1001, 2000)); -- again, exceeds the limit\r\nCOMMIT;\r\n\r\n\r\nI observed that the subscriber doesn't\r\naccept STREAM_COMMIT in this case but gets BEGIN&COMMIT instead at the end.\r\nI couldn't find any apparent and immediate issue from those steps\r\nbut is that no problem ?\r\nProbably, this kind of situation applies to other reset target options ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 8 Oct 2021 07:09:36 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 3:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached updated patches that incorporate all comments I got so\n> far. Please review them.\n>\n\nSome comments about the v15-0001 patch:\n\n(1) patch adds a whitespace error\n\nApplying: Add a subscription errors statistics view\n\"pg_stat_subscription_errors\".\n.git/rebase-apply/patch:1656: new blank line at EOF.\n+\nwarning: 1 line adds whitespace errors.\n\n(2) Patch comment says \"This commit adds a new system view\npg_stat_logical_replication_errors ...\"\nBUT this is the wrong name, it should be \"pg_stat_subscription_errors\".\n\n\ndoc/src/sgml/monitoring.sgml\n\n(3)\n\"Message of the error\" doesn't sound right. I suggest just saying \"The\nerror message\".\n\n(4) view column \"last_failed_time\"\nI think it would be better to name this \"last_error_time\".\n\n\nsrc/backend/postmaster/pgstat.c\n\n(5) pgstat_vacuum_subworker_stats()\n\nSpelling mistake in the following comment:\n\n/* Create a map for mapping subscriptoin OID and database OID */\n\nsubscriptoin -> subscription\n\n(6)\nIn the following functions:\n\npgstat_read_statsfiles\npgstat_read_db_statsfile_timestamp\n\nThe following comment should say \"... struct describing subscription\nworker statistics.\"\n(i.e. need to remove the \"a\")\n\n+ * 'S' A PgStat_StatSubWorkerEntry struct describing a\n+ * subscription worker statistics.\n\n\n(7) pgstat_get_subworker_entry\n\nSuggest comment change:\n\nBEFORE:\n+ * Return the entry of subscription worker entry with the subscription\nAFTER:\n+ * Return subscription worker entry with the given subscription\n\n(8) pgstat_recv_subworker_error\n\n+ /*\n+ * Update only the counter and timestamp if we received the same error\n+ * again\n+ */\n+ if (wentry->relid == msg->m_relid &&\n+ wentry->command == msg->m_command &&\n+ wentry->xid == msg->m_xid &&\n+ strncmp(wentry->message, msg->m_message, strlen(wentry->message)) == 0)\n+ {\n\nIs there a reason that the above check uses strncmp() with\nstrlen(wentry->message), instead of just strcmp()?\nmsg->m_message is treated as the same error message if it is the same\nup to strlen(wentry->message)?\nPerhaps if that is intentional, then the comment should be updated.\n\nsrc/tools/pgindent/typedefs.list\n\n(9)\nThe added \"PgStat_SubWorkerError\" should be removed from the\ntypedefs.list (as there is no such new typedef).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 8 Oct 2021 22:17:38 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, September 30, 2021 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached updated patches that incorporate all comments I got so far. Please\r\n> review them.\r\nHello\r\n\r\n\r\nMinor two comments for v15-0001 patch.\r\n\r\n(1) a typo in pgstat_vacuum_subworker_stat()\r\n\r\n+ /*\r\n+ * This subscription is live. The next step is that we search errors\r\n+ * of the table sync workers who are already in sync state. These\r\n+ * errors should be removed.\r\n+ */\r\n\r\nThis subscription is \"alive\" ?\r\n\r\n\r\n(2) Suggestion to add one comment next to '0' in ApplyWorkerMain()\r\n\r\n+ /* report the table sync error */\r\n+ pgstat_report_subworker_error(MyLogicalRepWorker->subid,\r\n+ MyLogicalRepWorker->relid,\r\n+ MyLogicalRepWorker->relid,\r\n+ 0,\r\n+ InvalidTransactionId,\r\n+ errdata->message);\r\n\r\nHow about writing /* no corresponding message type for table synchronization */ or something ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 8 Oct 2021 12:22:07 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 04.10.21 02:31, Masahiko Sawada wrote:\n> I guess disabling subscriptions on error/conflict and skipping the\n> particular transactions are somewhat different types of functions.\n> Disabling subscriptions on error/conflict seems likes a setting\n> parameter of subscriptions. The users might want to specify this\n> option at creation time. Whereas, skipping the particular transaction\n> is a repair function that the user might want to use on the spot in\n> case of a failure. I’m concerned a bit that combining these functions\n> to one syntax could confuse the users.\n\nAlso, would the skip option be dumped and restored using pg_dump? Maybe \nthere is an argument for yes, but if not, then we probably need a \ndifferent path of handling it separate from the more permanent options.\n\n\n",
"msg_date": "Sun, 10 Oct 2021 16:04:31 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 8, 2021 at 4:09 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, September 30, 2021 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached updated patches that incorporate all comments I got so far. Please\n> > review them.\n> Hi\n>\n>\n> Sorry, if I misunderstand something but\n> did someone check what happens when we\n> execute ALTER SUBSCRIPTION ... RESET (streaming)\n> in the middle of one txn which has several streaming of data to the sub,\n> especially after some part of txn has been already streamed.\n> My intention of this is something like *if* we can find an actual harm of this,\n> I wanted to suggest the necessity of a safeguard or some measure into the patch.\n>\n> An example)\n>\n> Set the logical_decoding_work_mem = 64kB on the pub.\n> and create a table and subscription with streaming = true.\n> In addition, log_min_messages = DEBUG1 on the sub\n> is helpful to check the LOG on the sub in stream_open_file().\n>\n> <Session 1> connect to the publisher\n>\n> BEGIN;\n> INSERT INTO tab VALUES (generate_series(1, 1000)); -- this exceeds the memory limit\n> SELECT * FROM pg_stat_replication_slots; -- check the actual streaming bytes&counts just in case\n>\n> <Session 2> connect to the subscriber\n> -- after checking some logs of \"open file .... for streamed changes\" on the sub\n> ALTER SUBSCRIPTION mysub RESET (streaming)\n>\n> <Session 1>\n> INSERT INTO tab VALUES (generate_series(1001, 2000)); -- again, exceeds the limit\n> COMMIT;\n>\n>\n> I observed that the subscriber doesn't\n> accept STREAM_COMMIT in this case but gets BEGIN&COMMIT instead at the end.\n> I couldn't find any apparent and immediate issue from those steps\n> but is that no problem ?\n> Probably, this kind of situation applies to other reset target options ?\n\nI think that if a subscription parameter such as ‘streaming’ and\n‘binary’ is changed, an apply worker exits and the launcher starts a\nnew worker (see maybe_reread_subscription()). So I guess that in this\ncase, the apply worker exited during receiving streamed changes,\nrestarted, and received the same changes with ‘streaming = off’,\ntherefore it got BEGIN and COMMIT instead. I think that this happens\neven by using ‘SET (‘streaming’ = off)’.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 11 Oct 2021 11:51:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, October 11, 2021 11:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Fri, Oct 8, 2021 at 4:09 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Thursday, September 30, 2021 2:45 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > I've attached updated patches that incorporate all comments I got so\r\n> > > far. Please review them.\r\n> > Sorry, if I misunderstand something but did someone check what happens\r\n> > when we execute ALTER SUBSCRIPTION ... RESET (streaming) in the middle\r\n> > of one txn which has several streaming of data to the sub, especially\r\n> > after some part of txn has been already streamed.\r\n> > My intention of this is something like *if* we can find an actual harm\r\n> > of this, I wanted to suggest the necessity of a safeguard or some measure\r\n> into the patch.\r\n...\r\n> > I observed that the subscriber doesn't accept STREAM_COMMIT in this\r\n> > case but gets BEGIN&COMMIT instead at the end.\r\n> > I couldn't find any apparent and immediate issue from those steps but\r\n> > is that no problem ?\r\n> > Probably, this kind of situation applies to other reset target options ?\r\n> \r\n> I think that if a subscription parameter such as ‘streaming’ and ‘binary’ is\r\n> changed, an apply worker exits and the launcher starts a new worker (see\r\n> maybe_reread_subscription()). So I guess that in this case, the apply worker\r\n> exited during receiving streamed changes, restarted, and received the same\r\n> changes with ‘streaming = off’, therefore it got BEGIN and COMMIT instead. I\r\n> think that this happens even by using ‘SET (‘streaming’ = off)’.\r\nYou are right. Yes, I checked that the apply worker did exit\r\nand the new apply worker process dealt with the INSERT in the above case.\r\nAlso, setting streaming = false was same.\r\n\r\nThanks a lot for your explanation.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 11 Oct 2021 07:27:43 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Oct 10, 2021 at 11:04 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 04.10.21 02:31, Masahiko Sawada wrote:\n> > I guess disabling subscriptions on error/conflict and skipping the\n> > particular transactions are somewhat different types of functions.\n> > Disabling subscriptions on error/conflict seems likes a setting\n> > parameter of subscriptions. The users might want to specify this\n> > option at creation time. Whereas, skipping the particular transaction\n> > is a repair function that the user might want to use on the spot in\n> > case of a failure. I’m concerned a bit that combining these functions\n> > to one syntax could confuse the users.\n>\n> Also, would the skip option be dumped and restored using pg_dump? Maybe\n> there is an argument for yes, but if not, then we probably need a\n> different path of handling it separate from the more permanent options.\n\nGood point. I don’t think the skip option should be dumped and\nrestored using pg_dump since the utilization of transaction ids in\nanother installation is different.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 11 Oct 2021 16:30:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 8, 2021 at 8:17 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Thu, Sep 30, 2021 at 3:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches that incorporate all comments I got so\n> > far. Please review them.\n> >\n>\n> Some comments about the v15-0001 patch:\n\nThank you for the comments!\n\n>\n> (1) patch adds a whitespace error\n>\n> Applying: Add a subscription errors statistics view\n> \"pg_stat_subscription_errors\".\n> .git/rebase-apply/patch:1656: new blank line at EOF.\n> +\n> warning: 1 line adds whitespace errors.\n\nFixed.\n\n>\n> (2) Patch comment says \"This commit adds a new system view\n> pg_stat_logical_replication_errors ...\"\n> BUT this is the wrong name, it should be \"pg_stat_subscription_errors\".\n>\n>\n\nFixed.\n\n> doc/src/sgml/monitoring.sgml\n>\n> (3)\n> \"Message of the error\" doesn't sound right. I suggest just saying \"The\n> error message\".\n\nFixed.\n\n>\n> (4) view column \"last_failed_time\"\n> I think it would be better to name this \"last_error_time\".\n\nOkay, fixed.\n\n>\n>\n> src/backend/postmaster/pgstat.c\n>\n> (5) pgstat_vacuum_subworker_stats()\n>\n> Spelling mistake in the following comment:\n>\n> /* Create a map for mapping subscriptoin OID and database OID */\n>\n> subscriptoin -> subscription\n\nFixed.\n\n>\n> (6)\n> In the following functions:\n>\n> pgstat_read_statsfiles\n> pgstat_read_db_statsfile_timestamp\n>\n> The following comment should say \"... struct describing subscription\n> worker statistics.\"\n> (i.e. need to remove the \"a\")\n>\n> + * 'S' A PgStat_StatSubWorkerEntry struct describing a\n> + * subscription worker statistics.\n>\n\nFixed.\n\n>\n> (7) pgstat_get_subworker_entry\n>\n> Suggest comment change:\n>\n> BEFORE:\n> + * Return the entry of subscription worker entry with the subscription\n> AFTER:\n> + * Return subscription worker entry with the given subscription\n\nFixed.\n\n>\n> (8) pgstat_recv_subworker_error\n>\n> + /*\n> + * Update only the counter and timestamp if we received the same error\n> + * again\n> + */\n> + if (wentry->relid == msg->m_relid &&\n> + wentry->command == msg->m_command &&\n> + wentry->xid == msg->m_xid &&\n> + strncmp(wentry->message, msg->m_message, strlen(wentry->message)) == 0)\n> + {\n>\n> Is there a reason that the above check uses strncmp() with\n> strlen(wentry->message), instead of just strcmp()?\n> msg->m_message is treated as the same error message if it is the same\n> up to strlen(wentry->message)?\n> Perhaps if that is intentional, then the comment should be updated.\n\nIt's better to use strcmp() in this case. Fixed.\n\n>\n> src/tools/pgindent/typedefs.list\n>\n> (9)\n> The added \"PgStat_SubWorkerError\" should be removed from the\n> typedefs.list (as there is no such new typedef).\n\nFixed.\n\nI've attached updated patches.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 12 Oct 2021 13:59:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 8, 2021 at 9:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, September 30, 2021 2:45 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached updated patches that incorporate all comments I got so far. Please\n> > review them.\n> Hello\n>\n>\n> Minor two comments for v15-0001 patch.\n>\n> (1) a typo in pgstat_vacuum_subworker_stat()\n>\n> + /*\n> + * This subscription is live. The next step is that we search errors\n> + * of the table sync workers who are already in sync state. These\n> + * errors should be removed.\n> + */\n>\n> This subscription is \"alive\" ?\n>\n>\n> (2) Suggestion to add one comment next to '0' in ApplyWorkerMain()\n>\n> + /* report the table sync error */\n> + pgstat_report_subworker_error(MyLogicalRepWorker->subid,\n> + MyLogicalRepWorker->relid,\n> + MyLogicalRepWorker->relid,\n> + 0,\n> + InvalidTransactionId,\n> + errdata->message);\n>\n> How about writing /* no corresponding message type for table synchronization */ or something ?\n>\n\nThank you for the comments! Those comments are incorporated into the\nlatest patches I just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDST8-ykrCLcWbWnTLj1u52-ZhiEP%2BbRU7kv5oBhfSy_Q%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 12 Oct 2021 14:01:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached updated patches.\n>\n\nSome comments for the v16-0003 patch:\n\n(1) doc/src/sgml/logical-replication.sgml\n\nThe output from \"SELECT * FROM pg_stat_subscription_errors;\" still\nshows \"last_failed_time\" instead of \"last_error_time\".\n\ndoc/src/sgml/ref/alter_subscription.sgml\n(2)\n\nSuggested update (and fix typo: restrited -> restricted):\n\nBEFORE:\n+ Setting and resetting of <literal>skip_xid</literal> option is\n+ restrited to superusers.\nAFTER:\n+ The setting and resetting of the\n<literal>skip_xid</literal> option is\n+ restricted to superusers.\n\n(3)\nSuggested improvement to the wording:\n\nBEFORE:\n+ incoming change or by skipping the whole transaction. This option\n+ specifies transaction ID that logical replication worker skips to\n+ apply. The logical replication worker skips all data modification\nAFTER:\n+ incoming changes or by skipping the whole transaction. This option\n+ specifies the ID of the transaction whose application is to\nbe skipped\n+ by the logical replication worker. The logical replication worker\n+ skips all data modification\n\n(4) src/backend/replication/logical/worker.c\n\nSuggested improvement to the comment wording:\n\nBEFORE:\n+ * Stop the skipping transaction if enabled. Otherwise, commit the changes\nAFTER:\n+ * Stop skipping the transaction changes, if enabled. Otherwise,\ncommit the changes\n\n\n(5) skip_xid value validation\n\nThe validation of the specified skip_xid XID value isn't great.\nFor example, the following value are accepted:\n\n ALTER SUBSCRIPTION sub SET (skip_xid='123abcz');\n ALTER SUBSCRIPTION sub SET (skip_xid='99$@*');\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 12 Oct 2021 21:58:44 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached updated patches.\n>\n\nSome comments for the v16-0001 patch:\n\n\nsrc/backend/postmaster/pgstat.c\n\n(1) pgstat_vacuum_subworker_stat()\n\nRemove \"the\" from beginning of the following comment line:\n\n+ * the all the dead subscription worker statistics.\n\n\n(2) pgstat_reset_subscription_error_stats()\n\nThis function would be better named \"pgstat_reset_subscription_subworker_error\".\n\n\n(3) pgstat_report_subworker_purge()\n\nImprove comment:\n\nBEFORE:\n+ * Tell the collector about dead subscriptions.\nAFTER:\n+ * Tell the collector to remove dead subscriptions.\n\n\n(4) pgstat_get_subworker_entry()\n\nI notice that in the following code:\n\n+ if (create && !found)\n+ pgstat_reset_subworker_error(wentry, 0);\n\nThe newly-created PgStat_StatSubWorkerEntry doesn't get the \"dbid\"\nmember set, so I think it's a junk value in this case, yet the caller\nof pgstat_get_subworker_entry(..., true) is referencing it:\n\n+ /* Get the subscription worker stats */\n+ wentry = pgstat_get_subworker_entry(msg->m_subid, msg->m_subrelid, true);\n+ Assert(wentry);\n+\n+ /*\n+ * Update only the counter and timestamp if we received the same error\n+ * again\n+ */\n+ if (wentry->dbid == msg->m_dbid &&\n+ wentry->relid == msg->m_relid &&\n+ wentry->command == msg->m_command &&\n+ wentry->xid == msg->m_xid &&\n+ strcmp(wentry->message, msg->m_message) == 0)\n+ {\n+ wentry->count++;\n+ wentry->timestamp = msg->m_timestamp;\n+ return;\n+ }\n\nMaybe the cheapest solution is to just set dbid in\npgstat_reset_subworker_error()?\n\n\nsrc/backend/replication/logical/worker.c\n\n(5) Fix typo\n\nsynchroniztion -> synchronization\n\n+ * type for table synchroniztion.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 13 Oct 2021 12:59:53 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached updated patches.\n>\n\nA couple more comments for some issues that I noticed in the v16 patches:\n\nv16-0002\n\ndoc/src/sgml/ref/alter_subscription.sgml\n\n(1) Order of parameters that can be reset doesn't match those that can be set.\nAlso, it doesn't match the order specified in the documentation\nupdates in the v16-0003 patch.\n\nSuggested change:\n\nBEFORE:\n+ The parameters that can be reset are: <literal>streaming</literal>,\n+ <literal>binary</literal>, <literal>synchronous_commit</literal>.\nAFTER:\n+ The parameters that can be reset are:\n<literal>synchronous_commit</literal>,\n+ <literal>binary</literal>, <literal>streaming</literal>.\n\n\nv16-0003\n\ndoc/src/sgml/ref/alter_subscription.sgml\n\n(1) Documentation update says \"slot_name\" is a parameter that can be\nreset, but this is not correct, it can't be reset.\nAlso, the doc update is missing \"the\" before \"parameter\".\n\nSuggested change:\n\nBEFORE:\n+ The parameters that can be reset are: <literal>slot_name</literal>,\n+ <literal>synchronous_commit</literal>, <literal>binary</literal>,\n+ <literal>streaming</literal>, and following parameter:\nAFTER:\n+ The parameters that can be reset are:\n<literal>synchronous_commit</literal>,\n+ <literal>binary</literal>, <literal>streaming</literal>, and\nthe following\n+ parameter:\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 14 Oct 2021 19:45:33 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 12, 2021 at 7:58 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches.\n> >\n>\n> Some comments for the v16-0003 patch:\n\nThank you for the comments!\n\n>\n> (1) doc/src/sgml/logical-replication.sgml\n>\n> The output from \"SELECT * FROM pg_stat_subscription_errors;\" still\n> shows \"last_failed_time\" instead of \"last_error_time\".\n\nFixed.\n\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n> (2)\n>\n> Suggested update (and fix typo: restrited -> restricted):\n>\n> BEFORE:\n> + Setting and resetting of <literal>skip_xid</literal> option is\n> + restrited to superusers.\n> AFTER:\n> + The setting and resetting of the\n> <literal>skip_xid</literal> option is\n> + restricted to superusers.\n\nFixed.\n\n>\n> (3)\n> Suggested improvement to the wording:\n>\n> BEFORE:\n> + incoming change or by skipping the whole transaction. This option\n> + specifies transaction ID that logical replication worker skips to\n> + apply. The logical replication worker skips all data modification\n> AFTER:\n> + incoming changes or by skipping the whole transaction. This option\n> + specifies the ID of the transaction whose application is to\n> be skipped\n> + by the logical replication worker. The logical replication worker\n> + skips all data modification\n\nUpdated.\n\n>\n> (4) src/backend/replication/logical/worker.c\n>\n> Suggested improvement to the comment wording:\n>\n> BEFORE:\n> + * Stop the skipping transaction if enabled. Otherwise, commit the changes\n> AFTER:\n> + * Stop skipping the transaction changes, if enabled. 
Otherwise,\n> commit the changes\n\nFixed.\n\n>\n>\n> (5) skip_xid value validation\n>\n> The validation of the specified skip_xid XID value isn't great.\n> For example, the following value are accepted:\n>\n> ALTER SUBSCRIPTION sub SET (skip_xid='123abcz');\n> ALTER SUBSCRIPTION sub SET (skip_xid='99$@*');\n\nHmm, this is probably a problem of xid data type. For example, we can do like:\n\npostgres(1:12686)=# select 'aa123'::xid;\n xid\n-----\n 0\n(1 row)\n\npostgres(1:12686)=# select '123aa'::xid;\n xid\n-----\n 123\n(1 row)\n\nIt seems a problem to me. Perhaps we can fix it in a separate patch.\nWhat do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Oct 2021 10:33:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 13, 2021 at 10:59 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches.\n> >\n>\n> Some comments for the v16-0001 patch:\n>\n\nThank you for the comments!\n\n>\n> src/backend/postmaster/pgstat.c\n>\n> (1) pgstat_vacuum_subworker_stat()\n>\n> Remove \"the\" from beginning of the following comment line:\n>\n> + * the all the dead subscription worker statistics.\n\nFixed.\n\n>\n>\n> (2) pgstat_reset_subscription_error_stats()\n>\n> This function would be better named \"pgstat_reset_subscription_subworker_error\".\n\n'subworker' contains an abbreviation of 'subscription'. So it seems\nredundant to me. No?\n\n>\n>\n> (3) pgstat_report_subworker_purge()\n>\n> Improve comment:\n>\n> BEFORE:\n> + * Tell the collector about dead subscriptions.\n> AFTER:\n> + * Tell the collector to remove dead subscriptions.\n\nFixed.\n\n>\n>\n> (4) pgstat_get_subworker_entry()\n>\n> I notice that in the following code:\n>\n> + if (create && !found)\n> + pgstat_reset_subworker_error(wentry, 0);\n>\n> The newly-created PgStat_StatSubWorkerEntry doesn't get the \"dbid\"\n> member set, so I think it's a junk value in this case, yet the caller\n> of pgstat_get_subworker_entry(..., true) is referencing it:\n>\n> + /* Get the subscription worker stats */\n> + wentry = pgstat_get_subworker_entry(msg->m_subid, msg->m_subrelid, true);\n> + Assert(wentry);\n> +\n> + /*\n> + * Update only the counter and timestamp if we received the same error\n> + * again\n> + */\n> + if (wentry->dbid == msg->m_dbid &&\n> + wentry->relid == msg->m_relid &&\n> + wentry->command == msg->m_command &&\n> + wentry->xid == msg->m_xid &&\n> + strcmp(wentry->message, msg->m_message) == 0)\n> + {\n> + wentry->count++;\n> + wentry->timestamp = msg->m_timestamp;\n> + return;\n> + }\n>\n> Maybe the cheapest solution is to just set dbid in\n> pgstat_reset_subworker_error()?\n\nI've 
changed the code to reset dbid in pgstat_reset_subworker_error().\n\n>\n>\n> src/backend/replication/logical/worker.c\n>\n> (5) Fix typo\n>\n> synchroniztion -> synchronization\n>\n> + * type for table synchroniztion.\n\nFixed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Oct 2021 10:33:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 14, 2021 at 5:45 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Oct 12, 2021 at 4:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches.\n> >\n>\n> A couple more comments for some issues that I noticed in the v16 patches:\n>\n> v16-0002\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> (1) Order of parameters that can be reset doesn't match those that can be set.\n> Also, it doesn't match the order specified in the documentation\n> updates in the v16-0003 patch.\n>\n> Suggested change:\n>\n> BEFORE:\n> + The parameters that can be reset are: <literal>streaming</literal>,\n> + <literal>binary</literal>, <literal>synchronous_commit</literal>.\n> AFTER:\n> + The parameters that can be reset are:\n> <literal>synchronous_commit</literal>,\n> + <literal>binary</literal>, <literal>streaming</literal>.\n>\n\nFixed.\n\n>\n> v16-0003\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> (1) Documentation update says \"slot_name\" is a parameter that can be\n> reset, but this is not correct, it can't be reset.\n> Also, the doc update is missing \"the\" before \"parameter\".\n>\n> Suggested change:\n>\n> BEFORE:\n> + The parameters that can be reset are: <literal>slot_name</literal>,\n> + <literal>synchronous_commit</literal>, <literal>binary</literal>,\n> + <literal>streaming</literal>, and following parameter:\n> AFTER:\n> + The parameters that can be reset are:\n> <literal>synchronous_commit</literal>,\n> + <literal>binary</literal>, <literal>streaming</literal>, and\n> the following\n> + parameter:\n\nFixed.\n\nI've attached updated patches that incorporate all comments I got so far.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 18 Oct 2021 10:34:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 12:57 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, October 11, 2021 11:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Fri, Oct 8, 2021 at 4:09 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > On Thursday, September 30, 2021 2:45 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > > I've attached updated patches that incorporate all comments I got so\n> > > > far. Please review them.\n> > > Sorry, if I misunderstand something but did someone check what happens\n> > > when we execute ALTER SUBSCRIPTION ... RESET (streaming) in the middle\n> > > of one txn which has several streaming of data to the sub, especially\n> > > after some part of txn has been already streamed.\n> > > My intention of this is something like *if* we can find an actual harm\n> > > of this, I wanted to suggest the necessity of a safeguard or some measure\n> > into the patch.\n> ...\n> > > I observed that the subscriber doesn't accept STREAM_COMMIT in this\n> > > case but gets BEGIN&COMMIT instead at the end.\n> > > I couldn't find any apparent and immediate issue from those steps but\n> > > is that no problem ?\n> > > Probably, this kind of situation applies to other reset target options ?\n> >\n> > I think that if a subscription parameter such as ‘streaming’ and ‘binary’ is\n> > changed, an apply worker exits and the launcher starts a new worker (see\n> > maybe_reread_subscription()). So I guess that in this case, the apply worker\n> > exited during receiving streamed changes, restarted, and received the same\n> > changes with ‘streaming = off’, therefore it got BEGIN and COMMIT instead. I\n> > think that this happens even by using ‘SET (‘streaming’ = off)’.\n> You are right. 
Yes, I checked that the apply worker did exit\n> and the new apply worker process dealt with the INSERT in the above case.\n> Also, setting streaming = false was same.\n>\n\nI think you can additionally verify that temporary streaming files get\nremoved after restart.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Oct 2021 14:18:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 11, 2021 at 1:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sun, Oct 10, 2021 at 11:04 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 04.10.21 02:31, Masahiko Sawada wrote:\n> > > I guess disabling subscriptions on error/conflict and skipping the\n> > > particular transactions are somewhat different types of functions.\n> > > Disabling subscriptions on error/conflict seems likes a setting\n> > > parameter of subscriptions. The users might want to specify this\n> > > option at creation time. Whereas, skipping the particular transaction\n> > > is a repair function that the user might want to use on the spot in\n> > > case of a failure. I’m concerned a bit that combining these functions\n> > > to one syntax could confuse the users.\n> >\n> > Also, would the skip option be dumped and restored using pg_dump? Maybe\n> > there is an argument for yes, but if not, then we probably need a\n> > different path of handling it separate from the more permanent options.\n>\n> Good point. I don’t think the skip option should be dumped and\n> restored using pg_dump since the utilization of transaction ids in\n> another installation is different.\n>\n\nThis is a xid of publisher which subscriber wants to skip. So, even if\none restores the subscriber data in a different installation why would\nit matter till it points to the same publisher?\n\nEither way, can't we handle this in pg_dump?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 18 Oct 2021 14:37:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 6:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Oct 11, 2021 at 1:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sun, Oct 10, 2021 at 11:04 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> > >\n> > > On 04.10.21 02:31, Masahiko Sawada wrote:\n> > > > I guess disabling subscriptions on error/conflict and skipping the\n> > > > particular transactions are somewhat different types of functions.\n> > > > Disabling subscriptions on error/conflict seems likes a setting\n> > > > parameter of subscriptions. The users might want to specify this\n> > > > option at creation time. Whereas, skipping the particular transaction\n> > > > is a repair function that the user might want to use on the spot in\n> > > > case of a failure. I’m concerned a bit that combining these functions\n> > > > to one syntax could confuse the users.\n> > >\n> > > Also, would the skip option be dumped and restored using pg_dump? Maybe\n> > > there is an argument for yes, but if not, then we probably need a\n> > > different path of handling it separate from the more permanent options.\n> >\n> > Good point. I don’t think the skip option should be dumped and\n> > restored using pg_dump since the utilization of transaction ids in\n> > another installation is different.\n> >\n>\n> This is a xid of publisher which subscriber wants to skip. So, even if\n> one restores the subscriber data in a different installation why would\n> it matter till it points to the same publisher?\n>\n> Either way, can't we handle this in pg_dump?\n\nBecause of backups (dumps), I think we cannot expect that the user\nrestore it somewhere soon. 
If the dump is restored several months\nlater, the publisher could be a different installation (by rebuilding\nfrom scratch) or XID of the publisher could already be wrapped around.\nIt might be useful to dump the skip_xid by pg_dump in some cases, but\nI think it should be optional if we want to do that.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 19 Oct 2021 11:52:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 8:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 6:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Oct 11, 2021 at 1:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Sun, Oct 10, 2021 at 11:04 PM Peter Eisentraut\n> > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > >\n> > > > On 04.10.21 02:31, Masahiko Sawada wrote:\n> > > > > I guess disabling subscriptions on error/conflict and skipping the\n> > > > > particular transactions are somewhat different types of functions.\n> > > > > Disabling subscriptions on error/conflict seems likes a setting\n> > > > > parameter of subscriptions. The users might want to specify this\n> > > > > option at creation time. Whereas, skipping the particular transaction\n> > > > > is a repair function that the user might want to use on the spot in\n> > > > > case of a failure. I’m concerned a bit that combining these functions\n> > > > > to one syntax could confuse the users.\n> > > >\n> > > > Also, would the skip option be dumped and restored using pg_dump? Maybe\n> > > > there is an argument for yes, but if not, then we probably need a\n> > > > different path of handling it separate from the more permanent options.\n> > >\n> > > Good point. I don’t think the skip option should be dumped and\n> > > restored using pg_dump since the utilization of transaction ids in\n> > > another installation is different.\n> > >\n> >\n> > This is a xid of publisher which subscriber wants to skip. So, even if\n> > one restores the subscriber data in a different installation why would\n> > it matter till it points to the same publisher?\n> >\n> > Either way, can't we handle this in pg_dump?\n>\n> Because of backups (dumps), I think we cannot expect that the user\n> restore it somewhere soon. 
If the dump is restored several months\n> later, the publisher could be a different installation (by rebuilding\n> from scratch) or XID of the publisher could already be wrapped around.\n> It might be useful to dump the skip_xid by pg_dump in some cases, but\n> I think it should be optional if we want to do that.\n>\n\nAgreed, I think it depends on the use case, so we can keep it\noptional, or maybe in the initial version let's not dump it, and only\nif we later see the use case then we can add an optional parameter in\npg_dump. Do you think we need any special handling if we decide not to\ndump it? I think if we decide to dump it either optionally or\notherwise, then we do need changes in pg_dump.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 19 Oct 2021 09:07:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 19, 2021 at 12:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 19, 2021 at 8:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Oct 18, 2021 at 6:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Oct 11, 2021 at 1:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Sun, Oct 10, 2021 at 11:04 PM Peter Eisentraut\n> > > > <peter.eisentraut@enterprisedb.com> wrote:\n> > > > >\n> > > > > On 04.10.21 02:31, Masahiko Sawada wrote:\n> > > > > > I guess disabling subscriptions on error/conflict and skipping the\n> > > > > > particular transactions are somewhat different types of functions.\n> > > > > > Disabling subscriptions on error/conflict seems likes a setting\n> > > > > > parameter of subscriptions. The users might want to specify this\n> > > > > > option at creation time. Whereas, skipping the particular transaction\n> > > > > > is a repair function that the user might want to use on the spot in\n> > > > > > case of a failure. I’m concerned a bit that combining these functions\n> > > > > > to one syntax could confuse the users.\n> > > > >\n> > > > > Also, would the skip option be dumped and restored using pg_dump? Maybe\n> > > > > there is an argument for yes, but if not, then we probably need a\n> > > > > different path of handling it separate from the more permanent options.\n> > > >\n> > > > Good point. I don’t think the skip option should be dumped and\n> > > > restored using pg_dump since the utilization of transaction ids in\n> > > > another installation is different.\n> > > >\n> > >\n> > > This is a xid of publisher which subscriber wants to skip. 
So, even if\n> > > one restores the subscriber data in a different installation why would\n> > > it matter till it points to the same publisher?\n> > >\n> > > Either way, can't we handle this in pg_dump?\n> >\n> > Because of backups (dumps), I think we cannot expect that the user\n> > restore it somewhere soon. If the dump is restored several months\n> > later, the publisher could be a different installation (by rebuilding\n> > from scratch) or XID of the publisher could already be wrapped around.\n> > It might be useful to dump the skip_xid by pg_dump in some cases, but\n> > I think it should be optional if we want to do that.\n> >\n>\n> Agreed, I think it depends on the use case, so we can keep it\n> optional, or maybe in the initial version let's not dump it, and only\n> if we later see the use case then we can add an optional parameter in\n> pg_dump.\n\nAgreed. I prefer not to dump it in the first version since it's\ndifficult to remove the option once it's introduced.\n\n> Do you think we need any special handling if we decide not to\n> dump it? I think if we decide to dump it either optionally or\n> otherwise, then we do need changes in pg_dump.\n\nYeah, if we don't dump the skip_xid (which is the current patch\nbehavior), any special handling is not required for pg_dump. On the\nother hand, if we do that in any way, we need changes for pg_dump.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 19 Oct 2021 14:49:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 18, 2021 9:34 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached updated patches that incorporate all comments I got so far.\r\n\r\nHi,\r\n\r\nHere are some minor comments for the patches.\r\n\r\nv17-0001-Add-a-subscription-errors-statistics-view-pg_sta.patch\r\n\r\n1)\r\n\r\n+\t/* Clean up */\r\n+\tif (not_ready_rels != NIL)\r\n+\t\tlist_free_deep(not_ready_rels);\r\n\r\nMaybe we don't need the ' if (not_ready_rels != NIL)' check as\r\nlist_free_deep will do this check internally.\r\n\r\n2)\r\n\r\n+\tfor (int i = 0; i < msg->m_nentries; i++)\r\n+\t{\r\n+\t\tHASH_SEQ_STATUS sstat;\r\n+\t\tPgStat_StatSubWorkerEntry *wentry;\r\n+\r\n+\t\t/* Remove all worker statistics of the subscription */\r\n+\t\thash_seq_init(&sstat, subWorkerStatHash);\r\n+\t\twhile ((wentry = (PgStat_StatSubWorkerEntry *) hash_seq_search(&sstat)) != NULL)\r\n+\t\t{\r\n+\t\t\tif (wentry->key.subid == msg->m_subids[i])\r\n+\t\t\t\t(void) hash_search(subWorkerStatHash, (void *) &(wentry->key),\r\n+\t\t\t\t\t\t\t\t HASH_REMOVE, NULL);\r\n\r\nWould it be a little faster if we scan hashtable in outerloop and\r\nscan the msg in innerloop ?\r\nLike:\r\nwhile ((wentry = (PgStat_StatSubWorkerEntry *) hash_seq_search(&sstat)) != NULL)\r\n{\r\n\tfor (int i = 0; i < msg->m_nentries; i++)\r\n\t...\r\n\r\n\r\nv17-0002-Add-RESET-command-to-ALTER-SUBSCRIPTION-command\r\n\r\n1)\r\nI noticed that we cannot RESET slot_name while we can SET it.\r\nAnd the slot_name have a default behavior that \" use the name of the subscription for the slot name.\".\r\nSo, is it possible to support RESET it ?\r\n\r\nBest regards,\r\nHou zj\r\n\r\n",
"msg_date": "Wed, 20 Oct 2021 03:02:55 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 18, 2021 at 12:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached updated patches that incorporate all comments I got so far.\n>\n\nMinor comment on patch 17-0003\n\nsrc/backend/replication/logical/worker.c\n\n(1) Typo in apply_handle_stream_abort() comment:\n\n/* Stop skipping transaction transaction, if enabled */\nshould be:\n/* Stop skipping transaction changes, if enabled */\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 20 Oct 2021 14:33:22 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 12:03 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Mon, Oct 18, 2021 9:34 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached updated patches that incorporate all comments I got so far.\n>\n> Hi,\n>\n> Here are some minor comments for the patches.\n\nThank you for the comments!\n\n>\n> v17-0001-Add-a-subscription-errors-statistics-view-pg_sta.patch\n>\n> 1)\n>\n> + /* Clean up */\n> + if (not_ready_rels != NIL)\n> + list_free_deep(not_ready_rels);\n>\n> Maybe we don't need the ' if (not_ready_rels != NIL)' check as\n> list_free_deep will do this check internally.\n\nAgreed.\n\n>\n> 2)\n>\n> + for (int i = 0; i < msg->m_nentries; i++)\n> + {\n> + HASH_SEQ_STATUS sstat;\n> + PgStat_StatSubWorkerEntry *wentry;\n> +\n> + /* Remove all worker statistics of the subscription */\n> + hash_seq_init(&sstat, subWorkerStatHash);\n> + while ((wentry = (PgStat_StatSubWorkerEntry *) hash_seq_search(&sstat)) != NULL)\n> + {\n> + if (wentry->key.subid == msg->m_subids[i])\n> + (void) hash_search(subWorkerStatHash, (void *) &(wentry->key),\n> + HASH_REMOVE, NULL);\n>\n> Would it be a little faster if we scan hashtable in outerloop and\n> scan the msg in innerloop ?\n> Like:\n> while ((wentry = (PgStat_StatSubWorkerEntry *) hash_seq_search(&sstat)) != NULL)\n> {\n> for (int i = 0; i < msg->m_nentries; i++)\n> ...\n>\n\nAgreed.\n\n>\n> v17-0002-Add-RESET-command-to-ALTER-SUBSCRIPTION-command\n>\n> 1)\n> I noticed that we cannot RESET slot_name while we can SET it.\n> And the slot_name have a default behavior that \" use the name of the subscription for the slot name.\".\n> So, is it possible to support RESET it ?\n\nHmm, I'm not sure resetting slot_name is useful. I think that it’s\ncommon to change the slot name to NONE by ALTER SUBSCRIPTION and vice\nversa. But I think resetting the slot name (e.g., changing a\nnon-default name to the default name) is not the common use case. 
If\nthe user wants to do that, it seems safer to explicitly specify the\nslot name using ALTER SUBSCRIPTION ... SET (slot_name = 'XXX').\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 21 Oct 2021 12:06:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 20, 2021 at 12:33 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Oct 18, 2021 at 12:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches that incorporate all comments I got so far.\n> >\n>\n> Minor comment on patch 17-0003\n\nThank you for the comment!\n\n>\n> src/backend/replication/logical/worker.c\n>\n> (1) Typo in apply_handle_stream_abort() comment:\n>\n> /* Stop skipping transaction transaction, if enabled */\n> should be:\n> /* Stop skipping transaction changes, if enabled */\n\nFixed.\n\nI've attached updated patches. In this version, in addition to the\nreview comments I go so far, I've changed the view name from\npg_stat_subscription_errors to pg_stat_subscription_workers as per the\ndiscussion on including xact info to the view on another thread[1].\nI’ve also changed related codes accordingly.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDF7LmSALzMfmPshRw_xFcRz3WvB-me8T2gO6Ht%3D3zL2w%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 21 Oct 2021 13:59:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 6, 2021 at 11:18 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Oct 4, 2021 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think here the main point is that does this addresses Peter's\n> > concern for this Patch to use a separate syntax? Peter E., can you\n> > please confirm? Do let us know if you have something else going in\n> > your mind?\n> >\n>\n> Peter's concern seemed to be that the use of a subscription option,\n> though convenient, isn't an intuitive natural fit for providing this\n> feature (i.e. ability to skip a transaction by xid). I tend to have\n> that feeling about using a subscription option for this feature. I'm\n> not sure what possible alternative syntax he had in mind and currently\n> can't really think of a good one myself that fits the purpose.\n>\n> I think that the 1st and 2nd patch are useful in their own right, but\n> couldn't this feature (i.e. the 3rd patch) be provided instead as an\n> additional Replication Management function (see 9.27.6)?\n> e.g. pg_replication_skip_xid\n>\n\nAfter some thoughts on the syntax, it's somewhat natural to me if we\nsupport the skip transaction feature with another syntax like (I\nprefer the former):\n\nALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\n\nor\n\nALTER SUBSCRIPTION ... SKIP TRANSACTION xxx; (setting NONE as XID to\nreset the XID to skip)\n\nThe primary reason to have another syntax is that ability to skip a\ntransaction seems not to be other subscription parameters such as\nslot_name, binary, streaming that are dumped by pg_dump. FWIW IMO the\nability to disable the subscription on an error would be a\nsubscription parameter. The user is likely to want to specify this\noption also at CREATE SUBSCRIPTION and wants it to be dumped by\npg_dump. 
So I think we can think of the skip xid option separately\nfrom this parameter.\n\nAlso, I think we can think of the syntax for this ability (skipping a\ntransaction) separately from the syntax of the general conflict\nresolution feature. I guess that we might rather need a whole new\nsyntax for conflict resolution. In addition, the user will want to\ndump the definitions of conflict resolution by pg_dump in common\ncases, unlike the skip XID.\n\nAs Amit pointed out, we might want to allow users to skip changes\nbased on something other than XID but the candidates seem only a few\nto me (LSN, time, and something else?). If these are only a few,\nprobably we don’t need to worry about syntax bloat.\n\nRegarding an additional replication management function proposed by\nGreg, it seems a bit unnatural to me; the subscription is created and\naltered by DDL but why is only skipping the transaction option\nspecified by an SQL function?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 25 Oct 2021 10:44:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Oct 25, 2021 at 7:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 6, 2021 at 11:18 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > I think that the 1st and 2nd patch are useful in their own right, but\n> > couldn't this feature (i.e. the 3rd patch) be provided instead as an\n> > additional Replication Management function (see 9.27.6)?\n> > e.g. pg_replication_skip_xid\n> >\n>\n> After some thoughts on the syntax, it's somewhat natural to me if we\n> support the skip transaction feature with another syntax like (I\n> prefer the former):\n>\n> ALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\n>\n> or\n>\n> ALTER SUBSCRIPTION ... SKIP TRANSACTION xxx; (setting NONE as XID to\n> reset the XID to skip)\n>\n> The primary reason to have another syntax is that ability to skip a\n> transaction seems not to be other subscription parameters such as\n> slot_name, binary, streaming that are dumped by pg_dump. FWIW IMO the\n> ability to disable the subscription on an error would be a\n> subscription parameter. The user is likely to want to specify this\n> option also at CREATE SUBSCRIPTION and wants it to be dumped by\n> pg_dump. So I think we can think of the skip xid option separately\n> from this parameter.\n>\n> Also, I think we can think of the syntax for this ability (skipping a\n> transaction) separately from the syntax of the general conflict\n> resolution feature. I guess that we might rather need a whole new\n> syntax for conflict resolution.\n>\n\nI agree that we will need a separate syntax for conflict resolution\nbut there is some similarity in what I proposed above (On\nError/Conflict [1]) with the existing syntax of Insert ... On\nConflict. 
I understand that here the context is different and we are\nstoring this information in the catalog but still there is some syntax\nsimilarity and it will avoid adding new syntax variants.\n\n> In addition, the user will want to\n> dump the definitions of confliction resolution by pg_dump in common\n> cases, unlike the skip XID.\n>\n> As Amit pointed out, we might want to allow users to skip changes\n> based on something other than XID but the candidates seem only a few\n> to me (LSN, time, and something else?). If these are only a few,\n> probably we don’t need to worry about syntax bloat.\n>\n\nI guess one might want to skip particular operations that cause an\nerror and that would be possible as we are providing the relevant\ninformation via a view.\n\n> Regarding an additional replication management function proposed by\n> Greg, it seems a bit unnatural to me; the subscription is created and\n> altered by DDL but why is only skipping the transaction option\n> specified by an SQL function?\n>\n\nThe one advantage I see is that it will be similar to what we already\nhave via pg_replication_origin_advance() for skipping WAL during\napply. The other thing could be that this feature can lead to problems\nif not used carefully so maybe it is better to provide it only by\nspecial functions. Having said that, I still feel we should do it via\nAlter Subscription in some way as that will be convenient to use.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1%2BBOHXC%3D0S2kA7GkErWq3-QKj34oQvwAPfuTHq%3Depf34w%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Oct 2021 11:46:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I agree that we will need a separate syntax for conflict resolution\n> but there is some similarity in what I proposed above (On\n> Error/Conflict [1]) with the existing syntax of Insert ... On\n> Conflict. I understand that here the context is different and we are\n> storing this information in the catalog but still there is some syntax\n> similarity and it will avoid adding new syntax variants.\n>\n\nThe problem I see with the suggested syntax:\n\nAlter Subscription <sub_name> On Error ( subscription_parameter [=\nvalue] [, ... ] );\nOR\nAlter Subscription <sub_name> On Conflict ( subscription_parameter [=\nvalue] [, ... ] );\n\nis that \"On Error ...\" and \"On Conflict\" imply an action to be done on\na future condition (Error/Conflict), whereas at least in this case\n(skip_xid) it's only AFTER the problem condition has occurred that we\nknow the XID of the failed transaction that we want to skip. So that\nsyntax looks a little confusing to me. Unless you had something else\nin mind on how it would work?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 26 Oct 2021 19:57:32 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 2:27 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I agree that we will need a separate syntax for conflict resolution\n> > but there is some similarity in what I proposed above (On\n> > Error/Conflict [1]) with the existing syntax of Insert ... On\n> > Conflict. I understand that here the context is different and we are\n> > storing this information in the catalog but still there is some syntax\n> > similarity and it will avoid adding new syntax variants.\n> >\n>\n> The problem I see with the suggested syntax:\n>\n> Alter Subscription <sub_name> On Error ( subscription_parameter [=\n> value] [, ... ] );\n> OR\n> Alter Subscription <sub_name> On Conflict ( subscription_parameter [=\n> value] [, ... ] );\n>\n> is that \"On Error ...\" and \"On Conflict\" imply an action to be done on\n> a future condition (Error/Conflict), whereas at least in this case\n> (skip_xid) it's only AFTER the problem condition has occurred that we\n> know the XID of the failed transaction that we want to skip. So that\n> syntax looks a little confusing to me. Unless you had something else\n> in mind on how it would work?\n>\n\nYou have a point. The other alternatives on this line could be:\n\nAlter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n\nwhere subscription_parameter can be one of:\nxid = <xid_val>\nlsn = <lsn_val>\n...\n\nInstead of using Skip, we can use WITH as used in Alter Database\nsyntax but we are already using WITH in Create Subscription for a\ndifferent purpose, so that may not be a very good idea.\n\nThe basic idea is that I am trying to use options here rather than a\nkeyword-based syntax as there can be multiple such options.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 26 Oct 2021 15:59:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 2:27 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Tue, Oct 26, 2021 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I agree that we will need a separate syntax for conflict resolution\n> > > but there is some similarity in what I proposed above (On\n> > > Error/Conflict [1]) with the existing syntax of Insert ... On\n> > > Conflict. I understand that here the context is different and we are\n> > > storing this information in the catalog but still there is some syntax\n> > > similarity and it will avoid adding new syntax variants.\n> > >\n> >\n> > The problem I see with the suggested syntax:\n> >\n> > Alter Subscription <sub_name> On Error ( subscription_parameter [=\n> > value] [, ... ] );\n> > OR\n> > Alter Subscription <sub_name> On Conflict ( subscription_parameter [=\n> > value] [, ... ] );\n> >\n> > is that \"On Error ...\" and \"On Conflict\" imply an action to be done on\n> > a future condition (Error/Conflict), whereas at least in this case\n> > (skip_xid) it's only AFTER the problem condition has occurred that we\n> > know the XID of the failed transaction that we want to skip. So that\n> > syntax looks a little confusing to me. Unless you had something else\n> > in mind on how it would work?\n> >\n>\n> You have a point. The other alternatives on this line could be:\n>\n> Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n>\n> where subscription_parameter can be one of:\n> xid = <xid_val>\n> lsn = <lsn_val>\n> ...\n\nLooks better.\n\nBTW how useful is specifying LSN instead of XID in practice? Given\nthat this skipping behavior is used to skip the particular transaction\n(or its part of operations) in question, I’m not sure specifying LSN\nor time is useful. 
And, if it’s essentially the same as\npg_replication_origin_advance(), we don’t need to have it.\n\n> The basic idea is that I am trying to use options here rather than a\n> keyword-based syntax as there can be multiple such options.\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 27 Oct 2021 12:02:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thurs, Oct 21, 2021 12:59 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached updated patches. In this version, in addition to the review\r\n> comments I go so far, I've changed the view name from\r\n> pg_stat_subscription_errors to pg_stat_subscription_workers as per the\r\n> discussion on including xact info to the view on another thread[1].\r\n> I’ve also changed related codes accordingly.\r\n\r\nWhen reviewing the v18-0002 patch.\r\nI noticed that \"RESET SYNCHRONOUS_COMMIT\" does not take effect\r\n(RESET doesn't change the value to 'off').\r\n\r\n\r\n+\t\t\tif (!is_reset)\r\n+\t\t\t{\r\n+\t\t\t\topts->synchronous_commit = defGetString(defel);\r\n \r\n-\t\t\t...\r\n+\t\t\t}\r\n\r\nI think we need to add else branch here to set the synchronous_commit to 'off'.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 27 Oct 2021 03:28:09 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > You have a point. The other alternatives on this line could be:\n> >\n> > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> >\n> > where subscription_parameter can be one of:\n> > xid = <xid_val>\n> > lsn = <lsn_val>\n> > ...\n>\n> Looks better.\n>\n> BTW how useful is specifying LSN instead of XID in practice? Given\n> that this skipping behavior is used to skip the particular transaction\n> (or its part of operations) in question, I’m not sure specifying LSN\n> or time is useful.\n>\n\nI think if the user wants to skip multiple xacts, she might want to\nuse the highest LSN to skip instead of specifying individual xids.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 09:04:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 2:28 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> When reviewing the v18-0002 patch.\n> I noticed that \"RESET SYNCHRONOUS_COMMIT\" does not take effect\n> (RESET doesn't change the value to 'off').\n>\n>\n> + if (!is_reset)\n> + {\n> + opts->synchronous_commit = defGetString(defel);\n>\n> - ...\n> + }\n>\n> I think we need to add else branch here to set the synchronous_commit to 'off'.\n>\n\nI agree that it doesn't seem to handle the RESET of synchronous_commit.\nI think that for consistency, the default value \"off\" for\nsynchronous_commit should be set (in the SubOpts) near where the\ndefault values of the boolean supported options are currently set -\nnear the top of parse_subscription_options().\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 27 Oct 2021 15:22:17 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 12:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > You have a point. The other alternatives on this line could be:\n> > >\n> > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > >\n> > > where subscription_parameter can be one of:\n> > > xid = <xid_val>\n> > > lsn = <lsn_val>\n> > > ...\n> >\n> > Looks better.\n> >\n> > BTW how useful is specifying LSN instead of XID in practice? Given\n> > that this skipping behavior is used to skip the particular transaction\n> > (or its part of operations) in question, I’m not sure specifying LSN\n> > or time is useful.\n> >\n>\n> I think if the user wants to skip multiple xacts, she might want to\n> use the highest LSN to skip instead of specifying individual xids.\n\nI think it assumes that the situation where the user already knows\nmultiple transactions that cannot be applied on the subscription but\nhow do they know?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 27 Oct 2021 14:13:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 10:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 12:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > You have a point. The other alternatives on this line could be:\n> > > >\n> > > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > > >\n> > > > where subscription_parameter can be one of:\n> > > > xid = <xid_val>\n> > > > lsn = <lsn_val>\n> > > > ...\n> > >\n> > > Looks better.\n> > >\n> > > BTW how useful is specifying LSN instead of XID in practice? Given\n> > > that this skipping behavior is used to skip the particular transaction\n> > > (or its part of operations) in question, I’m not sure specifying LSN\n> > > or time is useful.\n> > >\n> >\n> > I think if the user wants to skip multiple xacts, she might want to\n> > use the highest LSN to skip instead of specifying individual xids.\n>\n> I think it assumes that the situation where the user already knows\n> multiple transactions that cannot be applied on the subscription but\n> how do they know?\n>\n\nEither from the error messages in the server log or from the new view\nwe are planning to add. I think such a case is possible during the\ninitial synchronization phase where apply worker went ahead then\ntablesync worker by skipping to apply the changes on the corresponding\ntable. After that it is possible, that table sync worker failed during\ncopy and apply worker fails during the processing of some other rel.\nNow, I think the only way to move is via LSNs. Currently, figuring out\nLSNs to skip is not straight forward but improving that area is the\nwork of another patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 11:13:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, October 21, 2021 12:59 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached updated patches. In this version, in addition to the\r\n> review comments I go so far, I've changed the view name from\r\n> pg_stat_subscription_errors to pg_stat_subscription_workers as per the\r\n> discussion on including xact info to the view on another thread[1].\r\n> I’ve also changed related codes accordingly.\r\n> \r\n\r\nThanks for your patch.\r\nI have some minor comments on your 0001 and 0002 patch.\r\n\r\n1. For 0001 patch, src/backend/catalog/system_views.sql\r\n+CREATE VIEW pg_stat_subscription_workers AS\r\n+ SELECT\r\n+\te.subid,\r\n+\ts.subname,\r\n+\te.subrelid,\r\n+\te.relid,\r\n+\te.command,\r\n+\te.xid,\r\n+\te.count,\r\n+\te.error_message,\r\n+\te.last_error_time,\r\n+\te.stats_reset\r\n+ FROM (SELECT\r\n+ oid as subid,\r\n...\r\n\r\nSome places use TABs, I think it's better to use spaces here, to be consistent\r\nwith other places in this file.\r\n\r\n2. 
For 0002 patch, I think we can add some changes to tab-complete.c, maybe\r\nsomething like this:\r\n\r\ndiff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\r\nindex ecae9df8ed..96665f6115 100644\r\n--- a/src/bin/psql/tab-complete.c\r\n+++ b/src/bin/psql/tab-complete.c\r\n@@ -1654,7 +1654,7 @@ psql_completion(const char *text, int start, int end)\r\n /* ALTER SUBSCRIPTION <name> */\r\n else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\r\n COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\r\n- \"RENAME TO\", \"REFRESH PUBLICATION\", \"SET\",\r\n+ \"RENAME TO\", \"REFRESH PUBLICATION\", \"SET\", \"RESET\",\r\n \"ADD PUBLICATION\", \"DROP PUBLICATION\");\r\n /* ALTER SUBSCRIPTION <name> REFRESH PUBLICATION */\r\n else if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) &&\r\n@@ -1670,6 +1670,12 @@ psql_completion(const char *text, int start, int end)\r\n /* ALTER SUBSCRIPTION <name> SET ( */\r\n else if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) && TailMatches(\"SET\", \"(\"))\r\n COMPLETE_WITH(\"binary\", \"slot_name\", \"streaming\", \"synchronous_commit\");\r\n+ /* ALTER SUBSCRIPTION <name> RESET */\r\n+ else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny, \"RESET\"))\r\n+ COMPLETE_WITH(\"(\");\r\n+ /* ALTER SUBSCRIPTION <name> RESET ( */\r\n+ else if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) && TailMatches(\"RESET\", \"(\"))\r\n+ COMPLETE_WITH(\"binary\", \"streaming\", \"synchronous_commit\");\r\n /* ALTER SUBSCRIPTION <name> SET PUBLICATION */\r\n else if (HeadMatches(\"ALTER\", \"SUBSCRIPTION\", MatchAny) && TailMatches(\"SET\", \"PUBLICATION\"))\r\n {\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Wed, 27 Oct 2021 06:34:23 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> I've attached updated patches.\n>\n\nFew comments:\n==============\n1. Is the patch cleaning tablesync error entries except via vacuum? If\nnot, can't we send a message to remove tablesync errors once tablesync\nis successful (say when we reset skip_xid or when tablesync is\nfinished) or when we drop subscription? I think the same applies to\napply worker. I think we may want to track it in some way whether an\nerror has occurred before sending the message but relying completely\non a vacuum might be the recipe of bloat. I think in the case of a\ndrop subscription we can simply send the message as that is not a\nfrequent operation. I might be missing something here because in the\ntests after drop subscription you are expecting the entries from the\nview to get cleared\n\n2.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>count</structfield> <type>uint8</type>\n+ </para>\n+ <para>\n+ Number of consecutive times the error occurred\n+ </para></entry>\n\nShall we name this field as error_count as there will be other fields\nin this view in the future that may not be directly related to the\nerror?\n\n3.\n+\n+CREATE VIEW pg_stat_subscription_workers AS\n+ SELECT\n+ e.subid,\n+ s.subname,\n+ e.subrelid,\n+ e.relid,\n+ e.command,\n+ e.xid,\n+ e.count,\n+ e.error_message,\n+ e.last_error_time,\n+ e.stats_reset\n+ FROM (SELECT\n+ oid as subid,\n+ NULL as relid\n+ FROM pg_subscription\n+ UNION ALL\n+ SELECT\n+ srsubid as subid,\n+ srrelid as relid\n+ FROM pg_subscription_rel\n+ WHERE srsubstate <> 'r') sr,\n+ LATERAL pg_stat_get_subscription_worker(sr.subid, sr.relid) e\n\nIt might be better to use 'w' as an alias instead of 'e' as the\ninformation is now not restricted to only errors.\n\n4. 
+# Test if the error reported on pg_subscription_workers view is expected.\n\nThe view name is wrong in the above comment\n\n5.\n+# Check if the view doesn't show any entries after dropping the subscriptions.\n+$node_subscriber->safe_psql(\n+ 'postgres',\n+ q[\n+DROP SUBSCRIPTION tap_sub;\n+DROP SUBSCRIPTION tap_sub_streaming;\n+]);\n+$result = $node_subscriber->safe_psql('postgres',\n+ \"SELECT count(1) FROM pg_stat_subscription_workers\");\n+is($result, q(0), 'no error after dropping subscription');\n\nDon't we need to wait after dropping the subscription and before\nchecking the view as there might be a slight delay in messages to get\ncleared?\n\n7.\n+# Create subscriptions. The table sync for test_tab2 on tap_sub will enter to\n+# infinite error due to violating the unique constraint.\n+my $appname = 'tap_sub';\n+$node_subscriber->safe_psql(\n+ 'postgres',\n+ \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\napplication_name=$appname' PUBLICATION tap_pub WITH (streaming = off,\ntwo_phase = on);\");\n+my $appname_streaming = 'tap_sub_streaming';\n+$node_subscriber->safe_psql(\n+ 'postgres',\n+ \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION\n'$publisher_connstr application_name=$appname_streaming' PUBLICATION\ntap_pub_streaming WITH (streaming = on, two_phase = on);\");\n+\n+$node_publisher->wait_for_catchup($appname);\n+$node_publisher->wait_for_catchup($appname_streaming);\n\nHow can we ensure that subscriber would have caught up when one of the\ntablesync workers is constantly in the error loop? Isn't it possible\nthat the subscriber didn't send the latest lsn feedback till the table\nsync worker is finished?\n\n8.\n+# Create subscriptions. 
The table sync for test_tab2 on tap_sub will enter to\n+# infinite error due to violating the unique constraint.\n\nThe second sentence of the comment can be written as: \"The table sync\nfor test_tab2 on tap_sub will enter into infinite error loop due to\nviolating the unique constraint.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 15:31:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Oct 26, 2021 at 2:27 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 26, 2021 at 5:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I agree that we will need a separate syntax for conflict resolution\n> > > > but there is some similarity in what I proposed above (On\n> > > > Error/Conflict [1]) with the existing syntax of Insert ... On\n> > > > Conflict. I understand that here the context is different and we are\n> > > > storing this information in the catalog but still there is some syntax\n> > > > similarity and it will avoid adding new syntax variants.\n> > > >\n> > >\n> > > The problem I see with the suggested syntax:\n> > >\n> > > Alter Subscription <sub_name> On Error ( subscription_parameter [=\n> > > value] [, ... ] );\n> > > OR\n> > > Alter Subscription <sub_name> On Conflict ( subscription_parameter [=\n> > > value] [, ... ] );\n> > >\n> > > is that \"On Error ...\" and \"On Conflict\" imply an action to be done on\n> > > a future condition (Error/Conflict), whereas at least in this case\n> > > (skip_xid) it's only AFTER the problem condition has occurred that we\n> > > know the XID of the failed transaction that we want to skip. So that\n> > > syntax looks a little confusing to me. Unless you had something else\n> > > in mind on how it would work?\n> > >\n> >\n> > You have a point. The other alternatives on this line could be:\n> >\n> > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> >\n> > where subscription_parameter can be one of:\n> > xid = <xid_val>\n> > lsn = <lsn_val>\n> > ...\n>\n> Looks better.\n>\n\nIf we want to follow the above, then how do we allow users to reset\nthe parameter? One way is to allow the user to set xid as 0 which\nwould mean that we reset it. 
The other way is to allow SET/RESET\nbefore SKIP but not sure if that is a good option. I was also thinking\nabout how we can extend the current syntax in the future if we want to\nallow users to specify multiple xids? I guess we can either make xid\nas a list or allow it to be specified multiple times. We don't need to\ndo this now but just from the point that we should be able to extend\nit later if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 27 Oct 2021 16:07:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 10:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 12:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > You have a point. The other alternatives on this line could be:\n> > > > >\n> > > > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > > > >\n> > > > > where subscription_parameter can be one of:\n> > > > > xid = <xid_val>\n> > > > > lsn = <lsn_val>\n> > > > > ...\n> > > >\n> > > > Looks better.\n> > > >\n> > > > BTW how useful is specifying LSN instead of XID in practice? Given\n> > > > that this skipping behavior is used to skip the particular transaction\n> > > > (or its part of operations) in question, I’m not sure specifying LSN\n> > > > or time is useful.\n> > > >\n> > >\n> > > I think if the user wants to skip multiple xacts, she might want to\n> > > use the highest LSN to skip instead of specifying individual xids.\n> >\n> > I think it assumes that the situation where the user already knows\n> > multiple transactions that cannot be applied on the subscription but\n> > how do they know?\n> >\n>\n> Either from the error messages in the server log or from the new view\n> we are planning to add. I think such a case is possible during the\n> initial synchronization phase where apply worker went ahead then\n> tablesync worker by skipping to apply the changes on the corresponding\n> table. 
After that it is possible, that table sync worker failed during\n> copy and apply worker fails during the processing of some other rel.\n\nDoes it mean that if both initial copy for the corresponding table by\ntable sync worker and applying changes for other rels by apply worker\nfail, we skip both by specifying LSN? If so, can't we disable the\ninitial copy for the table and skip only the changes for other rels\nthat cannot be applied?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Oct 2021 11:18:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 10:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > > BTW how useful is specifying LSN instead of XID in practice? Given\n> > > > > that this skipping behavior is used to skip the particular transaction\n> > > > > (or its part of operations) in question, I’m not sure specifying LSN\n> > > > > or time is useful.\n> > > > >\n> > > >\n> > > > I think if the user wants to skip multiple xacts, she might want to\n> > > > use the highest LSN to skip instead of specifying individual xids.\n> > >\n> > > I think it assumes that the situation where the user already knows\n> > > multiple transactions that cannot be applied on the subscription but\n> > > how do they know?\n> > >\n> >\n> > Either from the error messages in the server log or from the new view\n> > we are planning to add. I think such a case is possible during the\n> > initial synchronization phase where apply worker went ahead then\n> > tablesync worker by skipping to apply the changes on the corresponding\n> > table. After that it is possible, that table sync worker failed during\n> > copy and apply worker fails during the processing of some other rel.\n>\n> Does it mean that if both initial copy for the corresponding table by\n> table sync worker and applying changes for other rels by apply worker\n> fail, we skip both by specifying LSN?\n>\n\nYes.\n\n> If so, can't we disable the\n> initial copy for the table and skip only the changes for other rels\n> that cannot be applied?\n>\n\nBut anyway you need some way to skip changes via a particular\ntablesync worker so that it can mark the relation in 'ready' state. I\nthink one can also try to use disable_on_error option in such\nscenarios depending on how we expose it. 
Say, if the option means that\nall workers (apply or table sync) should be disabled on an error then\nit would be a bit tricky but if we can come up with a way to behave\ndifferently for different workers then it is possible to disable one\nset of workers and skip the changes in another set of workers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 09:35:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > You have a point. The other alternatives on this line could be:\n> > >\n> > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > >\n> > > where subscription_parameter can be one of:\n> > > xid = <xid_val>\n> > > lsn = <lsn_val>\n> > > ...\n> >\n> > Looks better.\n> >\n>\n> If we want to follow the above, then how do we allow users to reset\n> the parameter? One way is to allow the user to set xid as 0 which\n> would mean that we reset it. The other way is to allow SET/RESET\n> before SKIP but not sure if that is a good option.\n>\n\nAfter thinking some more on this, I think it is better to not use\nSET/RESET keyword here. I think we can use a model similar to how we\nallow setting some of the options in Alter Database:\n\n# Set the connection limit for a database:\nAlter Database akapila WITH connection_limit = 1;\n# Reset the connection limit\nAlter Database akapila WITH connection_limit = -1;\n\nThoughts?\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 09:59:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I've attached updated patches.\n\nThank you for the comments!\n\n>\n> Few comments:\n> ==============\n> 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> not, can't we send a message to remove tablesync errors once tablesync\n> is successful (say when we reset skip_xid or when tablesync is\n> finished) or when we drop subscription? I think the same applies to\n> apply worker. I think we may want to track it in some way whether an\n> error has occurred before sending the message but relying completely\n> on a vacuum might be the recipe of bloat. I think in the case of a\n> drop subscription we can simply send the message as that is not a\n> frequent operation. I might be missing something here because in the\n> tests after drop subscription you are expecting the entries from the\n> view to get cleared\n\nYes, I think we can have tablesync worker send a message to drop stats\nonce tablesync is successful. But if we do that also when dropping a\nsubscription, I think we need to do that only if the transaction is\ncommitted since we can drop a subscription that doesn't have a\nreplication slot and roll back the transaction. Probably we can send\nthe message only when the subscription does have a replication slot.\n\nIn other cases, we can remember the subscriptions being dropped and\nsend the message to drop the statistics of them after committing the\ntransaction but I’m not sure it’s worth having it. FWIW, we completely\nrely on pgstat_vacuum_stat() for cleaning up the dead tables and\nfunctions. 
And we don't expect there are many subscriptions on the\ndatabase.\n\nWhat do you think?\n\n>\n> 2.\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>count</structfield> <type>uint8</type>\n> + </para>\n> + <para>\n> + Number of consecutive times the error occurred\n> + </para></entry>\n>\n> Shall we name this field as error_count as there will be other fields\n> in this view in the future that may not be directly related to the\n> error?\n\nAgreed.\n\n>\n> 3.\n> +\n> +CREATE VIEW pg_stat_subscription_workers AS\n> + SELECT\n> + e.subid,\n> + s.subname,\n> + e.subrelid,\n> + e.relid,\n> + e.command,\n> + e.xid,\n> + e.count,\n> + e.error_message,\n> + e.last_error_time,\n> + e.stats_reset\n> + FROM (SELECT\n> + oid as subid,\n> + NULL as relid\n> + FROM pg_subscription\n> + UNION ALL\n> + SELECT\n> + srsubid as subid,\n> + srrelid as relid\n> + FROM pg_subscription_rel\n> + WHERE srsubstate <> 'r') sr,\n> + LATERAL pg_stat_get_subscription_worker(sr.subid, sr.relid) e\n>\n> It might be better to use 'w' as an alias instead of 'e' as the\n> information is now not restricted to only errors.\n\nAgreed.\n\n>\n> 4. 
+# Test if the error reported on pg_subscription_workers view is expected.\n>\n> The view name is wrong in the above comment\n\nFixed.\n\n>\n> 5.\n> +# Check if the view doesn't show any entries after dropping the subscriptions.\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + q[\n> +DROP SUBSCRIPTION tap_sub;\n> +DROP SUBSCRIPTION tap_sub_streaming;\n> +]);\n> +$result = $node_subscriber->safe_psql('postgres',\n> + \"SELECT count(1) FROM pg_stat_subscription_workers\");\n> +is($result, q(0), 'no error after dropping subscription');\n>\n> Don't we need to wait after dropping the subscription and before\n> checking the view as there might be a slight delay in messages to get\n> cleared?\n\nI think the test always passes without waiting for the statistics to\nbe updated since we fetch the subscription worker statistics from the\nstats collector based on the entries of the pg_subscription catalog. So\nthis test checks that statistics of an already-dropped subscription don't\nshow up in the view after DROP SUBSCRIPTION, but does not check if the\nsubscription worker statistics entry in the stats collector gets\nremoved. The primary reason is that, as I mentioned above, the patch\nrelies on pgstat_vacuum_stat() for cleaning up the dead subscriptions.\n\n>\n> 7.\n> +# Create subscriptions. 
The table sync for test_tab2 on tap_sub will enter to\n> +# infinite error due to violating the unique constraint.\n> +my $appname = 'tap_sub';\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> application_name=$appname' PUBLICATION tap_pub WITH (streaming = off,\n> two_phase = on);\");\n> +my $appname_streaming = 'tap_sub_streaming';\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION\n> '$publisher_connstr application_name=$appname_streaming' PUBLICATION\n> tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> +\n> +$node_publisher->wait_for_catchup($appname);\n> +$node_publisher->wait_for_catchup($appname_streaming);\n>\n> How can we ensure that subscriber would have caught up when one of the\n> tablesync workers is constantly in the error loop? Isn't it possible\n> that the subscriber didn't send the latest lsn feedback till the table\n> sync worker is finished?\n>\n\nI thought that even if tablesync for a table is still ongoing, the\napply worker can apply commit records, update write LSN and flush LSN,\nand send the feedback to the wal sender. No?\n\n> 8.\n> +# Create subscriptions. The table sync for test_tab2 on tap_sub will enter to\n> +# infinite error due to violating the unique constraint.\n>\n> The second sentence of the comment can be written as: \"The table sync\n> for test_tab2 on tap_sub will enter into infinite error loop due to\n> violating the unique constraint.\"\n\nFixed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Oct 2021 14:05:54 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 1:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 2:43 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 10:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > > > BTW how useful is specifying LSN instead of XID in practice? Given\n> > > > > > that this skipping behavior is used to skip the particular transaction\n> > > > > > (or its part of operations) in question, I’m not sure specifying LSN\n> > > > > > or time is useful.\n> > > > > >\n> > > > >\n> > > > > I think if the user wants to skip multiple xacts, she might want to\n> > > > > use the highest LSN to skip instead of specifying individual xids.\n> > > >\n> > > > I think it assumes that the situation where the user already knows\n> > > > multiple transactions that cannot be applied on the subscription but\n> > > > how do they know?\n> > > >\n> > >\n> > > Either from the error messages in the server log or from the new view\n> > > we are planning to add. I think such a case is possible during the\n> > > initial synchronization phase where apply worker went ahead then\n> > > tablesync worker by skipping to apply the changes on the corresponding\n> > > table. 
After that it is possible, that table sync worker failed during\n> > > copy and apply worker fails during the processing of some other rel.\n> >\n> > Does it mean that if both initial copy for the corresponding table by\n> > table sync worker and applying changes for other rels by apply worker\n> > fail, we skip both by specifying LSN?\n> >\n>\n> Yes.\n>\n> > If so, can't we disable the\n> > initial copy for the table and skip only the changes for other rels\n> > that cannot be applied?\n> >\n>\n> But anyway you need some way to skip changes via a particular\n> tablesync worker so that it can mark the relation in 'ready' state.\n\nRight.\n\n> I\n> think one can also try to use disable_on_error option in such\n> scenarios depending on how we expose it. Say, if the option means that\n> all workers (apply or table sync) should be disabled on an error then\n> it would be a bit tricky but if we can come up with a way to behave\n> differently for different workers then it is possible to disable one\n> set of workers and skip the changes in another set of workers.\n\nYes, I would prefer to skip individual transactions in question rather\nthan skip changes until the particular LSN. It’s not advisable to use\nLSN to skip changes since it has a risk of skipping unrelated changes\ntoo.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Oct 2021 14:17:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > You have a point. The other alternatives on this line could be:\n> > > >\n> > > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > > >\n> > > > where subscription_parameter can be one of:\n> > > > xid = <xid_val>\n> > > > lsn = <lsn_val>\n> > > > ...\n> > >\n> > > Looks better.\n> > >\n> >\n> > If we want to follow the above, then how do we allow users to reset\n> > the parameter? One way is to allow the user to set xid as 0 which\n> > would mean that we reset it. The other way is to allow SET/RESET\n> > before SKIP but not sure if that is a good option.\n> >\n>\n> After thinking some more on this, I think it is better to not use\n> SET/RESET keyword here. I think we can use a model similar to how we\n> allow setting some of the options in Alter Database:\n>\n> # Set the connection limit for a database:\n> Alter Database akapila WITH connection_limit = 1;\n> # Reset the connection limit\n> Alter Database akapila WITH connection_limit = -1;\n>\n> Thoughts?\n\nAgreed.\n\nAnother thing I'm concerned is that the syntax \"SKIP (\nsubscription_parameter [=value] [, ...])\" looks like we can specify\nmultiple options for example, \"SKIP (xid = '100', lsn =\n'0/12345678’)”. Is there a case where we need to specify multiple\noptions? Perhaps when specifying the target XID and operations for\nexample, “SKIP (xid = 100, action = ‘insert, update’)”?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 28 Oct 2021 14:25:33 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > You have a point. The other alternatives on this line could be:\n> > > > >\n> > > > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > > > >\n> > > > > where subscription_parameter can be one of:\n> > > > > xid = <xid_val>\n> > > > > lsn = <lsn_val>\n> > > > > ...\n> > > >\n> > > > Looks better.\n> > > >\n> > >\n> > > If we want to follow the above, then how do we allow users to reset\n> > > the parameter? One way is to allow the user to set xid as 0 which\n> > > would mean that we reset it. The other way is to allow SET/RESET\n> > > before SKIP but not sure if that is a good option.\n> > >\n> >\n> > After thinking some more on this, I think it is better to not use\n> > SET/RESET keyword here. I think we can use a model similar to how we\n> > allow setting some of the options in Alter Database:\n> >\n> > # Set the connection limit for a database:\n> > Alter Database akapila WITH connection_limit = 1;\n> > # Reset the connection limit\n> > Alter Database akapila WITH connection_limit = -1;\n> >\n> > Thoughts?\n>\n> Agreed.\n>\n> Another thing I'm concerned is that the syntax \"SKIP (\n> subscription_parameter [=value] [, ...])\" looks like we can specify\n> multiple options for example, \"SKIP (xid = '100', lsn =\n> '0/12345678’)”. Is there a case where we need to specify multiple\n> options? 
Perhaps when specifying the target XID and operations for\n> example, “SKIP (xid = 100, action = ‘insert, update’)”?\n>\n\nYeah, or maybe prepared transaction identifier and actions. BTW, if we\nwant to proceed without the SET/RESET keyword then you can prepare the\nSKIP xid patch as the second in the series and we can probably work on\nthe RESET syntax as a completely independent patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 15:04:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 10:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 1:05 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > Either from the error messages in the server log or from the new view\n> > > > we are planning to add. I think such a case is possible during the\n> > > > initial synchronization phase where apply worker went ahead then\n> > > > tablesync worker by skipping to apply the changes on the corresponding\n> > > > table. After that it is possible, that table sync worker failed during\n> > > > copy and apply worker fails during the processing of some other rel.\n> > >\n> > > Does it mean that if both initial copy for the corresponding table by\n> > > table sync worker and applying changes for other rels by apply worker\n> > > fail, we skip both by specifying LSN?\n> > >\n> >\n> > Yes.\n> >\n> > > If so, can't we disable the\n> > > initial copy for the table and skip only the changes for other rels\n> > > that cannot be applied?\n> > >\n> >\n> > But anyway you need some way to skip changes via a particular\n> > tablesync worker so that it can mark the relation in 'ready' state.\n>\n> Right.\n>\n> > I\n> > think one can also try to use disable_on_error option in such\n> > scenarios depending on how we expose it. Say, if the option means that\n> > all workers (apply or table sync) should be disabled on an error then\n> > it would be a bit tricky but if we can come up with a way to behave\n> > differently for different workers then it is possible to disable one\n> > set of workers and skip the changes in another set of workers.\n>\n> Yes, I would prefer to skip individual transactions in question rather\n> than skip changes until the particular LSN. 
It’s not advisable to use\n> LSN to skip changes since it has a risk of skipping unrelated changes\n> too.\n>\n\nFair enough, but I think providing an LSN is also useful if the user can\nidentify it easily, since otherwise there might be more\nadministrative work to make replication progress.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 15:07:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > I've attached updated patches.\n>\n> Thank you for the comments!\n>\n> >\n> > Few comments:\n> > ==============\n> > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > not, can't we send a message to remove tablesync errors once tablesync\n> > is successful (say when we reset skip_xid or when tablesync is\n> > finished) or when we drop subscription? I think the same applies to\n> > apply worker. I think we may want to track it in some way whether an\n> > error has occurred before sending the message but relying completely\n> > on a vacuum might be the recipe of bloat. I think in the case of a\n> > drop subscription we can simply send the message as that is not a\n> > frequent operation. I might be missing something here because in the\n> > tests after drop subscription you are expecting the entries from the\n> > view to get cleared\n>\n> Yes, I think we can have tablesync worker send a message to drop stats\n> once tablesync is successful. But if we do that also when dropping a\n> subscription, I think we need to do that only the transaction is\n> committed since we can drop a subscription that doesn't have a\n> replication slot and rollback the transaction. Probably we can send\n> the message only when the subscritpion does have a replication slot.\n>\n\nRight. And probably for apply worker after updating skip xid.\n\n> In other cases, we can remember the subscriptions being dropped and\n> send the message to drop the statistics of them after committing the\n> transaction but I’m not sure it’s worth having it.\n>\n\nYeah, let's not go to that extent. 
I think in most cases subscriptions\nwill have corresponding slots.\n\n FWIW, we completely\n> rely on pg_stat_vacuum_stats() for cleaning up the dead tables and\n> functions. And we don't expect there are many subscriptions on the\n> database.\n>\n\nTrue, but we do send it for the database, so let's do it for the cases\nyou explained in the first paragraph.\n\n> >\n> > 5.\n> > +# Check if the view doesn't show any entries after dropping the subscriptions.\n> > +$node_subscriber->safe_psql(\n> > + 'postgres',\n> > + q[\n> > +DROP SUBSCRIPTION tap_sub;\n> > +DROP SUBSCRIPTION tap_sub_streaming;\n> > +]);\n> > +$result = $node_subscriber->safe_psql('postgres',\n> > + \"SELECT count(1) FROM pg_stat_subscription_workers\");\n> > +is($result, q(0), 'no error after dropping subscription');\n> >\n> > Don't we need to wait after dropping the subscription and before\n> > checking the view as there might be a slight delay in messages to get\n> > cleared?\n>\n> I think the test always passes without waiting for the statistics to\n> be updated since we fetch the subscription worker statistics from the\n> stats collector based on the entries of pg_subscription catalog. So\n> this test checks if statistics of already-dropped subscription doesn’t\n> show up in the view after DROP SUBSCRIPTION, but does not check if the\n> subscription worker statistics entry in the stats collector gets\n> removed. The primary reason is that as I mentioned above, the patch\n> relies on pgstat_vacuum_stat() for cleaning up the dead subscriptions.\n>\n\nThat makes sense.\n\n> >\n> > 7.\n> > +# Create subscriptions. 
The table sync for test_tab2 on tap_sub will enter to\n> > +# infinite error due to violating the unique constraint.\n> > +my $appname = 'tap_sub';\n> > +$node_subscriber->safe_psql(\n> > + 'postgres',\n> > + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr\n> > application_name=$appname' PUBLICATION tap_pub WITH (streaming = off,\n> > two_phase = on);\");\n> > +my $appname_streaming = 'tap_sub_streaming';\n> > +$node_subscriber->safe_psql(\n> > + 'postgres',\n> > + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION\n> > '$publisher_connstr application_name=$appname_streaming' PUBLICATION\n> > tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> > +\n> > +$node_publisher->wait_for_catchup($appname);\n> > +$node_publisher->wait_for_catchup($appname_streaming);\n> >\n> > How can we ensure that subscriber would have caught up when one of the\n> > tablesync workers is constantly in the error loop? Isn't it possible\n> > that the subscriber didn't send the latest lsn feedback till the table\n> > sync worker is finished?\n> >\n>\n> I thought that even if tablesync for a table is still ongoing, the\n> apply worker can apply commit records, update write LSN and flush LSN,\n> and send the feedback to the wal sender. No?\n>\n\nYou are right, this case will work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 28 Oct 2021 16:10:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 21, 2021 at 10:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Oct 20, 2021 at 12:33 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Oct 18, 2021 at 12:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached updated patches that incorporate all comments I got so far.\n> > >\n> >\n> > Minor comment on patch 17-0003\n>\n> Thank you for the comment!\n>\n> >\n> > src/backend/replication/logical/worker.c\n> >\n> > (1) Typo in apply_handle_stream_abort() comment:\n> >\n> > /* Stop skipping transaction transaction, if enabled */\n> > should be:\n> > /* Stop skipping transaction changes, if enabled */\n>\n> Fixed.\n>\n> I've attached updated patches.\n\nI have started to have a look at the feature and review the patch, my\ninitial comments:\n1) I could specify invalid subscriber id to\npg_stat_reset_subscription_worker which creates an assertion failure?\n\n+static void\n+pgstat_recv_resetsubworkercounter(PgStat_MsgResetsubworkercounter\n*msg, int len)\n+{\n+ PgStat_StatSubWorkerEntry *wentry;\n+\n+ Assert(OidIsValid(msg->m_subid));\n+\n+ /* Get subscription worker stats */\n+ wentry = pgstat_get_subworker_entry(msg->m_subid,\nmsg->m_subrelid, false);\n\npostgres=# select pg_stat_reset_subscription_worker(NULL, NULL);\n pg_stat_reset_subscription_worker\n-----------------------------------\n\n(1 row)\n\nTRAP: FailedAssertion(\"OidIsValid(msg->m_subid)\", File: \"pgstat.c\",\nLine: 5742, PID: 789588)\npostgres: stats collector (ExceptionalCondition+0xd0)[0x55d33bba4778]\npostgres: stats collector (+0x545a43)[0x55d33b90aa43]\npostgres: stats collector (+0x541fad)[0x55d33b906fad]\npostgres: stats collector (pgstat_start+0xdd)[0x55d33b9020e1]\npostgres: stats collector (+0x54ae0c)[0x55d33b90fe0c]\n/lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7f8509ccc1f0]\n/lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7f8509a78ac7]\npostgres: stats collector (+0x548cab)[0x55d33b90dcab]\npostgres: 
stats collector (PostmasterMain+0x134c)[0x55d33b90d5c6]\npostgres: stats collector (+0x43b8be)[0x55d33b8008be]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7f8509992565]\npostgres: stats collector (_start+0x2e)[0x55d33b48e4fe]\n\n2) I was able to provide invalid relation id for\npg_stat_reset_subscription_worker? Should we add any validation for\nthis?\nselect pg_stat_reset_subscription_worker(16389, -1);\n\n+pg_stat_reset_subscription_worker(PG_FUNCTION_ARGS)\n+{\n+ Oid subid = PG_GETARG_OID(0);\n+ Oid relid;\n+\n+ if (PG_ARGISNULL(1))\n+ relid = InvalidOid; /* reset apply worker\nerror stats */\n+ else\n+ relid = PG_GETARG_OID(1); /* reset table sync\nworker error stats */\n+\n+ pgstat_reset_subworker_stats(subid, relid);\n+\n+ PG_RETURN_VOID();\n+}\n\n3) 025_error_report test is failing because of one of the recent\ncommit that has made some changes in the way node is initialized in\nthe tap tests, corresponding changes need to be done in\n025_error_report:\nt/025_error_report.pl .............. Dubious, test returned 2 (wstat 512, 0x200)\nNo subtests run\nt/100_bugs.pl ...................... ok\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 28 Oct 2021 16:16:51 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 6:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 1:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 4:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Oct 27, 2021 at 8:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Oct 26, 2021 at 7:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > You have a point. The other alternatives on this line could be:\n> > > > > >\n> > > > > > Alter Subscription <sub_name> SKIP ( subscription_parameter [=value] [, ... ] );\n> > > > > >\n> > > > > > where subscription_parameter can be one of:\n> > > > > > xid = <xid_val>\n> > > > > > lsn = <lsn_val>\n> > > > > > ...\n> > > > >\n> > > > > Looks better.\n> > > > >\n> > > >\n> > > > If we want to follow the above, then how do we allow users to reset\n> > > > the parameter? One way is to allow the user to set xid as 0 which\n> > > > would mean that we reset it. The other way is to allow SET/RESET\n> > > > before SKIP but not sure if that is a good option.\n> > > >\n> > >\n> > > After thinking some more on this, I think it is better to not use\n> > > SET/RESET keyword here. I think we can use a model similar to how we\n> > > allow setting some of the options in Alter Database:\n> > >\n> > > # Set the connection limit for a database:\n> > > Alter Database akapila WITH connection_limit = 1;\n> > > # Reset the connection limit\n> > > Alter Database akapila WITH connection_limit = -1;\n> > >\n> > > Thoughts?\n> >\n> > Agreed.\n> >\n> > Another thing I'm concerned is that the syntax \"SKIP (\n> > subscription_parameter [=value] [, ...])\" looks like we can specify\n> > multiple options for example, \"SKIP (xid = '100', lsn =\n> > '0/12345678’)”. Is there a case where we need to specify multiple\n> > options? 
Perhaps when specifying the target XID and operations for\n> > example, “SKIP (xid = 100, action = ‘insert, update’)”?\n> >\n>\n> Yeah, or maybe prepared transaction identifier and actions.\n\nPrepared transactions seem not to need to be skipped, though, since\nthose changes are already successfully applied.\n\n> BTW, if we\n> want to proceed without the SET/RESET keyword then you can prepare the\n> SKIP xid patch as the second in the series and we can probably work on\n> the RESET syntax as a completely independent patch.\n\nRight. If we do that, the second patch can be an independent patch.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 29 Oct 2021 09:47:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > I've attached updated patches.\n> >\n> > Thank you for the comments!\n> >\n> > >\n> > > Few comments:\n> > > ==============\n> > > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > > not, can't we send a message to remove tablesync errors once tablesync\n> > > is successful (say when we reset skip_xid or when tablesync is\n> > > finished) or when we drop subscription? I think the same applies to\n> > > apply worker. I think we may want to track it in some way whether an\n> > > error has occurred before sending the message but relying completely\n> > > on a vacuum might be the recipe of bloat. I think in the case of a\n> > > drop subscription we can simply send the message as that is not a\n> > > frequent operation. I might be missing something here because in the\n> > > tests after drop subscription you are expecting the entries from the\n> > > view to get cleared\n> >\n> > Yes, I think we can have tablesync worker send a message to drop stats\n> > once tablesync is successful. But if we do that also when dropping a\n> > subscription, I think we need to do that only the transaction is\n> > committed since we can drop a subscription that doesn't have a\n> > replication slot and rollback the transaction. Probably we can send\n> > the message only when the subscritpion does have a replication slot.\n> >\n>\n> Right. And probably for apply worker after updating skip xid.\n\nI'm not sure it's better to drop apply worker stats after resetting\nskip xid (i.e., after skipping the transaction). 
Since the view is a\ncumulative view and has last_error_time, I thought we can have the\napply worker stats until the subscription gets dropped. Since the\nerror reporting message could get lost, no entry in the view doesn’t\nmean the worker doesn’t face an issue.\n\n>\n> > In other cases, we can remember the subscriptions being dropped and\n> > send the message to drop the statistics of them after committing the\n> > transaction but I’m not sure it’s worth having it.\n> >\n>\n> Yeah, let's not go to that extent. I think in most cases subscriptions\n> will have corresponding slots.\n\nAgreed.\n\n>\n> FWIW, we completely\n> > rely on pg_stat_vacuum_stats() for cleaning up the dead tables and\n> > functions. And we don't expect there are many subscriptions on the\n> > database.\n> >\n>\n> True, but we do send it for the database, so let's do it for the cases\n> you explained in the first paragraph.\n\nAgreed.\n\nI've attached a new version patch. Since the syntax of skipping\ntransaction id is under the discussion I've attached only the error\nreporting patch for now.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 29 Oct 2021 14:24:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Oct 28, 2021 at 7:47 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Oct 21, 2021 at 10:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Oct 20, 2021 at 12:33 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Mon, Oct 18, 2021 at 12:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I've attached updated patches that incorporate all comments I got so far.\n> > > >\n> > >\n> > > Minor comment on patch 17-0003\n> >\n> > Thank you for the comment!\n> >\n> > >\n> > > src/backend/replication/logical/worker.c\n> > >\n> > > (1) Typo in apply_handle_stream_abort() comment:\n> > >\n> > > /* Stop skipping transaction transaction, if enabled */\n> > > should be:\n> > > /* Stop skipping transaction changes, if enabled */\n> >\n> > Fixed.\n> >\n> > I've attached updated patches.\n>\n> I have started to have a look at the feature and review the patch, my\n> initial comments:\n\nThank you for the comments!\n\n> 1) I could specify invalid subscriber id to\n> pg_stat_reset_subscription_worker which creates an assertion failure?\n>\n> +static void\n> +pgstat_recv_resetsubworkercounter(PgStat_MsgResetsubworkercounter\n> *msg, int len)\n> +{\n> + PgStat_StatSubWorkerEntry *wentry;\n> +\n> + Assert(OidIsValid(msg->m_subid));\n> +\n> + /* Get subscription worker stats */\n> + wentry = pgstat_get_subworker_entry(msg->m_subid,\n> msg->m_subrelid, false);\n>\n> postgres=# select pg_stat_reset_subscription_worker(NULL, NULL);\n> pg_stat_reset_subscription_worker\n> -----------------------------------\n>\n> (1 row)\n>\n> TRAP: FailedAssertion(\"OidIsValid(msg->m_subid)\", File: \"pgstat.c\",\n> Line: 5742, PID: 789588)\n> postgres: stats collector (ExceptionalCondition+0xd0)[0x55d33bba4778]\n> postgres: stats collector (+0x545a43)[0x55d33b90aa43]\n> postgres: stats collector (+0x541fad)[0x55d33b906fad]\n> postgres: stats collector (pgstat_start+0xdd)[0x55d33b9020e1]\n> postgres: stats collector 
(+0x54ae0c)[0x55d33b90fe0c]\n> /lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7f8509ccc1f0]\n> /lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7f8509a78ac7]\n> postgres: stats collector (+0x548cab)[0x55d33b90dcab]\n> postgres: stats collector (PostmasterMain+0x134c)[0x55d33b90d5c6]\n> postgres: stats collector (+0x43b8be)[0x55d33b8008be]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7f8509992565]\n> postgres: stats collector (_start+0x2e)[0x55d33b48e4fe]\n\nGood catch. Fixed.\n\n>\n> 2) I was able to provide invalid relation id for\n> pg_stat_reset_subscription_worker? Should we add any validation for\n> this?\n> select pg_stat_reset_subscription_worker(16389, -1);\n>\n> +pg_stat_reset_subscription_worker(PG_FUNCTION_ARGS)\n> +{\n> + Oid subid = PG_GETARG_OID(0);\n> + Oid relid;\n> +\n> + if (PG_ARGISNULL(1))\n> + relid = InvalidOid; /* reset apply worker\n> error stats */\n> + else\n> + relid = PG_GETARG_OID(1); /* reset table sync\n> worker error stats */\n> +\n> + pgstat_reset_subworker_stats(subid, relid);\n> +\n> + PG_RETURN_VOID();\n> +}\n\nI think that validation is not necessarily necessary. OID '-1' is interpreted as\n4294967295 and we don't reject it.\n\n>\n> 3) 025_error_report test is failing because of one of the recent\n> commit that has made some changes in the way node is initialized in\n> the tap tests, corresponding changes need to be done in\n> 025_error_report:\n> t/025_error_report.pl .............. Dubious, test returned 2 (wstat 512, 0x200)\n> No subtests run\n> t/100_bugs.pl ...................... ok\n\nFixed.\n\nThese comments are incorporated into the latest version patch I just\nsubmitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDY-9_x819F_m1_wfCVXXFJrGiSmR2MfC9Nw4nW8Om0qA%40mail.gmail.com\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 29 Oct 2021 14:29:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 6:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 6:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > Another thing I'm concerned is that the syntax \"SKIP (\n> > > subscription_parameter [=value] [, ...])\" looks like we can specify\n> > > multiple options for example, \"SKIP (xid = '100', lsn =\n> > > '0/12345678’)”. Is there a case where we need to specify multiple\n> > > options? Perhaps when specifying the target XID and operations for\n> > > example, “SKIP (xid = 100, action = ‘insert, update’)”?\n> > >\n> >\n> > Yeah, or maybe prepared transaction identifier and actions.\n>\n> Prepared transactions seem not to need to be skipped since those\n> changes are already successfully applied, though.\n>\n\nI think it can also fail before apply of prepare is successful. Right\nnow, we are just logging xid in error cases bug gid could also be\nlogged as we receive that in begin_prepare. I think currently xid is\nsufficient but I have given this as an example for future\nconsideration.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Oct 2021 14:32:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 10:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > I've attached updated patches.\n> > >\n> > > Thank you for the comments!\n> > >\n> > > >\n> > > > Few comments:\n> > > > ==============\n> > > > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > > > not, can't we send a message to remove tablesync errors once tablesync\n> > > > is successful (say when we reset skip_xid or when tablesync is\n> > > > finished) or when we drop subscription? I think the same applies to\n> > > > apply worker. I think we may want to track it in some way whether an\n> > > > error has occurred before sending the message but relying completely\n> > > > on a vacuum might be the recipe of bloat. I think in the case of a\n> > > > drop subscription we can simply send the message as that is not a\n> > > > frequent operation. I might be missing something here because in the\n> > > > tests after drop subscription you are expecting the entries from the\n> > > > view to get cleared\n> > >\n> > > Yes, I think we can have tablesync worker send a message to drop stats\n> > > once tablesync is successful. But if we do that also when dropping a\n> > > subscription, I think we need to do that only the transaction is\n> > > committed since we can drop a subscription that doesn't have a\n> > > replication slot and rollback the transaction. Probably we can send\n> > > the message only when the subscritpion does have a replication slot.\n> > >\n> >\n> > Right. 
And probably for apply worker after updating skip xid.\n>\n> I'm not sure it's better to drop apply worker stats after resetting\n> skip xid (i.g., after skipping the transaction). Since the view is a\n> cumulative view and has last_error_time, I thought we can have the\n> apply worker stats until the subscription gets dropped.\n>\n\nFair enough. So statistics can be removed either by vacuum or drop\nsubscription. Also, if we go by this logic then there is no harm in\nretaining the stat entries for tablesync errors. Why have different\nbehavior for apply and tablesync workers?\n\nI have another question in this regard. Currently, the reset function\nseems to be resetting only the first stat entry for a subscription.\nBut can't we have multiple stat entries for a subscription considering\nthe view's cumulative nature?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 29 Oct 2021 16:49:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 4:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 10:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I'm not sure it's better to drop apply worker stats after resetting\n> > skip xid (i.g., after skipping the transaction). Since the view is a\n> > cumulative view and has last_error_time, I thought we can have the\n> > apply worker stats until the subscription gets dropped.\n> >\n>\n> Fair enough. So statistics can be removed either by vacuum or drop\n> subscription. Also, if we go by this logic then there is no harm in\n> retaining the stat entries for tablesync errors. Why have different\n> behavior for apply and tablesync workers?\n>\n> I have another question in this regard. Currently, the reset function\n> seems to be resetting only the first stat entry for a subscription.\n> But can't we have multiple stat entries for a subscription considering\n> the view's cumulative nature?\n>\n\nDon't we want these stats to be dealt in the same way as tables and\nfunctions as all the stats entries (subscription entries) are specific\nto a particular database? If so, I think we should write/read these\nto/from db specific stats file in the same way as we do for tables or\nfunctions. I think in the current patch, it will unnecessarily read\nand probably write subscription stats even when those are not\nrequired.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 30 Oct 2021 08:51:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 10:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > I've attached updated patches.\n> > > >\n> > > > Thank you for the comments!\n> > > >\n> > > > >\n> > > > > Few comments:\n> > > > > ==============\n> > > > > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > > > > not, can't we send a message to remove tablesync errors once tablesync\n> > > > > is successful (say when we reset skip_xid or when tablesync is\n> > > > > finished) or when we drop subscription? I think the same applies to\n> > > > > apply worker. I think we may want to track it in some way whether an\n> > > > > error has occurred before sending the message but relying completely\n> > > > > on a vacuum might be the recipe of bloat. I think in the case of a\n> > > > > drop subscription we can simply send the message as that is not a\n> > > > > frequent operation. I might be missing something here because in the\n> > > > > tests after drop subscription you are expecting the entries from the\n> > > > > view to get cleared\n> > > >\n> > > > Yes, I think we can have tablesync worker send a message to drop stats\n> > > > once tablesync is successful. But if we do that also when dropping a\n> > > > subscription, I think we need to do that only the transaction is\n> > > > committed since we can drop a subscription that doesn't have a\n> > > > replication slot and rollback the transaction. 
Probably we can send\n> > > > the message only when the subscritpion does have a replication slot.\n> > > >\n> > >\n> > > Right. And probably for apply worker after updating skip xid.\n> >\n> > I'm not sure it's better to drop apply worker stats after resetting\n> > skip xid (i.g., after skipping the transaction). Since the view is a\n> > cumulative view and has last_error_time, I thought we can have the\n> > apply worker stats until the subscription gets dropped.\n> >\n>\n> Fair enough. So statistics can be removed either by vacuum or drop\n> subscription. Also, if we go by this logic then there is no harm in\n> retaining the stat entries for tablesync errors. Why have different\n> behavior for apply and tablesync workers?\n\nMy understanding is that the subscription worker statistics entry\ncorresponds to workers (but not physical workers since the physical\nprocess is changed after restarting). So if the worker finishes its\njobs, it is no longer necessary to show errors since further problems\nwill not occur after that. Table sync worker’s job finishes when\ncompleting table copy (unless table sync is performed again by REFRESH\nPUBLICATION) whereas apply worker’s job finishes when the subscription\nis dropped. Also, I’m concerned about a situation like where a lot of\ntable sync failed. In which case, if we don’t drop table sync worker\nstatistics after completing its job, we end up having a lot of entries\nin the view unless the subscription is dropped.\n\n>\n> I have another question in this regard. Currently, the reset function\n> seems to be resetting only the first stat entry for a subscription.\n> But can't we have multiple stat entries for a subscription considering\n> the view's cumulative nature?\n\nI might be missing your points but I think that with the current\npatch, the view has multiple entries for a subscription. That is,\nthere is one apply worker stats and multiple table sync worker stats\nper subscription. 
And pg_stat_reset_subscription() function can reset\nany stats by specifying subscription OID and relation OID.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Nov 2021 10:48:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Oct 30, 2021 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 4:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Oct 29, 2021 at 10:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I'm not sure it's better to drop apply worker stats after resetting\n> > > skip xid (i.g., after skipping the transaction). Since the view is a\n> > > cumulative view and has last_error_time, I thought we can have the\n> > > apply worker stats until the subscription gets dropped.\n> > >\n> >\n> > Fair enough. So statistics can be removed either by vacuum or drop\n> > subscription. Also, if we go by this logic then there is no harm in\n> > retaining the stat entries for tablesync errors. Why have different\n> > behavior for apply and tablesync workers?\n> >\n> > I have another question in this regard. Currently, the reset function\n> > seems to be resetting only the first stat entry for a subscription.\n> > But can't we have multiple stat entries for a subscription considering\n> > the view's cumulative nature?\n> >\n>\n> Don't we want these stats to be dealt in the same way as tables and\n> functions as all the stats entries (subscription entries) are specific\n> to a particular database? If so, I think we should write/read these\n> to/from db specific stats file in the same way as we do for tables or\n> functions. I think in the current patch, it will unnecessarily read\n> and probably write subscription stats even when those are not\n> required.\n\nGood point! So probably we should have PgStat_StatDBEntry have the\nhash table for subscription worker statistics, right?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 1 Nov 2021 10:54:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 4:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached a new version patch. Since the syntax of skipping\n> transaction id is under the discussion I've attached only the error\n> reporting patch for now.\n>\n\nI have some comments on the v19-0001 patch:\n\nv19-0001\n\n(1) doc/src/sgml/monitoring.sgml\nSeems to be missing the word \"information\":\n\nBEFORE:\n+ <entry>At least one row per subscription, showing about errors that\n+ occurred on subscription.\nAFTER:\n+ <entry>At least one row per subscription, showing information about\n+ errors that occurred on subscription.\n\n\n(2) pg_stat_reset_subscription_worker(subid Oid, relid Oid)\nFirst of all, I think that the documentation for this function should\nmake it clear that a non-NULL \"subid\" parameter is required for both\nreset cases (tablesync and apply).\nPerhaps this could be done by simply changing the first sentence to say:\n\"Resets statistics of a single subscription worker error, for a worker\nrunning on subscription with <parameter>subid</parameter>.\"\n(and then can remove \" running on the subscription with\n<parameter>subid</parameter>\" from the last sentence)\n\nI think that the documentation for this function should say that it\nshould be used in conjunction with the \"pg_stat_subscription_workers\"\nview in order to obtain the required subid/relid values for resetting.\n(and should provide a link to the documentation for that view)\nAlso, I think that the function documentation should make it clear\nthat the tablesync error case is indicated by a NULL \"command\" in the\ninformation returned from the \"pg_stat_subscription_workers\" view\n(otherwise the user needs to look at the server log in order to\ndetermine whether the error is for the apply/tablesync worker).\n\nFinally, there are currently no tests for this new function.\n\n(3) pg_stat_subscription_workers\nIn the documentation for this, the description for the 
\"command\"\ncolumn says: \"This field is always NULL if the error was reported\nduring the initial data copy.\"\nSome users may not realise that this refers to \"tablesync\", so perhaps\nadd \" (tablesync)\" to the end of this sentence, or similar.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 2 Nov 2021 14:51:34 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 1, 2021 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Fair enough. So statistics can be removed either by vacuum or drop\n> > subscription. Also, if we go by this logic then there is no harm in\n> > retaining the stat entries for tablesync errors. Why have different\n> > behavior for apply and tablesync workers?\n>\n> My understanding is that the subscription worker statistics entry\n> corresponds to workers (but not physical workers since the physical\n> process is changed after restarting). So if the worker finishes its\n> jobs, it is no longer necessary to show errors since further problems\n> will not occur after that. Table sync worker’s job finishes when\n> completing table copy (unless table sync is performed again by REFRESH\n> PUBLICATION) whereas apply worker’s job finishes when the subscription\n> is dropped.\n>\n\nActually, I am not very sure how users can use the old error\ninformation after we allowed skipping the conflicting xid. Say, if\nthey want to add/remove some constraints on the table based on\nprevious errors then they might want to refer to errors of both the\napply worker and table sync worker.\n\n> Also, I’m concerned about a situation like where a lot of\n> table sync failed. In which case, if we don’t drop table sync worker\n> statistics after completing its job, we end up having a lot of entries\n> in the view unless the subscription is dropped.\n>\n\nTrue, but the same could be said for apply workers where errors can be\naccumulated over a period of time.\n\n> >\n> > I have another question in this regard. 
Currently, the reset function\n> > seems to be resetting only the first stat entry for a subscription.\n> > But can't we have multiple stat entries for a subscription considering\n> > the view's cumulative nature?\n>\n> I might be missing your points but I think that with the current\n> patch, the view has multiple entries for a subscription. That is,\n> there is one apply worker stats and multiple table sync worker stats\n> per subscription.\n>\n\nCan't we have multiple entries for one apply worker?\n\n> And pg_stat_reset_subscription() function can reset\n> any stats by specifying subscription OID and relation OID.\n>\n\nSay, if the user has supplied just subscription OID then isn't it\nbetter to reset all the error entries for that subscription?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Nov 2021 11:04:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 1, 2021 at 7:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Oct 30, 2021 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Don't we want these stats to be dealt in the same way as tables and\n> > functions as all the stats entries (subscription entries) are specific\n> > to a particular database? If so, I think we should write/read these\n> > to/from db specific stats file in the same way as we do for tables or\n> > functions. I think in the current patch, it will unnecessarily read\n> > and probably write subscription stats even when those are not\n> > required.\n>\n> Good point! So probably we should have PgStat_StatDBEntry have the\n> hash table for subscription worker statistics, right?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Nov 2021 11:06:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 2, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 1, 2021 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Fair enough. So statistics can be removed either by vacuum or drop\n> > > subscription. Also, if we go by this logic then there is no harm in\n> > > retaining the stat entries for tablesync errors. Why have different\n> > > behavior for apply and tablesync workers?\n> >\n> > My understanding is that the subscription worker statistics entry\n> > corresponds to workers (but not physical workers since the physical\n> > process is changed after restarting). So if the worker finishes its\n> > jobs, it is no longer necessary to show errors since further problems\n> > will not occur after that. Table sync worker’s job finishes when\n> > completing table copy (unless table sync is performed again by REFRESH\n> > PUBLICATION) whereas apply worker’s job finishes when the subscription\n> > is dropped.\n> >\n>\n> Actually, I am not very sure how users can use the old error\n> information after we allowed skipping the conflicting xid. Say, if\n> they want to add/remove some constraints on the table based on\n> previous errors then they might want to refer to errors of both the\n> apply worker and table sync worker.\n\nI think that in general, statistics should be retained as long as a\ncorresponding object exists on the database, like other cumulative\nstatistic views. So I’m concerned that an entry of a cumulative stats\nview is automatically removed by a non-stats-related function (i.g.,\nALTER SUBSCRIPTION SKIP). Which seems a new behavior for cumulative\nstats views.\n\nWe can retain the stats entries for table sync worker but what I want\nto avoid is that the view shows many old entries that will never be\nupdated. 
I've sometimes seen cases where the user mistakenly restored\ntable data on the subscriber before creating a subscription, failed\ntable sync on many tables due to unique violation, and truncated\ntables on the subscriber. I think that unlike the stats entries for\napply worker, retaining the stats entries for table sync could be\nharmful since it’s likely to be a large amount (even hundreds of\nentries). Especially, it could lead to bloat the stats file since it\nhas an error message. So if we do that, I'd like to provide a function\nfor users to remove (not reset) stats entries manually. Even if we\nremoved stats entries after skipping the transaction in question, the\nstats entries would be left if we resolve the conflict in another way.\n\n>\n> > >\n> > > I have another question in this regard. Currently, the reset function\n> > > seems to be resetting only the first stat entry for a subscription.\n> > > But can't we have multiple stat entries for a subscription considering\n> > > the view's cumulative nature?\n> >\n> > I might be missing your points but I think that with the current\n> > patch, the view has multiple entries for a subscription. That is,\n> > there is one apply worker stats and multiple table sync worker stats\n> > per subscription.\n> >\n>\n> Can't we have multiple entries for one apply worker?\n\nUmm, I think we have one stats entry per one logical replication\nworker (apply worker or table sync worker). Am I missing something?\n\n>\n> > And pg_stat_reset_subscription() function can reset\n> > any stats by specifying subscription OID and relation OID.\n> >\n>\n> Say, if the user has supplied just subscription OID then isn't it\n> better to reset all the error entries for that subscription?\n\nAgreed. 
So pg_stat_reset_subscription_worker(oid) removes all errors\nfor the subscription whereas pg_stat_reset_subscription_worker(oid,\nnull) resets only the apply worker error for the subscription?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 2 Nov 2021 17:47:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, October 29, 2021 1:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached a new version patch. Since the syntax of skipping\r\n> transaction id is under the discussion I've attached only the error\r\n> reporting patch for now.\r\n> \r\n> \r\n\r\nThanks for your patch. Some comments on 026_error_report.pl file.\r\n\r\n1. For test_tab_streaming table, the test only checks initial table sync and\r\ndoesn't check anything related to the new view pg_stat_subscription_workers. Do\r\nyou want to add more test cases for it?\r\n\r\n2. The subscriptions are created with two_phase option on, but I didn't see two\r\nphase transactions. Should we add some test cases for two phase transactions?\r\n\r\n3. Errors reported by table sync worker will be cleaned up if the table sync\r\nworker finish, should we add this case to the test? (After checking the table\r\nsync worker's error in the view, delete data which caused the error, then check\r\nthe view again after table sync worker finished.)\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Tue, 2 Nov 2021 09:45:27 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 2, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > > >\n> > > > I have another question in this regard. Currently, the reset function\n> > > > seems to be resetting only the first stat entry for a subscription.\n> > > > But can't we have multiple stat entries for a subscription considering\n> > > > the view's cumulative nature?\n> > >\n> > > I might be missing your points but I think that with the current\n> > > patch, the view has multiple entries for a subscription. That is,\n> > > there is one apply worker stats and multiple table sync worker stats\n> > > per subscription.\n> > >\n> >\n> > Can't we have multiple entries for one apply worker?\n>\n> Umm, I think we have one stats entry per one logical replication\n> worker (apply worker or table sync worker). Am I missing something?\n>\n\nNo, you are right. I got confused.\n\n> >\n> > > And pg_stat_reset_subscription() function can reset\n> > > any stats by specifying subscription OID and relation OID.\n> > >\n> >\n> > Say, if the user has supplied just subscription OID then isn't it\n> > better to reset all the error entries for that subscription?\n>\n> Agreed. So pg_stat_reset_subscription_worker(oid) removes all errors\n> for the subscription whereas pg_stat_reset_subscription_worker(oid,\n> null) reset only the apply worker error for the subscription?\n>\n\nYes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 2 Nov 2021 15:37:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 2, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 1, 2021 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Fair enough. So statistics can be removed either by vacuum or drop\n> > > > subscription. Also, if we go by this logic then there is no harm in\n> > > > retaining the stat entries for tablesync errors. Why have different\n> > > > behavior for apply and tablesync workers?\n> > >\n> > > My understanding is that the subscription worker statistics entry\n> > > corresponds to workers (but not physical workers since the physical\n> > > process is changed after restarting). So if the worker finishes its\n> > > jobs, it is no longer necessary to show errors since further problems\n> > > will not occur after that. Table sync worker’s job finishes when\n> > > completing table copy (unless table sync is performed again by REFRESH\n> > > PUBLICATION) whereas apply worker’s job finishes when the subscription\n> > > is dropped.\n> > >\n> >\n> > Actually, I am not very sure how users can use the old error\n> > information after we allowed skipping the conflicting xid. Say, if\n> > they want to add/remove some constraints on the table based on\n> > previous errors then they might want to refer to errors of both the\n> > apply worker and table sync worker.\n>\n> I think that in general, statistics should be retained as long as a\n> corresponding object exists on the database, like other cumulative\n> statistic views. So I’m concerned that an entry of a cumulative stats\n> view is automatically removed by a non-stats-related function (i.g.,\n> ALTER SUBSCRIPTION SKIP). 
Which seems a new behavior for cumulative\n> stats views.\n>\n> We can retain the stats entries for table sync worker but what I want\n> to avoid is that the view shows many old entries that will never be\n> updated. I've sometimes seen cases where the user mistakenly restored\n> table data on the subscriber before creating a subscription, failed\n> table sync on many tables due to unique violation, and truncated\n> tables on the subscriber. I think that unlike the stats entries for\n> apply worker, retaining the stats entries for table sync could be\n> harmful since it’s likely to be a large amount (even hundreds of\n> entries). Especially, it could lead to bloat the stats file since it\n> has an error message. So if we do that, I'd like to provide a function\n> for users to remove (not reset) stats entries manually.\n>\n\nIf we follow the idea of keeping stats at db level (in\nPgStat_StatDBEntry) as discussed above then I think we already have a\nway to remove stat entries via pg_stat_reset which removes the stats\ncorresponding to tables, functions and after this patch corresponding\nto subscriptions as well for the current database. Won't that be\nsufficient? I see your point but I think it may be better if we keep\nthe same behavior for stats of apply and table sync workers.\n\nFollowing the tables, functions, I thought of keeping the name of the\nreset function similar to \"pg_stat_reset_single_table_counters\" but I\nfeel the currently used name \"pg_stat_reset_subscription_worker\" in\nthe patch is better. Do let me know what you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 3 Nov 2021 09:11:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Oct 29, 2021 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Oct 28, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > I've attached updated patches.\n> > >\n> > > Thank you for the comments!\n> > >\n> > > >\n> > > > Few comments:\n> > > > ==============\n> > > > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > > > not, can't we send a message to remove tablesync errors once tablesync\n> > > > is successful (say when we reset skip_xid or when tablesync is\n> > > > finished) or when we drop subscription? I think the same applies to\n> > > > apply worker. I think we may want to track it in some way whether an\n> > > > error has occurred before sending the message but relying completely\n> > > > on a vacuum might be the recipe of bloat. I think in the case of a\n> > > > drop subscription we can simply send the message as that is not a\n> > > > frequent operation. I might be missing something here because in the\n> > > > tests after drop subscription you are expecting the entries from the\n> > > > view to get cleared\n> > >\n> > > Yes, I think we can have tablesync worker send a message to drop stats\n> > > once tablesync is successful. But if we do that also when dropping a\n> > > subscription, I think we need to do that only the transaction is\n> > > committed since we can drop a subscription that doesn't have a\n> > > replication slot and rollback the transaction. Probably we can send\n> > > the message only when the subscritpion does have a replication slot.\n> > >\n> >\n> > Right. 
And probably for apply worker after updating skip xid.\n>\n> I'm not sure it's better to drop apply worker stats after resetting\n> skip xid (i.g., after skipping the transaction). Since the view is a\n> cumulative view and has last_error_time, I thought we can have the\n> apply worker stats until the subscription gets dropped. Since the\n> error reporting message could get lost, no entry in the view doesn’t\n> mean the worker doesn’t face an issue.\n>\n> >\n> > > In other cases, we can remember the subscriptions being dropped and\n> > > send the message to drop the statistics of them after committing the\n> > > transaction but I’m not sure it’s worth having it.\n> > >\n> >\n> > Yeah, let's not go to that extent. I think in most cases subscriptions\n> > will have corresponding slots.\n>\n> Agreed.\n>\n> >\n> > FWIW, we completely\n> > > rely on pg_stat_vacuum_stats() for cleaning up the dead tables and\n> > > functions. And we don't expect there are many subscriptions on the\n> > > database.\n> > >\n> >\n> > True, but we do send it for the database, so let's do it for the cases\n> > you explained in the first paragraph.\n>\n> Agreed.\n>\n> I've attached a new version patch. 
Since the syntax of skipping\n> transaction id is under the discussion I've attached only the error\n> reporting patch for now.\n\nThanks for the updated patch, few comments:\n1) This check and return can be moved above CreateTemplateTupleDesc so\nthat the tuple descriptor need not be created if there is no worker\nstatistics\n+ BlessTupleDesc(tupdesc);\n+\n+ /* Get subscription worker stats */\n+ wentry = pgstat_fetch_subworker(subid, subrelid);\n+\n+ /* Return NULL if there is no worker statistics */\n+ if (wentry == NULL)\n+ PG_RETURN_NULL();\n+\n+ /* Initialise values and NULL flags arrays */\n+ MemSet(values, 0, sizeof(values));\n+ MemSet(nulls, 0, sizeof(nulls));\n\n2) \"NULL for the main apply worker\" is mentioned as \"null for the main\napply worker\" in case of pg_stat_subscription view, we can mention it\nsimilarly.\n+ <para>\n+ OID of the relation that the worker is synchronizing; NULL for the\n+ main apply worker\n+ </para></entry>\n\n3) Variable assignment can be done during declaration and then this\nassignment can be removed\n+ i = 0;\n+ /* subid */\n+ values[i++] = ObjectIdGetDatum(subid);\n\n4) I noticed that the worker error is still present when queried from\npg_stat_subscription_workers even after conflict is resolved in the\nsubscriber and the worker proceeds with applying the other\ntransactions, should this be documented somewhere?\n\n5) This needs to be aligned, the columns in select have used TAB, we\nshould align it using spaces.\n+CREATE VIEW pg_stat_subscription_workers AS\n+ SELECT\n+ w.subid,\n+ s.subname,\n+ w.subrelid,\n+ w.relid,\n+ w.command,\n+ w.xid,\n+ w.error_count,\n+ w.error_message,\n+ w.last_error_time,\n+ w.stats_reset\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 4 Nov 2021 21:27:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 3, 2021 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Nov 2, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 1, 2021 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > Fair enough. So statistics can be removed either by vacuum or drop\n> > > > > subscription. Also, if we go by this logic then there is no harm in\n> > > > > retaining the stat entries for tablesync errors. Why have different\n> > > > > behavior for apply and tablesync workers?\n> > > >\n> > > > My understanding is that the subscription worker statistics entry\n> > > > corresponds to workers (but not physical workers since the physical\n> > > > process is changed after restarting). So if the worker finishes its\n> > > > jobs, it is no longer necessary to show errors since further problems\n> > > > will not occur after that. Table sync worker’s job finishes when\n> > > > completing table copy (unless table sync is performed again by REFRESH\n> > > > PUBLICATION) whereas apply worker’s job finishes when the subscription\n> > > > is dropped.\n> > > >\n> > >\n> > > Actually, I am not very sure how users can use the old error\n> > > information after we allowed skipping the conflicting xid. Say, if\n> > > they want to add/remove some constraints on the table based on\n> > > previous errors then they might want to refer to errors of both the\n> > > apply worker and table sync worker.\n> >\n> > I think that in general, statistics should be retained as long as a\n> > corresponding object exists on the database, like other cumulative\n> > statistic views. 
So I’m concerned that an entry of a cumulative stats\n> > view is automatically removed by a non-stats-related function (i.g.,\n> > ALTER SUBSCRIPTION SKIP). Which seems a new behavior for cumulative\n> > stats views.\n> >\n> > We can retain the stats entries for table sync worker but what I want\n> > to avoid is that the view shows many old entries that will never be\n> > updated. I've sometimes seen cases where the user mistakenly restored\n> > table data on the subscriber before creating a subscription, failed\n> > table sync on many tables due to unique violation, and truncated\n> > tables on the subscriber. I think that unlike the stats entries for\n> > apply worker, retaining the stats entries for table sync could be\n> > harmful since it’s likely to be a large amount (even hundreds of\n> > entries). Especially, it could lead to bloat the stats file since it\n> > has an error message. So if we do that, I'd like to provide a function\n> > for users to remove (not reset) stats entries manually.\n> >\n>\n> If we follow the idea of keeping stats at db level (in\n> PgStat_StatDBEntry) as discussed above then I think we already have a\n> way to remove stat entries via pg_stat_reset which removes the stats\n> corresponding to tables, functions and after this patch corresponding\n> to subscriptions as well for the current database. Won't that be\n> sufficient? I see your point but I think it may be better if we keep\n> the same behavior for stats of apply and table sync workers.\n\nMake sense.\n\n>\n> Following the tables, functions, I thought of keeping the name of the\n> reset function similar to \"pg_stat_reset_single_table_counters\" but I\n> feel the currently used name \"pg_stat_reset_subscription_worker\" in\n> the patch is better. Do let me know what you think?\n\nYeah, I also tend to prefer pg_stat_reset_subscription_worker name\nsince \"single\" isn't clear in the context of subscription worker. 
And\nthe behavior of the reset function for subscription workers is also\ndifferent from pg_stat_reset_single_xxx_counters.\n\nI've attached an updated patch. In this version patch, subscription\nworker statistics are collected per-database and handled in a similar\nway to tables and functions. I think perhaps we still need to discuss\ndetails of how the statistics should be handled but I'd like to share\nthe patch for discussion.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Sun, 7 Nov 2021 23:19:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 5, 2021 at 12:57 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Oct 29, 2021 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Oct 28, 2021 at 7:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Oct 28, 2021 at 10:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Oct 27, 2021 at 7:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Oct 21, 2021 at 10:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > I've attached updated patches.\n> > > >\n> > > > Thank you for the comments!\n> > > >\n> > > > >\n> > > > > Few comments:\n> > > > > ==============\n> > > > > 1. Is the patch cleaning tablesync error entries except via vacuum? If\n> > > > > not, can't we send a message to remove tablesync errors once tablesync\n> > > > > is successful (say when we reset skip_xid or when tablesync is\n> > > > > finished) or when we drop subscription? I think the same applies to\n> > > > > apply worker. I think we may want to track it in some way whether an\n> > > > > error has occurred before sending the message but relying completely\n> > > > > on a vacuum might be the recipe of bloat. I think in the case of a\n> > > > > drop subscription we can simply send the message as that is not a\n> > > > > frequent operation. I might be missing something here because in the\n> > > > > tests after drop subscription you are expecting the entries from the\n> > > > > view to get cleared\n> > > >\n> > > > Yes, I think we can have tablesync worker send a message to drop stats\n> > > > once tablesync is successful. But if we do that also when dropping a\n> > > > subscription, I think we need to do that only the transaction is\n> > > > committed since we can drop a subscription that doesn't have a\n> > > > replication slot and rollback the transaction. 
Probably we can send\n> > > > the message only when the subscritpion does have a replication slot.\n> > > >\n> > >\n> > > Right. And probably for apply worker after updating skip xid.\n> >\n> > I'm not sure it's better to drop apply worker stats after resetting\n> > skip xid (i.g., after skipping the transaction). Since the view is a\n> > cumulative view and has last_error_time, I thought we can have the\n> > apply worker stats until the subscription gets dropped. Since the\n> > error reporting message could get lost, no entry in the view doesn’t\n> > mean the worker doesn’t face an issue.\n> >\n> > >\n> > > > In other cases, we can remember the subscriptions being dropped and\n> > > > send the message to drop the statistics of them after committing the\n> > > > transaction but I’m not sure it’s worth having it.\n> > > >\n> > >\n> > > Yeah, let's not go to that extent. I think in most cases subscriptions\n> > > will have corresponding slots.\n> >\n> > Agreed.\n> >\n> > >\n> > > FWIW, we completely\n> > > > rely on pg_stat_vacuum_stats() for cleaning up the dead tables and\n> > > > functions. And we don't expect there are many subscriptions on the\n> > > > database.\n> > > >\n> > >\n> > > True, but we do send it for the database, so let's do it for the cases\n> > > you explained in the first paragraph.\n> >\n> > Agreed.\n> >\n> > I've attached a new version patch. 
Since the syntax of skipping\n> > transaction id is under the discussion I've attached only the error\n> > reporting patch for now.\n>\n> Thanks for the updated patch, few comments:\n> 1) This check and return can be moved above CreateTemplateTupleDesc so\n> that the tuple descriptor need not be created if there is no worker\n> statistics\n> + BlessTupleDesc(tupdesc);\n> +\n> + /* Get subscription worker stats */\n> + wentry = pgstat_fetch_subworker(subid, subrelid);\n> +\n> + /* Return NULL if there is no worker statistics */\n> + if (wentry == NULL)\n> + PG_RETURN_NULL();\n> +\n> + /* Initialise values and NULL flags arrays */\n> + MemSet(values, 0, sizeof(values));\n> + MemSet(nulls, 0, sizeof(nulls));\n>\n> 2) \"NULL for the main apply worker\" is mentioned as \"null for the main\n> apply worker\" in case of pg_stat_subscription view, we can mention it\n> similarly.\n> + <para>\n> + OID of the relation that the worker is synchronizing; NULL for the\n> + main apply worker\n> + </para></entry>\n>\n> 3) Variable assignment can be done during declaration and this the\n> assignment can be removed\n> + i = 0;\n> + /* subid */\n> + values[i++] = ObjectIdGetDatum(subid);\n>\n> 4) I noticed that the worker error is still present when queried from\n> pg_stat_subscription_workers even after conflict is resolved in the\n> subscriber and the worker proceeds with applying the other\n> transactions, should this be documented somewhere?\n>\n> 5) This needs to be aligned, the columns in select have used TAB, we\n> should align it using spaces.\n> +CREATE VIEW pg_stat_subscription_workers AS\n> + SELECT\n> + w.subid,\n> + s.subname,\n> + w.subrelid,\n> + w.relid,\n> + w.command,\n> + w.xid,\n> + w.error_count,\n> + w.error_message,\n> + w.last_error_time,\n> + w.stats_reset\n>\n\nThank you for the comments! 
These comments are incorporated into the\nlatest (v20) patch I just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sun, 7 Nov 2021 23:21:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 8, 2021 at 1:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch. In this version patch, subscription\n> worker statistics are collected per-database and handled in a similar\n> way to tables and functions. I think perhaps we still need to discuss\n> details of how the statistics should be handled but I'd like to share\n> the patch for discussion.\n>\n\nThanks for the updated patch.\nSome initial comments on the v20 patch:\n\n\ndoc/src/sgml/monitoring.sgml\n\n(1) wording\nThe word \"information\" seems to be missing after \"showing\" (otherwise\nit reads \"showing about errors\", which isn't correct grammar).\nI suggest the following change:\n\nBEFORE:\n+ <entry>At least one row per subscription, showing about errors that\n+ occurred on subscription.\nAFTER:\n+ <entry>At least one row per subscription, showing information about\n+ errors that occurred on subscription.\n\n(2) pg_stat_reset_subscription_worker(subid Oid, relid Oid) function\ndocumentation\nThe description doesn't read well. 
I'd suggest the following change:\n\nBEFORE:\n* Resets statistics of a single subscription worker statistics.\nAFTER:\n* Resets the statistics of a single subscription worker.\n\nI think that the documentation for this function should make it clear\nthat a non-NULL \"subid\" parameter is required for both reset cases\n(tablesync and apply).\nPerhaps this could be done by simply changing the first sentence to say:\n\"Resets the statistics of a single subscription worker, for a worker\nrunning on the subscription with <parameter>subid</parameter>.\"\n(and then can remove \" running on the subscription with\n<parameter>subid</parameter>\" from the last sentence)\n\nI think that the documentation for this function should say that it\nshould be used in conjunction with the \"pg_stat_subscription_workers\"\nview in order to obtain the required subid/relid values for resetting.\n(and should provide a link to the documentation for that view)\n\nAlso, I think that the function documentation should make it clear how\nto distinguish the tablesync vs apply worker statistics case.\ne.g. 
the tablesync error case is indicated by a null \"command\" in the\ninformation returned from the \"pg_stat_subscription_workers\" view\n(otherwise it seems a user could only know this by looking at the server log).\n\nFinally, there are currently no tests for this new function.\n\n(3) pg_stat_subscription_workers\nIn the documentation for this, some users may not realise that \"the\ninitial data copy\" refers to \"tablesync\", so maybe say \"the initial\ndata copy (tablesync)\", or similar.\n\n(4) stats_reset\n\"stats_reset\" is currently documented as the last column of the\n\"pg_stat_subscription_workers\" view - but it's actually no longer\nincluded in the view.\n\n(5) src/tools/pgindent/typedefs.list\nThe following current entries are bogus:\nPgStat_MsgSubWorkerErrorPurge\nPgStat_MsgSubWorkerPurge\n\nThe following entry is missing:\nPgStat_MsgSubscriptionPurge\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 8 Nov 2021 18:10:30 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> I've attached an updated patch. In this version patch, subscription\n> worker statistics are collected per-database and handled in a similar\n> way to tables and functions. I think perhaps we still need to discuss\n> details of how the statistics should be handled but I'd like to share\n> the patch for discussion.\n\nWhile reviewing the v20, I have some initial comments,\n\n+ <row>\n+ <entry><structname>pg_stat_subscription_workers</structname><indexterm><primary>pg_stat_subscription_workers</primary></indexterm></entry>\n+ <entry>At least one row per subscription, showing about errors that\n+ occurred on subscription.\n+ See <link linkend=\"monitoring-pg-stat-subscription-workers\">\n+ <structname>pg_stat_subscription_workers</structname></link> for details.\n+ </entry>\n\n1.\nI don't like the fact that this view is very specific for showing the\nerrors but the name of the view is very generic. So are we keeping\nthis name to expand the scope of the view in the future? If this is\nmeant only for showing the errors then the name should be more\nspecific.\n\n2.\nWhy comment says \"At least one row per subscription\"? this looks\nconfusing, I mean if there is no error then there will not be even one\nrow right?\n\n\n+ <para>\n+ The <structname>pg_stat_subscription_workers</structname> view will contain\n+ one row per subscription error reported by workers applying logical\n+ replication changes and workers handling the initial data copy of the\n+ subscribed tables.\n+ </para>\n\n3.\nSo there will only be one row per subscription? I did not read the\ncode, but suppose there was an error due to some constraint now if\nthat constraint is removed and there is a new error then the old error\nwill be removed immediately or it will be removed by auto vacuum? 
If\nit is not removed immediately then there could be multiple errors per\nsubscription in the view so the comment is not correct.\n\n4.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>last_error_time</structfield> <type>timestamp\nwith time zone</type>\n+ </para>\n+ <para>\n+ Time at which the last error occurred\n+ </para></entry>\n+ </row>\n\nWill it be useful to know when the first time error occurred?\n\n5.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>stats_reset</structfield> <type>timestamp with\ntime zone</type>\n+ </para>\n+ <para>\n\nThe actual view does not contain this column.\n\n6.\n+ <para>\n+ Resets statistics of a single subscription worker statistics.\n\n/Resets statistics of a single subscription worker statistics/Resets\nstatistics of a single subscription worker\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Nov 2021 11:37:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 3, 2021 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > If we follow the idea of keeping stats at db level (in\n> > PgStat_StatDBEntry) as discussed above then I think we already have a\n> > way to remove stat entries via pg_stat_reset which removes the stats\n> > corresponding to tables, functions and after this patch corresponding\n> > to subscriptions as well for the current database. Won't that be\n> > sufficient? I see your point but I think it may be better if we keep\n> > the same behavior for stats of apply and table sync workers.\n>\n> Make sense.\n>\n\nWe can document this point.\n\n> >\n> > Following the tables, functions, I thought of keeping the name of the\n> > reset function similar to \"pg_stat_reset_single_table_counters\" but I\n> > feel the currently used name \"pg_stat_reset_subscription_worker\" in\n> > the patch is better. Do let me know what you think?\n>\n> Yeah, I also tend to prefer pg_stat_reset_subscription_worker name\n> since \"single\" isn't clear in the context of subscription worker. And\n> the behavior of the reset function for subscription workers is also\n> different from pg_stat_reset_single_xxx_counters.\n>\n> I've attached an updated patch. In this version patch, subscription\n> worker statistics are collected per-database and handled in a similar\n> way to tables and functions. 
I think perhaps we still need to discuss\n> details of how the statistics should be handled but I'd like to share\n> the patch for discussion.\n>\n\nDo you have something specific in mind to discuss the details of how\nstats should be handled?\n\nFew comments/questions:\n====================\n1.\n static void pgstat_reset_replslot(PgStat_StatReplSlotEntry\n*slotstats, TimestampTz ts);\n\n+\n static void pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now);\n\nSpurious line addition.\n\n2. Why now there is no code to deal with dead table sync entries as\ncompared to previous version of patch?\n\n3. Why do we need two different functions\npg_stat_reset_subscription_worker_sub and\npg_stat_reset_subscription_worker_subrel to handle reset? Isn't it\nsufficient to reset all entries for a subscription if relid is\nInvalidOid?\n\n4. It seems now stats_reset entry is not present in\npg_stat_subscription_workers? How will users find that information if\nrequired?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Nov 2021 11:37:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch. In this version patch, subscription\n> > worker statistics are collected per-database and handled in a similar\n> > way to tables and functions. I think perhaps we still need to discuss\n> > details of how the statistics should be handled but I'd like to share\n> > the patch for discussion.\n>\n> While reviewing the v20, I have some initial comments,\n>\n> + <row>\n> + <entry><structname>pg_stat_subscription_workers</structname><indexterm><primary>pg_stat_subscription_workers</primary></indexterm></entry>\n> + <entry>At least one row per subscription, showing about errors that\n> + occurred on subscription.\n> + See <link linkend=\"monitoring-pg-stat-subscription-workers\">\n> + <structname>pg_stat_subscription_workers</structname></link> for details.\n> + </entry>\n>\n> 1.\n> I don't like the fact that this view is very specific for showing the\n> errors but the name of the view is very generic. So are we keeping\n> this name to expand the scope of the view in the future?\n>\n\nYes, we are planning to display some other xact specific stats as well\ncorresponding to subscription workers. See [1][2].\n\n[1] - https://www.postgresql.org/message-id/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1%2B1n3upCMB-Y_k9b1wPNCtNE7MEHan9kA1s6GNsZGB0Og%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Nov 2021 11:57:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Nov 3, 2021 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > If we follow the idea of keeping stats at db level (in\n> > > PgStat_StatDBEntry) as discussed above then I think we already have a\n> > > way to remove stat entries via pg_stat_reset which removes the stats\n> > > corresponding to tables, functions and after this patch corresponding\n> > > to subscriptions as well for the current database. Won't that be\n> > > sufficient? I see your point but I think it may be better if we keep\n> > > the same behavior for stats of apply and table sync workers.\n> >\n> > Make sense.\n> >\n>\n> We can document this point.\n\nOkay.\n\n>\n> > >\n> > > Following the tables, functions, I thought of keeping the name of the\n> > > reset function similar to \"pg_stat_reset_single_table_counters\" but I\n> > > feel the currently used name \"pg_stat_reset_subscription_worker\" in\n> > > the patch is better. Do let me know what you think?\n> >\n> > Yeah, I also tend to prefer pg_stat_reset_subscription_worker name\n> > since \"single\" isn't clear in the context of subscription worker. And\n> > the behavior of the reset function for subscription workers is also\n> > different from pg_stat_reset_single_xxx_counters.\n> >\n> > I've attached an updated patch. In this version patch, subscription\n> > worker statistics are collected per-database and handled in a similar\n> > way to tables and functions. 
I think perhaps we still need to discuss\n> > details of how the statistics should be handled but I'd like to share\n> > the patch for discussion.\n> >\n>\n> Do you have something specific in mind to discuss the details of how\n> stats should be handled?\n\nAs you commented, I removed stats_reset column from\npg_stat_subscription_workers view since tables and functions stats\nview doesn't have it.\n\n>\n> Few comments/questions:\n> ====================\n> 1.\n> static void pgstat_reset_replslot(PgStat_StatReplSlotEntry\n> *slotstats, TimestampTz ts);\n>\n> +\n> static void pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now);\n>\n> Spurious line addition.\n\nWill fix.\n\n>\n> 2. Why now there is no code to deal with dead table sync entries as\n> compared to previous version of patch?\n\nI think we discussed that it's better if we keep the same behavior for\nstats of apply and table sync workers. So the table sync entries are\ndead after the subscription is dropped, like apply entries. No?\n\n\n>\n> 3. Why do we need two different functions\n> pg_stat_reset_subscription_worker_sub and\n> pg_stat_reset_subscription_worker_subrel to handle reset? Isn't it\n> sufficient to reset all entries for a subscription if relid is\n> InvalidOid?\n\nSince setting InvalidOid to relid means an apply entry we cannot use\nit for that purpose.\n\n>\n> 4. It seems now stats_reset entry is not present in\n> pg_stat_subscription_workers? How will users find that information if\n> required?\n\nUsers can find it in pg_stat_databases. The same is true for table and\nfunction statistics -- they don't have stats_reset column but reset\nstats_reset of its entry on pg_stat_database.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 9 Nov 2021 15:43:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 11:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > 1.\n> > I don't like the fact that this view is very specific for showing the\n> > errors but the name of the view is very generic. So are we keeping\n> > this name to expand the scope of the view in the future?\n> >\n>\n> Yes, we are planning to display some other xact specific stats as well\n> corresponding to subscription workers. See [1][2].\n>\n> [1] - https://www.postgresql.org/message-id/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n> [2] - https://www.postgresql.org/message-id/CAA4eK1%2B1n3upCMB-Y_k9b1wPNCtNE7MEHan9kA1s6GNsZGB0Og%40mail.gmail.com\n\nThanks for pointing me to this thread, I will have a look.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 9 Nov 2021 13:10:04 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 12:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > 4. It seems now stats_reset entry is not present in\n> > pg_stat_subscription_workers? How will users find that information if\n> > required?\n>\n> Users can find it in pg_stat_databases. The same is true for table and\n> function statistics -- they don't have stats_reset column but reset\n> stats_reset of its entry on pg_stat_database.\n>\n\nOkay, but isn't it better to deal with the reset of subscription\nworkers via pgstat_recv_resetsinglecounter by introducing subobjectid?\nI think that will make code consistent for all database-related stats.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Nov 2021 15:33:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 12:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Nov 9, 2021 at 3:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > 4. It seems now stats_reset entry is not present in\n> > > pg_stat_subscription_workers? How will users find that information if\n> > > required?\n> >\n> > Users can find it in pg_stat_databases. The same is true for table and\n> > function statistics -- they don't have stats_reset column but reset\n> > stats_reset of its entry on pg_stat_database.\n> >\n>\n> Okay, but isn't it better to deal with the reset of subscription\n> workers via pgstat_recv_resetsinglecounter by introducing subobjectid?\n> I think that will make code consistent for all database-related stats.\n>\n\nAgreed. It's better to use the same function internally even if the\nSQL-callable interfaces are different.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 9 Nov 2021 21:55:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 1:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Nov 9, 2021 at 11:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 9, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > 1.\n> > > I don't like the fact that this view is very specific for showing the\n> > > errors but the name of the view is very generic. So are we keeping\n> > > this name to expand the scope of the view in the future?\n> > >\n> >\n> > Yes, we are planning to display some other xact specific stats as well\n> > corresponding to subscription workers. See [1][2].\n> >\n> > [1] - https://www.postgresql.org/message-id/OSBPR01MB48887CA8F40C8D984A6DC00CED199%40OSBPR01MB4888.jpnprd01.prod.outlook.com\n> > [2] - https://www.postgresql.org/message-id/CAA4eK1%2B1n3upCMB-Y_k9b1wPNCtNE7MEHan9kA1s6GNsZGB0Og%40mail.gmail.com\n>\n> Thanks for pointing me to this thread, I will have a look.\n>\n\nI think we can even add a line in the commit message stating that this\ncan be extended in the future to track other xact related stats for\nsubscription workers. I think it will help readers of the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 9 Nov 2021 19:01:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 3, 2021 at 12:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 2, 2021 at 2:17 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 2, 2021 at 2:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 1, 2021 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Oct 29, 2021 at 8:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > Fair enough. So statistics can be removed either by vacuum or drop\n> > > > > > subscription. Also, if we go by this logic then there is no harm in\n> > > > > > retaining the stat entries for tablesync errors. Why have different\n> > > > > > behavior for apply and tablesync workers?\n> > > > >\n> > > > > My understanding is that the subscription worker statistics entry\n> > > > > corresponds to workers (but not physical workers since the physical\n> > > > > process is changed after restarting). So if the worker finishes its\n> > > > > jobs, it is no longer necessary to show errors since further problems\n> > > > > will not occur after that. Table sync worker’s job finishes when\n> > > > > completing table copy (unless table sync is performed again by REFRESH\n> > > > > PUBLICATION) whereas apply worker’s job finishes when the subscription\n> > > > > is dropped.\n> > > > >\n> > > >\n> > > > Actually, I am not very sure how users can use the old error\n> > > > information after we allowed skipping the conflicting xid. 
Say, if\n> > > > they want to add/remove some constraints on the table based on\n> > > > previous errors then they might want to refer to errors of both the\n> > > > apply worker and table sync worker.\n> > >\n> > > I think that in general, statistics should be retained as long as a\n> > > corresponding object exists on the database, like other cumulative\n> > > statistic views. So I’m concerned that an entry of a cumulative stats\n> > > view is automatically removed by a non-stats-related function (i.g.,\n> > > ALTER SUBSCRIPTION SKIP). Which seems a new behavior for cumulative\n> > > stats views.\n> > >\n> > > We can retain the stats entries for table sync worker but what I want\n> > > to avoid is that the view shows many old entries that will never be\n> > > updated. I've sometimes seen cases where the user mistakenly restored\n> > > table data on the subscriber before creating a subscription, failed\n> > > table sync on many tables due to unique violation, and truncated\n> > > tables on the subscriber. I think that unlike the stats entries for\n> > > apply worker, retaining the stats entries for table sync could be\n> > > harmful since it’s likely to be a large amount (even hundreds of\n> > > entries). Especially, it could lead to bloat the stats file since it\n> > > has an error message. So if we do that, I'd like to provide a function\n> > > for users to remove (not reset) stats entries manually.\n> > >\n> >\n> > If we follow the idea of keeping stats at db level (in\n> > PgStat_StatDBEntry) as discussed above then I think we already have a\n> > way to remove stat entries via pg_stat_reset which removes the stats\n> > corresponding to tables, functions and after this patch corresponding\n> > to subscriptions as well for the current database. Won't that be\n> > sufficient? 
I see your point but I think it may be better if we keep\n> > the same behavior for stats of apply and table sync workers.\n>\n> Make sense.\n>\n> >\n> > Following the tables, functions, I thought of keeping the name of the\n> > reset function similar to \"pg_stat_reset_single_table_counters\" but I\n> > feel the currently used name \"pg_stat_reset_subscription_worker\" in\n> > the patch is better. Do let me know what you think?\n>\n> Yeah, I also tend to prefer pg_stat_reset_subscription_worker name\n> since \"single\" isn't clear in the context of subscription worker. And\n> the behavior of the reset function for subscription workers is also\n> different from pg_stat_reset_single_xxx_counters.\n>\n> I've attached an updated patch. In this version patch, subscription\n> worker statistics are collected per-database and handled in a similar\n> way to tables and functions. I think perhaps we still need to discuss\n> details of how the statistics should be handled but I'd like to share\n> the patch for discussion.\n\nThanks for the updated patch, Few comments:\n1) should we change \"Tables and functions hashes are initialized to\nempty\" to \"Tables, functions and subworker hashes are initialized to\nempty\"\n+ hash_ctl.keysize = sizeof(PgStat_StatSubWorkerKey);\n+ hash_ctl.entrysize = sizeof(PgStat_StatSubWorkerEntry);\n+ dbentry->subworkers = hash_create(\"Per-database subscription worker\",\n+\n PGSTAT_SUBWORKER_HASH_SIZE,\n+\n &hash_ctl,\n+\n HASH_ELEM | HASH_BLOBS);\n\n2) Since databaseid, tabhash, funchash and subworkerhash are members\nof dbentry, can we remove the function arguments databaseid, tabhash,\nfunchash and subworkerhash and pass dbentry similar to\npgstat_write_db_statsfile function?\n@@ -4370,12 +4582,14 @@ done:\n */\n static void\n pgstat_read_db_statsfile(Oid databaseid, HTAB *tabhash, HTAB *funchash,\n- bool permanent)\n+ HTAB *subworkerhash,\nbool permanent)\n {\n PgStat_StatTabEntry *tabentry;\n PgStat_StatTabEntry tabbuf;\n 
PgStat_StatFuncEntry funcbuf;\n PgStat_StatFuncEntry *funcentry;\n+ PgStat_StatSubWorkerEntry subwbuf;\n+ PgStat_StatSubWorkerEntry *subwentry;\n\n3) Can we move pgstat_get_subworker_entry below pgstat_get_db_entry\nand pgstat_get_tab_entry, so that the hash lookup can be together\nconsistently. Similarly pgstat_send_subscription_purge can be moved\nafter pgstat_send_slru.\n+/* ----------\n+ * pgstat_get_subworker_entry\n+ *\n+ * Return subscription worker entry with the given subscription OID and\n+ * relation OID. If subrelid is InvalidOid, it returns an entry of the\n+ * apply worker otherwise of the table sync worker associated with subrelid.\n+ * If no subscription entry exists, initialize it, if the create parameter\n+ * is true. Else, return NULL.\n+ * ----------\n+ */\n+static PgStat_StatSubWorkerEntry *\n+pgstat_get_subworker_entry(PgStat_StatDBEntry *dbentry, Oid subid,\nOid subrelid,\n+ bool create)\n+{\n+ PgStat_StatSubWorkerEntry *subwentry;\n+ PgStat_StatSubWorkerKey key;\n+ bool found;\n\n4) This change can be removed from pgstat.c:\n@@ -332,9 +339,11 @@ static bool pgstat_db_requested(Oid databaseid);\n static PgStat_StatReplSlotEntry *pgstat_get_replslot_entry(NameData\nname, bool create_it);\n static void pgstat_reset_replslot(PgStat_StatReplSlotEntry\n*slotstats, TimestampTz ts);\n\n+\n static void pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now);\n static void pgstat_send_funcstats(void);\n\n5) I was able to compile without including\ncatalog/pg_subscription_rel.h, we can remove including\ncatalog/pg_subscription_rel.h if not required.\n--- a/src/backend/postmaster/pgstat.c\n+++ b/src/backend/postmaster/pgstat.c\n@@ -41,6 +41,8 @@\n #include \"catalog/catalog.h\"\n #include \"catalog/pg_database.h\"\n #include \"catalog/pg_proc.h\"\n+#include \"catalog/pg_subscription.h\"\n+#include \"catalog/pg_subscription_rel.h\"\n\n 6) Similarly replication/logicalproto.h also need not be included\n --- 
a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -24,6 +24,7 @@\n #include \"pgstat.h\"\n #include \"postmaster/bgworker_internals.h\"\n #include \"postmaster/postmaster.h\"\n+#include \"replication/logicalproto.h\"\n #include \"replication/slot.h\"\n #include \"storage/proc.h\"\n\n7) There is an extra \";\", We can remove one \";\" from below:\n+ PgStat_StatSubWorkerKey key;\n+ bool found;\n+ HASHACTION action = (create ? HASH_ENTER : HASH_FIND);;\n+\n+ key.subid = subid;\n+ key.subrelid = subrelid;\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 10 Nov 2021 09:19:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 8, 2021 at 4:10 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Nov 8, 2021 at 1:20 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch. In this version patch, subscription\n> > worker statistics are collected per-database and handled in a similar\n> > way to tables and functions. I think perhaps we still need to discuss\n> > details of how the statistics should be handled but I'd like to share\n> > the patch for discussion.\n> >\n>\n> That's for the updated patch.\n> Some initial comments on the v20 patch:\n\nThank you for the comments!\n\n>\n>\n> doc/src/sgml/monitoring.sgml\n>\n> (1) wording\n> The word \"information\" seems to be missing after \"showing\" (otherwise\n> is reads \"showing about errors\", which isn't correct grammar).\n> I suggest the following change:\n>\n> BEFORE:\n> + <entry>At least one row per subscription, showing about errors that\n> + occurred on subscription.\n> AFTER:\n> + <entry>At least one row per subscription, showing information about\n> + errors that occurred on subscription.\n\nFixed.\n\n>\n> (2) pg_stat_reset_subscription_worker(subid Oid, relid Oid) function\n> documentation\n> The description doesn't read well. 
I'd suggest the following change:\n>\n> BEFORE:\n> * Resets statistics of a single subscription worker statistics.\n> AFTER:\n> * Resets the statistics of a single subscription worker.\n>\n> I think that the documentation for this function should make it clear\n> that a non-NULL \"subid\" parameter is required for both reset cases\n> (tablesync and apply).\n> Perhaps this could be done by simply changing the first sentence to say:\n> \"Resets the statistics of a single subscription worker, for a worker\n> running on the subscription with <parameter>subid</parameter>.\"\n> (and then can remove \" running on the subscription with\n> <parameter>subid</parameter>\" from the last sentence)\n\nFixed.\n\n>\n> I think that the documentation for this function should say that it\n> should be used in conjunction with the \"pg_stat_subscription_workers\"\n> view in order to obtain the required subid/relid values for resetting.\n> (and should provide a link to the documentation for that view)\n\nI think it's not necessarily true that users should use\npg_stat_subscription_workers in order to obtain subid/relid since we\ncan obtain the same also from pg_subscription_rel. But I agree that it\nshould clarify that this function resets entries of\npg_stat_subscription view. Fixed.\n\n>\n> Also, I think that the function documentation should make it clear how\n> to distinguish the tablesync vs apply worker statistics case.\n> e.g. the tablesync error case is indicated by a null \"command\" in the\n> information returned from the \"pg_stat_subscription_workers\" view\n> (otherwise it seems a user could only know this by looking at the server log).\n\nThe documentation of pg_stat_subscription_workers explains that\nsubrelid is always NULL for apply workers. 
Is it not enough?\n\n>\n> Finally, there are currently no tests for this new function.\n\nI've added some tests.\n\n>\n> (3) pg_stat_subscription_workers\n> In the documentation for this, some users may not realise that \"the\n> initial data copy\" refers to \"tablesync\", so maybe say \"the initial\n> data copy (tablesync)\", or similar.\n>\n\nPerhaps it's better not to use the term \"tablesync\" since we don't use\nthe term anywhere now. Instead, we should say it more clearly, e.g.\n\"subscription worker handling the initial data copy of the relation\", as\nthe description of pg_stat_subscription says.\n\n> (4) stats_reset\n> \"stats_reset\" is currently documented as the last column of the\n> \"pg_stat_subscription_workers\" view - but it's actually no longer\n> included in the view.\n\nRemoved.\n\n>\n> (5) src/tools/pgindent/typedefs.list\n> The following current entries are bogus:\n> PgStat_MsgSubWorkerErrorPurge\n> PgStat_MsgSubWorkerPurge\n>\n> The following entry is missing:\n> PgStat_MsgSubscriptionPurge\n\nFixed.\n\nI'll submit an updated patch soon.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 15 Nov 2021 10:37:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 9, 2021 at 3:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Nov 7, 2021 at 7:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch. In this version patch, subscription\n> > worker statistics are collected per-database and handled in a similar\n> > way to tables and functions. I think perhaps we still need to discuss\n> > details of how the statistics should be handled but I'd like to share\n> > the patch for discussion.\n>\n> While reviewing the v20, I have some initial comments,\n>\n> + <row>\n> + <entry><structname>pg_stat_subscription_workers</structname><indexterm><primary>pg_stat_subscription_workers</primary></indexterm></entry>\n> + <entry>At least one row per subscription, showing about errors that\n> + occurred on subscription.\n> + See <link linkend=\"monitoring-pg-stat-subscription-workers\">\n> + <structname>pg_stat_subscription_workers</structname></link> for details.\n> + </entry>\n>\n> 1.\n> I don't like the fact that this view is very specific for showing the\n> errors but the name of the view is very generic. So are we keeping\n> this name to expand the scope of the view in the future? If this is\n> meant only for showing the errors then the name should be more\n> specific.\n\nAs Amit already mentioned, we're planning to add more xact statistics\nto this view. I've mentioned that in the commit message.\n\n>\n> 2.\n> Why comment says \"At least one row per subscription\"? this looks\n> confusing, I mean if there is no error then there will not be even one\n> row right?\n>\n>\n> + <para>\n> + The <structname>pg_stat_subscription_workers</structname> view will contain\n> + one row per subscription error reported by workers applying logical\n> + replication changes and workers handling the initial data copy of the\n> + subscribed tables.\n> + </para>\n\nRight. Fixed.\n\n>\n> 3.\n> So there will only be one row per subscription? 
I did not read the\n> code, but suppose there was an error due to some constraint now if\n> that constraint is removed and there is a new error then the old error\n> will be removed immediately or it will be removed by auto vacuum? If\n> it is not removed immediately then there could be multiple errors per\n> subscription in the view so the comment is not correct.\n\nThere is one row per subscription worker (apply worker and tablesync\nworker). If the same error occurs consecutively, error_count is\nincremented and last_error_time is updated. Otherwise, e.g., if a\ndifferent error occurred on the apply worker, all statistics are\nupdated.\n\n>\n> 4.\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>last_error_time</structfield> <type>timestamp\n> with time zone</type>\n> + </para>\n> + <para>\n> + Time at which the last error occurred\n> + </para></entry>\n> + </row>\n>\n> Will it be useful to know when the first time error occurred?\n\nGood idea. Users can know when the subscription stopped due to this\nerror. Added.\n\n>\n> 5.\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>stats_reset</structfield> <type>timestamp with\n> time zone</type>\n> + </para>\n> + <para>\n>\n> The actual view does not contain this column.\n\nRemoved.\n\n>\n> 6.\n> + <para>\n> + Resets statistics of a single subscription worker statistics.\n>\n> /Resets statistics of a single subscription worker statistics/Resets\n> statistics of a single subscription worker\n\nFixed.\n\nI'll submit an updated patch soon.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 15 Nov 2021 10:38:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 10, 2021 at 12:49 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n>\n> Thanks for the updated patch, Few comments:\n\nThank you for the comments!\n\n> 1) should we change \"Tables and functions hashes are initialized to\n> empty\" to \"Tables, functions and subworker hashes are initialized to\n> empty\"\n> + hash_ctl.keysize = sizeof(PgStat_StatSubWorkerKey);\n> + hash_ctl.entrysize = sizeof(PgStat_StatSubWorkerEntry);\n> + dbentry->subworkers = hash_create(\"Per-database subscription worker\",\n> +\n> PGSTAT_SUBWORKER_HASH_SIZE,\n> +\n> &hash_ctl,\n> +\n> HASH_ELEM | HASH_BLOBS);\n\nFixed.\n\n>\n> 2) Since databaseid, tabhash, funchash and subworkerhash are members\n> of dbentry, can we remove the function arguments databaseid, tabhash,\n> funchash and subworkerhash and pass dbentry similar to\n> pgstat_write_db_statsfile function?\n> @@ -4370,12 +4582,14 @@ done:\n> */\n> static void\n> pgstat_read_db_statsfile(Oid databaseid, HTAB *tabhash, HTAB *funchash,\n> - bool permanent)\n> + HTAB *subworkerhash,\n> bool permanent)\n> {\n> PgStat_StatTabEntry *tabentry;\n> PgStat_StatTabEntry tabbuf;\n> PgStat_StatFuncEntry funcbuf;\n> PgStat_StatFuncEntry *funcentry;\n> + PgStat_StatSubWorkerEntry subwbuf;\n> + PgStat_StatSubWorkerEntry *subwentry;\n>\n\nAs the comment of this function says, this function has the ability to\nskip storing per-table or per-function (and or\nper-subscription-workers) data, if NULL is passed for the\ncorresponding hashtable, although that's not used at the moment. IMO\nit'd be better to keep such behavior.\n\n> 3) Can we move pgstat_get_subworker_entry below pgstat_get_db_entry\n> and pgstat_get_tab_entry, so that the hash lookup can be together\n> consistently. Similarly pgstat_send_subscription_purge can be moved\n> after pgstat_send_slru.\n> +/* ----------\n> + * pgstat_get_subworker_entry\n> + *\n> + * Return subscription worker entry with the given subscription OID and\n> + * relation OID. 
If subrelid is InvalidOid, it returns an entry of the\n> + * apply worker otherwise of the table sync worker associated with subrelid.\n> + * If no subscription entry exists, initialize it, if the create parameter\n> + * is true. Else, return NULL.\n> + * ----------\n> + */\n> +static PgStat_StatSubWorkerEntry *\n> +pgstat_get_subworker_entry(PgStat_StatDBEntry *dbentry, Oid subid,\n> Oid subrelid,\n> + bool create)\n> +{\n> + PgStat_StatSubWorkerEntry *subwentry;\n> + PgStat_StatSubWorkerKey key;\n> + bool found;\n\nAgreed. Moved.\n\n>\n> 4) This change can be removed from pgstat.c:\n> @@ -332,9 +339,11 @@ static bool pgstat_db_requested(Oid databaseid);\n> static PgStat_StatReplSlotEntry *pgstat_get_replslot_entry(NameData\n> name, bool create_it);\n> static void pgstat_reset_replslot(PgStat_StatReplSlotEntry\n> *slotstats, TimestampTz ts);\n>\n> +\n> static void pgstat_send_tabstat(PgStat_MsgTabstat *tsmsg, TimestampTz now);\n> static void pgstat_send_funcstats(void);\n\nRemoved.\n\n>\n> 5) I was able to compile without including\n> catalog/pg_subscription_rel.h, we can remove including\n> catalog/pg_subscription_rel.h if not required.\n> --- a/src/backend/postmaster/pgstat.c\n> +++ b/src/backend/postmaster/pgstat.c\n> @@ -41,6 +41,8 @@\n> #include \"catalog/catalog.h\"\n> #include \"catalog/pg_database.h\"\n> #include \"catalog/pg_proc.h\"\n> +#include \"catalog/pg_subscription.h\"\n> +#include \"catalog/pg_subscription_rel.h\"\n\nRemoved.\n\n>\n> 6) Similarly replication/logicalproto.h also need not be included\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -24,6 +24,7 @@\n> #include \"pgstat.h\"\n> #include \"postmaster/bgworker_internals.h\"\n> #include \"postmaster/postmaster.h\"\n> +#include \"replication/logicalproto.h\"\n> #include \"replication/slot.h\"\n> #include \"storage/proc.h\"\n\nRemoved;\n\n>\n> 7) There is an extra \";\", We can remove one \";\" from below:\n> + PgStat_StatSubWorkerKey key;\n> + 
bool found;\n> + HASHACTION action = (create ? HASH_ENTER : HASH_FIND);;\n> +\n> + key.subid = subid;\n> + key.subrelid = subrelid;\n\nFixed.\n\nI've attached an updated patch that incorporates all comments I got so\nfar. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 15 Nov 2021 11:48:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch that incorporates all comments I got so\n> far. Please review it.\n>\n\nThanks for the updated patch.\nA few minor comments:\n\ndoc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n\n(1) tab in doc updates\n\nThere's a tab before \"Otherwise,\":\n\n+ copy of the relation with <parameter>relid</parameter>.\n Otherwise,\n\nsrc/backend/utils/adt/pgstatfuncs.c\n\n(2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\nseems a bit long and I expected it to be multi-line (did you run\npg_indent?)\n\nsrc/include/pgstat.h\n\n(3) Remove PgStat_StatSubWorkerEntry.dbid?\n\nThe \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\nseem to be used, so I think it should be removed.\n(I could remove it and everything builds OK and tests pass).\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 15 Nov 2021 18:49:05 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch that incorporates all comments I got so\n> > far. Please review it.\n> >\n>\n> Thanks for the updated patch.\n> A few minor comments:\n>\n> doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n>\n> (1) tab in doc updates\n>\n> There's a tab before \"Otherwise,\":\n>\n> + copy of the relation with <parameter>relid</parameter>.\n> Otherwise,\n\nFixed.\n\n>\n> src/backend/utils/adt/pgstatfuncs.c\n>\n> (2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\n> seems a bit long and I expected it to be multi-line (did you run\n> pg_indent?)\n\nI ran pg_indent on pgstatfuncs.c but it didn't become a multi-line comment.\n\n>\n> src/include/pgstat.h\n>\n> (3) Remove PgStat_StatSubWorkerEntry.dbid?\n>\n> The \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\n> seem to be used, so I think it should be removed.\n> (I could remove it and everything builds OK and tests pass).\n>\n\nFixed.\n\nThank you for the comments! I've attached an updated version of the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 15 Nov 2021 18:17:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached an updated patch that incorporates all comments I got so\n> > > far. Please review it.\n> > >\n> >\n> > Thanks for the updated patch.\n> > A few minor comments:\n> >\n> > doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> >\n> > (1) tab in doc updates\n> >\n> > There's a tab before \"Otherwise,\":\n> >\n> > + copy of the relation with <parameter>relid</parameter>.\n> > Otherwise,\n>\n> Fixed.\n>\n> >\n> > src/backend/utils/adt/pgstatfuncs.c\n> >\n> > (2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\n> > seems a bit long and I expected it to be multi-line (did you run\n> > pg_indent?)\n>\n> I ran pg_indent on pgstatfuncs.c but it didn't become a multi-line comment.\n>\n> >\n> > src/include/pgstat.h\n> >\n> > (3) Remove PgStat_StatSubWorkerEntry.dbid?\n> >\n> > The \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\n> > seem to be used, so I think it should be removed.\n> > (I could remove it and everything builds OK and tests pass).\n> >\n>\n> Fixed.\n>\n> Thank you for the comments! 
I've updated an updated version patch.\n\nThanks for the updated patch.\nI found one issue:\nThis Assert can fail in a few cases:\n+void\n+pgstat_report_subworker_error(Oid subid, Oid subrelid, Oid relid,\n+\nLogicalRepMsgType command, TransactionId xid,\n+ const char *errmsg)\n+{\n+ PgStat_MsgSubWorkerError msg;\n+ int len;\n+\n+ Assert(strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN);\n+ len = offsetof(PgStat_MsgSubWorkerError, m_message[0]) +\nstrlen(errmsg) + 1;\n+\n\nI could reproduce the problem with the following scenario:\nPublisher:\ncreate table t1 (c1 varchar);\ncreate publication pub1 for table t1;\ninsert into t1 values(repeat('abcd', 5000));\n\nSubscriber:\ncreate table t1(c1 smallint);\ncreate subscription sub1 connection 'dbname=postgres port=5432'\npublication pub1 with ( two_phase = true);\npostgres=# select * from pg_stat_subscription_workers;\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\nSubscriber logs:\n2021-11-15 19:27:56.380 IST [15685] LOG: logical replication apply\nworker for subscription \"sub1\" has started\n2021-11-15 19:27:56.384 IST [15687] LOG: logical replication table\nsynchronization worker for subscription \"sub1\", table \"t1\" has started\nTRAP: FailedAssertion(\"strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN\",\nFile: \"pgstat.c\", Line: 1946, PID: 15687)\npostgres: logical replication worker for subscription 16387 sync 16384\n(ExceptionalCondition+0xd0)[0x55a18f3c727f]\npostgres: logical replication worker for subscription 16387 sync 16384\n(pgstat_report_subworker_error+0x7a)[0x55a18f126417]\npostgres: logical replication worker for subscription 16387 sync 16384\n(ApplyWorkerMain+0x493)[0x55a18f176611]\npostgres: logical replication worker for subscription 16387 sync 16384\n(StartBackgroundWorker+0x23c)[0x55a18f11f7e2]\npostgres: logical replication worker for subscription 16387 sync 16384\n(+0x54efc0)[0x55a18f134fc0]\npostgres: logical replication worker for subscription 16387 sync 16384\n(+0x54f3af)[0x55a18f1353af]\npostgres: logical replication worker for subscription 16387 sync 16384\n(+0x54e338)[0x55a18f134338]\n/lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7feef84371f0]\n/lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7feef81e3ac7]\npostgres: logical replication worker for subscription 16387 sync 16384\n(+0x5498c2)[0x55a18f12f8c2]\npostgres: logical replication worker for subscription 16387 sync 16384\n(PostmasterMain+0x134c)[0x55a18f12f1dd]\npostgres: logical replication worker for subscription 16387 sync 16384\n(+0x43c3d4)[0x55a18f0223d4]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7feef80fd565]\npostgres: logical replication worker for subscription 16387 sync 16384\n(_start+0x2e)[0x55a18ecaf4fe]\n2021-11-15 19:27:56.483 IST [15645] LOG: background worker \"logical\nreplication worker\" (PID 15687) was terminated by signal 6: Aborted\n2021-11-15 19:27:56.483 IST [15645] 
LOG: terminating any other active\nserver processes\n2021-11-15 19:27:56.485 IST [15645] LOG: all server processes\nterminated; reinitializing\n\nHere it fails because of a long error message \"\"invalid input syntax\nfor type smallint:\n\\\"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabc....\"\nbecause we try to insert varchar type data into smallint type. Maybe\nwe should trim the error message in this case.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 15 Nov 2021 20:13:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I've attached an updated patch that incorporates all comments I got so\n> > > > far. Please review it.\n> > > >\n> > >\n> > > Thanks for the updated patch.\n> > > A few minor comments:\n> > >\n> > > doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > >\n> > > (1) tab in doc updates\n> > >\n> > > There's a tab before \"Otherwise,\":\n> > >\n> > > + copy of the relation with <parameter>relid</parameter>.\n> > > Otherwise,\n> >\n> > Fixed.\n> >\n> > >\n> > > src/backend/utils/adt/pgstatfuncs.c\n> > >\n> > > (2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\n> > > seems a bit long and I expected it to be multi-line (did you run\n> > > pg_indent?)\n> >\n> > I ran pg_indent on pgstatfuncs.c but it didn't become a multi-line comment.\n> >\n> > >\n> > > src/include/pgstat.h\n> > >\n> > > (3) Remove PgStat_StatSubWorkerEntry.dbid?\n> > >\n> > > The \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\n> > > seem to be used, so I think it should be removed.\n> > > (I could remove it and everything builds OK and tests pass).\n> > >\n> >\n> > Fixed.\n> >\n> > Thank you for the comments! 
I've updated an updated version patch.\n>\n> Thanks for the updated patch.\n> I found one issue:\n> This Assert can fail in few cases:\n> +void\n> +pgstat_report_subworker_error(Oid subid, Oid subrelid, Oid relid,\n> +\n> LogicalRepMsgType command, TransactionId xid,\n> + const char *errmsg)\n> +{\n> + PgStat_MsgSubWorkerError msg;\n> + int len;\n> +\n> + Assert(strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN);\n> + len = offsetof(PgStat_MsgSubWorkerError, m_message[0]) +\n> strlen(errmsg) + 1;\n> +\n>\n> I could reproduce the problem with the following scenario:\n> Publisher:\n> create table t1 (c1 varchar);\n> create publication pub1 for table t1;\n> insert into t1 values(repeat('abcd', 5000));\n>\n> Subscriber:\n> create table t1(c1 smallint);\n> create subscription sub1 connection 'dbname=postgres port=5432'\n> publication pub1 with ( two_phase = true);\n> postgres=# select * from pg_stat_subscription_workers;\n> WARNING: terminating connection because of crash of another server process\n> DETAIL: The postmaster has commanded this server process to roll back\n> the current transaction and exit, because another server process\n> exited abnormally and possibly corrupted shared memory.\n> HINT: In a moment you should be able to reconnect to the database and\n> repeat your command.\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n>\n> Subscriber logs:\n> 2021-11-15 19:27:56.380 IST [15685] LOG: logical replication apply\n> worker for subscription \"sub1\" has started\n> 2021-11-15 19:27:56.384 IST [15687] LOG: logical replication table\n> synchronization worker for subscription \"sub1\", table \"t1\" has started\n> TRAP: FailedAssertion(\"strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN\",\n> File: \"pgstat.c\", Line: 1946, PID: 15687)\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (ExceptionalCondition+0xd0)[0x55a18f3c727f]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (pgstat_report_subworker_error+0x7a)[0x55a18f126417]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (ApplyWorkerMain+0x493)[0x55a18f176611]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (StartBackgroundWorker+0x23c)[0x55a18f11f7e2]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (+0x54efc0)[0x55a18f134fc0]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (+0x54f3af)[0x55a18f1353af]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (+0x54e338)[0x55a18f134338]\n> /lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7feef84371f0]\n> /lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7feef81e3ac7]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (+0x5498c2)[0x55a18f12f8c2]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (PostmasterMain+0x134c)[0x55a18f12f1dd]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (+0x43c3d4)[0x55a18f0223d4]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7feef80fd565]\n> postgres: logical replication worker for subscription 16387 sync 16384\n> (_start+0x2e)[0x55a18ecaf4fe]\n> 2021-11-15 19:27:56.483 IST [15645] LOG: background worker \"logical\n> replication worker\" (PID 15687) was 
terminated by signal 6: Aborted\n> 2021-11-15 19:27:56.483 IST [15645] LOG: terminating any other active\n> server processes\n> 2021-11-15 19:27:56.485 IST [15645] LOG: all server processes\n> terminated; reinitializing\n>\n> Here it fails because of a long error message \"\"invalid input syntax\n> for type smallint:\n\nGood catch!\n\n> \\\"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabc....\"\n> because we try to insert varchar type data into smallint type. Maybe\n> we should trim the error message in this case.\n\nRight. I've fixed this issue and attached an updated patch.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 16 Nov 2021 15:31:18 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
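The fix discussed above trims the error message instead of Assert-ing on its length. A minimal sketch of that idea, in isolation from the actual patch (the constant value and helper name here are illustrative stand-ins, not the real pgstat code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative value; the real constant is defined in the patch. */
#define PGSTAT_SUBWORKERERROR_MSGLEN 256

/*
 * Copy errmsg into the fixed-size stats message buffer, truncating it
 * so that strlen(dst) < PGSTAT_SUBWORKERERROR_MSGLEN always holds --
 * the invariant the failed Assert was checking.
 */
static void
copy_subworker_errmsg(char dst[PGSTAT_SUBWORKERERROR_MSGLEN], const char *errmsg)
{
    strncpy(dst, errmsg, PGSTAT_SUBWORKERERROR_MSGLEN - 1);
    dst[PGSTAT_SUBWORKERERROR_MSGLEN - 1] = '\0';
}
```

With this shape, even a multi-kilobyte "invalid input syntax" message (as in the varchar-into-smallint repro) fits the fixed-size stats message.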
{
"msg_contents": "On Tues, Nov 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Right. I've fixed this issue and attached an updated patch.\r\n\r\nHi,\r\n\r\nThanks for updating the patch.\r\nHere are a few comments.\r\n\r\n1)\r\n\r\n+ <function>pg_stat_reset_subscription_worker</function> ( <parameter>subid</parameter> <type>oid</type>, <optional> <parameter>relid</parameter> <type>oid</type> </optional> )\r\n\r\nIt seems we should put '<optional>' before the comma (',').\r\n\r\n\r\n2)\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>subrelid</structfield> <type>oid</type>\r\n+ </para>\r\n+ <para>\r\n+ OID of the relation that the worker is synchronizing; null for the\r\n+ main apply worker\r\n+ </para></entry>\r\n+ </row>\r\n\r\nIs the 'subrelid' only used for distinguishing the worker type? If so, wouldn't it\r\nbe clearer to have a string value here? I recall a previous version of the patch had a\r\nfailure_source column, but it was removed. Maybe I missed something.\r\n\r\n\r\n3)\r\n+extern void pgstat_reset_subworker_stats(Oid subid, Oid subrelid, bool allstats);\r\n\r\nI didn't find the definition of this function; maybe we can remove this declaration?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 17 Nov 2021 03:43:00 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 9:13 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tues, Nov 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> 2)\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subrelid</structfield> <type>oid</type>\n> + </para>\n> + <para>\n> + OID of the relation that the worker is synchronizing; null for the\n> + main apply worker\n> + </para></entry>\n> + </row>\n>\n> Is the 'subrelid' only used for distinguishing the worker type?\n>\n\nI think it will additionally tell which table a sync worker is for as well.\n\n> If so, wouldn't it\n> be clearer to have a string value here? I recall a previous version of the patch had a\n> failure_source column, but it was removed. Maybe I missed something.\n>\n\nI also don't remember the reason for this but would like to know.\n\nI am also reviewing the latest version of the patch and will share\ncomments/questions sometime today.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Nov 2021 10:28:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 1:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 9:13 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Nov 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > 2)\n> > + <row>\n> > + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>subrelid</structfield> <type>oid</type>\n> > + </para>\n> > + <para>\n> > + OID of the relation that the worker is synchronizing; null for the\n> > + main apply worker\n> > + </para></entry>\n> > + </row>\n> >\n> > Is the 'subrelid' only used for distinguishing the worker type?\n> >\n>\n> I think it will additionally tell which table a sync worker is for as well.\n\nRight.\n\n>\n> > If so, wouldn't it\n> > be clearer to have a string value here? I recall a previous version of the patch had a\n> > failure_source column, but it was removed. Maybe I missed something.\n> >\n>\n> I also don't remember the reason for this but would like to know.\n\nI felt it's a bit redundant. A non-NULL subrelid already means\nthat it’s an entry for a tablesync worker. If users want a value\nlike “apply” or “tablesync” for each entry, they can derive it from the\nsubrelid value.\n\n> I am also reviewing the latest version of the patch and will share\n> comments/questions sometime today.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 17 Nov 2021 14:56:03 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
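The point above — that the worker type is already derivable from whether subrelid is NULL — can be sketched as follows (InvalidOid standing in for SQL NULL; the helper name is made up for illustration, not part of the patch):

```c
#include <assert.h>
#include <string.h>

typedef unsigned int Oid;       /* as in PostgreSQL's c.h */
#define InvalidOid ((Oid) 0)

/*
 * Derive a human-readable worker type from subrelid: NULL (modelled
 * here as InvalidOid) means the main apply worker, any valid OID means
 * a tablesync worker for that relation.
 */
static const char *
subworker_type(Oid subrelid)
{
    return (subrelid == InvalidOid) ? "apply" : "tablesync";
}
```

This is why a separate string column would carry no extra information.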
{
"msg_contents": "On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 11:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > I've attached an updated patch that incorporates all comments I got so\n> > > > > far. Please review it.\n> > > > >\n> > > >\n> > > > Thanks for the updated patch.\n> > > > A few minor comments:\n> > > >\n> > > > doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > > >\n> > > > (1) tab in doc updates\n> > > >\n> > > > There's a tab before \"Otherwise,\":\n> > > >\n> > > > + copy of the relation with <parameter>relid</parameter>.\n> > > > Otherwise,\n> > >\n> > > Fixed.\n> > >\n> > > >\n> > > > src/backend/utils/adt/pgstatfuncs.c\n> > > >\n> > > > (2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\n> > > > seems a bit long and I expected it to be multi-line (did you run\n> > > > pg_indent?)\n> > >\n> > > I ran pg_indent on pgstatfuncs.c but it didn't become a multi-line comment.\n> > >\n> > > >\n> > > > src/include/pgstat.h\n> > > >\n> > > > (3) Remove PgStat_StatSubWorkerEntry.dbid?\n> > > >\n> > > > The \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\n> > > > seem to be used, so I think it should be removed.\n> > > > (I could remove it and everything builds OK and tests pass).\n> > > >\n> > >\n> > > Fixed.\n> > >\n> > > Thank you for the comments! 
I've updated an updated version patch.\n> >\n> > Thanks for the updated patch.\n> > I found one issue:\n> > This Assert can fail in few cases:\n> > +void\n> > +pgstat_report_subworker_error(Oid subid, Oid subrelid, Oid relid,\n> > +\n> > LogicalRepMsgType command, TransactionId xid,\n> > + const char *errmsg)\n> > +{\n> > + PgStat_MsgSubWorkerError msg;\n> > + int len;\n> > +\n> > + Assert(strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN);\n> > + len = offsetof(PgStat_MsgSubWorkerError, m_message[0]) +\n> > strlen(errmsg) + 1;\n> > +\n> >\n> > I could reproduce the problem with the following scenario:\n> > Publisher:\n> > create table t1 (c1 varchar);\n> > create publication pub1 for table t1;\n> > insert into t1 values(repeat('abcd', 5000));\n> >\n> > Subscriber:\n> > create table t1(c1 smallint);\n> > create subscription sub1 connection 'dbname=postgres port=5432'\n> > publication pub1 with ( two_phase = true);\n> > postgres=# select * from pg_stat_subscription_workers;\n> > WARNING: terminating connection because of crash of another server process\n> > DETAIL: The postmaster has commanded this server process to roll back\n> > the current transaction and exit, because another server process\n> > exited abnormally and possibly corrupted shared memory.\n> > HINT: In a moment you should be able to reconnect to the database and\n> > repeat your command.\n> > server closed the connection unexpectedly\n> > This probably means the server terminated abnormally\n> > before or while processing the request.\n> > The connection to the server was lost. 
Attempting reset: Failed.\n> >\n> > Subscriber logs:\n> > 2021-11-15 19:27:56.380 IST [15685] LOG: logical replication apply\n> > worker for subscription \"sub1\" has started\n> > 2021-11-15 19:27:56.384 IST [15687] LOG: logical replication table\n> > synchronization worker for subscription \"sub1\", table \"t1\" has started\n> > TRAP: FailedAssertion(\"strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN\",\n> > File: \"pgstat.c\", Line: 1946, PID: 15687)\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (ExceptionalCondition+0xd0)[0x55a18f3c727f]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (pgstat_report_subworker_error+0x7a)[0x55a18f126417]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (ApplyWorkerMain+0x493)[0x55a18f176611]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (StartBackgroundWorker+0x23c)[0x55a18f11f7e2]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (+0x54efc0)[0x55a18f134fc0]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (+0x54f3af)[0x55a18f1353af]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (+0x54e338)[0x55a18f134338]\n> > /lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7feef84371f0]\n> > /lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7feef81e3ac7]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (+0x5498c2)[0x55a18f12f8c2]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (PostmasterMain+0x134c)[0x55a18f12f1dd]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (+0x43c3d4)[0x55a18f0223d4]\n> > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7feef80fd565]\n> > postgres: logical replication worker for subscription 16387 sync 16384\n> > (_start+0x2e)[0x55a18ecaf4fe]\n> > 2021-11-15 19:27:56.483 IST [15645] LOG: 
background worker \"logical\n> > replication worker\" (PID 15687) was terminated by signal 6: Aborted\n> > 2021-11-15 19:27:56.483 IST [15645] LOG: terminating any other active\n> > server processes\n> > 2021-11-15 19:27:56.485 IST [15645] LOG: all server processes\n> > terminated; reinitializing\n> >\n> > Here it fails because of a long error message \"\"invalid input syntax\n> > for type smallint:\n>\n> Good catch!\n>\n> > \\\"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabc....\"\n> > because we try to insert varchar type data into smallint type. Maybe\n> > we should trim the error message in this case.\n>\n> Right. I've fixed this issue and attached an updated patch.\n\nThanks for the updated patch. 
The issue is fixed in the patch provided.\nI found that in one of the scenarios the statistics are getting lost:\nTest steps:\nStep 1:\nSetup Publisher (create 100 publications pub1...pub100 for t1...t100) like below:\n===============================================\ncreate table t1(c1 int);\ncreate publication pub1 for table t1;\ninsert into t1 values(10);\ninsert into t1 values(10);\ncreate table t2(c1 int);\ncreate publication pub2 for table t2;\ninsert into t2 values(10);\ninsert into t2 values(10);\n....\n\nScript can be generated using:\nwhile [ $a -lt 100 ]\ndo\n a=`expr $a + 1`\n echo \"./psql -d postgres -p 5432 -c \\\"create table t$a(c1\nint);\\\"\" >> publisher.sh\n echo \"./psql -d postgres -p 5432 -c \\\"create publication pub$a\nfor table t$a;\\\"\" >> publisher.sh\n echo \"./psql -d postgres -p 5432 -c \\\"insert into t$a\nvalues(10);\\\"\" >> publisher.sh\n echo \"./psql -d postgres -p 5432 -c \\\"insert into t$a\nvalues(10);\\\"\" >> publisher.sh\ndone\n\nStep 2:\nSetup Subscriber (create 100 subscriptions):\n===============================================\ncreate table t1(c1 int primary key);\ncreate subscription sub1 connection 'dbname=postgres port=5432'\npublication pub1;\ncreate table t2(c1 int primary key);\ncreate subscription sub2 connection 'dbname=postgres port=5432'\npublication pub2;\n....\n\nScript can be generated using:\nwhile [ $a -lt 100 ]\ndo\n a=`expr $a + 1`\n echo \"./psql -d postgres -p 5433 -c \\\"create table t$a(c1 int\nprimary key);\\\"\" >> subscriber.sh\n echo \"./psql -d postgres -p 5433 -c \\\"create subscription\nsub$a connection 'dbname=postgres port=5432' publication pub$a;\\\"\" >>\nsubscriber.sh\ndone\n\nStep 3:\npostgres=# select * from pg_stat_subscription_workers order by subid;\nsubid | subname | subrelid | relid | command | xid | error_count |\nerror_message | first_error_time | 
last_error_time\n-------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------+----------------------------------+----------------------------------\n16389 | sub1 | 16384 | 16384 | | | 17 | duplicate key value violates\nunique constraint \"t1_pkey\" | 2021-11-17 12:01:46.141086+05:30 |\n2021-11-17 12:03:13.175698+05:30\n16395 | sub2 | 16390 | 16390 | | | 16 | duplicate key value violates\nunique constraint \"t2_pkey\" | 2021-11-17 12:01:51.337055+05:30 |\n2021-11-17 12:03:15.512249+05:30\n16401 | sub3 | 16396 | 16396 | | | 16 | duplicate key value violates\nunique constraint \"t3_pkey\" | 2021-11-17 12:01:51.352157+05:30 |\n2021-11-17 12:03:15.802225+05:30\n16407 | sub4 | 16402 | 16402 | | | 16 | duplicate key value violates\nunique constraint \"t4_pkey\" | 2021-11-17 12:01:51.390638+05:30 |\n2021-11-17 12:03:14.709496+05:30\n16413 | sub5 | 16408 | 16408 | | | 16 | duplicate key value violates\nunique constraint \"t5_pkey\" | 2021-11-17 12:01:51.418825+05:30 |\n2021-11-17 12:03:15.257235+05:30\n\nStep 4:\nThen restart the publisher\n\nStep 5:\npostgres=# select * from pg_stat_subscription_workers order by subid;\nsubid | subname | subrelid | relid | command | xid | error_count |\nerror_message |\nfirst_error_time | last_error_time\n-------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------------------------------------------------------------------------------------+-----\n-----------------------------+----------------------------------\n16389 | sub1 | 16384 | 16384 | | | 1 | could not create replication\nslot \"pg_16389_sync_16384_7031422794938304519\": FATAL: terminating\nconnection due to administrator command+| 2021\n-11-17 12:03:28.201247+05:30 | 2021-11-17 12:03:28.201247+05:30\n| | | | | | | server closed the connection unexpectedly +|\n|\n| | | | | | | This probably means the server terminated abnormally +|\n|\n| | | | | | | 
before or while proce |\n|\n16395 | sub2 | 16390 | 16390 | | | 18 | duplicate key value violates\nunique constraint \"t2_pkey\" | 2021\n-11-17 12:01:51.337055+05:30 | 2021-11-17 12:03:23.832585+05:30\n16401 | sub3 | 16396 | 16396 | | | 18 | duplicate key value violates\nunique constraint \"t3_pkey\" | 2021\n-11-17 12:01:51.352157+05:30 | 2021-11-17 12:03:26.567873+05:30\n16407 | sub4 | 16402 | 16402 | | | 1 | could not create replication\nslot \"pg_16407_sync_16402_7031422794938304519\": FATAL: terminating\nconnection due to administrator command+| 2021\n-11-17 12:03:28.196958+05:30 | 2021-11-17 12:03:28.196958+05:30\n| | | | | | | server closed the connection unexpectedly +|\n|\n| | | | | | | This probably means the server terminated abnormally +|\n|\n| | | | | | | before or while proce |\n|\n16413 | sub5 | 16408 | 16408 | | | 18 | duplicate key value violates\nunique constraint \"t5_pkey\" | 2021\n-11-17 12:01:51.418825+05:30 | 2021-11-17 12:03:25.595697+05:30\n\nStep 6:\npostgres=# select * from pg_stat_subscription_workers order by subid;\nsubid | subname | subrelid | relid | command | xid | error_count |\nerror_message | first_error_time | last_error_time\n-------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------+----------------------------------+----------------------------------\n16389 | sub1 | 16384 | 16384 | | | 1 | duplicate key value violates\nunique constraint \"t1_pkey\" | 2021-11-17 12:03:33.346514+05:30 |\n2021-11-17 12:03:33.346514+05:30\n16395 | sub2 | 16390 | 16390 | | | 19 | duplicate key value violates\nunique constraint \"t2_pkey\" | 2021-11-17 12:01:51.337055+05:30 |\n2021-11-17 12:03:33.437505+05:30\n16401 | sub3 | 16396 | 16396 | | | 19 | duplicate key value violates\nunique constraint \"t3_pkey\" | 2021-11-17 12:01:51.352157+05:30 |\n2021-11-17 12:03:33.482954+05:30\n16407 | sub4 | 16402 | 16402 | | | 1 | duplicate key value violates\nunique constraint \"t4_pkey\" | 
2021-11-17 12:03:33.327489+05:30 |\n2021-11-17 12:03:33.327489+05:30\n16413 | sub5 | 16408 | 16408 | | | 19 | duplicate key value violates\nunique constraint \"t5_pkey\" | 2021-11-17 12:01:51.418825+05:30 |\n2021-11-17 12:03:33.374522+05:30\n\nWe can see that the sub1 and sub4 statistics are lost; the old error_count\nvalue is gone. I'm not sure if this behavior is ok or not. Thoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 17 Nov 2021 12:22:05 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
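The counter behavior observed above is consistent with stats that only track the most recent error key. A minimal model of that rule (assumed for illustration — not the actual PgStat structs or field types): error_count accumulates only while the incoming error matches the stored xid/command/relid/message, and any different error restarts the entry.

```c
#include <assert.h>
#include <string.h>

/* Toy stand-in for the per-worker error statistics entry. */
typedef struct SubWorkerErrStats
{
    unsigned int xid;
    int          command;
    unsigned int relid;
    char         message[256];
    int          error_count;
} SubWorkerErrStats;

static void
report_error(SubWorkerErrStats *s, unsigned int xid, int command,
             unsigned int relid, const char *message)
{
    if (s->error_count > 0 &&
        s->xid == xid && s->command == command && s->relid == relid &&
        strcmp(s->message, message) == 0)
    {
        s->error_count++;            /* exact same error again: accumulate */
    }
    else
    {
        s->xid = xid;                /* a different error: start over */
        s->command = command;
        s->relid = relid;
        strncpy(s->message, message, sizeof(s->message) - 1);
        s->message[sizeof(s->message) - 1] = '\0';
        s->error_count = 1;
    }
}
```

Under this model, the one-off "could not create replication slot" error caused by the publisher restart wipes the accumulated duplicate-key count for sub1 and sub4, which is exactly what the step-5/step-6 outputs show.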
{
"msg_contents": "On Wed, Nov 17, 2021 at 3:52 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 11:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > I've attached an updated patch that incorporates all comments I got so\n> > > > > > far. Please review it.\n> > > > > >\n> > > > >\n> > > > > Thanks for the updated patch.\n> > > > > A few minor comments:\n> > > > >\n> > > > > doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > > > >\n> > > > > (1) tab in doc updates\n> > > > >\n> > > > > There's a tab before \"Otherwise,\":\n> > > > >\n> > > > > + copy of the relation with <parameter>relid</parameter>.\n> > > > > Otherwise,\n> > > >\n> > > > Fixed.\n> > > >\n> > > > >\n> > > > > src/backend/utils/adt/pgstatfuncs.c\n> > > > >\n> > > > > (2) The function comment for \"pg_stat_reset_subscription_worker_sub\"\n> > > > > seems a bit long and I expected it to be multi-line (did you run\n> > > > > pg_indent?)\n> > > >\n> > > > I ran pg_indent on pgstatfuncs.c but it didn't become a multi-line comment.\n> > > >\n> > > > >\n> > > > > src/include/pgstat.h\n> > > > >\n> > > > > (3) Remove PgStat_StatSubWorkerEntry.dbid?\n> > > > >\n> > > > > The \"dbid\" member of the new PgStat_StatSubWorkerEntry struct doesn't\n> > > > > seem to be used, so I think it should be removed.\n> > > > > (I could remove it and everything builds OK and tests pass).\n> > > > >\n> > > >\n> > > > Fixed.\n> > > >\n> > > > Thank you for the comments! 
I've updated an updated version patch.\n> > >\n> > > Thanks for the updated patch.\n> > > I found one issue:\n> > > This Assert can fail in few cases:\n> > > +void\n> > > +pgstat_report_subworker_error(Oid subid, Oid subrelid, Oid relid,\n> > > +\n> > > LogicalRepMsgType command, TransactionId xid,\n> > > + const char *errmsg)\n> > > +{\n> > > + PgStat_MsgSubWorkerError msg;\n> > > + int len;\n> > > +\n> > > + Assert(strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN);\n> > > + len = offsetof(PgStat_MsgSubWorkerError, m_message[0]) +\n> > > strlen(errmsg) + 1;\n> > > +\n> > >\n> > > I could reproduce the problem with the following scenario:\n> > > Publisher:\n> > > create table t1 (c1 varchar);\n> > > create publication pub1 for table t1;\n> > > insert into t1 values(repeat('abcd', 5000));\n> > >\n> > > Subscriber:\n> > > create table t1(c1 smallint);\n> > > create subscription sub1 connection 'dbname=postgres port=5432'\n> > > publication pub1 with ( two_phase = true);\n> > > postgres=# select * from pg_stat_subscription_workers;\n> > > WARNING: terminating connection because of crash of another server process\n> > > DETAIL: The postmaster has commanded this server process to roll back\n> > > the current transaction and exit, because another server process\n> > > exited abnormally and possibly corrupted shared memory.\n> > > HINT: In a moment you should be able to reconnect to the database and\n> > > repeat your command.\n> > > server closed the connection unexpectedly\n> > > This probably means the server terminated abnormally\n> > > before or while processing the request.\n> > > The connection to the server was lost. 
Attempting reset: Failed.\n> > >\n> > > Subscriber logs:\n> > > 2021-11-15 19:27:56.380 IST [15685] LOG: logical replication apply\n> > > worker for subscription \"sub1\" has started\n> > > 2021-11-15 19:27:56.384 IST [15687] LOG: logical replication table\n> > > synchronization worker for subscription \"sub1\", table \"t1\" has started\n> > > TRAP: FailedAssertion(\"strlen(errmsg) < PGSTAT_SUBWORKERERROR_MSGLEN\",\n> > > File: \"pgstat.c\", Line: 1946, PID: 15687)\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (ExceptionalCondition+0xd0)[0x55a18f3c727f]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (pgstat_report_subworker_error+0x7a)[0x55a18f126417]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (ApplyWorkerMain+0x493)[0x55a18f176611]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (StartBackgroundWorker+0x23c)[0x55a18f11f7e2]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (+0x54efc0)[0x55a18f134fc0]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (+0x54f3af)[0x55a18f1353af]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (+0x54e338)[0x55a18f134338]\n> > > /lib/x86_64-linux-gnu/libpthread.so.0(+0x141f0)[0x7feef84371f0]\n> > > /lib/x86_64-linux-gnu/libc.so.6(__select+0x57)[0x7feef81e3ac7]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (+0x5498c2)[0x55a18f12f8c2]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (PostmasterMain+0x134c)[0x55a18f12f1dd]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > (+0x43c3d4)[0x55a18f0223d4]\n> > > /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xd5)[0x7feef80fd565]\n> > > postgres: logical replication worker for subscription 16387 sync 16384\n> > > 
(_start+0x2e)[0x55a18ecaf4fe]\n> > > 2021-11-15 19:27:56.483 IST [15645] LOG: background worker \"logical\n> > > replication worker\" (PID 15687) was terminated by signal 6: Aborted\n> > > 2021-11-15 19:27:56.483 IST [15645] LOG: terminating any other active\n> > > server processes\n> > > 2021-11-15 19:27:56.485 IST [15645] LOG: all server processes\n> > > terminated; reinitializing\n> > >\n> > > Here it fails because of a long error message \"\"invalid input syntax\n> > > for type smallint:\n> >\n> > Good catch!\n> >\n> > > \\\"abcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabcdabc....\"\n> > > because we try to insert varchar type data into smallint type. Maybe\n> > > we should trim the error message in this case.\n> >\n> > Right. I've fixed this issue and attached an updated patch.\n>\n> Thanks for the updated patch. 
The issue is fixed in the patch provided.\n> I found that in one of the scenarios the statistics is getting lost:\n\nThank you for the tests!!\n\n>\n> Step 3:\n> postgres=# select * from pg_stat_subscription_workers order by subid;\n> subid | subname | subrelid | relid | command | xid | error_count |\n> error_message | first_error_time | last_error_time\n> -------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------+----------------------------------+----------------------------------\n> 16389 | sub1 | 16384 | 16384 | | | 17 | duplicate key value violates\n> unique constraint \"t1_pkey\" | 2021-11-17 12:01:46.141086+05:30 |\n> 2021-11-17 12:03:13.175698+05:30\n> 16395 | sub2 | 16390 | 16390 | | | 16 | duplicate key value violates\n> unique constraint \"t2_pkey\" | 2021-11-17 12:01:51.337055+05:30 |\n> 2021-11-17 12:03:15.512249+05:30\n> 16401 | sub3 | 16396 | 16396 | | | 16 | duplicate key value violates\n> unique constraint \"t3_pkey\" | 2021-11-17 12:01:51.352157+05:30 |\n> 2021-11-17 12:03:15.802225+05:30\n> 16407 | sub4 | 16402 | 16402 | | | 16 | duplicate key value violates\n> unique constraint \"t4_pkey\" | 2021-11-17 12:01:51.390638+05:30 |\n> 2021-11-17 12:03:14.709496+05:30\n> 16413 | sub5 | 16408 | 16408 | | | 16 | duplicate key value violates\n> unique constraint \"t5_pkey\" | 2021-11-17 12:01:51.418825+05:30 |\n> 2021-11-17 12:03:15.257235+05:30\n>\n> Step 4:\n> Then restart the publisher\n>\n> Step 5:\n> postgres=# select * from pg_stat_subscription_workers order by subid;\n> subid | subname | subrelid | relid | command | xid | error_count |\n> error_message |\n> first_error_time | last_error_time\n> -------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------------------------------------------------------------------------------------+-----\n> -----------------------------+----------------------------------\n> 16389 | 
sub1 | 16384 | 16384 | | | 1 | could not create replication\n> slot \"pg_16389_sync_16384_7031422794938304519\": FATAL: terminating\n> connection due to administrator command+| 2021\n> -11-17 12:03:28.201247+05:30 | 2021-11-17 12:03:28.201247+05:30\n> | | | | | | | server closed the connection unexpectedly +|\n> |\n> | | | | | | | This probably means the server terminated abnormally +|\n> |\n> | | | | | | | before or while proce |\n> |\n> 16395 | sub2 | 16390 | 16390 | | | 18 | duplicate key value violates\n> unique constraint \"t2_pkey\" | 2021\n> -11-17 12:01:51.337055+05:30 | 2021-11-17 12:03:23.832585+05:30\n> 16401 | sub3 | 16396 | 16396 | | | 18 | duplicate key value violates\n> unique constraint \"t3_pkey\" | 2021\n> -11-17 12:01:51.352157+05:30 | 2021-11-17 12:03:26.567873+05:30\n> 16407 | sub4 | 16402 | 16402 | | | 1 | could not create replication\n> slot \"pg_16407_sync_16402_7031422794938304519\": FATAL: terminating\n> connection due to administrator command+| 2021\n> -11-17 12:03:28.196958+05:30 | 2021-11-17 12:03:28.196958+05:30\n> | | | | | | | server closed the connection unexpectedly +|\n> |\n> | | | | | | | This probably means the server terminated abnormally +|\n> |\n> | | | | | | | before or while proce |\n> |\n> 16413 | sub5 | 16408 | 16408 | | | 18 | duplicate key value violates\n> unique constraint \"t5_pkey\" | 2021\n> -11-17 12:01:51.418825+05:30 | 2021-11-17 12:03:25.595697+05:30\n>\n> Step 6:\n> postgres=# select * from pg_stat_subscription_workers order by subid;\n> subid | subname | subrelid | relid | command | xid | error_count |\n> error_message | first_error_time | last_error_time\n> -------+---------+----------+-------+---------+-----+-------------+------------------------------------------------------------+----------------------------------+----------------------------------\n> 16389 | sub1 | 16384 | 16384 | | | 1 | duplicate key value violates\n> unique constraint \"t1_pkey\" | 2021-11-17 12:03:33.346514+05:30 |\n> 2021-11-17 
12:03:33.346514+05:30\n> 16395 | sub2 | 16390 | 16390 | | | 19 | duplicate key value violates\n> unique constraint \"t2_pkey\" | 2021-11-17 12:01:51.337055+05:30 |\n> 2021-11-17 12:03:33.437505+05:30\n> 16401 | sub3 | 16396 | 16396 | | | 19 | duplicate key value violates\n> unique constraint \"t3_pkey\" | 2021-11-17 12:01:51.352157+05:30 |\n> 2021-11-17 12:03:33.482954+05:30\n> 16407 | sub4 | 16402 | 16402 | | | 1 | duplicate key value violates\n> unique constraint \"t4_pkey\" | 2021-11-17 12:03:33.327489+05:30 |\n> 2021-11-17 12:03:33.327489+05:30\n> 16413 | sub5 | 16408 | 16408 | | | 19 | duplicate key value violates\n> unique constraint \"t5_pkey\" | 2021-11-17 12:01:51.418825+05:30 |\n> 2021-11-17 12:03:33.374522+05:30\n>\n> We can see that sub1 and sub4 statistics are lost, old error_count\n> value is lost. I'm not sure if this behavior is ok or not. Thoughts?\n>\n\nLooking at the outputs of steps 3, 5, and 6, the error messages are\ndifferent. In the current design, error_count is incremented only when\nthe exact same error (i.e., xid, command, relid, and error message are the\nsame) occurs. Since some different kinds of errors happened on the\nsubscription, the error_count was reset. Similarly, the\nfirst_error_time value was also reset.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:54:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Nov 15, 2021 at 11:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n>\n> Right. I've fixed this issue and attached an updated patch.\n\nFew comments:\n1) should we set subwentry to NULL to handle !create && !found case\nor we could return NULL similar to the earlier function.\n+static PgStat_StatSubWorkerEntry *\n+pgstat_get_subworker_entry(PgStat_StatDBEntry *dbentry, Oid subid,\nOid subrelid,\n+ bool create)\n+{\n+ PgStat_StatSubWorkerEntry *subwentry;\n+ PgStat_StatSubWorkerKey key;\n+ bool found;\n+ HASHACTION action = (create ? HASH_ENTER : HASH_FIND);\n+\n+ key.subid = subid;\n+ key.subrelid = subrelid;\n+ subwentry = (PgStat_StatSubWorkerEntry *)\nhash_search(dbentry->subworkers,\n+\n (void *) &key,\n+\n action, &found);\n+\n+ /* If not found, initialize the new one */\n+ if (create && !found)\n\n2) Should we keep the line width to 80 chars:\n+/* ----------\n+ * PgStat_MsgSubWorkerError Sent by the apply worker or\nthe table sync worker to\n+ * report\nthe error occurred during logical replication.\n+ * ----------\n+ */\n+#define PGSTAT_SUBWORKERERROR_MSGLEN 256\n+typedef struct PgStat_MsgSubWorkerError\n+{\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:15:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Right. I've fixed this issue and attached an updated patch.\n>\n\nFew comments/questions:\n=====================\n1.\n+ <para>\n+ The <structname>pg_stat_subscription_workers</structname> view will contain\n+ one row per subscription error reported by workers applying logical\n+ replication changes and workers handling the initial data copy of the\n+ subscribed tables. The statistics entry is removed when the subscription\n+ the worker is running on is removed.\n+ </para>\n\nThe last line of this paragraph is not clear to me. First \"the\" before\n\"worker\" in the following part of the sentence seems unnecessary\n\"..when the subscription the worker..\". Then the part \"running on is\nremoved\" is unclear because it could also mean that we remove the\nentry when a subscription is disabled. Can we rephrase it to: \"The\nstatistics entry is removed when the corresponding subscription is\ndropped\"?\n\n2.\nBetween v20 and v23 versions of patch the size of hash table\nPGSTAT_SUBWORKER_HASH_SIZE is increased from 32 to 256. I might have\nmissed the comment which lead to this change, can you point me to the\nsame or if you changed it for some other reason, can you let me know\nthe same?\n\n3.\n+\n+ /*\n+ * Repeat for subscription workers. Similarly, we needn't bother\n+ * in the common case where no function stats are being collected.\n+ */\n\n/function/subscription workers'\n\n4.\n+ <para>\n+ Name of command being applied when the error occurred. This field\n+ is always NULL if the error was reported during the initial data\n+ copy.\n+ </para></entry>\n+ </row>\n+\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>xid</structfield> <type>xid</type>\n+ </para>\n+ <para>\n+ Transaction ID of the publisher node being applied when the error\n+ occurred. 
This field is always NULL if the error was reported\n+ during the initial data copy.\n+ </para></entry>\n\nIs it important to stress on 'always' in the above two descriptions?\n\n5.\nThe current description of first/last_error_time seems slightly\nmisleading as one can interpret that these are about different errors.\nLet's slightly change the description of first/last_error_time as\nfollows or something on those lines:\n\n</para>\n+ <para>\n+ Time at which the first error occurred\n+ </para></entry>\n+ </row>\n\nFirst time at which this error occurred\n\n<structfield>last_error_time</structfield> <type>timestamp with time zone</type>\n+ </para>\n+ <para>\n+ Time at which the last error occurred\n\nLast time at which this error occurred. This will be the same as\nfirst_error_time except when the same error occurred more than once\nconsecutively.\n\n6.\n+ </indexterm>\n+ <function>pg_stat_reset_subscription_worker</function> (\n<parameter>subid</parameter> <type>oid</type>, <optional>\n<parameter>relid</parameter> <type>oid</type> </optional> )\n+ <returnvalue>void</returnvalue>\n+ </para>\n+ <para>\n+ Resets the statistics of a single subscription worker running on the\n+ subscription with <parameter>subid</parameter> shown in the\n+ <structname>pg_stat_subscription_worker</structname> view. If the\n+ argument <parameter>relid</parameter> is not <literal>NULL</literal>,\n+ resets statistics of the subscription worker handling the initial data\n+ copy of the relation with <parameter>relid</parameter>. Otherwise,\n+ resets the subscription worker statistics of the main apply worker.\n+ If the argument <parameter>relid</parameter> is omitted, resets the\n+ statistics of all subscription workers running on the subscription\n+ with <parameter>subid</parameter>.\n+ </para>\n\nThe first line of this description seems to indicate that we can only\nreset the stats of a single worker but the later part indicates that\nwe can reset stats of all subscription workers. 
Can we change the\nfirst line as: \"Resets the statistics of subscription workers running\non the subscription with <parameter>subid</parameter> shown in the\n<structname>pg_stat_subscription_worker</structname> view.\".\n\n7.\npgstat_vacuum_stat()\n{\n..\n+ pgstat_setheader(&spmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\n+ spmsg.m_databaseid = MyDatabaseId;\n+ spmsg.m_nentries = 0;\n..\n}\n\nDo we really need to set the header here? It seems to be getting set\nin pgstat_send_subscription_purge() while sending this message.\n\n8.\npgstat_vacuum_stat()\n{\n..\n+\n+ if (hash_search(htab, (void *) &(subwentry->key.subid), HASH_FIND, NULL)\n+ != NULL)\n+ continue;\n+\n+ /* This subscription is dead, add the subid to the message */\n+ spmsg.m_subids[spmsg.m_nentries++] = subwentry->key.subid;\n..\n}\n\nI think it is better to use a separate variable here for subid as we\nare using for funcid and tableid. That will make this part of the code\neasier to follow and look consistent.\n\n9.\n+/* ----------\n+ * PgStat_MsgSubWorkerError Sent by the apply worker or the table\nsync worker to\n+ * report the error occurred during logical replication.\n+ * ----------\n\nIn this comment \"during logical replication\" sounds too generic. Can\nwe instead use \"while processing changes.\" or something like that to\nmake it a bit more specific?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:44:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 4:16 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Few comments:\n> 1) should we set subwentry to NULL to handle !create && !found case\n> or we could return NULL similar to the earlier function.\n>\n\nI think it is good to be consistent with the nearby code in this case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 17 Nov 2021 16:47:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Right. I've fixed this issue and attached an updated patch.\r\n> \r\nHi,\r\n\r\nI have a few comments on the testcases.\r\n\r\n1)\r\n\r\n+my $appname = 'tap_sub';\r\n+$node_subscriber->safe_psql(\r\n+ 'postgres',\r\n+ \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (streaming = off, two_phase = on);\");\r\n+my $appname_streaming = 'tap_sub_streaming';\r\n+$node_subscriber->safe_psql(\r\n+ 'postgres',\r\n+ \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION '$publisher_connstr application_name=$appname_streaming' PUBLICATION tap_pub_streaming WITH (streaming = on, two_phase = on);\");\r\n+\r\n\r\nI think we can remove the 'application_name=$appname', so that the command\r\ncould be shorter. \r\n\r\n2)\r\n+...(streaming = on, two_phase = on);\");\r\nBesides, is there some reason to set two_phase to on? If so,\r\nit might be better to add some comments about it.\r\n\r\n\r\n3)\r\n+CREATE PUBLICATION tap_pub_streaming FOR TABLE test_tab_streaming;\r\n+]);\r\n+\r\n\r\nIt seems there are no tests that use the table test_tab_streaming. I guess this\r\ntable is used to test streaming change errors, maybe we can add some tests for\r\nit?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Thu, 18 Nov 2021 03:52:23 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Right. I've fixed this issue and attached an updated patch.\r\n> \r\n>\r\n\r\nThanks for your patch.\r\n\r\nI read the discussion about stats entries for table sync worker[1], the\r\nstatistics are retained after table sync worker finished its jobs and user can remove\r\nthem via pg_stat_reset_subscription_worker function.\r\n\r\nBut I notice that, if a table sync worker finished its jobs, the error reported by\r\nthis worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoAT42mhcqeB1jPfRL1%2BEUHbZk8MMY_fBgsyZvJeKNpG%2Bw%40mail.gmail.com\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Thu, 18 Nov 2021 08:45:29 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 18, 2021 at 5:45 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Right. I've fixed this issue and attached an updated patch.\n> >\n> >\n>\n> Thanks for your patch.\n>\n> I read the discussion about stats entries for table sync worker[1], the\n> statistics are retained after table sync worker finished its jobs and user can remove\n> them via pg_stat_reset_subscription_worker function.\n>\n> But I notice that, if a table sync worker finished its jobs, the error reported by\n> this worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\n>\n\nYou're right. The condition \"WHERE substate <> 'r') should be removed.\nI'll do that change in the next version patch. Thanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 18 Nov 2021 20:39:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 12:43 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tues, Nov 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Right. I've fixed this issue and attached an updated patch.\n>\n> Hi,\n>\n> Thanks for updating the patch.\n> Here are few comments.\n\nThank you for the comments!\n\n>\n> 1)\n>\n> + <function>pg_stat_reset_subscription_worker</function> ( <parameter>subid</parameter> <type>oid</type>, <optional> <parameter>relid</parameter> <type>oid</type> </optional> )\n>\n> It seems we should put '<optional>' before the comma(',').\n\nWill fix.\n\n>\n>\n> 2)\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subrelid</structfield> <type>oid</type>\n> + </para>\n> + <para>\n> + OID of the relation that the worker is synchronizing; null for the\n> + main apply worker\n> + </para></entry>\n> + </row>\n>\n> Is the 'subrelid' only used for distinguishing the worker type ? If so, would it\n> be clear to have a string value here. I recalled the previous version patch has\n> failure_source column but was removed. Maybe I missed something.\n\nAs Amit mentioned, users can use this to check which table sync worker it is.\n\n>\n>\n> 3)\n> .\n> +extern void pgstat_reset_subworker_stats(Oid subid, Oid subrelid, bool allstats);\n>\n> I didn't find the code of this functions, maybe we can remove this declaration ?\n\nWill remove.\n\nI'll submit an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 18 Nov 2021 22:59:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 7:46 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Nov 15, 2021 at 11:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 15, 2021 at 2:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Nov 15, 2021 at 4:49 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Nov 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> >\n> > Right. I've fixed this issue and attached an updated patch.\n>\n> Few comments:\n\nThank you for the comments!\n\n> 1) should we set subwentry to NULL to handle !create && !found case\n> or we could return NULL similar to the earlier function.\n> +static PgStat_StatSubWorkerEntry *\n> +pgstat_get_subworker_entry(PgStat_StatDBEntry *dbentry, Oid subid,\n> Oid subrelid,\n> + bool create)\n> +{\n> + PgStat_StatSubWorkerEntry *subwentry;\n> + PgStat_StatSubWorkerKey key;\n> + bool found;\n> + HASHACTION action = (create ? HASH_ENTER : HASH_FIND);\n> +\n> + key.subid = subid;\n> + key.subrelid = subrelid;\n> + subwentry = (PgStat_StatSubWorkerEntry *)\n> hash_search(dbentry->subworkers,\n> +\n> (void *) &key,\n> +\n> action, &found);\n> +\n> + /* If not found, initialize the new one */\n> + if (create && !found)\n\nIt's better to return NULL if !create && !found. WIll fix.\n\n>\n> 2) Should we keep the line width to 80 chars:\n> +/* ----------\n> + * PgStat_MsgSubWorkerError Sent by the apply worker or\n> the table sync worker to\n> + * report\n> the error occurred during logical replication.\n> + * ----------\n> + */\n> +#define PGSTAT_SUBWORKERERROR_MSGLEN 256\n> +typedef struct PgStat_MsgSubWorkerError\n> +{\n\nHmm, pg_indent seems not to fix it. Anyway, will fix.\n\nI'll fix an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 18 Nov 2021 22:59:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 16, 2021 at 5:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Right. I've fixed this issue and attached an updated patch.\n>\n\nA couple of comments for the v23 patch:\n\ndoc/src/sgml/monitoring.sgml\n(1) inconsistent decription\nI think that the following description seems inconsistent with the\nprevious description given above it in the patch (i.e. \"One row per\nsubscription worker, showing statistics about errors that occurred on\nthat subscription worker\"):\n\n\"The <structname>pg_stat_subscription_workers</structname> view will\ncontain one row per subscription error reported by workers applying\nlogical replication changes and workers handling the initial data copy\nof the subscribed tables.\"\n\nI think it is inconsistent because it implies there could be multiple\nsubscription error rows for the same worker.\nMaybe the following wording could be used instead, or something similar:\n\n\"The <structname>pg_stat_subscription_workers</structname> view will\ncontain one row per subscription worker on which errors have occurred,\nfor workers applying logical replication changes and workers handling\nthe initial data copy of the subscribed tables.\"\n\n(2) null vs NULL\nThe \"subrelid\" column description uses \"null\" but the \"command\" column\ndescription uses \"NULL\".\nI think \"NULL\" should be used for consistency.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Nov 2021 13:07:05 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 18, 2021 at 5:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 5:45 PM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Right. I've fixed this issue and attached an updated patch.\n> > >\n> > >\n> >\n> > Thanks for your patch.\n> >\n> > I read the discussion about stats entries for table sync worker[1], the\n> > statistics are retained after table sync worker finished its jobs and user can remove\n> > them via pg_stat_reset_subscription_worker function.\n> >\n> > But I notice that, if a table sync worker finished its jobs, the error reported by\n> > this worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\n> >\n>\n> You're right. The condition \"WHERE substate <> 'r') should be removed.\n> I'll do that change in the next version patch. Thanks!\n>\n\nOne more thing you might want to consider for the next version is\nwhether to rename the columns as discussed in the related thread [1]?\nI think we should consider future work and name them accordingly.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KR41bRUuPeNBSGv2%2Bq7ROKukS3myeAUqrZMD8MEwR0DQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Nov 2021 09:21:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 5:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 18, 2021 at 5:45 PM tanghy.fnst@fujitsu.com\n> > <tanghy.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Right. I've fixed this issue and attached an updated patch.\n> > > >\n> > > >\n> > >\n> > > Thanks for your patch.\n> > >\n> > > I read the discussion about stats entries for table sync worker[1], the\n> > > statistics are retained after table sync worker finished its jobs and user can remove\n> > > them via pg_stat_reset_subscription_worker function.\n> > >\n> > > But I notice that, if a table sync worker finished its jobs, the error reported by\n> > > this worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\n> > >\n> >\n> > You're right. The condition \"WHERE substate <> 'r') should be removed.\n> > I'll do that change in the next version patch. Thanks!\n> >\n>\n> One more thing you might want to consider for the next version is\n> whether to rename the columns as discussed in the related thread [1]?\n> I think we should consider future work and name them accordingly.\n>\n> [1] - https://www.postgresql.org/message-id/CAA4eK1KR41bRUuPeNBSGv2%2Bq7ROKukS3myeAUqrZMD8MEwR0DQ%40mail.gmail.com\n\nSince the statistics collector process uses UDP socket, the sequencing\nof the messages is not guaranteed. 
Will there be a problem if\nSubscription is dropped and stats collector receives\nPGSTAT_MTYPE_SUBSCRIPTIONPURGE first and the subscription worker entry\nis removed and then receives PGSTAT_MTYPE_SUBWORKERERROR(this order\ncan happen because of UDP socket). I'm not sure if the Assert will be\na problem in this case. If this scenario is possible we could just\nsilently return in that case.\n\n+static void\n+pgstat_recv_subworker_error(PgStat_MsgSubWorkerError *msg, int len)\n+{\n+ PgStat_StatDBEntry *dbentry;\n+ PgStat_StatSubWorkerEntry *subwentry;\n+\n+ dbentry = pgstat_get_db_entry(msg->m_databaseid, true);\n+\n+ /* Get the subscription worker stats */\n+ subwentry = pgstat_get_subworker_entry(dbentry, msg->m_subid,\n+\n msg->m_subrelid, true);\n+ Assert(subwentry);\n+\n+ /*\n+ * Update only the counter and last error timestamp if we received\n+ * the same error again\n+ */\n\nThoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 19 Nov 2021 11:08:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 4:39 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Since the statistics collector process uses UDP socket, the sequencing\n> of the messages is not guaranteed. Will there be a problem if\n> Subscription is dropped and stats collector receives\n> PGSTAT_MTYPE_SUBSCRIPTIONPURGE first and the subscription worker entry\n> is removed and then receives PGSTAT_MTYPE_SUBWORKERERROR(this order\n> can happen because of UDP socket). I'm not sure if the Assert will be\n> a problem in this case. If this scenario is possible we could just\n> silently return in that case.\n>\n\nGiven that the message sequencing is not guaranteed, it looks like\nthat Assert and the current code after it won't handle that scenario\nwell. Silently returning if subwentry is NULL does seem like the way\nto deal with that possibility.\nDoesn't this possibility of out-of-sequence messaging due to UDP\nsimilarly mean that \"first_error_time\" and \"last_error_time\" may not\nbe currently handled correctly?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Nov 2021 17:32:23 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 11:09 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Nov 18, 2021 at 5:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 18, 2021 at 5:45 PM tanghy.fnst@fujitsu.com\n> > > <tanghy.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > Right. I've fixed this issue and attached an updated patch.\n> > > > >\n> > > > >\n> > > >\n> > > > Thanks for your patch.\n> > > >\n> > > > I read the discussion about stats entries for table sync worker[1], the\n> > > > statistics are retained after table sync worker finished its jobs and user can remove\n> > > > them via pg_stat_reset_subscription_worker function.\n> > > >\n> > > > But I notice that, if a table sync worker finished its jobs, the error reported by\n> > > > this worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\n> > > >\n> > >\n> > > You're right. The condition \"WHERE substate <> 'r') should be removed.\n> > > I'll do that change in the next version patch. Thanks!\n> > >\n> >\n> > One more thing you might want to consider for the next version is\n> > whether to rename the columns as discussed in the related thread [1]?\n> > I think we should consider future work and name them accordingly.\n> >\n> > [1] - https://www.postgresql.org/message-id/CAA4eK1KR41bRUuPeNBSGv2%2Bq7ROKukS3myeAUqrZMD8MEwR0DQ%40mail.gmail.com\n>\n> Since the statistics collector process uses UDP socket, the sequencing\n> of the messages is not guaranteed. 
Will there be a problem if\n> Subscription is dropped and stats collector receives\n> PGSTAT_MTYPE_SUBSCRIPTIONPURGE first and the subscription worker entry\n> is removed and then receives PGSTAT_MTYPE_SUBWORKERERROR(this order\n> can happen because of UDP socket). I'm not sure if the Assert will be\n> a problem in this case.\n>\n\nWhy that Assert will hit? We seem to be always passing 'create' as\ntrue so it should create a new entry. I think a similar situation can\nhappen for functions and it will be probably cleaned in the next\nvacuum cycle.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Nov 2021 12:22:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 11:09 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Fri, Nov 19, 2021 at 9:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 18, 2021 at 5:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Nov 18, 2021 at 5:45 PM tanghy.fnst@fujitsu.com\n> > > > <tanghy.fnst@fujitsu.com> wrote:\n> > > > >\n> > > > > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > Right. I've fixed this issue and attached an updated patch.\n> > > > > >\n> > > > > >\n> > > > >\n> > > > > Thanks for your patch.\n> > > > >\n> > > > > I read the discussion about stats entries for table sync worker[1], the\n> > > > > statistics are retained after table sync worker finished its jobs and user can remove\n> > > > > them via pg_stat_reset_subscription_worker function.\n> > > > >\n> > > > > But I notice that, if a table sync worker finished its jobs, the error reported by\n> > > > > this worker will not be shown in the pg_stat_subscription_workers view. (It seemed caused by this condition: \"WHERE srsubstate <> 'r'\") Is it intentional? I think this may cause a result that users don't know the statistics are still exist, and won't remove the statistics manually. And that is not friendly to users' storage, right?\n> > > > >\n> > > >\n> > > > You're right. The condition \"WHERE substate <> 'r') should be removed.\n> > > > I'll do that change in the next version patch. 
Thanks!\n> > > >\n> > >\n> > > One more thing you might want to consider for the next version is\n> > > whether to rename the columns as discussed in the related thread [1]?\n> > > I think we should consider future work and name them accordingly.\n> > >\n> > > [1] - https://www.postgresql.org/message-id/CAA4eK1KR41bRUuPeNBSGv2%2Bq7ROKukS3myeAUqrZMD8MEwR0DQ%40mail.gmail.com\n> >\n> > Since the statistics collector process uses UDP socket, the sequencing\n> > of the messages is not guaranteed. Will there be a problem if\n> > Subscription is dropped and stats collector receives\n> > PGSTAT_MTYPE_SUBSCRIPTIONPURGE first and the subscription worker entry\n> > is removed and then receives PGSTAT_MTYPE_SUBWORKERERROR(this order\n> > can happen because of UDP socket). I'm not sure if the Assert will be\n> > a problem in this case.\n> >\n>\n> Why that Assert will hit? We seem to be always passing 'create' as\n> true so it should create a new entry. I think a similar situation can\n> happen for functions and it will be probably cleaned in the next\n> vacuum cycle.\n\nSince we are passing true, that Assert will not hit; sorry, I failed to\nnotice that. It will create a new entry as you rightly pointed out.\nSince the cleaning is handled by vacuum and the current code already works\nthat way, I felt no need to make any change.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 19 Nov 2021 12:38:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 5:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Why that Assert will hit? We seem to be always passing 'create' as\n> true so it should create a new entry. I think a similar situation can\n> happen for functions and it will be probably cleaned in the next\n> vacuum cycle.\n>\nOops, I missed that too. So at worst, vacuum will clean it up in the\nout-of-order SUBSCRIPTIONPURGE,SUBWORKERERROR case.\n\nBut I still think the current code may not correctly handle\nfirst_error_time/last_error_time timestamps if out-of-order\nSUBWORKERERROR messages occur, right?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Nov 2021 18:51:54 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 1:22 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 5:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Why that Assert will hit? We seem to be always passing 'create' as\n> > true so it should create a new entry. I think a similar situation can\n> > happen for functions and it will be probably cleaned in the next\n> > vacuum cycle.\n> >\n> Oops, I missed that too. So at worst, vacuum will clean it up in the\n> out-of-order SUBSCRIPTIONPURGE,SUBWORKERERROR case.\n>\n> But I still think the current code may not correctly handle\n> first_error_time/last_error_time timestamps if out-of-order\n> SUBWORKERERROR messages occur, right?\n>\n\nYeah in such a case last_error_time can be shown as a time before\nfirst_error_time but I don't think that will be a big problem, the\nnext message will fix it. I don't see what we can do about it and the\nsame is true for other cases like pg_stat_archiver where the success\nand failure times can be out of order. If we want we can remove one of\nthose times but I don't think this happens frequently enough to be\nconsidered a problem. Anyway, these stats are not considered to be\nupdated with the most latest info.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Nov 2021 14:44:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 8:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> Yeah in such a case last_error_time can be shown as a time before\n> first_error_time but I don't think that will be a big problem, the\n> next message will fix it. I don't see what we can do about it and the\n> same is true for other cases like pg_stat_archiver where the success\n> and failure times can be out of order. If we want we can remove one of\n> those times but I don't think this happens frequently enough to be\n> considered a problem. Anyway, these stats are not considered to be\n> updated with the most latest info.\n>\n\nCouldn't the code block in pgstat_recv_subworker_error() that\nincrements error_count just compare the new msg timestamp against the\nexisting first_error_time and last_error_time and, based on the\nresult, update those if required?\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Nov 2021 20:30:42 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 3:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 8:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Yeah in such a case last_error_time can be shown as a time before\n> > first_error_time but I don't think that will be a big problem, the\n> > next message will fix it. I don't see what we can do about it and the\n> > same is true for other cases like pg_stat_archiver where the success\n> > and failure times can be out of order. If we want we can remove one of\n> > those times but I don't think this happens frequently enough to be\n> > considered a problem. Anyway, these stats are not considered to be\n> > updated with the most latest info.\n> >\n>\n> Couldn't the code block in pgstat_recv_subworker_error() that\n> increments error_count just compare the new msg timestamp against the\n> existing first_error_time and last_error_time and, based on the\n> result, update those if required?\n>\n\nI don't see any problem with that but let's see what Sawada-San has to\nsay about this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Nov 2021 15:39:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 19, 2021 at 7:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 19, 2021 at 3:00 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Fri, Nov 19, 2021 at 8:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Yeah in such a case last_error_time can be shown as a time before\n> > > first_error_time but I don't think that will be a big problem, the\n> > > next message will fix it. I don't see what we can do about it and the\n> > > same is true for other cases like pg_stat_archiver where the success\n> > > and failure times can be out of order. If we want we can remove one of\n> > > those times but I don't think this happens frequently enough to be\n> > > considered a problem. Anyway, these stats are not considered to be\n> > > updated with the most latest info.\n> > >\n> >\n> > Couldn't the code block in pgstat_recv_subworker_error() that\n> > increments error_count just compare the new msg timestamp against the\n> > existing first_error_time and last_error_time and, based on the\n> > result, update those if required?\n> >\n>\n> I don't see any problem with that but let's see what Sawada-San has to\n> say about this?\n\nIMO not sure we should do that. Since the stats collector will not\nlikely to receive the same error report frequently in practice (5 sec\ninterval by default), perhaps this problem will unlikely to happen.\nEven if the same messages are reported frequently enough to cause this\nproblem, the next message will also be reported soon, fixing it soon,\nas Amit mentioned. Also, IIUC once we have the shared memory based\nstats collector, we won’t need to worry about this problem. Given that\nthis kind of problem potentially exists also in other stats views that\nhave timestamp values, I’m not sure it's worth dealing with this\nproblem only in pg_stat_subscription_workers view.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 19 Nov 2021 22:19:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 18, 2021 at 12:52 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Right. I've fixed this issue and attached an updated patch.\n> >\n> Hi,\n>\n> I have few comments for the testcases.\n>\n> 1)\n>\n> +my $appname = 'tap_sub';\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (streaming = off, two_phase = on);\");\n> +my $appname_streaming = 'tap_sub_streaming';\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION '$publisher_connstr application_name=$appname_streaming' PUBLICATION tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> +\n>\n> I think we can remove the 'application_name=$appname', so that the command\n> could be shorter.\n\nBut we wait for the subscription to catch up by using\nwait_for_catchup() with application_name, no?\n\n>\n> 2)\n> +...(streaming = on, two_phase = on);\");\n> Besides, is there some reasons to set two_phase to ? If so,\n> It might be better to add some comments about it.\n>\n\nYes, two_phase = on is required by the tests for skip transaction\npatch. WIll remove it.\n\n>\n> 3)\n> +CREATE PUBLICATION tap_pub_streaming FOR TABLE test_tab_streaming;\n> +]);\n> +\n>\n> It seems there's no tests to use the table test_tab_streaming. I guess this\n> table is used to test streaming change error, maybe we can add some tests for\n> it ?\n\nOops, similarly this is also required by the skip transaction tests.\nWill remove it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 24 Nov 2021 11:20:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 7:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Nov 18, 2021 at 12:52 PM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > Right. I've fixed this issue and attached an updated patch.\n> > >\n> > Hi,\n> >\n> > I have few comments for the testcases.\n> >\n> > 1)\n> >\n> > +my $appname = 'tap_sub';\n> > +$node_subscriber->safe_psql(\n> > + 'postgres',\n> > + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (streaming = off, two_phase = on);\");\n> > +my $appname_streaming = 'tap_sub_streaming';\n> > +$node_subscriber->safe_psql(\n> > + 'postgres',\n> > + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION '$publisher_connstr application_name=$appname_streaming' PUBLICATION tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> > +\n> >\n> > I think we can remove the 'application_name=$appname', so that the command\n> > could be shorter.\n>\n> But we wait for the subscription to catch up by using\n> wait_for_catchup() with application_name, no?\n>\n\nYeah, but you can directly use the subscription name in\nwait_for_catchup because we internally use that as\nfallback_application_name. If application_name is not specified in the\nconnection string as suggested by Hou-San then\nfallback_application_name will be considered. Both ways are okay and I\nsee we use both ways in the tests but it seems there are more places\nwhere we use the method Hou-San is suggesting in subscription tests.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Nov 2021 08:43:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 12:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 7:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Nov 18, 2021 at 12:52 PM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > Right. I've fixed this issue and attached an updated patch.\n> > > >\n> > > Hi,\n> > >\n> > > I have few comments for the testcases.\n> > >\n> > > 1)\n> > >\n> > > +my $appname = 'tap_sub';\n> > > +$node_subscriber->safe_psql(\n> > > + 'postgres',\n> > > + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (streaming = off, two_phase = on);\");\n> > > +my $appname_streaming = 'tap_sub_streaming';\n> > > +$node_subscriber->safe_psql(\n> > > + 'postgres',\n> > > + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION '$publisher_connstr application_name=$appname_streaming' PUBLICATION tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> > > +\n> > >\n> > > I think we can remove the 'application_name=$appname', so that the command\n> > > could be shorter.\n> >\n> > But we wait for the subscription to catch up by using\n> > wait_for_catchup() with application_name, no?\n> >\n>\n> Yeah, but you can directly use the subscription name in\n> wait_for_catchup because we internally use that as\n> fallback_application_name. If application_name is not specified in the\n> connection string as suggested by Hou-San then\n> fallback_application_name will be considered. Both ways are okay and I\n> see we use both ways in the tests but it seems there are more places\n> where we use the method Hou-San is suggesting in subscription tests.\n\nOkay, thanks! I referred to tests that set application_name. ISTM it's\nbetter to unite them so as not to confuse them in future tests.\n\nAnyway, I'll remove it in the next version patch that I'll submit soon.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 24 Nov 2021 17:19:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 1:50 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 12:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Nov 24, 2021 at 7:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Nov 18, 2021 at 12:52 PM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Tuesday, November 16, 2021 2:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > Right. I've fixed this issue and attached an updated patch.\n> > > > >\n> > > > Hi,\n> > > >\n> > > > I have few comments for the testcases.\n> > > >\n> > > > 1)\n> > > >\n> > > > +my $appname = 'tap_sub';\n> > > > +$node_subscriber->safe_psql(\n> > > > + 'postgres',\n> > > > + \"CREATE SUBSCRIPTION tap_sub CONNECTION '$publisher_connstr application_name=$appname' PUBLICATION tap_pub WITH (streaming = off, two_phase = on);\");\n> > > > +my $appname_streaming = 'tap_sub_streaming';\n> > > > +$node_subscriber->safe_psql(\n> > > > + 'postgres',\n> > > > + \"CREATE SUBSCRIPTION tap_sub_streaming CONNECTION '$publisher_connstr application_name=$appname_streaming' PUBLICATION tap_pub_streaming WITH (streaming = on, two_phase = on);\");\n> > > > +\n> > > >\n> > > > I think we can remove the 'application_name=$appname', so that the command\n> > > > could be shorter.\n> > >\n> > > But we wait for the subscription to catch up by using\n> > > wait_for_catchup() with application_name, no?\n> > >\n> >\n> > Yeah, but you can directly use the subscription name in\n> > wait_for_catchup because we internally use that as\n> > fallback_application_name. If application_name is not specified in the\n> > connection string as suggested by Hou-San then\n> > fallback_application_name will be considered. Both ways are okay and I\n> > see we use both ways in the tests but it seems there are more places\n> > where we use the method Hou-San is suggesting in subscription tests.\n>\n> Okay, thanks! I referred to tests that set application_name. ISTM it's\n> better to unite them so as not to confuse them in future tests.\n>\n\nAgreed, but let's do this clean-up as a separate patch. Feel free to\nsubmit the patch for the same in a separate thread.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 24 Nov 2021 14:20:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 17, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Right. I've fixed this issue and attached an updated patch.\n> >\n>\n> Few comments/questions:\n> =====================\n> 1.\n> + <para>\n> + The <structname>pg_stat_subscription_workers</structname> view will contain\n> + one row per subscription error reported by workers applying logical\n> + replication changes and workers handling the initial data copy of the\n> + subscribed tables. The statistics entry is removed when the subscription\n> + the worker is running on is removed.\n> + </para>\n>\n> The last line of this paragraph is not clear to me. First \"the\" before\n> \"worker\" in the following part of the sentence seems unnecessary\n> \"..when the subscription the worker..\". Then the part \"running on is\n> removed\" is unclear because it could also mean that we remove the\n> entry when a subscription is disabled. Can we rephrase it to: \"The\n> statistics entry is removed when the corresponding subscription is\n> dropped\"?\n\nAgreed. Fixed.\n\n>\n> 2.\n> Between v20 and v23 versions of patch the size of hash table\n> PGSTAT_SUBWORKER_HASH_SIZE is increased from 32 to 256. I might have\n> missed the comment which lead to this change, can you point me to the\n> same or if you changed it for some other reason, can you let me know\n> the same?\n\nI'd missed reverting this change. I considered increasing this value\nsince the lifetime of subscription is long. But when it comes to\nunshared hashtable can be expanded on-the-fly, it's better to start\nwith a small value. Reverted.\n\n>\n> 3.\n> +\n> + /*\n> + * Repeat for subscription workers. Similarly, we needn't bother\n> + * in the common case where no function stats are being collected.\n> + */\n>\n> /function/subscription workers'\n\nFixed.\n\n>\n> 4.\n> + <para>\n> + Name of command being applied when the error occurred. This field\n> + is always NULL if the error was reported during the initial data\n> + copy.\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>xid</structfield> <type>xid</type>\n> + </para>\n> + <para>\n> + Transaction ID of the publisher node being applied when the error\n> + occurred. This field is always NULL if the error was reported\n> + during the initial data copy.\n> + </para></entry>\n>\n> Is it important to stress on 'always' in the above two descriptions?\n\nNo, removed.\n\n>\n> 5.\n> The current description of first/last_error_time seems sliglthy\n> misleading as one can interpret that these are about different errors.\n> Let's slightly change the description of first/last_error_time as\n> follows or something on those lines:\n>\n> </para>\n> + <para>\n> + Time at which the first error occurred\n> + </para></entry>\n> + </row>\n>\n> First time at which this error occurred\n>\n> <structfield>last_error_time</structfield> <type>timestamp with time zone</type>\n> + </para>\n> + <para>\n> + Time at which the last error occurred\n>\n> Last time at which this error occurred. This will be the same as\n> first_error_time except when the same error occurred more than once\n> consecutively.\n\nChanged. I've removed first_error_time as per discussion on the thread\nfor adding xact stats.\n\n>\n> 6.\n> + </indexterm>\n> + <function>pg_stat_reset_subscription_worker</function> (\n> <parameter>subid</parameter> <type>oid</type>, <optional>\n> <parameter>relid</parameter> <type>oid</type> </optional> )\n> + <returnvalue>void</returnvalue>\n> + </para>\n> + <para>\n> + Resets the statistics of a single subscription worker running on the\n> + subscription with <parameter>subid</parameter> shown in the\n> + <structname>pg_stat_subscription_worker</structname> view. If the\n> + argument <parameter>relid</parameter> is not <literal>NULL</literal>,\n> + resets statistics of the subscription worker handling the initial data\n> + copy of the relation with <parameter>relid</parameter>. Otherwise,\n> + resets the subscription worker statistics of the main apply worker.\n> + If the argument <parameter>relid</parameter> is omitted, resets the\n> + statistics of all subscription workers running on the subscription\n> + with <parameter>subid</parameter>.\n> + </para>\n>\n> The first line of this description seems to indicate that we can only\n> reset the stats of a single worker but the later part indicates that\n> we can reset stats of all subscription workers. Can we change the\n> first line as: \"Resets the statistics of subscription workers running\n> on the subscription with <parameter>subid</parameter> shown in the\n> <structname>pg_stat_subscription_worker</structname> view.\".\n>\n\nChanged.\n\n> 7.\n> pgstat_vacuum_stat()\n> {\n> ..\n> + pgstat_setheader(&spmsg.m_hdr, PGSTAT_MTYPE_SUBSCRIPTIONPURGE);\n> + spmsg.m_databaseid = MyDatabaseId;\n> + spmsg.m_nentries = 0;\n> ..\n> }\n>\n> Do we really need to set the header here? It seems to be getting set\n> in pgstat_send_subscription_purge() while sending this message.\n\nRemoved.\n\n>\n> 8.\n> pgstat_vacuum_stat()\n> {\n> ..\n> +\n> + if (hash_search(htab, (void *) &(subwentry->key.subid), HASH_FIND, NULL)\n> + != NULL)\n> + continue;\n> +\n> + /* This subscription is dead, add the subid to the message */\n> + spmsg.m_subids[spmsg.m_nentries++] = subwentry->key.subid;\n> ..\n> }\n>\n> I think it is better to use a separate variable here for subid as we\n> are using for funcid and tableid. That will make this part of the code\n> easier to follow and look consistent.\n\nAgreed, and changed.\n\n>\n> 9.\n> +/* ----------\n> + * PgStat_MsgSubWorkerError Sent by the apply worker or the table\n> sync worker to\n> + * report the error occurred during logical replication.\n> + * ----------\n>\n> In this comment \"during logical replication\" sounds too generic. Can\n> we instead use \"while processing changes.\" or something like that to\n> make it a bit more specific?\n\n\"while processing changes\" sounds good.\n\nI've attached an updated version patch. Unless I miss something, all\ncomments I got so far have been incorporated into this patch. Please\nreview it.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 24 Nov 2021 20:43:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Changed. I've removed first_error_time as per discussion on the thread\n> for adding xact stats.\n>\n\nWe also agreed to change the column names to start with last_error_*\n[1]. Is there a reason to not make those changes? Do you think that we\ncan change it just before committing that patch? I thought it might be\nbetter to do it that way now itself.\n\n[1] - https://www.postgresql.org/message-id/CAD21AoCQ8z5goy3BCqfk2gn5p8NVH5B-uxO3Xc-dXN-MXVfnKg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 25 Nov 2021 10:27:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Nov 17, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Right. I've fixed this issue and attached an updated patch.\n\nOne very minor comment:\nconflict can be moved to next line to keep it within 80 chars boundary\nwherever possible\n+# Initial table setup on both publisher and subscriber. On subscriber we create\n+# the same tables but with primary keys. Also, insert some data that\nwill conflict\n+# with the data replicated from publisher later.\n+$node_publisher->safe_psql(\n\nSimilarly in the below:\n+# Insert more data to test_tab1, raising an error on the subscriber\ndue to violation\n+# of the unique constraint on test_tab1.\n+my $xid = $node_publisher->safe_psql(\n\nThe rest of the patch looks good.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 25 Nov 2021 16:06:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Nov 24, 2021 at 10:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated version patch. Unless I miss something, all\n> comments I got so far have been incorporated into this patch. Please\n> review it.\n>\n\nOnly a couple of minor points:\n\nsrc/backend/postmaster/pgstat.c\n(1) pgstat_get_subworker_entry\n\nIn the following comment, it should say \"returns an entry ...\":\n\n+ * apply worker otherwise returns entry of the table sync worker associated\n\nsrc/include/pgstat.h\n(2) typedef struct PgStat_StatDBEntry\n\n\"subworker\" should be \"subworkers\" in the following comment, to match\nthe struct member name:\n\n* subworker is the hash table of PgStat_StatSubWorkerEntry which stores\n\nOtherwise, the patch LGTM.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 25 Nov 2021 23:08:14 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 25, 2021 at 1:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Changed. I've removed first_error_time as per discussion on the thread\n> > for adding xact stats.\n> >\n>\n> We also agreed to change the column names to start with last_error_*\n> [1]. Is there a reason to not make those changes? Do you think that we\n> can change it just before committing that patch? I thought it might be\n> better to do it that way now itself.\n\nOh, I thought that you think that we change the column names when\nadding xact stats to the view. But these names also make sense even\nwithout the xact stats. I've attached an updated patch. It also\nincorporated comments from Vignesh and Greg.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 25 Nov 2021 21:29:12 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 25, 2021 at 7:36 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Nov 17, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Nov 16, 2021 at 12:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Right. I've fixed this issue and attached an updated patch.\n>\n> One very minor comment:\n> conflict can be moved to next line to keep it within 80 chars boundary\n> wherever possible\n> +# Initial table setup on both publisher and subscriber. On subscriber we create\n> +# the same tables but with primary keys. Also, insert some data that\n> will conflict\n> +# with the data replicated from publisher later.\n> +$node_publisher->safe_psql(\n>\n> Similarly in the below:\n> +# Insert more data to test_tab1, raising an error on the subscriber\n> due to violation\n> +# of the unique constraint on test_tab1.\n> +my $xid = $node_publisher->safe_psql(\n>\n> The rest of the patch looks good.\n\nThank you for the comments! These are incorporated into v25 patch I\njust submitted.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 25 Nov 2021 21:29:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 25, 2021 at 9:08 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Wed, Nov 24, 2021 at 10:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch. Unless I miss something, all\n> > comments I got so far have been incorporated into this patch. Please\n> > review it.\n> >\n>\n> Only a couple of minor points:\n>\n> src/backend/postmaster/pgstat.c\n> (1) pgstat_get_subworker_entry\n>\n> In the following comment, it should say \"returns an entry ...\":\n>\n> + * apply worker otherwise returns entry of the table sync worker associated\n>\n> src/include/pgstat.h\n> (2) typedef struct PgStat_StatDBEntry\n>\n> \"subworker\" should be \"subworkers\" in the following comment, to match\n> the struct member name:\n>\n> * subworker is the hash table of PgStat_StatSubWorkerEntry which stores\n>\n> Otherwise, the patch LGTM.\n\nThank you for the comments! These are incorporated into v25 patch I\njust submitted.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 25 Nov 2021 21:30:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thur, Nov 25, 2021 8:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Thu, Nov 25, 2021 at 1:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > Changed. I've removed first_error_time as per discussion on the\r\n> > > thread for adding xact stats.\r\n> > >\r\n> >\r\n> > We also agreed to change the column names to start with last_error_*\r\n> > [1]. Is there a reason to not make those changes? Do you think that we\r\n> > can change it just before committing that patch? I thought it might be\r\n> > better to do it that way now itself.\r\n> \r\n> Oh, I thought that you think that we change the column names when adding xact\r\n> stats to the view. But these names also make sense even without the xact stats.\r\n> I've attached an updated patch. It also incorporated comments from Vignesh\r\n> and Greg.\r\n> \r\nHi,\r\n\r\nI only noticed some minor things in the testcases\r\n\r\n1)\r\n+$node_publisher->append_conf('postgresql.conf',\r\n+\t\t\t qq[\r\n+logical_decoding_work_mem = 64kB\r\n+]);\r\n\r\nIt seems we don’t need set the decode_work_mem since we don't test streaming ?\r\n\r\n2)\r\n+$node_publisher->safe_psql('postgres',\r\n+\t\t\t q[\r\n+CREATE PUBLICATION tap_pub FOR TABLE test_tab1, test_tab2;\r\n+]);\r\n\r\nThere are a few places where only one command exists in the 'q[' or 'qq[' like the above code.\r\nTo be consistent, I think it might be better to remove the wrap here, maybe we can write like:\r\n$node_publisher->safe_psql('postgres',\r\n\t' CREATE PUBLICATION tap_pub FOR TABLE test_tab1, test_tab2;');\r\n\r\nThe others LGTM.\r\n\r\nBest regards,\r\nHou zj\r\n\r\n\r\n\r\n\r\n",
"msg_date": "Thu, 25 Nov 2021 13:05:43 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Nov 25, 2021 at 10:06 PM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Thur, Nov 25, 2021 8:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > On Thu, Nov 25, 2021 at 1:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Nov 24, 2021 at 5:14 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Changed. I've removed first_error_time as per discussion on the\n> > > > thread for adding xact stats.\n> > > >\n> > >\n> > > We also agreed to change the column names to start with last_error_*\n> > > [1]. Is there a reason to not make those changes? Do you think that we\n> > > can change it just before committing that patch? I thought it might be\n> > > better to do it that way now itself.\n> >\n> > Oh, I thought that you think that we change the column names when adding xact\n> > stats to the view. But these names also make sense even without the xact stats.\n> > I've attached an updated patch. It also incorporated comments from Vignesh\n> > and Greg.\n> >\n> Hi,\n>\n> I only noticed some minor things in the testcases\n>\n> 1)\n> +$node_publisher->append_conf('postgresql.conf',\n> + qq[\n> +logical_decoding_work_mem = 64kB\n> +]);\n>\n> It seems we don’t need set the decode_work_mem since we don't test streaming ?\n>\n> 2)\n> +$node_publisher->safe_psql('postgres',\n> + q[\n> +CREATE PUBLICATION tap_pub FOR TABLE test_tab1, test_tab2;\n> +]);\n>\n> There are a few places where only one command exists in the 'q[' or 'qq[' like the above code.\n> To be consistent, I think it might be better to remove the wrap here, maybe we can write like:\n> $node_publisher->safe_psql('postgres',\n> ' CREATE PUBLICATION tap_pub FOR TABLE test_tab1, test_tab2;');\n>\n\nIndeed. Attached an updated patch. Thanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 26 Nov 2021 09:29:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, November 26, 2021 9:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> Indeed. Attached an updated patch. Thanks!\r\n\r\nThanks for your patch. A small comment:\r\n\r\n+ OID of the relation that the worker is synchronizing; null for the\r\n+ main apply worker\r\n\r\nShould we modify it to \"OID of the relation that the worker was synchronizing ...\"?\r\n\r\nThe rest of the patch LGTM.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Fri, 26 Nov 2021 02:15:06 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 26, 2021 at 6:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Indeed. Attached an updated patch. Thanks!\n>\n\nI have made a number of changes in the attached patch which includes\n(a) the patch was trying to register multiple array entries for the\nsame subscription which doesn't seem to be required, see changes in\npgstat_vacuum_stat, (b) multiple changes in the test like reduced the\nwal_retrieve_retry_interval to 2s which has reduced the test time to\nhalf, remove the check related to resetting of stats as there is no\nguarantee that the message will be received by the collector and we\nwere not sending it again, changed the test case file name to\n026_stats as we can add more subscription-related stats in this test\nfile itself (c) added/edited multiple comments, (d) updated\nPGSTAT_FILE_FORMAT_ID.\n\nDo let me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 27 Nov 2021 16:26:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Nov 26, 2021 at 7:45 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Friday, November 26, 2021 9:30 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Indeed. Attached an updated patch. Thanks!\n>\n> Thanks for your patch. A small comment:\n>\n> + OID of the relation that the worker is synchronizing; null for the\n> + main apply worker\n>\n> Should we modify it to \"OID of the relation that the worker was synchronizing ...\"?\n>\n\nI don't think this change is required, see the description of the\nsimilar column in pg_stat_subscription.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 27 Nov 2021 16:28:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Nov 27, 2021 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Nov 26, 2021 at 6:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Indeed. Attached an updated patch. Thanks!\n> >\n>\n\nThank you for updating the patch!\n\n> I have made a number of changes in the attached patch which includes\n> (a) the patch was trying to register multiple array entries for the\n> same subscription which doesn't seem to be required, see changes in\n> pgstat_vacuum_stat, (b) multiple changes in the test like reduced the\n> wal_retrieve_retry_interval to 2s which has reduced the test time to\n> half, remove the check related to resetting of stats as there is no\n> guarantee that the message will be received by the collector and we\n> were not sending it again, changed the test case file name to\n> 026_stats as we can add more subscription-related stats in this test\n> file itself\n\nSince we have pg_stat_subscription view, how about 026_worker_stats.pl?\n\nThe rests look good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 29 Nov 2021 10:42:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 29, 2021 at 7:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Nov 27, 2021 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> Thank you for updating the patch!\n>\n> > I have made a number of changes in the attached patch which includes\n> > (a) the patch was trying to register multiple array entries for the\n> > same subscription which doesn't seem to be required, see changes in\n> > pgstat_vacuum_stat, (b) multiple changes in the test like reduced the\n> > wal_retrieve_retry_interval to 2s which has reduced the test time to\n> > half, remove the check related to resetting of stats as there is no\n> > guarantee that the message will be received by the collector and we\n> > were not sending it again, changed the test case file name to\n> > 026_stats as we can add more subscription-related stats in this test\n> > file itself\n>\n> Since we have pg_stat_subscription view, how about 026_worker_stats.pl?\n>\n\nSounds better. Updated patch attached.\n\n> The rests look good to me.\n>\n\nOkay, I'll push this patch tomorrow unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 29 Nov 2021 09:13:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 29, 2021 at 9:13 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 29, 2021 at 7:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Nov 27, 2021 at 7:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > Thank you for updating the patch!\n> >\n> > > I have made a number of changes in the attached patch which includes\n> > > (a) the patch was trying to register multiple array entries for the\n> > > same subscription which doesn't seem to be required, see changes in\n> > > pgstat_vacuum_stat, (b) multiple changes in the test like reduced the\n> > > wal_retrieve_retry_interval to 2s which has reduced the test time to\n> > > half, remove the check related to resetting of stats as there is no\n> > > guarantee that the message will be received by the collector and we\n> > > were not sending it again, changed the test case file name to\n> > > 026_stats as we can add more subscription-related stats in this test\n> > > file itself\n> >\n> > Since we have pg_stat_subscription view, how about 026_worker_stats.pl?\n> >\n>\n> Sounds better. Updated patch attached.\n\nThanks for the updated patch, the v28 patch looks good to me.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 29 Nov 2021 11:37:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Nov 29, 2021 at 11:38 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nI have pushed this patch and there is a buildfarm failure for it. See:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2021-11-30%2005%3A05%3A25\n\nSawada-San has shared his initial analysis on pgsql-committers [1] and\nI am responding here as the fix requires some more discussion.\n\n> Looking at the result the test actually got, we had two error entries\n> for test_tab1 instead of one:\n>\n> # Failed test 'check the error reported by the apply worker'\n> # at t/026_worker_stats.pl line 33.\n> # got: 'tap_sub|INSERT|test_tab1|t\n> # tap_sub||test_tab1|t'\n> # expected: 'tap_sub|INSERT|test_tab1|t'\n>\n> The possible scenarios are:\n>\n> The table sync worker for test_tab1 failed due to an error unrelated\n> to apply changes:\n>\n> 2021-11-30 06:24:02.137 CET [18990:2] ERROR: replication origin with\n> OID 2 is already active for PID 23706\n>\n> At this time, the view had one error entry for the table sync worker.\n> After retrying table sync, it succeeded:\n>\n> 2021-11-30 06:24:04.202 CET [28117:2] LOG: logical replication table\n> synchronization worker for subscription \"tap_sub\", table \"test_tab1\"\n> has finished\n>\n> Then after inserting a row on the publisher, the apply worker inserted\n> the row but failed due to violating a unique key violation, which is\n> expected:\n>\n> 2021-11-30 06:24:04.307 CET [4806:2] ERROR: duplicate key value\n> violates unique constraint \"test_tab1_pkey\"\n> 2021-11-30 06:24:04.307 CET [4806:3] DETAIL: Key (a)=(1) already exists.\n> 2021-11-30 06:24:04.307 CET [4806:4] CONTEXT: processing remote data\n> during \"INSERT\" for replication target relation \"public.test_tab1\" in\n> transaction 721 at 2021-11-30 06:24:04.305096+01\n>\n> As a result, we had two error entries for test_tab1: the table sync\n> worker error and the apply worker error. I didn't expect that the\n> table sync worker for test_tab1 failed due to \"replication origin with\n> OID 2 is already active for PID 23706” error.\n>\n> Looking at test_subscription_error() in 026_worker_stats.pl, we have\n> two checks; in the first check, we wait for the view to show the error\n> entry for the given relation name and xid. This check was passed since\n> we had the second error (i.g., apply worker error). In the second\n> check, we get error entries from pg_stat_subscription_workers by\n> specifying only the relation name. Therefore, we ended up getting two\n> entries and failed the tests.\n>\n> To fix this issue, I think that in the second check, we can get the\n> error from pg_stat_subscription_workers by specifying the relation\n> name *and* xid like the first check does. I've attached the patch.\n> What do you think?\n>\n\nI think this will fix the reported failure but there is another race\ncondition in the test. Isn't it possible that for table test_tab2, we\nget an error \"replication origin with OID ...\" or some other error\nbefore copy, in that case also, we will proceed from the second call\nof test_subscription_error() which is not what we expect in the test?\nShouldn't we someway check that the error message also starts with\n\"duplicate key value violates ...\"?\n\n[1] - https://www.postgresql.org/message-id/CAD21AoChP5wOT2AYziF%2B-j7vvThF2NyAs7wr%2Byy%2B8hsnu%3D8Rgg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 30 Nov 2021 14:58:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 30, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 29, 2021 at 11:38 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> I have pushed this patch and there is a buildfarm failure for it. See:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2021-11-30%2005%3A05%3A25\n>\n> Sawada-San has shared his initial analysis on pgsql-committers [1] and\n> I am responding here as the fix requires some more discussion.\n>\n> > Looking at the result the test actually got, we had two error entries\n> > for test_tab1 instead of one:\n> >\n> > # Failed test 'check the error reported by the apply worker'\n> > # at t/026_worker_stats.pl line 33.\n> > # got: 'tap_sub|INSERT|test_tab1|t\n> > # tap_sub||test_tab1|t'\n> > # expected: 'tap_sub|INSERT|test_tab1|t'\n> >\n> > The possible scenarios are:\n> >\n> > The table sync worker for test_tab1 failed due to an error unrelated\n> > to apply changes:\n> >\n> > 2021-11-30 06:24:02.137 CET [18990:2] ERROR: replication origin with\n> > OID 2 is already active for PID 23706\n> >\n> > At this time, the view had one error entry for the table sync worker.\n> > After retrying table sync, it succeeded:\n> >\n> > 2021-11-30 06:24:04.202 CET [28117:2] LOG: logical replication table\n> > synchronization worker for subscription \"tap_sub\", table \"test_tab1\"\n> > has finished\n> >\n> > Then after inserting a row on the publisher, the apply worker inserted\n> > the row but failed due to violating a unique key violation, which is\n> > expected:\n> >\n> > 2021-11-30 06:24:04.307 CET [4806:2] ERROR: duplicate key value\n> > violates unique constraint \"test_tab1_pkey\"\n> > 2021-11-30 06:24:04.307 CET [4806:3] DETAIL: Key (a)=(1) already exists.\n> > 2021-11-30 06:24:04.307 CET [4806:4] CONTEXT: processing remote data\n> > during \"INSERT\" for replication target relation \"public.test_tab1\" in\n> > transaction 721 at 2021-11-30 06:24:04.305096+01\n> >\n> > As a result, we had two error entries for test_tab1: the table sync\n> > worker error and the apply worker error. I didn't expect that the\n> > table sync worker for test_tab1 failed due to \"replication origin with\n> > OID 2 is already active for PID 23706” error.\n> >\n> > Looking at test_subscription_error() in 026_worker_stats.pl, we have\n> > two checks; in the first check, we wait for the view to show the error\n> > entry for the given relation name and xid. This check was passed since\n> > we had the second error (i.g., apply worker error). In the second\n> > check, we get error entries from pg_stat_subscription_workers by\n> > specifying only the relation name. Therefore, we ended up getting two\n> > entries and failed the tests.\n> >\n> > To fix this issue, I think that in the second check, we can get the\n> > error from pg_stat_subscription_workers by specifying the relation\n> > name *and* xid like the first check does. I've attached the patch.\n> > What do you think?\n> >\n>\n> I think this will fix the reported failure but there is another race\n> condition in the test. Isn't it possible that for table test_tab2, we\n> get an error \"replication origin with OID ...\" or some other error\n> before copy, in that case also, we will proceed from the second call\n> of test_subscription_error() which is not what we expect in the test?\n\nRight.\n\n> Shouldn't we someway check that the error message also starts with\n> \"duplicate key value violates ...\"?\n\nYeah, I think it's a good idea to make the checks more specific. That\nis, probably we can specify the prefix of the error message and\nsubrelid in addition to the current conditions: relid and xid. That\nway, we can check what error was reported by which workers (tablesync\nor apply) for which relations. And both check queries in\ntest_subscription_error() can have the same WHERE clause.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 30 Nov 2021 20:41:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 30, 2021 at 8:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 30, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Nov 29, 2021 at 11:38 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > I have pushed this patch and there is a buildfarm failure for it. See:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2021-11-30%2005%3A05%3A25\n> >\n> > Sawada-San has shared his initial analysis on pgsql-committers [1] and\n> > I am responding here as the fix requires some more discussion.\n> >\n> > > Looking at the result the test actually got, we had two error entries\n> > > for test_tab1 instead of one:\n> > >\n> > > # Failed test 'check the error reported by the apply worker'\n> > > # at t/026_worker_stats.pl line 33.\n> > > # got: 'tap_sub|INSERT|test_tab1|t\n> > > # tap_sub||test_tab1|t'\n> > > # expected: 'tap_sub|INSERT|test_tab1|t'\n> > >\n> > > The possible scenarios are:\n> > >\n> > > The table sync worker for test_tab1 failed due to an error unrelated\n> > > to apply changes:\n> > >\n> > > 2021-11-30 06:24:02.137 CET [18990:2] ERROR: replication origin with\n> > > OID 2 is already active for PID 23706\n> > >\n> > > At this time, the view had one error entry for the table sync worker.\n> > > After retrying table sync, it succeeded:\n> > >\n> > > 2021-11-30 06:24:04.202 CET [28117:2] LOG: logical replication table\n> > > synchronization worker for subscription \"tap_sub\", table \"test_tab1\"\n> > > has finished\n> > >\n> > > Then after inserting a row on the publisher, the apply worker inserted\n> > > the row but failed due to violating a unique key violation, which is\n> > > expected:\n> > >\n> > > 2021-11-30 06:24:04.307 CET [4806:2] ERROR: duplicate key value\n> > > violates unique constraint \"test_tab1_pkey\"\n> > > 2021-11-30 06:24:04.307 CET [4806:3] DETAIL: Key (a)=(1) already exists.\n> > > 2021-11-30 06:24:04.307 CET [4806:4] CONTEXT: processing remote data\n> > > during \"INSERT\" for replication target relation \"public.test_tab1\" in\n> > > transaction 721 at 2021-11-30 06:24:04.305096+01\n> > >\n> > > As a result, we had two error entries for test_tab1: the table sync\n> > > worker error and the apply worker error. I didn't expect that the\n> > > table sync worker for test_tab1 failed due to \"replication origin with\n> > > OID 2 is already active for PID 23706” error.\n> > >\n> > > Looking at test_subscription_error() in 026_worker_stats.pl, we have\n> > > two checks; in the first check, we wait for the view to show the error\n> > > entry for the given relation name and xid. This check was passed since\n> > > we had the second error (i.g., apply worker error). In the second\n> > > check, we get error entries from pg_stat_subscription_workers by\n> > > specifying only the relation name. Therefore, we ended up getting two\n> > > entries and failed the tests.\n> > >\n> > > To fix this issue, I think that in the second check, we can get the\n> > > error from pg_stat_subscription_workers by specifying the relation\n> > > name *and* xid like the first check does. I've attached the patch.\n> > > What do you think?\n> > >\n> >\n> > I think this will fix the reported failure but there is another race\n> > condition in the test. Isn't it possible that for table test_tab2, we\n> > get an error \"replication origin with OID ...\" or some other error\n> > before copy, in that case also, we will proceed from the second call\n> > of test_subscription_error() which is not what we expect in the test?\n>\n> Right.\n>\n> > Shouldn't we someway check that the error message also starts with\n> > \"duplicate key value violates ...\"?\n>\n> Yeah, I think it's a good idea to make the checks more specific. That\n> is, probably we can specify the prefix of the error message and\n> subrelid in addition to the current conditions: relid and xid. That\n> way, we can check what error was reported by which workers (tablesync\n> or apply) for which relations. And both check queries in\n> test_subscription_error() can have the same WHERE clause.\n\nI've attached a patch that fixes this issue. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 30 Nov 2021 22:38:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Nov 30, 2021 at 7:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Nov 30, 2021 at 8:41 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Nov 30, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Nov 29, 2021 at 11:38 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > I have pushed this patch and there is a buildfarm failure for it. See:\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2021-11-30%2005%3A05%3A25\n> > >\n> > > Sawada-San has shared his initial analysis on pgsql-committers [1] and\n> > > I am responding here as the fix requires some more discussion.\n> > >\n> > > > Looking at the result the test actually got, we had two error entries\n> > > > for test_tab1 instead of one:\n> > > >\n> > > > # Failed test 'check the error reported by the apply worker'\n> > > > # at t/026_worker_stats.pl line 33.\n> > > > # got: 'tap_sub|INSERT|test_tab1|t\n> > > > # tap_sub||test_tab1|t'\n> > > > # expected: 'tap_sub|INSERT|test_tab1|t'\n> > > >\n> > > > The possible scenarios are:\n> > > >\n> > > > The table sync worker for test_tab1 failed due to an error unrelated\n> > > > to apply changes:\n> > > >\n> > > > 2021-11-30 06:24:02.137 CET [18990:2] ERROR: replication origin with\n> > > > OID 2 is already active for PID 23706\n> > > >\n> > > > At this time, the view had one error entry for the table sync worker.\n> > > > After retrying table sync, it succeeded:\n> > > >\n> > > > 2021-11-30 06:24:04.202 CET [28117:2] LOG: logical replication table\n> > > > synchronization worker for subscription \"tap_sub\", table \"test_tab1\"\n> > > > has finished\n> > > >\n> > > > Then after inserting a row on the publisher, the apply worker inserted\n> > > > the row but failed due to violating a unique key violation, which is\n> > > > expected:\n> > > >\n> > > > 2021-11-30 06:24:04.307 CET [4806:2] ERROR: duplicate key value\n> > > > violates unique constraint \"test_tab1_pkey\"\n> > > > 2021-11-30 06:24:04.307 CET [4806:3] DETAIL: Key (a)=(1) already exists.\n> > > > 2021-11-30 06:24:04.307 CET [4806:4] CONTEXT: processing remote data\n> > > > during \"INSERT\" for replication target relation \"public.test_tab1\" in\n> > > > transaction 721 at 2021-11-30 06:24:04.305096+01\n> > > >\n> > > > As a result, we had two error entries for test_tab1: the table sync\n> > > > worker error and the apply worker error. I didn't expect that the\n> > > > table sync worker for test_tab1 failed due to \"replication origin with\n> > > > OID 2 is already active for PID 23706” error.\n> > > >\n> > > > Looking at test_subscription_error() in 026_worker_stats.pl, we have\n> > > > two checks; in the first check, we wait for the view to show the error\n> > > > entry for the given relation name and xid. This check was passed since\n> > > > we had the second error (i.g., apply worker error). In the second\n> > > > check, we get error entries from pg_stat_subscription_workers by\n> > > > specifying only the relation name. Therefore, we ended up getting two\n> > > > entries and failed the tests.\n> > > >\n> > > > To fix this issue, I think that in the second check, we can get the\n> > > > error from pg_stat_subscription_workers by specifying the relation\n> > > > name *and* xid like the first check does. I've attached the patch.\n> > > > What do you think?\n> > > >\n> > >\n> > > I think this will fix the reported failure but there is another race\n> > > condition in the test. Isn't it possible that for table test_tab2, we\n> > > get an error \"replication origin with OID ...\" or some other error\n> > > before copy, in that case also, we will proceed from the second call\n> > > of test_subscription_error() which is not what we expect in the test?\n> >\n> > Right.\n> >\n> > > Shouldn't we someway check that the error message also starts with\n> > > \"duplicate key value violates ...\"?\n> >\n> > Yeah, I think it's a good idea to make the checks more specific. That\n> > is, probably we can specify the prefix of the error message and\n> > subrelid in addition to the current conditions: relid and xid. That\n> > way, we can check what error was reported by which workers (tablesync\n> > or apply) for which relations. And both check queries in\n> > test_subscription_error() can have the same WHERE clause.\n>\n> I've attached a patch that fixes this issue. Please review it.\n\nThanks for the updated patch, the patch applies neatly and make\ncheck-world passes. Also I ran the failing test in a loop and found it\nto be passing always.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 30 Nov 2021 22:14:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tues, Nov 30, 2021 9:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Tue, Nov 30, 2021 at 8:41 PM Masahiko Sawada <sawada.mshk@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Nov 30, 2021 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Mon, Nov 29, 2021 at 11:38 AM vignesh C <vignesh21@gmail.com>\r\n> wrote:\r\n> > > >\r\n> > >\r\n> > > I have pushed this patch and there is a buildfarm failure for it. See:\r\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&d\r\n> > > t=2021-11-30%2005%3A05%3A25\r\n> > >\r\n> > > Sawada-San has shared his initial analysis on pgsql-committers [1]\r\n> > > and I am responding here as the fix requires some more discussion.\r\n> > >\r\n> > > > Looking at the result the test actually got, we had two error\r\n> > > > entries for test_tab1 instead of one:\r\n> > > >\r\n> > > > # Failed test 'check the error reported by the apply worker'\r\n> > > > # at t/026_worker_stats.pl line 33.\r\n> > > > # got: 'tap_sub|INSERT|test_tab1|t\r\n> > > > # tap_sub||test_tab1|t'\r\n> > > > # expected: 'tap_sub|INSERT|test_tab1|t'\r\n> > > >\r\n> > > > The possible scenarios are:\r\n> > > >\r\n> > > > The table sync worker for test_tab1 failed due to an error\r\n> > > > unrelated to apply changes:\r\n> > > >\r\n> > > > 2021-11-30 06:24:02.137 CET [18990:2] ERROR: replication origin\r\n> > > > with OID 2 is already active for PID 23706\r\n> > > >\r\n> > > > At this time, the view had one error entry for the table sync worker.\r\n> > > > After retrying table sync, it succeeded:\r\n> > > >\r\n> > > > 2021-11-30 06:24:04.202 CET [28117:2] LOG: logical replication\r\n> > > > table synchronization worker for subscription \"tap_sub\", table\r\n> \"test_tab1\"\r\n> > > > has finished\r\n> > > >\r\n> > > > Then after inserting a row on the publisher, the apply worker\r\n> > > > inserted the row but failed due to violating a unique key\r\n> > > > violation, which is\r\n> > > > expected:\r\n> > > >\r\n> > > > 2021-11-30 06:24:04.307 CET [4806:2] ERROR: duplicate key value\r\n> > > > violates unique constraint \"test_tab1_pkey\"\r\n> > > > 2021-11-30 06:24:04.307 CET [4806:3] DETAIL: Key (a)=(1) already exists.\r\n> > > > 2021-11-30 06:24:04.307 CET [4806:4] CONTEXT: processing remote\r\n> > > > data during \"INSERT\" for replication target relation\r\n> > > > \"public.test_tab1\" in transaction 721 at 2021-11-30\r\n> > > > 06:24:04.305096+01\r\n> > > >\r\n> > > > As a result, we had two error entries for test_tab1: the table\r\n> > > > sync worker error and the apply worker error. I didn't expect that\r\n> > > > the table sync worker for test_tab1 failed due to \"replication\r\n> > > > origin with OID 2 is already active for PID 23706” error.\r\n> > > >\r\n> > > > Looking at test_subscription_error() in 026_worker_stats.pl, we\r\n> > > > have two checks; in the first check, we wait for the view to show\r\n> > > > the error entry for the given relation name and xid. This check\r\n> > > > was passed since we had the second error (i.g., apply worker\r\n> > > > error). In the second check, we get error entries from\r\n> > > > pg_stat_subscription_workers by specifying only the relation name.\r\n> > > > Therefore, we ended up getting two entries and failed the tests.\r\n> > > >\r\n> > > > To fix this issue, I think that in the second check, we can get\r\n> > > > the error from pg_stat_subscription_workers by specifying the\r\n> > > > relation name *and* xid like the first check does. I've attached the patch.\r\n> > > > What do you think?\r\n> > > >\r\n> > >\r\n> > > I think this will fix the reported failure but there is another race\r\n> > > condition in the test. Isn't it possible that for table test_tab2,\r\n> > > we get an error \"replication origin with OID ...\" or some other\r\n> > > error before copy, in that case also, we will proceed from the\r\n> > > second call of test_subscription_error() which is not what we expect in the\r\n> test?\r\n> >\r\n> > Right.\r\n> >\r\n> > > Shouldn't we someway check that the error message also starts with\r\n> > > \"duplicate key value violates ...\"?\r\n> >\r\n> > Yeah, I think it's a good idea to make the checks more specific. That\r\n> > is, probably we can specify the prefix of the error message and\r\n> > subrelid in addition to the current conditions: relid and xid. That\r\n> > way, we can check what error was reported by which workers (tablesync\r\n> > or apply) for which relations. And both check queries in\r\n> > test_subscription_error() can have the same WHERE clause.\r\n> \r\n> I've attached a patch that fixes this issue. Please review it.\r\n> \r\n\r\nI have a question about the testcase (I could be wrong here).\r\n\r\nIs it possible that the race condition happen between apply worker(test_tab1)\r\nand table sync worker(test_tab2) ? If so, it seems the error(\"replication\r\norigin with OID\") could happen randomly until we resolve the conflict.\r\nBased on this, for the following code:\r\n-----\r\n # Wait for the error statistics to be updated.\r\n my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\r\n $node->poll_query_until(\r\n\t'postgres', $check_sql,\r\n) or die \"Timed out while waiting for statistics to be updated\";\r\n\r\n* [1] *\r\n\r\n $check_sql =\r\n\tqq[\r\nSELECT subname, last_error_command, last_error_relid::regclass,\r\nlast_error_count > 0 ] . $part_sql;\r\n my $result = $node->safe_psql('postgres', $check_sql);\r\n is($result, $expected, $msg);\r\n-----\r\n\r\nIs it possible that the error(\"replication origin with OID\") happen again at the\r\nplace [1]. In this case, the error message we have checked could be replaced by\r\nanother error(\"replication origin ...\") and then the test fail ?\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 1 Dec 2021 02:53:53 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 8:24 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Tues, Nov 30, 2021 9:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > Shouldn't we someway check that the error message also starts with\n> > > > \"duplicate key value violates ...\"?\n> > >\n> > > Yeah, I think it's a good idea to make the checks more specific. That\n> > > is, probably we can specify the prefix of the error message and\n> > > subrelid in addition to the current conditions: relid and xid. That\n> > > way, we can check what error was reported by which workers (tablesync\n> > > or apply) for which relations. And both check queries in\n> > > test_subscription_error() can have the same WHERE clause.\n> >\n> > I've attached a patch that fixes this issue. Please review it.\n> >\n>\n> I have a question about the testcase (I could be wrong here).\n>\n> Is it possible that the race condition happen between apply worker(test_tab1)\n> and table sync worker(test_tab2) ? If so, it seems the error(\"replication\n> origin with OID\") could happen randomly until we resolve the conflict.\n> Based on this, for the following code:\n> -----\n> # Wait for the error statistics to be updated.\n> my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\n> $node->poll_query_until(\n> 'postgres', $check_sql,\n> ) or die \"Timed out while waiting for statistics to be updated\";\n>\n> * [1] *\n>\n> $check_sql =\n> qq[\n> SELECT subname, last_error_command, last_error_relid::regclass,\n> last_error_count > 0 ] . $part_sql;\n> my $result = $node->safe_psql('postgres', $check_sql);\n> is($result, $expected, $msg);\n> -----\n>\n> Is it possible that the error(\"replication origin with OID\") happen again at the\n> place [1]. In this case, the error message we have checked could be replaced by\n> another error(\"replication origin ...\") and then the test fail ?\n>\n\nOnce we get the \"duplicate key violation ...\" error before * [1] * via\napply_worker then we shouldn't get replication origin-specific error\nbecause the origin set up is done before starting to apply changes.\nAlso, even if that or some other happens after * [1] * because of\nerrmsg_prefix check it should still succeed. Does that make sense?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Dec 2021 08:52:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Dec 1, 2021 at 8:24 AM houzj.fnst@fujitsu.com\r\n> <houzj.fnst@fujitsu.com> wrote:\r\n> >\r\n> > On Tues, Nov 30, 2021 9:39 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > >\r\n> > > > > Shouldn't we someway check that the error message also starts with\r\n> > > > > \"duplicate key value violates ...\"?\r\n> > > >\r\n> > > > Yeah, I think it's a good idea to make the checks more specific. That\r\n> > > > is, probably we can specify the prefix of the error message and\r\n> > > > subrelid in addition to the current conditions: relid and xid. That\r\n> > > > way, we can check what error was reported by which workers (tablesync\r\n> > > > or apply) for which relations. And both check queries in\r\n> > > > test_subscription_error() can have the same WHERE clause.\r\n> > >\r\n> > > I've attached a patch that fixes this issue. Please review it.\r\n> > >\r\n> >\r\n> > I have a question about the testcase (I could be wrong here).\r\n> >\r\n> > Is it possible that the race condition happen between apply\r\n> worker(test_tab1)\r\n> > and table sync worker(test_tab2) ? If so, it seems the error(\"replication\r\n> > origin with OID\") could happen randomly until we resolve the conflict.\r\n> > Based on this, for the following code:\r\n> > -----\r\n> > # Wait for the error statistics to be updated.\r\n> > my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\r\n> > $node->poll_query_until(\r\n> > 'postgres', $check_sql,\r\n> > ) or die \"Timed out while waiting for statistics to be updated\";\r\n> >\r\n> > * [1] *\r\n> >\r\n> > $check_sql =\r\n> > qq[\r\n> > SELECT subname, last_error_command, last_error_relid::regclass,\r\n> > last_error_count > 0 ] . 
$part_sql;\r\n> > my $result = $node->safe_psql('postgres', $check_sql);\r\n> > is($result, $expected, $msg);\r\n> > -----\r\n> >\r\n> > Is it possible that the error(\"replication origin with OID\") happen again at the\r\n> > place [1]. In this case, the error message we have checked could be replaced\r\n> by\r\n> > another error(\"replication origin ...\") and then the test fail ?\r\n> >\r\n> \r\n> Once we get the \"duplicate key violation ...\" error before * [1] * via\r\n> apply_worker then we shouldn't get replication origin-specific error\r\n> because the origin set up is done before starting to apply changes.\r\n> Also, even if that or some other happens after * [1] * because of\r\n> errmsg_prefix check it should still succeed. Does that make sense?\r\n\r\nOh, I missed the point that the origin set up is done once we get the expected error.\r\nThanks for the explanation, and I think the patch looks good.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 1 Dec 2021 03:39:20 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 8:24 AM houzj.fnst@fujitsu.com\n> <houzj.fnst@fujitsu.com> wrote:\n> >\n> > On Tues, Nov 30, 2021 9:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > > Shouldn't we someway check that the error message also starts with\n> > > > > \"duplicate key value violates ...\"?\n> > > >\n> > > > Yeah, I think it's a good idea to make the checks more specific. That\n> > > > is, probably we can specify the prefix of the error message and\n> > > > subrelid in addition to the current conditions: relid and xid. That\n> > > > way, we can check what error was reported by which workers (tablesync\n> > > > or apply) for which relations. And both check queries in\n> > > > test_subscription_error() can have the same WHERE clause.\n> > >\n> > > I've attached a patch that fixes this issue. Please review it.\n> > >\n> >\n> > I have a question about the testcase (I could be wrong here).\n> >\n> > Is it possible that the race condition happen between apply worker(test_tab1)\n> > and table sync worker(test_tab2) ? If so, it seems the error(\"replication\n> > origin with OID\") could happen randomly until we resolve the conflict.\n> > Based on this, for the following code:\n> > -----\n> > # Wait for the error statistics to be updated.\n> > my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\n> > $node->poll_query_until(\n> > 'postgres', $check_sql,\n> > ) or die \"Timed out while waiting for statistics to be updated\";\n> >\n> > * [1] *\n> >\n> > $check_sql =\n> > qq[\n> > SELECT subname, last_error_command, last_error_relid::regclass,\n> > last_error_count > 0 ] . $part_sql;\n> > my $result = $node->safe_psql('postgres', $check_sql);\n> > is($result, $expected, $msg);\n> > -----\n> >\n> > Is it possible that the error(\"replication origin with OID\") happen again at the\n> > place [1]. 
In this case, the error message we have checked could be replaced by\n> > another error(\"replication origin ...\") and then the test fail ?\n> >\n>\n> Once we get the \"duplicate key violation ...\" error before * [1] * via\n> apply_worker then we shouldn't get replication origin-specific error\n> because the origin set up is done before starting to apply changes.\n\nRight.\n\n> Also, even if that or some other happens after * [1] * because of\n> errmsg_prefix check it should still succeed.\n\nIn this case, the old error (\"duplicate key violation ...\") is\noverwritten by a new error (e.g., connection error. not sure how\npossible it is) and the test fails because the query returns no\nentries, no? If so, the result from the second check_sql is unstable\nand it's probably better to check the result only once. That is, the\nfirst check_sql includes the command and we exit from the function\nonce we confirm the error entry is expectedly updated.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 1 Dec 2021 12:41:31 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 9:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 1, 2021 at 8:24 AM houzj.fnst@fujitsu.com\n> > <houzj.fnst@fujitsu.com> wrote:\n> > >\n> > > I have a question about the testcase (I could be wrong here).\n> > >\n> > > Is it possible that the race condition happen between apply worker(test_tab1)\n> > > and table sync worker(test_tab2) ? If so, it seems the error(\"replication\n> > > origin with OID\") could happen randomly until we resolve the conflict.\n> > > Based on this, for the following code:\n> > > -----\n> > > # Wait for the error statistics to be updated.\n> > > my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\n> > > $node->poll_query_until(\n> > > 'postgres', $check_sql,\n> > > ) or die \"Timed out while waiting for statistics to be updated\";\n> > >\n> > > * [1] *\n> > >\n> > > $check_sql =\n> > > qq[\n> > > SELECT subname, last_error_command, last_error_relid::regclass,\n> > > last_error_count > 0 ] . $part_sql;\n> > > my $result = $node->safe_psql('postgres', $check_sql);\n> > > is($result, $expected, $msg);\n> > > -----\n> > >\n> > > Is it possible that the error(\"replication origin with OID\") happen again at the\n> > > place [1]. In this case, the error message we have checked could be replaced by\n> > > another error(\"replication origin ...\") and then the test fail ?\n> > >\n> >\n> > Once we get the \"duplicate key violation ...\" error before * [1] * via\n> > apply_worker then we shouldn't get replication origin-specific error\n> > because the origin set up is done before starting to apply changes.\n>\n> Right.\n>\n> > Also, even if that or some other happens after * [1] * because of\n> > errmsg_prefix check it should still succeed.\n>\n> In this case, the old error (\"duplicate key violation ...\") is\n> overwritten by a new error (e.g., connection error. 
not sure how\n> possible it is)\n>\n\nYeah, or probably some memory allocation failure. I think the\nprobability of such failures is very low but OTOH why take chance.\n\n> and the test fails because the query returns no\n> entries, no?\n>\n\nRight.\n\n> If so, the result from the second check_sql is unstable\n> and it's probably better to check the result only once. That is, the\n> first check_sql includes the command and we exit from the function\n> once we confirm the error entry is expectedly updated.\n>\n\nYeah, I think that should be fine.\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Dec 2021 09:30:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 1:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 1, 2021 at 9:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 1, 2021 at 12:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 1, 2021 at 8:24 AM houzj.fnst@fujitsu.com\n> > > <houzj.fnst@fujitsu.com> wrote:\n> > > >\n> > > > I have a question about the testcase (I could be wrong here).\n> > > >\n> > > > Is it possible that the race condition happen between apply worker(test_tab1)\n> > > > and table sync worker(test_tab2) ? If so, it seems the error(\"replication\n> > > > origin with OID\") could happen randomly until we resolve the conflict.\n> > > > Based on this, for the following code:\n> > > > -----\n> > > > # Wait for the error statistics to be updated.\n> > > > my $check_sql = qq[SELECT count(1) > 0 ] . $part_sql;\n> > > > $node->poll_query_until(\n> > > > 'postgres', $check_sql,\n> > > > ) or die \"Timed out while waiting for statistics to be updated\";\n> > > >\n> > > > * [1] *\n> > > >\n> > > > $check_sql =\n> > > > qq[\n> > > > SELECT subname, last_error_command, last_error_relid::regclass,\n> > > > last_error_count > 0 ] . $part_sql;\n> > > > my $result = $node->safe_psql('postgres', $check_sql);\n> > > > is($result, $expected, $msg);\n> > > > -----\n> > > >\n> > > > Is it possible that the error(\"replication origin with OID\") happen again at the\n> > > > place [1]. 
In this case, the error message we have checked could be replaced by\n> > > > another error(\"replication origin ...\") and then the test fail ?\n> > > >\n> > >\n> > > Once we get the \"duplicate key violation ...\" error before * [1] * via\n> > > apply_worker then we shouldn't get replication origin-specific error\n> > > because the origin set up is done before starting to apply changes.\n> >\n> > Right.\n> >\n> > > Also, even if that or some other happens after * [1] * because of\n> > > errmsg_prefix check it should still succeed.\n> >\n> > In this case, the old error (\"duplicate key violation ...\") is\n> > overwritten by a new error (e.g., connection error. not sure how\n> > possible it is)\n> >\n>\n> Yeah, or probably some memory allocation failure. I think the\n> probability of such failures is very low but OTOH why take chance.\n>\n> > and the test fails because the query returns no\n> > entries, no?\n> >\n>\n> Right.\n>\n> > If so, the result from the second check_sql is unstable\n> > and it's probably better to check the result only once. That is, the\n> > first check_sql includes the command and we exit from the function\n> > once we confirm the error entry is expectedly updated.\n> >\n>\n> Yeah, I think that should be fine.\n\nOkay, I've attached an updated patch. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 1 Dec 2021 14:23:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, December 1, 2021 1:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Wed, Dec 1, 2021 at 1:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> > On Wed, Dec 1, 2021 at 9:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> > > If so, the result from the second check_sql is unstable and it's\r\n> > > probably better to check the result only once. That is, the first\r\n> > > check_sql includes the command and we exit from the function once we\r\n> > > confirm the error entry is expectedly updated.\r\n> > >\r\n> >\r\n> > Yeah, I think that should be fine.\r\n> \r\n> Okay, I've attached an updated patch. Please review it.\r\n> \r\n\r\nI agreed that checking the result only once makes the test more stable.\r\nThe patch looks good to me.\r\n\r\nBest regards,\r\nHou zj\r\n",
"msg_date": "Wed, 1 Dec 2021 06:27:33 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 1, 2021 at 11:57 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, December 1, 2021 1:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Okay, I've attached an updated patch. Please review it.\n> >\n>\n> I agreed that checking the result only once makes the test more stable.\n> The patch looks good to me.\n>\n\nPushed.\n\nNow, coming back to the skip_xid patch. To summarize the discussion in\nthat regard so far, we have discussed various alternatives for the\nsyntax like:\n\na. ALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\nb. Alter Subscription <sub_name> SET ( subscription_parameter [=value]\n[, ... ] );\nc. Alter Subscription <sub_name> On Error ( subscription_parameter\n[=value] [, ... ] );\nd. Alter Subscription <sub_name> SKIP ( subscription_parameter\n[=value] [, ... ] );\nwhere subscription_parameter can be one of:\nxid = <xid_val>\nlsn = <lsn_val>\n...\n\nWe didn't prefer (a) as it can lead to more keywords as we add more\noptions; (b) as we want these new skip options to behave and be set\ndifferently than existing subscription properties because of the\ndifference in their behavior; (c) as that sounds more like an action\nto be performed on a future condition (error/conflict) whereas here we\nalready knew that an error has happened;\n\nAs per discussion till now, option (d) seems preferable. In this, we\nneed to see how and what to allow as options. The simplest way for the\nfirst version is to just allow one xid to be specified at a time which\nwould mean that specifying multiple xids should error out. We can also\nadditionally allow specifying operations like 'insert', 'update',\netc., and then relation list (list of oids). What that would mean is\nthat for a transaction we can allow which particular operations and\nrelations we want to skip.\n\nI am not sure what exactly we can provide to users to allow skipping\ninitial table sync as we can't specify XID there. 
One option that\ncomes to mind is to allow specifying a combination of copy_data and\nrelid to skip table sync for a particular relation. We might think of\nnot doing anything for table sync workers but not sure if that is a\ngood option.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Dec 2021 12:18:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "\nOn 02.12.21 07:48, Amit Kapila wrote:\n> a. ALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\n> b. Alter Subscription <sub_name> SET ( subscription_parameter [=value]\n> [, ... ] );\n> c. Alter Subscription <sub_name> On Error ( subscription_parameter\n> [=value] [, ... ] );\n> d. Alter Subscription <sub_name> SKIP ( subscription_parameter\n> [=value] [, ... ] );\n> where subscription_parameter can be one of:\n> xid = <xid_val>\n> lsn = <lsn_val>\n> ...\n\n> As per discussion till now, option (d) seems preferable.\n\nI agree.\n\n> In this, we\n> need to see how and what to allow as options. The simplest way for the\n> first version is to just allow one xid to be specified at a time which\n> would mean that specifying multiple xids should error out. We can also\n> additionally allow specifying operations like 'insert', 'update',\n> etc., and then relation list (list of oids). What that would mean is\n> that for a transaction we can allow which particular operations and\n> relations we want to skip.\n\nI don't know how difficult it would be, but allowing multiple xids might \nbe desirable. But this syntax gives you flexibility, so we can also \nstart with a simple implementation.\n\n> I am not sure what exactly we can provide to users to allow skipping\n> initial table sync as we can't specify XID there. One option that\n> comes to mind is to allow specifying a combination of copy_data and\n> relid to skip table sync for a particular relation. We might think of\n> not doing anything for table sync workers but not sure if that is a\n> good option.\n\nI don't think this feature should affect tablesync. The semantics are \nnot clear, and it's not really needed. If the tablesync doesn't work, \nyou can try the setup again from scratch.\n\n\n",
"msg_date": "Thu, 2 Dec 2021 16:08:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 2, 2021 at 8:38 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.12.21 07:48, Amit Kapila wrote:\n> > a. ALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\n> > b. Alter Subscription <sub_name> SET ( subscription_parameter [=value]\n> > [, ... ] );\n> > c. Alter Subscription <sub_name> On Error ( subscription_parameter\n> > [=value] [, ... ] );\n> > d. Alter Subscription <sub_name> SKIP ( subscription_parameter\n> > [=value] [, ... ] );\n> > where subscription_parameter can be one of:\n> > xid = <xid_val>\n> > lsn = <lsn_val>\n> > ...\n>\n> > As per discussion till now, option (d) seems preferable.\n>\n> I agree.\n>\n> > In this, we\n> > need to see how and what to allow as options. The simplest way for the\n> > first version is to just allow one xid to be specified at a time which\n> > would mean that specifying multiple xids should error out. We can also\n> > additionally allow specifying operations like 'insert', 'update',\n> > etc., and then relation list (list of oids). What that would mean is\n> > that for a transaction we can allow which particular operations and\n> > relations we want to skip.\n>\n> I don't know how difficult it would be, but allowing multiple xids might\n> be desirable.\n>\n\nAre there many cases where there could be multiple xid failures that\nthe user can skip? Apply worker always keeps looping at the same error\nfailure so the user wouldn't know of the second xid failure (if any)\ntill the first failure is resolved. I could think of one such case\nwhere it is possible during the initial synchronization phase where\napply worker went ahead then tablesync worker by skipping to apply the\nchanges on the corresponding table. 
After that, it is possible, that\nthe table sync worker failed during the catch-up phase and apply\nworker fails during the processing of some other rel.\n\n> But this syntax gives you flexibility, so we can also\n> start with a simple implementation.\n>\n\nYeah, I also think so. BTW, what do you think of providing extra\nflexibility of giving other options like 'operation', 'rel' along with\nxid? I think such options could be useful for large transactions that\noperate on multiple tables as it is quite possible that only a\nparticular operation from the entire transaction is the cause of\nfailure. Now, on one side, we can argue that skipping the entire\ntransaction is better from the consistency point of view but I think\nit is already possible that we just skip a particular update/delete\n(if the corresponding tuple doesn't exist on the subscriber). For the\nsake of simplicity, we can just allow providing xid at this stage and\nthen extend it later as required but I am not very sure of that point.\n\n> > I am not sure what exactly we can provide to users to allow skipping\n> > initial table sync as we can't specify XID there. One option that\n> > comes to mind is to allow specifying a combination of copy_data and\n> > relid to skip table sync for a particular relation. We might think of\n> > not doing anything for table sync workers but not sure if that is a\n> > good option.\n>\n> I don't think this feature should affect tablesync. The semantics are\n> not clear, and it's not really needed. If the tablesync doesn't work,\n> you can try the setup again from scratch.\n>\n\nOkay, that makes sense. But note it is possible that tablesync workers\nmight also need to skip some xids during the catchup phase to complete\nthe sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 3 Dec 2021 08:23:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 3, 2021 at 11:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 2, 2021 at 8:38 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 02.12.21 07:48, Amit Kapila wrote:\n> > > a. ALTER SUBSCRIPTION ... [SET|RESET] SKIP TRANSACTION xxx;\n> > > b. Alter Subscription <sub_name> SET ( subscription_parameter [=value]\n> > > [, ... ] );\n> > > c. Alter Subscription <sub_name> On Error ( subscription_parameter\n> > > [=value] [, ... ] );\n> > > d. Alter Subscription <sub_name> SKIP ( subscription_parameter\n> > > [=value] [, ... ] );\n> > > where subscription_parameter can be one of:\n> > > xid = <xid_val>\n> > > lsn = <lsn_val>\n> > > ...\n> >\n> > > As per discussion till now, option (d) seems preferable.\n> >\n> > I agree.\n\n+1\n\n> >\n> > > In this, we\n> > > need to see how and what to allow as options. The simplest way for the\n> > > first version is to just allow one xid to be specified at a time which\n> > > would mean that specifying multiple xids should error out. We can also\n> > > additionally allow specifying operations like 'insert', 'update',\n> > > etc., and then relation list (list of oids). What that would mean is\n> > > that for a transaction we can allow which particular operations and\n> > > relations we want to skip.\n> >\n> > I don't know how difficult it would be, but allowing multiple xids might\n> > be desirable.\n> >\n>\n> Are there many cases where there could be multiple xid failures that\n> the user can skip? Apply worker always keeps looping at the same error\n> failure so the user wouldn't know of the second xid failure (if any)\n> till the first failure is resolved. I could think of one such case\n> where it is possible during the initial synchronization phase where\n> apply worker went ahead then tablesync worker by skipping to apply the\n> changes on the corresponding table. 
After that, it is possible, that\n> the table sync worker failed during the catch-up phase and apply\n> worker fails during the processing of some other rel.\n>\n> > But this syntax gives you flexibility, so we can also\n> > start with a simple implementation.\n> >\n>\n> Yeah, I also think so. BTW, what do you think of providing extra\n> flexibility of giving other options like 'operation', 'rel' along with\n> xid? I think such options could be useful for large transactions that\n> operate on multiple tables as it is quite possible that only a\n> particular operation from the entire transaction is the cause of\n> failure. Now, on one side, we can argue that skipping the entire\n> transaction is better from the consistency point of view but I think\n> it is already possible that we just skip a particular update/delete\n> (if the corresponding tuple doesn't exist on the subscriber). For the\n> sake of simplicity, we can just allow providing xid at this stage and\n> then extend it later as required but I am not very sure of that point.\n\n+1\n\nSkipping a whole transaction by specifying xid would be a good start.\nIdeally, we'd like to automatically skip only operations within the\ntransaction that fail but it seems not easy to achieve. If we allow\nspecifying operations and/or relations, probably multiple operations\nor relations need to be specified in some cases. Otherwise, the\nsubscriber cannot continue logical replication if the transaction has\nmultiple operations on different relations that fail. But similar to\nthe idea of specifying multiple xids, we need to note the fact that\nuser wouldn't know of the second operation failure unless the apply\nworker applies the change. So I'm not sure there are many use cases in\npractice where users can specify multiple operations and relations in\norder to skip applies that fail.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 3 Dec 2021 15:41:47 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 3, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 11:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > But this syntax gives you flexibility, so we can also\n> > > start with a simple implementation.\n> > >\n> >\n> > Yeah, I also think so. BTW, what do you think of providing extra\n> > flexibility of giving other options like 'operation', 'rel' along with\n> > xid? I think such options could be useful for large transactions that\n> > operate on multiple tables as it is quite possible that only a\n> > particular operation from the entire transaction is the cause of\n> > failure. Now, on one side, we can argue that skipping the entire\n> > transaction is better from the consistency point of view but I think\n> > it is already possible that we just skip a particular update/delete\n> > (if the corresponding tuple doesn't exist on the subscriber). For the\n> > sake of simplicity, we can just allow providing xid at this stage and\n> > then extend it later as required but I am not very sure of that point.\n>\n> +1\n>\n> Skipping a whole transaction by specifying xid would be a good start.\n>\n\nOkay, that sounds reasonable, so let's do that for now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Dec 2021 10:47:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Dec 6, 2021 at 2:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Dec 3, 2021 at 11:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > But this syntax gives you flexibility, so we can also\n> > > > start with a simple implementation.\n> > > >\n> > >\n> > > Yeah, I also think so. BTW, what do you think of providing extra\n> > > flexibility of giving other options like 'operation', 'rel' along with\n> > > xid? I think such options could be useful for large transactions that\n> > > operate on multiple tables as it is quite possible that only a\n> > > particular operation from the entire transaction is the cause of\n> > > failure. Now, on one side, we can argue that skipping the entire\n> > > transaction is better from the consistency point of view but I think\n> > > it is already possible that we just skip a particular update/delete\n> > > (if the corresponding tuple doesn't exist on the subscriber). For the\n> > > sake of simplicity, we can just allow providing xid at this stage and\n> > > then extend it later as required but I am not very sure of that point.\n> >\n> > +1\n> >\n> > Skipping a whole transaction by specifying xid would be a good start.\n> >\n>\n> Okay, that sounds reasonable, so let's do that for now.\n\nI'll submit the patch tomorrow.\n\nWhile updating the patch, I realized that skipping a transaction that\nis prepared on the publisher will be tricky a bit;\n\nFirst of all, since skip-xid is in pg_subscription catalog, we need to\ndo a catalog update in a transaction and commit it to disable it. I\nthink we need to set origin-lsn and timestamp of the transaction being\nskipped to the transaction that does the catalog update. That is,\nduring skipping the (not prepared) transaction, we skip all\ndata-modification changes coming from the publisher, do a catalog\nupdate, and commit the transaction. 
If we do the catalog update in the\nnext transaction after skipping the whole transaction, skip_xid could\nbe left in case of a server crash between them. Also, we cannot set\norigin-lsn and timestamp to an empty transaction.\n\nIn prepared transaction cases, I think that when handling a prepare\nmessage, we need to commit the transaction to update the catalog,\ninstead of preparing it. And at the commit prepared and rollback\nprepared time, we skip it since there is not the prepared transaction\non the subscriber. Currently, handling rollback prepared already\nbehaves so; it first checks whether we have prepared the transaction\nor not and skip it if we haven’t. So I think we need to do that also for\nthe commit prepared case. With that, this requires protocol changes so\nthat the subscriber can get prepare-lsn and prepare-time when handling\ncommit prepared.\n\nSo I’m writing a separate patch to add prepare-lsn and timestamp to\ncommit_prepared message, which will be a building block for skipping\nprepared transactions. Actually, I think it’s beneficial even today;\nwe can skip preparing the transaction if it’s an empty transaction.\nAlthough the comment says it’s not a common case, I think that it could\nhappen quite often in some cases:\n\n * XXX, We can optimize such that at commit prepared time, we first check\n * whether we have prepared the transaction or not but that doesn't seem\n * worthwhile because such cases shouldn't be common.\n */\n\nFor example, if the publisher has multiple subscriptions and there are\nmany prepared transactions that modify the particular table subscribed\nby one publisher, many empty transactions are replicated to other\nsubscribers.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 7 Dec 2021 20:36:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 03.12.21 03:53, Amit Kapila wrote:\n>> I don't know how difficult it would be, but allowing multiple xids might\n>> be desirable.\n> \n> Are there many cases where there could be multiple xid failures that\n> the user can skip? Apply worker always keeps looping at the same error\n> failure so the user wouldn't know of the second xid failure (if any)\n> till the first failure is resolved.\n\nYeah, nevermind, doesn't make sense.\n\n> Yeah, I also think so. BTW, what do you think of providing extra\n> flexibility of giving other options like 'operation', 'rel' along with\n> xid? I think such options could be useful for large transactions that\n> operate on multiple tables as it is quite possible that only a\n> particular operation from the entire transaction is the cause of\n> failure. Now, on one side, we can argue that skipping the entire\n> transaction is better from the consistency point of view but I think\n> it is already possible that we just skip a particular update/delete\n> (if the corresponding tuple doesn't exist on the subscriber). For the\n> sake of simplicity, we can just allow providing xid at this stage and\n> then extend it later as required but I am not very sure of that point.\n\nSkipping transactions partially sounds dangerous, especially when \nexposed as an option to users. Needs more careful thought.\n\n\n",
"msg_date": "Tue, 7 Dec 2021 15:44:18 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 7, 2021 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Dec 6, 2021 at 2:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I'll submit the patch tomorrow.\n>\n> While updating the patch, I realized that skipping a transaction that\n> is prepared on the publisher will be tricky a bit;\n>\n> First of all, since skip-xid is in pg_subscription catalog, we need to\n> do a catalog update in a transaction and commit it to disable it. I\n> think we need to set origin-lsn and timestamp of the transaction being\n> skipped to the transaction that does the catalog update. That is,\n> during skipping the (not prepared) transaction, we skip all\n> data-modification changes coming from the publisher, do a catalog\n> update, and commit the transaction. If we do the catalog update in the\n> next transaction after skipping the whole transaction, skip_xid could\n> be left in case of a server crash between them.\n>\n\nBut if we haven't updated origin_lsn/timestamp before the crash, won't\nit request the same transaction again from the publisher? If so, it\nwill be again able to skip it because skip_xid is still not updated.\n\n> Also, we cannot set\n> origin-lsn and timestamp to an empty transaction.\n>\n\nBut won't we update the catalog for skip_xid in that case?\n\nDo we see any advantage of updating the skip_xid in the same\ntransaction vs. doing it in a separate transaction? If not then\nprobably we can choose either of those ways and add some comments to\nindicate the possibility of doing it another way.\n\n> In prepared transaction cases, I think that when handling a prepare\n> message, we need to commit the transaction to update the catalog,\n> instead of preparing it. And at the commit prepared and rollback\n> prepared time, we skip it since there is not the prepared transaction\n> on the subscriber.\n>\n\nCan't we think of just allowing prepare in this case and updating the\nskip_xid only at commit time? 
I see that in this case, we would be\ndoing prepare for a transaction that has no changes but as such cases\nwon't be common, isn't that acceptable?\n\n> Currently, handling rollback prepared already\n> behaves so; it first checks whether we have prepared the transaction\n> or not and skip it if haven’t. So I think we need to do that also for\n> commit prepared case. With that, this requires protocol changes so\n> that the subscriber can get prepare-lsn and prepare-time when handling\n> commit prepared.\n>\n> So I’m writing a separate patch to add prepare-lsn and timestamp to\n> commit_prepared message, which will be a building block for skipping\n> prepared transactions. Actually, I think it’s beneficial even today;\n> we can skip preparing the transaction if it’s an empty transaction.\n> Although the comment it’s not a common case, I think that it could\n> happen quite often in some cases:\n>\n> * XXX, We can optimize such that at commit prepared time, we first check\n> * whether we have prepared the transaction or not but that doesn't seem\n> * worthwhile because such cases shouldn't be common.\n> */\n>\n> For example, if the publisher has multiple subscriptions and there are\n> many prepared transactions that modify the particular table subscribed\n> by one publisher, many empty transactions are replicated to other\n> subscribers.\n>\n\nI think this is not clear to me. Why would one have multiple\nsubscriptions for the same publication? I thought it is possible when\nsay some publisher doesn't publish any data of prepared transaction\nsay because the corresponding action is not published or something\nlike that. I don't deny that someday we want to optimize this case but\nit might be better if we don't need to do it along with this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Dec 2021 10:45:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 7, 2021 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Dec 6, 2021 at 2:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I'll submit the patch tomorrow.\n> >\n> > While updating the patch, I realized that skipping a transaction that\n> > is prepared on the publisher will be tricky a bit;\n> >\n> > First of all, since skip-xid is in pg_subscription catalog, we need to\n> > do a catalog update in a transaction and commit it to disable it. I\n> > think we need to set origin-lsn and timestamp of the transaction being\n> > skipped to the transaction that does the catalog update. That is,\n> > during skipping the (not prepared) transaction, we skip all\n> > data-modification changes coming from the publisher, do a catalog\n> > update, and commit the transaction. If we do the catalog update in the\n> > next transaction after skipping the whole transaction, skip_xid could\n> > be left in case of a server crash between them.\n> >\n>\n> But if we haven't updated origin_lsn/timestamp before the crash, won't\n> it request the same transaction again from the publisher? If so, it\n> will be again able to skip it because skip_xid is still not updated.\n\nYes. I mean that if we update origin_lsn and origin_timestamp when\ncommitting the skipped transaction and then update the catalog in the\nnext transaction it doesn't work in case of a crash. But it's not\npossible in the first place since the first transaction is empty and\nwe cannot set origin_lsn and origin_timestamp to it.\n\n>\n> > Also, we cannot set\n> > origin-lsn and timestamp to an empty transaction.\n> >\n>\n> But won't we update the catalog for skip_xid in that case?\n\nYes. Probably my explanation was not clear. 
Even if we skip all\nchanges of the transaction, the transaction doesn't become empty since\nwe update the catalog.\n\n>\n> Do we see any advantage of updating the skip_xid in the same\n> transaction vs. doing it in a separate transaction? If not then\n> probably we can choose either of those ways and add some comments to\n> indicate the possibility of doing it another way.\n\nI think that since the skipped transaction is always empty there is\nalways one transaction. What we need to consider is when we update\norigin_lsn and origin_timestamp. In non-prepared transaction cases,\nthe only option is when updating the catalog.\n\n>\n> > In prepared transaction cases, I think that when handling a prepare\n> > message, we need to commit the transaction to update the catalog,\n> > instead of preparing it. And at the commit prepared and rollback\n> > prepared time, we skip it since there is not the prepared transaction\n> > on the subscriber.\n> >\n>\n> Can't we think of just allowing prepare in this case and updating the\n> skip_xid only at commit time? I see that in this case, we would be\n> doing prepare for a transaction that has no changes but as such cases\n> won't be common, isn't that acceptable?\n\nIn this case, we will end up committing both the prepared (empty)\ntransaction and the transaction that updates the catalog, right? If\nso, since these are separate transactions it can be a problem in case\nof a crash between these two commits.\n\n>\n> > Currently, handling rollback prepared already\n> > behaves so; it first checks whether we have prepared the transaction\n> > or not and skip it if haven’t. So I think we need to do that also for\n> > commit prepared case. 
With that, this requires protocol changes so\n> > that the subscriber can get prepare-lsn and prepare-time when handling\n> > commit prepared.\n> >\n> > So I’m writing a separate patch to add prepare-lsn and timestamp to\n> > commit_prepared message, which will be a building block for skipping\n> > prepared transactions. Actually, I think it’s beneficial even today;\n> > we can skip preparing the transaction if it’s an empty transaction.\n> > Although the comment it’s not a common case, I think that it could\n> > happen quite often in some cases:\n> >\n> > * XXX, We can optimize such that at commit prepared time, we first check\n> > * whether we have prepared the transaction or not but that doesn't seem\n> > * worthwhile because such cases shouldn't be common.\n> > */\n> >\n> > For example, if the publisher has multiple subscriptions and there are\n> > many prepared transactions that modify the particular table subscribed\n> > by one publisher, many empty transactions are replicated to other\n> > subscribers.\n> >\n>\n> I think this is not clear to me. Why would one have multiple\n> subscriptions for the same publication? I thought it is possible when\n> say some publisher doesn't publish any data of prepared transaction\n> say because the corresponding action is not published or something\n> like that. I don't deny that someday we want to optimize this case but\n> it might be better if we don't need to do it along with this patch.\n\nI imagined that the publisher has two publications (say pub-A and\npub-B) that publish a different set of relations in the database and\nthere are two subscribers that are subscribing to either one\npublication (e.g., subscriber-A subscribes to pub-A and subscriber-B\nsubscribes to pub-B). 
If many prepared transactions happen on the\npublisher and these transactions modify only relations published by\npub-A, both subscriber-A and subscriber-B would prepare the same\nnumber of transactions but all of them in subscriber-B are empty.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 8 Dec 2021 15:17:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Dec 7, 2021 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 6, 2021 at 2:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I'll submit the patch tomorrow.\n> > >\n> > > While updating the patch, I realized that skipping a transaction that\n> > > is prepared on the publisher will be tricky a bit;\n> > >\n> > > First of all, since skip-xid is in pg_subscription catalog, we need to\n> > > do a catalog update in a transaction and commit it to disable it. I\n> > > think we need to set origin-lsn and timestamp of the transaction being\n> > > skipped to the transaction that does the catalog update. That is,\n> > > during skipping the (not prepared) transaction, we skip all\n> > > data-modification changes coming from the publisher, do a catalog\n> > > update, and commit the transaction. If we do the catalog update in the\n> > > next transaction after skipping the whole transaction, skip_xid could\n> > > be left in case of a server crash between them.\n> > >\n> >\n> > But if we haven't updated origin_lsn/timestamp before the crash, won't\n> > it request the same transaction again from the publisher? If so, it\n> > will be again able to skip it because skip_xid is still not updated.\n>\n> Yes. I mean that if we update origin_lsn and origin_timestamp when\n> committing the skipped transaction and then update the catalog in the\n> next transaction it doesn't work in case of a crash. But it's not\n> possible in the first place since the first transaction is empty and\n> we cannot set origin_lsn and origin_timestamp to it.\n>\n> >\n> > > Also, we cannot set\n> > > origin-lsn and timestamp to an empty transaction.\n> > >\n> >\n> > But won't we update the catalog for skip_xid in that case?\n>\n> Yes. 
Probably my explanation was not clear. Even if we skip all\n> changes of the transaction, the transaction doesn't become empty since\n> we update the catalog.\n>\n> >\n> > Do we see any advantage of updating the skip_xid in the same\n> > transaction vs. doing it in a separate transaction? If not then\n> > probably we can choose either of those ways and add some comments to\n> > indicate the possibility of doing it another way.\n>\n> I think that since the skipped transaction is always empty there is\n> always one transaction. What we need to consider is when we update\n> origin_lsn and origin_timestamp. In non-prepared transaction cases,\n> the only option is when updating the catalog.\n>\n\nYour last sentence is not completely clear to me but it seems you\nagree that we can use one transaction instead of two to skip the\nchanges, perform a catalog update, and update origin_lsn/timestamp.\n\n> >\n> > > In prepared transaction cases, I think that when handling a prepare\n> > > message, we need to commit the transaction to update the catalog,\n> > > instead of preparing it. And at the commit prepared and rollback\n> > > prepared time, we skip it since there is not the prepared transaction\n> > > on the subscriber.\n> > >\n> >\n> > Can't we think of just allowing prepare in this case and updating the\n> > skip_xid only at commit time? I see that in this case, we would be\n> > doing prepare for a transaction that has no changes but as such cases\n> > won't be common, isn't that acceptable?\n>\n> In this case, we will end up committing both the prepared (empty)\n> transaction and the transaction that updates the catalog, right?\n>\n\nCan't we do this catalog update before committing the prepared\ntransaction? 
If so, both in prepared and non-prepared cases, our\nimplementation could be the same and we have a reason to accomplish\nthe catalog update in the same transaction for which we skipped the\nchanges.\n\n> If\n> so, since these are separate transactions it can be a problem in case\n> of a crash between these two commits.\n>\n> >\n> > > Currently, handling rollback prepared already\n> > > behaves so; it first checks whether we have prepared the transaction\n> > > or not and skip it if haven’t. So I think we need to do that also for\n> > > commit prepared case. With that, this requires protocol changes so\n> > > that the subscriber can get prepare-lsn and prepare-time when handling\n> > > commit prepared.\n> > >\n> > > So I’m writing a separate patch to add prepare-lsn and timestamp to\n> > > commit_prepared message, which will be a building block for skipping\n> > > prepared transactions. Actually, I think it’s beneficial even today;\n> > > we can skip preparing the transaction if it’s an empty transaction.\n> > > Although the comment it’s not a common case, I think that it could\n> > > happen quite often in some cases:\n> > >\n> > > * XXX, We can optimize such that at commit prepared time, we first check\n> > > * whether we have prepared the transaction or not but that doesn't seem\n> > > * worthwhile because such cases shouldn't be common.\n> > > */\n> > >\n> > > For example, if the publisher has multiple subscriptions and there are\n> > > many prepared transactions that modify the particular table subscribed\n> > > by one publisher, many empty transactions are replicated to other\n> > > subscribers.\n> > >\n> >\n> > I think this is not clear to me. Why would one have multiple\n> > subscriptions for the same publication? I thought it is possible when\n> > say some publisher doesn't publish any data of prepared transaction\n> > say because the corresponding action is not published or something\n> > like that. 
I don't deny that someday we want to optimize this case but\n> > it might be better if we don't need to do it along with this patch.\n>\n> I imagined that the publisher has two publications (say pub-A and\n> pub-B) that publishes a diferent set of relations in the database and\n> there are two subscribers that are subscribing to either one\n> publication (e.g, subscriber-A subscribes to pub-A and subscriber-B\n> subscribes to pub-B). If many prepared transactions happen on the\n> publisher and these transactions modify only relations published by\n> pub-A, both subscriber-A and subscriber-B would prepare the same\n> number of transactions but all of them in subscriber-B is empty.\n>\n\nOkay, I understand those cases but note always checking if the\nprepared xact exists during commit prepared has a cost and that is why\nwe avoided it in the first place. There is a separate effort in\nprogress [1] where we want to avoid sending empty transactions in the\nfirst place. So, it is better to avoid this cost via that effort\nrather than adding additional cost at commit of each prepared\ntransaction. OTOH, if there are other strong reasons to do it then we\ncan probably consider it.\n\n[1] - https://commitfest.postgresql.org/36/3093/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Dec 2021 12:20:47 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 2:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 7, 2021 at 5:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Dec 6, 2021 at 2:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I'll submit the patch tomorrow.\n> > > >\n> > > > While updating the patch, I realized that skipping a transaction that\n> > > > is prepared on the publisher will be tricky a bit;\n> > > >\n> > > > First of all, since skip-xid is in pg_subscription catalog, we need to\n> > > > do a catalog update in a transaction and commit it to disable it. I\n> > > > think we need to set origin-lsn and timestamp of the transaction being\n> > > > skipped to the transaction that does the catalog update. That is,\n> > > > during skipping the (not prepared) transaction, we skip all\n> > > > data-modification changes coming from the publisher, do a catalog\n> > > > update, and commit the transaction. If we do the catalog update in the\n> > > > next transaction after skipping the whole transaction, skip_xid could\n> > > > be left in case of a server crash between them.\n> > > >\n> > >\n> > > But if we haven't updated origin_lsn/timestamp before the crash, won't\n> > > it request the same transaction again from the publisher? If so, it\n> > > will be again able to skip it because skip_xid is still not updated.\n> >\n> > Yes. I mean that if we update origin_lsn and origin_timestamp when\n> > committing the skipped transaction and then update the catalog in the\n> > next transaction it doesn't work in case of a crash. 
But it's not\n> > possible in the first place since the first transaction is empty and\n> > we cannot set origin_lsn and origin_timestamp to it.\n> >\n> > >\n> > > > Also, we cannot set\n> > > > origin-lsn and timestamp to an empty transaction.\n> > > >\n> > >\n> > > But won't we update the catalog for skip_xid in that case?\n> >\n> > Yes. Probably my explanation was not clear. Even if we skip all\n> > changes of the transaction, the transaction doesn't become empty since\n> > we update the catalog.\n> >\n> > >\n> > > Do we see any advantage of updating the skip_xid in the same\n> > > transaction vs. doing it in a separate transaction? If not then\n> > > probably we can choose either of those ways and add some comments to\n> > > indicate the possibility of doing it another way.\n> >\n> > I think that since the skipped transaction is always empty there is\n> > always one transaction. What we need to consider is when we update\n> > origin_lsn and origin_timestamp. In non-prepared transaction cases,\n> > the only option is when updating the catalog.\n> >\n>\n> Your last sentence is not completely clear to me but it seems you\n> agree that we can use one transaction instead of two to skip the\n> changes, perform a catalog update, and update origin_lsn/timestamp.\n\nYes.\n\n>\n> > >\n> > > > In prepared transaction cases, I think that when handling a prepare\n> > > > message, we need to commit the transaction to update the catalog,\n> > > > instead of preparing it. And at the commit prepared and rollback\n> > > > prepared time, we skip it since there is not the prepared transaction\n> > > > on the subscriber.\n> > > >\n> > >\n> > > Can't we think of just allowing prepare in this case and updating the\n> > > skip_xid only at commit time? 
I see that in this case, we would be\n> > > doing prepare for a transaction that has no changes but as such cases\n> > > won't be common, isn't that acceptable?\n> >\n> > In this case, we will end up committing both the prepared (empty)\n> > transaction and the transaction that updates the catalog, right?\n> >\n>\n> Can't we do this catalog update before committing the prepared\n> transaction? If so, both in prepared and non-prepared cases, our\n> implementation could be the same and we have a reason to accomplish\n> the catalog update in the same transaction for which we skipped the\n> changes.\n\nBut in case of a crash between these two transactions, given that\nskip_xid is already cleared how do we know the prepared transaction\nthat was supposed to be skipped?\n\n>\n> > If\n> > so, since these are separate transactions it can be a problem in case\n> > of a crash between these two commits.\n> >\n> > >\n> > > > Currently, handling rollback prepared already\n> > > > behaves so; it first checks whether we have prepared the transaction\n> > > > or not and skip it if haven’t. So I think we need to do that also for\n> > > > commit prepared case. With that, this requires protocol changes so\n> > > > that the subscriber can get prepare-lsn and prepare-time when handling\n> > > > commit prepared.\n> > > >\n> > > > So I’m writing a separate patch to add prepare-lsn and timestamp to\n> > > > commit_prepared message, which will be a building block for skipping\n> > > > prepared transactions. 
Actually, I think it’s beneficial even today;\n> > > > we can skip preparing the transaction if it’s an empty transaction.\n> > > > Although the comment it’s not a common case, I think that it could\n> > > > happen quite often in some cases:\n> > > >\n> > > > * XXX, We can optimize such that at commit prepared time, we first check\n> > > > * whether we have prepared the transaction or not but that doesn't seem\n> > > > * worthwhile because such cases shouldn't be common.\n> > > > */\n> > > >\n> > > > For example, if the publisher has multiple subscriptions and there are\n> > > > many prepared transactions that modify the particular table subscribed\n> > > > by one publisher, many empty transactions are replicated to other\n> > > > subscribers.\n> > > >\n> > >\n> > > I think this is not clear to me. Why would one have multiple\n> > > subscriptions for the same publication? I thought it is possible when\n> > > say some publisher doesn't publish any data of prepared transaction\n> > > say because the corresponding action is not published or something\n> > > like that. I don't deny that someday we want to optimize this case but\n> > > it might be better if we don't need to do it along with this patch.\n> >\n> > I imagined that the publisher has two publications (say pub-A and\n> > pub-B) that publishes a diferent set of relations in the database and\n> > there are two subscribers that are subscribing to either one\n> > publication (e.g, subscriber-A subscribes to pub-A and subscriber-B\n> > subscribes to pub-B). If many prepared transactions happen on the\n> > publisher and these transactions modify only relations published by\n> > pub-A, both subscriber-A and subscriber-B would prepare the same\n> > number of transactions but all of them in subscriber-B is empty.\n> >\n>\n> Okay, I understand those cases but note always checking if the\n> prepared xact exists during commit prepared has a cost and that is why\n> we avoided it at the first place. 
There is a separate effort in\n> progress [1] where we want to avoid sending empty transactions at the\n> first place. So, it is better to avoid this cost via that effort\n> rather than adding additional cost at commit of each prepared\n> transaction. OTOH, if there are other strong reasons to do it then we\n> can probably consider it.\n>\n\nThank you for the information. Agreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 8 Dec 2021 16:05:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 12:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > Can't we think of just allowing prepare in this case and updating the\n> > > > skip_xid only at commit time? I see that in this case, we would be\n> > > > doing prepare for a transaction that has no changes but as such cases\n> > > > won't be common, isn't that acceptable?\n> > >\n> > > In this case, we will end up committing both the prepared (empty)\n> > > transaction and the transaction that updates the catalog, right?\n> > >\n> >\n> > Can't we do this catalog update before committing the prepared\n> > transaction? If so, both in prepared and non-prepared cases, our\n> > implementation could be the same and we have a reason to accomplish\n> > the catalog update in the same transaction for which we skipped the\n> > changes.\n>\n> But in case of a crash between these two transactions, given that\n> skip_xid is already cleared how do we know the prepared transaction\n> that was supposed to be skipped?\n>\n\nI was thinking of doing it as one transaction at the time of\ncommit_prepare. Say, in function apply_handle_commit_prepared(), if we\ncheck whether the skip_xid is the same as prepare_data.xid then update\nthe catalog and set origin_lsn/timestamp in the same transaction. Why\ndo we need two transactions for it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Dec 2021 14:24:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 5:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 12:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > Can't we think of just allowing prepare in this case and updating the\n> > > > > skip_xid only at commit time? I see that in this case, we would be\n> > > > > doing prepare for a transaction that has no changes but as such cases\n> > > > > won't be common, isn't that acceptable?\n> > > >\n> > > > In this case, we will end up committing both the prepared (empty)\n> > > > transaction and the transaction that updates the catalog, right?\n> > > >\n> > >\n> > > Can't we do this catalog update before committing the prepared\n> > > transaction? If so, both in prepared and non-prepared cases, our\n> > > implementation could be the same and we have a reason to accomplish\n> > > the catalog update in the same transaction for which we skipped the\n> > > changes.\n> >\n> > But in case of a crash between these two transactions, given that\n> > skip_xid is already cleared how do we know the prepared transaction\n> > that was supposed to be skipped?\n> >\n>\n> I was thinking of doing it as one transaction at the time of\n> commit_prepare. Say, in function apply_handle_commit_prepared(), if we\n> check whether the skip_xid is the same as prepare_data.xid then update\n> the catalog and set origin_lsn/timestamp in the same transaction. Why\n> do we need two transactions for it?\n\nI meant the two transactions are the prepared transaction and the\ntransaction that updates the catalog. If I understand your idea\ncorrectly, in apply_handle_commit_prepared(), we update the catalog\nand set origin_lsn/timestamp. These are done in the same transaction.\nThen, we commit the prepared transaction, right? 
If the server crashes\nbetween them, skip_xid is already cleared and logical replication\nstarts from the LSN after COMMIT PREPARED. But the prepared\ntransaction still exists on the subscriber.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 8 Dec 2021 20:06:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 4:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Okay, I understand those cases but note always checking if the\n> > prepared xact exists during commit prepared has a cost and that is why\n> > we avoided it at the first place.\n\nBTW what costs were we concerned about? Looking at LookupGXact(), we\nlook for the 2PC state data on shmem while acquiring TwoPhaseStateLock\nin shared mode. And we check origin_lsn and origin_timestamp of 2PC by\nreading WAL or 2PC state file only if gid matched. On the other hand,\ncommitting the prepared transaction does WAL logging, waits for\nsynchronous replication, and calls post-commit callbacks, and removes\n2PC state file etc. And it requires acquiring TwoPhaseStateLock in\nexclusive mode to remove 2PC state entry. So it looks like always\nchecking if the prepared transaction exists and skipping it if not is\ncheaper than always committing prepared transactions.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 8 Dec 2021 20:22:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 8, 2021 at 4:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 5:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 12:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 8, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > >\n> > > > > > Can't we think of just allowing prepare in this case and updating the\n> > > > > > skip_xid only at commit time? I see that in this case, we would be\n> > > > > > doing prepare for a transaction that has no changes but as such cases\n> > > > > > won't be common, isn't that acceptable?\n> > > > >\n> > > > > In this case, we will end up committing both the prepared (empty)\n> > > > > transaction and the transaction that updates the catalog, right?\n> > > > >\n> > > >\n> > > > Can't we do this catalog update before committing the prepared\n> > > > transaction? If so, both in prepared and non-prepared cases, our\n> > > > implementation could be the same and we have a reason to accomplish\n> > > > the catalog update in the same transaction for which we skipped the\n> > > > changes.\n> > >\n> > > But in case of a crash between these two transactions, given that\n> > > skip_xid is already cleared how do we know the prepared transaction\n> > > that was supposed to be skipped?\n> > >\n> >\n> > I was thinking of doing it as one transaction at the time of\n> > commit_prepare. Say, in function apply_handle_commit_prepared(), if we\n> > check whether the skip_xid is the same as prepare_data.xid then update\n> > the catalog and set origin_lsn/timestamp in the same transaction. Why\n> > do we need two transactions for it?\n>\n> I meant the two transactions are the prepared transaction and the\n> transaction that updates the catalog. 
If I understand your idea\n> correctly, in apply_handle_commit_prepared(), we update the catalog\n> and set origin_lsn/timestamp. These are done in the same transaction.\n> Then, we commit the prepared transaction, right?\n>\n\nI am thinking that we can start a transaction, update the catalog,\ncommit that transaction. Then start a new one to update\norigin_lsn/timestamp, finish prepared, and commit it. Now, if it\ncrashes after the first transaction, only commit prepared will be\nresent again and this time we don't need to update the catalog as that\nentry would be already cleared.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Dec 2021 08:17:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 8, 2021 at 4:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 8, 2021 at 5:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 8, 2021 at 12:36 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Dec 8, 2021 at 3:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Dec 8, 2021 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > >\n> > > > > > > Can't we think of just allowing prepare in this case and updating the\n> > > > > > > skip_xid only at commit time? I see that in this case, we would be\n> > > > > > > doing prepare for a transaction that has no changes but as such cases\n> > > > > > > won't be common, isn't that acceptable?\n> > > > > >\n> > > > > > In this case, we will end up committing both the prepared (empty)\n> > > > > > transaction and the transaction that updates the catalog, right?\n> > > > > >\n> > > > >\n> > > > > Can't we do this catalog update before committing the prepared\n> > > > > transaction? If so, both in prepared and non-prepared cases, our\n> > > > > implementation could be the same and we have a reason to accomplish\n> > > > > the catalog update in the same transaction for which we skipped the\n> > > > > changes.\n> > > >\n> > > > But in case of a crash between these two transactions, given that\n> > > > skip_xid is already cleared how do we know the prepared transaction\n> > > > that was supposed to be skipped?\n> > > >\n> > >\n> > > I was thinking of doing it as one transaction at the time of\n> > > commit_prepare. Say, in function apply_handle_commit_prepared(), if we\n> > > check whether the skip_xid is the same as prepare_data.xid then update\n> > > the catalog and set origin_lsn/timestamp in the same transaction. 
Why\n> > > do we need two transactions for it?\n> >\n> > I meant the two transactions are the prepared transaction and the\n> > transaction that updates the catalog. If I understand your idea\n> > correctly, in apply_handle_commit_prepared(), we update the catalog\n> > and set origin_lsn/timestamp. These are done in the same transaction.\n> > Then, we commit the prepared transaction, right?\n> >\n>\n> I am thinking that we can start a transaction, update the catalog,\n> commit that transaction. Then start a new one to update\n> origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> crashes after the first transaction, only commit prepared will be\n> resent again and this time we don't need to update the catalog as that\n> entry would be already cleared.\n\nSounds good. In the crash case, it should be fine since we will just\ncommit an empty transaction. The same is true for the case where\nskip_xid has been changed after skipping and preparing the transaction\nand before handling commit_prepared.\n\nRegarding the case where the user specifies the XID of the transaction\nafter it is prepared on the subscriber (i.e., the transaction is not\nempty), we won’t skip committing the prepared transaction. But I think\nthat we don't need to support skipping an already-prepared transaction\nsince such a transaction doesn't conflict with anything regardless of\nhaving changed or not.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 9 Dec 2021 17:53:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I am thinking that we can start a transaction, update the catalog,\n> > commit that transaction. Then start a new one to update\n> > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > crashes after the first transaction, only commit prepared will be\n> > resent again and this time we don't need to update the catalog as that\n> > entry would be already cleared.\n>\n> Sounds good. In the crash case, it should be fine since we will just\n> commit an empty transaction. The same is true for the case where\n> skip_xid has been changed after skipping and preparing the transaction\n> and before handling commit_prepared.\n>\n> Regarding the case where the user specifies XID of the transaction\n> after it is prepared on the subscriber (i.g., the transaction is not\n> empty), we won’t skip committing the prepared transaction. But I think\n> that we don't need to support skipping already-prepared transaction\n> since such transaction doesn't conflict with anything regardless of\n> having changed or not.\n>\n\nYeah, this makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 9 Dec 2021 14:46:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 9, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I am thinking that we can start a transaction, update the catalog,\n> > > commit that transaction. Then start a new one to update\n> > > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > > crashes after the first transaction, only commit prepared will be\n> > > resent again and this time we don't need to update the catalog as that\n> > > entry would be already cleared.\n> >\n> > Sounds good. In the crash case, it should be fine since we will just\n> > commit an empty transaction. The same is true for the case where\n> > skip_xid has been changed after skipping and preparing the transaction\n> > and before handling commit_prepared.\n> >\n> > Regarding the case where the user specifies XID of the transaction\n> > after it is prepared on the subscriber (i.g., the transaction is not\n> > empty), we won’t skip committing the prepared transaction. But I think\n> > that we don't need to support skipping already-prepared transaction\n> > since such transaction doesn't conflict with anything regardless of\n> > having changed or not.\n> >\n>\n> Yeah, this makes sense to me.\n>\n\nI've attached an updated patch. The new syntax is like \"ALTER\nSUBSCRIPTION testsub SKIP (xid = '123')\".\n\nI’ve been thinking we can do something as a safeguard for the case where\nthe user specified the wrong xid. For example, can we somehow use the\nstats in pg_stat_subscription_workers? An idea is that the logical\nreplication worker fetches the xid from the stats when reading the\nsubscription and skips the transaction if the xid matches\nsubskipxid. That is, the worker checks the error reported by the\nworker previously working on the same subscription. 
The error might\nnot be a conflict error (e.g., connection error etc.) or might have\nbeen cleared by the reset function. But given that the worker is in an\nerror loop, the worker can eventually get the xid in question. We can\nprevent an unrelated transaction from being skipped unexpectedly. It\nseems not a stable solution though. Or it might be enough to warn\nusers when they specify an XID that doesn’t match last_error_xid.\nAnyway, I think it’s better to have more discussion on this. Any\nideas?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 10 Dec 2021 14:44:16 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 10, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I am thinking that we can start a transaction, update the catalog,\n> > > > commit that transaction. Then start a new one to update\n> > > > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > > > crashes after the first transaction, only commit prepared will be\n> > > > resent again and this time we don't need to update the catalog as that\n> > > > entry would be already cleared.\n> > >\n> > > Sounds good. In the crash case, it should be fine since we will just\n> > > commit an empty transaction. The same is true for the case where\n> > > skip_xid has been changed after skipping and preparing the transaction\n> > > and before handling commit_prepared.\n> > >\n> > > Regarding the case where the user specifies XID of the transaction\n> > > after it is prepared on the subscriber (i.g., the transaction is not\n> > > empty), we won’t skip committing the prepared transaction. But I think\n> > > that we don't need to support skipping already-prepared transaction\n> > > since such transaction doesn't conflict with anything regardless of\n> > > having changed or not.\n> > >\n> >\n> > Yeah, this makes sense to me.\n> >\n>\n> I've attached an updated patch. The new syntax is like \"ALTER\n> SUBSCRIPTION testsub SKIP (xid = '123')\".\n>\n> I’ve been thinking we can do something safeguard for the case where\n> the user specified the wrong xid. For example, can we somewhat use the\n> stats in pg_stat_subscription_workers? An idea is that logical\n> replication worker fetches the xid from the stats when reading the\n> subscription and skips the transaction if the xid matches to\n> subskipxid. 
That is, the worker checks the error reported by the\n> worker previously working on the same subscription. The error could\n> not be a conflict error (e.g., connection error etc.) or might have\n> been cleared by the reset function, But given the worker is in an\n> error loop, the worker can eventually get xid in question. We can\n> prevent an unrelated transaction from being skipped unexpectedly. It\n> seems not a stable solution though. Or it might be enough to warn\n> users when they specified an XID that doesn’t match to last_error_xid.\n>\n\nI think the idea is good but because it is not predictable as pointed\nby you so we might want to just issue a LOG/WARNING. If not already\nmentioned, then please do mention in docs the possibility of skipping\nnon-errored transactions.\n\nFew comments/questions:\n=====================\n1.\n+ Specifies the ID of the transaction whose application is to\nbe skipped\n+ by the logical replication worker. Setting -1 means to reset the\n+ transaction ID.\n\nCan we change it to something like: \"Specifies the ID of the\ntransaction whose changes are to be skipped by the logical replication\nworker. ....\"\n\n2.\n@@ -104,6 +104,16 @@ GetSubscription(Oid subid, bool missing_ok)\n Assert(!isnull);\n sub->publications = textarray_to_stringlist(DatumGetArrayTypeP(datum));\n\n+ /* Get skip XID */\n+ datum = SysCacheGetAttr(SUBSCRIPTIONOID,\n+ tup,\n+ Anum_pg_subscription_subskipxid,\n+ &isnull);\n+ if (!isnull)\n+ sub->skipxid = DatumGetTransactionId(datum);\n+ else\n+ sub->skipxid = InvalidTransactionId;\n\nCan't we assign it as we do for other fixed columns like subdbid,\nsubowner, etc.?\n\n3.\n+ * Also, we don't skip receiving the changes in streaming cases,\nsince we decide\n+ * whether or not to skip applying the changes when starting to apply changes.\n\nBut why so? Can't we even skip streaming (and writing to file all such\nmessages)? 
If we can do this then we can avoid even collecting all\nmessages in a file.\n\n4.\n+ * Also, one might think that we can skip preparing the skipped transaction.\n+ * But if we do that, PREPARE WAL record won’t be sent to its physical\n+ * standbys, resulting in that users won’t be able to find the prepared\n+ * transaction entry after a fail-over.\n+ *\n..\n+ */\n+ if (skipping_changes)\n+ stop_skipping_changes(false);\n\nWhy do we need such a Prepare's entry either at current subscriber or\non its physical standby? I think it is to allow Commit-prepared. If\nso, how about if we skip even commit prepared as well? Even on\nphysical standby, we would be having the value of skip_xid which can\nhelp us to skip there as well after failover.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 11 Dec 2021 11:59:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Dec 11, 2021 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 10, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > I am thinking that we can start a transaction, update the catalog,\n> > > > > commit that transaction. Then start a new one to update\n> > > > > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > > > > crashes after the first transaction, only commit prepared will be\n> > > > > resent again and this time we don't need to update the catalog as that\n> > > > > entry would be already cleared.\n> > > >\n> > > > Sounds good. In the crash case, it should be fine since we will just\n> > > > commit an empty transaction. The same is true for the case where\n> > > > skip_xid has been changed after skipping and preparing the transaction\n> > > > and before handling commit_prepared.\n> > > >\n> > > > Regarding the case where the user specifies XID of the transaction\n> > > > after it is prepared on the subscriber (i.g., the transaction is not\n> > > > empty), we won’t skip committing the prepared transaction. But I think\n> > > > that we don't need to support skipping already-prepared transaction\n> > > > since such transaction doesn't conflict with anything regardless of\n> > > > having changed or not.\n> > > >\n> > >\n> > > Yeah, this makes sense to me.\n> > >\n> >\n> > I've attached an updated patch. The new syntax is like \"ALTER\n> > SUBSCRIPTION testsub SKIP (xid = '123')\".\n> >\n> > I’ve been thinking we can do something safeguard for the case where\n> > the user specified the wrong xid. For example, can we somewhat use the\n> > stats in pg_stat_subscription_workers? 
An idea is that logical\n> > replication worker fetches the xid from the stats when reading the\n> > subscription and skips the transaction if the xid matches to\n> > subskipxid. That is, the worker checks the error reported by the\n> > worker previously working on the same subscription. The error could\n> > not be a conflict error (e.g., connection error etc.) or might have\n> > been cleared by the reset function, But given the worker is in an\n> > error loop, the worker can eventually get xid in question. We can\n> > prevent an unrelated transaction from being skipped unexpectedly. It\n> > seems not a stable solution though. Or it might be enough to warn\n> > users when they specified an XID that doesn’t match to last_error_xid.\n> >\n>\n> I think the idea is good but because it is not predictable as pointed\n> by you so we might want to just issue a LOG/WARNING. If not already\n> mentioned, then please do mention in docs the possibility of skipping\n> non-errored transactions.\n>\n> Few comments/questions:\n> =====================\n> 1.\n> + Specifies the ID of the transaction whose application is to\n> be skipped\n> + by the logical replication worker. Setting -1 means to reset the\n> + transaction ID.\n>\n> Can we change it to something like: \"Specifies the ID of the\n> transaction whose changes are to be skipped by the logical replication\n> worker. 
....\"\n>\n\nAgreed.\n\n> 2.\n> @@ -104,6 +104,16 @@ GetSubscription(Oid subid, bool missing_ok)\n> Assert(!isnull);\n> sub->publications = textarray_to_stringlist(DatumGetArrayTypeP(datum));\n>\n> + /* Get skip XID */\n> + datum = SysCacheGetAttr(SUBSCRIPTIONOID,\n> + tup,\n> + Anum_pg_subscription_subskipxid,\n> + &isnull);\n> + if (!isnull)\n> + sub->skipxid = DatumGetTransactionId(datum);\n> + else\n> + sub->skipxid = InvalidTransactionId;\n>\n> Can't we assign it as we do for other fixed columns like subdbid,\n> subowner, etc.?\n>\n\nYeah, I think we can use InvalidTransactionId as the initial value\ninstead of setting NULL. Then, we can change this code.\n\n> 3.\n> + * Also, we don't skip receiving the changes in streaming cases,\n> since we decide\n> + * whether or not to skip applying the changes when starting to apply changes.\n>\n> But why so? Can't we even skip streaming (and writing to file all such\n> messages)? If we can do this then we can avoid even collecting all\n> messages in a file.\n\nIIUC in streaming cases, a transaction can be sent to the subscriber\nwhile being split into multiple chunks of changes. In the meanwhile,\nskip_xid can be changed. If the user changed or cleared skip_xid after\nthe subscriber skips some streamed changes, the subscriber won't be able\nto have complete changes of the transaction.\n\n>\n> 4.\n> + * Also, one might think that we can skip preparing the skipped transaction.\n> + * But if we do that, PREPARE WAL record won’t be sent to its physical\n> + * standbys, resulting in that users won’t be able to find the prepared\n> + * transaction entry after a fail-over.\n> + *\n> ..\n> + */\n> + if (skipping_changes)\n> + stop_skipping_changes(false);\n>\n> Why do we need such a Prepare's entry either at current subscriber or\n> on its physical standby? I think it is to allow Commit-prepared. If\n> so, how about if we skip even commit prepared as well? 
Even on\n> physical standby, we would be having the value of skip_xid which can\n> help us to skip there as well after failover.\n\nIt's true that skip_xid would be set also on physical standby. When it\ncomes to preparing the skipped transaction on the current subscriber,\nif we want to skip commit-prepared I think we need protocol changes in\norder for subscribers to know prepare_lsn and preppare_timestampso\nthat it can lookup the prepared transaction when doing\ncommit-prepared. I proposed this idea before. This change would be\nbenefical as of now since the publisher sends even empty transactions.\nBut considering the proposed patch[1] that makes the puslisher not\nsend empty transaction, this protocol change would be an optimization\nonly for this feature.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 13 Dec 2021 11:58:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 10, 2021 at 4:44 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch. The new syntax is like \"ALTER\n> SUBSCRIPTION testsub SKIP (xid = '123')\".\n>\n\nI have some review comments:\n\n(1) Patch comment - some suggested wording improvements\n\nBEFORE:\nIf incoming change violates any constraint, logical replication stops\nAFTER:\nIf an incoming change violates any constraint, logical replication stops\n\nBEFORE:\nThe user can specify XID by ALTER SUBSCRIPTION ... SKIP (xid = XXX),\nupdating pg_subscription.subskipxid field, telling the apply worker to\nskip the transaction.\nAFTER:\nThe user can specify the XID of the transaction to skip using\nALTER SUBSCRIPTION ... SKIP (xid = XXX), updating the pg_subscription.subskipxid\nfield, telling the apply worker to skip the transaction.\n\nsrc/sgml/logical-replication.sgml\n(2) Some suggested wording improvements\n\n(i) Missing \"the\"\nBEFORE:\n+ the existing data. When a conflict produce an error, it is shown in\nAFTER:\n+ the existing data. 
When a conflict produce an error, it is shown in the\n\n(ii) Suggest starting a new sentence\nBEFORE:\n+ and it is also shown in subscriber's server log as follows:\nAFTER:\n+ The error is also shown in the subscriber's server log as follows:\n\n\n(iii) Context message should say \"at ...\" instead of \"with commit\ntimestamp ...\", to match the actual output from the current code\nBEFORE:\n+CONTEXT: processing remote data during \"INSERT\" for replication\ntarget relation \"public.test\" in transaction 716 with commit timestamp\n2021-09-29 15:52:45.165754+00\nAFTER:\n+CONTEXT: processing remote data during \"INSERT\" for replication\ntarget relation \"public.test\" in transaction 716 at 2021-09-29\n15:52:45.165754+00\n\n\n(iv) The following paragraph seems out of place, with the information\npresented in the wrong order:\n\n+ <para>\n+ In this case, you need to consider changing the data on the\nsubscriber so that it\n+ doesn't conflict with incoming changes, or dropping the\nconflicting constraint or\n+ unique index, or writing a trigger on the subscriber to suppress or redirect\n+ conflicting incoming changes, or as a last resort, by skipping the\nwhole transaction.\n+ They skip the whole transaction, including changes that may not violate any\n+ constraint. They may easily make the subscriber inconsistent, especially if\n+ a user specifies the wrong transaction ID or the position of origin.\n+ </para>\n\n\nHow about rearranging it as follows:\n\n+ <para>\n+ These methods skip the whole transaction, including changes that\nmay not violate\n+ any constraint. 
They may easily make the subscriber inconsistent,\nespecially if\n+ a user specifies the wrong transaction ID or the position of\norigin, and should\n+ be used as a last resort.\n+ Alternatively, you might consider changing the data on the\nsubscriber so that it\n+ doesn't conflict with incoming changes, or dropping the\nconflicting constraint or\n+ unique index, or writing a trigger on the subscriber to suppress or redirect\n+ conflicting incoming changes.\n+ </para>\n\n\ndoc/src/sgml/ref/alter_subscription.sgml\n(3)\n\n(i) Doc needs clarification\nBEFORE:\n+ the whole transaction. The logical replication worker skips all data\nAFTER:\n+ the whole transaction. For the latter case, the logical\nreplication worker skips all data\n\n\n(ii) \"Setting -1 means to reset the transaction ID\"\n\nShouldn't it be explained what resetting actually does and when it can\nbe, or is needed to be, done? Isn't it automatically reset?\nI notice that negative values (other than -1) seem to be regarded as\nvalid - is that right?\nAlso, what happens if this option is set multiple times? Does it just\noverride and use the latest setting? (other option handling errors out\nwith errorConflictingDefElem()).\ne.g. alter subscription sub skip (xid = 721, xid = 722);\n\n\nsrc/backend/replication/logical/worker.c\n(4) Shouldn't the \"done skipping logical replication transaction\"\nmessage also include the skipped XID value at the end?\n\n\nsrc/test/subscription/t/027_skip_xact.pl\n(5) Some suggested wording improvements\n\n(i)\nBEFORE:\n+# Test skipping the transaction. This function must be called after the caller\n+# inserting data that conflict with the subscriber. After waiting for the\n+# subscription worker stats are updated, we skip the transaction in question\n+# by ALTER SUBSCRIPTION ... SKIP. Then, check if logical replication\ncan continue\n+# working by inserting $nonconflict_data on the publisher.\nAFTER:\n+# Test skipping the transaction. 
This function must be called after the caller\n+# inserts data that conflicts with the subscriber. After waiting for the\n+# subscription worker stats to be updated, we skip the transaction in question\n+# by ALTER SUBSCRIPTION ... SKIP. Then, check if logical replication\ncan continue\n+# working by inserting $nonconflict_data on the publisher.\n\n(ii)\nBEFORE:\n+# will conflict with the data replicated from publisher later.\nAFTER:\n+# will conflict with the data replicated later from the publisher.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 13 Dec 2021 14:12:10 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 8:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Dec 11, 2021 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 3.\n> > + * Also, we don't skip receiving the changes in streaming cases,\n> > since we decide\n> > + * whether or not to skip applying the changes when starting to apply changes.\n> >\n> > But why so? Can't we even skip streaming (and writing to file all such\n> > messages)? If we can do this then we can avoid even collecting all\n> > messages in a file.\n>\n> IIUC in streaming cases, a transaction can be sent to the subscriber\n> while splitting into multiple chunks of changes. In the meanwhile,\n> skip_xid can be changed. If the user changed or cleared skip_xid after\n> the subscriber skips some streamed changes, the subscriber won't able\n> to have complete changes of the transaction.\n>\n\nYeah, I think if we want we can handle this by writing into the stream\nxid file whether the changes need to be skipped and then the\nconsecutive streams can check that in the file or may be in some way\ndon't allow skip_xid to be changed in worker if it is already skipping\nsome xact. If we don't want to do anything for this then it is better\nto at least reflect this reasoning in the comments.\n\n> >\n> > 4.\n> > + * Also, one might think that we can skip preparing the skipped transaction.\n> > + * But if we do that, PREPARE WAL record won’t be sent to its physical\n> > + * standbys, resulting in that users won’t be able to find the prepared\n> > + * transaction entry after a fail-over.\n> > + *\n> > ..\n> > + */\n> > + if (skipping_changes)\n> > + stop_skipping_changes(false);\n> >\n> > Why do we need such a Prepare's entry either at current subscriber or\n> > on its physical standby? I think it is to allow Commit-prepared. If\n> > so, how about if we skip even commit prepared as well? 
Even on\n> > physical standby, we would be having the value of skip_xid which can\n> > help us to skip there as well after failover.\n>\n> It's true that skip_xid would be set also on physical standby. When it\n> comes to preparing the skipped transaction on the current subscriber,\n> if we want to skip commit-prepared I think we need protocol changes in\n> order for subscribers to know prepare_lsn and preppare_timestampso\n> that it can lookup the prepared transaction when doing\n> commit-prepared. I proposed this idea before. This change would be\n> benefical as of now since the publisher sends even empty transactions.\n> But considering the proposed patch[1] that makes the puslisher not\n> send empty transaction, this protocol change would be an optimization\n> only for this feature.\n>\n\nI was thinking to compare the xid received as part of the\ncommit_prepared message with the value of skip_xid to skip the\ncommit_prepared but I guess the user would change it between prepare\nand commit prepare and then we won't be able to detect it, right? I\nthink we can handle this and the streaming case if we disallow users\nto change the value of skip_xid when we are already skipping changes\nor don't let the new skip_xid to reflect in the apply worker if we are\nalready skipping some other transaction. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Dec 2021 09:34:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 1:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 13, 2021 at 8:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Dec 11, 2021 at 3:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 3.\n> > > + * Also, we don't skip receiving the changes in streaming cases,\n> > > since we decide\n> > > + * whether or not to skip applying the changes when starting to apply changes.\n> > >\n> > > But why so? Can't we even skip streaming (and writing to file all such\n> > > messages)? If we can do this then we can avoid even collecting all\n> > > messages in a file.\n> >\n> > IIUC in streaming cases, a transaction can be sent to the subscriber\n> > while splitting into multiple chunks of changes. In the meanwhile,\n> > skip_xid can be changed. If the user changed or cleared skip_xid after\n> > the subscriber skips some streamed changes, the subscriber won't able\n> > to have complete changes of the transaction.\n> >\n>\n> Yeah, I think if we want we can handle this by writing into the stream\n> xid file whether the changes need to be skipped and then the\n> consecutive streams can check that in the file or may be in some way\n> don't allow skip_xid to be changed in worker if it is already skipping\n> some xact. If we don't want to do anything for this then it is better\n> to at least reflect this reasoning in the comments.\n\nYes. 
Given that we still need to apply messages other than\ndata-modification messages, we need to skip writing only these changes\nto the stream file.\n\n>\n> > >\n> > > 4.\n> > > + * Also, one might think that we can skip preparing the skipped transaction.\n> > > + * But if we do that, PREPARE WAL record won’t be sent to its physical\n> > > + * standbys, resulting in that users won’t be able to find the prepared\n> > > + * transaction entry after a fail-over.\n> > > + *\n> > > ..\n> > > + */\n> > > + if (skipping_changes)\n> > > + stop_skipping_changes(false);\n> > >\n> > > Why do we need such a Prepare's entry either at current subscriber or\n> > > on its physical standby? I think it is to allow Commit-prepared. If\n> > > so, how about if we skip even commit prepared as well? Even on\n> > > physical standby, we would be having the value of skip_xid which can\n> > > help us to skip there as well after failover.\n> >\n> > It's true that skip_xid would be set also on physical standby. When it\n> > comes to preparing the skipped transaction on the current subscriber,\n> > if we want to skip commit-prepared I think we need protocol changes in\n> > order for subscribers to know prepare_lsn and preppare_timestampso\n> > that it can lookup the prepared transaction when doing\n> > commit-prepared. I proposed this idea before. This change would be\n> > benefical as of now since the publisher sends even empty transactions.\n> > But considering the proposed patch[1] that makes the puslisher not\n> > send empty transaction, this protocol change would be an optimization\n> > only for this feature.\n> >\n>\n> I was thinking to compare the xid received as part of the\n> commit_prepared message with the value of skip_xid to skip the\n> commit_prepared but I guess the user would change it between prepare\n> and commit prepare and then we won't be able to detect it, right? 
I\n> think we can handle this and the streaming case if we disallow users\n> to change the value of skip_xid when we are already skipping changes\n> or don't let the new skip_xid to reflect in the apply worker if we are\n> already skipping some other transaction. What do you think?\n\nIn streaming cases, we don’t know when stream-commit or stream-abort\ncomes and another conflict could occur on the subscription in the\nmeanwhile. But given that (we expect) this feature is used after the\napply worker enters into an error loop, this is unlikely to happen in\npractice unless the user sets the wrong XID. Similarly, in 2PC cases,\nwe don’t know when commit-prepared or rollback-prepared comes and\nanother conflict could occur in the meanwhile. But this could occur in\npractice even if the user specified the correct XID. Therefore, if we\ndisallow changing skip_xid until the subscriber receives\ncommit-prepared or rollback-prepared, we cannot skip the second\ntransaction that conflicts with data on the subscriber.\n\nFrom the application perspective, which behavior is preferable between\nskipping preparing a transaction and preparing an empty transaction,\nin the first place? From the resource consumption etc., skipping\npreparing transactions seems better. On the other hand, if we skipped\npreparing the transaction, the application would not be able to find\nthe prepared transaction after a fail-over to the subscriber.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 13 Dec 2021 22:24:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Dec 13, 2021 at 6:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Dec 13, 2021 at 1:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 13, 2021 at 8:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > >\n> > > > 4.\n> > > > + * Also, one might think that we can skip preparing the skipped transaction.\n> > > > + * But if we do that, PREPARE WAL record won’t be sent to its physical\n> > > > + * standbys, resulting in that users won’t be able to find the prepared\n> > > > + * transaction entry after a fail-over.\n> > > > + *\n> > > > ..\n> > > > + */\n> > > > + if (skipping_changes)\n> > > > + stop_skipping_changes(false);\n> > > >\n> > > > Why do we need such a Prepare's entry either at current subscriber or\n> > > > on its physical standby? I think it is to allow Commit-prepared. If\n> > > > so, how about if we skip even commit prepared as well? Even on\n> > > > physical standby, we would be having the value of skip_xid which can\n> > > > help us to skip there as well after failover.\n> > >\n> > > It's true that skip_xid would be set also on physical standby. When it\n> > > comes to preparing the skipped transaction on the current subscriber,\n> > > if we want to skip commit-prepared I think we need protocol changes in\n> > > order for subscribers to know prepare_lsn and preppare_timestampso\n> > > that it can lookup the prepared transaction when doing\n> > > commit-prepared. I proposed this idea before. 
This change would be\n> > > benefical as of now since the publisher sends even empty transactions.\n> > > But considering the proposed patch[1] that makes the puslisher not\n> > > send empty transaction, this protocol change would be an optimization\n> > > only for this feature.\n> > >\n> >\n> > I was thinking to compare the xid received as part of the\n> > commit_prepared message with the value of skip_xid to skip the\n> > commit_prepared but I guess the user would change it between prepare\n> > and commit prepare and then we won't be able to detect it, right? I\n> > think we can handle this and the streaming case if we disallow users\n> > to change the value of skip_xid when we are already skipping changes\n> > or don't let the new skip_xid to reflect in the apply worker if we are\n> > already skipping some other transaction. What do you think?\n>\n> In streaming cases, we don’t know when stream-commit or stream-abort\n> comes and another conflict could occur on the subscription in the\n> meanwhile. But given that (we expect) this feature is used after the\n> apply worker enters into an error loop, this is unlikely to happen in\n> practice unless the user sets the wrong XID. Similarly, in 2PC cases,\n> we don’t know when commit-prepared or rollback-prepared comes and\n> another conflict could occur in the meanwhile. But this could occur in\n> practice even if the user specified the correct XID. Therefore, if we\n> disallow to change skip_xid until the subscriber receives\n> commit-prepared or rollback-prepared, we cannot skip the second\n> transaction that conflicts with data on the subscriber.\n>\n\nI agree with this theory. Can we reflect this in comments so that in\nthe future we know why we didn't pursue this direction?\n\n> From the application perspective, which behavior is preferable between\n> skipping preparing a transaction and preparing an empty transaction,\n> in the first place? 
From the resource consumption etc., skipping\n> preparing transactions seems better. On the other hand, if we skipped\n> preparing the transaction, the application would not be able to find\n> the prepared transaction after a fail-over to the subscriber.\n>\n\nI am not sure how much it matters that such prepares are not present\nbecause we wanted some way to skip the corresponding commit prepared\nas well. I think your previous point is a good enough reason as to why\nwe should allow such prepares.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Dec 2021 08:19:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 10, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I am thinking that we can start a transaction, update the catalog,\n> > > > commit that transaction. Then start a new one to update\n> > > > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > > > crashes after the first transaction, only commit prepared will be\n> > > > resent again and this time we don't need to update the catalog as that\n> > > > entry would be already cleared.\n> > >\n> > > Sounds good. In the crash case, it should be fine since we will just\n> > > commit an empty transaction. The same is true for the case where\n> > > skip_xid has been changed after skipping and preparing the transaction\n> > > and before handling commit_prepared.\n> > >\n> > > Regarding the case where the user specifies XID of the transaction\n> > > after it is prepared on the subscriber (i.g., the transaction is not\n> > > empty), we won’t skip committing the prepared transaction. But I think\n> > > that we don't need to support skipping already-prepared transaction\n> > > since such transaction doesn't conflict with anything regardless of\n> > > having changed or not.\n> > >\n> >\n> > Yeah, this makes sense to me.\n> >\n>\n> I've attached an updated patch. The new syntax is like \"ALTER\n> SUBSCRIPTION testsub SKIP (xid = '123')\".\n>\n> I’ve been thinking we can do something safeguard for the case where\n> the user specified the wrong xid. For example, can we somewhat use the\n> stats in pg_stat_subscription_workers? An idea is that logical\n> replication worker fetches the xid from the stats when reading the\n> subscription and skips the transaction if the xid matches to\n> subskipxid. 
That is, the worker checks the error reported by the\n> worker previously working on the same subscription. The error could\n> not be a conflict error (e.g., connection error etc.) or might have\n> been cleared by the reset function, But given the worker is in an\n> error loop, the worker can eventually get xid in question. We can\n> prevent an unrelated transaction from being skipped unexpectedly. It\n> seems not a stable solution though. Or it might be enough to warn\n> users when they specified an XID that doesn’t match to last_error_xid.\n> Anyway, I think it’s better to have more discussion on this. Any\n> ideas?\n\nWhile the worker is skipping a transaction specified by the user, if\nthe user specifies another skip transaction while that skip is still\nin progress, the new value will be reset by the worker when it clears\nthe skip xid. I felt that once the worker has identified the skip xid\nand is about to skip it, the worker could acquire a lock to prevent\nsuch concurrency issues:\n+static void\n+clear_subscription_skip_xid(void)\n+{\n+ Relation rel;\n+ HeapTuple tup;\n+ bool nulls[Natts_pg_subscription];\n+ bool replaces[Natts_pg_subscription];\n+ Datum values[Natts_pg_subscription];\n+\n+ memset(values, 0, sizeof(values));\n+ memset(nulls, false, sizeof(nulls));\n+ memset(replaces, false, sizeof(replaces));\n+\n+ if (!IsTransactionState())\n+ StartTransactionCommand();\n+\n+ rel = table_open(SubscriptionRelationId, RowExclusiveLock);\n+\n+ /* Fetch the existing tuple. 
*/\n+ tup = SearchSysCacheCopy1(SUBSCRIPTIONOID,\n+\nObjectIdGetDatum(MySubscription->oid));\n+\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"subscription \\\"%s\\\" does not exist\",\nMySubscription->name);\n+\n+ /* Set subskipxid to null */\n+ nulls[Anum_pg_subscription_subskipxid - 1] = true;\n+ replaces[Anum_pg_subscription_subskipxid - 1] = true;\n+\n+ /* Update the system catalog to reset the skip XID */\n+ tup = heap_modify_tuple(tup, RelationGetDescr(rel), values, nulls,\n+ replaces);\n+ CatalogTupleUpdate(rel, &tup->t_self, tup);\n+\n+ heap_freetuple(tup);\n+ table_close(rel, RowExclusiveLock);\n+}\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 14 Dec 2021 09:53:06 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> While the worker is skipping one of the skip transactions specified by\n> the user and immediately if the user specifies another skip\n> transaction while the skipping of the transaction is in progress this\n> new value will be reset by the worker while clearing the skip xid. I\n> felt once the worker has identified the skip xid and is about to skip\n> the xid, the worker can acquire a lock to prevent concurrency issues:\n\nThat's a good point.\nIf only the last_error_xid could be skipped, then this wouldn't be an\nissue, right?\nIf a different xid to skip is specified while the worker is currently\nskipping a transaction, should that even be allowed?\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 14 Dec 2021 16:35:38 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 3, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Skipping a whole transaction by specifying xid would be a good start.\n> Ideally, we'd like to automatically skip only operations within the\n> transaction that fail but it seems not easy to achieve. If we allow\n> specifying operations and/or relations, probably multiple operations\n> or relations need to be specified in some cases. Otherwise, the\n> subscriber cannot continue logical replication if the transaction has\n> multiple operations on different relations that fail. But similar to\n> the idea of specifying multiple xids, we need to note the fact that\n> user wouldn't know of the second operation failure unless the apply\n> worker applies the change. So I'm not sure there are many use cases in\n> practice where users can specify multiple operations and relations in\n> order to skip applies that fail.\n\nI think there would be use cases for specifying the relations or\noperation. For example, if the user finds an issue in inserting into a\nparticular relation, then based on some manual investigation he may\nfind that the table has some constraint that makes it fail on the\nsubscriber side while that constraint does not exist on the publisher\nside, so the user may be okay with skipping the changes for this table\nbut not for other tables. There might also be a few more tables\ndesigned on the same principle that can hit a similar error, so isn't\nit good to provide an option to give the list of all such tables?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Dec 2021 11:40:38 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 13, 2021 at 6:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n> > In streaming cases, we don’t know when stream-commit or stream-abort\n> > comes and another conflict could occur on the subscription in the\n> > meanwhile. But given that (we expect) this feature is used after the\n> > apply worker enters into an error loop, this is unlikely to happen in\n> > practice unless the user sets the wrong XID. Similarly, in 2PC cases,\n> > we don’t know when commit-prepared or rollback-prepared comes and\n> > another conflict could occur in the meanwhile. But this could occur in\n> > practice even if the user specified the correct XID. Therefore, if we\n> > disallow to change skip_xid until the subscriber receives\n> > commit-prepared or rollback-prepared, we cannot skip the second\n> > transaction that conflicts with data on the subscriber.\n> >\n>\n> I agree with this theory. Can we reflect this in comments so that in\n> the future we know why we didn't pursue this direction?\n\nI might be missing something here, but for streaming transactions,\nusers can decide whether they want to skip or not only once we start\napplying, no? I mean, only once we start applying the changes can we\nget some errors, and by that time we must have all the changes for\nthe transaction. So I do not understand the point we are trying to\ndiscuss here.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Dec 2021 13:07:02 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 1:07 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 8:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 13, 2021 at 6:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> > > In streaming cases, we don’t know when stream-commit or stream-abort\n> > > comes and another conflict could occur on the subscription in the\n> > > meanwhile. But given that (we expect) this feature is used after the\n> > > apply worker enters into an error loop, this is unlikely to happen in\n> > > practice unless the user sets the wrong XID. Similarly, in 2PC cases,\n> > > we don’t know when commit-prepared or rollback-prepared comes and\n> > > another conflict could occur in the meanwhile. But this could occur in\n> > > practice even if the user specified the correct XID. Therefore, if we\n> > > disallow to change skip_xid until the subscriber receives\n> > > commit-prepared or rollback-prepared, we cannot skip the second\n> > > transaction that conflicts with data on the subscriber.\n> > >\n> >\n> > I agree with this theory. Can we reflect this in comments so that in\n> > the future we know why we didn't pursue this direction?\n>\n> I might be missing something here, but for streaming, transaction\n> users can decide whether they wants to skip or not only once we start\n> applying no? I mean only once we start applying the changes we can\n> get some errors and by that time we must be having all the changes for\n> the transaction.\n>\n\nThat is right and as per my understanding, the patch is trying to\naccomplish the same.\n\n> So I do not understand the point we are trying to\n> discuss here?\n>\n\nThe point is that whether we can skip the changes while streaming\nitself like when we get the changes and write to a stream file. Now,\nit is possible that streams from multiple transactions can be\ninterleaved and users can change the skip_xid in between. 
It is not\nthat we can't handle this but that would require a more complex design\nand it doesn't seem worth it because we can anyway skip the changes\nwhile applying as you mentioned in the previous paragraph.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Dec 2021 14:35:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 10, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 9, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 9, 2021 at 2:24 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 9, 2021 at 11:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I am thinking that we can start a transaction, update the catalog,\n> > > > commit that transaction. Then start a new one to update\n> > > > origin_lsn/timestamp, finishprepared, and commit it. Now, if it\n> > > > crashes after the first transaction, only commit prepared will be\n> > > > resent again and this time we don't need to update the catalog as that\n> > > > entry would be already cleared.\n> > >\n> > > Sounds good. In the crash case, it should be fine since we will just\n> > > commit an empty transaction. The same is true for the case where\n> > > skip_xid has been changed after skipping and preparing the transaction\n> > > and before handling commit_prepared.\n> > >\n> > > Regarding the case where the user specifies XID of the transaction\n> > > after it is prepared on the subscriber (i.g., the transaction is not\n> > > empty), we won’t skip committing the prepared transaction. But I think\n> > > that we don't need to support skipping already-prepared transaction\n> > > since such transaction doesn't conflict with anything regardless of\n> > > having changed or not.\n> > >\n> >\n> > Yeah, this makes sense to me.\n> >\n>\n> I've attached an updated patch. The new syntax is like \"ALTER\n> SUBSCRIPTION testsub SKIP (xid = '123')\".\n>\n> I’ve been thinking we can do something safeguard for the case where\n> the user specified the wrong xid. For example, can we somewhat use the\n> stats in pg_stat_subscription_workers? An idea is that logical\n> replication worker fetches the xid from the stats when reading the\n> subscription and skips the transaction if the xid matches to\n> subskipxid. 
That is, the worker checks the error reported by the\n> worker previously working on the same subscription. The error could\n> not be a conflict error (e.g., connection error etc.) or might have\n> been cleared by the reset function, But given the worker is in an\n> error loop, the worker can eventually get xid in question. We can\n> prevent an unrelated transaction from being skipped unexpectedly. It\n> seems not a stable solution though. Or it might be enough to warn\n> users when they specified an XID that doesn’t match to last_error_xid.\n> Anyway, I think it’s better to have more discussion on this. Any\n> ideas?\n\nA few comments:\n1) Should we check if a conflicting option is specified, like the others above:\n+ else if (strcmp(defel->defname, \"xid\") == 0)\n+ {\n+ char *xid_str = defGetString(defel);\n+ TransactionId xid;\n+\n+ if (strcmp(xid_str, \"-1\") == 0)\n+ {\n+ /* Setting -1 to xid means to reset it */\n+ xid = InvalidTransactionId;\n+ }\n+ else\n+ {\n\n2) Currently only superusers can set skip xid; we can add this to the\ndocumentation:\n+ if (!superuser())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to set %s\", \"skip_xid\")));\n\n3) There is an extra tab before \"The resolution can be done ...\"; it\ncan be removed.\n+ Skip applying changes of the particular transaction. If incoming data\n+ violates any constraints the logical replication will stop until it is\n+ resolved. The resolution can be done either by changing data on the\n+ subscriber so that it doesn't conflict with incoming change or\nby skipping\n+ the whole transaction. The logical replication worker skips all data\n\n4) xid with -2 is currently allowed; maybe it is OK. 
If it is fine we\ncan remove it from the fail section.\n+-- fail\n+ALTER SUBSCRIPTION regress_testsub SKIP (xid = 1.1);\n+ERROR: invalid transaction id: 1.1\n+ALTER SUBSCRIPTION regress_testsub SKIP (xid = -2);\n+ALTER SUBSCRIPTION regress_testsub SKIP (xid = 0);\n+ERROR: invalid transaction id: 0\n+ALTER SUBSCRIPTION regress_testsub SKIP (xid = 1);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 14 Dec 2021 15:23:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > I agree with this theory. Can we reflect this in comments so that in\n> > > the future we know why we didn't pursue this direction?\n> >\n> > I might be missing something here, but for streaming, transaction\n> > users can decide whether they wants to skip or not only once we start\n> > applying no? I mean only once we start applying the changes we can\n> > get some errors and by that time we must be having all the changes for\n> > the transaction.\n> >\n>\n> That is right and as per my understanding, the patch is trying to\n> accomplish the same.\n>\n> > So I do not understand the point we are trying to\n> > discuss here?\n> >\n>\n> The point is that whether we can skip the changes while streaming\n> itself like when we get the changes and write to a stream file. Now,\n> it is possible that streams from multiple transactions can be\n> interleaved and users can change the skip_xid in between. It is not\n> that we can't handle this but that would require a more complex design\n> and it doesn't seem worth it because we can anyway skip the changes\n> while applying as you mentioned in the previous paragraph.\n\nActually, I was trying to understand the use case for skipping while\nstreaming. Actually, during streaming we are not doing any database\noperation that means this will not generate any error. So IIUC, there\nis no use case for skipping while streaming itself? Is there any use\ncase which I am not aware of?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 14 Dec 2021 15:41:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > I agree with this theory. Can we reflect this in comments so that in\n> > > > the future we know why we didn't pursue this direction?\n> > >\n> > > I might be missing something here, but for streaming, transaction\n> > > users can decide whether they wants to skip or not only once we start\n> > > applying no? I mean only once we start applying the changes we can\n> > > get some errors and by that time we must be having all the changes for\n> > > the transaction.\n> > >\n> >\n> > That is right and as per my understanding, the patch is trying to\n> > accomplish the same.\n> >\n> > > So I do not understand the point we are trying to\n> > > discuss here?\n> > >\n> >\n> > The point is that whether we can skip the changes while streaming\n> > itself like when we get the changes and write to a stream file. Now,\n> > it is possible that streams from multiple transactions can be\n> > interleaved and users can change the skip_xid in between. It is not\n> > that we can't handle this but that would require a more complex design\n> > and it doesn't seem worth it because we can anyway skip the changes\n> > while applying as you mentioned in the previous paragraph.\n>\n> Actually, I was trying to understand the use case for skipping while\n> streaming. Actually, during streaming we are not doing any database\n> operation that means this will not generate any error.\n>\n\nSay, there is an error the first time when we start to apply changes\nfor such a transaction. So, such a transaction will be streamed again.\nSay, the user has set the skip_xid before we stream a second time, so\nthis time, we can skip it either during the stream phase or apply\nphase. 
I think the patch is skipping it during apply phase.\nSawada-San, please confirm if my understanding is correct?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 14 Dec 2021 16:53:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 8:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 3:41 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Dec 14, 2021 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > I agree with this theory. Can we reflect this in comments so that in\n> > > > > the future we know why we didn't pursue this direction?\n> > > >\n> > > > I might be missing something here, but for streaming, transaction\n> > > > users can decide whether they wants to skip or not only once we start\n> > > > applying no? I mean only once we start applying the changes we can\n> > > > get some errors and by that time we must be having all the changes for\n> > > > the transaction.\n> > > >\n> > >\n> > > That is right and as per my understanding, the patch is trying to\n> > > accomplish the same.\n> > >\n> > > > So I do not understand the point we are trying to\n> > > > discuss here?\n> > > >\n> > >\n> > > The point is that whether we can skip the changes while streaming\n> > > itself like when we get the changes and write to a stream file. Now,\n> > > it is possible that streams from multiple transactions can be\n> > > interleaved and users can change the skip_xid in between. It is not\n> > > that we can't handle this but that would require a more complex design\n> > > and it doesn't seem worth it because we can anyway skip the changes\n> > > while applying as you mentioned in the previous paragraph.\n> >\n> > Actually, I was trying to understand the use case for skipping while\n> > streaming. Actually, during streaming we are not doing any database\n> > operation that means this will not generate any error.\n> >\n>\n> Say, there is an error the first time when we start to apply changes\n> for such a transaction. 
So, such a transaction will be streamed again.\n> Say, the user has set the skip_xid before we stream a second time, so\n> this time, we can skip it either during the stream phase or apply\n> phase. I think the patch is skipping it during apply phase.\n> Sawada-San, please confirm if my understanding is correct?\n\nMy understanding is the same. The patch doesn't skip the streaming\nphase but starts skipping when starting to apply changes. That is, we\nreceive streamed changes and write them to the stream file anyway\nregardless of skip_xid. When receiving the stream-commit message, we\ncheck whether or not we skip this transaction, and if so we apply all\nmessages in the stream file other than all data modification messages.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 15 Dec 2021 09:38:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 2:35 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > While the worker is skipping one of the skip transactions specified by\n> > the user and immediately if the user specifies another skip\n> > transaction while the skipping of the transaction is in progress this\n> > new value will be reset by the worker while clearing the skip xid. I\n> > felt once the worker has identified the skip xid and is about to skip\n> > the xid, the worker can acquire a lock to prevent concurrency issues:\n>\n> That's a good point.\n> If only the last_error_xid could be skipped, then this wouldn't be an\n> issue, right?\n> If a different xid to skip is specified while the worker is currently\n> skipping a transaction, should that even be allowed?\n>\n\nWe don't expect such usage but yes, it could happen and seems not\ngood. I thought we can acquire Share lock on pg_subscription during\nthe skip but not sure it's a good idea. It would be better if we can\nfind a way to allow users to specify only XID that has failed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 15 Dec 2021 11:49:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 4:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n> > Actually, I was trying to understand the use case for skipping while\n> > streaming. Actually, during streaming we are not doing any database\n> > operation that means this will not generate any error.\n> >\n>\n> Say, there is an error the first time when we start to apply changes\n> for such a transaction. So, such a transaction will be streamed again.\n> Say, the user has set the skip_xid before we stream a second time, so\n> this time, we can skip it either during the stream phase or apply\n> phase. I think the patch is skipping it during apply phase.\n> Sawada-San, please confirm if my understanding is correct?\n>\n\nGot it, thanks for clarifying.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Dec 2021 09:27:19 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 15, 2021 at 8:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 2:35 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> >\n> > On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > While the worker is skipping one of the skip transactions specified by\n> > > the user and immediately if the user specifies another skip\n> > > transaction while the skipping of the transaction is in progress this\n> > > new value will be reset by the worker while clearing the skip xid. I\n> > > felt once the worker has identified the skip xid and is about to skip\n> > > the xid, the worker can acquire a lock to prevent concurrency issues:\n> >\n> > That's a good point.\n> > If only the last_error_xid could be skipped, then this wouldn't be an\n> > issue, right?\n> > If a different xid to skip is specified while the worker is currently\n> > skipping a transaction, should that even be allowed?\n> >\n>\n> We don't expect such usage but yes, it could happen and seems not\n> good. I thought we can acquire Share lock on pg_subscription during\n> the skip but not sure it's a good idea. It would be better if we can\n> find a way to allow users to specify only XID that has failed.\n>\n\nYeah, but as we don't have a definite way to allow specifying only\nfailed XID, I think it is better to use share lock on that particular\nsubscription. We are already using it for add/update rel state (see,\nAddSubscriptionRelState, UpdateSubscriptionRelState), so this will be\nanother place to use a similar technique.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Dec 2021 09:40:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Dec 14, 2021 at 11:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Dec 3, 2021 at 12:12 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Skipping a whole transaction by specifying xid would be a good start.\n> > Ideally, we'd like to automatically skip only operations within the\n> > transaction that fail but it seems not easy to achieve. If we allow\n> > specifying operations and/or relations, probably multiple operations\n> > or relations need to be specified in some cases. Otherwise, the\n> > subscriber cannot continue logical replication if the transaction has\n> > multiple operations on different relations that fail. But similar to\n> > the idea of specifying multiple xids, we need to note the fact that\n> > user wouldn't know of the second operation failure unless the apply\n> > worker applies the change. So I'm not sure there are many use cases in\n> > practice where users can specify multiple operations and relations in\n> > order to skip applies that fail.\n>\n> I think there would be use cases for specifying the relations or\n> operation, e.g. if the user finds an issue in inserting in a\n> particular relation then maybe based on some manual investigation he\n> founds that the table has some constraint due to that it is failing on\n> the subscriber side but on the publisher side that constraint is not\n> there so maybe the user is okay to skip the changes for this table and\n> not for other tables, or there might be a few more tables which are\n> designed based on the same principle and can have similar error so\n> isn't it good to provide an option to give the list of all such\n> tables.\n>\n\nThat's right and I agree there could be some use case for it and even\nspecifying the operation but I think we can always extend the existing\nfeature for it if the need arises. 
Note that the user can anyway only\nspecify a single relation or an operation, because only one error can\nbe known at a time, and till that is resolved the apply process won't\nproceed. We have discussed providing these additional options in this\nthread but thought of doing it later, once we have the base feature,\nbased on the feedback from users.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 15 Dec 2021 09:46:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 15, 2021 at 9:46 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 14, 2021 at 11:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> That's right and I agree there could be some use case for it and even\n> specifying the operation but I think we can always extend the existing\n> feature for it if the need arises. Note that the user can anyway only\n> specify a single relation or an operation because there is a way to\n> know only one error and till that is resolved, the apply process won't\n> proceed. We have discussed providing these additional options in this\n> thread but thought of doing it later once we have the base feature and\n> based on the feedback from users.\n\nYeah, I only wanted to make the point that this could be useful, it\nseems we are on the same page. I agree we can extend it in the future\nas well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Dec 2021 10:15:08 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 15, 2021 at 1:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> We don't expect such usage but yes, it could happen and seems not\n> good. I thought we can acquire Share lock on pg_subscription during\n> the skip but not sure it's a good idea. It would be better if we can\n> find a way to allow users to specify only XID that has failed.\n>\n\nYes, I agree that would be better.\nIf you didn't do that, I think you'd need to queue the XIDs to be\nskipped (rather than locking).\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 15 Dec 2021 15:58:29 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 15, 2021 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 15, 2021 at 8:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Dec 14, 2021 at 2:35 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > While the worker is skipping one of the skip transactions specified by\n> > > > the user and immediately if the user specifies another skip\n> > > > transaction while the skipping of the transaction is in progress this\n> > > > new value will be reset by the worker while clearing the skip xid. I\n> > > > felt once the worker has identified the skip xid and is about to skip\n> > > > the xid, the worker can acquire a lock to prevent concurrency issues:\n> > >\n> > > That's a good point.\n> > > If only the last_error_xid could be skipped, then this wouldn't be an\n> > > issue, right?\n> > > If a different xid to skip is specified while the worker is currently\n> > > skipping a transaction, should that even be allowed?\n> > >\n> >\n> > We don't expect such usage but yes, it could happen and seems not\n> > good. I thought we can acquire Share lock on pg_subscription during\n> > the skip but not sure it's a good idea. It would be better if we can\n> > find a way to allow users to specify only XID that has failed.\n> >\n>\n> Yeah, but as we don't have a definite way to allow specifying only\n> failed XID, I think it is better to use share lock on that particular\n> subscription. 
We are already using it for add/update rel state (see,\n> AddSubscriptionRelState, UpdateSubscriptionRelState), so this will be\n> another place to use a similar technique.\n\nYes, but it seems to mean that we disallow users to change skip_xid\nwhile the apply worker is skipping changes so we will end up having\nthe same problem we discussed so far;\n\nIn the current patch, we don't clear skip_xid at prepare time but do\nthat at commit-prepare time. But we cannot keep holding the lock until\ncommit-prepared comes because we don’t know when commit-prepared\ncomes. It’s possible that another conflict occurs before the\ncommit-prepared comes. We also cannot only clear skip_xid at prepare\ntime because it doesn’t solve the concurrency problem at\ncommit-prepared time. So if my understanding is correct, we need to\nboth clear skip_xid and unlock the lock at prepare time, and commit\nthe prepared (empty) transaction at commit-prepared time (I assume\nthat we prepare even empty transactions).\n\nSuppose that at prepare time, we clear skip_xid (and release the lock)\nand then prepare the transaction, if the server crashes right after\nclearing skip_xid, skip_xid is already cleared but the transaction\nwill be sent again. The user has to specify skip_xid again. So let’s\nchange the order; we prepare the transaction and then clear skip_xid.\nBut if the server crashes between them, the transaction won’t be sent\nagain, but skip_xid is left. The user has to clear it. The left\nskip_xid can automatically be cleared at commit-prepared time if XID\nin the commit-prepared message matches skip_xid, but this actually\ndoesn’t solve the concurrency problem. If the user changed skip_xid\nbefore commit-prepared, we would end up clearing the value. So we\nmight want to hold the lock until we clear skip_xid but we want to\navoid that as I explained first. It seems like we entered a loop.\n\nIt sounds better among these ideas that we clear skip_xid and then\nprepare the transaction. 
Or we might want to revisit the idea of\nstoring skip_xid on shmem (e.g., ReplicationState) instead of the\ncatalog.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 15 Dec 2021 23:49:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Dec 15, 2021 at 8:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Dec 15, 2021 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Dec 15, 2021 at 8:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Dec 14, 2021 at 2:35 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > >\n> > > > On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > While the worker is skipping one of the skip transactions specified by\n> > > > > the user and immediately if the user specifies another skip\n> > > > > transaction while the skipping of the transaction is in progress this\n> > > > > new value will be reset by the worker while clearing the skip xid. I\n> > > > > felt once the worker has identified the skip xid and is about to skip\n> > > > > the xid, the worker can acquire a lock to prevent concurrency issues:\n> > > >\n> > > > That's a good point.\n> > > > If only the last_error_xid could be skipped, then this wouldn't be an\n> > > > issue, right?\n> > > > If a different xid to skip is specified while the worker is currently\n> > > > skipping a transaction, should that even be allowed?\n> > > >\n> > >\n> > > We don't expect such usage but yes, it could happen and seems not\n> > > good. I thought we can acquire Share lock on pg_subscription during\n> > > the skip but not sure it's a good idea. It would be better if we can\n> > > find a way to allow users to specify only XID that has failed.\n> > >\n> >\n> > Yeah, but as we don't have a definite way to allow specifying only\n> > failed XID, I think it is better to use share lock on that particular\n> > subscription. 
We are already using it for add/update rel state (see,\n> > AddSubscriptionRelState, UpdateSubscriptionRelState), so this will be\n> > another place to use a similar technique.\n>\n> Yes, but it seems to mean that we disallow users to change skip_xid\n> while the apply worker is skipping changes so we will end up having\n> the same problem we discussed so far;\n>\n\nI thought we just want to lock before clearing the skip_xid something\nlike take the lock, check if the skip_xid in the catalog is the same\nas we have skipped, if it is the same then clear it, otherwise, leave\nit as it is. How will that disallow users to change skip_xid when we\nare skipping changes?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Dec 2021 08:12:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Dec 15, 2021 at 8:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Dec 15, 2021 at 1:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Dec 15, 2021 at 8:19 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Dec 14, 2021 at 2:35 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Dec 14, 2021 at 3:23 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > >\n> > > > > > While the worker is skipping one of the skip transactions specified by\n> > > > > > the user and immediately if the user specifies another skip\n> > > > > > transaction while the skipping of the transaction is in progress this\n> > > > > > new value will be reset by the worker while clearing the skip xid. I\n> > > > > > felt once the worker has identified the skip xid and is about to skip\n> > > > > > the xid, the worker can acquire a lock to prevent concurrency issues:\n> > > > >\n> > > > > That's a good point.\n> > > > > If only the last_error_xid could be skipped, then this wouldn't be an\n> > > > > issue, right?\n> > > > > If a different xid to skip is specified while the worker is currently\n> > > > > skipping a transaction, should that even be allowed?\n> > > > >\n> > > >\n> > > > We don't expect such usage but yes, it could happen and seems not\n> > > > good. I thought we can acquire Share lock on pg_subscription during\n> > > > the skip but not sure it's a good idea. It would be better if we can\n> > > > find a way to allow users to specify only XID that has failed.\n> > > >\n> > >\n> > > Yeah, but as we don't have a definite way to allow specifying only\n> > > failed XID, I think it is better to use share lock on that particular\n> > > subscription. 
We are already using it for add/update rel state (see,\n> > > AddSubscriptionRelState, UpdateSubscriptionRelState), so this will be\n> > > another place to use a similar technique.\n> >\n> > Yes, but it seems to mean that we disallow users to change skip_xid\n> > while the apply worker is skipping changes so we will end up having\n> > the same problem we discussed so far;\n> >\n>\n> I thought we just want to lock before clearing the skip_xid something\n> like take the lock, check if the skip_xid in the catalog is the same\n> as we have skipped, if it is the same then clear it, otherwise, leave\n> it as it is. How will that disallow users to change skip_xid when we\n> are skipping changes?\n\nOh I thought we wanted to keep holding the lock while skipping changes\n(changing skip_xid requires acquiring the lock).\n\nSo if skip_xid is already changed, the apply worker would do\nreplorigin_advance() with WAL logging, instead of committing the\ncatalog change?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 16 Dec 2021 14:06:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I thought we just want to lock before clearing the skip_xid something\n> > like take the lock, check if the skip_xid in the catalog is the same\n> > as we have skipped, if it is the same then clear it, otherwise, leave\n> > it as it is. How will that disallow users to change skip_xid when we\n> > are skipping changes?\n>\n> Oh I thought we wanted to keep holding the lock while skipping changes\n> (changing skip_xid requires acquiring the lock).\n>\n> So if skip_xid is already changed, the apply worker would do\n> replorigin_advance() with WAL logging, instead of committing the\n> catalog change?\n>\n\nRight. BTW, how are you planning to advance the origin? Normally, a\ncommit transaction would do it but when we are skipping all changes,\nthe commit might not do it as there won't be any transaction id\nassigned.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Dec 2021 10:51:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I thought we just want to lock before clearing the skip_xid something\n> > > like take the lock, check if the skip_xid in the catalog is the same\n> > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > it as it is. How will that disallow users to change skip_xid when we\n> > > are skipping changes?\n> >\n> > Oh I thought we wanted to keep holding the lock while skipping changes\n> > (changing skip_xid requires acquiring the lock).\n> >\n> > So if skip_xid is already changed, the apply worker would do\n> > replorigin_advance() with WAL logging, instead of committing the\n> > catalog change?\n> >\n>\n> Right. BTW, how are you planning to advance the origin? Normally, a\n> commit transaction would do it but when we are skipping all changes,\n> the commit might not do it as there won't be any transaction id\n> assigned.\n\nI've not tested it yet but replorigin_advance() with wal_log = true\nseems to work for this case.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 16 Dec 2021 14:42:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 13.12.21 04:12, Greg Nancarrow wrote:\n> (ii) \"Setting -1 means to reset the transaction ID\"\n> \n> Shouldn't it be explained what resetting actually does and when it can\n> be, or is needed to be, done? Isn't it automatically reset?\n> I notice that negative values (other than -1) seem to be regarded as\n> valid - is that right?\n> Also, what happens if this option is set multiple times? Does it just\n> override and use the latest setting? (other option handling errors out\n> with errorConflictingDefElem()).\n> e.g. alter subscription sub skip (xid = 721, xid = 722);\n\nLet's not use magic numbers and instead use a syntax that is more \nexplicit, like SKIP (xid = NONE) or RESET SKIP or something like that.\n\n\n",
"msg_date": "Fri, 17 Dec 2021 10:53:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 17, 2021 at 3:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 13.12.21 04:12, Greg Nancarrow wrote:\n> > (ii) \"Setting -1 means to reset the transaction ID\"\n> >\n> > Shouldn't it be explained what resetting actually does and when it can\n> > be, or is needed to be, done? Isn't it automatically reset?\n> > I notice that negative values (other than -1) seem to be regarded as\n> > valid - is that right?\n> > Also, what happens if this option is set multiple times? Does it just\n> > override and use the latest setting? (other option handling errors out\n> > with errorConflictingDefElem()).\n> > e.g. alter subscription sub skip (xid = 721, xid = 722);\n>\n> Let's not use magic numbers and instead use a syntax that is more\n> explicit, like SKIP (xid = NONE) or RESET SKIP or something like that.\n>\n\n+1 for using SKIP (xid = NONE) because otherwise first we need to\nintroduce RESET syntax for this command.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 17 Dec 2021 15:42:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Dec 17, 2021 at 7:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 17, 2021 at 3:23 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 13.12.21 04:12, Greg Nancarrow wrote:\n> > > (ii) \"Setting -1 means to reset the transaction ID\"\n> > >\n> > > Shouldn't it be explained what resetting actually does and when it can\n> > > be, or is needed to be, done? Isn't it automatically reset?\n> > > I notice that negative values (other than -1) seem to be regarded as\n> > > valid - is that right?\n> > > Also, what happens if this option is set multiple times? Does it just\n> > > override and use the latest setting? (other option handling errors out\n> > > with errorConflictingDefElem()).\n> > > e.g. alter subscription sub skip (xid = 721, xid = 722);\n> >\n> > Let's not use magic numbers and instead use a syntax that is more\n> > explicit, like SKIP (xid = NONE) or RESET SKIP or something like that.\n> >\n>\n> +1 for using SKIP (xid = NONE) because otherwise first we need to\n> introduce RESET syntax for this command.\n\nAgreed. Thank you for the comment!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 17 Dec 2021 20:13:11 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I thought we just want to lock before clearing the skip_xid something\n> > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > are skipping changes?\n> > >\n> > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > (changing skip_xid requires acquiring the lock).\n> > >\n> > > So if skip_xid is already changed, the apply worker would do\n> > > replorigin_advance() with WAL logging, instead of committing the\n> > > catalog change?\n> > >\n> >\n> > Right. BTW, how are you planning to advance the origin? Normally, a\n> > commit transaction would do it but when we are skipping all changes,\n> > the commit might not do it as there won't be any transaction id\n> > assigned.\n>\n> I've not tested it yet but replorigin_advance() with wal_log = true\n> seems to work for this case.\n\nI've tested it and realized that we cannot use replorigin_advance()\nfor this purpose without changes. 
That is, the current\nreplorigin_advance() doesn't allow the owner to advance the origin:\n\n /* Make sure it's not used by somebody else */\n if (replication_state->acquired_by != 0)\n {\n ereport(ERROR,\n (errcode(ERRCODE_OBJECT_IN_USE),\n errmsg(\"replication origin with OID %d is already\nactive for PID %d\",\n replication_state->roident,\n replication_state->acquired_by)));\n }\n\nSo we need to change it so that the origin owner can advance its\norigin, which makes sense to me.\n\nAlso, when we update the origin directly instead of committing the\ncatalog change, we cannot record the origin\ntimestamp. This behavior makes sense to me because we skipped the\ntransaction. But ISTM it’s not good if we omit the origin timestamp\nonly when directly updating the origin. So probably we need to always\nomit the origin timestamp.\n\nApart from that, I'm vaguely concerned that the logic seems to be\ngetting complex. Probably it comes from the fact that we store\nskip_xid in the catalog and update the catalog to clear/set the\nskip_xid. It might be worth revisiting the idea of storing skip_xid on\nshmem (e.g., ReplicationState)? That way, we can always advance the\norigin by replorigin_advance() and don’t need to worry about a complex\ncase like the server crashing while preparing the transaction. I’ve\nnot considered the downsides enough yet, though.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 27 Dec 2021 13:23:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > are skipping changes?\n> > > >\n> > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > (changing skip_xid requires acquiring the lock).\n> > > >\n> > > > So if skip_xid is already changed, the apply worker would do\n> > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > catalog change?\n> > > >\n> > >\n> > > Right. BTW, how are you planning to advance the origin? Normally, a\n> > > commit transaction would do it but when we are skipping all changes,\n> > > the commit might not do it as there won't be any transaction id\n> > > assigned.\n> >\n> > I've not tested it yet but replorigin_advance() with wal_log = true\n> > seems to work for this case.\n>\n> I've tested it and realized that we cannot use replorigin_advance()\n> for this purpose without changes. 
That is, the current\n> replorigin_advance() doesn't allow to advance the origin by the owner:\n>\n> /* Make sure it's not used by somebody else */\n> if (replication_state->acquired_by != 0)\n> {\n> ereport(ERROR,\n> (errcode(ERRCODE_OBJECT_IN_USE),\n> errmsg(\"replication origin with OID %d is already\n> active for PID %d\",\n> replication_state->roident,\n> replication_state->acquired_by)));\n> }\n>\n> So we need to change it so that the origin owner can advance its\n> origin, which makes sense to me.\n>\n> Also, when we have to update the origin instead of committing the\n> catalog change while updating the origin, we cannot record the origin\n> timestamp.\n>\n\nIs it because we currently update the origin timestamp with commit record?\n\n> This behavior makes sense to me because we skipped the\n> transaction. But ISTM it’s not good if we emit the origin timestamp\n> only when directly updating the origin. So probably we need to always\n> omit origin timestamp.\n>\n\nDo you mean to say that you want to omit it even when we are\ncommitting the changes?\n\n> Apart from that, I'm vaguely concerned that the logic seems to be\n> getting complex. Probably it comes from the fact that we store\n> skip_xid in the catalog and update the catalog to clear/set the\n> skip_xid. It might be worth revisiting the idea of storing skip_xid on\n> shmem (e.g., ReplicationState)?\n>\n\nIIRC, the problem with that idea was that we won't remember skip_xid\ninformation after server restart and the user won't even know that it\nhas to set it again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 5 Jan 2022 09:00:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 5, 2022 at 9:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Do you mean to say that you want to omit it even when we are\n> committing the changes?\n>\n> > Apart from that, I'm vaguely concerned that the logic seems to be\n> > getting complex. Probably it comes from the fact that we store\n> > skip_xid in the catalog and update the catalog to clear/set the\n> > skip_xid. It might be worth revisiting the idea of storing skip_xid on\n> > shmem (e.g., ReplicationState)?\n> >\n>\n> IIRC, the problem with that idea was that we won't remember skip_xid\n> information after server restart and the user won't even know that it\n> has to set it again.\n\n\nI agree, that if we don't keep it in the catalog then after restart if\nthe transaction replayed again then the user has to set the skip xid\nagain and that would be pretty inconvenient because the user might\nhave to analyze the failure again and repeat the same process he did\nbefore restart. But OTOH the combination of restart and the skip xid\nmight not be very frequent so this might not be a very bad option.\nBasically, I am in favor of storing it in a catalog as that solution\nlooks cleaner at least from the user pov but if we think there are a\nlot of complexities from the implementation pov then we might analyze\nthe approach of storing in shmem as well.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 5 Jan 2022 09:48:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 5, 2022 at 9:48 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jan 5, 2022 at 9:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Do you mean to say that you want to omit it even when we are\n> > committing the changes?\n> >\n> > > Apart from that, I'm vaguely concerned that the logic seems to be\n> > > getting complex. Probably it comes from the fact that we store\n> > > skip_xid in the catalog and update the catalog to clear/set the\n> > > skip_xid. It might be worth revisiting the idea of storing skip_xid on\n> > > shmem (e.g., ReplicationState)?\n> > >\n> >\n> > IIRC, the problem with that idea was that we won't remember skip_xid\n> > information after server restart and the user won't even know that it\n> > has to set it again.\n>\n>\n> I agree, that if we don't keep it in the catalog then after restart if\n> the transaction replayed again then the user has to set the skip xid\n> again and that would be pretty inconvenient because the user might\n> have to analyze the failure again and repeat the same process he did\n> before restart. But OTOH the combination of restart and the skip xid\n> might not be very frequent so this might not be a very bad option.\n> Basically, I am in favor of storing it in a catalog as that solution\n> looks cleaner at least from the user pov but if we think there are a\n> lot of complexities from the implementation pov then we might analyze\n> the approach of storing in shmem as well.\n>\n\nFair point, but I think it is better to see the patch or the problems\nthat can't be solved if we pursue storing it in catalog. Even, if we\ndecide to store it in shmem, we need to invent some way to inform the\nuser that we have not honored the previous setting of skip_xid and it\nneeds to be reset again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 6 Jan 2022 10:27:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 5, 2022 at 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > > are skipping changes?\n> > > > >\n> > > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > > (changing skip_xid requires acquiring the lock).\n> > > > >\n> > > > > So if skip_xid is already changed, the apply worker would do\n> > > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > > catalog change?\n> > > > >\n> > > >\n> > > > Right. BTW, how are you planning to advance the origin? Normally, a\n> > > > commit transaction would do it but when we are skipping all changes,\n> > > > the commit might not do it as there won't be any transaction id\n> > > > assigned.\n> > >\n> > > I've not tested it yet but replorigin_advance() with wal_log = true\n> > > seems to work for this case.\n> >\n> > I've tested it and realized that we cannot use replorigin_advance()\n> > for this purpose without changes. 
That is, the current\n> > replorigin_advance() doesn't allow to advance the origin by the owner:\n> >\n> > /* Make sure it's not used by somebody else */\n> > if (replication_state->acquired_by != 0)\n> > {\n> > ereport(ERROR,\n> > (errcode(ERRCODE_OBJECT_IN_USE),\n> > errmsg(\"replication origin with OID %d is already\n> > active for PID %d\",\n> > replication_state->roident,\n> > replication_state->acquired_by)));\n> > }\n> >\n> > So we need to change it so that the origin owner can advance its\n> > origin, which makes sense to me.\n> >\n> > Also, when we have to update the origin instead of committing the\n> > catalog change while updating the origin, we cannot record the origin\n> > timestamp.\n> >\n>\n> Is it because we currently update the origin timestamp with commit record?\n\nYes.\n\n>\n> > This behavior makes sense to me because we skipped the\n> > transaction. But ISTM it’s not good if we emit the origin timestamp\n> > only when directly updating the origin. So probably we need to always\n> > omit origin timestamp.\n> >\n>\n> Do you mean to say that you want to omit it even when we are\n> committing the changes?\n\nYes, it would be better to record only origin lsn in terms of consistency.\n\n>\n> > Apart from that, I'm vaguely concerned that the logic seems to be\n> > getting complex. Probably it comes from the fact that we store\n> > skip_xid in the catalog and update the catalog to clear/set the\n> > skip_xid. 
It might be worth revisiting the idea of storing skip_xid on\n> > shmem (e.g., ReplicationState)?\n> >\n>\n> IIRC, the problem with that idea was that we won't remember skip_xid\n> information after server restart and the user won't even know that it\n> has to set it again.\n\nRight, I agree that it’s not convenient when the server restarts or\ncrashes, but these problems could not be critical in the situation\nwhere users have to use this feature; the subscriber already entered\nan error loop so they can know xid again and it’s an uncommon case\nthat they need to restart during skipping changes.\n\nAnyway, I'll submit an updated patch soon so we can discuss complexity\nvs. convenience.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 7 Jan 2022 10:04:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 6:35 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 5, 2022 at 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > > > are skipping changes?\n> > > > > >\n> > > > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > > > (changing skip_xid requires acquiring the lock).\n> > > > > >\n> > > > > > So if skip_xid is already changed, the apply worker would do\n> > > > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > > > catalog change?\n> > > > > >\n> > > > >\n> > > > > Right. BTW, how are you planning to advance the origin? Normally, a\n> > > > > commit transaction would do it but when we are skipping all changes,\n> > > > > the commit might not do it as there won't be any transaction id\n> > > > > assigned.\n> > > >\n> > > > I've not tested it yet but replorigin_advance() with wal_log = true\n> > > > seems to work for this case.\n> > >\n> > > I've tested it and realized that we cannot use replorigin_advance()\n> > > for this purpose without changes. 
That is, the current\n> > > replorigin_advance() doesn't allow to advance the origin by the owner:\n> > >\n> > > /* Make sure it's not used by somebody else */\n> > > if (replication_state->acquired_by != 0)\n> > > {\n> > > ereport(ERROR,\n> > > (errcode(ERRCODE_OBJECT_IN_USE),\n> > > errmsg(\"replication origin with OID %d is already\n> > > active for PID %d\",\n> > > replication_state->roident,\n> > > replication_state->acquired_by)));\n> > > }\n> > >\n> > > So we need to change it so that the origin owner can advance its\n> > > origin, which makes sense to me.\n> > >\n> > > Also, when we have to update the origin instead of committing the\n> > > catalog change while updating the origin, we cannot record the origin\n> > > timestamp.\n> > >\n> >\n> > Is it because we currently update the origin timestamp with commit record?\n>\n> Yes.\n>\n> >\n> > > This behavior makes sense to me because we skipped the\n> > > transaction. But ISTM it’s not good if we emit the origin timestamp\n> > > only when directly updating the origin. So probably we need to always\n> > > omit origin timestamp.\n> > >\n> >\n> > Do you mean to say that you want to omit it even when we are\n> > committing the changes?\n>\n> Yes, it would be better to record only origin lsn in terms of consistency.\n>\n\nI am not so sure about this point because then what purpose origin\ntimestamp will serve in the code.\n\n> >\n> > > Apart from that, I'm vaguely concerned that the logic seems to be\n> > > getting complex. Probably it comes from the fact that we store\n> > > skip_xid in the catalog and update the catalog to clear/set the\n> > > skip_xid. 
It might be worth revisiting the idea of storing skip_xid on\n> > > shmem (e.g., ReplicationState)?\n> > >\n> >\n> > IIRC, the problem with that idea was that we won't remember skip_xid\n> > information after server restart and the user won't even know that it\n> > has to set it again.\n>\n> Right, I agree that it’s not convenient when the server restarts or\n> crashes, but these problems could not be critical in the situation\n> where users have to use this feature; the subscriber already entered\n> an error loop so they can know xid again and it’s an uncommon case\n> that they need to restart during skipping changes.\n>\n> Anyway, I'll submit an updated patch soon so we can discuss complexity\n> vs. convenience.\n>\n\nOkay, that sounds reasonable.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 7 Jan 2022 09:52:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 10:04 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 5, 2022 at 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > > > are skipping changes?\n> > > > > >\n> > > > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > > > (changing skip_xid requires acquiring the lock).\n> > > > > >\n> > > > > > So if skip_xid is already changed, the apply worker would do\n> > > > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > > > catalog change?\n> > > > > >\n> > > > >\n> > > > > Right. BTW, how are you planning to advance the origin? Normally, a\n> > > > > commit transaction would do it but when we are skipping all changes,\n> > > > > the commit might not do it as there won't be any transaction id\n> > > > > assigned.\n> > > >\n> > > > I've not tested it yet but replorigin_advance() with wal_log = true\n> > > > seems to work for this case.\n> > >\n> > > I've tested it and realized that we cannot use replorigin_advance()\n> > > for this purpose without changes. 
That is, the current\n> > > replorigin_advance() doesn't allow to advance the origin by the owner:\n> > >\n> > > /* Make sure it's not used by somebody else */\n> > > if (replication_state->acquired_by != 0)\n> > > {\n> > > ereport(ERROR,\n> > > (errcode(ERRCODE_OBJECT_IN_USE),\n> > > errmsg(\"replication origin with OID %d is already\n> > > active for PID %d\",\n> > > replication_state->roident,\n> > > replication_state->acquired_by)));\n> > > }\n> > >\n> > > So we need to change it so that the origin owner can advance its\n> > > origin, which makes sense to me.\n> > >\n> > > Also, when we have to update the origin instead of committing the\n> > > catalog change while updating the origin, we cannot record the origin\n> > > timestamp.\n> > >\n> >\n> > Is it because we currently update the origin timestamp with commit record?\n>\n> Yes.\n>\n> >\n> > > This behavior makes sense to me because we skipped the\n> > > transaction. But ISTM it’s not good if we emit the origin timestamp\n> > > only when directly updating the origin. So probably we need to always\n> > > omit origin timestamp.\n> > >\n> >\n> > Do you mean to say that you want to omit it even when we are\n> > committing the changes?\n>\n> Yes, it would be better to record only origin lsn in terms of consistency.\n>\n> >\n> > > Apart from that, I'm vaguely concerned that the logic seems to be\n> > > getting complex. Probably it comes from the fact that we store\n> > > skip_xid in the catalog and update the catalog to clear/set the\n> > > skip_xid. 
It might be worth revisiting the idea of storing skip_xid on\n> > > shmem (e.g., ReplicationState)?\n> > >\n> >\n> > IIRC, the problem with that idea was that we won't remember skip_xid\n> > information after server restart and the user won't even know that it\n> > has to set it again.\n>\n> Right, I agree that it’s not convenient when the server restarts or\n> crashes, but these problems could not be critical in the situation\n> where users have to use this feature; the subscriber already entered\n> an error loop so they can know xid again and it’s an uncommon case\n> that they need to restart during skipping changes.\n>\n> Anyway, I'll submit an updated patch soon so we can discuss complexity\n> vs. convenience.\n\nAttached an updated patch. Please review it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 7 Jan 2022 14:52:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 7, 2022 at 11:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 7, 2022 at 10:04 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 5, 2022 at 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > > > > are skipping changes?\n> > > > > > >\n> > > > > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > > > > (changing skip_xid requires acquiring the lock).\n> > > > > > >\n> > > > > > > So if skip_xid is already changed, the apply worker would do\n> > > > > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > > > > catalog change?\n> > > > > > >\n> > > > > >\n> > > > > > Right. BTW, how are you planning to advance the origin? 
Normally, a\n> > > > > > commit transaction would do it but when we are skipping all changes,\n> > > > > > the commit might not do it as there won't be any transaction id\n> > > > > > assigned.\n> > > > >\n> > > > > I've not tested it yet but replorigin_advance() with wal_log = true\n> > > > > seems to work for this case.\n> > > >\n> > > > I've tested it and realized that we cannot use replorigin_advance()\n> > > > for this purpose without changes. That is, the current\n> > > > replorigin_advance() doesn't allow to advance the origin by the owner:\n> > > >\n> > > > /* Make sure it's not used by somebody else */\n> > > > if (replication_state->acquired_by != 0)\n> > > > {\n> > > > ereport(ERROR,\n> > > > (errcode(ERRCODE_OBJECT_IN_USE),\n> > > > errmsg(\"replication origin with OID %d is already\n> > > > active for PID %d\",\n> > > > replication_state->roident,\n> > > > replication_state->acquired_by)));\n> > > > }\n> > > >\n> > > > So we need to change it so that the origin owner can advance its\n> > > > origin, which makes sense to me.\n> > > >\n> > > > Also, when we have to update the origin instead of committing the\n> > > > catalog change while updating the origin, we cannot record the origin\n> > > > timestamp.\n> > > >\n> > >\n> > > Is it because we currently update the origin timestamp with commit record?\n> >\n> > Yes.\n> >\n> > >\n> > > > This behavior makes sense to me because we skipped the\n> > > > transaction. But ISTM it’s not good if we emit the origin timestamp\n> > > > only when directly updating the origin. So probably we need to always\n> > > > omit origin timestamp.\n> > > >\n> > >\n> > > Do you mean to say that you want to omit it even when we are\n> > > committing the changes?\n> >\n> > Yes, it would be better to record only origin lsn in terms of consistency.\n> >\n> > >\n> > > > Apart from that, I'm vaguely concerned that the logic seems to be\n> > > > getting complex. 
Probably it comes from the fact that we store\n> > > > skip_xid in the catalog and update the catalog to clear/set the\n> > > > skip_xid. It might be worth revisiting the idea of storing skip_xid on\n> > > > shmem (e.g., ReplicationState)?\n> > > >\n> > >\n> > > IIRC, the problem with that idea was that we won't remember skip_xid\n> > > information after server restart and the user won't even know that it\n> > > has to set it again.\n> >\n> > Right, I agree that it’s not convenient when the server restarts or\n> > crashes, but these problems could not be critical in the situation\n> > where users have to use this feature; the subscriber already entered\n> > an error loop so they can know xid again and it’s an uncommon case\n> > that they need to restart during skipping changes.\n> >\n> > Anyway, I'll submit an updated patch soon so we can discuss complexity\n> > vs. convenience.\n>\n> Attached an updated patch. Please review it.\n\nThanks for the updated patch, few comments:\n1) Should this be case insensitive to support NONE too:\n+ /* Setting xid = NONE is treated as resetting xid */\n+ if (strcmp(xid_str, \"none\") == 0)\n+ xid = InvalidTransactionId;\n\n2) Can we have an option to specify last_error_xid of\npg_stat_subscription_workers. Something like:\nalter subscription sub1 skip ( XID = 'last_subscription_error');\n\nWhen the user specified last_subscription_error, it should pick\nlast_error_xid from pg_stat_subscription_workers.\nAs this operation is a critical operation, if there is an option which\ncould automatically pick and set from pg_stat_subscription_workers, it\nwould be useful.\n\n3) Currently the following syntax is being supported, I felt this\nshould throw an error:\npostgres=# alter subscription sub1 set ( XID = 100);\nALTER SUBSCRIPTION\n\n4) You might need to rebase the patch:\ngit am v2-0001-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transac.patch\nApplying: Add ALTER SUBSCRIPTION ... 
SKIP to skip the transaction on\nsubscriber nodes\nerror: patch failed: doc/src/sgml/logical-replication.sgml:333\nerror: doc/src/sgml/logical-replication.sgml: patch does not apply\nPatch failed at 0001 Add ALTER SUBSCRIPTION ... SKIP to skip the\ntransaction on subscriber nodes\nhint: Use 'git am --show-current-patch=diff' to see the failed patch\n\n5) You might have to rename 027_skip_xact to 028_skip_xact as\n027_nosuperuser.pl already exists\ndiff --git a/src/test/subscription/t/027_skip_xact.pl\nb/src/test/subscription/t/027_skip_xact.pl\nnew file mode 100644\nindex 0000000000..a63c9c345e\n--- /dev/null\n+++ b/src/test/subscription/t/027_skip_xact.pl\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 10 Jan 2022 14:57:28 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Dec 16, 2021 at 11:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > So if skip_xid is already changed, the apply worker would do\n> > > replorigin_advance() with WAL logging, instead of committing the\n> > > catalog change?\n> > >\n> >\n> > Right. BTW, how are you planning to advance the origin? Normally, a\n> > commit transaction would do it but when we are skipping all changes,\n> > the commit might not do it as there won't be any transaction id\n> > assigned.\n>\n> I've not tested it yet but replorigin_advance() with wal_log = true\n> seems to work for this case.\n>\n\nIIUC, the changes corresponding to above in the latest patch are as follows:\n\n--- a/src/backend/replication/logical/origin.c\n+++ b/src/backend/replication/logical/origin.c\n@@ -921,7 +921,8 @@ replorigin_advance(RepOriginId node,\n LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n\n /* Make sure it's not used by somebody else */\n- if (replication_state->acquired_by != 0)\n+ if (replication_state->acquired_by != 0 &&\n+ replication_state->acquired_by != MyProcPid)\n {\n...\n\nclear_subscription_skip_xid()\n{\n..\n+ else if (!XLogRecPtrIsInvalid(origin_lsn))\n+ {\n+ /*\n+ * User has already changed subskipxid before clearing the subskipxid, so\n+ * don't change the catalog but just advance the replication origin.\n+ */\n+ replorigin_advance(replorigin_session_origin, origin_lsn,\n+ GetXLogInsertRecPtr(),\n+ false, /* go_backward */\n+ true /* wal_log */);\n+ }\n..\n}\n\nI was thinking what if we don't advance origin explicitly in this\ncase? Actually, that will be no different than the transactions where\nthe apply worker doesn't apply any change because the initial sync is\nin progress (see should_apply_changes_for_rel()) or we have received\nan empty transaction. 
In those cases also, the origin lsn won't be\nadvanced even though we acknowledge the advanced last_received\nlocation because of keep_alive messages. Now, it is possible after the\nrestart we send the old start_lsn location because the replication\norigin was not updated before restart but we handle that case in the\nserver by starting from the last confirmed location. See below code:\n\nCreateDecodingContext()\n{\n..\nelse if (start_lsn < slot->data.confirmed_flush)\n..\n\nFew other comments on the latest patch:\n=================================\n1.\nA conflict will produce an error and will stop the replication; it must be\n resolved manually by the user. Details about the conflict can be found in\n- the subscriber's server log.\n+ <xref linkend=\"monitoring-pg-stat-subscription-workers\"/> as well as the\n+ subscriber's server log.\n\nCan we slightly change the modified line to: \"Details about the\nconflict can be found in <xref\nlinkend=\"monitoring-pg-stat-subscription-workers\"/> and the\nsubscriber's server log.\"? I think we can commit this change\nseparately as this is true even without this patch.\n\n2.\n The resolution can be done either by changing data on the subscriber so\n- that it does not conflict with the incoming change or by skipping the\n- transaction that conflicts with the existing data. The transaction can be\n- skipped by calling the <link linkend=\"pg-replication-origin-advance\">\n+ that it does not conflict with the incoming changes or by skipping the whole\n+ transaction. This option specifies the ID of the transaction whose\n+ application is to be skipped by the logical replication worker. The logical\n+ replication worker skips all data modification transaction conflicts with\n+ the existing data. When a conflict produce an error, it is shown in\n+ <structname>pg_stat_subscription_workers</structname> view as follows:\n\nI don't think most of the additional text added in the above paragraph\nis required. 
We can rephrase it as: \"The resolution can be done either\nby changing data on the subscriber so that it does not conflict with\nthe incoming change or by skipping the transaction that conflicts with\nthe existing data. When a conflict produces an error, it is shown in\n<structname>pg_stat_subscription_workers</structname> view as\nfollows:\". After that keep the text, you have.\n\n3.\nThey skip the whole transaction, including changes that may not violate any\n+ constraint. They may easily make the subscriber inconsistent, especially if\n+ a user specifies the wrong transaction ID or the position of origin.\n\nCan we slightly reword the above text as: \"Skipping the whole\ntransaction includes skipping the changes that may not violate any\nconstraint. This can easily make the subscriber inconsistent,\nespecially if a user specifies the wrong transaction ID or the\nposition of origin.\"?\n\n4.\nThe logical replication worker skips all data\n+ modification changes within the specified transaction. Therefore, since\n+ it skips the whole transaction including the changes that may not violate\n+ the constraint, it should only be used as a last resort. This option has\n+ no effect for the transaction that is already prepared with enabling\n+ <literal>two_phase</literal> on susbscriber.\n\nLet's slightly reword the above text as: \"The logical replication\nworker skips all data modification changes within the specified\ntransaction including the changes that may not violate the constraint,\nso, it should only be used as a last resort. This option has no effect\non the transaction that is already prepared by enabling\n<literal>two_phase</literal> on the subscriber.\"\n\n5.\n+ by the logical replication worker. 
Setting\n<literal>NONE</literal> means\n+ to reset the transaction ID.\n\nLet's slightly reword the second part of the sentence as: \"Setting\n<literal>NONE</literal> resets the transaction ID.\"\n\n6.\nOnce we start skipping\n+ * changes, we don't stop it until the we skip all changes of the\ntransaction even\n+ * if the subscription invalidated and MySubscription->skipxid gets\nchanged or reset.\n\n/subscription invalidated/subscription is invalidated\n\nWhat do you mean by subscription invalidated and how is it related to\nthis feature? I think we should mention something on these lines in\nthe docs as well.\n\n7.\n\"Please refer to the comments in these functions for details.\". We can\nslightly modify this part of the comment as: \"Please refer to the\ncomments in corresponding functions for details.\"\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 10 Jan 2022 17:20:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> 2) Can we have an option to specify last_error_xid of\n> pg_stat_subscription_workers. Something like:\n> alter subscription sub1 skip ( XID = 'last_subscription_error');\n>\n> When the user specified last_subscription_error, it should pick\n> last_error_xid from pg_stat_subscription_workers.\n> As this operation is a critical operation, if there is an option which\n> could automatically pick and set from pg_stat_subscription_workers, it\n> would be useful.\n>\n\nI think having some automatic functionality around this would be good\nbut I am not so sure about this idea because it is possible that the\nerror has not reached the stats collector and the user might be\nreferring to server logs to set the skip xid. In such cases, even\nthough an error would have occurred but we won't be able to set the\nrequired xid. Now, one can imagine that if we don't get the required\nvalue from pg_stat_subscription_workers then we can return an error to\nthe user indicating that she can cross-verify the server logs and set\nthe appropriate xid value but IMO it could be confusing. I feel even\nif we want some automatic functionality like you are proposing or\nsomething else, it could be done as a separate patch but let's wait\nand see what Sawada-San or others think about this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 07:51:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 7:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > 2) Can we have an option to specify last_error_xid of\n> > pg_stat_subscription_workers. Something like:\n> > alter subscription sub1 skip ( XID = 'last_subscription_error');\n> >\n> > When the user specified last_subscription_error, it should pick\n> > last_error_xid from pg_stat_subscription_workers.\n> > As this operation is a critical operation, if there is an option which\n> > could automatically pick and set from pg_stat_subscription_workers, it\n> > would be useful.\n> >\n>\n> I think having some automatic functionality around this would be good\n> but I am not so sure about this idea because it is possible that the\n> error has not reached the stats collector and the user might be\n> referring to server logs to set the skip xid. In such cases, even\n> though an error would have occurred but we won't be able to set the\n> required xid. Now, one can imagine that if we don't get the required\n> value from pg_stat_subscription_workers then we can return an error to\n> the user indicating that she can cross-verify the server logs and set\n> the appropriate xid value but IMO it could be confusing. I feel even\n> if we want some automatic functionality like you are proposing or\n> something else, it could be done as a separate patch but let's wait\n> and see what Sawada-San or others think about this?\n\nIf we are ok with the suggested idea then it can be done as a separate\npatch, I agree that it need not be part of the existing patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 11 Jan 2022 07:57:34 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 16, 2021 at 11:12 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > So if skip_xid is already changed, the apply worker would do\n> > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > catalog change?\n> > > >\n> > >\n> > > Right. BTW, how are you planning to advance the origin? Normally, a\n> > > commit transaction would do it but when we are skipping all changes,\n> > > the commit might not do it as there won't be any transaction id\n> > > assigned.\n> >\n> > I've not tested it yet but replorigin_advance() with wal_log = true\n> > seems to work for this case.\n> >\n>\n> IIUC, the changes corresponding to above in the latest patch are as follows:\n>\n> --- a/src/backend/replication/logical/origin.c\n> +++ b/src/backend/replication/logical/origin.c\n> @@ -921,7 +921,8 @@ replorigin_advance(RepOriginId node,\n> LWLockAcquire(&replication_state->lock, LW_EXCLUSIVE);\n>\n> /* Make sure it's not used by somebody else */\n> - if (replication_state->acquired_by != 0)\n> + if (replication_state->acquired_by != 0 &&\n> + replication_state->acquired_by != MyProcPid)\n> {\n> ...\n>\n> clear_subscription_skip_xid()\n> {\n> ..\n> + else if (!XLogRecPtrIsInvalid(origin_lsn))\n> + {\n> + /*\n> + * User has already changed subskipxid before clearing the subskipxid, so\n> + * don't change the catalog but just advance the replication origin.\n> + */\n> + replorigin_advance(replorigin_session_origin, origin_lsn,\n> + GetXLogInsertRecPtr(),\n> + false, /* go_backward */\n> + true /* wal_log */);\n> + }\n> ..\n> }\n>\n> I was thinking what if we don't advance origin explicitly in this\n> case? 
Actually, that will be no different than the transactions where\n> the apply worker doesn't apply any change because the initial sync is\n> in progress (see should_apply_changes_for_rel()) or we have received\n> an empty transaction. In those cases also, the origin lsn won't be\n> advanced even though we acknowledge the advanced last_received\n> location because of keep_alive messages. Now, it is possible after the\n> restart we send the old start_lsn location because the replication\n> origin was not updated before restart but we handle that case in the\n> server by starting from the last confirmed location. See below code:\n>\n> CreateDecodingContext()\n> {\n> ..\n> else if (start_lsn < slot->data.confirmed_flush)\n> ..\n\nGood point. One minor difference from the case where the apply worker\napplied an empty transaction is when the server restarts/crashes before\nsending an acknowledgment of the flush location. That is, in the case of the\nempty transaction, the publisher sends an empty transaction again. On\nthe other hand, in the case of skipping the transaction, a non-empty\ntransaction will be sent again but skip_xid is already changed or\ncleared; therefore, the user will have to specify skip_xid again. If we\nwrite a replication origin WAL record to advance the origin lsn, it\nreduces the possibility of that. But I think it’s a very minor case so\nwe won’t need to deal with that.\n\nAnyway, according to your analysis, I think we don't necessarily need\nto do replorigin_advance() in this case.\n\n>\n> Few other comments on the latest patch:\n> =================================\n> 1.\n> A conflict will produce an error and will stop the replication; it must be\n> resolved manually by the user. 
Details about the conflict can be found in\n> - the subscriber's server log.\n> + <xref linkend=\"monitoring-pg-stat-subscription-workers\"/> as well as the\n> + subscriber's server log.\n>\n> Can we slightly change the modified line to: \"Details about the\n> conflict can be found in <xref\n> linkend=\"monitoring-pg-stat-subscription-workers\"/> and the\n> subscriber's server log.\"?\n\nWill fix it.\n\n> I think we can commit this change\n> separately as this is true even without this patch.\n\nRight. It seems an oversight of 8d74fc96db. I've attached the patch.\n\n>\n> 2.\n> The resolution can be done either by changing data on the subscriber so\n> - that it does not conflict with the incoming change or by skipping the\n> - transaction that conflicts with the existing data. The transaction can be\n> - skipped by calling the <link linkend=\"pg-replication-origin-advance\">\n> + that it does not conflict with the incoming changes or by skipping the whole\n> + transaction. This option specifies the ID of the transaction whose\n> + application is to be skipped by the logical replication worker. The logical\n> + replication worker skips all data modification transaction conflicts with\n> + the existing data. When a conflict produce an error, it is shown in\n> + <structname>pg_stat_subscription_workers</structname> view as follows:\n>\n> I don't think most of the additional text added in the above paragraph\n> is required. We can rephrase it as: \"The resolution can be done either\n> by changing data on the subscriber so that it does not conflict with\n> the incoming change or by skipping the transaction that conflicts with\n> the existing data. When a conflict produces an error, it is shown in\n> <structname>pg_stat_subscription_workers</structname> view as\n> follows:\". After that keep the text, you have.\n\nAgreed, will fix.\n\n>\n> 3.\n> They skip the whole transaction, including changes that may not violate any\n> + constraint. 
They may easily make the subscriber inconsistent, especially if\n> + a user specifies the wrong transaction ID or the position of origin.\n>\n> Can we slightly reword the above text as: \"Skipping the whole\n> transaction includes skipping the changes that may not violate any\n> constraint. This can easily make the subscriber inconsistent,\n> especially if a user specifies the wrong transaction ID or the\n> position of origin.\"?\n\nWill fix.\n\n>\n> 4.\n> The logical replication worker skips all data\n> + modification changes within the specified transaction. Therefore, since\n> + it skips the whole transaction including the changes that may not violate\n> + the constraint, it should only be used as a last resort. This option has\n> + no effect for the transaction that is already prepared with enabling\n> + <literal>two_phase</literal> on susbscriber.\n>\n> Let's slightly reword the above text as: \"The logical replication\n> worker skips all data modification changes within the specified\n> transaction including the changes that may not violate the constraint,\n> so, it should only be used as a last resort. This option has no effect\n> on the transaction that is already prepared by enabling\n> <literal>two_phase</literal> on the subscriber.\"\n\nWill fix.\n\n>\n> 5.\n> + by the logical replication worker. Setting\n> <literal>NONE</literal> means\n> + to reset the transaction ID.\n>\n> Let's slightly reword the second part of the sentence as: \"Setting\n> <literal>NONE</literal> resets the transaction ID.\"\n\nWill fix.\n\n>\n> 6.\n> Once we start skipping\n> + * changes, we don't stop it until the we skip all changes of the\n> transaction even\n> + * if the subscription invalidated and MySubscription->skipxid gets\n> changed or reset.\n>\n> /subscription invalidated/subscription is invalidated\n\nWill fix.\n\n>\n> What do you mean by subscription invalidated and how is it related to\n> this feature? 
I think we should mention something on these lines in\n> the docs as well.\n\nI meant that MySubscription, a cache of pg_subscription entry, is\ninvalidated by the catalog change. IIUC while applying changes we\ndon't re-read pg_subscription (i.e., not calling\nmaybe_reread_subscription()). Similarly, while skipping changes, we\nalso don't do that. Therefore, even if skip_xid has been changed while\nskipping changes, we don't stop skipping changes.\n\n>\n> 7.\n> \"Please refer to the comments in these functions for details.\". We can\n> slightly modify this part of the comment as: \"Please refer to the\n> comments in corresponding functions for details.\"\n\nWill fix.\n\nI'll submit an updated patch soon.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 11 Jan 2022 12:22:08 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 11:22 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > 2) Can we have an option to specify last_error_xid of\n> > pg_stat_subscription_workers. Something like:\n> > alter subscription sub1 skip ( XID = 'last_subscription_error');\n> >\n> > When the user specified last_subscription_error, it should pick\n> > last_error_xid from pg_stat_subscription_workers.\n> > As this operation is a critical operation, if there is an option which\n> > could automatically pick and set from pg_stat_subscription_workers, it\n> > would be useful.\n> >\n>\n> I think having some automatic functionality around this would be good\n> but I am not so sure about this idea because it is possible that the\n> error has not reached the stats collector and the user might be\n> referring to server logs to set the skip xid. In such cases, even\n> though an error would have occurred but we won't be able to set the\n> required xid. Now, one can imagine that if we don't get the required\n> value from pg_stat_subscription_workers then we can return an error to\n> the user indicating that she can cross-verify the server logs and set\n> the appropriate xid value but IMO it could be confusing. I feel even\n> if we want some automatic functionality like you are proposing or\n> something else, it could be done as a separate patch but let's wait\n> and see what Sawada-San or others think about this?\n\nAgreed. The automatically setting XID would be a good idea but we can\ndo that in a separate patch so we can keep the first patch simple.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 11 Jan 2022 15:01:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I was thinking what if we don't advance origin explicitly in this\n> > case? Actually, that will be no different than the transactions where\n> > the apply worker doesn't apply any change because the initial sync is\n> > in progress (see should_apply_changes_for_rel()) or we have received\n> > an empty transaction. In those cases also, the origin lsn won't be\n> > advanced even though we acknowledge the advanced last_received\n> > location because of keep_alive messages. Now, it is possible after the\n> > restart we send the old start_lsn location because the replication\n> > origin was not updated before restart but we handle that case in the\n> > server by starting from the last confirmed location. See below code:\n> >\n> > CreateDecodingContext()\n> > {\n> > ..\n> > else if (start_lsn < slot->data.confirmed_flush)\n> > ..\n>\n> Good point. Probably one minor thing that is different from the\n> transaction where the apply worker applied an empty transaction is a\n> case where the server restarts/crashes before sending an\n> acknowledgment of the flush location. That is, in the case of the\n> empty transaction, the publisher sends an empty transaction again. On\n> the other hand in the case of skipping the transaction, a non-empty\n> transaction will be sent again but skip_xid is already changed or\n> cleared, therefore the user will have to specify skip_xid again. If we\n> write replication origin WAL record to advance the origin lsn, it\n> reduces the possibility of that. 
But I think it’s a very minor case so\n> we won’t need to deal with that.\n>\n\nYeah, in the worst case, it will lead to conflict again and the user\nneeds to set the xid again.\n\n> Anyway, according to your analysis, I think we don't necessarily need\n> to do replorigin_advance() in this case.\n>\n\nRight.\n\n> >\n> > 5.\n> > + by the logical replication worker. Setting\n> > <literal>NONE</literal> means\n> > + to reset the transaction ID.\n> >\n> > Let's slightly reword the second part of the sentence as: \"Setting\n> > <literal>NONE</literal> resets the transaction ID.\"\n>\n> Will fix.\n>\n> >\n> > 6.\n> > Once we start skipping\n> > + * changes, we don't stop it until the we skip all changes of the\n> > transaction even\n> > + * if the subscription invalidated and MySubscription->skipxid gets\n> > changed or reset.\n> >\n> > /subscription invalidated/subscription is invalidated\n>\n> Will fix.\n>\n> >\n> > What do you mean by subscription invalidated and how is it related to\n> > this feature? I think we should mention something on these lines in\n> > the docs as well.\n>\n> I meant that MySubscription, a cache of pg_subscription entry, is\n> invalidated by the catalog change. IIUC while applying changes we\n> don't re-read pg_subscription (i.e., not calling\n> maybe_reread_subscription()). Similarly, while skipping changes, we\n> also don't do that. Therefore, even if skip_xid has been changed while\n> skipping changes, we don't stop skipping changes.\n>\n\nOkay, but I don't think we need to mention subscription is invalidated\nas that could be confusing, the other part of the comment is quite\nclear.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 11:42:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I was thinking what if we don't advance origin explicitly in this\n> > > case? Actually, that will be no different than the transactions where\n> > > the apply worker doesn't apply any change because the initial sync is\n> > > in progress (see should_apply_changes_for_rel()) or we have received\n> > > an empty transaction. In those cases also, the origin lsn won't be\n> > > advanced even though we acknowledge the advanced last_received\n> > > location because of keep_alive messages. Now, it is possible after the\n> > > restart we send the old start_lsn location because the replication\n> > > origin was not updated before restart but we handle that case in the\n> > > server by starting from the last confirmed location. See below code:\n> > >\n> > > CreateDecodingContext()\n> > > {\n> > > ..\n> > > else if (start_lsn < slot->data.confirmed_flush)\n> > > ..\n> >\n> > Good point. Probably one minor thing that is different from the\n> > transaction where the apply worker applied an empty transaction is a\n> > case where the server restarts/crashes before sending an\n> > acknowledgment of the flush location. That is, in the case of the\n> > empty transaction, the publisher sends an empty transaction again. On\n> > the other hand in the case of skipping the transaction, a non-empty\n> > transaction will be sent again but skip_xid is already changed or\n> > cleared, therefore the user will have to specify skip_xid again. If we\n> > write replication origin WAL record to advance the origin lsn, it\n> > reduces the possibility of that. 
But I think it’s a very minor case so\n> > we won’t need to deal with that.\n> >\n>\n> Yeah, in the worst case, it will lead to conflict again and the user\n> needs to set the xid again.\n\nOn second thought, the same is true for other cases, for example,\npreparing the transaction and clearing skip_xid while handling a\nprepare message. That is, currently we don't clear skip_xid while\nhandling a prepare message but do that while handling commit/rollback\nprepared message, in order to avoid the worst case. If we do both\nwhile handling a prepare message and the server crashes between them,\nit ends up that skip_xid is cleared and the transaction will be\nresent, which is identical to the worst-case above. Therefore, if we\naccept this situation because of its low probability, probably we can\ndo the same things for other cases too, which makes the patch simple\nespecially for prepare and commit/rollback-prepared cases. What do you\nthink?\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 11 Jan 2022 17:20:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > I was thinking what if we don't advance origin explicitly in this\n> > > > case? Actually, that will be no different than the transactions where\n> > > > the apply worker doesn't apply any change because the initial sync is\n> > > > in progress (see should_apply_changes_for_rel()) or we have received\n> > > > an empty transaction. In those cases also, the origin lsn won't be\n> > > > advanced even though we acknowledge the advanced last_received\n> > > > location because of keep_alive messages. Now, it is possible after the\n> > > > restart we send the old start_lsn location because the replication\n> > > > origin was not updated before restart but we handle that case in the\n> > > > server by starting from the last confirmed location. See below code:\n> > > >\n> > > > CreateDecodingContext()\n> > > > {\n> > > > ..\n> > > > else if (start_lsn < slot->data.confirmed_flush)\n> > > > ..\n> > >\n> > > Good point. Probably one minor thing that is different from the\n> > > transaction where the apply worker applied an empty transaction is a\n> > > case where the server restarts/crashes before sending an\n> > > acknowledgment of the flush location. That is, in the case of the\n> > > empty transaction, the publisher sends an empty transaction again. On\n> > > the other hand in the case of skipping the transaction, a non-empty\n> > > transaction will be sent again but skip_xid is already changed or\n> > > cleared, therefore the user will have to specify skip_xid again. If we\n> > > write replication origin WAL record to advance the origin lsn, it\n> > > reduces the possibility of that. 
But I think it’s a very minor case so\n> > > we won’t need to deal with that.\n> > >\n> >\n> > Yeah, in the worst case, it will lead to conflict again and the user\n> > needs to set the xid again.\n>\n> On second thought, the same is true for other cases, for example,\n> preparing the transaction and clearing skip_xid while handling a\n> prepare message. That is, currently we don't clear skip_xid while\n> handling a prepare message but do that while handling commit/rollback\n> prepared message, in order to avoid the worst case. If we do both\n> while handling a prepare message and the server crashes between them,\n> it ends up that skip_xid is cleared and the transaction will be\n> resent, which is identical to the worst-case above.\n>\n\nHow are you thinking to update the skip xid before prepare? If we do\nit in the same transaction then the changes in the catalog will be\npart of the prepared xact but won't be committed. Now, say if we do it\nafter prepare, then the situation won't be the same because after\nrestart the same xact won't appear again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 15:38:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Few other comments on the latest patch:\n> > =================================\n> > 1.\n> > A conflict will produce an error and will stop the replication; it must be\n> > resolved manually by the user. Details about the conflict can be found in\n> > - the subscriber's server log.\n> > + <xref linkend=\"monitoring-pg-stat-subscription-workers\"/> as well as the\n> > + subscriber's server log.\n> >\n> > Can we slightly change the modified line to: \"Details about the\n> > conflict can be found in <xref\n> > linkend=\"monitoring-pg-stat-subscription-workers\"/> and the\n> > subscriber's server log.\"?\n>\n> Will fix it.\n>\n> > I think we can commit this change\n> > separately as this is true even without this patch.\n>\n> Right. It seems an oversight of 8d74fc96db. I've attached the patch.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jan 2022 15:40:59 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 11, 2022 at 3:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > I was thinking what if we don't advance origin explicitly in this\n> > > > > case? Actually, that will be no different than the transactions where\n> > > > > the apply worker doesn't apply any change because the initial sync is\n> > > > > in progress (see should_apply_changes_for_rel()) or we have received\n> > > > > an empty transaction. In those cases also, the origin lsn won't be\n> > > > > advanced even though we acknowledge the advanced last_received\n> > > > > location because of keep_alive messages. Now, it is possible after the\n> > > > > restart we send the old start_lsn location because the replication\n> > > > > origin was not updated before restart but we handle that case in the\n> > > > > server by starting from the last confirmed location. See below code:\n> > > > >\n> > > > > CreateDecodingContext()\n> > > > > {\n> > > > > ..\n> > > > > else if (start_lsn < slot->data.confirmed_flush)\n> > > > > ..\n> > > >\n> > > > Good point. Probably one minor thing that is different from the\n> > > > transaction where the apply worker applied an empty transaction is a\n> > > > case where the server restarts/crashes before sending an\n> > > > acknowledgment of the flush location. That is, in the case of the\n> > > > empty transaction, the publisher sends an empty transaction again. 
On\n> > > > the other hand in the case of skipping the transaction, a non-empty\n> > > > transaction will be sent again but skip_xid is already changed or\n> > > > cleared, therefore the user will have to specify skip_xid again. If we\n> > > > write replication origin WAL record to advance the origin lsn, it\n> > > > reduces the possibility of that. But I think it’s a very minor case so\n> > > > we won’t need to deal with that.\n> > > >\n> > >\n> > > Yeah, in the worst case, it will lead to conflict again and the user\n> > > needs to set the xid again.\n> >\n> > On second thought, the same is true for other cases, for example,\n> > preparing the transaction and clearing skip_xid while handling a\n> > prepare message. That is, currently we don't clear skip_xid while\n> > handling a prepare message but do that while handling commit/rollback\n> > prepared message, in order to avoid the worst case. If we do both\n> > while handling a prepare message and the server crashes between them,\n> > it ends up that skip_xid is cleared and the transaction will be\n> > resent, which is identical to the worst-case above.\n> >\n>\n> How are you thinking to update the skip xid before prepare? If we do\n> it in the same transaction then the changes in the catalog will be\n> part of the prepared xact but won't be committed. Now, say if we do it\n> after prepare, then the situation won't be the same because after\n> restart the same xact won't appear again.\n\nI was thinking to commit the catalog change first in a separate\ntransaction while not updating origin LSN and then prepare an empty\ntransaction while updating origin LSN. If the server crashes between\nthem, the skip_xid is cleared but the transaction will be resent.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 12 Jan 2022 09:19:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 11, 2022 at 7:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 8:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jan 10, 2022 at 8:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Few other comments on the latest patch:\n> > > =================================\n> > > 1.\n> > > A conflict will produce an error and will stop the replication; it must be\n> > > resolved manually by the user. Details about the conflict can be found in\n> > > - the subscriber's server log.\n> > > + <xref linkend=\"monitoring-pg-stat-subscription-workers\"/> as well as the\n> > > + subscriber's server log.\n> > >\n> > > Can we slightly change the modified line to: \"Details about the\n> > > conflict can be found in <xref\n> > > linkend=\"monitoring-pg-stat-subscription-workers\"/> and the\n> > > subscriber's server log.\"?\n> >\n> > Will fix it.\n> >\n> > > I think we can commit this change\n> > > separately as this is true even without this patch.\n> >\n> > Right. It seems an oversight of 8d74fc96db. I've attached the patch.\n> >\n>\n> Pushed.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 12 Jan 2022 09:19:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On second thought, the same is true for other cases, for example,\n> > > preparing the transaction and clearing skip_xid while handling a\n> > > prepare message. That is, currently we don't clear skip_xid while\n> > > handling a prepare message but do that while handling commit/rollback\n> > > prepared message, in order to avoid the worst case. If we do both\n> > > while handling a prepare message and the server crashes between them,\n> > > it ends up that skip_xid is cleared and the transaction will be\n> > > resent, which is identical to the worst-case above.\n> > >\n> >\n> > How are you thinking to update the skip xid before prepare? If we do\n> > it in the same transaction then the changes in the catalog will be\n> > part of the prepared xact but won't be committed. Now, say if we do it\n> > after prepare, then the situation won't be the same because after\n> > restart the same xact won't appear again.\n>\n> I was thinking to commit the catalog change first in a separate\n> transaction while not updating origin LSN and then prepare an empty\n> transaction while updating origin LSN.\n>\n\nBut, won't it complicate the handling if in the future we try to\nenhance this API such that it skips partial changes like skipping only\nfor particular relation(s) or particular operations as discussed\npreviously in this thread?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 08:51:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On second thought, the same is true for other cases, for example,\n> > > > preparing the transaction and clearing skip_xid while handling a\n> > > > prepare message. That is, currently we don't clear skip_xid while\n> > > > handling a prepare message but do that while handling commit/rollback\n> > > > prepared message, in order to avoid the worst case. If we do both\n> > > > while handling a prepare message and the server crashes between them,\n> > > > it ends up that skip_xid is cleared and the transaction will be\n> > > > resent, which is identical to the worst-case above.\n> > > >\n> > >\n> > > How are you thinking to update the skip xid before prepare? If we do\n> > > it in the same transaction then the changes in the catalog will be\n> > > part of the prepared xact but won't be committed. Now, say if we do it\n> > > after prepare, then the situation won't be the same because after\n> > > restart the same xact won't appear again.\n> >\n> > I was thinking to commit the catalog change first in a separate\n> > transaction while not updating origin LSN and then prepare an empty\n> > transaction while updating origin LSN.\n> >\n>\n> But, won't it complicate the handling if in the future we try to\n> enhance this API such that it skips partial changes like skipping only\n> for particular relation(s) or particular operations as discussed\n> previously in this thread?\n\nRight. 
I was thinking that if we accept the situation where the user\nhas to set skip_xid again in case the server crashes, we might also be\nable to accept the situation where the user has to clear skip_xid\nin case the server crashes. But it seems the former is less\nproblematic.\n\nI've attached an updated patch that incorporates all the comments I got so far.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 12 Jan 2022 15:02:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 10, 2022 at 6:27 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Jan 7, 2022 at 11:23 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jan 7, 2022 at 10:04 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 5, 2022 at 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Dec 27, 2021 at 9:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Dec 16, 2021 at 2:42 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Thu, Dec 16, 2021 at 2:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Thu, Dec 16, 2021 at 10:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Thu, Dec 16, 2021 at 11:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > > I thought we just want to lock before clearing the skip_xid something\n> > > > > > > > > like take the lock, check if the skip_xid in the catalog is the same\n> > > > > > > > > as we have skipped, if it is the same then clear it, otherwise, leave\n> > > > > > > > > it as it is. How will that disallow users to change skip_xid when we\n> > > > > > > > > are skipping changes?\n> > > > > > > >\n> > > > > > > > Oh I thought we wanted to keep holding the lock while skipping changes\n> > > > > > > > (changing skip_xid requires acquiring the lock).\n> > > > > > > >\n> > > > > > > > So if skip_xid is already changed, the apply worker would do\n> > > > > > > > replorigin_advance() with WAL logging, instead of committing the\n> > > > > > > > catalog change?\n> > > > > > > >\n> > > > > > >\n> > > > > > > Right. BTW, how are you planning to advance the origin? 
Normally, a\n> > > > > > > commit transaction would do it but when we are skipping all changes,\n> > > > > > > the commit might not do it as there won't be any transaction id\n> > > > > > > assigned.\n> > > > > >\n> > > > > > I've not tested it yet but replorigin_advance() with wal_log = true\n> > > > > > seems to work for this case.\n> > > > >\n> > > > > I've tested it and realized that we cannot use replorigin_advance()\n> > > > > for this purpose without changes. That is, the current\n> > > > > replorigin_advance() doesn't allow to advance the origin by the owner:\n> > > > >\n> > > > > /* Make sure it's not used by somebody else */\n> > > > > if (replication_state->acquired_by != 0)\n> > > > > {\n> > > > > ereport(ERROR,\n> > > > > (errcode(ERRCODE_OBJECT_IN_USE),\n> > > > > errmsg(\"replication origin with OID %d is already\n> > > > > active for PID %d\",\n> > > > > replication_state->roident,\n> > > > > replication_state->acquired_by)));\n> > > > > }\n> > > > >\n> > > > > So we need to change it so that the origin owner can advance its\n> > > > > origin, which makes sense to me.\n> > > > >\n> > > > > Also, when we have to update the origin instead of committing the\n> > > > > catalog change while updating the origin, we cannot record the origin\n> > > > > timestamp.\n> > > > >\n> > > >\n> > > > Is it because we currently update the origin timestamp with commit record?\n> > >\n> > > Yes.\n> > >\n> > > >\n> > > > > This behavior makes sense to me because we skipped the\n> > > > > transaction. But ISTM it’s not good if we emit the origin timestamp\n> > > > > only when directly updating the origin. 
So probably we need to always\n> > > > > omit origin timestamp.\n> > > > >\n> > > >\n> > > > Do you mean to say that you want to omit it even when we are\n> > > > committing the changes?\n> > >\n> > > Yes, it would be better to record only origin lsn in terms of consistency.\n> > >\n> > > >\n> > > > > Apart from that, I'm vaguely concerned that the logic seems to be\n> > > > > getting complex. Probably it comes from the fact that we store\n> > > > > skip_xid in the catalog and update the catalog to clear/set the\n> > > > > skip_xid. It might be worth revisiting the idea of storing skip_xid on\n> > > > > shmem (e.g., ReplicationState)?\n> > > > >\n> > > >\n> > > > IIRC, the problem with that idea was that we won't remember skip_xid\n> > > > information after server restart and the user won't even know that it\n> > > > has to set it again.\n> > >\n> > > Right, I agree that it’s not convenient when the server restarts or\n> > > crashes, but these problems could not be critical in the situation\n> > > where users have to use this feature; the subscriber already entered\n> > > an error loop so they can know xid again and it’s an uncommon case\n> > > that they need to restart during skipping changes.\n> > >\n> > > Anyway, I'll submit an updated patch soon so we can discuss complexity\n> > > vs. convenience.\n> >\n> > Attached an updated patch. Please review it.\n\nThank you for the comments!\n\n>\n> Thanks for the updated patch, few comments:\n> 1) Should this be case insensitive to support NONE too:\n> + /* Setting xid = NONE is treated as resetting xid */\n> + if (strcmp(xid_str, \"none\") == 0)\n> + xid = InvalidTransactionId;\n\nI think the string value is always lowercase so we don't need to use\nstrcasecmp here.\n\n>\n> 2) Can we have an option to specify last_error_xid of\n> pg_stat_subscription_workers. 
Something like:\n> alter subscription sub1 skip ( XID = 'last_subscription_error');\n>\n> When the user specified last_subscription_error, it should pick\n> last_error_xid from pg_stat_subscription_workers.\n> As this operation is a critical operation, if there is an option which\n> could automatically pick and set from pg_stat_subscription_workers, it\n> would be useful.\n\nAs I mentioned before in another mail, I think we can do that in a\nseparate patch.\n\n>\n> 3) Currently the following syntax is being supported, I felt this\n> should throw an error:\n> postgres=# alter subscription sub1 set ( XID = 100);\n> ALTER SUBSCRIPTION\n\nFixed.\n\n>\n> 4) You might need to rebase the patch:\n> git am v2-0001-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transac.patch\n> Applying: Add ALTER SUBSCRIPTION ... SKIP to skip the transaction on\n> subscriber nodes\n> error: patch failed: doc/src/sgml/logical-replication.sgml:333\n> error: doc/src/sgml/logical-replication.sgml: patch does not apply\n> Patch failed at 0001 Add ALTER SUBSCRIPTION ... SKIP to skip the\n> transaction on subscriber nodes\n> hint: Use 'git am --show-current-patch=diff' to see the failed patch\n>\n> 5) You might have to rename 027_skip_xact to 028_skip_xact as\n> 027_nosuperuser.pl already exists\n> diff --git a/src/test/subscription/t/027_skip_xact.pl\n> b/src/test/subscription/t/027_skip_xact.pl\n> new file mode 100644\n> index 0000000000..a63c9c345e\n> --- /dev/null\n> +++ b/src/test/subscription/t/027_skip_xact.pl\n\nI've resolved these conflicts.\n\nThese comments are incorporated into the latest v3 patch I just submitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoD9JXah2V8uFURUpZbK_ewsut%2Bjb1ESm6YQkrhQm3nJRg%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 12 Jan 2022 15:03:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 11:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 12, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On second thought, the same is true for other cases, for example,\n> > > > > preparing the transaction and clearing skip_xid while handling a\n> > > > > prepare message. That is, currently we don't clear skip_xid while\n> > > > > handling a prepare message but do that while handling commit/rollback\n> > > > > prepared message, in order to avoid the worst case. If we do both\n> > > > > while handling a prepare message and the server crashes between them,\n> > > > > it ends up that skip_xid is cleared and the transaction will be\n> > > > > resent, which is identical to the worst-case above.\n> > > > >\n> > > >\n> > > > How are you thinking to update the skip xid before prepare? If we do\n> > > > it in the same transaction then the changes in the catalog will be\n> > > > part of the prepared xact but won't be committed. Now, say if we do it\n> > > > after prepare, then the situation won't be the same because after\n> > > > restart the same xact won't appear again.\n> > >\n> > > I was thinking to commit the catalog change first in a separate\n> > > transaction while not updating origin LSN and then prepare an empty\n> > > transaction while updating origin LSN.\n> > >\n> >\n> > But, won't it complicate the handling if in the future we try to\n> > enhance this API such that it skips partial changes like skipping only\n> > for particular relation(s) or particular operations as discussed\n> > previously in this thread?\n>\n> Right. 
I was thinking that if we accept the situation that the user\n> has to set skip_xid again in case of the server crashes, we might be\n> able to accept also the situation that the user has to clear skip_xid\n> in a case of the server crashes. But it seems the former is less\n> problematic.\n>\n> I've attached an updated patch that incorporated all comments I got so far.\n\nThanks for the updated patch, few comments:\n1) Currently skip xid is not displayed in describe subscriptions, can\nwe include it too:\n\\dRs+ sub1\n List of subscriptions\n Name | Owner | Enabled | Publication | Binary | Streaming | Two\nphase commit | Synchronous commit | Conninfo\n------+---------+---------+-------------+--------+-----------+------------------+--------------------+--------------------------------\n sub1 | vignesh | t | {pub1} | f | f | e\n | off | dbname=postgres host=localhost\n(1 row)\n\n2) This import \"use PostgreSQL::Test::Utils;\" is not required:\n+# Tests for skipping logical replication transactions.\n+use strict;\n+use warnings;\n+use PostgreSQL::Test::Cluster;\n+use PostgreSQL::Test::Utils;\n+use Test::More tests => 6;\n\n3) Some of the comments uses a punctuation mark and some of them does\nnot use, Should we keep it consistent:\n+ # Wait for worker error\n+ $node_subscriber->poll_query_until(\n+ 'postgres',\n\n+ # Set skip xid\n+ $node_subscriber->safe_psql(\n+ 'postgres',\n\n+# Create publisher node.\n+my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n+$node_publisher->init(allows_streaming => 'logical');\n\n\n+# Create subscriber node.\n+my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n\n4) Should this be changed:\n+ * True if we are skipping all data modification changes (INSERT,\nUPDATE, etc.) of\n+ * the specified transaction at MySubscription->skipxid. 
Once we\nstart skipping\n+ * changes, we don't stop it until the we skip all changes of the\ntransaction even\n+ * if pg_subscription is updated that and MySubscription->skipxid\ngets changed or\nto:\n+ * True if we are skipping all data modification changes (INSERT,\nUPDATE, etc.) of\n+ * the specified transaction at MySubscription->skipxid. Once we\nstart skipping\n+ * changes, we don't stop it until we skip all changes of the transaction even\n+ * if pg_subscription is updated that and MySubscription->skipxid\ngets changed or\n\nIn \"stop it until the we skip all changes\", here the is not required.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 12 Jan 2022 19:40:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 12, 2022 2:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch that incorporated all comments I got so far.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments:\r\n\r\n1)\r\n+ Skip applying changes of the particular transaction. If incoming data\r\n\r\nShould \"Skip\" be \"Skips\" ?\r\n\r\n2)\r\n+ prepared by enabling <literal>two_phase</literal> on susbscriber. After h\r\n+ the logical replication successfully skips the transaction, the transaction\r\n\r\nThe \"h\" after word \"After\" seems redundant.\r\n\r\n3)\r\n+ Skipping the whole transaction includes skipping the cahnge that may not violate\r\n\r\n\"cahnge\" should be \"changes\" I think.\r\n\r\n4)\r\n+/*\r\n+ * True if we are skipping all data modification changes (INSERT, UPDATE, etc.) of\r\n+ * the specified transaction at MySubscription->skipxid. Once we start skipping\r\n...\r\n+ */\r\n+static TransactionId skipping_xid = InvalidTransactionId;\r\n+#define is_skipping_changes() (TransactionIdIsValid(skipping_xid))\r\n\r\nMaybe we should modify this comment. Something like:\r\nskipping_xid is valid if we are skipping all data modification changes ...\r\n\r\n5)\r\n+\t\t\t\t\tif (!superuser())\r\n+\t\t\t\t\t\tereport(ERROR,\r\n+\t\t\t\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\r\n+\t\t\t\t\t\t\t\t errmsg(\"must be superuser to set %s\", \"skip_xid\")));\r\n\r\nShould we change the message to \"must be superuser to skip xid\"?\r\nBecause the SQL stmt is \"ALTER SUBSCRIPTION ... SKIP (xid = XXX)\".\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Thu, 13 Jan 2022 01:07:49 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 12, 2022 at 11:10 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Jan 12, 2022 at 11:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 12, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On second thought, the same is true for other cases, for example,\n> > > > > > preparing the transaction and clearing skip_xid while handling a\n> > > > > > prepare message. That is, currently we don't clear skip_xid while\n> > > > > > handling a prepare message but do that while handling commit/rollback\n> > > > > > prepared message, in order to avoid the worst case. If we do both\n> > > > > > while handling a prepare message and the server crashes between them,\n> > > > > > it ends up that skip_xid is cleared and the transaction will be\n> > > > > > resent, which is identical to the worst-case above.\n> > > > > >\n> > > > >\n> > > > > How are you thinking to update the skip xid before prepare? If we do\n> > > > > it in the same transaction then the changes in the catalog will be\n> > > > > part of the prepared xact but won't be committed. 
Now, say if we do it\n> > > > > after prepare, then the situation won't be the same because after\n> > > > > restart the same xact won't appear again.\n> > > >\n> > > > I was thinking to commit the catalog change first in a separate\n> > > > transaction while not updating origin LSN and then prepare an empty\n> > > > transaction while updating origin LSN.\n> > > >\n> > >\n> > > But, won't it complicate the handling if in the future we try to\n> > > enhance this API such that it skips partial changes like skipping only\n> > > for particular relation(s) or particular operations as discussed\n> > > previously in this thread?\n> >\n> > Right. I was thinking that if we accept the situation that the user\n> > has to set skip_xid again in case of the server crashes, we might be\n> > able to accept also the situation that the user has to clear skip_xid\n> > in a case of the server crashes. But it seems the former is less\n> > problematic.\n> >\n> > I've attached an updated patch that incorporated all comments I got so far.\n>\n> Thanks for the updated patch, few comments:\n\nThank you for the comments!\n\n> 1) Currently skip xid is not displayed in describe subscriptions, can\n> we include it too:\n> \\dRs+ sub1\n> List of subscriptions\n> Name | Owner | Enabled | Publication | Binary | Streaming | Two\n> phase commit | Synchronous commit | Conninfo\n> ------+---------+---------+-------------+--------+-----------+------------------+--------------------+--------------------------------\n> sub1 | vignesh | t | {pub1} | f | f | e\n> | off | dbname=postgres host=localhost\n> (1 row)\n>\n> 2) This import \"use PostgreSQL::Test::Utils;\" is not required:\n> +# Tests for skipping logical replication transactions.\n> +use strict;\n> +use warnings;\n> +use PostgreSQL::Test::Cluster;\n> +use PostgreSQL::Test::Utils;\n> +use Test::More tests => 6;\n>\n> 3) Some of the comments uses a punctuation mark and some of them does\n> not use, Should we keep it consistent:\n> + # Wait for 
worker error\n> + $node_subscriber->poll_query_until(\n> + 'postgres',\n>\n> + # Set skip xid\n> + $node_subscriber->safe_psql(\n> + 'postgres',\n>\n> +# Create publisher node.\n> +my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n> +$node_publisher->init(allows_streaming => 'logical');\n>\n>\n> +# Create subscriber node.\n> +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n>\n> 4) Should this be changed:\n> + * True if we are skipping all data modification changes (INSERT,\n> UPDATE, etc.) of\n> + * the specified transaction at MySubscription->skipxid. Once we\n> start skipping\n> + * changes, we don't stop it until the we skip all changes of the\n> transaction even\n> + * if pg_subscription is updated that and MySubscription->skipxid\n> gets changed or\n> to:\n> + * True if we are skipping all data modification changes (INSERT,\n> UPDATE, etc.) of\n> + * the specified transaction at MySubscription->skipxid. Once we\n> start skipping\n> + * changes, we don't stop it until we skip all changes of the transaction even\n> + * if pg_subscription is updated that and MySubscription->skipxid\n> gets changed or\n>\n> In \"stop it until the we skip all changes\", here the is not required.\n>\n\nI agree with all the comments above. I've attached an updated patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 14 Jan 2022 11:19:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jan 13, 2022 at 10:07 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jan 12, 2022 2:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch that incorporated all comments I got so far.\n> >\n>\n> Thanks for updating the patch. Here are some comments:\n\nThank you for the comments!\n\n>\n> 1)\n> + Skip applying changes of the particular transaction. If incoming data\n>\n> Should \"Skip\" be \"Skips\" ?\n>\n> 2)\n> + prepared by enabling <literal>two_phase</literal> on susbscriber. After h\n> + the logical replication successfully skips the transaction, the transaction\n>\n> The \"h\" after word \"After\" seems redundant.\n>\n> 3)\n> + Skipping the whole transaction includes skipping the cahnge that may not violate\n>\n> \"cahnge\" should be \"changes\" I think.\n>\n> 4)\n> +/*\n> + * True if we are skipping all data modification changes (INSERT, UPDATE, etc.) of\n> + * the specified transaction at MySubscription->skipxid. Once we start skipping\n> ...\n> + */\n> +static TransactionId skipping_xid = InvalidTransactionId;\n> +#define is_skipping_changes() (TransactionIdIsValid(skipping_xid))\n>\n> Maybe we should modify this comment. Something like:\n> skipping_xid is valid if we are skipping all data modification changes ...\n>\n> 5)\n> + if (!superuser())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser to set %s\", \"skip_xid\")));\n>\n> Should we change the message to \"must be superuser to skip xid\"?\n> Because the SQL stmt is \"ALTER SUBSCRIPTION ... SKIP (xid = XXX)\".\n\nI agree with all the comments above. These are incorporated into the\nlatest v4 patch I've just submitted[1].\n\nRegards,\n\n[1] postgresql.org/message-id/CAD21AoBZC87nY1pCaexk1uBA68JSBmy2-UqLGirT9g-RVMhjKw%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 14 Jan 2022 11:25:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 12, 2022 at 11:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Jan 12, 2022 at 11:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 12, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On second thought, the same is true for other cases, for example,\n> > > > > > > preparing the transaction and clearing skip_xid while handling a\n> > > > > > > prepare message. That is, currently we don't clear skip_xid while\n> > > > > > > handling a prepare message but do that while handling commit/rollback\n> > > > > > > prepared message, in order to avoid the worst case. If we do both\n> > > > > > > while handling a prepare message and the server crashes between them,\n> > > > > > > it ends up that skip_xid is cleared and the transaction will be\n> > > > > > > resent, which is identical to the worst-case above.\n> > > > > > >\n> > > > > >\n> > > > > > How are you thinking to update the skip xid before prepare? If we do\n> > > > > > it in the same transaction then the changes in the catalog will be\n> > > > > > part of the prepared xact but won't be committed. 
Now, say if we do it\n> > > > > > after prepare, then the situation won't be the same because after\n> > > > > > restart the same xact won't appear again.\n> > > > >\n> > > > > I was thinking to commit the catalog change first in a separate\n> > > > > transaction while not updating origin LSN and then prepare an empty\n> > > > > transaction while updating origin LSN.\n> > > > >\n> > > >\n> > > > But, won't it complicate the handling if in the future we try to\n> > > > enhance this API such that it skips partial changes like skipping only\n> > > > for particular relation(s) or particular operations as discussed\n> > > > previously in this thread?\n> > >\n> > > Right. I was thinking that if we accept the situation that the user\n> > > has to set skip_xid again in case of the server crashes, we might be\n> > > able to accept also the situation that the user has to clear skip_xid\n> > > in a case of the server crashes. But it seems the former is less\n> > > problematic.\n> > >\n> > > I've attached an updated patch that incorporated all comments I got so far.\n> >\n> > Thanks for the updated patch, few comments:\n>\n> Thank you for the comments!\n>\n> > 1) Currently skip xid is not displayed in describe subscriptions, can\n> > we include it too:\n> > \\dRs+ sub1\n> > List of subscriptions\n> > Name | Owner | Enabled | Publication | Binary | Streaming | Two\n> > phase commit | Synchronous commit | Conninfo\n> > ------+---------+---------+-------------+--------+-----------+------------------+--------------------+--------------------------------\n> > sub1 | vignesh | t | {pub1} | f | f | e\n> > | off | dbname=postgres host=localhost\n> > (1 row)\n> >\n> > 2) This import \"use PostgreSQL::Test::Utils;\" is not required:\n> > +# Tests for skipping logical replication transactions.\n> > +use strict;\n> > +use warnings;\n> > +use PostgreSQL::Test::Cluster;\n> > +use PostgreSQL::Test::Utils;\n> > +use Test::More tests => 6;\n> >\n> > 3) Some of the comments uses a punctuation 
mark and some of them does\n> > not use, Should we keep it consistent:\n> > + # Wait for worker error\n> > + $node_subscriber->poll_query_until(\n> > + 'postgres',\n> >\n> > + # Set skip xid\n> > + $node_subscriber->safe_psql(\n> > + 'postgres',\n> >\n> > +# Create publisher node.\n> > +my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n> > +$node_publisher->init(allows_streaming => 'logical');\n> >\n> >\n> > +# Create subscriber node.\n> > +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n> >\n> > 4) Should this be changed:\n> > + * True if we are skipping all data modification changes (INSERT,\n> > UPDATE, etc.) of\n> > + * the specified transaction at MySubscription->skipxid. Once we\n> > start skipping\n> > + * changes, we don't stop it until the we skip all changes of the\n> > transaction even\n> > + * if pg_subscription is updated that and MySubscription->skipxid\n> > gets changed or\n> > to:\n> > + * True if we are skipping all data modification changes (INSERT,\n> > UPDATE, etc.) of\n> > + * the specified transaction at MySubscription->skipxid. Once we\n> > start skipping\n> > + * changes, we don't stop it until we skip all changes of the transaction even\n> > + * if pg_subscription is updated that and MySubscription->skipxid\n> > gets changed or\n> >\n> > In \"stop it until the we skip all changes\", here the is not required.\n> >\n>\n> I agree with all the comments above. 
I've attached an updated patch.\n\nThanks for the updated patch, few minor comments:\n1) Should \"SKIP\" be \"SKIP (\" here:\n@@ -1675,7 +1675,7 @@ psql_completion(const char *text, int start, int end)\n /* ALTER SUBSCRIPTION <name> */\n else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\n COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\n- \"RENAME TO\", \"REFRESH\nPUBLICATION\", \"SET\",\n+ \"RENAME TO\", \"REFRESH\nPUBLICATION\", \"SET\", \"SKIP\",\n\n2) We could add a test for this if possible:\n+ case ALTER_SUBSCRIPTION_SKIP:\n+ {\n+ if (!superuser())\n+ ereport(ERROR,\n+\n(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must\nbe superuser to skip transaction\")));\n\n3) There was one typo in the commit message, transaciton should be transaction:\nAfter skipping the transaciton the apply worker clears\npg_subscription.subskipxid.\n\nAnother small typo, susbscriber should be subscriber:\n+ prepared by enabling <literal>two_phase</literal> on susbscriber. After\n+ the logical replication successfully skips the transaction, the\ntransaction\n\n4) Should skipsubxid be mentioned as subskipxid here:\n+ * Clear the subskipxid of pg_subscription catalog. This catalog\n+ * update must be committed before finishing prepared transaction.\n+ * Because otherwise, in a case where the server crashes between\n+ * finishing prepared transaction and the catalog update, COMMIT\n+ * PREPARED won’t be resent but skipsubxid is left.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 14 Jan 2022 17:35:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I agree with all the comments above. I've attached an updated patch.\n>\n\nReview comments\n================\n1.\n+\n+ <para>\n+ In this case, you need to consider changing the data on the\nsubscriber so that it\n\nThe starting of this sentence doesn't make sense to me. How about\nchanging it like: \"To resolve conflicts, you need to ...\n\n2.\n+ <structname>pg_subscription</structname>.<structfield>subskipxid</structfield>)\n+ is cleared. See <xref linkend=\"logical-replication-conflicts\"/> for\n+ the details of logical replication conflicts.\n+ </para>\n+\n+ <para>\n+ <replaceable>skip_option</replaceable> specifies options for\nthis operation.\n+ The supported option is:\n+\n+ <variablelist>\n+ <varlistentry>\n+ <term><literal>xid</literal> (<type>xid</type>)</term>\n+ <listitem>\n+ <para>\n+ Specifies the ID of the transaction whose changes are to be skipped\n+ by the logical replication worker. Setting\n<literal>NONE</literal> resets\n+ the transaction ID.\n+ </para>\n\nEmpty spaces after line finish are inconsistent. I personally use a\nsingle space before a new line but I see that others use two spaces\nand the nearby documentation also uses two spaces in this regard so I\nam fine either way but let's be consistent.\n\n3.\n+ case ALTER_SUBSCRIPTION_SKIP:\n+ {\n+ if (!superuser())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n+ errmsg(\"must be superuser to skip transaction\")));\n+\n+ parse_subscription_options(pstate, stmt->options, SUBOPT_XID, &opts);\n+\n+ if (IsSet(opts.specified_opts, SUBOPT_XID))\n..\n..\n\nIs there a case when the above 'if (IsSet(..' won't be true? If not,\nthen probably there should be Assert instead of 'if'.\n\n4.\n+static TransactionId skipping_xid = InvalidTransactionId;\n\nI find this variable name bit odd. 
Can we name it skip_xid?\n\n5.\n+ * skipping_xid is valid if we are skipping all data modification changes\n+ * (INSERT, UPDATE, etc.) of the specified transaction at\nMySubscription->skipxid.\n+ * Once we start skipping changes, we don't stop it until we skip all changes\n\nI think it would be better to write the first line of comment as: \"We\nenable skipping all data modification changes (INSERT, UPDATE, etc.)\nfor the subscription if the user has specified skip_xid. Once we ...\"\n\n6.\n+static void\n+maybe_start_skipping_changes(TransactionId xid)\n+{\n+ Assert(!is_skipping_changes());\n+ Assert(!in_remote_transaction);\n+ Assert(!in_streamed_transaction);\n+\n+ /* Make sure subscription cache is up-to-date */\n+ maybe_reread_subscription();\n\nWhy do we need to update the cache here by calling\nmaybe_reread_subscription() and at other places in the patch? It is\nsufficient to get the skip_xid value at the start of the worker via\nGetSubscription().\n\n7. In maybe_reread_subscription(), isn't there a need to check whether\nskip_xid is changed where we exit and launch the worker and compare\nother subscription parameters?\n\n8.\n+static void\n+clear_subscription_skip_xid(TransactionId xid, XLogRecPtr origin_lsn,\n+ TimestampTz origin_timestamp)\n+{\n+ Relation rel;\n+ Form_pg_subscription subform;\n+ HeapTuple tup;\n+ bool nulls[Natts_pg_subscription];\n+ bool replaces[Natts_pg_subscription];\n+ Datum values[Natts_pg_subscription];\n+\n+ memset(values, 0, sizeof(values));\n+ memset(nulls, false, sizeof(nulls));\n+ memset(replaces, false, sizeof(replaces));\n+\n+ if (!IsTransactionState())\n+ StartTransactionCommand();\n+\n+ LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,\n+ AccessShareLock);\n\nIt is important to add a comment as to why we need a lock here.\n\n9.\n+ * needs to be set subskipxid again. 
We can reduce the possibility by\n+ * logging a replication origin WAL record to advance the origin LSN\n+ * instead but it doesn't seem to be worth since it's a very minor case.\n\nYou can also add here that there is no way to advance origin_timestamp\nso that would be inconsistent.\n\n10.\n+clear_subscription_skip_xid(TransactionId xid, XLogRecPtr origin_lsn,\n+ TimestampTz origin_timestamp)\n{\n..\n..\n+ if (!IsTransactionState())\n+ StartTransactionCommand();\n..\n..\n+ CommitTransactionCommand();\n..\n}\n\nThe transaction should be committed in this function if it is started\nhere otherwise it should be the responsibility of the caller to commit\nit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Jan 2022 15:53:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks for the updated patch, few minor comments:\n> 1) Should \"SKIP\" be \"SKIP (\" here:\n> @@ -1675,7 +1675,7 @@ psql_completion(const char *text, int start, int end)\n> /* ALTER SUBSCRIPTION <name> */\n> else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\n> COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\n> - \"RENAME TO\", \"REFRESH\n> PUBLICATION\", \"SET\",\n> + \"RENAME TO\", \"REFRESH\n> PUBLICATION\", \"SET\", \"SKIP\",\n>\n\nWon't the another rule as follows added by patch sufficient for what\nyou are asking?\n+ /* ALTER SUBSCRIPTION <name> SKIP */\n+ else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny, \"SKIP\"))\n+ COMPLETE_WITH(\"(\");\n\nI might be missing something but why do you think the handling of SKIP\nbe any different than what we are doing for SET?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 15 Jan 2022 15:58:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 15, 2022 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I agree with all the comments above. I've attached an updated patch.\n> >\n>\n> Review comments\n> ================\n\nThank you for the comments!\n\n> 1.\n> +\n> + <para>\n> + In this case, you need to consider changing the data on the\n> subscriber so that it\n>\n> The starting of this sentence doesn't make sense to me. How about\n> changing it like: \"To resolve conflicts, you need to ...\n>\n\nFixed.\n\n> 2.\n> + <structname>pg_subscription</structname>.<structfield>subskipxid</structfield>)\n> + is cleared. See <xref linkend=\"logical-replication-conflicts\"/> for\n> + the details of logical replication conflicts.\n> + </para>\n> +\n> + <para>\n> + <replaceable>skip_option</replaceable> specifies options for\n> this operation.\n> + The supported option is:\n> +\n> + <variablelist>\n> + <varlistentry>\n> + <term><literal>xid</literal> (<type>xid</type>)</term>\n> + <listitem>\n> + <para>\n> + Specifies the ID of the transaction whose changes are to be skipped\n> + by the logical replication worker. Setting\n> <literal>NONE</literal> resets\n> + the transaction ID.\n> + </para>\n>\n> Empty spaces after line finish are inconsistent. I personally use a\n> single space before a new line but I see that others use two spaces\n> and the nearby documentation also uses two spaces in this regard so I\n> am fine either way but let's be consistent.\n\nFixed.\n\n>\n> 3.\n> + case ALTER_SUBSCRIPTION_SKIP:\n> + {\n> + if (!superuser())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must be superuser to skip transaction\")));\n> +\n> + parse_subscription_options(pstate, stmt->options, SUBOPT_XID, &opts);\n> +\n> + if (IsSet(opts.specified_opts, SUBOPT_XID))\n> ..\n> ..\n>\n> Is there a case when the above 'if (IsSet(..' won't be true? 
If not,\n> then probably there should be Assert instead of 'if'.\n>\n\nFixed.\n\n> 4.\n> +static TransactionId skipping_xid = InvalidTransactionId;\n>\n> I find this variable name bit odd. Can we name it skip_xid?\n>\n\nOkay, renamed.\n\n> 5.\n> + * skipping_xid is valid if we are skipping all data modification changes\n> + * (INSERT, UPDATE, etc.) of the specified transaction at\n> MySubscription->skipxid.\n> + * Once we start skipping changes, we don't stop it until we skip all changes\n>\n> I think it would be better to write the first line of comment as: \"We\n> enable skipping all data modification changes (INSERT, UPDATE, etc.)\n> for the subscription if the user has specified skip_xid. Once we ...\"\n>\n\nChanged.\n\n> 6.\n> +static void\n> +maybe_start_skipping_changes(TransactionId xid)\n> +{\n> + Assert(!is_skipping_changes());\n> + Assert(!in_remote_transaction);\n> + Assert(!in_streamed_transaction);\n> +\n> + /* Make sure subscription cache is up-to-date */\n> + maybe_reread_subscription();\n>\n> Why do we need to update the cache here by calling\n> maybe_reread_subscription() and at other places in the patch? It is\n> sufficient to get the skip_xid value at the start of the worker via\n> GetSubscription().\n\nMySubscription could be out-of-date after a user changes the catalog.\nIn non-skipping change cases, we check it when starting the\ntransaction in begin_replication_step() which is called, e.g., when\napplying an insert change. But I think we need to make sure it’s\nup-to-date at the beginning of applying changes, that is, before\nstarting a transaction. 
Otherwise, we may end up skipping the\ntransaction based on out-of-dated subscription cache.\n\nThe reason why calling calling maybe_reread_subscription in both\napply_handle_commit_prepared() and apply_handle_rollback_prepared() is\nthe same; MySubscription could be out-of-date when applying\ncommit-prepared or rollback-prepared since we have not called\nbegin_replication_step() to open a new transaction.\n\n>\n> 7. In maybe_reread_subscription(), isn't there a need to check whether\n> skip_xid is changed where we exit and launch the worker and compare\n> other subscription parameters?\n\nIIUC we relaunch the worker here when subscription parameters such as\nslot_name was changed. In the current implementation, I think that\nrelaunching the worker is not necessarily necessary when skip_xid is\nchanged. For instance, when skipping the prepared transaction, we\ndeliberately don’t clear subskipxid of pg_subscription and do that at\ncommit-prepared or rollback-prepared case. There are chances that the\nuser changes skip_xid before commit-prepared or rollback-prepared. 
But\nwe tolerate this case.\n\nAlso, in non-streaming and non-2PC cases, while skipping changes we\ndon’t call maybe_reread_subscription() until all changes are skipped.\nSo it is not possible to cancel skipping changes once it has already started.\n\n>\n> 8.\n> +static void\n> +clear_subscription_skip_xid(TransactionId xid, XLogRecPtr origin_lsn,\n> + TimestampTz origin_timestamp)\n> +{\n> + Relation rel;\n> + Form_pg_subscription subform;\n> + HeapTuple tup;\n> + bool nulls[Natts_pg_subscription];\n> + bool replaces[Natts_pg_subscription];\n> + Datum values[Natts_pg_subscription];\n> +\n> + memset(values, 0, sizeof(values));\n> + memset(nulls, false, sizeof(nulls));\n> + memset(replaces, false, sizeof(replaces));\n> +\n> + if (!IsTransactionState())\n> + StartTransactionCommand();\n> +\n> + LockSharedObject(SubscriptionRelationId, MySubscription->oid, 0,\n> + AccessShareLock);\n>\n> It is important to add a comment as to why we need a lock here.\n\nAdded.\n\n>\n> 9.\n> + * needs to be set subskipxid again. We can reduce the possibility by\n> + * logging a replication origin WAL record to advance the origin LSN\n> + * instead but it doesn't seem to be worth since it's a very minor case.\n>\n> You can also add here that there is no way to advance origin_timestamp\n> so that would be inconsistent.\n\nAdded.\n\n>\n> 10.\n> +clear_subscription_skip_xid(TransactionId xid, XLogRecPtr origin_lsn,\n> + TimestampTz origin_timestamp)\n> {\n> ..\n> ..\n> + if (!IsTransactionState())\n> + StartTransactionCommand();\n> ..\n> ..\n> + CommitTransactionCommand();\n> ..\n> }\n>\n> The transaction should be committed in this function if it is started\n> here otherwise it should be the responsibility of the caller to commit\n> it.\n\nFixed.\n\nI've attached an updated patch that incorporated these comments except\nfor 6 and 7 that we probably need more discussion on. The comments\nfrom Vignesh are also incorporated.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 17 Jan 2022 13:18:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 14, 2022 at 9:05 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 7:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 12, 2022 at 11:10 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 12, 2022 at 11:32 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jan 12, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Jan 12, 2022 at 5:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Jan 11, 2022 at 7:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Tue, Jan 11, 2022 at 1:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On second thought, the same is true for other cases, for example,\n> > > > > > > > preparing the transaction and clearing skip_xid while handling a\n> > > > > > > > prepare message. That is, currently we don't clear skip_xid while\n> > > > > > > > handling a prepare message but do that while handling commit/rollback\n> > > > > > > > prepared message, in order to avoid the worst case. If we do both\n> > > > > > > > while handling a prepare message and the server crashes between them,\n> > > > > > > > it ends up that skip_xid is cleared and the transaction will be\n> > > > > > > > resent, which is identical to the worst-case above.\n> > > > > > > >\n> > > > > > >\n> > > > > > > How are you thinking to update the skip xid before prepare? If we do\n> > > > > > > it in the same transaction then the changes in the catalog will be\n> > > > > > > part of the prepared xact but won't be committed. 
Now, say if we do it\n> > > > > > > after prepare, then the situation won't be the same because after\n> > > > > > > restart the same xact won't appear again.\n> > > > > >\n> > > > > > I was thinking to commit the catalog change first in a separate\n> > > > > > transaction while not updating origin LSN and then prepare an empty\n> > > > > > transaction while updating origin LSN.\n> > > > > >\n> > > > >\n> > > > > But, won't it complicate the handling if in the future we try to\n> > > > > enhance this API such that it skips partial changes like skipping only\n> > > > > for particular relation(s) or particular operations as discussed\n> > > > > previously in this thread?\n> > > >\n> > > > Right. I was thinking that if we accept the situation that the user\n> > > > has to set skip_xid again in case of the server crashes, we might be\n> > > > able to accept also the situation that the user has to clear skip_xid\n> > > > in a case of the server crashes. But it seems the former is less\n> > > > problematic.\n> > > >\n> > > > I've attached an updated patch that incorporated all comments I got so far.\n> > >\n> > > Thanks for the updated patch, few comments:\n> >\n> > Thank you for the comments!\n> >\n> > > 1) Currently skip xid is not displayed in describe subscriptions, can\n> > > we include it too:\n> > > \\dRs+ sub1\n> > > List of subscriptions\n> > > Name | Owner | Enabled | Publication | Binary | Streaming | Two\n> > > phase commit | Synchronous commit | Conninfo\n> > > ------+---------+---------+-------------+--------+-----------+------------------+--------------------+--------------------------------\n> > > sub1 | vignesh | t | {pub1} | f | f | e\n> > > | off | dbname=postgres host=localhost\n> > > (1 row)\n> > >\n> > > 2) This import \"use PostgreSQL::Test::Utils;\" is not required:\n> > > +# Tests for skipping logical replication transactions.\n> > > +use strict;\n> > > +use warnings;\n> > > +use PostgreSQL::Test::Cluster;\n> > > +use PostgreSQL::Test::Utils;\n> 
> > +use Test::More tests => 6;\n> > >\n> > > 3) Some of the comments uses a punctuation mark and some of them does\n> > > not use, Should we keep it consistent:\n> > > + # Wait for worker error\n> > > + $node_subscriber->poll_query_until(\n> > > + 'postgres',\n> > >\n> > > + # Set skip xid\n> > > + $node_subscriber->safe_psql(\n> > > + 'postgres',\n> > >\n> > > +# Create publisher node.\n> > > +my $node_publisher = PostgreSQL::Test::Cluster->new('publisher');\n> > > +$node_publisher->init(allows_streaming => 'logical');\n> > >\n> > >\n> > > +# Create subscriber node.\n> > > +my $node_subscriber = PostgreSQL::Test::Cluster->new('subscriber');\n> > >\n> > > 4) Should this be changed:\n> > > + * True if we are skipping all data modification changes (INSERT,\n> > > UPDATE, etc.) of\n> > > + * the specified transaction at MySubscription->skipxid. Once we\n> > > start skipping\n> > > + * changes, we don't stop it until the we skip all changes of the\n> > > transaction even\n> > > + * if pg_subscription is updated that and MySubscription->skipxid\n> > > gets changed or\n> > > to:\n> > > + * True if we are skipping all data modification changes (INSERT,\n> > > UPDATE, etc.) of\n> > > + * the specified transaction at MySubscription->skipxid. Once we\n> > > start skipping\n> > > + * changes, we don't stop it until we skip all changes of the transaction even\n> > > + * if pg_subscription is updated that and MySubscription->skipxid\n> > > gets changed or\n> > >\n> > > In \"stop it until the we skip all changes\", here the is not required.\n> > >\n> >\n> > I agree with all the comments above. 
I've attached an updated patch.\n>\n> Thanks for the updated patch, few minor comments:\n\nThank you for the comments.\n\n> 1) Should \"SKIP\" be \"SKIP (\" here:\n> @@ -1675,7 +1675,7 @@ psql_completion(const char *text, int start, int end)\n> /* ALTER SUBSCRIPTION <name> */\n> else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\n> COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\n> - \"RENAME TO\", \"REFRESH\n> PUBLICATION\", \"SET\",\n> + \"RENAME TO\", \"REFRESH\n> PUBLICATION\", \"SET\", \"SKIP\",\n\nAs Amit mentioned, it's consistent with the SET option.\n\n>\n> 2) We could add a test for this if possible:\n> + case ALTER_SUBSCRIPTION_SKIP:\n> + {\n> + if (!superuser())\n> + ereport(ERROR,\n> +\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> + errmsg(\"must\n> be superuser to skip transaction\")));\n>\n> 3) There was one typo in commit message, transaciton shoudl be transaction:\n> After skipping the transaciton the apply worker clears\n> pg_subscription.subskipxid.\n>\n> Another small typo, susbscriber should be subscriber:\n> + prepared by enabling <literal>two_phase</literal> on susbscriber. After\n> + the logical replication successfully skips the transaction, the\n> transaction\n>\n> 4) Should skipsubxid be mentioned as subskipxid here:\n> + * Clear the subskipxid of pg_subscription catalog. This catalog\n> + * update must be committed before finishing prepared transaction.\n> + * Because otherwise, in a case where the server crashes between\n> + * finishing prepared transaction and the catalog update, COMMIT\n> + * PREPARED won’t be resent but skipsubxid is left.\n>\n\nThe above comments were incorporated into the latest v5 patch I just\nsubmitted[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoCd3Y2-b67%2BpVrzrdteUmup1XG6JeHYOa5dGjh8qZ3VuQ%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 17 Jan 2022 13:20:19 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 9:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jan 15, 2022 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 6.\n> > +static void\n> > +maybe_start_skipping_changes(TransactionId xid)\n> > +{\n> > + Assert(!is_skipping_changes());\n> > + Assert(!in_remote_transaction);\n> > + Assert(!in_streamed_transaction);\n> > +\n> > + /* Make sure subscription cache is up-to-date */\n> > + maybe_reread_subscription();\n> >\n> > Why do we need to update the cache here by calling\n> > maybe_reread_subscription() and at other places in the patch? It is\n> > sufficient to get the skip_xid value at the start of the worker via\n> > GetSubscription().\n>\n> MySubscription could be out-of-date after a user changes the catalog.\n> In non-skipping change cases, we check it when starting the\n> transaction in begin_replication_step() which is called, e.g., when\n> applying an insert change. But I think we need to make sure it’s\n> up-to-date at the beginning of applying changes, that is, before\n> starting a transaction. Otherwise, we may end up skipping the\n> transaction based on out-of-dated subscription cache.\n>\n\nI thought the user would normally set skip_xid only after an error\nwhich means that the value should be as new as the time of the start\nof the worker. I am slightly worried about the cost we might need to\npay for this additional look-up in case skip_xid is not changed. Do\nyou see any valid user scenario where we might not see the required\nskip_xid? I am okay with calling this if we really need it.\n\n> >\n> > 7. In maybe_reread_subscription(), isn't there a need to check whether\n> > skip_xid is changed where we exit and launch the worker and compare\n> > other subscription parameters?\n>\n> IIUC we relaunch the worker here when subscription parameters such as\n> slot_name was changed. 
In the current implementation, I think that\n> relaunching the worker is not necessarily necessary when skip_xid is\n> changed. For instance, when skipping the prepared transaction, we\n> deliberately don’t clear subskipxid of pg_subscription and do that at\n> commit-prepared or rollback-prepared case. There are chances that the\n> user changes skip_xid before commit-prepared or rollback-prepared. But\n> we tolerate this case.\n>\n\nI think between prepare and commit prepared, the user only needs to\nchange it if there is another error in which case we will anyway\nrestart and load the new value of same. But, I understand that we\ndon't need to restart if skip_xid is changed as it might not impact\nremote connection in any way, so I am fine for not doing anything for\nthis.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 11:17:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 2:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 9:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Jan 15, 2022 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > 6.\n> > > +static void\n> > > +maybe_start_skipping_changes(TransactionId xid)\n> > > +{\n> > > + Assert(!is_skipping_changes());\n> > > + Assert(!in_remote_transaction);\n> > > + Assert(!in_streamed_transaction);\n> > > +\n> > > + /* Make sure subscription cache is up-to-date */\n> > > + maybe_reread_subscription();\n> > >\n> > > Why do we need to update the cache here by calling\n> > > maybe_reread_subscription() and at other places in the patch? It is\n> > > sufficient to get the skip_xid value at the start of the worker via\n> > > GetSubscription().\n> >\n> > MySubscription could be out-of-date after a user changes the catalog.\n> > In non-skipping change cases, we check it when starting the\n> > transaction in begin_replication_step() which is called, e.g., when\n> > applying an insert change. But I think we need to make sure it’s\n> > up-to-date at the beginning of applying changes, that is, before\n> > starting a transaction. Otherwise, we may end up skipping the\n> > transaction based on out-of-dated subscription cache.\n> >\n>\n> I thought the user would normally set skip_xid only after an error\n> which means that the value should be as new as the time of the start\n> of the worker. I am slightly worried about the cost we might need to\n> pay for this additional look-up in case skip_xid is not changed. Do\n> you see any valid user scenario where we might not see the required\n> skip_xid? I am okay with calling this if we really need it.\n\nFair point. I've changed the code accordingly.\n\n>\n> > >\n> > > 7. 
In maybe_reread_subscription(), isn't there a need to check whether\n> > > skip_xid is changed where we exit and launch the worker and compare\n> > > other subscription parameters?\n> >\n> > IIUC we relaunch the worker here when subscription parameters such as\n> > slot_name was changed. In the current implementation, I think that\n> > relaunching the worker is not necessarily necessary when skip_xid is\n> > changed. For instance, when skipping the prepared transaction, we\n> > deliberately don’t clear subskipxid of pg_subscription and do that at\n> > commit-prepared or rollback-prepared case. There are chances that the\n> > user changes skip_xid before commit-prepared or rollback-prepared. But\n> > we tolerate this case.\n> >\n>\n> I think between prepare and commit prepared, the user only needs to\n> change it if there is another error in which case we will anyway\n> restart and load the new value of same. But, I understand that we\n> don't need to restart if skip_xid is changed as it might not impact\n> remote connection in any way, so I am fine for not doing anything for\n> this.\n\nI'll leave this part for now. We can change it later if others think\nit's necessary.\n\nI've attached an updated patch. Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 17 Jan 2022 15:18:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, January 17, 2022 3:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated patch. Please review it.\r\nHi, thank you for sharing a new patch.\r\nFew comments on the v6.\r\n\r\n(1) doc/src/sgml/ref/alter_subscription.sgml\r\n\r\n+ resort. This option has no effect on the transaction that is already\r\n\r\nOne TAB exists between \"resort\" and \"This\".\r\n\r\n(2) Minor improvement suggestion of comment in src/backend/replication/logical/worker.c\r\n\r\n+ * reset during that. Also, we don't skip receiving the changes in streaming\r\n+ * cases, since we decide whether or not to skip applying the changes when\r\n\r\nI sugguest that you don't use 'streaming cases', because\r\nwhat \"streaming cases\" means sounds a bit broader than actual your implementation.\r\nWe do skip transaction of streaming cases but not during the spooling phase, right ?\r\n\r\nI suggest below.\r\n\r\n\"We don't skip receiving the changes at the phase to spool streaming transactions\"\r\n\r\n(3) in the comment of apply_handle_prepare_internal, two full-width characters.\r\n\r\n3-1\r\n+\t * won’t be resent in a case where the server crashes between them.\r\n\r\n3-2\r\n+\t * COMMIT PREPARED or ROLLBACK PREPARED. But that’s okay because this\r\n\r\nYou have full-width characters for \"won't\" and \"that's\".\r\nCould you please check ?\r\n\r\n\r\n(4) typo\r\n\r\n+ * the subscription if hte user has specified skip_xid. 
Once we start skipping\r\n\r\n\"hte\" should \"the\" ?\r\n\r\n(5)\r\n\r\nI can miss something here but, in one of\r\nthe past discussions, there seems a consensus that\r\nif the user specifies XID of a subtransaction,\r\nit would be better to skip only the subtransaction.\r\n\r\nThis time, is it out of the range of the patch ?\r\nIf so, I suggest you include some description about it\r\neither in the commit message or around codes related to it.\r\n\r\n(6)\r\n\r\nI feel it's a better idea to include a test whether\r\nto skip aborted streaming transaction clears the XID\r\nin the TAP test for this feature, in a sense to cover\r\nvarious new code paths. Did you have any special reason\r\nto omit the case ?\r\n\r\n(7)\r\n\r\nI want more explanation for the reason to restart the subscriber\r\nin the TAP test because this is not mandatory operation.\r\n(We can pass the TAP tests without this restart)\r\n\r\nFrom :\r\n# Restart the subscriber node to restart logical replication with no interval\r\n\r\nIIUC, below would be better.\r\n\r\nTo :\r\n# As an optimization to finish tests earlier, restart the subscriber with no interval,\r\n# rather than waiting for new error to laucher a new apply worker.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 17 Jan 2022 08:03:17 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, January 17, 2022 5:03 PM I wrote:\r\n> Hi, thank you for sharing a new patch.\r\n> Few comments on the v6.\r\n> \r\n> (1) doc/src/sgml/ref/alter_subscription.sgml\r\n> \r\n> + resort. This option has no effect on the transaction that is\r\n> + already\r\n> \r\n> One TAB exists between \"resort\" and \"This\".\r\n> \r\n> (2) Minor improvement suggestion of comment in\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> + * reset during that. Also, we don't skip receiving the changes in\r\n> + streaming\r\n> + * cases, since we decide whether or not to skip applying the changes\r\n> + when\r\n> \r\n> I sugguest that you don't use 'streaming cases', because what \"streaming\r\n> cases\" means sounds a bit broader than actual your implementation.\r\n> We do skip transaction of streaming cases but not during the spooling phase,\r\n> right ?\r\n> \r\n> I suggest below.\r\n> \r\n> \"We don't skip receiving the changes at the phase to spool streaming\r\n> transactions\"\r\n> \r\n> (3) in the comment of apply_handle_prepare_internal, two full-width\r\n> characters.\r\n> \r\n> 3-1\r\n> +\t * won’t be resent in a case where the server crashes between them.\r\n> \r\n> 3-2\r\n> +\t * COMMIT PREPARED or ROLLBACK PREPARED. But that’s okay\r\n> because this\r\n> \r\n> You have full-width characters for \"won't\" and \"that's\".\r\n> Could you please check ?\r\n> \r\n> \r\n> (4) typo\r\n> \r\n> + * the subscription if hte user has specified skip_xid. 
Once we start\r\n> + skipping\r\n> \r\n> \"hte\" should \"the\" ?\r\n> \r\n> (5)\r\n> \r\n> I can miss something here but, in one of the past discussions, there seems a\r\n> consensus that if the user specifies XID of a subtransaction, it would be better\r\n> to skip only the subtransaction.\r\n> \r\n> This time, is it out of the range of the patch ?\r\n> If so, I suggest you include some description about it either in the commit\r\n> message or around codes related to it.\r\n> \r\n> (6)\r\n> \r\n> I feel it's a better idea to include a test whether to skip aborted streaming\r\n> transaction clears the XID in the TAP test for this feature, in a sense to cover\r\n> various new code paths. Did you have any special reason to omit the case ?\r\n> \r\n> (7)\r\n> \r\n> I want more explanation for the reason to restart the subscriber in the TAP test\r\n> because this is not mandatory operation.\r\n> (We can pass the TAP tests without this restart)\r\n> \r\n> From :\r\n> # Restart the subscriber node to restart logical replication with no interval\r\n> \r\n> IIUC, below would be better.\r\n> \r\n> To :\r\n> # As an optimization to finish tests earlier, restart the subscriber with no\r\n> interval, # rather than waiting for new error to laucher a new apply worker.\r\nFew more minor comments\r\n\r\n(8) another full-width char in apply_handle_commit_prepared\r\n\r\n\r\n+ * PREPARED won't be resent but subskipxid is left.\r\n\r\nKindly check \"won't\" ?\r\n\r\n(9) the header comments of clear_subscription_skip_xid\r\n\r\n+/* clear subskipxid of pg_subscription catalog */\r\n\r\nShould start with an upper letter ?\r\n\r\n(10) some variable declarations and initialization of clear_subscription_skip_xid\r\n\r\nThere's no harm in moving below codes into a condition case\r\nwhere the user didn't change the subskipxid before\r\napply worker clearing it.\r\n\r\n+ bool nulls[Natts_pg_subscription];\r\n+ bool replaces[Natts_pg_subscription];\r\n+ Datum 
values[Natts_pg_subscription];\r\n+\r\n+ memset(values, 0, sizeof(values));\r\n+ memset(nulls, false, sizeof(nulls));\r\n+ memset(replaces, false, sizeof(replaces));\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 17 Jan 2022 12:34:56 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 5:03 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, January 17, 2022 3:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch. Please review it.\n> Hi, thank you for sharing a new patch.\n> Few comments on the v6.\n\nThank you for the comments!\n\n>\n> (1) doc/src/sgml/ref/alter_subscription.sgml\n>\n> + resort. This option has no effect on the transaction that is already\n>\n> One TAB exists between \"resort\" and \"This\".\n\nWill remove.\n\n>\n> (2) Minor improvement suggestion of comment in src/backend/replication/logical/worker.c\n>\n> + * reset during that. Also, we don't skip receiving the changes in streaming\n> + * cases, since we decide whether or not to skip applying the changes when\n>\n> I sugguest that you don't use 'streaming cases', because\n> what \"streaming cases\" means sounds a bit broader than actual your implementation.\n> We do skip transaction of streaming cases but not during the spooling phase, right ?\n>\n> I suggest below.\n>\n> \"We don't skip receiving the changes at the phase to spool streaming transactions\"\n\nI might be missing your point but I think it's correct that we don't\nskip receiving the change of the transaction that is sent via\nstreaming protocol. And it doesn't sound broader to me. Could you\nelaborate on that?\n\n>\n> (3) in the comment of apply_handle_prepare_internal, two full-width characters.\n>\n> 3-1\n> + * won’t be resent in a case where the server crashes between them.\n>\n> 3-2\n> + * COMMIT PREPARED or ROLLBACK PREPARED. But that’s okay because this\n>\n> You have full-width characters for \"won't\" and \"that's\".\n> Could you please check ?\n\nWhich characters in \"won't\" are full-width characters? I could not find them.\n\n>\n>\n> (4) typo\n>\n> + * the subscription if hte user has specified skip_xid. 
Once we start skipping\n>\n> \"hte\" should \"the\" ?\n\nWill fix.\n\n>\n> (5)\n>\n> I can miss something here but, in one of\n> the past discussions, there seems a consensus that\n> if the user specifies XID of a subtransaction,\n> it would be better to skip only the subtransaction.\n>\n> This time, is it out of the range of the patch ?\n> If so, I suggest you include some description about it\n> either in the commit message or around codes related to it.\n\nHow can the user know subtransaction XID? I suppose you refer to\nstreaming protocol cases but while applying spooled changes we don't\nreport subtransaction XID neither in server log nor\npg_stat_subscription_workers.\n\n>\n> (6)\n>\n> I feel it's a better idea to include a test whether\n> to skip aborted streaming transaction clears the XID\n> in the TAP test for this feature, in a sense to cover\n> various new code paths. Did you have any special reason\n> to omit the case ?\n\nWhich code path is newly covered by this aborted streaming transaction\ntests? I think that this patch is already covered even by the test for\na committed-and-streamed transaction. It doesn't matter whether the\nstreamed transaction is committed or aborted because an error occurs\nwhile applying the spooled changes.\n\n>\n> (7)\n>\n> I want more explanation for the reason to restart the subscriber\n> in the TAP test because this is not mandatory operation.\n> (We can pass the TAP tests without this restart)\n>\n> From :\n> # Restart the subscriber node to restart logical replication with no interval\n>\n> IIUC, below would be better.\n>\n> To :\n> # As an optimization to finish tests earlier, restart the subscriber with no interval,\n> # rather than waiting for new error to laucher a new apply worker.\n\nI could not understand why the proposed sentence has more information.\nDoes it mean you want to mention \"As an optimization to finish tests\nearlier\"?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 17 Jan 2022 21:51:48 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 9:35 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, January 17, 2022 5:03 PM I wrote:\n> > Hi, thank you for sharing a new patch.\n> > Few comments on the v6.\n> >\n> > (1) doc/src/sgml/ref/alter_subscription.sgml\n> >\n> > + resort. This option has no effect on the transaction that is\n> > + already\n> >\n> > One TAB exists between \"resort\" and \"This\".\n> >\n> > (2) Minor improvement suggestion of comment in\n> > src/backend/replication/logical/worker.c\n> >\n> > + * reset during that. Also, we don't skip receiving the changes in\n> > + streaming\n> > + * cases, since we decide whether or not to skip applying the changes\n> > + when\n> >\n> > I sugguest that you don't use 'streaming cases', because what \"streaming\n> > cases\" means sounds a bit broader than actual your implementation.\n> > We do skip transaction of streaming cases but not during the spooling phase,\n> > right ?\n> >\n> > I suggest below.\n> >\n> > \"We don't skip receiving the changes at the phase to spool streaming\n> > transactions\"\n> >\n> > (3) in the comment of apply_handle_prepare_internal, two full-width\n> > characters.\n> >\n> > 3-1\n> > + * won’t be resent in a case where the server crashes between them.\n> >\n> > 3-2\n> > + * COMMIT PREPARED or ROLLBACK PREPARED. But that’s okay\n> > because this\n> >\n> > You have full-width characters for \"won't\" and \"that's\".\n> > Could you please check ?\n> >\n> >\n> > (4) typo\n> >\n> > + * the subscription if hte user has specified skip_xid. 
Once we start\n> > + skipping\n> >\n> > \"hte\" should \"the\" ?\n> >\n> > (5)\n> >\n> > I can miss something here but, in one of the past discussions, there seems a\n> > consensus that if the user specifies XID of a subtransaction, it would be better\n> > to skip only the subtransaction.\n> >\n> > This time, is it out of the range of the patch ?\n> > If so, I suggest you include some description about it either in the commit\n> > message or around codes related to it.\n> >\n> > (6)\n> >\n> > I feel it's a better idea to include a test whether to skip aborted streaming\n> > transaction clears the XID in the TAP test for this feature, in a sense to cover\n> > various new code paths. Did you have any special reason to omit the case ?\n> >\n> > (7)\n> >\n> > I want more explanation for the reason to restart the subscriber in the TAP test\n> > because this is not mandatory operation.\n> > (We can pass the TAP tests without this restart)\n> >\n> > From :\n> > # Restart the subscriber node to restart logical replication with no interval\n> >\n> > IIUC, below would be better.\n> >\n> > To :\n> > # As an optimization to finish tests earlier, restart the subscriber with no\n> > interval, # rather than waiting for new error to laucher a new apply worker.\n> Few more minor comments\n\nThank you for the comments!\n\n>\n> (8) another full-width char in apply_handle_commit_prepared\n>\n>\n> + * PREPARED won't be resent but subskipxid is left.\n>\n> Kindly check \"won't\" ?\n\nAgain, I don't follow what you mean by full-width character in this context.\n\n>\n> (9) the header comments of clear_subscription_skip_xid\n>\n> +/* clear subskipxid of pg_subscription catalog */\n>\n> Should start with an upper letter ?\n\nOkay, I'll change it.\n\n>\n> (10) some variable declarations and initialization of clear_subscription_skip_xid\n>\n> There's no harm in moving below codes into a condition case\n> where the user didn't change the subskipxid before\n> apply worker clearing it.\n>\n> + 
bool nulls[Natts_pg_subscription];\n> + bool replaces[Natts_pg_subscription];\n> + Datum values[Natts_pg_subscription];\n> +\n> + memset(values, 0, sizeof(values));\n> + memset(nulls, false, sizeof(nulls));\n> + memset(replaces, false, sizeof(replaces));\n>\n\nWill move.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 17 Jan 2022 21:54:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > (5)\n> >\n> > I can miss something here but, in one of\n> > the past discussions, there seems a consensus that\n> > if the user specifies XID of a subtransaction,\n> > it would be better to skip only the subtransaction.\n> >\n> > This time, is it out of the range of the patch ?\n> > If so, I suggest you include some description about it\n> > either in the commit message or around codes related to it.\n>\n> How can the user know subtransaction XID? I suppose you refer to\n> streaming protocol cases but while applying spooled changes we don't\n> report subtransaction XID neither in server log nor\n> pg_stat_subscription_workers.\n>\n\nI also think in the current system users won't be aware of\nsubtransaction's XID but I feel Osumi-San's point is valid that we\nshould at least add it in docs that we allow to skip only top-level\nxacts. Also, in the future, it won't be impossible to imagine that we\ncan have subtransaction's XID info also available to users as we have\nthat in the case of streaming xacts (See subxact_data).\n\nFew minor points:\n===============\n1.\n+ * the subscription if hte user has specified skip_xid.\n\nTypo. /hte/the\n\n2.\n+ * PREPARED won’t be resent but subskipxid is left.\n\nIn diffmerge tool, won't is showing some funny chars. When I manually\nremoved 't and added it again, everything is fine. I am not sure why\nit is so? I think Osumi-San has also raised this complaint.\n\n3.\n+ /*\n+ * We don't expect that the user set the XID of the transaction that is\n+ * rolled back but if the skip XID is set, clear it.\n+ */\n\n/user set/user to set/\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 17 Jan 2022 18:44:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 5:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch. Please review it.\n>\n\nSome review comments for the v6 patch:\n\n\ndoc/src/sgml/logical-replication.sgml\n\n(1) Expanded output\n\nSince the view output is shown in \"expanded output\" mode, perhaps the\ndoc should say that, or alternatively add the following lines prior to\nit, to make it clear:\n\n postgres=# \\x\n Expanded display is on.\n\n\n(2) Message output in server log\n\nThe actual CONTEXT text now just says \"at ...\" instead of \"with commit\ntimestamp ...\", so the doc needs to be updated as follows:\n\nBEFORE:\n+CONTEXT: processing remote data during \"INSERT\" for replication\ntarget relation \"public.test\" in transaction 716 with commit timestamp\n2021-09-29 15:52:45.165754+00\nAFTER:\n+CONTEXT: processing remote data during \"INSERT\" for replication\ntarget relation \"public.test\" in transaction 716 at 2021-09-29\n15:52:45.165754+00\n\n(3)\nThe wording \"the change\" doesn't seem right here, so I suggest the\nfollowing update:\n\nBEFORE:\n+ Skipping the whole transaction includes skipping the change that\nmay not violate\nAFTER:\n+ Skipping the whole transaction includes skipping changes that may\nnot violate\n\n\ndoc/src/sgml/ref/alter_subscription.sgml\n\n(4)\nI have a number of suggested wording improvements:\n\nBEFORE:\n+ Skips applying changes of the particular transaction. If incoming data\n+ violates any constraints the logical replication will stop until it is\n+ resolved. The resolution can be done either by changing data on the\n+ subscriber so that it doesn't conflict with incoming change or\nby skipping\n+ the whole transaction. The logical replication worker skips all data\n+ modification changes within the specified transaction including\nthe changes\n+ that may not violate the constraint, so, it should only be used as a last\n+ resort. 
This option has no effect on the transaction that is already\n+ prepared by enabling <literal>two_phase</literal> on subscriber.\n\nAFTER:\n+ Skips applying all changes of the specified transaction. If\nincoming data\n+ violates any constraints, logical replication will stop until it is\n+ resolved. The resolution can be done either by changing data on the\n+ subscriber so that it doesn't conflict with incoming change or\nby skipping\n+ the whole transaction. Using the SKIP option, the logical\nreplication worker skips all data\n+ modification changes within the specified transaction, including changes\n+ that may not violate the constraint, so, it should only be used as a last\n+ resort. This option has no effect on transactions that are already\n+ prepared by enabling <literal>two_phase</literal> on the subscriber.\n\n\n(5)\nchange -> changes\n\nBEFORE:\n+ subscriber so that it doesn't conflict with incoming change or\nby skipping\nAFTER:\n+ subscriber so that it doesn't conflict with incoming changes or\nby skipping\n\n\nsrc/backend/replication/logical/worker.c\n\n(6) Missing word?\nThe following should say \"worth doing\" or \"worth it\"?\n\n+ * doesn't seem to be worth since it's a very minor case.\n\n\nsrc/test/regress/sql/subscription.sql\n\n(7) Misleading test case\nI think the following test case is misleading and should be removed,\nbecause the \"1.1\" xid value is only regarded as invalid because \"1\" is\nan invalid xid (and there's already a test case for a \"1\" xid) - the\nfractional part gets thrown away, and doesn't affect the validity\nhere.\n\n +ALTER SUBSCRIPTION regress_testsub SKIP (xid = 1.1);\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 18 Jan 2022 12:36:21 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > (5)\n> > >\n> > > I can miss something here but, in one of\n> > > the past discussions, there seems a consensus that\n> > > if the user specifies XID of a subtransaction,\n> > > it would be better to skip only the subtransaction.\n> > >\n> > > This time, is it out of the range of the patch ?\n> > > If so, I suggest you include some description about it\n> > > either in the commit message or around codes related to it.\n> >\n> > How can the user know subtransaction XID? I suppose you refer to\n> > streaming protocol cases but while applying spooled changes we don't\n> > report subtransaction XID neither in server log nor\n> > pg_stat_subscription_workers.\n> >\n>\n> I also think in the current system users won't be aware of\n> subtransaction's XID but I feel Osumi-San's point is valid that we\n> should at least add it in docs that we allow to skip only top-level\n> xacts. Also, in the future, it won't be impossible to imagine that we\n> can have subtransaction's XID info also available to users as we have\n> that in the case of streaming xacts (See subxact_data).\n\nFair point and more accurate, but I'm a bit concerned that using these\nwords could confuse the user. There are some places in the doc where\nwe use the words “top-level transaction” and \"sub transactions” but\nthese are not commonly used in the doc. The user normally would not be\naware that sub transactions are used to implement SAVEPOINTs. Also,\nthe publisher's subtransaction ID doesn’t appear anywhere on the\nsubscriber. So if we want to mention it, I think we should use other\nwords instead of them but I don’t have a good idea for that. Do you\nhave any ideas?\n\n>\n> Few minor points:\n> ===============\n> 1.\n> + * the subscription if hte user has specified skip_xid.\n>\n> Typo. 
/hte/the\n\nWill fix.\n\n>\n> 2.\n> + * PREPARED won’t be resent but subskipxid is left.\n>\n> In diffmerge tool, won't is showing some funny chars. When I manually\n> removed 't and added it again, everything is fine. I am not sure why\n> it is so? I think Osumi-San has also raised this complaint.\n\nOh I didn't realize that. I'll check it again by using diffmerge tool.\n\n>\n> 3.\n> + /*\n> + * We don't expect that the user set the XID of the transaction that is\n> + * rolled back but if the skip XID is set, clear it.\n> + */\n>\n> /user set/user to set/\n\nWill fix.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:32:14 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 10:36 AM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 5:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch. Please review it.\n> >\n>\n> Some review comments for the v6 patch:\n\nThank you for the comments!\n\n>\n>\n> doc/src/sgml/logical-replication.sgml\n>\n> (1) Expanded output\n>\n> Since the view output is shown in \"expanded output\" mode, perhaps the\n> doc should say that, or alternatively add the following lines prior to\n> it, to make it clear:\n>\n> postgres=# \\x\n> Expanded display is on.\n\nI'm not sure it's really necessary. A similar example would be\nperform.sgml but it doesn't say \"\\x\".\n\n>\n>\n> (2) Message output in server log\n>\n> The actual CONTEXT text now just says \"at ...\" instead of \"with commit\n> timestamp ...\", so the doc needs to be updated as follows:\n>\n> BEFORE:\n> +CONTEXT: processing remote data during \"INSERT\" for replication\n> target relation \"public.test\" in transaction 716 with commit timestamp\n> 2021-09-29 15:52:45.165754+00\n> AFTER:\n> +CONTEXT: processing remote data during \"INSERT\" for replication\n> target relation \"public.test\" in transaction 716 at 2021-09-29\n> 15:52:45.165754+00\n\nWill fix.\n\n>\n> (3)\n> The wording \"the change\" doesn't seem right here, so I suggest the\n> following update:\n>\n> BEFORE:\n> + Skipping the whole transaction includes skipping the change that\n> may not violate\n> AFTER:\n> + Skipping the whole transaction includes skipping changes that may\n> not violate\n>\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n\nWill fix.\n\n>\n> (4)\n> I have a number of suggested wording improvements:\n>\n> BEFORE:\n> + Skips applying changes of the particular transaction. If incoming data\n> + violates any constraints the logical replication will stop until it is\n> + resolved. 
The resolution can be done either by changing data on the\n> + subscriber so that it doesn't conflict with incoming change or\n> by skipping\n> + the whole transaction. The logical replication worker skips all data\n> + modification changes within the specified transaction including\n> the changes\n> + that may not violate the constraint, so, it should only be used as a last\n> + resort. This option has no effect on the transaction that is already\n> + prepared by enabling <literal>two_phase</literal> on subscriber.\n>\n> AFTER:\n> + Skips applying all changes of the specified transaction. If\n> incoming data\n> + violates any constraints, logical replication will stop until it is\n> + resolved. The resolution can be done either by changing data on the\n> + subscriber so that it doesn't conflict with incoming change or\n> by skipping\n> + the whole transaction. Using the SKIP option, the logical\n> replication worker skips all data\n> + modification changes within the specified transaction, including changes\n> + that may not violate the constraint, so, it should only be used as a last\n> + resort. 
This option has no effect on transactions that are already\n> + prepared by enabling <literal>two_phase</literal> on the subscriber.\n>\n\nWill fix.\n\n>\n> (5)\n> change -> changes\n>\n> BEFORE:\n> + subscriber so that it doesn't conflict with incoming change or\n> by skipping\n> AFTER:\n> + subscriber so that it doesn't conflict with incoming changes or\n> by skipping\n\nWill fix.\n\n>\n>\n> src/backend/replication/logical/worker.c\n>\n> (6) Missing word?\n> The following should say \"worth doing\" or \"worth it\"?\n>\n> + * doesn't seem to be worth since it's a very minor case.\n>\n\nWIll fix\n\n>\n> src/test/regress/sql/subscription.sql\n>\n> (7) Misleading test case\n> I think the following test case is misleading and should be removed,\n> because the \"1.1\" xid value is only regarded as invalid because \"1\" is\n> an invalid xid (and there's already a test case for a \"1\" xid) - the\n> fractional part gets thrown away, and doesn't affect the validity\n> here.\n>\n> +ALTER SUBSCRIPTION regress_testsub SKIP (xid = 1.1);\n>\n\nGood point. Will remove.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 11:41:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 8:02 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jan 17, 2022 at 10:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Jan 17, 2022 at 6:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > >\n> > > > (5)\n> > > >\n> > > > I can miss something here but, in one of\n> > > > the past discussions, there seems a consensus that\n> > > > if the user specifies XID of a subtransaction,\n> > > > it would be better to skip only the subtransaction.\n> > > >\n> > > > This time, is it out of the range of the patch ?\n> > > > If so, I suggest you include some description about it\n> > > > either in the commit message or around codes related to it.\n> > >\n> > > How can the user know subtransaction XID? I suppose you refer to\n> > > streaming protocol cases but while applying spooled changes we don't\n> > > report subtransaction XID neither in server log nor\n> > > pg_stat_subscription_workers.\n> > >\n> >\n> > I also think in the current system users won't be aware of\n> > subtransaction's XID but I feel Osumi-San's point is valid that we\n> > should at least add it in docs that we allow to skip only top-level\n> > xacts. Also, in the future, it won't be impossible to imagine that we\n> > can have subtransaction's XID info also available to users as we have\n> > that in the case of streaming xacts (See subxact_data).\n>\n> Fair point and more accurate, but I'm a bit concerned that using these\n> words could confuse the user. There are some places in the doc where\n> we use the words “top-level transaction” and \"sub transactions” but\n> these are not commonly used in the doc. The user normally would not be\n> aware that sub transactions are used to implement SAVEPOINTs. Also,\n> the publisher's subtransaction ID doesn’t appear anywhere on the\n> subscriber. So if we want to mention it, I think we should use other\n> words instead of them but I don’t have a good idea for that. 
Do you\n> have any ideas?\n>\n\nHow about changing existing text:\n+ Specifies the ID of the transaction whose changes are to be skipped\n+ by the logical replication worker. Setting <literal>NONE</literal>\n+ resets the transaction ID.\n\nto\n\nSpecifies the top-level transaction identifier whose changes are to be\nskipped by the logical replication worker. We don't support skipping\nindividual subtransactions. Setting <literal>NONE</literal> resets\nthe transaction ID.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 08:22:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch. Please review it.\r\n> \r\n\r\nThanks for updating the patch. Few comments:\r\n\r\n1)\r\n\t\t/* Two_phase is only supported in v15 and higher */\r\n \t\tif (pset.sversion >= 150000)\r\n \t\t\tappendPQExpBuffer(&buf,\r\n-\t\t\t\t\t\t\t \", subtwophasestate AS \\\"%s\\\"\\n\",\r\n-\t\t\t\t\t\t\t gettext_noop(\"Two phase commit\"));\r\n+\t\t\t\t\t\t\t \", subtwophasestate AS \\\"%s\\\"\\n\"\r\n+\t\t\t\t\t\t\t \", subskipxid AS \\\"%s\\\"\\n\",\r\n+\t\t\t\t\t\t\t gettext_noop(\"Two phase commit\"),\r\n+\t\t\t\t\t\t\t gettext_noop(\"Skip XID\"));\r\n \r\n \t\tappendPQExpBuffer(&buf,\r\n \t\t\t\t\t\t \", subsynccommit AS \\\"%s\\\"\\n\"\r\n\r\nI think \"skip xid\" should be mentioned in the comment. Maybe it could be changed to:\r\n\"Two_phase and skip XID are only supported in v15 and higher\"\r\n\r\n2) The following two places are not consistent in whether \"= value\" is surround\r\nwith square brackets.\r\n\r\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\r\n\r\n+ <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\r\n\r\nShould we modify the first place to:\r\n+ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... 
] )\r\n\r\nBecause currently there is only one skip_option - xid, and a parameter must be\r\nspecified when using it.\r\n\r\n3)\r\n+\t * Protect subskip_xid of pg_subscription from being concurrently updated\r\n+\t * while clearing it.\r\n\r\n\"subskip_xid\" should be \"subskipxid\" I think.\r\n \r\n4)\r\n+/*\r\n+ * Start skipping changes of the transaction if the given XID matches the\r\n+ * transaction ID specified by skip_xid option.\r\n+ */\r\n\r\nThe option name was \"skip_xid\" in the previous version, and it is \"xid\" in\r\nlatest patch. So should we modify \"skip_xid option\" to \"skip xid option\", or\r\n\"skip option xid\", or something else?\r\n\r\nAlso the following place has similar issue:\r\n+ * the subscription if hte user has specified skip_xid. Once we start skipping\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Tue, 18 Jan 2022 03:04:00 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, January 17, 2022 9:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Thank you for the comments!\r\n..\r\n> > (2) Minor improvement suggestion of comment in\r\n> > src/backend/replication/logical/worker.c\r\n> >\r\n> > + * reset during that. Also, we don't skip receiving the changes in\r\n> > + streaming\r\n> > + * cases, since we decide whether or not to skip applying the changes\r\n> > + when\r\n> >\r\n> > I sugguest that you don't use 'streaming cases', because what\r\n> > \"streaming cases\" means sounds a bit broader than actual your\r\n> implementation.\r\n> > We do skip transaction of streaming cases but not during the spooling phase,\r\n> right ?\r\n> >\r\n> > I suggest below.\r\n> >\r\n> > \"We don't skip receiving the changes at the phase to spool streaming\r\n> transactions\"\r\n> \r\n> I might be missing your point but I think it's correct that we don't skip receiving\r\n> the change of the transaction that is sent via streaming protocol. And it doesn't\r\n> sound broader to me. Could you elaborate on that?\r\nOK. 
Excuse me for the lack of explanation.\r\n\r\nI felt \"streaming cases\" implies \"non-streaming cases\"\r\nto compare a difference (in my head) when it is\r\nused to explain something at first.\r\nI imagined the contrast between those, when I saw it.\r\n\r\nThus, I thought \"streaming cases\" meant the\r\nwhole flow of streaming transactions which consists of messages\r\nsurrounded by stream start and stream stop and which are finished by\r\nstream commit/stream abort (including 2PC variations).\r\n\r\nWhen I come back to the subject, you wrote below in the comment\r\n\r\n\"we don't skip receiving the changes in streaming cases,\r\nsince we decide whether or not to skip applying the changes\r\nwhen starting to apply changes\"\r\n\r\nThe first part of this sentence\r\n(\"we don't skip receiving the changes in streaming cases\")\r\ngives me the impression that we don't skip changes in the streaming cases\r\n(of my understanding above), but the last part\r\n(\"we decide whether or not to skip applying the changes\r\nwhen starting to apply change\") means we skip transactions for streaming at the apply phase.\r\n\r\nSo, this sentence looked slightly confusing to me.\r\nThus, I suggested below (and when I connect it with the existing part)\r\n\r\n\"we don't skip receiving the changes at the phase to spool streaming transactions\r\nsince we decide whether or not to skip applying the changes when starting to apply changes\"\r\n\r\nFor me this looked better, but of course, this is a suggestion.\r\n\r\n> >\r\n> > (3) in the comment of apply_handle_prepare_internal, two full-width\r\n> characters.\r\n> >\r\n> > 3-1\r\n> > + * won’t be resent in a case where the server crashes between\r\n> them.\r\n> >\r\n> > 3-2\r\n> > + * COMMIT PREPARED or ROLLBACK PREPARED. But that’s okay\r\n> > + because this\r\n> >\r\n> > You have full-width characters for \"won't\" and \"that's\".\r\n> > Could you please check ?\r\n> \r\n> Which characters in \"won't\" are full-width characters? 
I could not find them.\r\nAll characters I found and mentioned as full-width are single quotes.\r\n\r\nIt might be good that you check the entire patch once\r\nby some tool that helps you to detect it.\r\n\r\n> > (5)\r\n> >\r\n> > I can miss something here but, in one of the past discussions, there\r\n> > seems a consensus that if the user specifies XID of a subtransaction,\r\n> > it would be better to skip only the subtransaction.\r\n> >\r\n> > This time, is it out of the range of the patch ?\r\n> > If so, I suggest you include some description about it either in the\r\n> > commit message or around codes related to it.\r\n> \r\n> How can the user know subtransaction XID? I suppose you refer to streaming\r\n> protocol cases but while applying spooled changes we don't report\r\n> subtransaction XID neither in server log nor pg_stat_subscription_workers.\r\nYeah, usually subtransaction XID is not exposed to the users. I agree.\r\n\r\nBut, clarifying the target of this feature is only top-level transactions\r\nsounds better to me. Thank you Amit-san for your support\r\nabout how we should write it in [1] !\r\n\r\n> > (6)\r\n> >\r\n> > I feel it's a better idea to include a test whether to skip aborted\r\n> > streaming transaction clears the XID in the TAP test for this feature,\r\n> > in a sense to cover various new code paths. Did you have any special\r\n> > reason to omit the case ?\r\n> \r\n> Which code path is newly covered by this aborted streaming transaction tests?\r\n> I think that this patch is already covered even by the test for a\r\n> committed-and-streamed transaction. It doesn't matter whether the streamed\r\n> transaction is committed or aborted because an error occurs while applying the\r\n> spooled changes.\r\nOh, this was my mistake. 
What I expressed as a new patch is\r\napply_handle_stream_abort -> clear_subscription_skip_xid.\r\nBut, this was totally wrong as you explained.\r\n\r\n\r\n> >\r\n> > (7)\r\n> >\r\n> > I want more explanation for the reason to restart the subscriber in\r\n> > the TAP test because this is not mandatory operation.\r\n> > (We can pass the TAP tests without this restart)\r\n> >\r\n> > From :\r\n> > # Restart the subscriber node to restart logical replication with no\r\n> > interval\r\n> >\r\n> > IIUC, below would be better.\r\n> >\r\n> > To :\r\n> > # As an optimization to finish tests earlier, restart the subscriber\r\n> > with no interval, # rather than waiting for new error to laucher a new apply\r\n> worker.\r\n> \r\n> I could not understand why the proposed sentence has more information.\r\n> Does it mean you want to mention \"As an optimization to finish tests earlier\"?\r\nYes, exactly. The point is to add \"As an optimization to finish tests earlier\".\r\n\r\nProbably, I should have asked a simple question \"why do you restart the subscriber\" ?\r\nAt first sight, I couldn't understand the meaning for the restart and\r\nyou don't explain the reason itself.\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1JHUF7fVNHQ1ZRRgVsdE8XDY8BruU9dNP3Q3jizNdpEbg%40mail.gmail.com\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 18 Jan 2022 03:20:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 8:34 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> 2) The following two places are not consistent in whether \"= value\" is surround\n> with square brackets.\n>\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\n>\n> + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n>\n> Should we modify the first place to:\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )\n>\n> Because currently there is only one skip_option - xid, and a parameter must be\n> specified when using it.\n>\n\nGood observation. Do we really need [, ... ] as currently, we support\nonly one value for XID?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 18 Jan 2022 09:07:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 18, 2022 at 8:34 AM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > 2) The following two places are not consistent in whether \"= value\" is surround\n> > with square brackets.\n> >\n> > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\n> >\n> > + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n> >\n> > Should we modify the first place to:\n> > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )\n> >\n> > Because currently there is only one skip_option - xid, and a parameter must be\n> > specified when using it.\n> >\n>\n> Good observation. Do we really need [, ... ] as currently, we support\n> only one value for XID?\n\nI think no. In the doc, it should be:\n\nALTER SUBSCRIPTION name SKIP ( skip_option = value )\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 12:50:46 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 12:04 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch. Please review it.\n> >\n>\n> Thanks for updating the patch. Few comments:\n>\n> 1)\n> /* Two_phase is only supported in v15 and higher */\n> if (pset.sversion >= 150000)\n> appendPQExpBuffer(&buf,\n> - \", subtwophasestate AS \\\"%s\\\"\\n\",\n> - gettext_noop(\"Two phase commit\"));\n> + \", subtwophasestate AS \\\"%s\\\"\\n\"\n> + \", subskipxid AS \\\"%s\\\"\\n\",\n> + gettext_noop(\"Two phase commit\"),\n> + gettext_noop(\"Skip XID\"));\n>\n> appendPQExpBuffer(&buf,\n> \", subsynccommit AS \\\"%s\\\"\\n\"\n>\n> I think \"skip xid\" should be mentioned in the comment. Maybe it could be changed to:\n> \"Two_phase and skip XID are only supported in v15 and higher\"\n\nAdded.\n\n>\n> 2) The following two places are not consistent in whether \"= value\" is surround\n> with square brackets.\n>\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\n>\n> + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n>\n> Should we modify the first place to:\n> +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )\n>\n> Because currently there is only one skip_option - xid, and a parameter must be\n> specified when using it.\n\nGood catch. 
Fixed.\n\n>\n> 3)\n> + * Protect subskip_xid of pg_subscription from being concurrently updated\n> + * while clearing it.\n>\n> \"subskip_xid\" should be \"subskipxid\" I think.\n\nFixed.\n\n>\n> 4)\n> +/*\n> + * Start skipping changes of the transaction if the given XID matches the\n> + * transaction ID specified by skip_xid option.\n> + */\n>\n> The option name was \"skip_xid\" in the previous version, and it is \"xid\" in\n> latest patch. So should we modify \"skip_xid option\" to \"skip xid option\", or\n> \"skip option xid\", or something else?\n>\n> Also the following place has similar issue:\n> + * the subscription if hte user has specified skip_xid. Once we start skipping\n\nFixed.\n\nI've attached an updated patch. All comments I got so far were\nincorporated into this patch unless I'm missing something.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 18 Jan 2022 13:39:01 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 12:20 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Monday, January 17, 2022 9:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Thank you for the comments!\n> ..\n> > > (2) Minor improvement suggestion of comment in\n> > > src/backend/replication/logical/worker.c\n> > >\n> > > + * reset during that. Also, we don't skip receiving the changes in\n> > > + streaming\n> > > + * cases, since we decide whether or not to skip applying the changes\n> > > + when\n> > >\n> > > I sugguest that you don't use 'streaming cases', because what\n> > > \"streaming cases\" means sounds a bit broader than actual your\n> > implementation.\n> > > We do skip transaction of streaming cases but not during the spooling phase,\n> > right ?\n> > >\n> > > I suggest below.\n> > >\n> > > \"We don't skip receiving the changes at the phase to spool streaming\n> > transactions\"\n> >\n> > I might be missing your point but I think it's correct that we don't skip receiving\n> > the change of the transaction that is sent via streaming protocol. And it doesn't\n> > sound broader to me. Could you elaborate on that?\n> OK. 
Excuse me for lack of explanation.\n>\n> I felt \"streaming cases\" implies \"non-streaming cases\"\n> to compare a diffference (in my head) when it is\n> used to explain something at first.\n> I imagined the contrast between those, when I saw it.\n>\n> Thus, I thought \"streaming cases\" meant\n> whole flow of streaming transactions which consists of messages\n> surrounded by stream start and stream stop and which are finished by\n> stream commit/stream abort (including 2PC variations).\n>\n> When I come back to the subject, you wrote below in the comment\n>\n> \"we don't skip receiving the changes in streaming cases,\n> since we decide whether or not to skip applying the changes\n> when starting to apply changes\"\n>\n> The first part of this sentence\n> (\"we don't skip receiving the changes in streaming cases\")\n> gives me an impression where we don't skip changes in the streaming cases\n> (of my understanding above), but the last part\n> (\"we decide whether or not to skip applying the changes\n> when starting to apply change\") means we skip transactions for streaming at apply phase.\n>\n> So, this sentence looked confusing to me slightly.\n> Thus, I suggested below (and when I connect it with existing part)\n>\n> \"we don't skip receiving the changes at the phase to spool streaming transactions\n> since we decide whether or not to skip applying the changes when starting to apply changes\"\n>\n> For me this looked better, but of course, this is a suggestion.\n\nThank you for your explanation.\n\nI've modified the comment with some changes since \"the phase to spool\nstreaming transaction\" seems not commonly be used in worker.c.\n\n>\n> > >\n> > > (3) in the comment of apply_handle_prepare_internal, two full-width\n> > characters.\n> > >\n> > > 3-1\n> > > + * won’t be resent in a case where the server crashes between\n> > them.\n> > >\n> > > 3-2\n> > > + * COMMIT PREPARED or ROLLBACK PREPARED. 
But that’s okay\n> > > + because this\n> > >\n> > > You have full-width characters for \"won't\" and \"that's\".\n> > > Could you please check ?\n> >\n> > Which characters in \"won't\" are full-width characters? I could not find them.\n> All characters I found and mentioned as full-width are single quotes.\n>\n> It might be good that you check the entire patch once\n> by some tool that helps you to detect it.\n\nThanks!\n\n>\n> > > (5)\n> > >\n> > > I can miss something here but, in one of the past discussions, there\n> > > seems a consensus that if the user specifies XID of a subtransaction,\n> > > it would be better to skip only the subtransaction.\n> > >\n> > > This time, is it out of the range of the patch ?\n> > > If so, I suggest you include some description about it either in the\n> > > commit message or around codes related to it.\n> >\n> > How can the user know subtransaction XID? I suppose you refer to streaming\n> > protocol cases but while applying spooled changes we don't report\n> > subtransaction XID neither in server log nor pg_stat_subscription_workers.\n> Yeah, usually subtransaction XID is not exposed to the users. I agree.\n>\n> But, clarifying the target of this feature is only top-level transactions\n> sounds better to me. Thank you Amit-san for your support\n> about how we should write it in [1] !\n\nYes, I've included the sentence proposed by Amit in the latest patch.\n\n>\n> > > (6)\n> > >\n> > > I feel it's a better idea to include a test whether to skip aborted\n> > > streaming transaction clears the XID in the TAP test for this feature,\n> > > in a sense to cover various new code paths. Did you have any special\n> > > reason to omit the case ?\n> >\n> > Which code path is newly covered by this aborted streaming transaction tests?\n> > I think that this patch is already covered even by the test for a\n> > committed-and-streamed transaction. 
It doesn't matter whether the streamed\n> > transaction is committed or aborted because an error occurs while applying the\n> > spooled changes.\n> Oh, this was my mistake. What I expressed as a new patch is\n> apply_handle_stream_abort -> clear_subscription_skip_xid.\n> But, this was totally wrong as you explained.\n>\n>\n> > >\n> > > (7)\n> > >\n> > > I want more explanation for the reason to restart the subscriber in\n> > > the TAP test because this is not mandatory operation.\n> > > (We can pass the TAP tests without this restart)\n> > >\n> > > From :\n> > > # Restart the subscriber node to restart logical replication with no\n> > > interval\n> > >\n> > > IIUC, below would be better.\n> > >\n> > > To :\n> > > # As an optimization to finish tests earlier, restart the subscriber\n> > > with no interval, # rather than waiting for new error to laucher a new apply\n> > worker.\n> >\n> > I could not understand why the proposed sentence has more information.\n> > Does it mean you want to mention \"As an optimization to finish tests earlier\"?\n> Yes, exactly. The point is to add \"As an optimization to finish tests earlier\".\n>\n> Probably, I should have asked a simple question \"why do you restart the subscriber\" ?\n> At first sight, I couldn't understand the meaning for the restart and\n> you don't explain the reason itself.\n\nI thought \"to restart logical replication with no interval\" explains\nthe reason why we restart the subscriber. I left this part but we can\nchange it later if others also want to do that change.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 18 Jan 2022 13:43:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, January 18, 2022 1:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated patch. All comments I got so far were incorporated\r\n> into this patch unless I'm missing something.\r\n\r\nHi, thank you for your new patch v7.\r\nFor your information, I've encountered a failure to apply patch v7\r\non top of the latest commit (d3f4532)\r\n\r\n$ git am v7-0001-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transac.patch\r\nApplying: Add ALTER SUBSCRIPTION ... SKIP to skip the transaction on subscriber nodes\r\nerror: patch failed: src/backend/parser/gram.y:9954\r\nerror: src/backend/parser/gram.y: patch does not apply\r\n\r\nCould you please rebase it when it's necessary ?\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 18 Jan 2022 05:37:35 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 2:37 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 18, 2022 1:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch. All comments I got so far were incorporated\n> > into this patch unless I'm missing something.\n>\n> Hi, thank you for your new patch v7.\n> For your information, I've encountered a failure to apply patch v7\n> on top of the latest commit (d3f4532)\n>\n> $ git am v7-0001-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transac.patch\n> Applying: Add ALTER SUBSCRIPTION ... SKIP to skip the transaction on subscriber nodes\n> error: patch failed: src/backend/parser/gram.y:9954\n> error: src/backend/parser/gram.y: patch does not apply\n>\n> Could you please rebase it when it's necessary ?\n\nThank you for reporting!\n\nI've attached a rebased patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 18 Jan 2022 15:05:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, January 18, 2022 3:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached a rebased patch.\r\nThank you for your rebase !\r\n\r\nSeveral review comments on v8.\r\n\r\n(1) doc/src/sgml/logical-replication.sgml\r\n\r\n+\r\n+ <para>\r\n+ To resolve conflicts, you need to consider changing the data on the subscriber so\r\n+ that it doesn't conflict with incoming changes, or dropping the conflicting constraint\r\n+ or unique index, or writing a trigger on the subscriber to suppress or redirect\r\n+ conflicting incoming changes, or as a last resort, by skipping the whole transaction.\r\n+ Skipping the whole transaction includes skipping changes that may not violate\r\n+ any constraint. This can easily make the subscriber inconsistent, especially if\r\n+ a user specifies the wrong transaction ID or the position of origin.\r\n+ </para>\r\n\r\nThe first sentence is too long and lack of readability slightly.\r\nOne idea to sort out listing items is to utilize \"itemizedlist\".\r\nFor instance, I imagined something like below.\r\n\r\n <para>\r\n To resolve conflicts, you need to consider following actions:\r\n <itemizedlist>\r\n <listitem>\r\n <para>\r\n Change the data on the subscriber so that it doesn't conflict with incoming changes\r\n </para>\r\n </listitem>\r\n ...\r\n <listitem>\r\n <para>\r\n As a last resort, skip the whole transaction\r\n </para>\r\n </listitem>\r\n </itemizedlist>\r\n ....\r\n </para>\r\n\r\nWhat did you think ?\r\n\r\nBy the way, in case only when you want to keep the current sentence style,\r\nI have one more question. Do we need \"by\" in the part\r\n\"by skipping the whole transaction\" ? If we focus on only this action,\r\nI think the sentence becomes \"you need to consider skipping the whole transaction\".\r\nIf this is true, we don't need \"by\" in the part.\r\n\r\n(2)\r\n\r\nAlso, in the same paragraph, we write\r\n\r\n+ ... 
This can easily make the subscriber inconsistent, especially if\r\n+ a user specifies the wrong transaction ID or the position of origin.\r\n\r\nThe subject of this sentence should be \"Those\" or \"Some of those\" ?\r\nbecause we want to mention either \"new skip xid feature\" or\r\n\"pg_replication_origin_advance\".\r\n\r\n(3) doc/src/sgml/ref/alter_subscription.sgml\r\n\r\nBelow change contains unnecessary spaces.\r\n+ the whole transaction. Using <command> ALTER SUBSCRIPTION ... SKIP </command>\r\n\r\nNeed to change\r\nFrom:\r\n<command> ALTER SUBSCRIPTION ... SKIP </command>\r\nTo:\r\n<command>ALTER SUBSCRIPTION ... SKIP</command>\r\n\r\n(4) comment in clear_subscription_skip_xid\r\n\r\n+ * the flush position the transaction will be sent again and the user\r\n+ * needs to be set subskipxid again. We can reduce the possibility by\r\n\r\nShould change\r\nFrom:\r\nthe user needs to be set...\r\nTo:\r\nthe user needs to set...\r\n\r\n(5) clear_subscription_skip_xid\r\n\r\n+ if (!HeapTupleIsValid(tup))\r\n+ elog(ERROR, \"subscription \\\"%s\\\" does not exist\", MySubscription->name);\r\n\r\nCan we change it to ereport with ERRCODE_UNDEFINED_OBJECT ?\r\nThis suggestion has another aspect that within one patch, we don't mix \r\nboth ereport and elog at the same time.\r\n\r\n(6) apply_handle_stream_abort\r\n\r\n@@ -1209,6 +1300,13 @@ apply_handle_stream_abort(StringInfo s)\r\n\r\n logicalrep_read_stream_abort(s, &xid, &subxid);\r\n\r\n+ /*\r\n+ * We don't expect the user to set the XID of the transaction that is\r\n+ * rolled back but if the skip XID is set, clear it.\r\n+ */\r\n+ if (MySubscription->skipxid == xid || MySubscription->skipxid == subxid)\r\n+ clear_subscription_skip_xid(MySubscription->skipxid, InvalidXLogRecPtr, 0);\r\n+\r\n\r\nIn my humble opinion, this still cares about subtransaction xid still.\r\nIf we want to be consistent with top level transactions only,\r\nI felt checking MySubscription->skipxid == xid should be sufficient.\r\n\r\nBelow is 
an *insane* (in a sense not correct usage) scenario\r\nto hit the \"MySubscription->skipxid == subxid\".\r\nSorry if it is not perfect.\r\n\r\n-------\r\nSet logical_decoding_work_mem = 64.\r\nCreate a table named 'tab' with a column id (integer);\r\nCreate pub and sub with streaming = true.\r\nNo initial data is required on both nodes\r\nbecause we just want to issue stream_abort\r\nafter executing the skip xid feature.\r\n\r\n<Session1> to the publisher\r\nbegin;\r\nselect pg_current_xact_id(); -- for reference\r\ninsert into tab values (1);\r\nsavepoint s1;\r\ninsert into tab values (2);\r\nsavepoint s2;\r\ninsert into tab values (generate_series(1001, 2000));\r\nselect ctid, xmin, xmax, id from tab where id in (1, 2, 1001);\r\n\r\n<Session2> to the subscriber\r\nselect subname, subskipxid from pg_subscription; -- shows 0\r\nalter subscription mysub skip (xid = xxx); -- xxx is that of xmin for 1001 on the publisher\r\nselect subname, subskipxid from pg_subscription; -- check it shows xxx just in case\r\n\r\n<Session1>\r\nrollback to s1;\r\ncommit;\r\nselect * from tab; -- shows only data '1'.\r\n\r\n<Session2>\r\nselect subname, subskipxid from pg_subscription; -- shows 0. subskipxid was reset by the skip xid feature\r\nselect count(1) = 1 from tab; -- shows true\r\n\r\nFYI: the results of those last two commands.\r\npostgres=# select subname, subskipxid from pg_subscription;\r\n subname | subskipxid \r\n---------+------------\r\n mysub | 0\r\n(1 row)\r\n\r\npostgres=# select count(1) = 1 from tab;\r\n ?column? \r\n----------\r\n t\r\n(1 row)\r\n\r\nThus, it still cares about subtransactions and clears the subskipxid.\r\nShould we fix this behavior for consistency ?\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 19 Jan 2022 03:22:08 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 15, 2022 at 3:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 14, 2022 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks for the updated patch, few minor comments:\n> > 1) Should \"SKIP\" be \"SKIP (\" here:\n> > @@ -1675,7 +1675,7 @@ psql_completion(const char *text, int start, int end)\n> > /* ALTER SUBSCRIPTION <name> */\n> > else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\n> > COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\n> > - \"RENAME TO\", \"REFRESH\n> > PUBLICATION\", \"SET\",\n> > + \"RENAME TO\", \"REFRESH\n> > PUBLICATION\", \"SET\", \"SKIP\",\n> >\n>\n> Won't the another rule as follows added by patch sufficient for what\n> you are asking?\n> + /* ALTER SUBSCRIPTION <name> SKIP */\n> + else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny, \"SKIP\"))\n> + COMPLETE_WITH(\"(\");\n>\n> I might be missing something but why do you think the handling of SKIP\n> be any different than what we are doing for SET?\n\nIn case of \"ALTER SUBSCRIPTION sub1 SET\" there are 2 possible tab\ncompletion options, user can either specify \"ALTER SUBSCRIPTION sub1\nSET PUBLICATION pub1\" or \"ALTER SUBSCRIPTION sub1 SET ( SET option\nlike STREAMING,etc = 'on')\", that is why we have 2 possible options as\nbelow:\npostgres=# ALTER SUBSCRIPTION sub1 SET\n( PUBLICATION\n\nWhereas in the case of SKIP there is only one possible tab completion\noption i.e XID. We handle similarly in case of WITH option, we specify\n\"WITH (\" in case of tab completion for \"CREATE PUBLICATION pub1\"\npostgres=# CREATE PUBLICATION pub1\nFOR ALL TABLES FOR ALL TABLES IN SCHEMA FOR TABLE\n WITH (\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 19 Jan 2022 12:02:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 5:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached a rebased patch.\n\nA couple of comments for the v8 patch:\n\ndoc/src/sgml/logical-replication.sgml\n\n(1)\nStrictly speaking, it's the transaction, not transaction ID, that\ncontains changes, so suggesting minor change:\n\nBEFORE:\n+ The transaction ID that contains the change violating the constraint can be\nAFTER:\n+ The ID of the transaction that contains the change violating the\nconstraint can be\n\n\ndoc/src/sgml/ref/alter_subscription.sgml\n\n(2) apply_handle_commit_internal\nIt's not entirely apparent what commits the clearing of subskipxid\nhere, so I suggest the following addition:\n\nBEFORE:\n+ * clear subskipxid of pg_subscription.\nAFTER:\n+ * clear subskipxid of pg_subscription, then commit.\n\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 19 Jan 2022 18:14:28 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 12:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, January 18, 2022 3:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached a rebased patch.\n> Thank you for your rebase !\n>\n> Several review comments on v8.\n\nThank you for the comments!\n\n>\n> (1) doc/src/sgml/logical-replication.sgml\n>\n> +\n> + <para>\n> + To resolve conflicts, you need to consider changing the data on the subscriber so\n> + that it doesn't conflict with incoming changes, or dropping the conflicting constraint\n> + or unique index, or writing a trigger on the subscriber to suppress or redirect\n> + conflicting incoming changes, or as a last resort, by skipping the whole transaction.\n> + Skipping the whole transaction includes skipping changes that may not violate\n> + any constraint. This can easily make the subscriber inconsistent, especially if\n> + a user specifies the wrong transaction ID or the position of origin.\n> + </para>\n>\n> The first sentence is too long and lack of readability slightly.\n> One idea to sort out listing items is to utilize \"itemizedlist\".\n> For instance, I imagined something like below.\n>\n> <para>\n> To resolve conflicts, you need to consider following actions:\n> <itemizedlist>\n> <listitem>\n> <para>\n> Change the data on the subscriber so that it doesn't conflict with incoming changes\n> </para>\n> </listitem>\n> ...\n> <listitem>\n> <para>\n> As a last resort, skip the whole transaction\n> </para>\n> </listitem>\n> </itemizedlist>\n> ....\n> </para>\n>\n> What did you think ?\n>\n> By the way, in case only when you want to keep the current sentence style,\n> I have one more question. Do we need \"by\" in the part\n> \"by skipping the whole transaction\" ? 
If we focus on only this action,\n> I think the sentence becomes \"you need to consider skipping the whole transaction\".\n> If this is true, we don't need \"by\" in the part.\n\nI personally prefer to keep the current sentence since listing them\nseems not suitable in this case. But I agree that \"by\" is not\nnecessary here.\n\n>\n> (2)\n>\n> Also, in the same paragraph, we write\n>\n> + ... This can easily make the subscriber inconsistent, especially if\n> + a user specifies the wrong transaction ID or the position of origin.\n>\n> The subject of this sentence should be \"Those\" or \"Some of those\" ?\n> because we want to mention either \"new skip xid feature\" or\n> \"pg_replication_origin_advance\".\n\nI think \"This\" in the sentence refers to \"Skipping the whole\ntransaction\". In the previous paragraph, we describe that there are\ntwo methods for skipping the whole transaction: this new feature and\npg_replication_origin_advance(). And in this paragraph, we don't\nmention any specific methods for skipping the whole transaction but\ndescribe that skipping the whole transaction per se can easily make\nthe subscriber inconsistent. The current structure is fine with me.\n\n>\n> (3) doc/src/sgml/ref/alter_subscription.sgml\n>\n> Below change contains unnecessary spaces.\n> + the whole transaction. Using <command> ALTER SUBSCRIPTION ... SKIP </command>\n>\n> Need to change\n> From:\n> <command> ALTER SUBSCRIPTION ... SKIP </command>\n> To:\n> <command>ALTER SUBSCRIPTION ... SKIP</command>\n\nWill remove.\n\n>\n> (4) comment in clear_subscription_skip_xid\n>\n> + * the flush position the transaction will be sent again and the user\n> + * needs to be set subskipxid again. 
We can reduce the possibility by\n>\n> Should change\n> From:\n> the user needs to be set...\n> To:\n> the user needs to set...\n\nWill remove.\n\n>\n> (5) clear_subscription_skip_xid\n>\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"subscription \\\"%s\\\" does not exist\", MySubscription->name);\n>\n> Can we change it to ereport with ERRCODE_UNDEFINED_OBJECT ?\n> This suggestion has another aspect that within one patch, we don't mix\n> both ereport and elog at the same time.\n\nI don’t think we need to set errcode since this error is a\nshould-not-happen error.\n\n>\n> (6) apply_handle_stream_abort\n>\n> @@ -1209,6 +1300,13 @@ apply_handle_stream_abort(StringInfo s)\n>\n> logicalrep_read_stream_abort(s, &xid, &subxid);\n>\n> + /*\n> + * We don't expect the user to set the XID of the transaction that is\n> + * rolled back but if the skip XID is set, clear it.\n> + */\n> + if (MySubscription->skipxid == xid || MySubscription->skipxid == subxid)\n> + clear_subscription_skip_xid(MySubscription->skipxid, InvalidXLogRecPtr, 0);\n> +\n>\n> In my humble opinion, this still cares about subtransaction xid still.\n> If we want to be consistent with top level transactions only,\n> I felt checking MySubscription->skipxid == xid should be sufficient.\n\nI thought if we can clear subskipxid whose value has already been\nprocessed on the subscriber with a reasonable cost it makes sense to\ndo that because it can reduce the possibility of the issue that XID is\nwraparound while leaving the wrong in subskipxid. But as you pointed\nout, the current behavior doesn’t match the description in the doc:\n\nAfter the logical replication successfully skips the transaction, the\ntransaction ID (stored in pg_subscription.subskipxid) is cleared.\n\nand\n\nWe don't support skipping individual subtransactions.\n\nI'll remove it in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 19 Jan 2022 16:15:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 12:22 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > (6) apply_handle_stream_abort\n> >\n> > @@ -1209,6 +1300,13 @@ apply_handle_stream_abort(StringInfo s)\n> >\n> > logicalrep_read_stream_abort(s, &xid, &subxid);\n> >\n> > + /*\n> > + * We don't expect the user to set the XID of the transaction that is\n> > + * rolled back but if the skip XID is set, clear it.\n> > + */\n> > + if (MySubscription->skipxid == xid || MySubscription->skipxid == subxid)\n> > + clear_subscription_skip_xid(MySubscription->skipxid, InvalidXLogRecPtr, 0);\n> > +\n> >\n> > In my humble opinion, this still cares about subtransaction xid still.\n> > If we want to be consistent with top level transactions only,\n> > I felt checking MySubscription->skipxid == xid should be sufficient.\n>\n> I thought if we can clear subskipxid whose value has already been\n> processed on the subscriber with a reasonable cost it makes sense to\n> do that because it can reduce the possibility of the issue that XID is\n> wraparound while leaving the wrong in subskipxid.\n>\n\nI guess that could happen if the user sets some unrelated XID value.\nSo, I think it should be okay to not clear this but we can add a\ncomment in the code at that place that we don't clear subtransaction's\nXID as we don't support skipping individual subtransactions or\nsomething like that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 19 Jan 2022 14:27:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 18.01.22 07:05, Masahiko Sawada wrote:\n> I've attached a rebased patch.\n\nI think this is now almost done. Attached I have a small fixup patch \nwith some documentation proof-reading, and removing some comments I felt \nare redundant. Some others have also sent you some documentation \nupdates, so feel free to merge mine in with them.\n\nSome other comments:\n\nparse_subscription_options() and AlterSubscriptionStmt mixes regular \noptions and skip options in ways that confuse me. It seems to work \ncorrectly, though. I guess for now it's okay, but if we add more skip \noptions, it might be better to separate those more cleanly.\n\nI think the superuser check in AlterSubscription() might no longer be \nappropriate. Subscriptions can now be owned by non-superusers. Please \ncheck that.\n\nThe display order in psql \\dRs+ is a bit odd. I would put it at the \nend, certainly not between Two phase commit and Synchronous commit.\n\nPlease run pgperltidy over 028_skip_xact.pl.\n\nIs the setting of logical_decoding_work_mem in the test script required? \n If so, comment why.\n\nPlease document arguments origin_lsn and origin_timestamp of\nstop_skipping_changes(). Otherwise, one has to dig quite deep to find\nout what they are for.\n\nThis is all minor stuff, so I think when this and the nearby comments \nare addressed, this is fine by me.",
"msg_date": "Thu, 20 Jan 2022 17:18:51 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 1:18 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 18.01.22 07:05, Masahiko Sawada wrote:\n> > I've attached a rebased patch.\n>\n> I think this is now almost done. Attached I have a small fixup patch\n> with some documentation proof-reading, and removing some comments I felt\n> are redundant. Some others have also sent you some documentation\n> updates, so feel free to merge mine in with them.\n\nThank you for reviewing the patch and attaching the fixup patch!\n\n>\n> Some other comments:\n>\n> parse_subscription_options() and AlterSubscriptionStmt mixes regular\n> options and skip options in ways that confuse me. It seems to work\n> correctly, though. I guess for now it's okay, but if we add more skip\n> options, it might be better to separate those more cleanly.\n\nAgreed.\n\n>\n> I think the superuser check in AlterSubscription() might no longer be\n> appropriate. Subscriptions can now be owned by non-superusers. Please\n> check that.\n\nIIUC we don't allow non-superuser to own the subscription yet. We\nstill have the following superuser checks:\n\nIn CreateSubscription():\n\n if (!superuser())\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"must be superuser to create subscriptions\")));\n\nand in AlterSubscriptionOwner_internal();\n\n /* New owner must be a superuser */\n if (!superuser_arg(newOwnerId))\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n errmsg(\"permission denied to change owner of\nsubscription \\\"%s\\\"\",\n NameStr(form->subname)),\n errhint(\"The owner of a subscription must be a superuser.\")));\n\nAlso, doing superuser check here seems to be consistent with\npg_replication_origin_advance() which is another way to skip\ntransactions and also requires superuser permission.\n\n>\n> The display order in psql \\dRs+ is a bit odd. 
I would put it at the\n> end, certainly not between Two phase commit and Synchronous commit.\n\nFixed.\n\n>\n> Please run pgperltidy over 028_skip_xact.pl.\n\nFixed.\n\n>\n> Is the setting of logical_decoding_work_mem in the test script required?\n> If so, comment why.\n\nYes, it makes the tests check streaming logical replication cases\neasily. Added the comment.\n\n>\n> Please document arguments origin_lsn and origin_timestamp of\n> stop_skipping_changes(). Otherwise, one has to dig quite deep to find\n> out what they are for.\n\nAdded.\n\nAlso, after reading the documentation updates, I realized that there\nare two paragraphs describing almost the same things so merged them.\nPlease check the doc updates in the latest patch.\n\nI've attached an updated patch that incorporated these comments as\nwell as other comments I got so far.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 21 Jan 2022 12:08:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 3:32 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Jan 15, 2022 at 3:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 14, 2022 at 5:35 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Thanks for the updated patch, few minor comments:\n> > > 1) Should \"SKIP\" be \"SKIP (\" here:\n> > > @@ -1675,7 +1675,7 @@ psql_completion(const char *text, int start, int end)\n> > > /* ALTER SUBSCRIPTION <name> */\n> > > else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny))\n> > > COMPLETE_WITH(\"CONNECTION\", \"ENABLE\", \"DISABLE\", \"OWNER TO\",\n> > > - \"RENAME TO\", \"REFRESH\n> > > PUBLICATION\", \"SET\",\n> > > + \"RENAME TO\", \"REFRESH\n> > > PUBLICATION\", \"SET\", \"SKIP\",\n> > >\n> >\n> > Won't the another rule as follows added by patch sufficient for what\n> > you are asking?\n> > + /* ALTER SUBSCRIPTION <name> SKIP */\n> > + else if (Matches(\"ALTER\", \"SUBSCRIPTION\", MatchAny, \"SKIP\"))\n> > + COMPLETE_WITH(\"(\");\n> >\n> > I might be missing something but why do you think the handling of SKIP\n> > be any different than what we are doing for SET?\n>\n> In case of \"ALTER SUBSCRIPTION sub1 SET\" there are 2 possible tab\n> completion options, user can either specify \"ALTER SUBSCRIPTION sub1\n> SET PUBLICATION pub1\" or \"ALTER SUBSCRIPTION sub1 SET ( SET option\n> like STREAMING,etc = 'on')\", that is why we have 2 possible options as\n> below:\n> postgres=# ALTER SUBSCRIPTION sub1 SET\n> ( PUBLICATION\n>\n> Whereas in the case of SKIP there is only one possible tab completion\n> option i.e XID. We handle similarly in case of WITH option, we specify\n> \"WITH (\" in case of tab completion for \"CREATE PUBLICATION pub1\"\n> postgres=# CREATE PUBLICATION pub1\n> FOR ALL TABLES FOR ALL TABLES IN SCHEMA FOR TABLE\n> WITH (\n\nRight. 
I've incorporated this comment into the latest v9 patch[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDOuNtvFUfU2wH2QgTJ6AyMXXh_vdA87qX0mUibdsrYTg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 Jan 2022 12:11:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 4:14 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> On Tue, Jan 18, 2022 at 5:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached a rebased patch.\n>\n> A couple of comments for the v8 patch:\n\nThank you for the comments!\n\n>\n> doc/src/sgml/logical-replication.sgml\n>\n> (1)\n> Strictly speaking, it's the transaction, not transaction ID, that\n> contains changes, so suggesting minor change:\n>\n> BEFORE:\n> + The transaction ID that contains the change violating the constraint can be\n> AFTER:\n> + The ID of the transaction that contains the change violating the\n> constraint can be\n>\n>\n> doc/src/sgml/ref/alter_subscription.sgml\n>\n> (2) apply_handle_commit_internal\n> It's not entirely apparent what commits the clearing of subskipxid\n> here, so I suggest the following addition:\n>\n> BEFORE:\n> + * clear subskipxid of pg_subscription.\n> AFTER:\n> + * clear subskipxid of pg_subscription, then commit.\n>\n\nThese comments are merged with Peter's comments and incorporated into\nthe latest v9 patch[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDOuNtvFUfU2wH2QgTJ6AyMXXh_vdA87qX0mUibdsrYTg%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 Jan 2022 12:13:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 19, 2022 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 19, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 19, 2022 at 12:22 PM osumi.takamichi@fujitsu.com\n> > <osumi.takamichi@fujitsu.com> wrote:\n> > >\n> > > (6) apply_handle_stream_abort\n> > >\n> > > @@ -1209,6 +1300,13 @@ apply_handle_stream_abort(StringInfo s)\n> > >\n> > > logicalrep_read_stream_abort(s, &xid, &subxid);\n> > >\n> > > + /*\n> > > + * We don't expect the user to set the XID of the transaction that is\n> > > + * rolled back but if the skip XID is set, clear it.\n> > > + */\n> > > + if (MySubscription->skipxid == xid || MySubscription->skipxid == subxid)\n> > > + clear_subscription_skip_xid(MySubscription->skipxid, InvalidXLogRecPtr, 0);\n> > > +\n> > >\n> > > In my humble opinion, this still cares about subtransaction xid still.\n> > > If we want to be consistent with top level transactions only,\n> > > I felt checking MySubscription->skipxid == xid should be sufficient.\n> >\n> > I thought if we can clear subskipxid whose value has already been\n> > processed on the subscriber with a reasonable cost it makes sense to\n> > do that because it can reduce the possibility of the issue that XID is\n> > wraparound while leaving the wrong in subskipxid.\n> >\n>\n> I guess that could happen if the user sets some unrelated XID value.\n> So, I think it should be okay to not clear this but we can add a\n> comment in the code at that place that we don't clear subtransaction's\n> XID as we don't support skipping individual subtransactions or\n> something like that.\n\nAgreed and added the comment in the latest patch[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoDOuNtvFUfU2wH2QgTJ6AyMXXh_vdA87qX0mUibdsrYTg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 Jan 2022 12:14:05 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 8:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 1:18 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > I think the superuser check in AlterSubscription() might no longer be\n> > appropriate. Subscriptions can now be owned by non-superusers. Please\n> > check that.\n>\n> IIUC we don't allow non-superuser to own the subscription yet. We\n> still have the following superuser checks:\n>\n> In CreateSubscription():\n>\n> if (!superuser())\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"must be superuser to create subscriptions\")));\n>\n> and in AlterSubscriptionOwner_internal();\n>\n> /* New owner must be a superuser */\n> if (!superuser_arg(newOwnerId))\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"permission denied to change owner of\n> subscription \\\"%s\\\"\",\n> NameStr(form->subname)),\n> errhint(\"The owner of a subscription must be a superuser.\")));\n>\n> Also, doing superuser check here seems to be consistent with\n> pg_replication_origin_advance() which is another way to skip\n> transactions and also requires superuser permission.\n>\n\n+1. I think this feature has the potential to make data inconsistent\nand only be used as a last resort to resolve the conflicts so it is\nbetter to allow this as a superuser.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Jan 2022 08:50:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 18, 2022 at 9:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 18, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 18, 2022 at 8:34 AM tanghy.fnst@fujitsu.com\n> > <tanghy.fnst@fujitsu.com> wrote:\n> > >\n> > > On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > 2) The following two places are not consistent in whether \"= value\" is surround\n> > > with square brackets.\n> > >\n> > > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\n> > >\n> > > + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n> > >\n> > > Should we modify the first place to:\n> > > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )\n> > >\n> > > Because currently there is only one skip_option - xid, and a parameter must be\n> > > specified when using it.\n> > >\n> >\n> > Good observation. Do we really need [, ... ] as currently, we support\n> > only one value for XID?\n>\n> I think no. In the doc, it should be:\n>\n> ALTER SUBSCRIPTION name SKIP ( skip_option = value )\n>\n\nIn the latest patch, I see:\n+ <varlistentry>\n+ <term><literal>SKIP ( <replaceable\nclass=\"parameter\">skip_option</replaceable> = <replaceable\nclass=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n\nWhat do we want to indicate by [, ... ]? To me, it appears like\nmultiple options but that is not what we support currently.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Jan 2022 09:50:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 18, 2022 at 9:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 18, 2022 at 12:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 18, 2022 at 8:34 AM tanghy.fnst@fujitsu.com\n> > > <tanghy.fnst@fujitsu.com> wrote:\n> > > >\n> > > > On Mon, Jan 17, 2022 2:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > 2) The following two places are not consistent in whether \"= value\" is surround\n> > > > with square brackets.\n> > > >\n> > > > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> [= <replaceable class=\"parameter\">value</replaceable>] [, ... ] )\n> > > >\n> > > > + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n> > > >\n> > > > Should we modify the first place to:\n> > > > +ALTER SUBSCRIPTION <replaceable class=\"parameter\">name</replaceable> SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</replaceable> [, ... ] )\n> > > >\n> > > > Because currently there is only one skip_option - xid, and a parameter must be\n> > > > specified when using it.\n> > > >\n> > >\n> > > Good observation. Do we really need [, ... ] as currently, we support\n> > > only one value for XID?\n> >\n> > I think no. In the doc, it should be:\n> >\n> > ALTER SUBSCRIPTION name SKIP ( skip_option = value )\n> >\n>\n> In the latest patch, I see:\n> + <varlistentry>\n> + <term><literal>SKIP ( <replaceable\n> class=\"parameter\">skip_option</replaceable> = <replaceable\n> class=\"parameter\">value</replaceable> [, ... ] )</literal></term>\n>\n> What do we want to indicate by [, ... ]? 
To me, it appears like\n> multiple options but that is not what we support currently.\n\nYou're right. It's an oversight.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 21 Jan 2022 13:40:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, January 21, 2022 12:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated patch that incorporated these commends as well as\r\n> other comments I got so far.\r\nThank you for your update !\r\n\r\nFew minor comments.\r\n\r\n(1) trivial question\r\n\r\nFor the users, is it perfectly clear from the current doc description of v9\r\nthat in the cascading logical replication setup,\r\nwe can't selectively skip an arbitrary transaction of one of the upper nodes\r\nwithout skipping all its executions on subsequent nodes ?\r\n\r\nIIUC, this is because we don't write the changes to WAL either and\r\ncan't propagate the contents to subsequent nodes.\r\n\r\nI tested this case and it didn't propagate, as I expected.\r\nThis can apply to other measures for conflicts, though.\r\n\r\n(2) suggestion\r\n\r\nThere's no harm in writing a note for the committer,\r\n\"Bump catalog version\", in the commit log,\r\nas the patch changes the catalog.\r\n\r\n(3) minor question\r\n\r\nIn the past, there was a discussion that\r\nit might be better if we reset the XID\r\naccording to a change of subconninfo,\r\nwhich might be an opportunity to connect to another\r\npublisher with a different XID space.\r\nCurrently, we can regard it as the user's responsibility.\r\nIs this correct ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 21 Jan 2022 05:02:45 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 10:32 AM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, January 21, 2022 12:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch that incorporated these commends as well as\n> > other comments I got so far.\n> Thank you for your update !\n>\n> Few minor comments.\n>\n> (1) trivial question\n>\n> For the users,\n> was it perfectly clear that in the cascading logical replication setup,\n> we can't selectively skip an arbitrary transaction of one upper nodes,\n> without skipping its all executions on subsequent nodes,\n> when we refer to the current doc description of v9 ?\n>\n> IIUC, this is because we don't write changes WAL either and\n> can't propagate the contents to subsequent nodes.\n>\n> I tested this case and it didn't, as I expected.\n> This can apply to other measures for conflicts, though.\n>\n\nRight, there is nothing new as the user will same effect when she uses\nexisting function pg_replication_origin_advance(). So, not sure if we\nwant to add something specific to this.\n\n>\n> (3) minor question\n>\n> In the past, there was a discussion that\n> it might be better if we reset the XID\n> according to a change of subconninfo,\n> which might be an opportunity to connect another\n> publisher of a different XID space.\n> Currently, we can regard it as user's responsibility.\n> Was this correct ?\n>\n\nI think if the user points to another publisher, doesn't it similarly\nneeds to change slot_name as well? If so, I think this can be treated\nin a similar way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Jan 2022 10:59:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 2:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch that incorporated these commends as\n> well as other comments I got so far.\n>\n\nsrc/backend/replication/logical/worker.c\n\n(1)\nDidn't you mean to say \"check the\" instead of \"clear\" in the following\ncomment? (the subtransaction's XID was never being cleared before,\njust checked against the skipxid, and now that check has been removed)\n\n+ * ... . Since we don't\n+ * support skipping individual subtransactions we don't clear\n+ * subtransaction's XID.\n\nOther than that, the patch LGTM.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 21 Jan 2022 16:50:40 +1100",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, January 21, 2022 2:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Fri, Jan 21, 2022 at 10:32 AM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Friday, January 21, 2022 12:08 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > I've attached an updated patch that incorporated these commends as\r\n> > > well as other comments I got so far.\r\n> > Thank you for your update !\r\n> >\r\n> > Few minor comments.\r\n> >\r\n> > (1) trivial question\r\n> >\r\n> > For the users,\r\n> > was it perfectly clear that in the cascading logical replication\r\n> > setup, we can't selectively skip an arbitrary transaction of one upper\r\n> > nodes, without skipping its all executions on subsequent nodes, when\r\n> > we refer to the current doc description of v9 ?\r\n> >\r\n> > IIUC, this is because we don't write changes WAL either and can't\r\n> > propagate the contents to subsequent nodes.\r\n> >\r\n> > I tested this case and it didn't, as I expected.\r\n> > This can apply to other measures for conflicts, though.\r\n> >\r\n> \r\n> Right, there is nothing new as the user will same effect when she uses existing\r\n> function pg_replication_origin_advance(). So, not sure if we want to add\r\n> something specific to this.\r\nOkay, thank you for clarifying this !\r\nThat's good to know.\r\n\r\n\r\n> > (3) minor question\r\n> >\r\n> > In the past, there was a discussion that it might be better if we\r\n> > reset the XID according to a change of subconninfo, which might be an\r\n> > opportunity to connect another publisher of a different XID space.\r\n> > Currently, we can regard it as user's responsibility.\r\n> > Was this correct ?\r\n> >\r\n> \r\n> I think if the user points to another publisher, doesn't it similarly needs to\r\n> change slot_name as well? If so, I think this can be treated in a similar way.\r\nI see. 
Then, in AlterSubscription(), switching the slot_name\r\ndoesn't affect other columns, which means that this time\r\nwe don't need any special measure for this either, IIUC.\r\nThanks !\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 21 Jan 2022 07:45:06 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > What do we want to indicate by [, ... ]? To me, it appears like\n> > multiple options but that is not what we support currently.\n>\n> You're right. It's an oversight.\n>\n\nI have fixed this and a few other things in the attached patch.\n1.\nThe newly added column needs to be updated in the following statement:\n-- All columns of pg_subscription except subconninfo are publicly readable.\nREVOKE ALL ON pg_subscription FROM public;\nGRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\n substream, subtwophasestate, subslotname, subsynccommit,\nsubpublications)\n ON pg_subscription TO public;\n\n2.\n+stop_skipping_changes(bool clear_subskipxid, XLogRecPtr origin_lsn,\n+ TimestampTz origin_timestamp)\n+{\n+ Assert(is_skipping_changes());\n+\n+ ereport(LOG,\n+ (errmsg(\"done skipping logical replication transaction %u\",\n+ skip_xid)));\n\nIsn't it better to move this LOG at the end of this function? Because\nclear* functions can give an error, so it is better to move it after\nthat. I have done that in the attached.\n\n3.\n+-- fail - must be superuser\n+SET SESSION AUTHORIZATION 'regress_subscription_user2';\n+ALTER SUBSCRIPTION regress_testsub SKIP (xid = 100);\n+ERROR: must be owner of subscription regress_testsub\n\nThis test doesn't seem to be right. You want to get the error for the\nsuperuser but the error is for the owner. I have changed this test to\ndo what it intends to do.\n\nApart from this, I have changed a few comments and ran pgindent. Do\nlet me know what you think of the changes?\n\nFew things that I think we can improve in 028_skip_xact.pl are as follows:\n\nAfter CREATE SUBSCRIPTION, wait for initial sync to be over and\ntwo_phase state to be enabled. Please see 021_twophase. 
For the\nstreaming case, we might be able to ensure streaming even with less\ndata. Can you please try that?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 21 Jan 2022 17:25:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> Few things that I think we can improve in 028_skip_xact.pl are as follows:\n>\n> After CREATE SUBSCRIPTION, wait for initial sync to be over and\n> two_phase state to be enabled. Please see 021_twophase. For the\n> streaming case, we might be able to ensure streaming even with lesser\n> data. Can you please try that?\n>\n\nI noticed that the time taken by the test newly added by this patch is on the\nupper side. See the comparison with the subscription test that takes the most\ntime:\n[17:38:49] t/028_skip_xact.pl ................. ok 9298 ms\n[17:38:59] t/100_bugs.pl ...................... ok 11349 ms\n\nI think we can reduce the time by removing some stream tests without much\nimpact on coverage, possibly the ones related to 2PC and streaming together,\nand if you do that we probably don't need a subscription with both 2PC\nand streaming enabled.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 21 Jan 2022 17:43:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 21.01.22 04:08, Masahiko Sawada wrote:\n>> I think the superuser check in AlterSubscription() might no longer be\n>> appropriate. Subscriptions can now be owned by non-superusers. Please\n>> check that.\n> \n> IIUC we don't allow non-superuser to own the subscription yet. We\n> still have the following superuser checks:\n> \n> In CreateSubscription():\n> \n> if (!superuser())\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"must be superuser to create subscriptions\")));\n> \n> and in AlterSubscriptionOwner_internal();\n> \n> /* New owner must be a superuser */\n> if (!superuser_arg(newOwnerId))\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> errmsg(\"permission denied to change owner of\n> subscription \\\"%s\\\"\",\n> NameStr(form->subname)),\n> errhint(\"The owner of a subscription must be a superuser.\")));\n> \n> Also, doing superuser check here seems to be consistent with\n> pg_replication_origin_advance() which is another way to skip\n> transactions and also requires superuser permission.\n\nI'm referring to commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca. You \nstill have to be superuser to create a subscription, but you can change \nthe owner to a nonprivileged user and it will observe table permissions \non the subscriber.\n\nAssuming my understanding of that commit is correct, I think it would be \nsufficient in your patch to check that the current user is the owner of \nthe subscription.\n\n\n",
"msg_date": "Fri, 21 Jan 2022 14:53:39 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 4:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Apart from this, I have changed a few comments and ran pgindent. Do\n> let me know what you think of the changes?\n>\n\nThe paragraph describing ALTER SUBSCRIPTION SKIP seems unnecessarily\nrepetitive. Consider:\n\"\"\"\nSkips applying all changes of the specified remote transaction, whose value\nshould be obtained from pg_stat_subscription_workers.last_error_xid. While\nthis will result in avoiding the last error on the subscription, thus\nallowing it to resume working. See \"link to a more holistic description in\nthe Logical Replication chapter\" for alternative means of resolving\nsubscription errors. Removing an entire transaction from the history of a\ntable should be considered a last resort as it can leave the system in a\nvery inconsistent state.\n\nNote, this feature will not accept transactions prepared under two-phase\ncommit.\n\nThis command sets pg_subscription.subskipxid field upon issuance and the\nsystem clears the same field upon seeing and successfully skipped the\nidentified transaction. Issuing this command again while a skipped\ntransaction is pending replaces the existing transaction with the new one.\n\"\"\"\n\nThen change the subskipxid column description to be:\n\"\"\"\nID of the transaction whose changes are to be skipped. It is 0 when there\nare no pending skips. 
This is set by issuing ALTER SUBSCRIPTION SKIP and\nresets back to 0 when the identified transactions passes through the\nsubscription stream and is successfully ignored.\n\"\"\"\n\nI don't understand why/how \", if a valid transaction ID;\" comes into play\n(how would we know whether it is valid, or if we do ALTER SUBSCRIPTION SKIP\nshould prohibit the invalid value from being chosen).\n\nI'm against mentioning subtransactions in the skip_option description.\n\nThe Logical Replication page changes provide good content overall but I\ndislike going into detail about how to perform conflict resolution in the\nthird paragraph and then summarize the various forms of conflict resolution\nin the newly added forth. Maybe re-work things like:\n\n1. Logical replication behaves...\n2. A conflict will produce...details can be found in places...\n3. Resolving conflicts can be done by...\n4. (split and reworded) If choosing to simply skip the offending\ntransaction you take the pg_stat_subscription_worker.last_error_xid value\n(716 in the example above) and provide it while executing ALTER\nSUBSCRIPTION SKIP...\n5. (split and reworded) Prior to v15 ALTER SUBSCRIPTION SKIP was not\navailable and instead you had to use the pg_replication_origin_advance()\nfunction...\n\nDon't just list out two options for the user to perform the same action.\nTell a story about why we felt compelled to add ALTER SYSTEM SKIP and why\neither the function is now deprecated or is useful given different\ncircumstances (the former seems likely).\n\nDavid J.\n\n",
"msg_date": "Fri, 21 Jan 2022 09:29:51 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 7:23 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 21.01.22 04:08, Masahiko Sawada wrote:\n> >> I think the superuser check in AlterSubscription() might no longer be\n> >> appropriate. Subscriptions can now be owned by non-superusers. Please\n> >> check that.\n> >\n> > IIUC we don't allow non-superuser to own the subscription yet. We\n> > still have the following superuser checks:\n> >\n> > In CreateSubscription():\n> >\n> > if (!superuser())\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > errmsg(\"must be superuser to create subscriptions\")));\n> >\n> > and in AlterSubscriptionOwner_internal();\n> >\n> > /* New owner must be a superuser */\n> > if (!superuser_arg(newOwnerId))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > errmsg(\"permission denied to change owner of\n> > subscription \\\"%s\\\"\",\n> > NameStr(form->subname)),\n> > errhint(\"The owner of a subscription must be a superuser.\")));\n> >\n> > Also, doing superuser check here seems to be consistent with\n> > pg_replication_origin_advance() which is another way to skip\n> > transactions and also requires superuser permission.\n>\n> I'm referring to commit a2ab9c06ea15fbcb2bfde570986a06b37f52bcca. You\n> still have to be superuser to create a subscription, but you can change\n> the owner to a nonprivileged user and it will observe table permissions\n> on the subscriber.\n>\n> Assuming my understanding of that commit is correct, I think it would be\n> sufficient in your patch to check that the current user is the owner of\n> the subscription.\n>\n\nWon't we already do that for Alter Subscription command which means\nnothing special needs to be done for this? However, it seems to me\nthat the idea we are trying to follow here is that as this option can\nlead to data inconsistency, it is good to allow only superusers to\nspecify this option. 
The owner of the subscription can be changed to a\nnon-superuser as well, in which case I think it won't be a good idea to\nallow this option. OTOH, if we think it is okay to allow such an\noption to users that don't have superuser privilege, then I think\nallowing it to the owner of the subscription makes sense to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jan 2022 08:24:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 10:00 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 4:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> Apart from this, I have changed a few comments and ran pgindent. Do\n>> let me know what you think of the changes?\n>\n>\n> The paragraph describing ALTER SUBSCRIPTION SKIP seems unnecessarily repetitive. Consider:\n> \"\"\"\n> Skips applying all changes of the specified remote transaction, whose value should be obtained from pg_stat_subscription_workers.last_error_xid.\n>\n\nHere, you can also say that the value can be found from server logs as well.\n\n>\n While this will result in avoiding the last error on the\nsubscription, thus allowing it to resume working. See \"link to a more\nholistic description in the Logical Replication chapter\" for\nalternative means of resolving subscription errors. Removing an\nentire transaction from the history of a table should be considered a\nlast resort as it can leave the system in a very inconsistent state.\n>\n> Note, this feature will not accept transactions prepared under two-phase commit.\n>\n> This command sets pg_subscription.subskipxid field upon issuance and the system clears the same field upon seeing and successfully skipped the identified transaction. Issuing this command again while a skipped transaction is pending replaces the existing transaction with the new one.\n> \"\"\"\n>\n\nThe proposed text sounds better to me except for a minor change as\nsuggested above.\n\n> Then change the subskipxid column description to be:\n> \"\"\"\n> ID of the transaction whose changes are to be skipped. It is 0 when there are no pending skips. 
This is set by issuing ALTER SUBSCRIPTION SKIP and resets back to 0 when the identified transactions passes through the subscription stream and is successfully ignored.\n> \"\"\"\n>\n\nUsers can manually reset it by specifying NONE, so that should be\ncovered in the above text, otherwise, looks good.\n\n> I don't understand why/how \", if a valid transaction ID;\" comes into play (how would we know whether it is valid, or if we do ALTER SUBSCRIPTION SKIP should prohibit the invalid value from being chosen).\n>\n\nWhat do you mean by invalid value here? Is it the value lesser than\nFirstNormalTransactionId or a value that is of the non-error\ntransaction? For the former, we already have a check in the patch and\nfor later we can't identify it with any certainty because the error\nstats are collected by the stats collector.\n\n> I'm against mentioning subtransactions in the skip_option description.\n>\n\nWe have mentioned that because currently, we don't support it but in\nthe future one can come up with an idea to support it. What problem do\nyou see with it?\n\n> The Logical Replication page changes provide good content overall but I dislike going into detail about how to perform conflict resolution in the third paragraph and then summarize the various forms of conflict resolution in the newly added forth. Maybe re-work things like:\n>\n> 1. Logical replication behaves...\n> 2. A conflict will produce...details can be found in places...\n> 3. Resolving conflicts can be done by...\n> 4. (split and reworded) If choosing to simply skip the offending transaction you take the pg_stat_subscription_worker.last_error_xid value (716 in the example above) and provide it while executing ALTER SUBSCRIPTION SKIP...\n> 5. (split and reworded) Prior to v15 ALTER SUBSCRIPTION SKIP was not available and instead you had to use the pg_replication_origin_advance() function...\n>\n> Don't just list out two options for the user to perform the same action. 
Tell a story about why we felt compelled to add ALTER SYSTEM SKIP and why either the function is now deprecated or is useful given different circumstances (the former seems likely).\n>\n\nPersonally, I don't see much value in the split (especially giving\ncontext like \"Prior to v15 ..) but specifying the circumstances where\neach of the options could be useful.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jan 2022 11:00:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 10:30 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n> On Fri, Jan 21, 2022 at 10:00 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 4:55 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> Apart from this, I have changed a few comments and ran pgindent. Do\n> >> let me know what you think of the changes?\n> >\n> >\n> > The paragraph describing ALTER SUBSCRIPTION SKIP seems unnecessarily\n> repetitive. Consider:\n> > \"\"\"\n> > Skips applying all changes of the specified remote transaction, whose\n> value should be obtained from pg_stat_subscription_workers.last_error_xid.\n> >\n>\n> Here, you can also say that the value can be found from server logs as\n> well.\n>\n\nsubscriber's server logs, right? I would agree that adding that for\ncompleteness is warranted.\n\n\n> > Then change the subskipxid column description to be:\n> > \"\"\"\n> > ID of the transaction whose changes are to be skipped. It is 0 when\n> there are no pending skips. This is set by issuing ALTER SUBSCRIPTION SKIP\n> and resets back to 0 when the identified transactions passes through the\n> subscription stream and is successfully ignored.\n> > \"\"\"\n> >\n>\n> Users can manually reset it by specifying NONE, so that should be\n> covered in the above text, otherwise, looks good.\n>\n\nI agree with incorporating \"reset\" into the paragraph somehow - does not\nhave to mention NONE, just that ALTER SUBSCRIPTION SKIP (not a family\nfriendly abbreviation...) is what does it.\n\n\n> > I don't understand why/how \", if a valid transaction ID;\" comes into\n> play (how would we know whether it is valid, or if we do ALTER SUBSCRIPTION\n> SKIP should prohibit the invalid value from being chosen).\n> >\n>\n> What do you mean by invalid value here? Is it the value lesser than\n> FirstNormalTransactionId or a value that is of the non-error\n> transaction? 
For the former, we already have a check in the patch and\n> for later we can't identify it with any certainty because the error\n> stats are collected by the stats collector.\n>\n\nThe original proposal qualifies the non-zero transaction id in\nsubskipxid as being a \"valid transaction ID\" and that invalid ones (which\nis how \"otherwise\" is interpreted given the \"valid\" qualification preceding\nit) are shown as 0. As an end-user that makes me wonder what it means for\na transaction ID to be invalid. My point is that dropping the mention of\n\"valid transaction ID\" avoids that and lets the reader operate with an\nunderstanding that things should \"just work\". If I see a non-zero in the\ncolumn I have a pending skip and if I see zero I do not. My wording\nassumes it is that simple. If it isn't I would need some clarity as to why\nit is not in order to write something I could read and understand from my\ninexperienced user-centric point-of-view.\n\nI get that I may provide a transaction ID that is invalid such that the\nsystem could never see it (or at least not for a long while) - say we\nerror on transaction 102 and I typo it as 1002 or 101. But I would expect\neither an error where I make the typo or the numbers 1002 or 101 to appear\non the table. I would not expect my 101 typo to result in a 0 appearing on\nthe table (and if it does so today I argue that is a POLA violation).\nThus, \"if a valid transaction ID\" from the original text just doesn't make\nsense to me.\n\nIn typical usage it would seem strange to allow a skip to be recorded if\nthere is no existing error in the subscription. 
Should we (do we, haven't\nread the code) warn in that situation?\n\n*Or, why even force them to specify a number instead of just saying SKIP\nand if there is a current error we skip its transaction, otherwise we warn\nthem that nothing happened because there is no last error.*\n\nAdditionally, the description for pg_stat_subscription_workers should\ndescribe what happens once the transaction represented by last_error_xid\nhas either been successfully processed or skipped. Does this \"last error\"\nstick around until another error happens (which is hopefully very rare) or\ndoes it reset to blanks? Seems like it should reset, which really makes\nthis more of an \"active_error\" instead of a \"last_error\". This system is\nlinear, we are stuck until this error is resolved, making it active.\n\n\n> > I'm against mentioning subtransactions in the skip_option description.\n> >\n>\n> We have mentioned that because currently, we don't support it but in\n> the future one can come up with an idea to support it. What problem do\n> you see with it?\n>\n\nIf you ever get around to implementing the feature then by all means add\nit. My main issue is that we basically never talk about subtransactions in\nthe user-facing documentation and it doesn't seem desirable to do so here.\nKnowing that a whole transaction is skipped is all I need to care about as\na user. I believe that no users will be asking \"what about subtransactions\n(savepoints)\" but by mentioning it less experienced ones will now have\nsomething to be curious about that they really do not need to be.\n\n\n>\n> > The Logical Replication page changes provide good content overall but I\n> dislike going into detail about how to perform conflict resolution in the\n> third paragraph and then summarize the various forms of conflict resolution\n> in the newly added forth. Maybe re-work things like:\n>\n> Personally, I don't see much value in the split (especially giving\n> context like \"Prior to v15 ..) 
but specifying the circumstances where\n> each of the options could be useful.\n>\n\nYes, I've been reminded of the desire to avoid mentioning versions and\nagree doing so here is correct. The added context is desired, the style\ndepends on the content.\n\nDavid J.",
"msg_date": "Sat, 22 Jan 2022 00:10:53 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 12:41 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 10:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Jan 21, 2022 at 10:00 PM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Fri, Jan 21, 2022 at 4:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> Apart from this, I have changed a few comments and ran pgindent. Do\n>> >> let me know what you think of the changes?\n>> >\n>> >\n>> > The paragraph describing ALTER SUBSCRIPTION SKIP seems unnecessarily repetitive. Consider:\n>> > \"\"\"\n>> > Skips applying all changes of the specified remote transaction, whose value should be obtained from pg_stat_subscription_workers.last_error_xid.\n>> >\n>>\n>> Here, you can also say that the value can be found from server logs as well.\n>\n>\n> subscriber's server logs, right?\n>\n\nRight.\n\n> I would agree that adding that for completeness is warranted.\n>\n>>\n>> > Then change the subskipxid column description to be:\n>> > \"\"\"\n>> > ID of the transaction whose changes are to be skipped. It is 0 when there are no pending skips. This is set by issuing ALTER SUBSCRIPTION SKIP and resets back to 0 when the identified transactions passes through the subscription stream and is successfully ignored.\n>> > \"\"\"\n>> >\n>>\n>> Users can manually reset it by specifying NONE, so that should be\n>> covered in the above text, otherwise, looks good.\n>\n>\n> I agree with incorporating \"reset\" into the paragraph somehow - does not have to mention NONE, just that ALTER SUBSCRIPTION SKIP (not a family friendly abbreviation...) 
is what does it.\n>\n\nIt is not clear to me what you have in mind here but to me in this\ncontext saying \"Setting <literal>NONE</literal> resets the transaction\nID.\" seems quite reasonable.\n\n>>\n>> > I don't understand why/how \", if a valid transaction ID;\" comes into play (how would we know whether it is valid, or if we do ALTER SUBSCRIPTION SKIP should prohibit the invalid value from being chosen).\n>> >\n>>\n>> What do you mean by invalid value here? Is it the value lesser than\n>> FirstNormalTransactionId or a value that is of the non-error\n>> transaction? For the former, we already have a check in the patch and\n>> for later we can't identify it with any certainty because the error\n>> stats are collected by the stats collector.\n>\n>\n> The original proposal qualifies the non-zero transaction id in subskipxid as being a \"valid transaction ID\" and that invalid ones (which is how \"otherwise\" is interpreted given the \"valid\" qualification preceding it) are shown as 0. As an end-user that makes me wonder what it means for a transaction ID to be invalid. My point is that dropping the mention of \"valid transaction ID\" avoids that and lets the reader operate with an understanding that things should \"just work\". If I see a non-zero in the column I have a pending skip and if I see zero I do not. My wording assumes it is that simple. If it isn't I would need some clarity as to why it is not in order to write something I could read and understand from my inexperienced user-centric point-of-view.\n>\n> I get that I may provide a transaction ID that is invalid such that the system could never see it (or at least not for a long while) - say we error on transaction 102 and I typo it as 1002 or 101. But I would expect either an error where I make the typo or the numbers 1002 or 101 to appear on the table. I would not expect my 101 typo to result in a 0 appearing on the table (and if it does so today I argue that is a POLA violation). 
Thus, \"if a valid transaction ID\" from the original text just doesn't make sense to me.\n>\n> In typical usage it would seem strange to allow a skip to be recorded if there is no existing error in the subscription. Should we (do we, haven't read the code) warn in that situation?\n>\n\nYeah, we will error in that situation. The only invalid values are\nsystem reserved values (1,2).\n\n> Or, why even force them to specify a number instead of just saying SKIP and if there is a current error we skip its transaction, otherwise we warn them that nothing happened because there is no last error.\n>\n\nThe idea is that we might extend this feature to skip specific\noperations on relations or maybe by having other identifiers. One idea\nwe discussed was to automatically fetch the last error xid but then\ndecided it can be done as a later patch.\n\n> Additionally, the description for pg_stat_subscription_workers should describe what happens once the transaction represented by last_error_xid has either been successfully processed or skipped. Does this \"last error\" stick around until another error happens (which is hopefully very rare) or does it reset to blanks?\n>\n\nIt will be reset only on subscription drop, otherwise, it will stick\naround until another error happens.\n\n> Seems like it should reset, which really makes this more of an \"active_error\" instead of a \"last_error\". This system is linear, we are stuck until this error is resolved, making it active.\n>\n>>\n>> > I'm against mentioning subtransactions in the skip_option description.\n>> >\n>>\n>> We have mentioned that because currently, we don't support it but in\n>> the future one can come up with an idea to support it. What problem do\n>> you see with it?\n>\n>\n> If you ever get around to implementing the feature then by all means add it. My main issue is that we basically never talk about subtransactions in the user-facing documentation and it doesn't seem desirable to do so here. 
Knowing that a whole transaction is skipped is all I need to care about as a user. I believe that no users will be asking \"what about subtransactions (savepoints)\" but by mentioning it less experienced ones will now have something to be curious about that they really do not need to be.\n>\n\nIt is not that we don't mention subtransactions in the docs but I see\nyour point and I think we can avoid mentioning it in this case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Jan 2022 15:11:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 2:41 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, Jan 22, 2022 at 12:41 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 10:30 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Fri, Jan 21, 2022 at 10:00 PM David G. Johnston\n> >> <david.g.johnston@gmail.com> wrote:\n> >> >\n> >> > On Fri, Jan 21, 2022 at 4:55 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >> >>\n>\n> >\n> > I agree with incorporating \"reset\" into the paragraph somehow - does not\n> have to mention NONE, just that ALTER SUBSCRIPTION SKIP (not a family\n> friendly abbreviation...) is what does it.\n> >\n>\n> It is not clear to me what you have in mind here but to me in this\n> context saying \"Setting <literal>NONE</literal> resets the transaction\n> ID.\" seems quite reasonable.\n>\n\nOK\n\n>\n> Yeah, we will error in that situation. The only invalid values are\n> system reserved values (1,2).\n>\n\nSo long as the ALTER command errors when asked to skip those IDs there\nisn't any reason for an end-user, who likely doesn't know or care that 1\nand 2 are special, to be concerned about them (the only two invalid values)\nwhile reading the docs.\n\n\n> > Or, why even force them to specify a number instead of just saying SKIP\n> and if there is a current error we skip its transaction, otherwise we warn\n> them that nothing happened because there is no last error.\n> >\n>\n> The idea is that we might extend this feature to skip specific\n> operations on relations or maybe by having other identifiers.\n\n\nAgain, you've already got syntax reserved that lets you add more features\nto this command in the future; and removing warnings or errors because new\nfeatures make them moot is easy. Lets document and code what we are\nwilling to implement today. 
A single top-level transaction xid that is\npresently blocking the worker from applying any more WAL.\n\nOne idea\n> we discussed was to automatically fetch the last error xid but then\n> decided it can be done as a later patch.\n>\n\nThis seems backwards. The user-friendly approach is to not make them type\nin anything at all. That said, this particular UX seems like it could use\nsome safety. Thus I would propose at this time that attempting to set the\nskip_option to anything but THE active_error_xid for the named subscription\nresults in an error. Once you add new features the user can set the\nskip_option to other things without provoking errors. Again, I consider\nthis a safety feature since the user now has to accurately match the xid to\nthe name in the SQL in order to perform a successful skip - and the to-be\naffected transaction has to be one that is preventing replication from\nmoving forward. I'm not interested in providing a foot-gun where an\narbitrary future transaction can be scheduled to be skipped. Running the\ncommand twice with the same values should provoke an error since the first\nrun should be allowed to finish (?). Also, we handle the situation where\nthe state of the worker changes between when the user saw the error and\nwrote down the xid to skip and the actual execution of the alter command.\nMaybe not highly anticipated scenarios but this is an easy win to deal with\nthem.\n\n\n> > Additionally, the description for pg_stat_subscription_workers should\n> describe what happens once the transaction represented by last_error_xid\n> has either been successfully processed or skipped. 
Does this \"last error\"\n> stick around until another error happens (which is hopefully very rare) or\n> does it reset to blanks?\n> >\n>\n> It will be reset only on subscription drop, otherwise, it will stick\n> around until another error happens.\n\n\nI really dislike the user experience this provides, and given it is new in\nv15 (and right now this table seems to exist solely to support this\nfeature) changing this seems within the realm of possibility. I have to\nimagine these workers have a sense of local state that would just be \"no\nerrors, no need to touch pg_stat_subscription_workers at the end of this\ntransaction's commit\". It would save a local state of the error_xid and if\na successfully committed transaction has that xid it would clear the\nerror. The skip code path would also check for and see the matching xid\nvalue and clear the error. Even if the local state thing doesn't work, one\ncatalog lookup per transaction seems like potentially reasonable overhead\nto incur here.\n\nDavid J.",
"msg_date": "Sat, 22 Jan 2022 09:21:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 9:21 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Sat, Jan 22, 2022 at 2:41 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>>\n>> > Additionally, the description for pg_stat_subscription_workers should\n>> describe what happens once the transaction represented by last_error_xid\n>> has either been successfully processed or skipped. Does this \"last error\"\n>> stick around until another error happens (which is hopefully very rare) or\n>> does it reset to blanks?\n>> >\n>>\n>> It will be reset only on subscription drop, otherwise, it will stick\n>> around until another error happens.\n>\n>\n> I really dislike the user experience this provides, and given it is new in\n> v15 (and right now this table seems to exist solely to support this\n> feature) changing this seems within the realm of possibility. I have to\n> imagine these workers have a sense of local state that would just be \"no\n> errors, no need to touch pg_stat_subscription_workers at the end of this\n> transaction's commit\". It would save a local state of the error_xid and if\n> a successfully committed transaction has that xid it would clear the\n> error. The skip code path would also check for and see the matching xid\n> value and clear the error. Even if the local state thing doesn't work, one\n> catalog lookup per transaction seems like potentially reasonable overhead\n> to incur here.\n>\n>\nIt shouldn't even need to be that overhead intensive. Once an error is\nencountered the system stops. By construction it must be told to redo, at\nwhich point the information about \"last error\" is no longer relevant and\ncan be removed (for skipping the user/system will have already done\neverything with the xid that is needed before the redo is issued). 
In the\nsteady-state it then is simply empty until a new error arises at which\npoint it becomes populated again; and stays that way until the system goes\ninto redo mode as instructed by the user via one of several methods.\n\nDavid J.",
"msg_date": "Sat, 22 Jan 2022 09:47:14 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 9:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > Few things that I think we can improve in 028_skip_xact.pl are as follows:\n> >\n> > After CREATE SUBSCRIPTION, wait for initial sync to be over and\n> > two_phase state to be enabled. Please see 021_twophase. For the\n> > streaming case, we might be able to ensure streaming even with lesser\n> > data. Can you please try that?\n> >\n>\n> I noticed that the newly added test by this patch takes time is on the\n> upper side. See comparison with the subscription test that takes max\n> time:\n> [17:38:49] t/028_skip_xact.pl ................. ok 9298 ms\n> [17:38:59] t/100_bugs.pl ...................... ok 11349 ms\n>\n> I think we can reduce time by removing some stream tests without much\n> impacting on coverage, possibly related to 2PC and streaming together,\n> and if you do that we probably don't need a subscription with both 2PC\n> and streaming enabled.\n\nAgreed.\n\nIn addition to that, after some tests, I realized that the two tests\nof ROLLBACK PREPARED are not stable. If the walsender detects a\nconcurrent abort of the transaction that is being decoded, it’s\npossible that it sends only beigin_prepare and prepare messages, and\nconsequently. If this happens before setting skip_xid, a unique key\nconstraint violation doesn’t occur on the subscription, and\nconsequently, skip_xid is not cleared. We can reduce the possibility\nby setting a very high value to wal_retrieve_retry_interval but I\nthink it’s better to remove them. What do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 Jan 2022 11:53:26 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 at 8:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > What do we want to indicate by [, ... ]? To me, it appears like\n> > > multiple options but that is not what we support currently.\n> >\n> > You're right. It's an oversight.\n> >\n>\n> I have fixed this and a few other things in the attached patch.\n\nThank you for updating the patch!\n\n> 1.\n> The newly added column needs to be updated in the following statement:\n> -- All columns of pg_subscription except subconninfo are publicly readable.\n> REVOKE ALL ON pg_subscription FROM public;\n> GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\n> substream, subtwophasestate, subslotname, subsynccommit,\n> subpublications)\n> ON pg_subscription TO public;\n>\n> 2.\n> +stop_skipping_changes(bool clear_subskipxid, XLogRecPtr origin_lsn,\n> + TimestampTz origin_timestamp)\n> +{\n> + Assert(is_skipping_changes());\n> +\n> + ereport(LOG,\n> + (errmsg(\"done skipping logical replication transaction %u\",\n> + skip_xid)));\n>\n> Isn't it better to move this LOG at the end of this function? Because\n> clear* functions can give an error, so it is better to move it after\n> that. I have done that in the attached.\n>\n> 3.\n> +-- fail - must be superuser\n> +SET SESSION AUTHORIZATION 'regress_subscription_user2';\n> +ALTER SUBSCRIPTION regress_testsub SKIP (xid = 100);\n> +ERROR: must be owner of subscription regress_testsub\n>\n> This test doesn't seem to be right. You want to get the error for the\n> superuser but the error is for the owner. I have changed this test to\n> do what it intends to do.\n>\n> Apart from this, I have changed a few comments and ran pgindent. 
Do\n> let me know what you think of the changes?\n\nAgree with these changes.\n\n>\n> Few things that I think we can improve in 028_skip_xact.pl are as follows:\n>\n> After CREATE SUBSCRIPTION, wait for initial sync to be over and\n> two_phase state to be enabled. Please see 021_twophase.\n\nAgreed.\n\n> For the\n> streaming case, we might be able to ensure streaming even with lesser\n> data. Can you please try that?\n\nYeah, after some tests, it's enough to insert 500 rows as follows:\n\nINSERT INTO test_tab_streaming SELECT i, md5(i::text) FROM\ngenerate_series(1, 500) s(i);\n\nI've just sent another email saying that we can probably remove two\ntests for ROLLBACK PREPARED, so I’ll update the patch while including\nthis point.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 Jan 2022 11:57:59 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 8:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 21, 2022 at 9:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jan 21, 2022 at 5:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jan 21, 2022 at 10:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > Few things that I think we can improve in 028_skip_xact.pl are as follows:\n> > >\n> > > After CREATE SUBSCRIPTION, wait for initial sync to be over and\n> > > two_phase state to be enabled. Please see 021_twophase. For the\n> > > streaming case, we might be able to ensure streaming even with lesser\n> > > data. Can you please try that?\n> > >\n> >\n> > I noticed that the time taken by the newly added test in this patch is on the\n> > upper side. See comparison with the subscription test that takes max\n> > time:\n> > [17:38:49] t/028_skip_xact.pl ................. ok 9298 ms\n> > [17:38:59] t/100_bugs.pl ...................... ok 11349 ms\n> >\n> > I think we can reduce time by removing some stream tests without much\n> > impact on coverage, possibly related to 2PC and streaming together,\n> > and if you do that we probably don't need a subscription with both 2PC\n> > and streaming enabled.\n>\n> Agreed.\n>\n> In addition to that, after some tests, I realized that the two tests\n> of ROLLBACK PREPARED are not stable. If the walsender detects a\n> concurrent abort of the transaction that is being decoded, it’s\n> possible that it sends only begin_prepare and prepare messages. If this\n> happens before setting skip_xid, a unique key\n> constraint violation doesn’t occur on the subscription, and\n> consequently, skip_xid is not cleared. We can reduce the possibility\n> by setting a very high value for wal_retrieve_retry_interval but I\n> think it’s better to remove them.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 08:36:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Jan 22, 2022 at 9:51 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> So long as the ALTER command errors when asked to skip those IDs there isn't any reason for an end-user, who likely doesn't know or care that 1 and 2 are special, to be concerned about them (the only two invalid values) while reading the docs.\n>\n\nIn this matter, I don't see any problem with the current text proposed\nand there are many others who have also reviewed it. I am fine to\nchange if others also think that the current text needs to be changed.\n\n>>\n>> > Additionally, the description for pg_stat_subscription_workers should describe what happens once the transaction represented by last_error_xid has either been successfully processed or skipped. Does this \"last error\" stick around until another error happens (which is hopefully very rare) or does it reset to blanks?\n>> >\n>>\n>> It will be reset only on subscription drop, otherwise, it will stick\n>> around until another error happens.\n>\n>\n> I really dislike the user experience this provides, and given it is new in v15 (and right now this table seems to exist solely to support this feature) changing this seems within the realm of possibility. I have to imagine these workers have a sense of local state that would just be \"no errors, no need to touch pg_stat_subscription_workers at the end of this transaction's commit\". It would save a local state of the error_xid and if a successfully committed transaction has that xid it would clear the error. The skip code path would also check for and see the matching xid value and clear the error. Even if the local state thing doesn't work, one catalog lookup per transaction seems like potentially reasonable overhead to incur here.\n>\n\nAre you telling to update the catalog to save error_xid when an error\noccurs? 
If so, that has many challenges like we are not supposed to\nperform any such operations when the transaction is in an error state.\nWe have discussed this and other ideas in the beginning. I don't find\nany of your arguments convincing to change the basic approach here but\nI would like to see what others think on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 09:04:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 8:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> > I really dislike the user experience this provides, and given it is new\n> in v15 (and right now this table seems to exist solely to support this\n> feature) changing this seems within the realm of possibility. I have to\n> imagine these workers have a sense of local state that would just be \"no\n> errors, no need to touch pg_stat_subscription_workers at the end of this\n> transaction's commit\". It would save a local state of the error_xid and if\n> a successfully committed transaction has that xid it would clear the\n> error. The skip code path would also check for and see the matching xid\n> value and clear the error. Even if the local state thing doesn't work, one\n> catalog lookup per transaction seems like potentially reasonable overhead\n> to incur here.\n> >\n>\n> Are you telling to update the catalog to save error_xid when an error\n> occurs? If so, that has many challenges like we are not supposed to\n> perform any such operations when the transaction is in an error state.\n> We have discussed this and other ideas in the beginning. I don't find\n> any of your arguments convincing to change the basic approach here but\n> I would like to see what others think on this matter?\n>\n>\nThen how does the table get updated to that state in the first place since\nit doesn't know the error details until there is an error?\n\nIn any case, clearing out the entries in the table would not happen while\nit is applying the replication stream, in an error state or otherwise.\n\nin = while streaming\nout = not streaming\n\n1(in). replication stream is working\n2(in). replication stream fails; capture error information\n3(in->out). stop replication stream; perform rollback on xid\n4(out). update pg_stat_subscription_worker to report the failure, including\nxid of the transaction\n5(out). 
wait for the user to manually restart the replication stream\n[if they do so by skipping the xid, save the xid from\npg_stat_subscription_worker into pg_subscription.subskipxid - possibly\nrequiring the user to confirm the xid]\n[user has now done their thing and requested that the replication stream\nresume]\n6(out). clear the error information from pg_stat_subscription_worker; it is\nno longer useful/doesn't exist because the user just took action to avoid\nthat very error, one way (skipping its transaction) or another.\n7(out->in). resume the replication stream, return to step 1\n\nYou are already doing steps 1-5 and 7 today however you are forced to deal\nwith transactions and catalog access. I am just adding step 6, which turns\nlast_error_xid into current_error_xid because it is current value of the\nerror in the stream during step 5 when the user needs to decide how to\nrecover from the error. Once the user decides and the stream resumes that\nerror information has no value (go look in the logs if you want history).\nThus when 7 comes around and the stream is restarted the error info in\npg_stat_subscription_worker is empty waiting for the next error to happen.\nIf the user did nothing in step 5 then when that same wal is replayed at\nstep 2 the error will come back.\n\nThe main thing is how many ways can the user exit step 5 and to make sure\nthat no matter which way they exit step 6 happens before step 7.\n\nDavid J.\n\n",
"msg_date": "Sun, 23 Jan 2022 21:48:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Jan 21, 2022 7:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> 2.\r\n> +stop_skipping_changes(bool clear_subskipxid, XLogRecPtr origin_lsn,\r\n> + TimestampTz origin_timestamp)\r\n> +{\r\n> + Assert(is_skipping_changes());\r\n> +\r\n> + ereport(LOG,\r\n> + (errmsg(\"done skipping logical replication transaction %u\",\r\n> + skip_xid)));\r\n> \r\n> Isn't it better to move this LOG at the end of this function? Because\r\n> clear* functions can give an error, so it is better to move it after\r\n> that. I have done that in the attached.\r\n> \r\n\r\n+\t/* Stop skipping changes */\r\n+\tskip_xid = InvalidTransactionId;\r\n+\r\n+\tereport(LOG,\r\n+\t\t\t(errmsg(\"done skipping logical replication transaction %u\",\r\n+\t\t\t\t\tskip_xid)));\r\n\r\n\r\nI think we can move the LOG before resetting skip_xid, otherwise skip_xid would\r\nalways be 0 in the LOG.\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Mon, 24 Jan 2022 05:55:27 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 1:49 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sun, Jan 23, 2022 at 8:35 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> > I really dislike the user experience this provides, and given it is new in v15 (and right now this table seems to exist solely to support this feature) changing this seems within the realm of possibility. I have to imagine these workers have a sense of local state that would just be \"no errors, no need to touch pg_stat_subscription_workers at the end of this transaction's commit\". It would save a local state of the error_xid and if a successfully committed transaction has that xid it would clear the error. The skip code path would also check for and see the matching xid value and clear the error. Even if the local state thing doesn't work, one catalog lookup per transaction seems like potentially reasonable overhead to incur here.\n>> >\n>>\n>> Are you telling to update the catalog to save error_xid when an error\n>> occurs? If so, that has many challenges like we are not supposed to\n>> perform any such operations when the transaction is in an error state.\n>> We have discussed this and other ideas in the beginning. I don't find\n>> any of your arguments convincing to change the basic approach here but\n>> I would like to see what others think on this matter?\n>>\n>\n> Then how does the table get updated to that state in the first place since it doesn't know the error details until there is an error?\n\nI think your idea is based on storing error information including XID\nin the system catalog. I think that the reasons why we use\nthe stats collector to store error information including\nlast_error_xid are (1) as Amit mentioned, it would have many\nchallenges if updating the catalog when the transaction is in an error\nstate, and (2) we can store more information such as error messages,\naction, etc. 
other than XID so that users can identify that the\nreported error is a conflict error but not other types of error such\nas OOM error. For these reasons to me, it makes sense to store\nsubscribers' error information by using the stats collector.\n\nWhen it comes to reporting a message to the stats collector, we need\nto note that it's not guaranteed that all messages arrive at the stats\ncollector. Therefore, last_error_xid doesn't necessarily get\nupdated after the worker reports an error. Similarly, the same is true\nfor clearing subskipxid. I agree that it's useful if\npg_subscription.subskipxid is automatically set when executing ALTER\nSUBSCRIPTION SKIP but it might not work in some cases because of this\nrestriction.\n\nThere is another idea of storing error XID on shmem (e.g., in\nReplicationState) in addition to reporting error details to the stats\ncollector and using the XID when skipping the transaction, but I'm not\nsure whether it's a reliable way.\n\nAnyway, even if subskipxid is automatically set when ALTER\nSUBSCRIPTION SKIP, I think we need to provide a way to clear it as the\ncurrent patch does (setting NONE) just in case.\n\n>\n> In any case, clearing out the entries in the table would not happen while it is applying the replication stream, in an error state or otherwise.\n>\n> in = while streaming\n> out = not streaming\n>\n> 1(in). replication stream is working\n> 2(in). replication stream fails; capture error information\n> 3(in->out). stop replication stream; perform rollback on xid\n> 4(out). update pg_stat_subscription_worker to report the failure, including xid of the transaction\n> 5(out). wait for the user to manually restart the replication stream\n\nDo you mean that there is always user intervention after an error so the\nreplication stream can resume?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:54:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Jan 23, 2022 at 11:55 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Mon, Jan 24, 2022 at 1:49 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Sun, Jan 23, 2022 at 8:35 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> > I really dislike the user experience this provides, and given it is\n> new in v15 (and right now this table seems to exist solely to support this\n> feature) changing this seems within the realm of possibility. I have to\n> imagine these workers have a sense of local state that would just be \"no\n> errors, no need to touch pg_stat_subscription_workers at the end of this\n> transaction's commit\". It would save a local state of the error_xid and if\n> a successfully committed transaction has that xid it would clear the\n> error. The skip code path would also check for and see the matching xid\n> value and clear the error. Even if the local state thing doesn't work, one\n> catalog lookup per transaction seems like potentially reasonable overhead\n> to incur here.\n> >> >\n> >>\n> >> Are you telling to update the catalog to save error_xid when an error\n> >> occurs? If so, that has many challenges like we are not supposed to\n> >> perform any such operations when the transaction is in an error state.\n> >> We have discussed this and other ideas in the beginning. I don't find\n> >> any of your arguments convincing to change the basic approach here but\n> >> I would like to see what others think on this matter?\n> >>\n> >\n> > Then how does the table get updated to that state in the first place\n> since it doesn't know the error details until there is an error?\n>\n> I think your idea is based on storing error information including XID\n> is stored in the system catalog. 
I think that the reasons why we use\n> the stats collector\n\n\nI noticed this dynamic while skimming the patch (and also pondering why the\nnew worker table was not in a catalog chapter) but am only now fully\nbeginning to appreciate its impact on this discussion.\n\n\n> to store error information including\n\nlast_error_xid are (1) as Amit mentioned, it would have many\n> challenges if updating the catalog when the transaction is in an error\n> state, and\n\n\nI'm going on faith right now that this is a problem. But from my prior\noutline I hope you can see why I find it surprising. Don't try to update a\ncatalog while in an error state. Get out of the error state first. e.g.,\nA transient \"holding pattern\" would seem to work. Upon a server restart\nthe transient state would be forgotten, it would attempt to reapply the\nwal, would see the same error, and would then go back into the transient\nholding pattern. I do intend to read the other discussion on this\nparticular topic so a detailed rebuttal, if warranted, can be withheld.\n\n\n> (2) we can store more information such as error messages,\n> action, etc. other than XID so that users can identify that the\n> reported error is a conflict error but not other types of error such\n> as OOM error.\n\n\nI mentioned only XID because of the focus on SKIP. The other data already\npresent in that table is ok. Whether we use a catalog or the stats\ncollector seems irrelevant. If anything the catalog makes more sense -\ncalling an error message a statistic is a bit of a reach.\n\n>Similarly, the same is true\n>for clearing subskipxid. I agree that it's useful if\n>pg_subscription.subskipxid is automatically set when executing ALTER\n>SUBSCRIPTION SKIP but it might not work in some cases because of this\n>restriction. For these reasons to me, it makes sense to store\n>subscribers' error information by using the stats collector.\n\nI'm confused - pg_subscription is a catalog, not a stat view. 
Why is it\naffected?\n\nI don't see how point 2 prevents using a system catalog. I accept point 1\nas true but will need to read some of the prior discussion to really\nunderstand it.\n\nWhen it comes to reporting a message to the stats collector, we need\n> to note that it's not guaranteed that all messages arrive at the stats\n> collector. Therefore, last_error_xid doesn't not necessarily get\n> updated after the worker reports an error.\n\n\nYou'll forgive me for not considering this due to its apparent lack of\nmention in the documentation [*] and it's arguable classification as a POLA\nviolation.\n\n[*]\nhttps://www.postgresql.org/docs/current/monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION\n\nWhat I do read there seems compatible with the desired user experience.\n500ms lag, idle transaction oriented, reset upon unclean shutdown, and\nconsumers seeing a stable transactional view: none of these seem like\nshow-stoppers.\n\nAnyway, even if subskipxid is automatically set when ALTER\n> SUBSCRIPTION SKIP, I think we need to provide a way to clear it as the\n> current patch does (setting NONE) just in case.\n>\n\nWith my suggestion of requiring a matching xid the whole option for\nskip_xid = { xid | NONE } remains.\n\n> 5(out). wait for the user to manually restart the replication stream\n>\n> Do you mean that there always is user intervention after error so the\n> replication stream can resume?\n>\n\nThat is my working assumption. 
It doesn't seem like the system would\nauto-resume without a DBA doing something (I'll attribute a server crash to\nthe DBA for convenience).\n\nApparently I need to read more about how the system works today to\nunderstand how this varies from and integrates with today's user experience.\n\nThat said, at present my two dislikes:\n\n1) ALTER SYSTEM SKIP accepts any xid value (I need to consider further the\ntiming of when this resets to zero)\n2) pg_stat_subscription_worker.last_error_* fields remain populated even\nwhile the system is in a normal operating state.\n\nare preventing me from preferring this patch over the status quo (yes, I\nknow the 2nd point is about a committed feature). Regardless of how far\noff I may be regarding our technical ability to change them to a more (IMO)\nuser-friendly design.\n\nDavid J.\n\n",
"msg_date": "Mon, 24 Jan 2022 00:59:54 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 1:30 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> That said, at present my two dislikes:\n>\n> 1) ALTER SYSTEM SKIP accepts any xid value (I need to consider further the timing of when this resets to zero)\n>\n\nI think this is required for future extension of this feature wherein\nthere could be multiple such xids, say when we support parallel\napply workers. I think we could still tighten this even after the\nfirst version if we find a good way, like by making the xid an optional\nparameter.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:12:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 5:00 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sun, Jan 23, 2022 at 11:55 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n> >Similarly, the same is true\n> >for clearing subskipxid.\n>\n> I'm confused - pg_subscription is a catalog, not a stat view. Why is it affected?\n\nSorry, I mistook last_error_xid for subskipxid here.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 24 Jan 2022 18:54:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 22.01.22 03:54, Amit Kapila wrote:\n> Won't we already do that for Alter Subscription command which means\n> nothing special needs to be done for this? However, it seems to me\n> that the idea we are trying to follow here is that as this option can\n> lead to data inconsistency, it is good to allow only superusers to\n> specify this option. The owner of the subscription can be changed to\n> non-superuser as well in which case I think it won't be a good idea to\n> allow this option. OTOH, if we think it is okay to allow such an\n> option to users that don't have superuser privilege then I think\n> allowing it to the owner of the subscription makes sense to me.\n\nI don't think this functionality allows a nonprivileged user to do \nanything they couldn't otherwise do. You can create inconsistent data \nin the sense that you can choose not to apply certain replicated data. \nBut a subscription owner has to have write access to the target tables \nof the subscription, so they already have the ability to write or not \nwrite any data they want.\n\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:06:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 22.01.22 10:41, Amit Kapila wrote:\n>> Additionally, the description for pg_stat_subscription_workers should describe what happens once the transaction represented by last_error_xid has either been successfully processed or skipped. Does this \"last error\" stick around until another error happens (which is hopefully very rare) or does it reset to blanks?\n>>\n> It will be reset only on subscription drop, otherwise, it will stick\n> around until another error happens.\n\nIs this going to be a problem with transaction ID wraparound? Do we \nneed to use 64-bit xids for this?\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:10:35 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Monday, January 24, 2022, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jan 24, 2022 at 1:30 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > That said, at present my two dislikes:\n> >\n> > 1) ALTER SYSTEM SKIP accepts any xid value (I need to consider further\n> the timing of when this resets to zero)\n> >\n>\n> I think this is required for future extension of this feature wherein\n> I think there could be multiple such xids say when we support parallel\n> apply workers. I think if we get a good way to do it even after the\n> first version like by making a xid an optional parameter.\n>\n>\nExtending the behavior is doable, and maybe we end up without this\nlimitation in the future, so be it. But I’m having a hard time imagining a\nscenario where the xid is not already known to the system, and the user,\nand wants to be in effect for a very short window.\n\nDavid J.",
"msg_date": "Mon, 24 Jan 2022 07:42:34 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 7:36 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 22.01.22 03:54, Amit Kapila wrote:\n> > Won't we already do that for Alter Subscription command which means\n> > nothing special needs to be done for this? However, it seems to me\n> > that the idea we are trying to follow here is that as this option can\n> > lead to data inconsistency, it is good to allow only superusers to\n> > specify this option. The owner of the subscription can be changed to\n> > non-superuser as well in which case I think it won't be a good idea to\n> > allow this option. OTOH, if we think it is okay to allow such an\n> > option to users that don't have superuser privilege then I think\n> > allowing it to the owner of the subscription makes sense to me.\n>\n> I don't think this functionality allows a nonprivileged user to do\n> anything they couldn't otherwise do. You can create inconsistent data\n> in the sense that you can choose not to apply certain replicated data.\n>\n\nI thought this will be the only primary way to skip applying certain\ntransactions. The other could be via pg_replication_origin_advance().\nOr are you talking about the case where we skip applying update/delete\nwhere the corresponding rows are not found?\n\nI see the point that if we can allow the owner to skip applying\nupdates/deletes in certain cases then probably this should also be\nokay. Kindly let us know if you have something else in mind as well?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 08:24:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 7:40 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 22.01.22 10:41, Amit Kapila wrote:\n> >> Additionally, the description for pg_stat_subscription_workers should describe what happens once the transaction represented by last_error_xid has either been successfully processed or skipped. Does this \"last error\" stick around until another error happens (which is hopefully very rare) or does it reset to blanks?\n> >>\n> > It will be reset only on subscription drop, otherwise, it will stick\n> > around until another error happens.\n>\n> Is this going to be a problem with transaction ID wraparound?\n>\n\nI think to avoid this we can send a message to clear this (at least to\nclear XID in the view) after skipping the xact but there is no\nguarantee that it will be received by the stats collector.\nAdditionally, the worker can periodically (say after every N (100,\n500, etc) successful transaction) send a clear message after\nsuccessful apply. This will ensure that eventually the error entry\nwill be cleared.\n\n> Do we\n> need to use 64-bit xids for this?\n>\n\nFor 64-bit XIds, as this reported XID is for the remote transactions,\nI think we need to add 4-bytes to each transaction message(say Begin)\nand that could be costly for small transactions. We also probably need\nto make logical decoding aware of 64-bit XID? Note that XIDs in WAL\nrecords are still 32-bit XID. I don't think this feature deserves such\na big (in terms of WAL and network message size) change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 10:48:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 25.01.22 03:54, Amit Kapila wrote:\n>> I don't think this functionality allows a nonprivileged user to do\n>> anything they couldn't otherwise do. You can create inconsistent data\n>> in the sense that you can choose not to apply certain replicated data.\n>>\n> I thought this will be the only primary way to skip applying certain\n> transactions. The other could be via pg_replication_origin_advance().\n> Or are you talking about the case where we skip applying update/delete\n> where the corresponding rows are not found?\n> \n> I see the point that if we can allow the owner to skip applying\n> updates/deletes in certain cases then probably this should also be\n> okay. Kindly let us know if you have something else in mind as well?\n\nLet's start this again: The question at hand is whether ALTER \nSUBSCRIPTION ... SKIP should be allowed for subscription owners that are \nnot superusers. The argument raised against that was that this would \nallow the owner to create \"inconsistent\" data. But it hasn't been \nexplained what that actually means or why it is dangerous.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 13:48:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 25.01.22 06:18, Amit Kapila wrote:\n> I think to avoid this we can send a message to clear this (at least to\n> clear XID in the view) after skipping the xact but there is no\n> guarantee that it will be received by the stats collector.\n> Additionally, the worker can periodically (say after every N (100,\n> 500, etc) successful transaction) send a clear message after\n> successful apply. This will ensure that eventually the error entry\n> will be cleared.\n\nWell, I think we need *some* solution for now. We can't leave a footgun \nwhere you say, \"skip transaction 700\", somehow transaction 700 doesn't \nhappen, the whole thing gets forgotten, but then 3 months later, the \nnext transaction 700 mysteriously gets dropped.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 13:52:03 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 5:52 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 25.01.22 06:18, Amit Kapila wrote:\n> > I think to avoid this we can send a message to clear this (at least to\n> > clear XID in the view) after skipping the xact but there is no\n> > guarantee that it will be received by the stats collector.\n> > Additionally, the worker can periodically (say after every N (100,\n> > 500, etc) successful transaction) send a clear message after\n> > successful apply. This will ensure that eventually the error entry\n> > will be cleared.\n>\n> Well, I think we need *some* solution for now. We can't leave a footgun\n> where you say, \"skip transaction 700\", somehow transaction 700 doesn't\n> happen, the whole thing gets forgotten, but then 3 months later, the\n> next transaction 700 mysteriously gets dropped.\n>\n\nThis is indeed part of why I feel that the xid being skipped should be\nvalidated. As the feature is presented the user is supposed to read the\nxid from the system (the new stat view or the error log) and supply it and\nthen the worker, when it goes to skip, should find that the very first\ntransaction xid it encounters is the one it is being told to skip. It\nskips that transaction, clears the skipxid, and puts the system back into\nnormal operating mode. If that first transaction xid isn't the one being\nspecified to skip the worker should error with \"skipping transaction\nfailed, xid 123 expected but 456 found\".\n\nThis whole lack of a guarantee of the availability and accuracy regarding\nthe data that this process should be reliant upon needs to be engineered\naway.\n\nDavid J.",
"msg_date": "Tue, 25 Jan 2022 07:35:32 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 11:35 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 5:52 AM Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 25.01.22 06:18, Amit Kapila wrote:\n>> > I think to avoid this we can send a message to clear this (at least to\n>> > clear XID in the view) after skipping the xact but there is no\n>> > guarantee that it will be received by the stats collector.\n>> > Additionally, the worker can periodically (say after every N (100,\n>> > 500, etc) successful transaction) send a clear message after\n>> > successful apply. This will ensure that eventually the error entry\n>> > will be cleared.\n>>\n>> Well, I think we need *some* solution for now. We can't leave a footgun\n>> where you say, \"skip transaction 700\", somehow transaction 700 doesn't\n>> happen, the whole thing gets forgotten, but then 3 months later, the\n>> next transaction 700 mysteriously gets dropped.\n>\n>\n> This is indeed part of why I feel that the xid being skipped should be validated. As the feature is presented the user is supposed to read the xid from the system (the new stat view or the error log) and supply it and then the worker, when it goes to skip, should find that the very first transaction xid it encounters is the one it is being told to skip. It skips that transaction, clears the skipxid, and puts the system back into normal operating mode. If that first transaction xid isn't the one being specified to skip the worker should error with \"skipping transaction failed, xid 123 expected but 456 found\".\n\nYeah, I think it's a good idea to clear the subskipxid after the first\ntransaction regardless of whether the worker skipped it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 25 Jan 2022 23:47:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> Yeah, I think it's a good idea to clear the subskipxid after the first\n> transaction regardless of whether the worker skipped it.\n>\n>\nSo basically instead of stopping the worker with an error you suggest\nhaving the worker continue applying changes (after resetting subskipxid,\nand - arguably - the ?_error_* fields). Log the transaction xid mis-match\nas a warning in the log file as opposed to an error.\n\nI was supposing to make it an error and have the worker stop again since in\na system where the xid is verified and the code is bug-free I would expect\nthe situation to be a \"can't happen\" one and I'd rather error in that\ncircumstance than warn. The DBA will have to go and ALTER SUBSCRIPTION\nSKIP (xid = NONE) to get the worker working again but I find that\nacceptable in this case.\n\nDavid J.",
"msg_date": "Tue, 25 Jan 2022 07:58:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> Yeah, I think it's a good idea to clear the subskipxid after the first\n>> transaction regardless of whether the worker skipped it.\n>>\n>\n> So basically instead of stopping the worker with an error you suggest having the worker continue applying changes (after resetting subskipxid, and - arguably - the ?_error_* fields). Log the transaction xid mis-match as a warning in the log file as opposed to an error.\n\nAgreed, I think it's better to log a warning than to raise an error.\nIn the case where the user specified the wrong XID, the worker should\nfail again due to the same error.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 00:08:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 8:09 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> >>\n> >> Yeah, I think it's a good idea to clear the subskipxid after the first\n> >> transaction regardless of whether the worker skipped it.\n> >>\n> >\n> > So basically instead of stopping the worker with an error you suggest\n> having the worker continue applying changes (after resetting subskipxid,\n> and - arguably - the ?_error_* fields). Log the transaction xid mis-match\n> as a warning in the log file as opposed to an error.\n>\n> Agreed, I think it's better to log a warning than to raise an error.\n> In the case where the user specified the wrong XID, the worker should\n> fail again due to the same error.\n>\n>\nIf it remains possible for the system to accept a wrongly specified XID I\nwould agree that this behavior is preferable. At least when the user\nwonders why the skip didn't work and they are seeing the same error again\nthey will have a log entry warning telling them their XID choice was\nincorrect. I would prefer that the system not accept a wrongly specified\nXID and the user be told directly and sooner that their XID choice was\nincorrect.\n\nDavid J.",
"msg_date": "Tue, 25 Jan 2022 08:14:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 12:14 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n>\n> On Tue, Jan 25, 2022 at 8:09 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n>> <david.g.johnston@gmail.com> wrote:\n>> >\n>> > On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> >>\n>> >> Yeah, I think it's a good idea to clear the subskipxid after the first\n>> >> transaction regardless of whether the worker skipped it.\n>> >>\n>> >\n>> > So basically instead of stopping the worker with an error you suggest having the worker continue applying changes (after resetting subskipxid, and - arguably - the ?_error_* fields). Log the transaction xid mis-match as a warning in the log file as opposed to an error.\n>>\n>> Agreed, I think it's better to log a warning than to raise an error.\n>> In the case where the user specified the wrong XID, the worker should\n>> fail again due to the same error.\n>>\n>\n> If it remains possible for the system to accept a wrongly specified XID I would agree that this behavior is preferable. At least when the user wonders why the skip didn't work and they are seeing the same error again they will have a log entry warning telling them their XID choice was incorrect.\n\nYes.\n\n> I would prefer that the system not accept a wrongly specified XID and the user be told directly and sooner that their XID choice was incorrect.\n\nGiven that we cannot rely on the pg_stat_subscription_workers view\nfor this purpose, we would need either a new sub-system that tracks\neach logical replication status so the system can set the error XID to\nsubskipxid, or to wait for shared-memory based stats collector. While\nagreeing that ideally, we need such a sub-system I'm concerned that
That having\nbeen said, if there is a significant need for it, we can implement it\nas an improvement.\n\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 00:32:43 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 8:33 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> Given that we cannot rely on the pg_stat_subscription_workers view\n> for this purpose, we would need either a new sub-system that tracks\n> each logical replication status so the system can set the error XID to\n> subskipxid, or to wait for shared-memory based stats collector.\n>\n\nI'm reading over the monitoring-stats page to try and get my head around\nall of this. First of all, it defines two kinds of views:\n\n1. PostgreSQL's statistics collector is a subsystem that supports\ncollection and reporting of information about server activity.\n2. PostgreSQL also supports reporting dynamic information ... This facility\nis independent of the collector process.\n\nIt then has two tables:\n\n28.1 Dynamic Statistics Views (describing #2 above)\n28.2 Collected Statistics Views (describing #1 above)\n\nApparently the \"collector process\" is UDP-like, not reliable. The\ndocumentation fails to mention this fact. I'd argue that this is a\ndocumentation bug.\n\nI do see that the pg_stat_subscription_workers view is correctly placed in\nTable 28.2.\n\nReviewing the other views listed in that table, only pg_stat_archiver abuses\nthe statistics collector in a similar fashion. All of the others are\nactually metric oriented.\n\nI don't care for the specification: \"will contain one row per subscription\nworker on which errors have occurred, for workers applying logical\nreplication changes and workers handling the initial data copy of the\nsubscribed tables.\"\n\nI would much rather have this behave similar to pg_stat_activity (which, of\ncourse, is a Dynamic Statistics View...) in that it shows only and all\nworkers that are presently working. The tablesync workers should go away\nwhen they have finished synchronizing. I should not have to manually\nintervene to get rid of unreliable expired data. The log file feels like a\nsuperior solution to this monitoring view.\n\nAlternatively, if the tablesync workers are done but we've been\naccumulating real statistics for them, then by all means keep them included\nin the view - but regardless of whether they encountered an error. But\nmaybe the view can right join in pg_stat_subscription and show a column for\n\"(pid is not null) AS is_active\".\n\nMaybe we need to add a track_finished_tablesync_workers GUC so the DBA can\ndecide whether to devote storage and processing resources to that\nhistorical information.\n\nIf you had kept the original view name, \"pg_stat_subscription_error\", this\nwhole issue goes away. But you decided to make it more generic and call it\n\"pg_stat_subscription_workers\" - which means you need to get rid of the\nerror-specific condition in the WHERE clause for the view. Show all\nworkers - I can filter on is_active. Showing only active workers is also\nacceptable. You won't get to change your mind so decide whether this wants\nto show only current and running state or whether historical statistics for\nnow defunct tablesync workers are desired. Personally, I would just show\nactive workers and if someone wants to add the feature they can add a\ntrack_tablesync_worker_stats GUC and a matching view.\n\nFrom that, every apply worker should be sending a statistics message to the\ncollector periodically. If error info is not present and the state is \"all\nis well\", clear out any existing error info from the view. The attempt to\ninclude an actual statistic field here doesn't seem useful nor redeeming.\nI would add a \"state\" field in its place (well, after subrelid). And I\nwould still rename the columns to current_error_* and note that these\nshould be null unless the status field shows error (there may be some\nadditional complexity here). Just get rid of last_error_count.\n\nDavid J.\n\nP.S. I saw the discussion regarding pg_dump'ing the subskipid field. 
I\ndidn't notice any discussion around creating and restoring a basebackup.\nIt seems like during server startup subskipid should just be cleared out.\nThen it doesn't matter what one does during backup.",
"msg_date": "Tue, 25 Jan 2022 15:05:22 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 6:18 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 25.01.22 03:54, Amit Kapila wrote:\n> >> I don't think this functionality allows a nonprivileged user to do\n> >> anything they couldn't otherwise do. You can create inconsistent data\n> >> in the sense that you can choose not to apply certain replicated data.\n> >>\n> > I thought this will be the only primary way to skip applying certain\n> > transactions. The other could be via pg_replication_origin_advance().\n> > Or are you talking about the case where we skip applying update/delete\n> > where the corresponding rows are not found?\n> >\n> > I see the point that if we can allow the owner to skip applying\n> > updates/deletes in certain cases then probably this should also be\n> > okay. Kindly let us know if you have something else in mind as well?\n>\n> Let's start this again: The question at hand is whether ALTER\n> SUBSCRIPTION ... SKIP should be allowed for subscription owners that are\n> not superusers. The argument raised against that was that this would\n> allow the owner to create \"inconsistent\" data. But it hasn't been\n> explained what that actually means or why it is dangerous.\n>\n\nThere are two reasons in my mind: (a) We are going to skip some\nunrelated data changes that are not the direct cause of conflict\nbecause of the entire transaction skip. Now, it is possible that\nunintentionally it allows skipping some actual changes\ninsert/update/delete/truncate to some relations which will then allow\neven the future changes to cause some conflict or won't get applied. A\nfew examples are after TRUNCATE is skipped, the INSERTS in following\ntransactions can cause error \"duplicate key ..\"; similarly say some\nINSERT is skipped, then following UPDATE/DELETE won't find the\ncorresponding row to perform the operation. 
(b) Users can specify some\nrandom XID, the discussion below is trying to detect this and raise\nWARNING/ERROR but still, it could cause some valid transaction (which\nwon't generate any conflict/error) to skip.\n\nThese can lead to some missing data in the subscriber which the user\nmight not have expected.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 06:39:06 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 12:59 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n>\n> > 5(out). wait for the user to manually restart the replication stream\n>>\n>> Do you mean that there always is user intervention after error so the\n>> replication stream can resume?\n>>\n>\n> That is my working assumption. It doesn't seem like the system would\n> auto-resume without a DBA doing something (I'll attribute a server crash to\n> the DBA for convenience).\n>\n> Apparently I need to read more about how the system works today to\n> understand how this varies from and integrates with today's user experience.\n>\n>\nI've done some code reading. My understanding is that a background worker\nfor the main apply of a given subscription is created from the launcher\ncode (not reviewed) which is initialized at server startup (or as needed\nsometime thereafter). This goes into a for(;;) loop in LogicalRepApplyLoop\nunder a PG_TRY in ApplyWorkerMain. When a message is applied that provokes\nan error the PG_CATCH() in ApplyWorkerMain takes over and then this worker\ndies. While in that PG_CATCH() we have an aborted transaction and so are\nlimited in what we can change. We PG_RE_THROW(); back to the background\nworker infrastructure and let it perform logging and cleanup; which\nincludes this destroying this instance of the background worker. The\nbackground worker that is destroyed is replaced and its replacement is\nidentical to the original so far as the statistics collector is concerned.\n\nI haven't traced out when the replacement apply worker gets recreated. It\nseems like doing so immediately, and then it going and just encountering\nthe same error, would be an undesirable choice, and so I've assumed it does\nnot. 
But I also wasn't expecting the apply worker to PG_RE_THROW() either,\nbut instead continue on running in a different for(;;) loop waiting for\nsome signal from the system that something has changed that may avoid the\nerror that put it in timeout.\n\nSo my more detailed goal would be to get rid of PG_RE_THROW(); (I assume\ndoing so would entail transaction rollback) and stay in the worker. Update\npg_subscription with the error information (having removed PG_RE_THROW we\nhave new things to consider re: pg_stat_subscription_workers). Go into a\nfor(;;) loop, maybe polling pg_subscription for an indication that it is OK\nto retry applying the last transaction. (can an inter-process signal be\nsent from a normal backend process to a background worker process?). The\nSKIP command then matches XID values on pg_subscription; the resumption\nsees the subskipxid, updates pg_subscription to remove the error info and\nsubskipid, skips the next transaction assuming it has the matching XID, and\nthen continues applying as normal. Adapt to deal with crash conditions as\nneeded though clearing before reapplying seems like a safe default. Again,\nupon worker startup maybe they should be cleared too (making pg_dump and\nother backup considerations moot - as noted in my P.S. in the previous\nemail).\n\nI'm not sure we are paranoid enough regarding the locking of\npg_subscription for purposes of reading and writing subskipxid. I'd\nprobably rather serialize access to it, and maybe even not allow changing\nfrom one non-zero XID to another non-zero XID. It shouldn't be needed in\npractice (moreso if the XID has to be the one that is present from\ncurrent_error_xid) and the user can always reset first.\n\nIn worker.c I was and still am confused as to the meaning of 'c' and 'w' in\nLogicalRepApplyLoop. 
In apply_dispatch in that file enums are used to\ncompare against the message byte, it would be helpful for the inexperienced\nreader if 'c' and 'w' were done as enums instead as well.\n\nDavid J.",
"msg_date": "Tue, 25 Jan 2022 19:01:24 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 8:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >>\n> >> Yeah, I think it's a good idea to clear the subskipxid after the first\n> >> transaction regardless of whether the worker skipped it.\n> >>\n> >\n> > So basically instead of stopping the worker with an error you suggest having the worker continue applying changes (after resetting subskipxid, and - arguably - the ?_error_* fields). Log the transaction xid mis-match as a warning in the log file as opposed to an error.\n>\n> Agreed, I think it's better to log a warning than to raise an error.\n> In the case where the user specified the wrong XID, the worker should\n> fail again due to the same error.\n>\n\nIIUC, the proposal is to compare the skip_xid with the very\ntransaction the apply worker received to apply and raise a warning if\nit doesn't match with skip_xid and then continue. This seems like a\nreasonable idea but can we guarantee that it is always the first\ntransaction that we want to skip? We seem to guarantee that we won't\nget something again once it is written durably/flushed on the\nsubscriber side. I guess here it can happen that before the errored\ntransaction, there is some empty xact, or maybe part of the stream\n(consider streaming transactions) of some xact, or there could be\nother cases as well where the server will send those xacts again.\n\nNow, if the above reasoning is correct then I think your proposal to\nclear the skip_xid in the catalog as soon as we have applied the first\ntransaction successfully seems reasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 07:58:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 7:31 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Mon, Jan 24, 2022 at 12:59 AM David G. Johnston <david.g.johnston@gmail.com> wrote:\n>>\n>\n> So my more detailed goal would be to get rid of PG_RE_THROW();\n>\n\nI don't think that will be possible, consider the FATAL/PANIC error\ncase. Also, there are reasons why we always restart apply worker on\nERROR even without this work. If we want to change that, we might need\nto redesign the apply side mechanism which I don't think we should try\nto do as part of this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 08:08:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 8:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >>\n> > >> Yeah, I think it's a good idea to clear the subskipxid after the first\n> > >> transaction regardless of whether the worker skipped it.\n> > >>\n> > >\n> > > So basically instead of stopping the worker with an error you suggest having the worker continue applying changes (after resetting subskipxid, and - arguably - the ?_error_* fields). Log the transaction xid mis-match as a warning in the log file as opposed to an error.\n> >\n> > Agreed, I think it's better to log a warning than to raise an error.\n> > In the case where the user specified the wrong XID, the worker should\n> > fail again due to the same error.\n> >\n>\n> IIUC, the proposal is to compare the skip_xid with the very\n> transaction the apply worker received to apply and raise a warning if\n> it doesn't match with skip_xid and then continue. This seems like a\n> reasonable idea but can we guarantee that it is always the first\n> transaction that we want to skip? We seem to guarantee that we won't\n> get something again once it is written durably/flushed on the\n> subscriber side. I guess here it can happen that before the errored\n> transaction, there is some empty xact, or maybe part of the stream\n> (consider streaming transactions) of some xact, or there could be\n> other cases as well where the server will send those xacts again.\n\nGood point.\n\nI guess that in the situation the worker entered an error loop, we can\nguarantee that the worker fails while applying the first non-empty\ntransaction since starting logical replication. And the transaction is\nwhat we’d like to skip. 
If the transaction that can be applied without\nan error is resent after a restart, it’s a problem of logical\nreplication. As you pointed out, it's possible that there are some\nempty transactions before the transaction in question since we don't\nadvance replication origin LSN if the transaction is empty. Also,\nprobably the same is true for a streamed transaction that is rolled\nback or ROLLBACK-PREPARED transactions. So, we can also skip clearing\nsubskipxid if the transaction is empty? That is, we make sure to clear\nit after applying the first non-empty transaction. We would need to\ncarefully think about this solution otherwise ALTER SUBSCRIPTION SKIP\nends up not working at all in some cases.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 11:51:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 11:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 25, 2022 at 8:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 25, 2022 at 11:58 PM David G. Johnston\n> > > <david.g.johnston@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jan 25, 2022 at 7:47 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >>\n> > > >> Yeah, I think it's a good idea to clear the subskipxid after the first\n> > > >> transaction regardless of whether the worker skipped it.\n> > > >>\n> > > >\n> > > > So basically instead of stopping the worker with an error you suggest having the worker continue applying changes (after resetting subskipxid, and - arguably - the ?_error_* fields). Log the transaction xid mis-match as a warning in the log file as opposed to an error.\n> > >\n> > > Agreed, I think it's better to log a warning than to raise an error.\n> > > In the case where the user specified the wrong XID, the worker should\n> > > fail again due to the same error.\n> > >\n> >\n> > IIUC, the proposal is to compare the skip_xid with the very\n> > transaction the apply worker received to apply and raise a warning if\n> > it doesn't match with skip_xid and then continue. This seems like a\n> > reasonable idea but can we guarantee that it is always the first\n> > transaction that we want to skip? We seem to guarantee that we won't\n> > get something again once it is written durably/flushed on the\n> > subscriber side. 
I guess here it can happen that before the errored\n> > transaction, there is some empty xact, or maybe part of the stream\n> > (consider streaming transactions) of some xact, or there could be\n> > other cases as well where the server will send those xacts again.\n>\n> Good point.\n>\n> I guess that in the situation the worker entered an error loop, we can\n> guarantee that the worker fails while applying the first non-empty\n> transaction since starting logical replication. And the transaction is\n> what we’d like to skip. If the transaction that can be applied without\n> an error is resent after a restart, it’s a problem of logical\n> replication. As you pointed out, it's possible that there are some\n> empty transactions before the transaction in question since we don't\n> advance replication origin LSN if the transaction is empty. Also,\n> probably the same is true for a streamed transaction that is rolled\n> back or ROLLBACK-PREPARED transactions. So, we can also skip clearing\n> subskipxid if the transaction is empty? That is, we make sure to clear\n> it after applying the first non-empty transaction. We would need to\n> carefully think about this solution otherwise ALTER SUBSCRIPTION SKIP\n> ends up not working at all in some cases.\n\nProbably, we also need to consider the case where the tablesync worker\nentered an error loop and the user wants to skip the transaction? The\napply worker is also running at the same time but it should not clear\nsubskipxid. Similarly, the tablesync worker should not clear\nsubskipxid if the apply worker wants to skip the transaction.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 12:25:21 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 8:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 11:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 26, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > IIUC, the proposal is to compare the skip_xid with the very\n> > > transaction the apply worker received to apply and raise a warning if\n> > > it doesn't match with skip_xid and then continue. This seems like a\n> > > reasonable idea but can we guarantee that it is always the first\n> > > transaction that we want to skip? We seem to guarantee that we won't\n> > > get something again once it is written durably/flushed on the\n> > > subscriber side. I guess here it can happen that before the errored\n> > > transaction, there is some empty xact, or maybe part of the stream\n> > > (consider streaming transactions) of some xact, or there could be\n> > > other cases as well where the server will send those xacts again.\n> >\n> > Good point.\n> >\n> > I guess that in the situation the worker entered an error loop, we can\n> > guarantee that the worker fails while applying the first non-empty\n> > transaction since starting logical replication. And the transaction is\n> > what we’d like to skip. If the transaction that can be applied without\n> > an error is resent after a restart, it’s a problem of logical\n> > replication. As you pointed out, it's possible that there are some\n> > empty transactions before the transaction in question since we don't\n> > advance replication origin LSN if the transaction is empty. Also,\n> > probably the same is true for a streamed transaction that is rolled\n> > back or ROLLBACK-PREPARED transactions. So, we can also skip clearing\n> > subskipxid if the transaction is empty? That is, we make sure to clear\n> > it after applying the first non-empty transaction. 
We would need to\n> > carefully think about this solution otherwise ALTER SUBSCRIPTION SKIP\n> > ends up not working at all in some cases.\n\nI think it is okay to clear after the first successful application of\nany transaction. What I was not sure was about the idea of giving\nWARNING/ERROR if the first xact to be applied is not the same as\nskip_xid.\n\n>\n> Probably, we also need to consider the case where the tablesync worker\n> entered an error loop and the user wants to skip the transaction? The\n> apply worker is also running at the same time but it should not clear\n> subskipxid. Similarly, the tablesync worker should not clear\n> subskipxid if the apply worker wants to skip the transaction.\n>\n\nI think for tablesync workers, the skip_xid set via this mechanism\nwon't work as we don't have any remote_xid for them, and neither any\nXID is reported in the view for them.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 09:24:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 12:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 8:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 26, 2022 at 11:51 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 26, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > IIUC, the proposal is to compare the skip_xid with the very\n> > > > transaction the apply worker received to apply and raise a warning if\n> > > > it doesn't match with skip_xid and then continue. This seems like a\n> > > > reasonable idea but can we guarantee that it is always the first\n> > > > transaction that we want to skip? We seem to guarantee that we won't\n> > > > get something again once it is written durably/flushed on the\n> > > > subscriber side. I guess here it can happen that before the errored\n> > > > transaction, there is some empty xact, or maybe part of the stream\n> > > > (consider streaming transactions) of some xact, or there could be\n> > > > other cases as well where the server will send those xacts again.\n> > >\n> > > Good point.\n> > >\n> > > I guess that in the situation the worker entered an error loop, we can\n> > > guarantee that the worker fails while applying the first non-empty\n> > > transaction since starting logical replication. And the transaction is\n> > > what we’d like to skip. If the transaction that can be applied without\n> > > an error is resent after a restart, it’s a problem of logical\n> > > replication. As you pointed out, it's possible that there are some\n> > > empty transactions before the transaction in question since we don't\n> > > advance replication origin LSN if the transaction is empty. Also,\n> > > probably the same is true for a streamed transaction that is rolled\n> > > back or ROLLBACK-PREPARED transactions. So, we can also skip clearing\n> > > subskipxid if the transaction is empty? 
That is, we make sure to clear\n> > > it after applying the first non-empty transaction. We would need to\n> > > carefully think about this solution otherwise ALTER SUBSCRIPTION SKIP\n> > > ends up not working at all in some cases.\n>\n> I think it is okay to clear after the first successful application of\n> any transaction. What I was not sure was about the idea of giving\n> WARNING/ERROR if the first xact to be applied is not the same as\n> skip_xid.\n\nDo you prefer not to do anything in this case?\n\n>\n> >\n> > Probably, we also need to consider the case where the tablesync worker\n> > entered an error loop and the user wants to skip the transaction? The\n> > apply worker is also running at the same time but it should not clear\n> > subskipxid. Similarly, the tablesync worker should not clear\n> > subskipxid if the apply worker wants to skip the transaction.\n> >\n>\n> I think for tablesync workers, the skip_xid set via this mechanism\n> won't work as we don't have any remote_xid for them, and neither any\n> XID is reported in the view for them.\n\nIf the tablesync worker raises an error while applying changes after\nfinishing the copy, it also reports the error XID.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 13:05:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 7:05 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 8:33 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>\n>> Given that we cannot use rely on the pg_stat_subscription_workers view\n>> for this purpose, we would need either a new sub-system that tracks\n>> each logical replication status so the system can set the error XID to\n>> subskipxid, or to wait for shared-memory based stats collector.\n>\n>\n> I'm reading over the monitoring-stats page to try and get my head around all of this. First of all, it defines two kinds of views:\n>\n> 1. PostgreSQL's statistics collector is a subsystem that supports collection and reporting of information about server activity.\n> 2. PostgreSQL also supports reporting dynamic information ... This facility is independent of the collector process.\n>\n> In then has two tables:\n>\n> 28.1 Dynamic Statistics Views (describing #2 above)\n> 28.2 Collected Statistics Views (describing #1 above)\n>\n> Apparently the \"collector process\" is UDP-like, not reliable. The documentation fails to mention this fact. I'd argue that this is a documentation bug.\n>\n> I do see that the pg_stat_subscription_workers view is correctly placed in Table 28.2\n>\n> Reviewing the other views listed in that table only pg_stat_archiver abuses the statistics collector in a similar fashion. All of the others are actually metric oriented.\n>\n> I don't care for the specification: \"will contain one row per subscription worker on which errors have occurred, for workers applying logical replication changes and workers handling the initial data copy of the subscribed tables.\"\n>\n> I would much rather have this behave similar to pg_stat_activity (which, of course, is a Dynamic Statistics View...) 
in that it shows only and all workers that are presently working.\n\nI have no objection against having a dynamic statistics view showing\nthe status of each running worker but I think it should be implemented\nin a separate view and not be something that replaces the\npg_stat_subscription_workers. I think pg_stat_subscription would be\nthe right place for it.\n\n> The tablesync workers should go away when they have finished synchronizing. I should not have to manually intervene to get rid of unreliable expired data. The log file feels like a superior solution to this monitoring view.\n>\n> Alternatively, if the tablesync workers are done but we've been accumulating real statistics for them, then by all means keep them included in the view - but regardless of whether they encountered an error. But maybe the view can right join in pg_stat_subscription as show a column for \"(pid is not null) AS is_active\".\n>\n> Maybe we need to add a track_finished_tablesync_workers GUC so the DBA can decide whether to devote storage and processing resources to that historical information.\n>\n> If you had kept the original view name, \"pg_stat_subscription_error\", this whole issue goes away. But you decided to make it more generic and call it \"pg_stat_subscription_workers\" - which means you need to get rid of the error-specific condition in the WHERE clause for the view. Show all workers - I can filter on is_active. Showing only active workers is also acceptable. You won't get to change your mind so decide whether this wants to show only current and running state or whether historical statistics for now defunct tablesync workers are desired. 
Personally, I would just show active workers and if someone wants to add the feature they can add a track_tablesync_worker_stats GUC and a matching view.\n\nWe plan to clear/remove table sync entries who finished synchronization.\n\nIt’s better not to merge dynamic statistics such as pid and is_active\nand accumulative statistics into one view. I think we can have both\nviews: pg_stat_subscription_workers view with some changes based on\nthe review comments (e.g., removing defunct tablesync entry), and\nanother view showing dynamic statistics such as the worker status.\n\n> From that, every apply worker should be sending a statistics message to the collector periodically. If error info is not present and the state is \"all is well\", clear out any existing error info from the view. The attempt to include an actual statistic field here doesn't seem useful nor redeeming. I would add a \"state\" field in its place (well, after subrelid). And I would still rename the columns to current_error_* and note that these should be null unless the status field shows error (there may be some additional complexity here). Just get rid of last_error_count.\n>\n\nI don't think that using the stats collector to show the current\nstatus of each worker is a good idea because of 500ms lag, UDP\nconnection etc. Even if error info is not present and the state is\ngood according to the view, it might be out-of-date or simply not\ntrue. If we want to do that, it’s much better to prepare something on\nshmem so each worker can store its status (running or error, error\nxid, etc.) and have pg_stat_subscription (or another view) show the\ninformation. One thing we need to consider is that it needs to leave\nthe status even after exiting apply/tablesync worker but we don't know\nhow many statuses for workers we need to allocate on the shmem at\nstartup time.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 13:10:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 9:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 12:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > I think it is okay to clear after the first successful application of\n> > any transaction. What I was not sure was about the idea of giving\n> > WARNING/ERROR if the first xact to be applied is not the same as\n> > skip_xid.\n>\n> Do you prefer not to do anything in this case?\n>\n\nI am fine with clearing the skip_xid after the first successful\napplication. But note, we shouldn't do catalog access for this, we can\ncheck if it is set in MySubscription.\n\n> >\n> > >\n> > > Probably, we also need to consider the case where the tablesync worker\n> > > entered an error loop and the user wants to skip the transaction? The\n> > > apply worker is also running at the same time but it should not clear\n> > > subskipxid. Similarly, the tablesync worker should not clear\n> > > subskipxid if the apply worker wants to skip the transaction.\n> > >\n> >\n> > I think for tablesync workers, the skip_xid set via this mechanism\n> > won't work as we don't have any remote_xid for them, and neither any\n> > XID is reported in the view for them.\n>\n> If the tablesync worker raises an error while applying changes after\n> finishing the copy, it also reports the error XID.\n>\n\nRight and agreed with your assessment for the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 09:45:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 9:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jan 26, 2022 at 9:36 AM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n> > On Wed, Jan 26, 2022 at 12:54 PM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n> > > >\n> > > > Probably, we also need to consider the case where the tablesync\n> worker\n> > > > entered an error loop and the user wants to skip the transaction? The\n> > > > apply worker is also running at the same time but it should not clear\n> > > > subskipxid. Similarly, the tablesync worker should not clear\n> > > > subskipxid if the apply worker wants to skip the transaction.\n> > > >\n> > >\n> > > I think for tablesync workers, the skip_xid set via this mechanism\n> > > won't work as we don't have any remote_xid for them, and neither any\n> > > XID is reported in the view for them.\n> >\n> > If the tablesync worker raises an error while applying changes after\n> > finishing the copy, it also reports the error XID.\n> >\n>\n> Right and agreed with your assessment for the same.\n>\n>\nIIUC each tablesync process also performs an apply stage but only applies\nthe messages related to the single table it is responsible for. Once all\ntablesync workers synchronize they are all destroyed and the main apply\nworker takes over and applies transactions to all subscribed tables.\n\nWe probably should just provide an option for the user to specify\n\"subrelid\". If null, only the main apply worker will skip the given xid,\notherwise only the worker tasked with syncing that particular table will do\nso. It might take a sequence of ALTER SUBSCRIPTION SET commands to get a\nbroken initial table synchronization to load completely but at least there\nwill not be any surprises as to which tables had transactions skipped and\nwhich did not.\n\nIt may even make sense, eventually for the main apply worker to skip on a\nsubrelid basis. 
Since the main apply worker isn't applying transactions at\nthe same time as the tablesync workers the non-null subrelid can also be\ninterpreted by the main apply worker.\n\nDavid J.",
"msg_date": "Tue, 25 Jan 2022 21:43:37 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 1:43 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 9:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Jan 26, 2022 at 9:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> > On Wed, Jan 26, 2022 at 12:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> > > >\n>> > > > Probably, we also need to consider the case where the tablesync worker\n>> > > > entered an error loop and the user wants to skip the transaction? The\n>> > > > apply worker is also running at the same time but it should not clear\n>> > > > subskipxid. Similarly, the tablesync worker should not clear\n>> > > > subskipxid if the apply worker wants to skip the transaction.\n>> > > >\n>> > >\n>> > > I think for tablesync workers, the skip_xid set via this mechanism\n>> > > won't work as we don't have any remote_xid for them, and neither any\n>> > > XID is reported in the view for them.\n>> >\n>> > If the tablesync worker raises an error while applying changes after\n>> > finishing the copy, it also reports the error XID.\n>> >\n>>\n>> Right and agreed with your assessment for the same.\n>>\n>\n> IIUC each tablesync process also performs an apply stage but only applies the messages related to the single table it is responsible for. Once all tablesync workers synchronize they are all destroyed and the main apply worker takes over and applies transactions to all subscribed tables.\n>\n> We probably should just provide an option for the user to specify \"subrelid\". If null, only the main apply worker will skip the given xid, otherwise only the worker tasked with syncing that particular table will do so. 
It might take a sequence of ALTER SUBSCRIPTION SET commands to get a broken initial table synchronization to load completely but at least there will not be any surprises as to which tables had transactions skipped and which did not.\n\nThat would work, but I’m concerned about whether users can specify it\nproperly. Also, we would need to change the errcontext message\ngenerated by apply_error_callback() so the user can tell whether the\nerror occurred in the apply worker or a tablesync worker.\n\nOr, as another idea, since an error during table synchronization is\nnot common and could in practice be resolved by truncating the table and\nrestarting the synchronization, there might be no need for\nall this and we can support it only for apply worker errors.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 26 Jan 2022 16:21:00 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 12:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 1:43 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >\n> > We probably should just provide an option for the user to specify \"subrelid\". If null, only the main apply worker will skip the given xid, otherwise only the worker tasked with syncing that particular table will do so. It might take a sequence of ALTER SUBSCRIPTION SET commands to get a broken initial table synchronization to load completely but at least there will not be any surprises as to which tables had transactions skipped and which did not.\n>\n> That would work but I’m concerned that the users can specify it\n> properly. Also, we would need to change the errcontext message\n> generated by apply_error_callback() so the user can know that the\n> error occurred in either apply worker or tablesync worker.\n>\n> Or, as another idea, since an error during table synchronization is\n> not common and could be resolved by truncating the table and\n> restarting the synchronization in practice, there might be no need\n> this much and we can support it only for apply worker errors.\n>\n\nYes, that is what I also have in mind. We can always extend this\nfeature for the tablesync process because it can fail not only for the\nspecified skip_xid but also for many other reasons during the initial\ncopy.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jan 2022 16:32:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jan 26, 2022 at 8:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 12:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jan 26, 2022 at 1:43 PM David G. Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> > >\n> > > We probably should just provide an option for the user to specify \"subrelid\". If null, only the main apply worker will skip the given xid, otherwise only the worker tasked with syncing that particular table will do so. It might take a sequence of ALTER SUBSCRIPTION SET commands to get a broken initial table synchronization to load completely but at least there will not be any surprises as to which tables had transactions skipped and which did not.\n> >\n> > That would work but I’m concerned that the users can specify it\n> > properly. Also, we would need to change the errcontext message\n> > generated by apply_error_callback() so the user can know that the\n> > error occurred in either apply worker or tablesync worker.\n> >\n> > Or, as another idea, since an error during table synchronization is\n> > not common and could be resolved by truncating the table and\n> > restarting the synchronization in practice, there might be no need\n> > this much and we can support it only for apply worker errors.\n> >\n>\n> Yes, that is what I have also in mind. We can always extend this\n> feature for tablesync process because it can not only fail for the\n> specified skip_xid but also for many other reasons during the initial\n> copy.\n\nI'll update the patch accordingly to test and verify this approach.\n\nIn the meantime, I’d like to discuss the possible ideas of storing the\nerror XID somewhere the worker can see it even after a restart. 
It has\nbeen proposed that the worker updates the catalog when an error\noccurs, but that was criticized because updating the catalog in such a\nsituation is not a good idea.\n\nThe next idea I considered was to store the error XID somewhere in\nshmem (e.g., ReplicationState). But it requires at least as many\nentries as subscriptions in principle, not\nmax_logical_replication_workers. Since we don’t know that number at startup\ntime, we would need to use DSM or a cache with a fixed number of entries. It\nseems overkill to me.\n\nThe third idea, which is slightly better than the others, is to update the\ncatalog from the launcher process, not the worker process; when an error\noccurs, the apply worker stores the error XID (and maybe its\nsubscription OID) into its LogicalRepWorker entry, and the launcher\nupdates the corresponding entry of the pg_subscription catalog before\nlaunching workers. After the worker restarts, it clears the error XID\nin the catalog if it successfully applied the transaction with the\nerror XID. The user can enable the transaction-skipping behavior with a\nquery, say ALTER SUBSCRIPTION SKIP ENABLED. The user cannot enable the\nskipping behavior if the error XID is not set. If the skipping\nbehavior is enabled and the error XID is a valid value, the worker\nskips the transaction and then clears both the error XID and the flag for\nthe skipping behavior in the catalog.\n\nWith this idea, we don’t need a complex mechanism to store the error\nXID for each subscription and can ensure that we skip only the transaction\nin question. But my concern is that the launcher updates the catalog.\nSince it doesn’t connect to any database, it probably cannot open the\ncatalog indexes (because that requires looking up pg_class). Therefore, we\nhave to use in-place updates here. 
Through quick tests, I’ve confirmed\nthat using heap_inplace_update() to update the error XID on\npg_subscription tuples seems to work, but I'm not sure that using an in-place\nupdate here is a legitimate approach.\n\nWhat do you think? Are there any other ideas?\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 27 Jan 2022 14:51:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 26.01.22 05:05, Masahiko Sawada wrote:\n>> I think it is okay to clear after the first successful application of\n>> any transaction. What I was not sure was about the idea of giving\n>> WARNING/ERROR if the first xact to be applied is not the same as\n>> skip_xid.\n> Do you prefer not to do anything in this case?\n\nI think a warning would be sensible. If the user specifies to skip a \ncertain transaction and then that doesn't happen, we should at least say \nsomething.\n\n\n",
"msg_date": "Thu, 27 Jan 2022 14:42:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jan 27, 2022 at 10:42 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 26.01.22 05:05, Masahiko Sawada wrote:\n> >> I think it is okay to clear after the first successful application of\n> >> any transaction. What I was not sure was about the idea of giving\n> >> WARNING/ERROR if the first xact to be applied is not the same as\n> >> skip_xid.\n> > Do you prefer not to do anything in this case?\n>\n> I think a warning would be sensible. If the user specifies to skip a\n> certain transaction and then that doesn't happen, we should at least say\n> something.\n\nWhile waiting for comments in the discussion about the designs of\nboth pg_stat_subscription_workers and the ALTER SUBSCRIPTION SKIP feature,\nI’ve incorporated some (minor) comments into the current design patch,\nwhich includes:\n\n* Use LSN instead of XID.\n* Raise a warning if the user specifies to skip a certain transaction\nand then that doesn’t happen.\n* Skip-LSN has an effect on the first non-empty transaction. That is,\nit’s cleared after successfully committing a non-empty transaction,\npreventing a wrongly specified LSN from remaining.\n* Remove some unnecessary TAP tests to reduce the test time.\n\nI think we all agree with the first point regardless of where we store\nerror information. And speaking of the current design, I think we all\nagree on the other points. Since the design discussion is ongoing, I’ll\nincorporate other comments according to the result of the discussion.\n\nThe attached 0001 patch modifies pg_stat_subscription_workers to\nreport LSN instead of XID, which is required by the ALTER SUBSCRIPTION\nSKIP patch, the 0002 patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 11 Feb 2022 11:09:38 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Feb 11, 2022 at 7:40 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jan 27, 2022 at 10:42 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 26.01.22 05:05, Masahiko Sawada wrote:\n> > >> I think it is okay to clear after the first successful application of\n> > >> any transaction. What I was not sure was about the idea of giving\n> > >> WARNING/ERROR if the first xact to be applied is not the same as\n> > >> skip_xid.\n> > > Do you prefer not to do anything in this case?\n> >\n> > I think a warning would be sensible. If the user specifies to skip a\n> > certain transaction and then that doesn't happen, we should at least say\n> > something.\n>\n> Meanwhile waiting for comments on the discussion about the designs of\n> both pg_stat_subscription_workers and ALTER SUBSCRIPTION SKIP feature,\n> I’ve incorporated some (minor) comments on the current design patch,\n> which includes:\n>\n> * Use LSN instead of XID.\n>\n\nI think exposing LSN is a better approach as it doesn't have the\ndangers of wraparound. And, I think users can use it with the existing\nfunction pg_replication_origin_advance() which will save us from\nadding additional code for this feature. We can explain/expand in docs\nhow users can use the error information from view/error_logs and use\nthe existing function to skip conflicting transactions. We might want\nto even expose error_origin to make it a bit easier for users but not\nsure. I feel the need for the new syntax (and then added code\ncomplexity due to that) isn't warranted if we expose error_LSN and let\nusers use it with the existing functions.\n\nDo you see any problem with the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Feb 2022 14:46:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 14.02.22 10:16, Amit Kapila wrote:\n> I think exposing LSN is a better approach as it doesn't have the\n> dangers of wraparound. And, I think users can use it with the existing\n> function pg_replication_origin_advance() which will save us from\n> adding additional code for this feature. We can explain/expand in docs\n> how users can use the error information from view/error_logs and use\n> the existing function to skip conflicting transactions. We might want\n> to even expose error_origin to make it a bit easier for users but not\n> sure. I feel the need for the new syntax (and then added code\n> complexity due to that) isn't warranted if we expose error_LSN and let\n> users use it with the existing functions.\n\nWell, the whole point of this feature is to provide a higher-level \ninterface instead of pg_replication_origin_advance(). Replication \norigins are currently not something the users have to deal with \ndirectly. We already document that you can use \npg_replication_origin_advance() to skip erroring transactions. But that \nseems unsatisfactory. It'd be like using pg_surgery to fix unique \nconstraint violations.\n\n\n",
"msg_date": "Tue, 15 Feb 2022 11:35:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Feb 15, 2022 at 7:35 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 14.02.22 10:16, Amit Kapila wrote:\n> > I think exposing LSN is a better approach as it doesn't have the\n> > dangers of wraparound. And, I think users can use it with the existing\n> > function pg_replication_origin_advance() which will save us from\n> > adding additional code for this feature. We can explain/expand in docs\n> > how users can use the error information from view/error_logs and use\n> > the existing function to skip conflicting transactions. We might want\n> > to even expose error_origin to make it a bit easier for users but not\n> > sure. I feel the need for the new syntax (and then added code\n> > complexity due to that) isn't warranted if we expose error_LSN and let\n> > users use it with the existing functions.\n>\n> Well, the whole point of this feature is to provide a higher-level\n> interface instead of pg_replication_origin_advance(). Replication\n> origins are currently not something the users have to deal with\n> directly. We already document that you can use\n> pg_replication_origin_advance() to skip erroring transactions. But that\n> seems unsatisfactory. It'd be like using pg_surgery to fix unique\n> constraint violations.\n\n+1\n\nI’ve considered a plan for the skipping logical replication\ntransaction feature toward PG15. Several ideas and patches have been\nproposed here and another related thread[1][2] for the skipping\nlogical replication transaction feature as follows:\n\nA. Change pg_stat_subscription_workers (committed 7a8507329085)\nB. Add origin name and commit-LSN to logical replication worker\nerrcontext (proposed[2])\nC. Store error information (e.g., the error message and commit-LSN) to\nthe system catalog\nD. Introduce ALTER SUBSCRIPTION SKIP\nE. 
Record the skipped data somewhere: server logs or a table\n\nGiven the remaining time for PG15, it’s unlikely we can complete all of\nthem for PG15 by the feature freeze. The most realistic plan for PG15\nin my mind is to complete B and D. With these two items, the LSN of\nthe errored transaction is shown in the server log, and we can ask\nusers to check the server logs for the LSN and use it with the ALTER\nSUBSCRIPTION SKIP command. If the community agrees with B+D, we will\nhave a user-visible feature for PG15 which can be further\nextended/improved in PG16 by adding C and E. I started a new thread[2]\nfor B yesterday. In this thread, I'd like to discuss D.\n\nI've attached an updated patch for D and here is the summary:\n\n* Introduce a new command ALTER SUBSCRIPTION ... SKIP (lsn =\n'0/1234'). The user can get the commit-LSN of the transaction in\nquestion from the server logs thanks to B[2].\n* The user-specified LSN (say skip-LSN) is stored in the\npg_subscription catalog.\n* The apply worker skips the whole transaction if the transaction's\ncommit-LSN exactly matches the skip-LSN.\n* The skip-LSN has an effect on only the first non-empty transaction\nsince the worker started to apply changes. IOW it's cleared after\neither skipping the whole transaction or successfully committing a\nnon-empty transaction, preventing the skip-LSN from remaining in the\ncatalog. Also, since the latter case means that the user set the wrong\nskip-LSN, we clear it with a warning.\n* ALTER SUBSCRIPTION SKIP doesn't support tablesync workers. But it\nwould not be a problem in practice since an error during table\nsynchronization is not common and could be resolved by truncating the\ntable and restarting the synchronization.\n\nFor the above reasons, the ALTER SUBSCRIPTION SKIP command is safer than\nthe existing way of using pg_replication_origin_advance().\n\nI've attached an updated patch along with two patches for cfbot tests\nsince the main patch (0003) depends on the other two patches. 
Both\n0001 and 0002 patches are the same ones I attached on another\nthread[2].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20220125063131.4cmvsxbz2tdg6g65%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/CAD21AoBarBf2oTF71ig2g_o%3D3Z_Dt6_sOpMQma1kFgbnA5OZ_w%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 2 Mar 2022 00:00:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 1, 2022 at 8:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I’ve considered a plan for the skipping logical replication\n> transaction feature toward PG15. Several ideas and patches have been\n> proposed here and another related thread[1][2] for the skipping\n> logical replication transaction feature as follows:\n>\n> A. Change pg_stat_subscription_workers (committed 7a8507329085)\n> B. Add origin name and commit-LSN to logical replication worker\n> errcontext (proposed[2])\n> C. Store error information (e.g., the error message and commit-LSN) to\n> the system catalog\n> D. Introduce ALTER SUBSCRIPTION SKIP\n> E. Record the skipped data somewhere: server logs or a table\n>\n> Given the remaining time for PG15, it’s unlikely to complete all of\n> them for PG15 by the feature freeze. The most realistic plan for PG15\n> in my mind is to complete B and D. With these two items, the LSN of\n> the error-ed transaction is shown in the server log, and we can ask\n> users to check server logs for the LSN and use it with ALTER\n> SUBSCRIPTION SKIP command.\n>\n\nIt makes sense to me to try to finish B and D from the above list for\nPG-15. I can review the patch for D in detail if others don't have an\nobjection to it.\n\nPeter E., others, any opinion on this matter?\n\n> If the community agrees with B+D, we will\n> have a user-visible feature for PG15 which can be further\n> extended/improved in PG16 by adding C and E.\n\nAgreed.\n\n>\n> I've attached an updated patch for D and here is the summary:\n>\n> * Introduce a new command ALTER SUBSCRIPTION ... SKIP (lsn =\n> '0/1234'). 
The user can get the commit-LSN of the transaction in\n> question from the server logs thanks to B[2].\n> * The user-specified LSN (say skip-LSN) is stored in the\n> pg_subscription catalog.\n> * The apply worker skips the whole transaction if the transaction's\n> commit-LSN exactly matches to skip-LSN.\n> * The skip-LSN has an effect on only the first non-empty transaction\n> since the worker started to apply changes. IOW it's cleared after\n> either skipping the whole transaction or successfully committing a\n> non-empty transaction, preventing the skip-LSN to remain in the\n> catalog. Also, since the latter case means that the user set the wrong\n> skip-LSN we clear it with a warning.\n>\n\nAs this will be displayed only in server logs and by background apply\nworker, should it be LOG or WARNING?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Mar 2022 14:11:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, March 2, 2022 12:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated patch along with two patches for cfbot tests since the\r\n> main patch (0003) depends on the other two patches. Both\r\n> 0001 and 0002 patches are the same ones I attached on another thread[2].\r\nHi, few comments on v12-0003-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transa.patch.\r\n\r\n\r\n(1) doc/src/sgml/ref/alter_subscription.sgml\r\n\r\n\r\n+ <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</r$\r\n...\r\n+ ...After logical replication\r\n+ successfully skips the transaction or commits non-empty transaction,\r\n+ the LSN (stored in\r\n+ <structname>pg_subscription</structname>.<structfield>subskiplsn</structfield>)\r\n+ is cleared. See <xref linkend=\"logical-replication-conflicts\"/> for\r\n+ the details of logical replication conflicts.\r\n+ </para>\r\n...\r\n+ <term><literal>lsn</literal> (<type>pg_lsn</type>)</term>\r\n+ <listitem>\r\n+ <para>\r\n+ Specifies the commit LSN of the remote transaction whose changes are to be skipped\r\n+ by the logical replication worker. Skipping\r\n+ individual subtransactions is not supported. Setting <literal>NONE</literal>\r\n+ resets the LSN.\r\n\r\n\r\nI think we'll extend the SKIP option choices in the future besides the 'lsn' option.\r\nThen, one sentence \"After logical replication successfully skips the transaction or commits non-empty\r\ntransaction, the LSN .. 
is cleared\" should be moved to the explanation for the 'lsn' section,\r\nif we think this behavior of resetting the LSN is unique to the 'lsn' option ?\r\n\r\n\r\n(2) doc/src/sgml/catalogs.sgml\r\n\r\n+ <row>\r\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\r\n+ <structfield>subskiplsn</structfield> <type>pg_lsn</type>\r\n+ </para>\r\n+ <para>\r\n+ Commit LSN of the transaction whose changes are to be skipped, if a valid\r\n+ LSN; otherwise <literal>0/0</literal>.\r\n+ </para></entry>\r\n+ </row>\r\n+\r\n\r\nWe need to cover the PREPARE that keeps causing errors on the subscriber.\r\nThis would apply to the entire patch (e.g. the rename of skip_xact_commit_lsn)\r\n\r\n(3) apply_handle_commit_internal comments\r\n\r\n /*\r\n * Helper function for apply_handle_commit and apply_handle_stream_commit.\r\n+ * Return true if the transaction was committed, otherwise return false.\r\n */\r\n\r\nIf we want to make the newly added line aligned with other functions in worker.c,\r\nwe should insert one blank line before it ?\r\n\r\n\r\n(4) apply_worker_post_transaction\r\n\r\nI'm not sure if the current refactoring is good or not.\r\nFor example, the current HEAD calls pgstat_report_stat(false)\r\nfor a commit case if we are in a transaction in apply_handle_commit_internal.\r\nOn the other hand, your refactoring calls pgstat_report_stat unconditionally\r\nfor the apply_handle_commit path. I'm not sure if there\r\nare many cases that call apply_handle_commit without opening a transaction,\r\nbut is that acceptable ?\r\n\r\nAlso, the name is a bit broad.\r\nHow about making a function only for stopping and resetting LSN at this stage ?\r\n\r\n\r\n(5) comments for clear_subscription_skip_lsn\r\n\r\nHow about changing the comment like below ?\r\n\r\nFrom:\r\nClear subskiplsn of pg_subscription catalog\r\nTo:\r\nClear subskiplsn of pg_subscription catalog with origin state update\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 10 Mar 2022 05:10:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 1, 2022 at 8:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated patch along with two patches for cfbot tests\n> since the main patch (0003) depends on the other two patches. Both\n> 0001 and 0002 patches are the same ones I attached on another\n> thread[2].\n>\n\nA few comments on 0003:\n=====================\n1.\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>subskiplsn</structfield> <type>pg_lsn</type>\n+ </para>\n+ <para>\n+ Commit LSN of the transaction whose changes are to be skipped,\nif a valid\n+ LSN; otherwise <literal>0/0</literal>.\n+ </para></entry>\n+ </row>\n\nCan't this be a prepared LSN or rollback prepared LSN? Can we say\nFinish/End LSN and then add some details about which LSNs can appear there?\n\n2. The conflict resolution explanation needs an update after the\nlatest commits and we should probably change the commit LSN\nterminology as mentioned in the previous point.\n\n3. The text in alter_subscription.sgml looks a bit repetitive to me\n(similar to what we have in logical-replication.sgml related to\nconflicts). 
Here also we refer to only commit LSN which needs to be\nchanged as mentioned in the previous two points.\n\n4.\nif (strcmp(lsn_str, \"none\") == 0)\n+ {\n+ /* Setting lsn = NONE is treated as resetting LSN */\n+ lsn = InvalidXLogRecPtr;\n+ }\n+ else\n+ {\n+ /* Parse the argument as LSN */\n+ lsn = DatumGetTransactionId(DirectFunctionCall1(pg_lsn_in,\n+ CStringGetDatum(lsn_str)));\n+\n+ if (XLogRecPtrIsInvalid(lsn))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"invalid WAL location (LSN): %s\", lsn_str)));\n\nIs there a reason that we don't want to allow setting 0\n(InvalidXLogRecPtr) for skip LSN?\n\n5.\n+# The subscriber will enter an infinite error loop, so we don't want\n+# to overflow the server log with error messages.\n+$node_subscriber->append_conf(\n+ 'postgresql.conf',\n+ qq[\n+wal_retrieve_retry_interval = 2s\n+]);\n\nCan we change this test to use disable_on_error feature? I am thinking\nif the disable_on_error feature got committed first, maybe we can have\none test file for this and disable_on_error feature (something like\nconflicts.pl).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Mar 2022 17:32:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 2:10 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Wednesday, March 2, 2022 12:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch along with two patches for cfbot tests since the\n> > main patch (0003) depends on the other two patches. Both\n> > 0001 and 0002 patches are the same ones I attached on another thread[2].\n> Hi, few comments on v12-0003-Add-ALTER-SUBSCRIPTION-.-SKIP-to-skip-the-transa.patch.\n\nThank you for the comments.\n\n>\n>\n> (1) doc/src/sgml/ref/alter_subscription.sgml\n>\n>\n> + <term><literal>SKIP ( <replaceable class=\"parameter\">skip_option</replaceable> = <replaceable class=\"parameter\">value</r$\n> ...\n> + ...After logical replication\n> + successfully skips the transaction or commits non-empty transaction,\n> + the LSN (stored in\n> + <structname>pg_subscription</structname>.<structfield>subskiplsn</structfield>)\n> + is cleared. See <xref linkend=\"logical-replication-conflicts\"/> for\n> + the details of logical replication conflicts.\n> + </para>\n> ...\n> + <term><literal>lsn</literal> (<type>pg_lsn</type>)</term>\n> + <listitem>\n> + <para>\n> + Specifies the commit LSN of the remote transaction whose changes are to be skipped\n> + by the logical replication worker. Skipping\n> + individual subtransactions is not supported. Setting <literal>NONE</literal>\n> + resets the LSN.\n>\n>\n> I think we'll extend the SKIP option choices in the future besides the 'lsn' option.\n> Then, one sentence \"After logical replication successfully skips the transaction or commits non-empty\n> transaction, the LSN .. is cleared\" should be moved to the explanation for 'lsn' section,\n> if we think this behavior to reset LSN is unique for 'lsn' option ?\n\nHmm, I think that regardless of the type of option (e.g., relid, xid,\nand action whatever), resetting the specified something after that is\nspecific to SKIP command. 
SKIP command should have an effect on only\nthe first non-empty transaction. Otherwise, we could end up leaving it\nif the user mistakenly specifies the wrong one.\n\n>\n>\n> (2) doc/src/sgml/catalogs.sgml\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subskiplsn</structfield> <type>pg_lsn</type>\n> + </para>\n> + <para>\n> + Commit LSN of the transaction whose changes are to be skipped, if a valid\n> + LSN; otherwise <literal>0/0</literal>.\n> + </para></entry>\n> + </row>\n> +\n>\n> We need to cover the PREPARE that keeps causing errors on the subscriber.\n> This would apply to the entire patch (e.g. the rename of skip_xact_commit_lsn)\n\nFixed.\n\n>\n> (3) apply_handle_commit_internal comments\n>\n> /*\n> * Helper function for apply_handle_commit and apply_handle_stream_commit.\n> + * Return true if the transaction was committed, otherwise return false.\n> */\n>\n> If we want to make the new added line alinged with other functions in worker.c,\n> we should insert one blank line before it ?\n\nThis part is removed.\n\n>\n>\n> (4) apply_worker_post_transaction\n>\n> I'm not sure if the current refactoring is good or not.\n> For example, the current HEAD calls pgstat_report_stat(false)\n> for a commit case if we are in a transaction in apply_handle_commit_internal.\n> On the other hand, your refactoring calls pgstat_report_stat unconditionally\n> for apply_handle_commit path. I'm not sure if there\n> are many cases to call apply_handle_commit without opening a transaction,\n> but is that acceptable ?\n>\n> Also, the name is a bit broad.\n> How about making a function only for stopping and resetting LSN at this stage ?\n\nAgreed, it seems to be overkill. 
I'll revert that change.\n\n>\n>\n> (5) comments for clear_subscription_skip_lsn\n>\n> How about changing the comment like below ?\n>\n> From:\n> Clear subskiplsn of pg_subscription catalog\n> To:\n> Clear subskiplsn of pg_subscription catalog with origin state update\n>\n\nUpdated.\n\nI'll submit an updated patch that incorporated comments I got so far\nand is rebased to disable_on_error patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 11 Mar 2022 11:25:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 10, 2022 at 9:02 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 1, 2022 at 8:31 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch along with two patches for cfbot tests\n> > since the main patch (0003) depends on the other two patches. Both\n> > 0001 and 0002 patches are the same ones I attached on another\n> > thread[2].\n> >\n>\n> Few comments on 0003:\n> =====================\n> 1.\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>subskiplsn</structfield> <type>pg_lsn</type>\n> + </para>\n> + <para>\n> + Commit LSN of the transaction whose changes are to be skipped,\n> if a valid\n> + LSN; otherwise <literal>0/0</literal>.\n> + </para></entry>\n> + </row>\n>\n> Can't this be prepared LSN or rollback prepared LSN? Can we say\n> Finish/End LSN and then add some details which all LSNs can be there?\n\nRight, changed to finish LSN.\n\n>\n> 2. The conflict resolution explanation needs an update after the\n> latest commits and we should probably change the commit LSN\n> terminology as mentioned in the previous point.\n\nUpdated.\n\n>\n> 3. The text in alter_subscription.sgml looks a bit repetitive to me\n> (similar to what we have in logical-replication.sgml related to\n> conflicts). 
Here also we refer to only commit LSN which needs to be\n> changed as mentioned in the previous two points.\n\nUpdated.\n\n>\n> 4.\n> if (strcmp(lsn_str, \"none\") == 0)\n> + {\n> + /* Setting lsn = NONE is treated as resetting LSN */\n> + lsn = InvalidXLogRecPtr;\n> + }\n> + else\n> + {\n> + /* Parse the argument as LSN */\n> + lsn = DatumGetTransactionId(DirectFunctionCall1(pg_lsn_in,\n> + CStringGetDatum(lsn_str)));\n> +\n> + if (XLogRecPtrIsInvalid(lsn))\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"invalid WAL location (LSN): %s\", lsn_str)));\n>\n> Is there a reason that we don't want to allow setting 0\n> (InvalidXLogRecPtr) for skip LSN?\n\n0 is obviously an invalid value for skip LSN, which should not be\nallowed similar to other options (like setting '' to slot_name). Also,\nwe use 0 (InvalidXLogRecPtr) internally to reset the subskipxid when\nNONE is specified.\n\n>\n> 5.\n> +# The subscriber will enter an infinite error loop, so we don't want\n> +# to overflow the server log with error messages.\n> +$node_subscriber->append_conf(\n> + 'postgresql.conf',\n> + qq[\n> +wal_retrieve_retry_interval = 2s\n> +]);\n>\n> Can we change this test to use disable_on_error feature? I am thinking\n> if the disable_on_error feature got committed first, maybe we can have\n> one test file for this and disable_on_error feature (something like\n> conflicts.pl).\n\nGood idea. Updated.\n\nI've attached an updated version patch. This patch can be applied on\ntop of the latest disable_on_error patch[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1Kes9TsMpGL6m%2BAJNHYCGRvx6piYQt5v6TEbH_t9jh8nA%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 11 Mar 2022 17:19:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, March 11, 2022 5:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated version patch. This patch can be applied on top of the\r\n> latest disable_on_error patch[1].\r\nHi, thank you for the patch. I'll share my review comments on v13.\r\n\r\n\r\n(a) src/backend/commands/subscriptioncmds.c\r\n\r\n@@ -84,6 +86,8 @@ typedef struct SubOpts\r\n bool streaming;\r\n bool twophase;\r\n bool disableonerr;\r\n+ XLogRecPtr lsn; /* InvalidXLogRecPtr for resetting purpose,\r\n+ * otherwise a valid LSN */\r\n\r\n\r\nI think this explanation is slightly odd and can be improved.\r\nStrictly speaking, I feel a *valid* LSN is for retting transaction purpose\r\nfrom the functional perspective. Also, the wording \"resetting purpose\"\r\nis unclear by itself. I'll suggest below change.\r\n\r\nFrom:\r\nInvalidXLogRecPtr for resetting purpose, otherwise a valid LSN\r\nTo:\r\nA valid LSN when we skip transaction, otherwise InvalidXLogRecPtr\r\n\r\n(b) The code position of additional append in describeSubscriptions\r\n\r\n\r\n+\r\n+ /* Skip LSN is only supported in v15 and higher */\r\n+ if (pset.sversion >= 150000)\r\n+ appendPQExpBuffer(&buf,\r\n+ \", subskiplsn AS \\\"%s\\\"\\n\",\r\n+ gettext_noop(\"Skip LSN\"));\r\n\r\nI suggest to combine this code after subdisableonerr.\r\n\r\n(c) parse_subscription_options\r\n\r\n\r\n+ /* Parse the argument as LSN */\r\n+ lsn = DatumGetTransactionId(DirectFunctionCall1(pg_lsn_in,\r\n\r\n\r\nHere, shouldn't we call DatumGetLSN, instead of DatumGetTransactionId ?\r\n\r\n\r\n(d) parse_subscription_options\r\n\r\n+ if (strcmp(lsn_str, \"none\") == 0)\r\n+ {\r\n+ /* Setting lsn = NONE is treated as resetting LSN */\r\n+ lsn = InvalidXLogRecPtr;\r\n+ }\r\n+\r\n\r\nWe should remove this pair of curly brackets that is for one sentence.\r\n\r\n\r\n(e) src/backend/replication/logical/worker.c\r\n\r\n+ * to skip applying the changes when starting to apply changes. 
The subskiplsn is\r\n+ * cleared after successfully skipping the transaction or applying non-empty\r\n+ * transaction, where the later avoids the mistakenly specified subskiplsn from\r\n+ * being left.\r\n\r\ntypo \"the later\" -> \"the latter\"\r\n\r\nAt the same time, I feel the last part of this sentence can be an independent sentence.\r\nFrom:\r\n, where the later avoids the mistakenly specified subskiplsn from being left\r\nTo:\r\n. The latter prevents the mistakenly specified subskiplsn from being left\r\n\r\n\r\n* Note that my comments below are applied if we choose we don't merge disable_on_error test with skip lsn tests.\r\n\r\n(f) src/test/subscription/t/030_skip_xact.pl\r\n\r\n+use Test::More tests => 4;\r\n\r\nIt's better to utilize the new style for the TAP test.\r\nThen, probably we should introduce done_testing()\r\nat the end of the test.\r\n\r\n(g) src/test/subscription/t/030_skip_xact.pl\r\n\r\nI think there's no need to create two types of subscriptions.\r\nJust one subscription with two_phase = on and streaming = on\r\nwould be sufficient for the tests(normal commit, commit prepared,\r\nstream commit cases). I think this point of view will reduce\r\nthe number of the table and the publication, which will\r\nmake the whole test simpler.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Fri, 11 Mar 2022 11:36:55 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Mar 11, 2022 4:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated version patch. This patch can be applied on\r\n> top of the latest disable_on_error patch[1].\r\n> \r\n\r\nThanks for your patch. Here are some comments for the v13 patch.\r\n\r\n1. doc/src/sgml/ref/alter_subscription.sgml\r\n+ Specifies the transaction's finish LSN of the remote transaction whose changes\r\n\r\nCould it be simplified to \"Specifies the finish LSN of the remote transaction\r\nwhose ...\".\r\n\r\n2.\r\nI met a failed assertion, the backtrace is attached. This is caused by the\r\nfollowing code in maybe_start_skipping_changes().\r\n\r\n+\t\t/*\r\n+\t\t * It's a rare case; a past subskiplsn was left because the server\r\n+\t\t * crashed after preparing the transaction and before clearing the\r\n+\t\t * subskiplsn. We clear it without a warning message so as not confuse\r\n+\t\t * the user.\r\n+\t\t */\r\n+\t\tif (unlikely(MySubscription->skiplsn < lsn))\r\n+\t\t{\r\n+\t\t\tclear_subscription_skip_lsn(MySubscription->skiplsn, InvalidXLogRecPtr, 0,\r\n+\t\t\t\t\t\t\t\t\t\tfalse);\r\n+\t\t\tAssert(!IsTransactionState());\r\n+\t\t}\r\n\r\nWe want to clear subskiplsn in the case mentioned in comment. But if the next\r\ntransaction is a steaming transaction and this function is called by\r\napply_spooled_messages(), we are inside a transaction here. So, I think this\r\nassertion is not suitable for streaming transaction. 
Thoughts?\r\n\r\n3.\r\n+\tXLogRecPtr\tsubskiplsn;\t\t/* All changes which committed at this LSN are\r\n+\t\t\t\t\t\t\t\t * skipped */\r\n\r\nTo be consistent, should the comment be changed to \"All changes which finished\r\nat this LSN are skipped\"?\r\n\r\n4.\r\n+ After logical replication worker successfully skips the transaction or commits\r\n+ non-empty transaction, the LSN (stored in\r\n+ <structname>pg_subscription</structname>.<structfield>subskiplsn</structfield>)\r\n+ is cleared.\r\n\r\nBesides \"commits non-empty transaction\", subskiplsn would also be cleared in\r\nsome two-phase commit cases I think. Like prepare/commit/rollback a transaction,\r\neven if it is an empty transaction. So, should we change it for these cases?\r\n\r\n5.\r\n+ * Clear subskiplsn of pg_subscription catalog with origin state update.\r\n\r\nShould \"with origin state update\" modified to \"with origin state updated\"?\r\n\r\nRegards,\r\nShi yu",
"msg_date": "Mon, 14 Mar 2022 09:50:41 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Friday, March 11, 2022 5:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated version patch. This patch can be applied on top of the\r\n> latest disable_on_error patch[1].\r\nHi, few extra comments on v13.\r\n\r\n\r\n(1) src/backend/replication/logical/worker.c\r\n\r\n\r\nWith regard to clear_subscription_skip_lsn,\r\nThere are cases that we conduct origin state update twice.\r\n\r\nFor instance, the case we reset subskiplsn by executing an\r\nirrelevant non-empty transaction. The first update is\r\nconducted at apply_handle_commit_internal and the second one\r\nis at clear_subscription_skip_lsn. In the second change,\r\nwe update replorigin_session_origin_lsn by smaller value(commit_lsn),\r\ncompared to the first update(end_lsn). Were those intentional and OK ?\r\n\r\n\r\n(2) src/backend/replication/logical/worker.c\r\n\r\n+ * Both origin_lsn and origin_timestamp are the remote transaction's end_lsn\r\n+ * and commit timestamp, respectively.\r\n+ */\r\n+static void\r\n+stop_skipping_changes(XLogRecPtr origin_lsn, TimestampTz origin_ts)\r\n\r\nTypo. Should change 'origin_timestamp' to 'origin_ts',\r\nbecause the name of the argument is the latter.\r\n\r\nAlso, here we handle not only commit but also prepare.\r\nYou need to fix the comment \"commit timestamp\" as well.\r\n\r\n(3) src/backend/replication/logical/worker.c\r\n\r\n+/*\r\n+ * Clear subskiplsn of pg_subscription catalog with origin state update.\r\n+ *\r\n+ * if with_warning is true, we raise a warning when clearing the subskipxid.\r\n\r\nIt's better to insert this second sentence as the last sentence of\r\nthe other comments. It should start with capital letter as well.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Mon, 14 Mar 2022 12:39:49 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 6:50 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Fri, Mar 11, 2022 4:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch. This patch can be applied on\n> > top of the latest disable_on_error patch[1].\n> >\n>\n> Thanks for your patch. Here are some comments for the v13 patch.\n\nThank you for the comments!\n\n>\n> 1. doc/src/sgml/ref/alter_subscription.sgml\n> + Specifies the transaction's finish LSN of the remote transaction whose changes\n>\n> Could it be simplified to \"Specifies the finish LSN of the remote transaction\n> whose ...\".\n\nFixed.\n\n>\n> 2.\n> I met a failed assertion, the backtrace is attached. This is caused by the\n> following code in maybe_start_skipping_changes().\n>\n> + /*\n> + * It's a rare case; a past subskiplsn was left because the server\n> + * crashed after preparing the transaction and before clearing the\n> + * subskiplsn. We clear it without a warning message so as not confuse\n> + * the user.\n> + */\n> + if (unlikely(MySubscription->skiplsn < lsn))\n> + {\n> + clear_subscription_skip_lsn(MySubscription->skiplsn, InvalidXLogRecPtr, 0,\n> + false);\n> + Assert(!IsTransactionState());\n> + }\n>\n> We want to clear subskiplsn in the case mentioned in comment. But if the next\n> transaction is a steaming transaction and this function is called by\n> apply_spooled_messages(), we are inside a transaction here. So, I think this\n> assertion is not suitable for streaming transaction. Thoughts?\n\nGood catch. After more thought, I realized that the assumption of this\nif statement is wrong and we don't necessarily need to do here since\nthe left skip-LSN will eventually be cleared when the next transaction\nis finished. 
So removed this part.\n\n>\n> 3.\n> + XLogRecPtr subskiplsn; /* All changes which committed at this LSN are\n> + * skipped */\n>\n> To be consistent, should the comment be changed to \"All changes which finished\n> at this LSN are skipped\"?\n\nFixed.\n\n>\n> 4.\n> + After logical replication worker successfully skips the transaction or commits\n> + non-empty transaction, the LSN (stored in\n> + <structname>pg_subscription</structname>.<structfield>subskiplsn</structfield>)\n> + is cleared.\n>\n> Besides \"commits non-empty transaction\", subskiplsn would also be cleared in\n> some two-phase commit cases I think. Like prepare/commit/rollback a transaction,\n> even if it is an empty transaction. So, should we change it for these cases?\n\nFixed.\n\n>\n> 5.\n> + * Clear subskiplsn of pg_subscription catalog with origin state update.\n>\n> Should \"with origin state update\" modified to \"with origin state updated\"?\n\nFixed.\n\nI'll submit an updated patch soon.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 15 Mar 2022 11:51:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 11, 2022 at 8:37 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, March 11, 2022 5:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated version patch. This patch can be applied on top of the\n> > latest disable_on_error patch[1].\n> Hi, thank you for the patch. I'll share my review comments on v13.\n>\n>\n> (a) src/backend/commands/subscriptioncmds.c\n>\n> @@ -84,6 +86,8 @@ typedef struct SubOpts\n> bool streaming;\n> bool twophase;\n> bool disableonerr;\n> + XLogRecPtr lsn; /* InvalidXLogRecPtr for resetting purpose,\n> + * otherwise a valid LSN */\n>\n>\n> I think this explanation is slightly odd and can be improved.\n> Strictly speaking, I feel a *valid* LSN is for retting transaction purpose\n> from the functional perspective. Also, the wording \"resetting purpose\"\n> is unclear by itself. I'll suggest below change.\n>\n> From:\n> InvalidXLogRecPtr for resetting purpose, otherwise a valid LSN\n> To:\n> A valid LSN when we skip transaction, otherwise InvalidXLogRecPtr\n\n\"when we skip transaction\" sounds incorrect to me since it's just an\noption value but does not indicate that we really skip the transaction\nthat has that LSN. 
I realized that we directly use InvalidXLogRecPtr\nfor subskiplsn so I think no need to mention it.\n\n>\n> (b) The code position of additional append in describeSubscriptions\n>\n>\n> +\n> + /* Skip LSN is only supported in v15 and higher */\n> + if (pset.sversion >= 150000)\n> + appendPQExpBuffer(&buf,\n> + \", subskiplsn AS \\\"%s\\\"\\n\",\n> + gettext_noop(\"Skip LSN\"));\n>\n> I suggest to combine this code after subdisableonerr.\n\nI got the comment[1] from Peter to put it at the end, which looks better to me.\n\n>\n> (c) parse_subscription_options\n>\n>\n> + /* Parse the argument as LSN */\n> + lsn = DatumGetTransactionId(DirectFunctionCall1(pg_lsn_in,\n>\n>\n> Here, shouldn't we call DatumGetLSN, instead of DatumGetTransactionId ?\n\nRight, fixed.\n\n>\n>\n> (d) parse_subscription_options\n>\n> + if (strcmp(lsn_str, \"none\") == 0)\n> + {\n> + /* Setting lsn = NONE is treated as resetting LSN */\n> + lsn = InvalidXLogRecPtr;\n> + }\n> +\n>\n> We should remove this pair of curly brackets that is for one sentence.\n\nI moved the comment on top of the if statement and removed the brackets.\n\n>\n>\n> (e) src/backend/replication/logical/worker.c\n>\n> + * to skip applying the changes when starting to apply changes. The subskiplsn is\n> + * cleared after successfully skipping the transaction or applying non-empty\n> + * transaction, where the later avoids the mistakenly specified subskiplsn from\n> + * being left.\n>\n> typo \"the later\" -> \"the latter\"\n>\n> At the same time, I feel the last part of this sentence can be an independent sentence.\n> From:\n> , where the later avoids the mistakenly specified subskiplsn from being left\n> To:\n> . 
The latter prevents the mistakenly specified subskiplsn from being left\n\nFixed.\n\n>\n>\n> * Note that my comments below are applied if we choose we don't merge disable_on_error test with skip lsn tests.\n>\n> (f) src/test/subscription/t/030_skip_xact.pl\n>\n> +use Test::More tests => 4;\n>\n> It's better to utilize the new style for the TAP test.\n> Then, probably we should introduce done_testing()\n> at the end of the test.\n\nFixed.\n\n>\n> (g) src/test/subscription/t/030_skip_xact.pl\n>\n> I think there's no need to create two types of subscriptions.\n> Just one subscription with two_phase = on and streaming = on\n> would be sufficient for the tests(normal commit, commit prepared,\n> stream commit cases). I think this point of view will reduce\n> the number of the table and the publication, which will\n> make the whole test simpler.\n\nGood point, fixed.\n\nOn Mon, Mar 14, 2022 at 9:39 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Friday, March 11, 2022 5:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated version patch. This patch can be applied on top of the\n> > latest disable_on_error patch[1].\n> Hi, few extra comments on v13.\n>\n>\n> (1) src/backend/replication/logical/worker.c\n>\n>\n> With regard to clear_subscription_skip_lsn,\n> There are cases that we conduct origin state update twice.\n>\n> For instance, the case we reset subskiplsn by executing an\n> irrelevant non-empty transaction. The first update is\n> conducted at apply_handle_commit_internal and the second one\n> is at clear_subscription_skip_lsn. In the second change,\n> we update replorigin_session_origin_lsn by smaller value(commit_lsn),\n> compared to the first update(end_lsn). 
Were those intentional and OK ?\n\nGood catch, this part is removed in the latest patch.\n\n>\n>\n> (2) src/backend/replication/logical/worker.c\n>\n> + * Both origin_lsn and origin_timestamp are the remote transaction's end_lsn\n> + * and commit timestamp, respectively.\n> + */\n> +static void\n> +stop_skipping_changes(XLogRecPtr origin_lsn, TimestampTz origin_ts)\n>\n> Typo. Should change 'origin_timestamp' to 'origin_ts',\n> because the name of the argument is the latter.\n>\n> Also, here we handle not only commit but also prepare.\n> You need to fix the comment \"commit timestamp\" as well.\n\nFixed.\n\n>\n> (3) src/backend/replication/logical/worker.c\n>\n> +/*\n> + * Clear subskiplsn of pg_subscription catalog with origin state update.\n> + *\n> + * if with_warning is true, we raise a warning when clearing the subskipxid.\n>\n> It's better to insert this second sentence as the last sentence of\n> the other comments.\n\nwith_warning is removed in the latest patch.\n\nI've attached an updated version patch.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/09b80566-c790-704b-35b4-33f87befc41f%40enterprisedb.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 15 Mar 2022 15:13:17 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated version patch.\n>\n\nReview:\n=======\n1.\n+++ b/doc/src/sgml/logical-replication.sgml\n@@ -366,15 +366,19 @@ CONTEXT: processing remote data for replication\norigin \"pg_16395\" during \"INSER\n transaction, the subscription needs to be disabled temporarily by\n <command>ALTER SUBSCRIPTION ... DISABLE</command> first or\nalternatively, the\n subscription can be used with the\n<literal>disable_on_error</literal> option.\n- Then, the transaction can be skipped by calling the\n+ Then, the transaction can be skipped by using\n+ <command>ALTER SUBSCRITPION ... SKIP</command> with the finish LSN\n+ (i.e., LSN 0/14C0378). After that the replication\n+ can be resumed by <command>ALTER SUBSCRIPTION ... ENABLE</command>.\n+ Alternatively, the transaction can also be skipped by calling the\n\nDo we really need to disable the subscription for the skip feature? I\nthink that is required for origin_advance. Also, probably, we can say\nFinish LSN could be Prepare LSN, Commit LSN, etc.\n\n2.\n+ /*\n+ * Quick return if it's not requested to skip this transaction. This\n+ * function is called every start of applying changes and we assume that\n+ * skipping the transaction is not used in many cases.\n+ */\n+ if (likely(XLogRecPtrIsInvalid(MySubscription->skiplsn) ||\n\nThe second part of this comment (especially \".. every start of\napplying changes ..\") sounds slightly odd to me. How about changing it\nto: \"This function is called for every remote transaction and we\nassume that skipping the transaction is not used in many cases.\"\n\n3.\n+\n+ ereport(LOG,\n+ errmsg(\"start skipping logical replication transaction which\nfinished at %X/%X\",\n...\n+ ereport(LOG,\n+ (errmsg(\"done skipping logical replication transaction which\nfinished at %X/%X\",\n\nNo need of 'which' in above LOG messages. 
I think the message will be\nclear without the use of which in above message.\n\n4.\n+ ereport(LOG,\n+ (errmsg(\"done skipping logical replication transaction which\nfinished at %X/%X\",\n+ LSN_FORMAT_ARGS(skip_xact_finish_lsn))));\n+\n+ /* Stop skipping changes */\n+ skip_xact_finish_lsn = InvalidXLogRecPtr;\n\nLet's reverse the order of these statements to make them consistent\nwith the corresponding maybe_start_* function.\n\n5.\n+\n+ if (myskiplsn != finish_lsn)\n+ ereport(WARNING,\n+ errmsg(\"skip-LSN of logical replication subscription \\\"%s\\\"\ncleared\", MySubscription->name),\n\nShouldn't this be a LOG instead of a WARNING as this will be displayed\nonly in server logs and by background apply worker?\n\n6.\n@@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n TupleTableSlot *remoteslot;\n MemoryContext oldctx;\n\n- if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n+ if (is_skipping_changes() ||\n\nIs there a reason to keep the skip_changes check here and in other DML\noperations instead of at one central place in apply_dispatch?\n\n7.\n+ /*\n+ * Start a new transaction to clear the subskipxid, if not started\n+ * yet. 
The transaction is committed below.\n+ */\n+ if (!IsTransactionState())\n\nI think the second part of the comment: \"The transaction is committed\nbelow.\" is not required.\n\n8.\n+ XLogRecPtr subskiplsn; /* All changes which finished at this LSN are\n+ * skipped */\n+\n #ifdef CATALOG_VARLEN /* variable-length fields start here */\n /* Connection string to the publisher */\n text subconninfo BKI_FORCE_NOT_NULL;\n@@ -109,6 +112,8 @@ typedef struct Subscription\n bool disableonerr; /* Indicates if the subscription should be\n * automatically disabled if a worker error\n * occurs */\n+ XLogRecPtr skiplsn; /* All changes which finished at this LSN are\n+ * skipped */\n\nNo need for 'which' in the above comments.\n\n9.\nCan we merge 029_disable_on_error in 030_skip_xact and name it as\n029_on_error (or 029_on_error_skip_disable or some variant of it)?\nBoth seem to be related features. I am slightly worried at the pace at\nwhich the number of test files are growing in subscription test.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 15 Mar 2022 15:48:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tuesday, March 15, 2022 3:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated version patch.\r\n\r\nA couple of minor comments on v14.\r\n\r\n(1) apply_handle_commit_internal\r\n\r\n\r\n+ if (is_skipping_changes())\r\n+ {\r\n+ stop_skipping_changes();\r\n+\r\n+ /*\r\n+ * Start a new transaction to clear the subskipxid, if not started\r\n+ * yet. The transaction is committed below.\r\n+ */\r\n+ if (!IsTransactionState())\r\n+ StartTransactionCommand();\r\n+ }\r\n+\r\n\r\nI suppose we can move this condition check and stop_skipping_changes() call\r\nto the inside of the block we enter when IsTransactionState() returns true.\r\n\r\nAs the comment of apply_handle_commit_internal() mentions,\r\nit's the helper function for apply_handle_commit() and\r\napply_handle_stream_commit().\r\n\r\nThen, I couldn't think that both callers don't open\r\na transaction before the call of apply_handle_commit_internal().\r\nFor applying spooled messages, we call begin_replication_step as well.\r\n\r\nI can miss something, but timing when we receive COMMIT message\r\nwithout opening a transaction, would be the case of empty transactions\r\nwhere the subscription (and its subscription worker) is not interested.\r\nIf this is true, currently the patch's code includes\r\nsuch cases within the range of is_skipping_changes() check.\r\n\r\n(2) clear_subscription_skip_lsn's comments.\r\n\r\nThe comments for this function shouldn't touch\r\nupdate of origin states, now that we don't update those.\r\n\r\n+/*\r\n+ * Clear subskiplsn of pg_subscription catalog with origin state updated.\r\n+ *\r\n\r\n\r\nThis applies to other comments.\r\n\r\n+ /*\r\n+ * Update the subskiplsn of the tuple to InvalidXLogRecPtr. If user has\r\n+ * already changed subskiplsn before clearing it we don't update the\r\n+ * catalog and don't advance the replication origin state. \r\n...\r\n+ * .... 
We can reduce the possibility by\r\n+ * logging a replication origin WAL record to advance the origin LSN\r\n+ * instead but there is no way to advance the origin timestamp and it\r\n+ * doesn't seem to be worth doing anything about it since it's a very rare\r\n+ * case.\r\n+ */\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Tue, 15 Mar 2022 14:00:52 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> 6.\n> @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> TupleTableSlot *remoteslot;\n> MemoryContext oldctx;\n>\n> - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> + if (is_skipping_changes() ||\n>\n> Is there a reason to keep the skip_changes check here and in other DML\n> operations instead of at one central place in apply_dispatch?\n\nSince we already have the check of applying the change on the spot at\nthe beginning of the handlers I feel it's better to add\nis_skipping_changes() to the check than add a new if statement to\napply_dispatch, but do you prefer to check it in one central place in\napply_dispatch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Mar 2022 09:32:24 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 6:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > 6.\n> > @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> > TupleTableSlot *remoteslot;\n> > MemoryContext oldctx;\n> >\n> > - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> > + if (is_skipping_changes() ||\n> >\n> > Is there a reason to keep the skip_changes check here and in other DML\n> > operations instead of at one central place in apply_dispatch?\n>\n> Since we already have the check of applying the change on the spot at\n> the beginning of the handlers I feel it's better to add\n> is_skipping_changes() to the check than add a new if statement to\n> apply_dispatch, but do you prefer to check it in one central place in\n> apply_dispatch?\n>\n\nI think either way is fine. I just wanted to know the reason, your\ncurrent change looks okay to me.\n\nSome questions/comments\n======================\n1. IIRC, earlier, we thought of allowing to use of this option (SKIP)\nonly for superusers (as this can lead to inconsistent data if not used\ncarefully) but I don't see that check in the latest patch. What is the\nreason for the same?\n\n2.\n+ /*\n+ * Update the subskiplsn of the tuple to InvalidXLogRecPtr.\n\nI think we can change the above part of the comment to \"Clear subskiplsn.\"\n\n3.\n+ * Since we already have\n\nIsn't it better to say here: Since we have already ...?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 07:58:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 7:30 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Tuesday, March 15, 2022 3:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated version patch.\n>\n> A couple of minor comments on v14.\n>\n> (1) apply_handle_commit_internal\n>\n>\n> + if (is_skipping_changes())\n> + {\n> + stop_skipping_changes();\n> +\n> + /*\n> + * Start a new transaction to clear the subskipxid, if not started\n> + * yet. The transaction is committed below.\n> + */\n> + if (!IsTransactionState())\n> + StartTransactionCommand();\n> + }\n> +\n>\n> I suppose we can move this condition check and stop_skipping_changes() call\n> to the inside of the block we enter when IsTransactionState() returns true.\n>\n> As the comment of apply_handle_commit_internal() mentions,\n> it's the helper function for apply_handle_commit() and\n> apply_handle_stream_commit().\n>\n> Then, I couldn't think that both callers don't open\n> a transaction before the call of apply_handle_commit_internal().\n> For applying spooled messages, we call begin_replication_step as well.\n>\n> I can miss something, but timing when we receive COMMIT message\n> without opening a transaction, would be the case of empty transactions\n> where the subscription (and its subscription worker) is not interested.\n>\n\nI think when we skip non-streamed transactions we don't start a\ntransaction. So, if we do what you are suggesting, we will miss to\nclear the skip_lsn after skipping the transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 08:02:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 7:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 6:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > 6.\n> > > @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> > > TupleTableSlot *remoteslot;\n> > > MemoryContext oldctx;\n> > >\n> > > - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> > > + if (is_skipping_changes() ||\n> > >\n> > > Is there a reason to keep the skip_changes check here and in other DML\n> > > operations instead of at one central place in apply_dispatch?\n> >\n> > Since we already have the check of applying the change on the spot at\n> > the beginning of the handlers I feel it's better to add\n> > is_skipping_changes() to the check than add a new if statement to\n> > apply_dispatch, but do you prefer to check it in one central place in\n> > apply_dispatch?\n> >\n>\n> I think either way is fine. I just wanted to know the reason, your\n> current change looks okay to me.\n>\n\nI feel it is better to at least add a comment suggesting that we skip\nonly data modification changes because the other part of message\nhandle_stream_* is there in other message handlers as well. It will\nmake it easier to add a similar check in future message handlers.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 08:04:44 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 7:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 6:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > 6.\n> > > @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> > > TupleTableSlot *remoteslot;\n> > > MemoryContext oldctx;\n> > >\n> > > - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> > > + if (is_skipping_changes() ||\n> > >\n> > > Is there a reason to keep the skip_changes check here and in other DML\n> > > operations instead of at one central place in apply_dispatch?\n> >\n> > Since we already have the check of applying the change on the spot at\n> > the beginning of the handlers I feel it's better to add\n> > is_skipping_changes() to the check than add a new if statement to\n> > apply_dispatch, but do you prefer to check it in one central place in\n> > apply_dispatch?\n> >\n>\n> I think either way is fine. I just wanted to know the reason, your\n> current change looks okay to me.\n>\n> Some questions/comments\n> ======================\n>\n\nSome cosmetic suggestions:\n======================\n1.\n+# Create subscriptions. Both subscription sets disable_on_error to on\n+# so that they get disabled when a conflict occurs.\n+$node_subscriber->safe_psql(\n+ 'postgres',\n+ qq[\n+CREATE SUBSCRIPTION $subname CONNECTION '$publisher_connstr'\nPUBLICATION tap_pub WITH (streaming = on, two_phase = on,\ndisable_on_error = on);\n+]);\n\nI don't understand what you mean by 'Both subscription ...' in the\nabove comments.\n\n2.\n+ # Check the log indicating that successfully skipped the transaction,\n\nHow about slightly rephrasing this to: \"Check the log to ensure that\nthe transaction is skipped....\"?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 16 Mar 2022 09:50:25 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch.\n> >\n>\n> Review:\n> =======\n\nThank you for the comments.\n\n> 1.\n> +++ b/doc/src/sgml/logical-replication.sgml\n> @@ -366,15 +366,19 @@ CONTEXT: processing remote data for replication\n> origin \"pg_16395\" during \"INSER\n> transaction, the subscription needs to be disabled temporarily by\n> <command>ALTER SUBSCRIPTION ... DISABLE</command> first or\n> alternatively, the\n> subscription can be used with the\n> <literal>disable_on_error</literal> option.\n> - Then, the transaction can be skipped by calling the\n> + Then, the transaction can be skipped by using\n> + <command>ALTER SUBSCRITPION ... SKIP</command> with the finish LSN\n> + (i.e., LSN 0/14C0378). After that the replication\n> + can be resumed by <command>ALTER SUBSCRIPTION ... ENABLE</command>.\n> + Alternatively, the transaction can also be skipped by calling the\n>\n> Do we really need to disable the subscription for the skip feature? I\n> think that is required for origin_advance. Also, probably, we can say\n> Finish LSN could be Prepare LSN, Commit LSN, etc.\n\nNot necessary to disable the subscription for skip feature. Fixed.\n\n>\n> 2.\n> + /*\n> + * Quick return if it's not requested to skip this transaction. This\n> + * function is called every start of applying changes and we assume that\n> + * skipping the transaction is not used in many cases.\n> + */\n> + if (likely(XLogRecPtrIsInvalid(MySubscription->skiplsn) ||\n>\n> The second part of this comment (especially \".. every start of\n> applying changes ..\") sounds slightly odd to me. 
How about changing it\n> to: \"This function is called for every remote transaction and we\n> assume that skipping the transaction is not used in many cases.\"\n>\n\nFixed.\n\n> 3.\n> +\n> + ereport(LOG,\n> + errmsg(\"start skipping logical replication transaction which\n> finished at %X/%X\",\n> ...\n> + ereport(LOG,\n> + (errmsg(\"done skipping logical replication transaction which\n> finished at %X/%X\",\n>\n> No need of 'which' in above LOG messages. I think the message will be\n> clear without the use of which in above message.\n\nRemoved.\n\n>\n> 4.\n> + ereport(LOG,\n> + (errmsg(\"done skipping logical replication transaction which\n> finished at %X/%X\",\n> + LSN_FORMAT_ARGS(skip_xact_finish_lsn))));\n> +\n> + /* Stop skipping changes */\n> + skip_xact_finish_lsn = InvalidXLogRecPtr;\n>\n> Let's reverse the order of these statements to make them consistent\n> with the corresponding maybe_start_* function.\n\nBut we cannot simply reverse the order since skip_xact_finish_lsn is\nused in the log message. Do we want to use a variable for it?\n\n>\n> 5.\n> +\n> + if (myskiplsn != finish_lsn)\n> + ereport(WARNING,\n> + errmsg(\"skip-LSN of logical replication subscription \\\"%s\\\"\n> cleared\", MySubscription->name),\n>\n> Shouldn't this be a LOG instead of a WARNING as this will be displayed\n> only in server logs and by background apply worker?\n\nWARNINGs are also used by other auxiliary processes such as archiver,\nautovacuum workers, and launcher. So I think we can use it here.\n\n>\n> 6.\n> @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> TupleTableSlot *remoteslot;\n> MemoryContext oldctx;\n>\n> - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> + if (is_skipping_changes() ||\n>\n> Is there a reason to keep the skip_changes check here and in other DML\n> operations instead of at one central place in apply_dispatch?\n\nI'd leave it as is as I mentioned in another email. 
But I've added\nsome comments as you suggested.\n\n>\n> 7.\n> + /*\n> + * Start a new transaction to clear the subskipxid, if not started\n> + * yet. The transaction is committed below.\n> + */\n> + if (!IsTransactionState())\n>\n> I think the second part of the comment: \"The transaction is committed\n> below.\" is not required.\n\nRemoved.\n\n>\n> 8.\n> + XLogRecPtr subskiplsn; /* All changes which finished at this LSN are\n> + * skipped */\n> +\n> #ifdef CATALOG_VARLEN /* variable-length fields start here */\n> /* Connection string to the publisher */\n> text subconninfo BKI_FORCE_NOT_NULL;\n> @@ -109,6 +112,8 @@ typedef struct Subscription\n> bool disableonerr; /* Indicates if the subscription should be\n> * automatically disabled if a worker error\n> * occurs */\n> + XLogRecPtr skiplsn; /* All changes which finished at this LSN are\n> + * skipped */\n>\n> No need for 'which' in the above comments.\n\nRemoved.\n\n>\n> 9.\n> Can we merge 029_disable_on_error in 030_skip_xact and name it as\n> 029_on_error (or 029_on_error_skip_disable or some variant of it)?\n> Both seem to be related features. I am slightly worried at the pace at\n> which the number of test files are growing in subscription test.\n\nYes, we can merge them.\n\nI'll submit an updated version patch after incorporating all comments I got.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Mar 2022 15:14:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, March 16, 2022 11:33 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Tue, Mar 15, 2022 at 7:30 PM osumi.takamichi@fujitsu.com\r\n> <osumi.takamichi@fujitsu.com> wrote:\r\n> >\r\n> > On Tuesday, March 15, 2022 3:13 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > > I've attached an updated version patch.\r\n> >\r\n> > A couple of minor comments on v14.\r\n> >\r\n> > (1) apply_handle_commit_internal\r\n> >\r\n> >\r\n> > + if (is_skipping_changes())\r\n> > + {\r\n> > + stop_skipping_changes();\r\n> > +\r\n> > + /*\r\n> > + * Start a new transaction to clear the subskipxid, if not\r\n> started\r\n> > + * yet. The transaction is committed below.\r\n> > + */\r\n> > + if (!IsTransactionState())\r\n> > + StartTransactionCommand();\r\n> > + }\r\n> > +\r\n> >\r\n> > I suppose we can move this condition check and stop_skipping_changes()\r\n> > call to the inside of the block we enter when IsTransactionState() returns\r\n> true.\r\n> >\r\n> > As the comment of apply_handle_commit_internal() mentions, it's the\r\n> > helper function for apply_handle_commit() and\r\n> > apply_handle_stream_commit().\r\n> >\r\n> > Then, I couldn't think that both callers don't open a transaction\r\n> > before the call of apply_handle_commit_internal().\r\n> > For applying spooled messages, we call begin_replication_step as well.\r\n> >\r\n> > I can miss something, but timing when we receive COMMIT message\r\n> > without opening a transaction, would be the case of empty transactions\r\n> > where the subscription (and its subscription worker) is not interested.\r\n> >\r\n> \r\n> I think when we skip non-streamed transactions we don't start a transaction.\r\n> So, if we do what you are suggesting, we will miss to clear the skip_lsn after\r\n> skipping the transaction.\r\nOK, this is what I missed.\r\n\r\nOn the other hand, what I was worried about is that\r\nempty transaction can start skipping changes,\r\nif the subskiplsn is equal to the finish 
LSN for\r\nthe empty transaction. The reason is we call\r\nmaybe_start_skipping_changes even for empty ones\r\nand set skip_xact_finish_lsn by the finish LSN in that case.\r\n\r\nI checked I could make this happen with debugger and some logs for LSN.\r\nWhat I did is just having two pairs of pub/sub\r\nand conduct a change for one of them,\r\nafter I set a breakpoint in the logicalrep_write_begin\r\non the walsender that will issue an empty transaction.\r\nThen, I check the finish LSN of it and\r\nconduct an alter subscription skip lsn command with this LSN value.\r\nAs a result, empty transaction calls stop_skipping_changes\r\nin the apply_handle_commit_internal and then\r\nenter the block for IsTransactionState == true,\r\nwhich would not happen before applying the patch.\r\n\r\nAlso, this behavior looks contradicted with some comments in worker.c\r\n\"The subskiplsn is cleared after successfully skipping the transaction\r\nor applying non-empty transaction.\" so, I was just confused and\r\nwrote the above comment.\r\n\r\nI think this would not happen in practice, then\r\nit might be OK without a special measure for this,\r\nbut I wasn't sure.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 16 Mar 2022 06:36:52 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wednesday, March 16, 2022 3:37 PM I wrote:\r\n> On Wednesday, March 16, 2022 11:33 AM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > On Tue, Mar 15, 2022 at 7:30 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Tuesday, March 15, 2022 3:13 PM Masahiko Sawada\r\n> > <sawada.mshk@gmail.com> wrote:\r\n> > > > I've attached an updated version patch.\r\n> > >\r\n> > > A couple of minor comments on v14.\r\n> > >\r\n> > > (1) apply_handle_commit_internal\r\n> > >\r\n> > >\r\n> > > + if (is_skipping_changes())\r\n> > > + {\r\n> > > + stop_skipping_changes();\r\n> > > +\r\n> > > + /*\r\n> > > + * Start a new transaction to clear the subskipxid,\r\n> > > + if not\r\n> > started\r\n> > > + * yet. The transaction is committed below.\r\n> > > + */\r\n> > > + if (!IsTransactionState())\r\n> > > + StartTransactionCommand();\r\n> > > + }\r\n> > > +\r\n> > >\r\n> > > I suppose we can move this condition check and\r\n> > > stop_skipping_changes() call to the inside of the block we enter\r\n> > > when IsTransactionState() returns\r\n> > true.\r\n> > >\r\n> > > As the comment of apply_handle_commit_internal() mentions, it's the\r\n> > > helper function for apply_handle_commit() and\r\n> > > apply_handle_stream_commit().\r\n> > >\r\n> > > Then, I couldn't think that both callers don't open a transaction\r\n> > > before the call of apply_handle_commit_internal().\r\n> > > For applying spooled messages, we call begin_replication_step as well.\r\n> > >\r\n> > > I can miss something, but timing when we receive COMMIT message\r\n> > > without opening a transaction, would be the case of empty\r\n> > > transactions where the subscription (and its subscription worker) is not\r\n> interested.\r\n> > >\r\n> >\r\n> > I think when we skip non-streamed transactions we don't start a transaction.\r\n> > So, if we do what you are suggesting, we will miss to clear the\r\n> > skip_lsn after skipping the transaction.\r\n> OK, 
this is what I missed.\r\n> \r\n> On the other hand, what I was worried about is that empty transaction can start\r\n> skipping changes, if the subskiplsn is equal to the finish LSN for the empty\r\n> transaction. The reason is we call maybe_start_skipping_changes even for\r\n> empty ones and set skip_xact_finish_lsn by the finish LSN in that case.\r\n> \r\n> I checked I could make this happen with debugger and some logs for LSN.\r\n> What I did is just having two pairs of pub/sub and conduct a change for one of\r\n> them, after I set a breakpoint in the logicalrep_write_begin on the walsender\r\n> that will issue an empty transaction.\r\n> Then, I check the finish LSN of it and\r\n> conduct an alter subscription skip lsn command with this LSN value.\r\n> As a result, empty transaction calls stop_skipping_changes in the\r\n> apply_handle_commit_internal and then enter the block for IsTransactionState\r\n> == true, which would not happen before applying the patch.\r\n> \r\n> Also, this behavior looks contradicted with some comments in worker.c \"The\r\n> subskiplsn is cleared after successfully skipping the transaction or applying\r\n> non-empty transaction.\" so, I was just confused and wrote the above comment.\r\nSorry, my understanding was not correct.\r\n\r\nEven when we clear the subskiplsn for an empty transaction,\r\nwe can regard that as successfully skipping the transaction.\r\nSo this behavior, allowing an empty transaction to match the LSN indicated\r\nby ALTER SUBSCRIPTION ... SKIP, is fine.\r\n\r\nI'm sorry for the noise.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Wed, 16 Mar 2022 06:57:34 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 6:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > 6.\n> > > @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> > > TupleTableSlot *remoteslot;\n> > > MemoryContext oldctx;\n> > >\n> > > - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> > > + if (is_skipping_changes() ||\n> > >\n> > > Is there a reason to keep the skip_changes check here and in other DML\n> > > operations instead of at one central place in apply_dispatch?\n> >\n> > Since we already have the check of applying the change on the spot at\n> > the beginning of the handlers I feel it's better to add\n> > is_skipping_changes() to the check than add a new if statement to\n> > apply_dispatch, but do you prefer to check it in one central place in\n> > apply_dispatch?\n> >\n>\n> I think either way is fine. I just wanted to know the reason, your\n> current change looks okay to me.\n>\n> Some questions/comments\n> ======================\n> 1. IIRC, earlier, we thought of allowing to use of this option (SKIP)\n> only for superusers (as this can lead to inconsistent data if not used\n> carefully) but I don't see that check in the latest patch. What is the\n> reason for the same?\n\nI thought the non-superuser subscription owner can resolve the\nconflict by manually manipulating the relations, which has the same\nresult as skipping all data modification changes via the ALTER SUBSCRIPTION\nSKIP feature. 
But after more thought, it would not be exactly the same\nsince the skipped transaction might include changes to a relation\nthat the owner doesn't have permission on.\n\n>\n> 2.\n> + /*\n> + * Update the subskiplsn of the tuple to InvalidXLogRecPtr.\n>\n> I think we can change the above part of the comment to \"Clear subskiplsn.\"\n>\n\nFixed.\n\n> 3.\n> + * Since we already have\n>\n> Isn't it better to say here: Since we have already ...?\n\nFixed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Wed, 16 Mar 2022 16:07:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 1:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 7:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Mar 16, 2022 at 6:03 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 15, 2022 at 7:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Mar 15, 2022 at 11:43 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > 6.\n> > > > @@ -1583,7 +1649,8 @@ apply_handle_insert(StringInfo s)\n> > > > TupleTableSlot *remoteslot;\n> > > > MemoryContext oldctx;\n> > > >\n> > > > - if (handle_streamed_transaction(LOGICAL_REP_MSG_INSERT, s))\n> > > > + if (is_skipping_changes() ||\n> > > >\n> > > > Is there a reason to keep the skip_changes check here and in other DML\n> > > > operations instead of at one central place in apply_dispatch?\n> > >\n> > > Since we already have the check of applying the change on the spot at\n> > > the beginning of the handlers I feel it's better to add\n> > > is_skipping_changes() to the check than add a new if statement to\n> > > apply_dispatch, but do you prefer to check it in one central place in\n> > > apply_dispatch?\n> > >\n> >\n> > I think either way is fine. I just wanted to know the reason, your\n> > current change looks okay to me.\n> >\n> > Some questions/comments\n> > ======================\n> >\n>\n> Some cosmetic suggestions:\n> ======================\n> 1.\n> +# Create subscriptions. Both subscription sets disable_on_error to on\n> +# so that they get disabled when a conflict occurs.\n> +$node_subscriber->safe_psql(\n> + 'postgres',\n> + qq[\n> +CREATE SUBSCRIPTION $subname CONNECTION '$publisher_connstr'\n> PUBLICATION tap_pub WITH (streaming = on, two_phase = on,\n> disable_on_error = on);\n> +]);\n>\n> I don't understand what you mean by 'Both subscription ...' 
in the\n> above comments.\n\nFixed.\n\n>\n> 2.\n> + # Check the log indicating that successfully skipped the transaction,\n>\n> How about slightly rephrasing this to: \"Check the log to ensure that\n> the transaction is skipped....\"?\n\nFixed.\n\nI've attached an updated version patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 16 Mar 2022 17:22:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 4:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated version patch.\r\n> \r\n\r\nThanks for updating the patch. Here are some comments for the v15 patch.\r\n\r\n1. src/backend/replication/logical/worker.c\r\n\r\n+ * to skip applying the changes when starting to apply changes. The subskiplsn is\r\n+ * cleared after successfully skipping the transaction or applying non-empty\r\n+ * transaction. The latter prevents the mistakenly specified subskiplsn from\r\n\r\nShould \"applying non-empty transaction\" be modified to \"finishing a\r\ntransaction\"? To be consistent with the description in the\r\nalter_subscription.sgml.\r\n\r\n2. src/test/subscription/t/029_on_error.pl\r\n\r\n+# Test of logical replication subscription self-disabling feature.\r\n\r\nShould we add something about \"skip logical replication transactions\" in this\r\ncomment?\r\n\r\nRegards,\r\nShi yu\r\n",
"msg_date": "Thu, 17 Mar 2022 02:43:31 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 8:13 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Mar 16, 2022 4:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch.\n> >\n>\n> Thanks for updating the patch. Here are some comments for the v15 patch.\n>\n> 1. src/backend/replication/logical/worker.c\n>\n> + * to skip applying the changes when starting to apply changes. The subskiplsn is\n> + * cleared after successfully skipping the transaction or applying non-empty\n> + * transaction. The latter prevents the mistakenly specified subskiplsn from\n>\n> Should \"applying non-empty transaction\" be modified to \"finishing a\n> transaction\"? To be consistent with the description in the\n> alter_subscription.sgml.\n>\n\nThe current wording in the patch seems okay to me as it is good to\nemphasize on non-empty transactions.\n\n> 2. src/test/subscription/t/029_on_error.pl\n>\n> +# Test of logical replication subscription self-disabling feature.\n>\n> Should we add something about \"skip logical replication transactions\" in this\n> comment?\n>\n\nHow about: \"Tests for disable_on_error and SKIP transaction features.\"?\n\nI am making some other minor edits in the patch and will take care of\nwhatever we decide for these comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 08:59:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached an updated version patch.\n>\n\nThe patch LGTM. I have made minor changes in comments and docs in the\nattached patch. Kindly let me know what you think of the attached?\n\nI am planning to commit this early next week (on Monday) unless there\nare more comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Thu, 17 Mar 2022 11:33:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, March 17, 2022 3:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > I've attached an updated version patch.\r\n> >\r\n> \r\n> The patch LGTM. I have made minor changes in comments and docs in the\r\n> attached patch. Kindly let me know what you think of the attached?\r\nHi, thank you for the patch. Few minor comments.\r\n\r\n\r\n(1) comment of maybe_start_skipping_changes\r\n\r\n\r\n+ /*\r\n+ * Quick return if it's not requested to skip this transaction. This\r\n+ * function is called for every remote transaction and we assume that\r\n+ * skipping the transaction is not used often.\r\n+ */\r\n\r\nI feel this comment should explain more about our intention and\r\nwhat it confirms. In a case when user requests skip,\r\nbut it doesn't match the condition, we don't start\r\nskipping changes, strictly speaking.\r\n\r\nFrom:\r\nQuick return if it's not requested to skip this transaction.\r\n\r\nTo:\r\nQuick return if we can't ensure possible skiplsn is set\r\nand it equals to the finish LSN of this transaction.\r\n\r\n\r\n(2) 029_on_error.pl\r\n\r\n+ my $contents = slurp_file($node_subscriber->logfile, $offset);\r\n+ $contents =~\r\n+ qr/processing remote data for replication origin \\\"pg_\\d+\\\" during \"INSERT\" for replication target relation \"public.tbl\" in transaction \\d+ finishe$\r\n+ or die \"could not get error-LSN\";\r\n\r\nI think we shouldn't use a lot of new words.\r\n\r\nHow about a change below ?\r\n\r\nFrom:\r\ncould not get error-LSN\r\nTo:\r\nfailed to find expected error message that contains finish LSN for SKIP option\r\n\r\n\r\n(3) apply_handle_commit_internal\r\n\r\n\r\nLastly, may I have the reasons to call both\r\nstop_skipping_changes and clear_subscription_skip_lsn\r\nin this function, instead of having them at the end\r\nof apply_handle_commit and apply_handle_stream_commit ?\r\n\r\nIMHO, this structure looks 
to create the\r\nextra condition branches in apply_handle_commit_internal.\r\n\r\nAlso, because of this code, when we call stop_skipping_changes\r\nin the apply_handle_commit_internal, after checking\r\nis_skipping_changes() returns true, we check another\r\nis_skipping_changes() at the top of stop_skipping_changes.\r\n\r\nOTOH, for other cases like apply_handle_prepare, apply_handle_stream_prepare,\r\nwe call those two functions (or either one) depending on the needs,\r\nafter existing commits and during the closing processing.\r\n(In the case of rollback_prepare, it's also called after existing commit)\r\n\r\nI feel if we move those two functions at the end\r\nof the apply_handle_commit and apply_handle_stream_commit,\r\nthen we will have more aligned codes and improve readability.\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 17 Mar 2022 07:09:29 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 12:39 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, March 17, 2022 3:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached an updated version patch.\n> > >\n> >\n> > The patch LGTM. I have made minor changes in comments and docs in the\n> > attached patch. Kindly let me know what you think of the attached?\n> Hi, thank you for the patch. Few minor comments.\n>\n>\n> (1) comment of maybe_start_skipping_changes\n>\n>\n> + /*\n> + * Quick return if it's not requested to skip this transaction. This\n> + * function is called for every remote transaction and we assume that\n> + * skipping the transaction is not used often.\n> + */\n>\n> I feel this comment should explain more about our intention and\n> what it confirms. In a case when user requests skip,\n> but it doesn't match the condition, we don't start\n> skipping changes, strictly speaking.\n>\n> From:\n> Quick return if it's not requested to skip this transaction.\n>\n> To:\n> Quick return if we can't ensure possible skiplsn is set\n> and it equals to the finish LSN of this transaction.\n>\n\nHmm, the current comment seems more appropriate. 
What you are\nsuggesting is almost writing the code in sentence form.\n\n>\n> (2) 029_on_error.pl\n>\n> + my $contents = slurp_file($node_subscriber->logfile, $offset);\n> + $contents =~\n> + qr/processing remote data for replication origin \\\"pg_\\d+\\\" during \"INSERT\" for replication target relation \"public.tbl\" in transaction \\d+ finishe$\n> + or die \"could not get error-LSN\";\n>\n> I think we shouldn't use a lot of new words.\n>\n> How about a change below ?\n>\n> From:\n> could not get error-LSN\n> To:\n> failed to find expected error message that contains finish LSN for SKIP option\n>\n>\n> (3) apply_handle_commit_internal\n>\n...\n>\n> I feel if we move those two functions at the end\n> of the apply_handle_commit and apply_handle_stream_commit,\n> then we will have more aligned codes and improve readability.\n>\n\nI think the intention is to avoid duplicate code as we have a common\nfunction that gets called from both of those. OTOH, if Sawada-San or\nothers also prefer your approach to rearrange the code then I am fine\nwith it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 17 Mar 2022 14:22:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 5:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 17, 2022 at 12:39 PM osumi.takamichi@fujitsu.com\n> <osumi.takamichi@fujitsu.com> wrote:\n> >\n> > On Thursday, March 17, 2022 3:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada\n> > > <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > I've attached an updated version patch.\n> > > >\n> > >\n> > > The patch LGTM. I have made minor changes in comments and docs in the\n> > > attached patch. Kindly let me know what you think of the attached?\n> > Hi, thank you for the patch. Few minor comments.\n> >\n> >\n> > (1) comment of maybe_start_skipping_changes\n> >\n> >\n> > + /*\n> > + * Quick return if it's not requested to skip this transaction. This\n> > + * function is called for every remote transaction and we assume that\n> > + * skipping the transaction is not used often.\n> > + */\n> >\n> > I feel this comment should explain more about our intention and\n> > what it confirms. In a case when user requests skip,\n> > but it doesn't match the condition, we don't start\n> > skipping changes, strictly speaking.\n> >\n> > From:\n> > Quick return if it's not requested to skip this transaction.\n> >\n> > To:\n> > Quick return if we can't ensure possible skiplsn is set\n> > and it equals to the finish LSN of this transaction.\n> >\n>\n> Hmm, the current comment seems more appropriate. 
What you are\n> suggesting is almost writing the code in sentence form.\n>\n> >\n> > (2) 029_on_error.pl\n> >\n> > + my $contents = slurp_file($node_subscriber->logfile, $offset);\n> > + $contents =~\n> > + qr/processing remote data for replication origin \\\"pg_\\d+\\\" during \"INSERT\" for replication target relation \"public.tbl\" in transaction \\d+ finishe$\n> > + or die \"could not get error-LSN\";\n> >\n> > I think we shouldn't use a lot of new words.\n> >\n> > How about a change below ?\n> >\n> > From:\n> > could not get error-LSN\n> > To:\n> > failed to find expected error message that contains finish LSN for SKIP option\n> >\n> >\n> > (3) apply_handle_commit_internal\n> >\n> ...\n> >\n> > I feel if we move those two functions at the end\n> > of the apply_handle_commit and apply_handle_stream_commit,\n> > then we will have more aligned codes and improve readability.\n> >\n\nI think we cannot just move them to the end of apply_handle_commit()\nand apply_handle_stream_commit(). Because if we do that, we end up\nmissing updating replication_session_origin_lsn/timestamp when\nclearing the subskiplsn if we're skipping a non-stream transaction.\n\nBasically, the apply worker differently handles 2pc transactions and\nnon-2pc transactions; we always prepare even empty transactions\nwhereas we don't commit empty non-2pc transactions. So I think we\ndon’t have to handle both in the same way.\n\n> I think the intention is to avoid duplicate code as we have a common\n> function that gets called from both of those.\n\nYes.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 17 Mar 2022 19:55:40 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thursday, March 17, 2022 7:56 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n On Thu, Mar 17, 2022 at 5:52 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Thu, Mar 17, 2022 at 12:39 PM osumi.takamichi@fujitsu.com\r\n> > <osumi.takamichi@fujitsu.com> wrote:\r\n> > >\r\n> > > On Thursday, March 17, 2022 3:04 PM Amit Kapila\r\n> <amit.kapila16@gmail.com> wrote:\r\n> > > > On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada\r\n> > > > <sawada.mshk@gmail.com> wrote:\r\n> > > > >\r\n> > > > > I've attached an updated version patch.\r\n> > > > >\r\n> > > >\r\n> > > > The patch LGTM. I have made minor changes in comments and docs in\r\n> > > > the attached patch. Kindly let me know what you think of the attached?\r\n> > > Hi, thank you for the patch. Few minor comments.\r\n> > >\r\n> > >\r\n> > > (3) apply_handle_commit_internal\r\n> > >\r\n> > ...\r\n> > >\r\n> > > I feel if we move those two functions at the end of the\r\n> > > apply_handle_commit and apply_handle_stream_commit, then we will\r\n> > > have more aligned codes and improve readability.\r\n> > >\r\n> \r\n> I think we cannot just move them to the end of apply_handle_commit() and\r\n> apply_handle_stream_commit(). Because if we do that, we end up missing\r\n> updating replication_session_origin_lsn/timestamp when clearing the\r\n> subskiplsn if we're skipping a non-stream transaction.\r\n> \r\n> Basically, the apply worker differently handles 2pc transactions and non-2pc\r\n> transactions; we always prepare even empty transactions whereas we don't\r\n> commit empty non-2pc transactions. So I think we don’t have to handle both in\r\n> the same way.\r\nOkay. Thank you so much for your explanation.\r\nThen the code looks good to me.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n",
"msg_date": "Thu, 17 Mar 2022 12:16:15 +0000",
"msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 17, 2022 at 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch.\n> >\n>\n> The patch LGTM. I have made minor changes in comments and docs in the\n> attached patch. Kindly let me know what you think of the attached?\n\nThank you for updating the patch. It looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 18 Mar 2022 09:16:20 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Mar 17, 2022, at 3:03 AM, Amit Kapila wrote:\n> On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch.\n> >\n> \n> The patch LGTM. I have made minor changes in comments and docs in the\n> attached patch. Kindly let me know what you think of the attached?\n> \n> I am planning to commit this early next week (on Monday) unless there\n> are more comments/suggestions.\nI reviewed this last version and I have a few comments.\n\n+ * If the user set subskiplsn, we do a sanity check to make\n+ * sure that the specified LSN is a probable value.\n\n... user *sets*...\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"skip WAL location (LSN) must be greater than origin LSN %X/%X\",\n+ LSN_FORMAT_ARGS(remote_lsn))));\n\nShouldn't we add the LSN to be skipped in the \"(LSN)\"?\n\n+ * Start a new transaction to clear the subskipxid, if not started\n+ * yet.\n\nIt seems it means subskiplsn.\n\n+ * subskipxid in order to inform users for cases e.g., where the user mistakenly\n+ * specified the wrong subskiplsn.\n\nIt seems it means subskiplsn.\n\n+sub test_skip_xact\n+{\n\nIt seems this function should be named test_skip_lsn. Unless the intention is\nto cover other skip options in the future.\n\nsrc/test/subscription/t/029_disable_on_error.pl | 94 ----------\nsrc/test/subscription/t/029_on_error.pl | 183 +++++++++++++++++++\n\nIt seems you are removing a test for 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33.\nI should also name 029_on_error.pl to something else such as 030_skip_lsn.pl or\na generic name 030_skip_option.pl.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Sun, 20 Mar 2022 22:39:25 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 7:09 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> src/test/subscription/t/029_disable_on_error.pl | 94 ----------\n> src/test/subscription/t/029_on_error.pl | 183 +++++++++++++++++++\n>\n> It seems you are removing a test for 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33.\n>\n\nWe have covered the same test in the new test file. See \"CREATE\nSUBSCRIPTION sub CONNECTION '$publisher_connstr' PUBLICATION pub WITH\n(disable_on_error = true, ...\". This will test the cases we were\nearlier testing via 'disable_on_error'.\n\n> I should also name 029_on_error.pl to something else such as 030_skip_lsn.pl or\n> a generic name 030_skip_option.pl.\n>\n\nThe reason to keep the name 'on_error' is that it has tests for both\n'disable_on_error' option and 'skip_lsn'. The other option could be\n'on_error_action' or something like that. Now, does this make sense to\nyou?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 21 Mar 2022 07:49:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 7:09 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Thu, Mar 17, 2022, at 3:03 AM, Amit Kapila wrote:\n>\n> On Wed, Mar 16, 2022 at 1:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated version patch.\n> >\n>\n> The patch LGTM. I have made minor changes in comments and docs in the\n> attached patch. Kindly let me know what you think of the attached?\n>\n> I am planning to commit this early next week (on Monday) unless there\n> are more comments/suggestions.\n>\n> I reviewed this last version and I have a few comments.\n>\n> + * If the user set subskiplsn, we do a sanity check to make\n> + * sure that the specified LSN is a probable value.\n>\n> ... user *sets*...\n>\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"skip WAL location (LSN) must be greater than origin LSN %X/%X\",\n> + LSN_FORMAT_ARGS(remote_lsn))));\n>\n> Shouldn't we add the LSN to be skipped in the \"(LSN)\"?\n>\n> + * Start a new transaction to clear the subskipxid, if not started\n> + * yet.\n>\n> It seems it means subskiplsn.\n>\n> + * subskipxid in order to inform users for cases e.g., where the user mistakenly\n> + * specified the wrong subskiplsn.\n>\n> It seems it means subskiplsn.\n>\n> +sub test_skip_xact\n> +{\n>\n> It seems this function should be named test_skip_lsn. Unless the intention is\n> to cover other skip options in the future.\n>\n\nI have fixed all the above comments as per your suggestion in the\nattached. 
Do let me know if something is missed?\n\n> src/test/subscription/t/029_disable_on_error.pl | 94 ----------\n> src/test/subscription/t/029_on_error.pl | 183 +++++++++++++++++++\n>\n> It seems you are removing a test for 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33.\n> I should also name 029_on_error.pl to something else such as 030_skip_lsn.pl or\n> a generic name 030_skip_option.pl.\n>\n\nAs explained in my previous email, I don't think any change is\nrequired for this comment but do let me know if you still think so?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 21 Mar 2022 08:55:51 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Mar 21, 2022, at 12:25 AM, Amit Kapila wrote:\n> I have fixed all the above comments as per your suggestion in the\n> attached. Do let me know if something is missed?\nLooks good to me.\n\n> > src/test/subscription/t/029_disable_on_error.pl | 94 ----------\n> > src/test/subscription/t/029_on_error.pl | 183 +++++++++++++++++++\n> >\n> > It seems you are removing a test for 705e20f8550c0e8e47c0b6b20b5f5ffd6ffd9e33.\n> > I should also name 029_on_error.pl to something else such as 030_skip_lsn.pl or\n> > a generic name 030_skip_option.pl.\n> >\n> \n> As explained in my previous email, I don't think any change is\n> required for this comment but do let me know if you still think so?\nOh, sorry about the noise. I saw mixed tests between the 2 new features and I\nwas confused if it was intentional or not.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 21 Mar 2022 09:21:18 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 5:51 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Mar 21, 2022, at 12:25 AM, Amit Kapila wrote:\n>\n> I have fixed all the above comments as per your suggestion in the\n> attached. Do let me know if something is missed?\n>\n> Looks good to me.\n>\n\nThis patch is committed\n(https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=208c5d65bbd60e33e272964578cb74182ac726a8).\nToday, I have marked the corresponding entry in CF as committed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Mar 2022 10:43:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Mar 29, 2022 at 10:43:00AM +0530, Amit Kapila wrote:\n> On Mon, Mar 21, 2022 at 5:51 PM Euler Taveira <euler@eulerto.com> wrote:\n> > On Mon, Mar 21, 2022, at 12:25 AM, Amit Kapila wrote:\n> > I have fixed all the above comments as per your suggestion in the\n> > attached. Do let me know if something is missed?\n> >\n> > Looks good to me.\n> \n> This patch is committed\n> (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=208c5d65bbd60e33e272964578cb74182ac726a8).\n\nsrc/test/subscription/t/029_on_error.pl has been failing reliably on the five\nAIX buildfarm members:\n\n# poll_query_until timed out executing this query:\n# SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n# expecting this output:\n# t\n# last actual query output:\n# f\n# with stderr:\ntimed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n\nI've posted five sets of logs (2.7 MiB compressed) here:\nhttps://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n\n\nThe members have not actually uploaded these failures, due to an OOM in the\nPerl process driving the buildfarm script. I think the OOM is due to a need\nfor excess RAM to capture 029_on_error_subscriber.log, which is 27MB here. I\nwill move the members to 64-bit Perl. (AIX 32-bit binaries OOM easily:\nhttps://www.postgresql.org/docs/devel/installation-platform-notes.html#INSTALLATION-NOTES-AIX.)\n\n\n",
"msg_date": "Fri, 1 Apr 2022 00:44:23 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 4:44 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Mar 29, 2022 at 10:43:00AM +0530, Amit Kapila wrote:\n> > On Mon, Mar 21, 2022 at 5:51 PM Euler Taveira <euler@eulerto.com> wrote:\n> > > On Mon, Mar 21, 2022, at 12:25 AM, Amit Kapila wrote:\n> > > I have fixed all the above comments as per your suggestion in the\n> > > attached. Do let me know if something is missed?\n> > >\n> > > Looks good to me.\n> >\n> > This patch is committed\n> > (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=208c5d65bbd60e33e272964578cb74182ac726a8).\n>\n> src/test/subscription/t/029_on_error.pl has been failing reliably on the five\n> AIX buildfarm members:\n>\n> # poll_query_until timed out executing this query:\n> # SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n> # with stderr:\n> timed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n>\n> I've posted five sets of logs (2.7 MiB compressed) here:\n> https://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n\nThank you for the report. I'm investigating this issue.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Fri, 1 Apr 2022 17:10:02 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Apr 1, 2022 at 5:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 1, 2022 at 4:44 PM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Tue, Mar 29, 2022 at 10:43:00AM +0530, Amit Kapila wrote:\n> > > On Mon, Mar 21, 2022 at 5:51 PM Euler Taveira <euler@eulerto.com> wrote:\n> > > > On Mon, Mar 21, 2022, at 12:25 AM, Amit Kapila wrote:\n> > > > I have fixed all the above comments as per your suggestion in the\n> > > > attached. Do let me know if something is missed?\n> > > >\n> > > > Looks good to me.\n> > >\n> > > This patch is committed\n> > > (https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=208c5d65bbd60e33e272964578cb74182ac726a8).\n> >\n> > src/test/subscription/t/029_on_error.pl has been failing reliably on the five\n> > AIX buildfarm members:\n> >\n> > # poll_query_until timed out executing this query:\n> > # SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n> > # expecting this output:\n> > # t\n> > # last actual query output:\n> > # f\n> > # with stderr:\n> > timed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n> >\n> > I've posted five sets of logs (2.7 MiB compressed) here:\n> > https://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n>\n> Thank you for the report. 
I'm investigating this issue.\n\nLooking at the subscriber logs, it successfully fetched the correct\nerror-LSN from the server logs and set it to ALTER SUBSCRIPTION …\nSKIP:\n\n2022-03-30 09:48:36.617 UTC [17039636:4] CONTEXT: processing remote\ndata for replication origin \"pg_16391\" during \"INSERT\" for replication\ntarget relation \"public.tbl\" in transaction 725 finished at 0/1D30788\n2022-03-30 09:48:36.617 UTC [17039636:5] LOG: logical replication\nsubscription \"sub\" has been disabled due to an error\n:\n2022-03-30 09:48:36.670 UTC [17039640:1] [unknown] LOG: connection\nreceived: host=[local]\n2022-03-30 09:48:36.672 UTC [17039640:2] [unknown] LOG: connection\nauthorized: user=nm database=postgres application_name=029_on_error.pl\n2022-03-30 09:48:36.675 UTC [17039640:3] 029_on_error.pl LOG:\nstatement: ALTER SUBSCRIPTION sub SKIP (lsn = '0/1D30788')\n2022-03-30 09:48:36.676 UTC [17039640:4] 029_on_error.pl LOG:\ndisconnection: session time: 0:00:00.006 user=nm database=postgres\nhost=[local]\n:\n2022-03-30 09:48:36.762 UTC [28246036:2] ERROR: duplicate key value\nviolates unique constraint \"tbl_pkey\"\n2022-03-30 09:48:36.762 UTC [28246036:3] DETAIL: Key (i)=(1) already exists.\n2022-03-30 09:48:36.762 UTC [28246036:4] CONTEXT: processing remote\ndata for replication origin \"pg_16391\" during \"INSERT\" for replication\ntarget relation \"public.tbl\" in transaction 725 finished at 0/1D30788\n\nHowever, the worker could not start skipping changes of the error\ntransaction for some reason. Given that \"SELECT subskiplsn = '0/0'\nFROM pg_subscription WHERE subname = 'sub’” didn't return true, some\nvalue was set to subskiplsn even after the unique key error.\n\nSo I'm guessing that the apply worker could not get the updated value\nof the subskiplsn or its MySubscription->skiplsn could not match with\nthe transaction's finish LSN. 
Also, given that the test is failing on\nall AIX buildfarm members, there might be something specific to AIX.\n\nNoah, to investigate this issue further, is it possible for you to\napply the attached patch and run the 029_on_error.pl test? The patch\nadds some logs to get additional information.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Fri, 1 Apr 2022 21:25:52 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Apr 01, 2022 at 09:25:52PM +0900, Masahiko Sawada wrote:\n> > On Fri, Apr 1, 2022 at 4:44 PM Noah Misch <noah@leadboat.com> wrote:\n> > > src/test/subscription/t/029_on_error.pl has been failing reliably on the five\n> > > AIX buildfarm members:\n> > >\n> > > # poll_query_until timed out executing this query:\n> > > # SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n> > > # expecting this output:\n> > > # t\n> > > # last actual query output:\n> > > # f\n> > > # with stderr:\n> > > timed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n> > >\n> > > I've posted five sets of logs (2.7 MiB compressed) here:\n> > > https://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n\n> Given that \"SELECT subskiplsn = '0/0'\n> FROM pg_subscription WHERE subname = 'sub’” didn't return true, some\n> value was set to subskiplsn even after the unique key error.\n> \n> So I'm guessing that the apply worker could not get the updated value\n> of the subskiplsn or its MySubscription->skiplsn could not match with\n> the transaction's finish LSN. Also, given that the test is failing on\n> all AIX buildfarm members, there might be something specific to AIX.\n> \n> Noah, to investigate this issue further, is it possible for you to\n> apply the attached patch and run the 029_on_error.pl test? The patch\n> adds some logs to get additional information.\n\nLogs attached. I ran this outside the buildfarm script environment. Most\nnotably, I didn't override PG_TEST_TIMEOUT_DEFAULT like my buildfarm\nconfiguration does, so the total log size is smaller.",
"msg_date": "Fri, 1 Apr 2022 17:11:53 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 5:41 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Fri, Apr 01, 2022 at 09:25:52PM +0900, Masahiko Sawada wrote:\n> > > On Fri, Apr 1, 2022 at 4:44 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > src/test/subscription/t/029_on_error.pl has been failing reliably on the five\n> > > > AIX buildfarm members:\n> > > >\n> > > > # poll_query_until timed out executing this query:\n> > > > # SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n> > > > # expecting this output:\n> > > > # t\n> > > > # last actual query output:\n> > > > # f\n> > > > # with stderr:\n> > > > timed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n> > > >\n> > > > I've posted five sets of logs (2.7 MiB compressed) here:\n> > > > https://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n>\n> > Given that \"SELECT subskiplsn = '0/0'\n> > FROM pg_subscription WHERE subname = 'sub’” didn't return true, some\n> > value was set to subskiplsn even after the unique key error.\n> >\n> > So I'm guessing that the apply worker could not get the updated value\n> > of the subskiplsn or its MySubscription->skiplsn could not match with\n> > the transaction's finish LSN. Also, given that the test is failing on\n> > all AIX buildfarm members, there might be something specific to AIX.\n> >\n> > Noah, to investigate this issue further, is it possible for you to\n> > apply the attached patch and run the 029_on_error.pl test? The patch\n> > adds some logs to get additional information.\n>\n> Logs attached.\n>\n\nThank you.\n\nBy seeing the below Logs:\n----\n....\n2022-04-01 18:19:34.710 CUT [58327402] LOG: not started skipping\nchanges: my_skiplsn 14EB7D8/B0706F72 finish_lsn 0/14EB7D8\n...\n----\n\nIt seems that the value of skiplsn read in GetSubscription is wrong\nwhich makes the apply worker think it doesn't need to skip the\ntransaction. 
Now, in Alter/Create Subscription, we are using\nLSNGetDatum() to store skiplsn value in pg_subscription but while\nreading it in GetSubscription(), we are not converting back the datum\nto LSN by using DatumGetLSN(). Is it possible that on this machine it\nmight be leading to not getting the right value for skiplsn? I think\nit is worth trying to see if this fixes the problem.\n\nAny other thoughts?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 2 Apr 2022 06:49:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 02, 2022 at 06:49:20AM +0530, Amit Kapila wrote:\n> On Sat, Apr 2, 2022 at 5:41 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Fri, Apr 01, 2022 at 09:25:52PM +0900, Masahiko Sawada wrote:\n> > > > On Fri, Apr 1, 2022 at 4:44 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > src/test/subscription/t/029_on_error.pl has been failing reliably on the five\n> > > > > AIX buildfarm members:\n> > > > >\n> > > > > # poll_query_until timed out executing this query:\n> > > > > # SELECT subskiplsn = '0/0' FROM pg_subscription WHERE subname = 'sub'\n> > > > > # expecting this output:\n> > > > > # t\n> > > > > # last actual query output:\n> > > > > # f\n> > > > > # with stderr:\n> > > > > timed out waiting for match: (?^:LOG: done skipping logical replication transaction finished at 0/1D30788) at t/029_on_error.pl line 50.\n> > > > >\n> > > > > I've posted five sets of logs (2.7 MiB compressed) here:\n> > > > > https://drive.google.com/file/d/16NkyNIV07o0o8WM7GwcaAYFQDPTkULkR/view?usp=sharing\n> >\n> > > Given that \"SELECT subskiplsn = '0/0'\n> > > FROM pg_subscription WHERE subname = 'sub’” didn't return true, some\n> > > value was set to subskiplsn even after the unique key error.\n> > >\n> > > So I'm guessing that the apply worker could not get the updated value\n> > > of the subskiplsn or its MySubscription->skiplsn could not match with\n> > > the transaction's finish LSN. Also, given that the test is failing on\n> > > all AIX buildfarm members, there might be something specific to AIX.\n> > >\n> > > Noah, to investigate this issue further, is it possible for you to\n> > > apply the attached patch and run the 029_on_error.pl test? 
The patch\n> > > adds some logs to get additional information.\n> >\n> > Logs attached.\n> \n> Thank you.\n> \n> By seeing the below Logs:\n> ----\n> ....\n> 2022-04-01 18:19:34.710 CUT [58327402] LOG: not started skipping\n> changes: my_skiplsn 14EB7D8/B0706F72 finish_lsn 0/14EB7D8\n> ...\n> ----\n> \n> It seems that the value of skiplsn read in GetSubscription is wrong\n> which makes the apply worker think it doesn't need to skip the\n> transaction. Now, in Alter/Create Subscription, we are using\n> LSNGetDatum() to store skiplsn value in pg_subscription but while\n> reading it in GetSubscription(), we are not converting back the datum\n> to LSN by using DatumGetLSN(). Is it possible that on this machine it\n> might be leading to not getting the right value for skiplsn? I think\n> it is worth trying to see if this fixes the problem.\n\nAfter applying datum_to_lsn_skiplsn_1.patch, I get another failure. Logs\nattached.",
"msg_date": "Fri, 1 Apr 2022 18:59:43 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 7:29 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sat, Apr 02, 2022 at 06:49:20AM +0530, Amit Kapila wrote:\n>\n> After applying datum_to_lsn_skiplsn_1.patch, I get another failure. Logs\n> attached.\n>\n\nThe failure is for the same reason. I noticed that even when skip lsn\nvalue should be 0/0, it is some invalid value, see: \"LOG: not started\nskipping changes: my_skiplsn 0/B0706F72 finish_lsn 0/14EB7D8\". Here,\nmy_skiplsn should be 0/0 instead of 0/B0706F72. Now, I am not sure why\nthe LSN's 4 bytes are correct and the other 4 bytes have some random\nvalue. A similar problem is there when we have set the valid value of\nskip lsn, see: \"LOG: not started skipping changes: my_skiplsn\n14EB7D8/B0706F72 finish_lsn 0/14EB7D8\". Here the value of my_skiplsn\nshould be 0/14EB7D8 instead of 14EB7D8/B0706F72.\n\nI am sure that if you create a subscription with the below test and\ncheck the skip lsn value, it will be correct, otherwise, you would\nhave seen failure in subscription.sql as well. If possible, can you\nplease check the following example to rule out the possibility:\n\nFor example,\nPublisher:\nCreate table t1(c1 int);\nCreate Publication pub1 for table t1;\n\nSubscriber:\nCreate table t1(c1 int);\nCreate Subscription sub1 connection 'dbname = postgres' Publication pub1;\nSelect subname, subskiplsn from pg_subsription; -- subskiplsn should be 0/0\n\nAlter Subscription sub1 SKIP (LSN = '0/14EB7D8');\nSelect subname, subskiplsn from pg_subsription; -- subskiplsn should\nbe 0/14EB7D8\n\nAssuming the above is correct and we are still getting the wrong value\nin apply worker, the only remaining suspect is the following code in\nGetSubscription:\nsub->skiplsn = DatumGetLSN(subform->subskiplsn);\n\nI don't know what is wrong with this because subskiplsn is stored as\npg_lsn which is a fixed value and we should be able to access it by\nstruct. Do you see any problem with this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Apr 2022 09:38:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 1:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 2, 2022 at 7:29 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Sat, Apr 02, 2022 at 06:49:20AM +0530, Amit Kapila wrote:\n> >\n> > After applying datum_to_lsn_skiplsn_1.patch, I get another failure. Logs\n> > attached.\n> >\n>\n> The failure is for the same reason. I noticed that even when skip lsn\n> value should be 0/0, it is some invalid value, see: \"LOG: not started\n> skipping changes: my_skiplsn 0/B0706F72 finish_lsn 0/14EB7D8\". Here,\n> my_skiplsn should be 0/0 instead of 0/B0706F72. Now, I am not sure why\n> the LSN's 4 bytes are correct and the other 4 bytes have some random\n> value.\n\nIt seems that 0/B0706F72 is not a random value. Two subscriber logs\nshow the same value. Since 0x70 = 'p', 0x6F = 'o', and 0x72 = 'r', it\nmight show the next field in the pg_subscription catalog, i.e.,\nsubconninfo. The subscription is created by \"CREATE SUBSCRIPTION sub\nCONNECTION 'port=57851 host=/tmp/6u2vRwQYik dbname=postgres'\nPUBLICATION pub WITH (disable_on_error = true, streaming = on,\ntwo_phase = on)\".\n\nGiven subscription.sql passes, something is wrong when we read the\nsubskiplsn value by like \"sub->skiplsn = subform->subskiplsn;\".\n\nIs it possible to run the test again with the attached patch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Sat, 2 Apr 2022 16:33:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 02, 2022 at 04:33:44PM +0900, Masahiko Sawada wrote:\n> It seems that 0/B0706F72 is not a random value. Two subscriber logs\n> show the same value. Since 0x70 = 'p', 0x6F = 'o', and 0x72 = 'r', it\n> might show the next field in the pg_subscription catalog, i.e.,\n> subconninfo. The subscription is created by \"CREATE SUBSCRIPTION sub\n> CONNECTION 'port=57851 host=/tmp/6u2vRwQYik dbname=postgres'\n> PUBLICATION pub WITH (disable_on_error = true, streaming = on,\n> two_phase = on)\".\n> \n> Given subscription.sql passes, something is wrong when we read the\n> subskiplsn value by like \"sub->skiplsn = subform->subskiplsn;\".\n\nThat's a good clue. We've never made pg_type.typalign able to represent\nalignment as it works on AIX. A uint64 like pg_lsn has 8-byte alignment, so\nthe C struct follows from that. At the typalign level, we have only these:\n\n#define TYPALIGN_CHAR\t\t\t'c' /* char alignment (i.e. unaligned) */\n#define TYPALIGN_SHORT\t\t\t's' /* short alignment (typically 2 bytes) */\n#define TYPALIGN_INT\t\t\t'i' /* int alignment (typically 4 bytes) */\n#define TYPALIGN_DOUBLE\t\t'd' /* double alignment (often 8 bytes) */\n\nOn AIX, they are:\n\n#define ALIGNOF_DOUBLE 4 \n#define ALIGNOF_INT 4\n#define ALIGNOF_LONG 8 \n/* #undef ALIGNOF_LONG_LONG_INT */\n/* #undef ALIGNOF_PG_INT128_TYPE */\n#define ALIGNOF_SHORT 2 \n\nuint64 and pg_lsn use TYPALIGN_DOUBLE. For AIX, they really need a typalign\ncorresponding to ALIGNOF_LONG. Hence, the C struct layout doesn't match the\ntuple layout. Columns potentially affected:\n\n[local] test=*# select attrelid::regclass, attname from pg_attribute a join pg_class c on c.oid = attrelid where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1;\n attrelid │ attname \n─────────────────┼──────────────\n pg_sequence │ seqstart\n pg_sequence │ seqincrement\n pg_sequence │ seqmax\n pg_sequence │ seqmin\n pg_sequence │ seqcache\n pg_subscription │ subskiplsn\n(6 rows)\n\nThe pg_sequence fields evade trouble, because there's exactly eight bytes (two\noids) before them.\n\n\nSome options:\n- Move subskiplsn after subdbid, so it's always aligned anyway. I've\n confirmed that this lets the test pass, in 44s.\n- Move subskiplsn to the CATALOG_VARLEN section, despite its fixed length.\n- Introduce a new typalign value suitable for uint64. This is more intrusive,\n but it's more future-proof. Looking beyond catalog columns, it might\n improve performance by avoiding unaligned reads.\n\n> Is it possible to run the test again with the attached patch?\n\nLogs attached. The test \"passed\", though it printed \"poll_query_until timed\nout\" three times and took awhile.",
"msg_date": "Sat, 2 Apr 2022 01:13:46 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sat, Apr 02, 2022 at 04:33:44PM +0900, Masahiko Sawada wrote:\n> > It seems that 0/B0706F72 is not a random value. Two subscriber logs\n> > show the same value. Since 0x70 = 'p', 0x6F = 'o', and 0x72 = 'r', it\n> > might show the next field in the pg_subscription catalog, i.e.,\n> > subconninfo. The subscription is created by \"CREATE SUBSCRIPTION sub\n> > CONNECTION 'port=57851 host=/tmp/6u2vRwQYik dbname=postgres'\n> > PUBLICATION pub WITH (disable_on_error = true, streaming = on,\n> > two_phase = on)\".\n> >\n> > Given subscription.sql passes, something is wrong when we read the\n> > subskiplsn value by like \"sub->skiplsn = subform->subskiplsn;\".\n>\n> That's a good clue. We've never made pg_type.typalign able to represent\n> alignment as it works on AIX. A uint64 like pg_lsn has 8-byte alignment, so\n> the C struct follows from that. At the typalign level, we have only these:\n>\n> #define TYPALIGN_CHAR 'c' /* char alignment (i.e. unaligned) */\n> #define TYPALIGN_SHORT 's' /* short alignment (typically 2 bytes) */\n> #define TYPALIGN_INT 'i' /* int alignment (typically 4 bytes) */\n> #define TYPALIGN_DOUBLE 'd' /* double alignment (often 8 bytes) */\n>\n> On AIX, they are:\n>\n> #define ALIGNOF_DOUBLE 4\n> #define ALIGNOF_INT 4\n> #define ALIGNOF_LONG 8\n> /* #undef ALIGNOF_LONG_LONG_INT */\n> /* #undef ALIGNOF_PG_INT128_TYPE */\n> #define ALIGNOF_SHORT 2\n>\n> uint64 and pg_lsn use TYPALIGN_DOUBLE. For AIX, they really need a typalign\n> corresponding to ALIGNOF_LONG. Hence, the C struct layout doesn't match the\n> tuple layout. Columns potentially affected:\n>\n> [local] test=*# select attrelid::regclass, attname from pg_attribute a join pg_class c on c.oid = attrelid where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1;\n> attrelid │ attname\n> ─────────────────┼──────────────\n> pg_sequence │ seqstart\n> pg_sequence │ seqincrement\n> pg_sequence │ seqmax\n> pg_sequence │ seqmin\n> pg_sequence │ seqcache\n> pg_subscription │ subskiplsn\n> (6 rows)\n>\n> The pg_sequence fields evade trouble, because there's exactly eight bytes (two\n> oids) before them.\n>\n>\n> Some options:\n> - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> confirmed that this lets the test pass, in 44s.\n> - Move subskiplsn to the CATALOG_VARLEN section, despite its fixed length.\n>\n\n+1 to any one of the above. I mildly prefer the first option as that\nwill allow us to access the value directly instead of going via\nSysCacheGetAttr but I am fine either way.\n\n> - Introduce a new typalign value suitable for uint64. This is more intrusive,\n> but it's more future-proof. Looking beyond catalog columns, it might\n> improve performance by avoiding unaligned reads.\n>\n> > Is it possible to run the test again with the attached patch?\n>\n> Logs attached. The test \"passed\", though it printed \"poll_query_until timed\n> out\" three times and took awhile.\n\nThanks for helping in figuring out the problem.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 2 Apr 2022 15:34:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Sat, Apr 02, 2022 at 04:33:44PM +0900, Masahiko Sawada wrote:\n> > > It seems that 0/B0706F72 is not a random value. Two subscriber logs\n> > > show the same value. Since 0x70 = 'p', 0x6F = 'o', and 0x72 = 'r', it\n> > > might show the next field in the pg_subscription catalog, i.e.,\n> > > subconninfo. The subscription is created by \"CREATE SUBSCRIPTION sub\n> > > CONNECTION 'port=57851 host=/tmp/6u2vRwQYik dbname=postgres'\n> > > PUBLICATION pub WITH (disable_on_error = true, streaming = on,\n> > > two_phase = on)\".\n> > >\n> > > Given subscription.sql passes, something is wrong when we read the\n> > > subskiplsn value by like \"sub->skiplsn = subform->subskiplsn;\".\n> >\n> > That's a good clue. We've never made pg_type.typalign able to represent\n> > alignment as it works on AIX. A uint64 like pg_lsn has 8-byte alignment, so\n> > the C struct follows from that. At the typalign level, we have only these:\n> >\n> > #define TYPALIGN_CHAR 'c' /* char alignment (i.e. unaligned) */\n> > #define TYPALIGN_SHORT 's' /* short alignment (typically 2 bytes) */\n> > #define TYPALIGN_INT 'i' /* int alignment (typically 4 bytes) */\n> > #define TYPALIGN_DOUBLE 'd' /* double alignment (often 8 bytes) */\n> >\n> > On AIX, they are:\n> >\n> > #define ALIGNOF_DOUBLE 4\n> > #define ALIGNOF_INT 4\n> > #define ALIGNOF_LONG 8\n> > /* #undef ALIGNOF_LONG_LONG_INT */\n> > /* #undef ALIGNOF_PG_INT128_TYPE */\n> > #define ALIGNOF_SHORT 2\n> >\n> > uint64 and pg_lsn use TYPALIGN_DOUBLE. For AIX, they really need a typalign\n> > corresponding to ALIGNOF_LONG. Hence, the C struct layout doesn't match the\n> > tuple layout. Columns potentially affected:\n> >\n> > [local] test=*# select attrelid::regclass, attname from pg_attribute a join pg_class c on c.oid = attrelid where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1;\n> > attrelid │ attname\n> > ─────────────────┼──────────────\n> > pg_sequence │ seqstart\n> > pg_sequence │ seqincrement\n> > pg_sequence │ seqmax\n> > pg_sequence │ seqmin\n> > pg_sequence │ seqcache\n> > pg_subscription │ subskiplsn\n> > (6 rows)\n> >\n> > The pg_sequence fields evade trouble, because there's exactly eight bytes (two\n> > oids) before them.\n\nThanks for helping with the investigation!\n\n> >\n> >\n> > Some options:\n> > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > confirmed that this lets the test pass, in 44s.\n> > - Move subskiplsn to the CATALOG_VARLEN section, despite its fixed length.\n> >\n>\n> +1 to any one of the above. I mildly prefer the first option as that\n> will allow us to access the value directly instead of going via\n> SysCacheGetAttr but I am fine either way.\n\n+1. I also prefer the first option.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Sat, 2 Apr 2022 20:44:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > Some options:\n> > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > confirmed that this lets the test pass, in 44s.\n> > > - Move subskiplsn to the CATALOG_VARLEN section, despite its fixed length.\n> >\n> > +1 to any one of the above. I mildly prefer the first option as that\n> > will allow us to access the value directly instead of going via\n> > SysCacheGetAttr but I am fine either way.\n> \n> +1. I also prefer the first option.\n\nSounds good to me.\n\n\n",
"msg_date": "Sat, 2 Apr 2022 17:45:55 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > Some options:\n> > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > confirmed that this lets the test pass, in 44s.\n> > > > - Move subskiplsn to the CATALOG_VARLEN section, despite its fixed length.\n> > >\n> > > +1 to any one of the above. I mildly prefer the first option as that\n> > > will allow us to access the value directly instead of going via\n> > > SysCacheGetAttr but I am fine either way.\n> >\n> > +1. I also prefer the first option.\n>\n> Sounds good to me.\n\nI've attached the patch for the first option.\n\n> - Introduce a new typalign value suitable for uint64. This is more intrusive,\n> but it's more future-proof. Looking beyond catalog columns, it might\n> improve performance by avoiding unaligned reads.\n\nThe third option would be a good item for PG16 or later.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Mon, 4 Apr 2022 10:28:30 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 04, 2022 at 10:28:30AM +0900, Masahiko Sawada wrote:\n> On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > Some options:\n> > > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > > confirmed that this lets the test pass, in 44s.\n\n> --- a/src/include/catalog/pg_subscription.h\n> +++ b/src/include/catalog/pg_subscription.h\n> @@ -54,6 +54,17 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> \n> \tOid\t\t\tsubdbid BKI_LOOKUP(pg_database);\t/* Database the\n> \t\t\t\t\t\t\t\t\t\t\t\t\t * subscription is in. */\n> +\n> +\t/*\n> +\t * All changes finished at this LSN are skipped.\n> +\t *\n> +\t * Note that XLogRecPtr, pg_lsn in the catalog, is 8-byte alignment\n> +\t * (TYPALIGN_DOUBLE) and it does not match the alignment on some platforms\n> +\t * such as AIX. Therefore subskiplsn needs to be placed here so it is\n> +\t * always aligned.\n\nI'm reading this comment as saying that TYPALIGN_DOUBLE is always 8 bytes, but\nthe problem arises precisely because TYPALIGN_DOUBLE==4 on AIX.\n\nOn most hosts, the C alignment of an XLogRecPtr is 8 bytes, and\nTYPALIGN_DOUBLE==8. On AIX, C alignment is still 8 bytes, but\nTYPALIGN_DOUBLE==4. The tuples on disk and in shared buffers use\nTYPALIGN_DOUBLE to decide how much padding to insert, and that amount of\npadding needs to match the C alignment padding. Placing the field here\nreduces the padding to zero, making that invariant hold trivially.\n\n> +\t */\n> +\tXLogRecPtr\tsubskiplsn;\n> +\n> \tNameData\tsubname;\t\t/* Name of the subscription */\n> \n> \tOid\t\t\tsubowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> @@ -71,9 +82,6 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> \tbool\t\tsubdisableonerr;\t/* True if a worker error should cause the\n> \t\t\t\t\t\t\t\t\t * subscription to be disabled */\n> \n> -\tXLogRecPtr\tsubskiplsn;\t\t/* All changes finished at this LSN are\n> -\t\t\t\t\t\t\t\t * skipped */\n\nSome code sites list pg_subscription fields in field order. Please update\nthem so they continue to list fields in field order. CreateSubscription() is\none example.\n\n\n",
"msg_date": "Sun, 3 Apr 2022 19:31:28 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 8:01 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Apr 04, 2022 at 10:28:30AM +0900, Masahiko Sawada wrote:\n> > On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > > > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > > Some options:\n> > > > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > > > confirmed that this lets the test pass, in 44s.\n>\n> > --- a/src/include/catalog/pg_subscription.h\n> > +++ b/src/include/catalog/pg_subscription.h\n> > @@ -54,6 +54,17 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> >\n> > Oid subdbid BKI_LOOKUP(pg_database); /* Database the\n> > * subscription is in. */\n> > +\n> > + /*\n> > + * All changes finished at this LSN are skipped.\n> > + *\n> > + * Note that XLogRecPtr, pg_lsn in the catalog, is 8-byte alignment\n> > + * (TYPALIGN_DOUBLE) and it does not match the alignment on some platforms\n> > + * such as AIX. Therefore subskiplsn needs to be placed here so it is\n> > + * always aligned.\n>\n> I'm reading this comment as saying that TYPALIGN_DOUBLE is always 8 bytes, but\n> the problem arises precisely because TYPALIGN_DOUBLE==4 on AIX.\n>\n\nHow about a comment like: \"It has to be kept at 8-byte alignment\nboundary so as to be accessed directly via C struct as it uses\nTYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\nlike AIX.\"? Can you please suggest a better comment if you don't like\nthis one?\n\n> > + */\n> > + XLogRecPtr subskiplsn;\n> > +\n> > NameData subname; /* Name of the subscription */\n> >\n> > Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> > @@ -71,9 +82,6 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> > bool subdisableonerr; /* True if a worker error should cause the\n> > * subscription to be disabled */\n> >\n> > - XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n> > - * skipped */\n>\n> Some code sites list pg_subscription fields in field order. Please update\n> them so they continue to list fields in field order. CreateSubscription() is\n> one example.\n>\n\nAnother minor point is that I think it is better to use DatumGetLSN to\nread this in GetSubscription as we use LSNGetDatum while storing it. I\nam not sure if there is any direct problem due to this but that looks\nconsistent to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 08:20:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 11:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 4, 2022 at 8:01 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Mon, Apr 04, 2022 at 10:28:30AM +0900, Masahiko Sawada wrote:\n> > > On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > > > > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > > > Some options:\n> > > > > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > > > > confirmed that this lets the test pass, in 44s.\n> >\n> > > --- a/src/include/catalog/pg_subscription.h\n> > > +++ b/src/include/catalog/pg_subscription.h\n> > > @@ -54,6 +54,17 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> > >\n> > > Oid subdbid BKI_LOOKUP(pg_database); /* Database the\n> > > * subscription is in. */\n> > > +\n> > > + /*\n> > > + * All changes finished at this LSN are skipped.\n> > > + *\n> > > + * Note that XLogRecPtr, pg_lsn in the catalog, is 8-byte alignment\n> > > + * (TYPALIGN_DOUBLE) and it does not match the alignment on some platforms\n> > > + * such as AIX. Therefore subskiplsn needs to be placed here so it is\n> > > + * always aligned.\n> >\n> > I'm reading this comment as saying that TYPALIGN_DOUBLE is always 8 bytes, but\n> > the problem arises precisely because TYPALIGN_DOUBLE==4 on AIX.\n> >\n>\n> How about a comment like: \"It has to be kept at 8-byte alignment\n> boundary so as to be accessed directly via C struct as it uses\n> TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> like AIX.\"? Can you please suggest a better comment if you don't like\n> this one?\n>\n> > > + */\n> > > + XLogRecPtr subskiplsn;\n> > > +\n> > > NameData subname; /* Name of the subscription */\n> > >\n> > > Oid subowner BKI_LOOKUP(pg_authid); /* Owner of the subscription */\n> > > @@ -71,9 +82,6 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> > > bool subdisableonerr; /* True if a worker error should cause the\n> > > * subscription to be disabled */\n> > >\n> > > - XLogRecPtr subskiplsn; /* All changes finished at this LSN are\n> > > - * skipped */\n> >\n> > Some code sites list pg_subscription fields in field order. Please update\n> > them so they continue to list fields in field order. CreateSubscription() is\n> > one example.\n> >\n>\n> Another minor point is that I think it is better to use DatumGetLSN to\n> read this in GetSubscription as we use LSNGetDatum while storing it. I\n> am not sure if there is any direct problem due to this but that looks\n> consistent to me.\n\nBut it seems not consistent with other usages since we don't normally\nuse DatumGetXXX to get values directly from C struct.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Apr 2022 12:10:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 8:41 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 4, 2022 at 11:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Another minor point is that I think it is better to use DatumGetLSN to\n> > read this in GetSubscription as we use LSNGetDatum while storing it. I\n> > am not sure if there is any direct problem due to this but that looks\n> > consistent to me.\n>\n> But it seems not consistent with other usages since we don't normally\n> use DatumGetXXX to get values directly from C struct.\n>\n\nOkay, I see that for sequences also we don't use it, so we can\nprobably leave it as it is.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 09:02:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> On Mon, Apr 4, 2022 at 8:01 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Apr 04, 2022 at 10:28:30AM +0900, Masahiko Sawada wrote:\n> > > On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > > > > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > > > Some options:\n> > > > > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > > > > confirmed that this lets the test pass, in 44s.\n> >\n> > > --- a/src/include/catalog/pg_subscription.h\n> > > +++ b/src/include/catalog/pg_subscription.h\n> > > @@ -54,6 +54,17 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> > >\n> > > Oid subdbid BKI_LOOKUP(pg_database); /* Database the\n> > > * subscription is in. */\n> > > +\n> > > + /*\n> > > + * All changes finished at this LSN are skipped.\n> > > + *\n> > > + * Note that XLogRecPtr, pg_lsn in the catalog, is 8-byte alignment\n> > > + * (TYPALIGN_DOUBLE) and it does not match the alignment on some platforms\n> > > + * such as AIX. Therefore subskiplsn needs to be placed here so it is\n> > > + * always aligned.\n> >\n> > I'm reading this comment as saying that TYPALIGN_DOUBLE is always 8 bytes, but\n> > the problem arises precisely because TYPALIGN_DOUBLE==4 on AIX.\n> \n> How about a comment like: \"It has to be kept at 8-byte alignment\n> boundary so as to be accessed directly via C struct as it uses\n> TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> like AIX.\"? Can you please suggest a better comment if you don't like\n> this one?\n\nI'd write it like this, though I'm not sure it's an improvement on your words:\n\n When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n catalog C struct layout matches catalog tuple layout, arrange for the tuple\n offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n unconditionally. Keep such columns before the first NameData column of the\n catalog, since packagers can override NAMEDATALEN to an odd number.\n\nThe best place for such a comment would be in one of\nsrc/test/regress/sql/*sanity*.sql, next to a test written to detect new\nviolations. If adding such a test would materially delay getting the\nbuildfarm green, putting the comment in pg_subscription.h works for me.\n\n\n",
"msg_date": "Sun, 3 Apr 2022 23:26:20 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > On Mon, Apr 4, 2022 at 8:01 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Mon, Apr 04, 2022 at 10:28:30AM +0900, Masahiko Sawada wrote:\n> > > > On Sun, Apr 3, 2022 at 9:45 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > > On Sat, Apr 02, 2022 at 08:44:45PM +0900, Masahiko Sawada wrote:\n> > > > > > On Sat, Apr 2, 2022 at 7:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > On Sat, Apr 2, 2022 at 1:43 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > > > > Some options:\n> > > > > > > > - Move subskiplsn after subdbid, so it's always aligned anyway. I've\n> > > > > > > > confirmed that this lets the test pass, in 44s.\n> > >\n> > > > --- a/src/include/catalog/pg_subscription.h\n> > > > +++ b/src/include/catalog/pg_subscription.h\n> > > > @@ -54,6 +54,17 @@ CATALOG(pg_subscription,6100,SubscriptionRelationId) BKI_SHARED_RELATION BKI_ROW\n> > > >\n> > > > Oid subdbid BKI_LOOKUP(pg_database); /* Database the\n> > > > * subscription is in. */\n> > > > +\n> > > > + /*\n> > > > + * All changes finished at this LSN are skipped.\n> > > > + *\n> > > > + * Note that XLogRecPtr, pg_lsn in the catalog, is 8-byte alignment\n> > > > + * (TYPALIGN_DOUBLE) and it does not match the alignment on some platforms\n> > > > + * such as AIX. Therefore subskiplsn needs to be placed here so it is\n> > > > + * always aligned.\n> > >\n> > > I'm reading this comment as saying that TYPALIGN_DOUBLE is always 8 bytes, but\n> > > the problem arises precisely because TYPALIGN_DOUBLE==4 on AIX.\n> >\n> > How about a comment like: \"It has to be kept at 8-byte alignment\n> > boundary so as to be accessed directly via C struct as it uses\n> > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > like AIX.\"? Can you please suggest a better comment if you don't like\n> > this one?\n>\n> I'd write it like this, though I'm not sure it's an improvement on your words:\n>\n> When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> unconditionally. Keep such columns before the first NameData column of the\n> catalog, since packagers can override NAMEDATALEN to an odd number.\n\nThanks!\n\n>\n> The best place for such a comment would be in one of\n> src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> violations.\n\nAgreed.\n\nIIUC in the new test, we would need a new SQL function to calculate\nthe offset of catalog columns including padding, is that right? Or do\nyou have an idea to do that by using existing functionality?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 4 Apr 2022 18:55:45 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 04, 2022 at 06:55:45PM +0900, Masahiko Sawada wrote:\n> On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > > How about a comment like: \"It has to be kept at 8-byte alignment\n> > > boundary so as to be accessed directly via C struct as it uses\n> > > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > > like AIX.\"? Can you please suggest a better comment if you don't like\n> > > this one?\n> >\n> > I'd write it like this, though I'm not sure it's an improvement on your words:\n> >\n> > When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> > some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> > catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> > offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> > unconditionally. Keep such columns before the first NameData column of the\n> > catalog, since packagers can override NAMEDATALEN to an odd number.\n> \n> Thanks!\n> \n> >\n> > The best place for such a comment would be in one of\n> > src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> > violations.\n> \n> Agreed.\n> \n> IIUC in the new test, we would need a new SQL function to calculate\n> the offset of catalog columns including padding, is that right? Or do\n> you have an idea to do that by using existing functionality?\n\nSomething like this:\n\nselect\n attrelid::regclass,\n attname,\n array(select typname\n from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum order by pa.attnum) AS types_before,\n (select sum(attlen)\n from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum) AS len_before\nfrom pg_attribute a\njoin pg_class c on c.oid = attrelid\nwhere attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1\norder by attrelid::regclass::text, attnum;\n attrelid │ attname │ types_before │ len_before\n─────────────────┼──────────────┼─────────────────────────────────────────────┼────────────\n pg_sequence │ seqstart │ {oid,oid} │ 8\n pg_sequence │ seqincrement │ {oid,oid,int8} │ 16\n pg_sequence │ seqmax │ {oid,oid,int8,int8} │ 24\n pg_sequence │ seqmin │ {oid,oid,int8,int8,int8} │ 32\n pg_sequence │ seqcache │ {oid,oid,int8,int8,int8,int8} │ 40\n pg_subscription │ subskiplsn │ {oid,oid,name,oid,bool,bool,bool,char,bool} │ 81\n(6 rows)\n\nThat doesn't count padding, but hazardous column changes will cause a diff in\nthe output.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 17:21:07 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 9:21 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Apr 04, 2022 at 06:55:45PM +0900, Masahiko Sawada wrote:\n> > On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > > On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > > > How about a comment like: \"It has to be kept at 8-byte alignment\n> > > > boundary so as to be accessed directly via C struct as it uses\n> > > > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > > > like AIX.\"? Can you please suggest a better comment if you don't like\n> > > > this one?\n> > >\n> > > I'd write it like this, though I'm not sure it's an improvement on your words:\n> > >\n> > > When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> > > some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> > > catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> > > offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> > > unconditionally. Keep such columns before the first NameData column of the\n> > > catalog, since packagers can override NAMEDATALEN to an odd number.\n> >\n> > Thanks!\n> >\n> > >\n> > > The best place for such a comment would be in one of\n> > > src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> > > violations.\n> >\n> > Agreed.\n> >\n> > IIUC in the new test, we would need a new SQL function to calculate\n> > the offset of catalog columns including padding, is that right? Or do\n> > you have an idea to do that by using existing functionality?\n>\n> Something like this:\n>\n> select\n> attrelid::regclass,\n> attname,\n> array(select typname\n> from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum order by pa.attnum) AS types_before,\n> (select sum(attlen)\n> from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum) AS len_before\n> from pg_attribute a\n> join pg_class c on c.oid = attrelid\n> where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1\n> order by attrelid::regclass::text, attnum;\n>     attrelid     │   attname    │                types_before                 │ len_before\n> ─────────────────┼──────────────┼─────────────────────────────────────────────┼────────────\n>  pg_sequence     │ seqstart     │ {oid,oid}                                   │          8\n>  pg_sequence     │ seqincrement │ {oid,oid,int8}                              │         16\n>  pg_sequence     │ seqmax       │ {oid,oid,int8,int8}                         │         24\n>  pg_sequence     │ seqmin       │ {oid,oid,int8,int8,int8}                    │         32\n>  pg_sequence     │ seqcache     │ {oid,oid,int8,int8,int8,int8}               │         40\n>  pg_subscription │ subskiplsn   │ {oid,oid,name,oid,bool,bool,bool,char,bool} │         81\n> (6 rows)\n>\n> That doesn't count padding, but hazardous column changes will cause a diff in\n> the output.\n\nYes, in this case, we can detect the violated column order even\nwithout considering padding. On the other hand, I think this\ncalculation could not detect some patterns of order. For instance,\nsuppose the column order is {oid, bool, bool, oid, bool, bool, oid,\nint8}, the len_before is 16 but offset of int8 column including\npadding is 20 on ALIGNOF_DOUBLE==4 environment.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 5 Apr 2022 10:13:06 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 05, 2022 at 10:13:06AM +0900, Masahiko Sawada wrote:\n> On Tue, Apr 5, 2022 at 9:21 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Mon, Apr 04, 2022 at 06:55:45PM +0900, Masahiko Sawada wrote:\n> > > On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > > > > How about a comment like: \"It has to be kept at 8-byte alignment\n> > > > > boundary so as to be accessed directly via C struct as it uses\n> > > > > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > > > > like AIX.\"? Can you please suggest a better comment if you don't like\n> > > > > this one?\n> > > >\n> > > > I'd write it like this, though I'm not sure it's an improvement on your words:\n> > > >\n> > > > When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> > > > some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> > > > catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> > > > offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> > > > unconditionally. Keep such columns before the first NameData column of the\n> > > > catalog, since packagers can override NAMEDATALEN to an odd number.\n> > >\n> > > Thanks!\n> > >\n> > > >\n> > > > The best place for such a comment would be in one of\n> > > > src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> > > > violations.\n> > >\n> > > Agreed.\n> > >\n> > > IIUC in the new test, we would need a new SQL function to calculate\n> > > the offset of catalog columns including padding, is that right? Or do\n> > > you have an idea to do that by using existing functionality?\n> >\n> > Something like this:\n> >\n> > select\n> > attrelid::regclass,\n> > attname,\n> > array(select typname\n> > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum order by pa.attnum) AS types_before,\n> > (select sum(attlen)\n> > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum) AS len_before\n> > from pg_attribute a\n> > join pg_class c on c.oid = attrelid\n> > where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1\n> > order by attrelid::regclass::text, attnum;\n> >     attrelid     │   attname    │                types_before                 │ len_before\n> > ─────────────────┼──────────────┼─────────────────────────────────────────────┼────────────\n> >  pg_sequence     │ seqstart     │ {oid,oid}                                   │          8\n> >  pg_sequence     │ seqincrement │ {oid,oid,int8}                              │         16\n> >  pg_sequence     │ seqmax       │ {oid,oid,int8,int8}                         │         24\n> >  pg_sequence     │ seqmin       │ {oid,oid,int8,int8,int8}                    │         32\n> >  pg_sequence     │ seqcache     │ {oid,oid,int8,int8,int8,int8}               │         40\n> >  pg_subscription │ subskiplsn   │ {oid,oid,name,oid,bool,bool,bool,char,bool} │         81\n> > (6 rows)\n> >\n> > That doesn't count padding, but hazardous column changes will cause a diff in\n> > the output.\n> \n> Yes, in this case, we can detect the violated column order even\n> without considering padding. On the other hand, I think this\n> calculation could not detect some patterns of order. For instance,\n> suppose the column order is {oid, bool, bool, oid, bool, bool, oid,\n> int8}, the len_before is 16 but offset of int8 column including\n> padding is 20 on ALIGNOF_DOUBLE==4 environment.\n\nCorrect.  Feel free to make it more precise.  If you do want to add a\nfunction, it could be a regress.c function rather than an always-installed\npart of PostgreSQL.  Again, getting the buildfarm green is a priority; we can\nalways add tests later.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 18:46:20 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 10:46 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Apr 05, 2022 at 10:13:06AM +0900, Masahiko Sawada wrote:\n> > On Tue, Apr 5, 2022 at 9:21 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Mon, Apr 04, 2022 at 06:55:45PM +0900, Masahiko Sawada wrote:\n> > > > On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > > > > > How about a comment like: \"It has to be kept at 8-byte alignment\n> > > > > > boundary so as to be accessed directly via C struct as it uses\n> > > > > > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > > > > > like AIX.\"? Can you please suggest a better comment if you don't like\n> > > > > > this one?\n> > > > >\n> > > > > I'd write it like this, though I'm not sure it's an improvement on your words:\n> > > > >\n> > > > > When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> > > > > some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> > > > > catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> > > > > offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> > > > > unconditionally. Keep such columns before the first NameData column of the\n> > > > > catalog, since packagers can override NAMEDATALEN to an odd number.\n> > > >\n> > > > Thanks!\n> > > >\n> > > > >\n> > > > > The best place for such a comment would be in one of\n> > > > > src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> > > > > violations.\n> > > >\n> > > > Agreed.\n> > > >\n> > > > IIUC in the new test, we would need a new SQL function to calculate\n> > > > the offset of catalog columns including padding, is that right? Or do\n> > > > you have an idea to do that by using existing functionality?\n> > >\n> > > Something like this:\n> > >\n> > > select\n> > > attrelid::regclass,\n> > > attname,\n> > > array(select typname\n> > > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum order by pa.attnum) AS types_before,\n> > > (select sum(attlen)\n> > > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum) AS len_before\n> > > from pg_attribute a\n> > > join pg_class c on c.oid = attrelid\n> > > where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1\n> > > order by attrelid::regclass::text, attnum;\n> > >     attrelid     │   attname    │                types_before                 │ len_before\n> > > ─────────────────┼──────────────┼─────────────────────────────────────────────┼────────────\n> > >  pg_sequence     │ seqstart     │ {oid,oid}                                   │          8\n> > >  pg_sequence     │ seqincrement │ {oid,oid,int8}                              │         16\n> > >  pg_sequence     │ seqmax       │ {oid,oid,int8,int8}                         │         24\n> > >  pg_sequence     │ seqmin       │ {oid,oid,int8,int8,int8}                    │         32\n> > >  pg_sequence     │ seqcache     │ {oid,oid,int8,int8,int8,int8}               │         40\n> > >  pg_subscription │ subskiplsn   │ {oid,oid,name,oid,bool,bool,bool,char,bool} │         81\n> > > (6 rows)\n> > >\n> > > That doesn't count padding, but hazardous column changes will cause a diff in\n> > > the output.\n> >\n> > Yes, in this case, we can detect the violated column order even\n> > without considering padding. On the other hand, I think this\n> > calculation could not detect some patterns of order. For instance,\n> > suppose the column order is {oid, bool, bool, oid, bool, bool, oid,\n> > int8}, the len_before is 16 but offset of int8 column including\n> > padding is 20 on ALIGNOF_DOUBLE==4 environment.\n>\n> Correct.  Feel free to make it more precise.  If you do want to add a\n> function, it could be a regress.c function rather than an always-installed\n> part of PostgreSQL. Again, getting the buildfarm green is a priority; we can\n> always add tests later.\n\nAgreed. I'll update and submit the patch as soon as possible.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 5 Apr 2022 12:38:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 12:38 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 5, 2022 at 10:46 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > On Tue, Apr 05, 2022 at 10:13:06AM +0900, Masahiko Sawada wrote:\n> > > On Tue, Apr 5, 2022 at 9:21 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > On Mon, Apr 04, 2022 at 06:55:45PM +0900, Masahiko Sawada wrote:\n> > > > > On Mon, Apr 4, 2022 at 3:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > > > > > On Mon, Apr 04, 2022 at 08:20:08AM +0530, Amit Kapila wrote:\n> > > > > > > How about a comment like: \"It has to be kept at 8-byte alignment\n> > > > > > > boundary so as to be accessed directly via C struct as it uses\n> > > > > > > TYPALIGN_DOUBLE for storage which has 4-byte alignment on platforms\n> > > > > > > like AIX.\"? Can you please suggest a better comment if you don't like\n> > > > > > > this one?\n> > > > > >\n> > > > > > I'd write it like this, though I'm not sure it's an improvement on your words:\n> > > > > >\n> > > > > > When ALIGNOF_DOUBLE==4 (e.g. AIX), the C ABI may impose 8-byte alignment on\n> > > > > > some of the C types that correspond to TYPALIGN_DOUBLE SQL types. To ensure\n> > > > > > catalog C struct layout matches catalog tuple layout, arrange for the tuple\n> > > > > > offset of each fixed-width, attalign='d' catalog column to be divisible by 8\n> > > > > > unconditionally. Keep such columns before the first NameData column of the\n> > > > > > catalog, since packagers can override NAMEDATALEN to an odd number.\n> > > > >\n> > > > > Thanks!\n> > > > >\n> > > > > >\n> > > > > > The best place for such a comment would be in one of\n> > > > > > src/test/regress/sql/*sanity*.sql, next to a test written to detect new\n> > > > > > violations.\n> > > > >\n> > > > > Agreed.\n> > > > >\n> > > > > IIUC in the new test, we would need a new SQL function to calculate\n> > > > > the offset of catalog columns including padding, is that right? Or do\n> > > > > you have an idea to do that by using existing functionality?\n> > > >\n> > > > Something like this:\n> > > >\n> > > > select\n> > > > attrelid::regclass,\n> > > > attname,\n> > > > array(select typname\n> > > > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > > > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum order by pa.attnum) AS types_before,\n> > > > (select sum(attlen)\n> > > > from pg_type t join pg_attribute pa on t.oid = pa.atttypid\n> > > > where pa.attrelid = a.attrelid and pa.attnum > 0 and pa.attnum < a.attnum) AS len_before\n> > > > from pg_attribute a\n> > > > join pg_class c on c.oid = attrelid\n> > > > where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1\n> > > > order by attrelid::regclass::text, attnum;\n> > > >     attrelid     │   attname    │                types_before                 │ len_before\n> > > > ─────────────────┼──────────────┼─────────────────────────────────────────────┼────────────\n> > > >  pg_sequence     │ seqstart     │ {oid,oid}                                   │          8\n> > > >  pg_sequence     │ seqincrement │ {oid,oid,int8}                              │         16\n> > > >  pg_sequence     │ seqmax       │ {oid,oid,int8,int8}                         │         24\n> > > >  pg_sequence     │ seqmin       │ {oid,oid,int8,int8,int8}                    │         32\n> > > >  pg_sequence     │ seqcache     │ {oid,oid,int8,int8,int8,int8}               │         40\n> > > >  pg_subscription │ subskiplsn   │ {oid,oid,name,oid,bool,bool,bool,char,bool} │         81\n> > > > (6 rows)\n> > > >\n> > > > That doesn't count padding, but hazardous column changes will cause a diff in\n> > > > the output.\n> > >\n> > > Yes, in this case, we can detect the violated column order even\n> > > without considering padding. On the other hand, I think this\n> > > calculation could not detect some patterns of order. For instance,\n> > > suppose the column order is {oid, bool, bool, oid, bool, bool, oid,\n> > > int8}, the len_before is 16 but offset of int8 column including\n> > > padding is 20 on ALIGNOF_DOUBLE==4 environment.\n> >\n> > Correct.  Feel free to make it more precise.  If you do want to add a\n> > function, it could be a regress.c function rather than an always-installed\n> > part of PostgreSQL. Again, getting the buildfarm green is a priority; we can\n> > always add tests later.\n>\n> Agreed. I'll update and submit the patch as soon as possible.\n>\n\nI've attached an updated patch. The patch includes a regression test\nto detect the new violation as we discussed. I've confirmed that\nCirrus CI tests pass. Please confirm on AIX and review the patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 5 Apr 2022 15:05:10 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 05, 2022 at 03:05:10PM +0900, Masahiko Sawada wrote:\n> I've attached an updated patch. The patch includes a regression test\n> to detect the new violation as we discussed. I've confirmed that\n> Cirrus CI tests pass. Please confirm on AIX and review the patch.\n\nWhen the context of a \"git grep skiplsn\" match involves several struct fields\nin struct order, please change to the new order. In other words, do for all\n\"git grep skiplsn\" matches what the v2 patch does in GetSubscription(). The\nv2 patch does not do this for catalogs.sgml, but it ought to. I didn't check\nall the other \"git grep\" matches; please do so.\n\nThe changes present in this patch all look good.\n\n\n",
"msg_date": "Tue, 5 Apr 2022 00:08:16 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 5, 2022 at 4:08 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Apr 05, 2022 at 03:05:10PM +0900, Masahiko Sawada wrote:\n> > I've attached an updated patch. The patch includes a regression test\n> > to detect the new violation as we discussed. I've confirmed that\n> > Cirrus CI tests pass. Please confirm on AIX and review the patch.\n>\n> When the context of a \"git grep skiplsn\" match involves several struct fields\n> in struct order, please change to the new order. In other words, do for all\n> \"git grep skiplsn\" matches what the v2 patch does in GetSubscription(). The\n> v2 patch does not do this for catalogs.sgml, but it ought to. I didn't check\n> all the other \"git grep\" matches; please do so.\n\nOops, I missed many places. I checked all \"git grep\" matches and fixed them.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 5 Apr 2022 16:41:28 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Apr 05, 2022 at 04:41:28PM +0900, Masahiko Sawada wrote:\n> On Tue, Apr 5, 2022 at 4:08 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Tue, Apr 05, 2022 at 03:05:10PM +0900, Masahiko Sawada wrote:\n> > > I've attached an updated patch. The patch includes a regression test\n> > > to detect the new violation as we discussed. I've confirmed that\n> > > Cirrus CI tests pass. Please confirm on AIX and review the patch.\n> >\n> > When the context of a \"git grep skiplsn\" match involves several struct fields\n> > in struct order, please change to the new order. In other words, do for all\n> > \"git grep skiplsn\" matches what the v2 patch does in GetSubscription(). The\n> > v2 patch does not do this for catalogs.sgml, but it ought to. I didn't check\n> > all the other \"git grep\" matches; please do so.\n> \n> Oops, I missed many places. I checked all \"git grep\" matches and fixed them.\n\n> --- a/src/backend/catalog/system_views.sql\n> +++ b/src/backend/catalog/system_views.sql\n> @@ -1285,8 +1285,8 @@ REVOKE ALL ON pg_replication_origin_status FROM public;\n> \n> -- All columns of pg_subscription except subconninfo are publicly readable.\n> REVOKE ALL ON pg_subscription FROM public;\n> -GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\n> - substream, subtwophasestate, subdisableonerr, subskiplsn, subslotname,\n> +GRANT SELECT (oid, subdbid, subname, subskiplsn, subowner, subenabled,\n> + subbinary, substream, subtwophasestate, subdisableonerr, subslotname,\n> subsynccommit, subpublications)\n\nsubskiplsn comes before subname. Other than that, this looks done. I\nrecommend committing it with that change.\n\n\n",
"msg_date": "Tue, 5 Apr 2022 20:21:00 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 12:21 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Tue, Apr 05, 2022 at 04:41:28PM +0900, Masahiko Sawada wrote:\n> > On Tue, Apr 5, 2022 at 4:08 PM Noah Misch <noah@leadboat.com> wrote:\n> > > On Tue, Apr 05, 2022 at 03:05:10PM +0900, Masahiko Sawada wrote:\n> > > > I've attached an updated patch. The patch includes a regression test\n> > > > to detect the new violation as we discussed. I've confirmed that\n> > > > Cirrus CI tests pass. Please confirm on AIX and review the patch.\n> > >\n> > > When the context of a \"git grep skiplsn\" match involves several struct fields\n> > > in struct order, please change to the new order. In other words, do for all\n> > > \"git grep skiplsn\" matches what the v2 patch does in GetSubscription(). The\n> > > v2 patch does not do this for catalogs.sgml, but it ought to. I didn't check\n> > > all the other \"git grep\" matches; please do so.\n> >\n> > Oops, I missed many places. I checked all \"git grep\" matches and fixed them.\n>\n> > --- a/src/backend/catalog/system_views.sql\n> > +++ b/src/backend/catalog/system_views.sql\n> > @@ -1285,8 +1285,8 @@ REVOKE ALL ON pg_replication_origin_status FROM public;\n> >\n> > -- All columns of pg_subscription except subconninfo are publicly readable.\n> > REVOKE ALL ON pg_subscription FROM public;\n> > -GRANT SELECT (oid, subdbid, subname, subowner, subenabled, subbinary,\n> > - substream, subtwophasestate, subdisableonerr, subskiplsn, subslotname,\n> > +GRANT SELECT (oid, subdbid, subname, subskiplsn, subowner, subenabled,\n> > + subbinary, substream, subtwophasestate, subdisableonerr, subslotname,\n> > subsynccommit, subpublications)\n>\n> subskiplsn comes before subname.\n\nRight. I've attached an updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Wed, 6 Apr 2022 12:54:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 9:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 12:21 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> Right. I've attached an updated patch.\n>\n\nThanks, this looks good to me as well. Noah, would you like to commit it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 10:01:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 02.04.22 10:13, Noah Misch wrote:\n> uint64 and pg_lsn use TYPALIGN_DOUBLE. For AIX, they really need a typalign\n> corresponding to ALIGNOF_LONG. Hence, the C struct layout doesn't match the\n> tuple layout. Columns potentially affected:\n> \n> [local] test=*# select attrelid::regclass, attname from pg_attribute a join pg_class c on c.oid = attrelid where attalign = 'd' and relkind = 'r' and attnotnull and attlen <> -1;\n> attrelid │ attname\n> ─────────────────┼──────────────\n> pg_sequence │ seqstart\n> pg_sequence │ seqincrement\n> pg_sequence │ seqmax\n> pg_sequence │ seqmin\n> pg_sequence │ seqcache\n> pg_subscription │ subskiplsn\n> (6 rows)\n> \n> The pg_sequence fields evade trouble, because there's exactly eight bytes (two\n> oids) before them.\n\nYes, we carefully did this when we ran into this the last time. See \n<https://www.postgresql.org/message-id/flat/76ce2ca3-40f2-d291-eae2-17b599f29ba0%402ndquadrant.com#cf1313adff98e1d5e1ca789497898310> \nand commit f3b421da5f4addc95812b9db05a24972b8fd9739.\n\n\n",
"msg_date": "Wed, 6 Apr 2022 10:53:28 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Apr 6, 2022 at 10:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 6, 2022 at 9:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 6, 2022 at 12:21 PM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > Right. I've attached an updated patch.\n> >\n>\n> Thanks, this looks good to me as well. Noah, would you like to commit it?\n>\n\nI'll take care of this today. I think we can mark the new function\nget_column_offset() being introduced by this patch as parallel safe.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Apr 2022 08:25:45 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I'll take care of this today. I think we can mark the new function\n> get_column_offset() being introduced by this patch as parallel safe.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Apr 2022 15:57:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Apr 7, 2022 at 7:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I'll take care of this today. I think we can mark the new function\n> > get_column_offset() being introduced by this patch as parallel safe.\n> >\n>\n> Pushed.\n\nThanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 7 Apr 2022 20:39:58 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Apr 07, 2022 at 08:39:58PM +0900, Masahiko Sawada wrote:\n> On Thu, Apr 7, 2022 at 7:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > I'll take care of this today. I think we can mark the new function\n> > > get_column_offset() being introduced by this patch as parallel safe.\n> >\n> > Pushed.\n> \n> Thanks!\n\nI took a closer look at the test case. The \"get_column_offset(coltypes) % 8\"\npart would have caught the problem only when run on an ALIGNOF_DOUBLE==4\nplatform. Instead of testing the start of the typalign='d' column, let's test\nthe first offset beyond the previous column. The difference between those two\nvalues depends on ALIGNOF_DOUBLE. While there, ignore typbyval; it doesn't\naffect disk tuple layout, so this test shouldn't care. I plan to push the\nattached patch.",
"msg_date": "Fri, 15 Apr 2022 00:26:01 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Fri, Apr 15, 2022 at 4:26 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Thu, Apr 07, 2022 at 08:39:58PM +0900, Masahiko Sawada wrote:\n> > On Thu, Apr 7, 2022 at 7:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > I'll take care of this today. I think we can mark the new function\n> > > > get_column_offset() being introduced by this patch as parallel safe.\n> > >\n> > > Pushed.\n> >\n> > Thanks!\n>\n> I took a closer look at the test case. The \"get_column_offset(coltypes) % 8\"\n> part would have caught the problem only when run on an ALIGNOF_DOUBLE==4\n> platform. Instead of testing the start of the typalign='d' column, let's test\n> the first offset beyond the previous column. The difference between those two\n> values depends on ALIGNOF_DOUBLE.\n\nYes, but it could be false positives in some cases. For instance, the\ncolumn {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\nand 8 platforms but the new test fails.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2022 10:45:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 10:45:50AM +0900, Masahiko Sawada wrote:\n> On Fri, Apr 15, 2022 at 4:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > On Thu, Apr 07, 2022 at 08:39:58PM +0900, Masahiko Sawada wrote:\n> > > On Thu, Apr 7, 2022 at 7:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > I'll take care of this today. I think we can mark the new function\n> > > > > get_column_offset() being introduced by this patch as parallel safe.\n> > > >\n> > > > Pushed.\n> > >\n> > > Thanks!\n> >\n> > I took a closer look at the test case. The \"get_column_offset(coltypes) % 8\"\n> > part would have caught the problem only when run on an ALIGNOF_DOUBLE==4\n> > platform. Instead of testing the start of the typalign='d' column, let's test\n> > the first offset beyond the previous column. The difference between those two\n> > values depends on ALIGNOF_DOUBLE.\n> \n> Yes, but it could be false positives in some cases. For instance, the\n> column {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\n> and 8 platforms but the new test fails.\n\nI'm happy with that, because the affected author should look for padding-free\nlayouts before settling on your example layout. If the padding-free layouts\nare all unacceptable, the author should update the expected sanity_check.out\nto show the one row where the test \"fails\".\n\n\n",
"msg_date": "Sun, 17 Apr 2022 20:22:24 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Apr 18, 2022 at 12:22 PM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Mon, Apr 18, 2022 at 10:45:50AM +0900, Masahiko Sawada wrote:\n> > On Fri, Apr 15, 2022 at 4:26 PM Noah Misch <noah@leadboat.com> wrote:\n> > > On Thu, Apr 07, 2022 at 08:39:58PM +0900, Masahiko Sawada wrote:\n> > > > On Thu, Apr 7, 2022 at 7:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > On Thu, Apr 7, 2022 at 8:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > I'll take care of this today. I think we can mark the new function\n> > > > > > get_column_offset() being introduced by this patch as parallel safe.\n> > > > >\n> > > > > Pushed.\n> > > >\n> > > > Thanks!\n> > >\n> > > I took a closer look at the test case. The \"get_column_offset(coltypes) % 8\"\n> > > part would have caught the problem only when run on an ALIGNOF_DOUBLE==4\n> > > platform. Instead of testing the start of the typalign='d' column, let's test\n> > > the first offset beyond the previous column. The difference between those two\n> > > values depends on ALIGNOF_DOUBLE.\n> >\n> > Yes, but it could be false positives in some cases. For instance, the\n> > column {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\n> > and 8 platforms but the new test fails.\n>\n> I'm happy with that, because the affected author should look for padding-free\n> layouts before settling on your example layout. If the padding-free layouts\n> are all unacceptable, the author should update the expected sanity_check.out\n> to show the one row where the test \"fails\".\n\nThat makes sense.\n\nRegard,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 18 Apr 2022 13:32:37 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Sun, Apr 17, 2022 at 11:22 PM Noah Misch <noah@leadboat.com> wrote:\n> > Yes, but it could be false positives in some cases. For instance, the\n> > column {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\n> > and 8 platforms but the new test fails.\n>\n> I'm happy with that, because the affected author should look for padding-free\n> layouts before settling on your example layout. If the padding-free layouts\n> are all unacceptable, the author should update the expected sanity_check.out\n> to show the one row where the test \"fails\".\n\nI realize that it was necessary to get something committed quickly\nhere to unbreak the buildfarm, but this is really a mess. As I\nunderstand it, the problem here is that typalign='d' is either 4 bytes\nor 8 depending on how the 'double' type is aligned on that platform,\nbut we use that typalign value also for some other data types that may\nnot be aligned in the same way as 'double'. Consequently, it's\npossible to have a situation where the behavior of the C compiler\ndiverges from the behavior of heap_form_tuple(). To avoid that, we\nneed every catalog column that uses typalign=='d' to begin on an\n8-byte boundary. We also want all such columns to occur before the\nfirst NameData column in the catalog, to guard against the possibility\nthat NAMEDATALEN has been redefined to an odd value. I think this set\nof constraints is a nuisance and that it's mostly good luck we haven't\nrun into any really awkward problems here so far.\n\nIn many of our catalogs, the first member is an OID and the second\nmember of the struct is of type NameData: pg_namespace, pg_class,\npg_proc, etc. That common design pattern is in direct contradiction to\nthe desires of this test case. As soon as someone wants to add a\ntypalign='d' member to any of those system catalogs, the struct layout\nis going to have to get shuffled around -- and then it will look\ndifferent from all the other ones. 
Or else we'd have to rearrange them\nall to move all the NameData columns to the end. I feel like it's\nweird to introduce a test case that so obviously flies in the face of\nhow catalog layout has been done up to this point, especially for the\nsake of a hypothetical user who wants to set NAMEDATALEN to an odd\nnumber. I doubt such scenarios have been thoroughly tested, or ever\nwill be. Perhaps instead we ought to legislate that NAMEDATALEN must\nbe a multiple of 8, or some such thing.\n\nThe other constraint, that typalign='d' fields must always fall on an\n8-byte boundary, is probably less annoying in practice, but it's easy\nto imagine a future catalog running into trouble. Let's say we want to\nintroduce a new catalog that has only an Oid column and a float8\ncolumn. Perhaps with 0-3 bool or uint8 columns as well, or with any\nnumber of NameData columns as well. Well, the only way to satisfy this\nconstraint is to put the float8 column first and the Oid column after\nit, which immediately makes it look different from every other catalog\nwe have. It's hard to feel like that would be a good solution here. I\nthink we ought to try to engineer a solution where heap_form_tuple()\nis going to do the same thing as the C compiler without the sorts of\nextra rules that this test case enforces.\n\nAFAICS, we could do that by:\n\n1. De-supporting platforms that have this problem, or\n2. Introducing new typalign values, as Noah proposed back on April 2, or\n3. Somehow forcing values that are sometimes 4-byte aligned and\nsometimes 8-byte aligned to be 8-byte alignment on all platforms\n\nI also don't like the fact that the test case doesn't even catch\nexactly the problematic set of cases, but rather a superset, leaving\nit up to future patch authors to make a correct judgment about whether\na certain new column can be listed as an expected output of the test\ncase or whether the catalog representation must be changed. 
The idea\nthat we'll reliably get that right might be optimistic. Again, I don't\nmean to say that this is the fault of this test case since, without\nthe test case, we'd have no idea that there was even a potential\nproblem, which would not be better. But it feels to me like we're\nhacking around the real problem instead of fixing it, and it seems to\nme that we should try to do better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Jun 2022 10:25:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 11:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Sun, Apr 17, 2022 at 11:22 PM Noah Misch <noah@leadboat.com> wrote:\n> > > Yes, but it could be false positives in some cases. For instance, the\n> > > column {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\n> > > and 8 platforms but the new test fails.\n> >\n> > I'm happy with that, because the affected author should look for padding-free\n> > layouts before settling on your example layout. If the padding-free layouts\n> > are all unacceptable, the author should update the expected sanity_check.out\n> > to show the one row where the test \"fails\".\n>\n> I realize that it was necessary to get something committed quickly\n> here to unbreak the buildfarm, but this is really a mess. As I\n> understand it, the problem here is that typalign='d' is either 4 bytes\n> or 8 depending on how the 'double' type is aligned on that platform,\n> but we use that typalign value also for some other data types that may\n> not be aligned in the same way as 'double'. Consequently, it's\n> possible to have a situation where the behavior of the C compiler\n> diverges from the behavior of heap_form_tuple(). To avoid that, we\n> need every catalog column that uses typalign=='d' to begin on an\n> 8-byte boundary. We also want all such columns to occur before the\n> first NameData column in the catalog, to guard against the possibility\n> that NAMEDATALEN has been redefined to an odd value. I think this set\n> of constraints is a nuisance and that it's mostly good luck we haven't\n> run into any really awkward problems here so far.\n>\n> In many of our catalogs, the first member is an OID and the second\n> member of the struct is of type NameData: pg_namespace, pg_class,\n> pg_proc, etc. That common design pattern is in direct contradiction to\n> the desires of this test case. 
As soon as someone wants to add a\n> typalign='d' member to any of those system catalogs, the struct layout\n> is going to have to get shuffled around -- and then it will look\n> different from all the other ones. Or else we'd have to rearrange them\n> all to move all the NameData columns to the end. I feel like it's\n> weird to introduce a test case that so obviously flies in the face of\n> how catalog layout has been done up to this point, especially for the\n> sake of a hypothetical user who want to set NAMEDATALEN to an odd\n> number. I doubt such scenarios have been thoroughly tested, or ever\n> will be. Perhaps instead we ought to legislate that NAMEDATALEN must\n> be a multiple of 8, or some such thing.\n>\n> The other constraint, that typalign='d' fields must always fall on an\n> 8 byte boundary, is probably less annoying in practice, but it's easy\n> to imagine a future catalog running into trouble. Let's say we want to\n> introduce a new catalog that has only an Oid column and a float8\n> column. Perhaps with 0-3 bool or uint8 columns as well, or with any\n> number of NameData columns as well. Well, the only way to satisfy this\n> constraint is to put the float8 column first and the Oid column after\n> it, which immediately makes it look different from every other catalog\n> we have. It's hard to feel like that would be a good solution here. I\n> think we ought to try to engineer a solution where heap_form_tuple()\n> is going to do the same thing as the C compiler without the sorts of\n> extra rules that this test case enforces.\n\nThese seem to be valid concerns.\n\n> AFAICS, we could do that by:\n>\n> 1. De-supporting platforms that have this problem, or\n> 2. Introducing new typalign values, as Noah proposed back on April 2, or\n> 3. Somehow forcing values that are sometimes 4-byte aligned and\n> sometimes 8-byte aligned to be 8-byte alignment on all platforms\n\nIntroducing new typalign values seems a good idea to me as it's more\nfuture-proof. 
This item will be for PG16, right? The main concern\nseems to be that what this test case enforces would be a nuisance when\nintroducing a new system catalog or a new column to the existing\ncatalog, but given we're in post PG15-beta1 it is unlikely to happen in\nPG15.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 14 Jun 2022 16:53:42 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Tue, Jun 14, 2022 at 3:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > AFAICS, we could do that by:\n> >\n> > 1. De-supporting platforms that have this problem, or\n> > 2. Introducing new typalign values, as Noah proposed back on April 2, or\n> > 3. Somehow forcing values that are sometimes 4-byte aligned and\n> > sometimes 8-byte aligned to be 8-byte alignment on all platforms\n>\n> Introducing new typalign values seems a good idea to me as it's more\n> future-proof. Will this item be for PG16, right? The main concern\n> seems that what this test case enforces would be nuisance when\n> introducing a new system catalog or a new column to the existing\n> catalog but given we're in post PG15-beta1 it is unlikely to happen in\n> PG15.\n\nI agree that we're not likely to introduce a new typalign value any\nsooner than v16. There are a couple of things that bother me about\nthat solution. One is that I don't know how many different behaviors\nexist out there in the wild. If we distinguish the alignment of double\nfrom the alignment of int8, is that good enough, or are there other\ndata types whose properties aren't necessarily the same as either of\nthose? The other is that 32-bit systems are already relatively rare\nand probably will become more rare until they disappear completely. It\ndoesn't seem like a ton of fun to engineer solutions to problems that\nmay go away by themselves with the passage of time. On the other hand,\nif the alternative is to live with this kind of ugliness for another 5\nyears, maybe the time it takes to craft a solution is effort well\nspent.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 15 Jun 2022 13:27:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 2:27 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Tue, Jun 14, 2022 at 3:54 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > AFAICS, we could do that by:\n> > >\n> > > 1. De-supporting platforms that have this problem, or\n> > > 2. Introducing new typalign values, as Noah proposed back on April 2, or\n> > > 3. Somehow forcing values that are sometimes 4-byte aligned and\n> > > sometimes 8-byte aligned to be 8-byte alignment on all platforms\n> >\n> > Introducing new typalign values seems a good idea to me as it's more\n> > future-proof. Will this item be for PG16, right? The main concern\n> > seems that what this test case enforces would be nuisance when\n> > introducing a new system catalog or a new column to the existing\n> > catalog but given we're in post PG15-beta1 it is unlikely to happen in\n> > PG15.\n>\n> I agree that we're not likely to introduce a new typalign value any\n> sooner than v16. There are a couple of things that bother me about\n> that solution. One is that I don't know how many different behaviors\n> exist out there in the wild. If we distinguish the alignment of double\n> from the alignment of int8, is that good enough, or are there other\n> data types whose properties aren't necessarily the same as either of\n> those?\n\nYeah, there might be.\n\n> The other is that 32-bit systems are already relatively rare\n> and probably will become more rare until they disappear completely. It\n> doesn't seem like a ton of fun to engineer solutions to problems that\n> may go away by themselves with the passage of time.\n\nIIUC the system affected by this problem is not necessarily 32-bit\nsystem. For instance, the hoverfly on buildfarm is 64-bit system but\nwas affected by this problem. According to the XLC manual[1], there is\nno difference between 32-bit systems and 64-bit systems in terms of\nalignment for double. 
FWIW, looking at the manual, there might have\nbeen a solution for AIX to specify the -qalign=natural compiler option in\norder to enforce the alignment of double to 8.\n\nRegards,\n\n[1] https://support.scinet.utoronto.ca/Manuals/xlC++-proguide.pdf;\nTable 11 on page 10.\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 16 Jun 2022 16:25:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Thu, Jun 16, 2022 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> FWIW, looking at the manual, there might have\n> been a solution for AIX to specify -qalign=natural compiler option in\n> order to enforce the alignment of double to 8.\n\nWell if that can work it sure seems better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jun 2022 12:35:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On 16.06.22 18:35, Robert Haas wrote:\n> On Thu, Jun 16, 2022 at 3:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>> FWIW, looking at the manual, there might have\n>> been a solution for AIX to specify -qalign=natural compiler option in\n>> order to enforce the alignment of double to 8.\n> \n> Well if that can work it sure seems better.\n\nThat means changing the system's ABI, so in the extreme case you then \nneed to compile everything else to match as well.\n\n\n\n",
"msg_date": "Mon, 20 Jun 2022 15:52:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jun 20, 2022 at 9:52 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> That means changing the system's ABI, so in the extreme case you then\n> need to compile everything else to match as well.\n\nI think we wouldn't want to do that in a minor release, but doing it\nin a new major release seems fine -- especially if only AIX is\naffected.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 20 Jun 2022 10:04:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Mon, Jun 13, 2022 at 10:25:24AM -0400, Robert Haas wrote:\n> On Sun, Apr 17, 2022 at 11:22 PM Noah Misch <noah@leadboat.com> wrote:\n> > > Yes, but it could be false positives in some cases. For instance, the\n> > > column {oid, bool, XLogRecPtr} should be okay on ALIGNOF_DOUBLE == 4\n> > > and 8 platforms but the new test fails.\n> >\n> > I'm happy with that, because the affected author should look for padding-free\n> > layouts before settling on your example layout. If the padding-free layouts\n> > are all unacceptable, the author should update the expected sanity_check.out\n> > to show the one row where the test \"fails\".\n\n> Perhaps instead we ought to legislate that NAMEDATALEN must\n> be a multiple of 8, or some such thing.\n> \n> The other constraint, that typalign='d' fields must always fall on an\n> 8 byte boundary, is probably less annoying in practice, but it's easy\n> to imagine a future catalog running into trouble. Let's say we want to\n> introduce a new catalog that has only an Oid column and a float8\n> column. Perhaps with 0-3 bool or uint8 columns as well, or with any\n> number of NameData columns as well. Well, the only way to satisfy this\n> constraint is to put the float8 column first and the Oid column after\n> it, which immediately makes it look different from every other catalog\n> we have.\n\n> AFAICS, we could do that by:\n> \n> 1. De-supporting platforms that have this problem, or\n> 2. Introducing new typalign values, as Noah proposed back on April 2, or\n> 3. 
Somehow forcing values that are sometimes 4-byte aligned and\n> sometimes 8-byte aligned to be 8-byte alignment on all platforms\n\nOn Mon, Jun 20, 2022 at 10:04:06AM -0400, Robert Haas wrote:\n> On Mon, Jun 20, 2022 at 9:52 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> > That means changing the system's ABI, so in the extreme case you then\n> > need to compile everything else to match as well.\n> \n> I think we wouldn't want to do that in a minor release, but doing it\n> in a new major release seems fine -- especially if only AIX is\n> affected.\n\n\"Everything\" isn't limited to PostgreSQL. The Perl ABI exposes large structs\nto plperl; a field of type double could require the AIX user to rebuild Perl\nwith the same compiler option.\n\n\nOverall, this could be a textbook example of choosing between:\n\n- Mild harm (unaesthetic column order) to many people.\n- Considerable harm (dump/reload instead of pg_upgrade) to a small, unknown,\n possibly-zero quantity of people.\n\nHere's how I rank the options, from most-preferred to least-preferred:\n\n1. Put new eight-byte fields at the front of each catalog, when in doubt.\n2. On systems where double alignment differs from int64 alignment, require\n NAMEDATALEN%8==0. Upgrading to v16 would require dump/reload for AIX users\n changing NAMEDATALEN to conform to the new restriction.\n3. Introduce new typalign values. Upgrading to v16 would require dump/reload\n for all AIX users.\n4. De-support AIX.\n5. From above, \"Somehow forcing values that are sometimes 4-byte aligned and\n sometimes 8-byte aligned to be 8-byte alignment on all platforms\".\n Upgrading to v16 would require dump/reload for all AIX users.\n6. Require -qalign=natural on AIX. Upgrading to v16 would require dump/reload\n and possible system library rebuilds for all AIX users.\n\nI gather (1) isn't at the top of your ranking, or you wouldn't have written\nin. What do you think of (2)?\n\n\n",
"msg_date": "Tue, 21 Jun 2022 21:28:14 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 12:28 AM Noah Misch <noah@leadboat.com> wrote:\n> \"Everything\" isn't limited to PostgreSQL. The Perl ABI exposes large structs\n> to plperl; a field of type double could require the AIX user to rebuild Perl\n> with the same compiler option.\n\nOh, that isn't so great, then.\n\n> Here's how I rank the options, from most-preferred to least-preferred:\n>\n> 1. Put new eight-byte fields at the front of each catalog, when in doubt.\n> 2. On systems where double alignment differs from int64 alignment, require\n> NAMEDATALEN%8==0. Upgrading to v16 would require dump/reload for AIX users\n> changing NAMEDATALEN to conform to the new restriction.\n> 3. Introduce new typalign values. Upgrading to v16 would require dump/reload\n> for all AIX users.\n> 4. De-support AIX.\n> 5. From above, \"Somehow forcing values that are sometimes 4-byte aligned and\n> sometimes 8-byte aligned to be 8-byte alignment on all platforms\".\n> Upgrading to v16 would require dump/reload for all AIX users.\n> 6. Require -qalign=natural on AIX. Upgrading to v16 would require dump/reload\n> and possible system library rebuilds for all AIX users.\n>\n> I gather (1) isn't at the top of your ranking, or you wouldn't have written\n> in. What do you think of (2)?\n\n(2) pleases me in the sense that it seems to inconvenience very few\npeople, perhaps no one, in order to avoid inconveniencing a larger\nnumber of people. However, it doesn't seem sufficient. If I understand\ncorrectly, even a catalog that includes no NameData column can have a\nproblem.\n\nRegarding (1), it is my opinion that the only real value of typalign\nis for system catalogs, and specifically that it lets you put the\nfields in an order that is aesthetically pleasing rather than worrying\nabout alignment considerations. 
After all, if we just ordered the\nfields by descending alignment requirement, we could get rid of\ntypalign altogether (at least, if we didn't care about backward\ncompatibility). User tables would get smaller because we'd get rid of\nalignment padding, and I don't think we'd see much impact on\nperformance because, for user tables, we copy the values into a datum\narray before doing anything interesting with them. So (1) seems to me\nto be conceding that typalign is unfit for the only purpose it has.\nPerhaps that's just how things are, but it doesn't seem like a good\nway for things to be.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 09:50:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "[ sorry for not having tracked this thread more closely ... ]\n\nRobert Haas <robertmhaas@gmail.com> writes:\n> Regarding (1), it is my opinion that the only real value of typalign\n> is for system catalogs, and specifically that it lets you put the\n> fields in an order that is aesthetically pleasing rather than worrying\n> about alignment considerations. After all, if we just ordered the\n> fields by descending alignment requirement, we could get rid of\n> typalign altogether (at least, if we didn't care about backward\n> compatibility). User tables would get smaller because we'd get rid of\n> alignment padding, and I don't think we'd see much impact on\n> performance because, for user tables, we copy the values into a datum\n> array before doing anything interesting with them. So (1) seems to me\n> to be conceding that typalign is unfit for the only purpose it has.\n\nThat's a fundamental misreading of the situation. typalign is essential\non alignment-picky architectures, else you will get a SIGBUS fault\nwhen trying to fetch a multibyte value (whether it's just going to get\nstored into a Datum array is not very relevant here).\n\nIt appears that what we've got on AIX is that typalign 'd' overstates the\nactual alignment requirement for 'double', which is safe from the SIGBUS\nangle. However, it is a problem for our usage with system catalogs,\nwhere our C struct declarations may not line up with the way that a\ntuple is constructed by the tuple assembly routines.\n\nI concur that Noah's description of #2 is not an accurate statement\nof the rules we'd have to impose to be sure that the C structs line up\nwith the actual tuple layouts. I don't think we want rules exactly,\nwhat we need is mechanical verification that the field orderings in\nuse are safe. 
The last time I looked at this thread, what was being\ndiscussed was (a) re-ordering pg_subscription's columns and (b)\nadding some kind of regression test to verify that all catalogs meet\nthe expectation of 'd'-aligned fields not needing alignment padding\nthat an AIX compiler might choose not to insert. That still seems\nlike the most plausible answer to me. I don't especially want to\ninvent an additional typalign code that we could only test on legacy\nplatforms.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jun 2022 10:39:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 10:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That's a fundamental misreading of the situation. typalign is essential\n> on alignment-picky architectures, else you will get a SIGBUS fault\n> when trying to fetch a multibyte value (whether it's just going to get\n> stored into a Datum array is not very relevant here).\n\nI mean, that problem is easily worked around. Maybe you think memcpy\nwould be a lot slower than a direct assignment, but \"essential\" is a\nstrong word.\n\n> I concur that Noah's description of #2 is not an accurate statement\n> of the rules we'd have to impose to be sure that the C structs line up\n> with the actual tuple layouts. I don't think we want rules exactly,\n> what we need is mechanical verification that the field orderings in\n> use are safe. The last time I looked at this thread, what was being\n> discussed was (a) re-ordering pg_subscription's columns and (b)\n> adding some kind of regression test to verify that all catalogs meet\n> the expectation of 'd'-aligned fields not needing alignment padding\n> that an AIX compiler might choose not to insert. That still seems\n> like the most plausible answer to me. I don't especially want to\n> invent an additional typalign code that we could only test on legacy\n> platforms.\n\nI agree with that, but I don't think that having the developers\nenforce alignment rules by reordering catalog columns for the sake of\nlegacy platforms is appealing either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 10:53:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jun 22, 2022 at 10:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't especially want to\n>> invent an additional typalign code that we could only test on legacy\n>> platforms.\n\n> I agree with that, but I don't think that having the developers\n> enforce alignment rules by reordering catalog columns for the sake of\n> legacy platforms is appealing either.\n\nGiven that we haven't run into this before, it seems like a reasonable\nbet that the problem will seldom arise. So as long as we have a\ncross-check I'm all right with calling it good and moving on. Expending\na whole lot of work to improve the situation seems uncalled-for.\n\nWhen and if we get to a point where we're ready to break on-disk\ncompatibility for user tables, perhaps revisiting the alignment\nrules would be an appropriate component of that. I don't see that\nhappening in the foreseeable future, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Jun 2022 11:01:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 11:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Given that we haven't run into this before, it seems like a reasonable\n> bet that the problem will seldom arise. So as long as we have a\n> cross-check I'm all right with calling it good and moving on. Expending\n> a whole lot of work to improve the situation seems uncalled-for.\n\nAll right. Well, I'm on record as not liking that solution, but\nobviously you can and do feel differently.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Jun 2022 11:02:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 09:50:02AM -0400, Robert Haas wrote:\n> On Wed, Jun 22, 2022 at 12:28 AM Noah Misch <noah@leadboat.com> wrote:\n> > Here's how I rank the options, from most-preferred to least-preferred:\n> >\n> > 1. Put new eight-byte fields at the front of each catalog, when in doubt.\n> > 2. On systems where double alignment differs from int64 alignment, require\n> > NAMEDATALEN%8==0. Upgrading to v16 would require dump/reload for AIX users\n> > changing NAMEDATALEN to conform to the new restriction.\n> > 3. Introduce new typalign values. Upgrading to v16 would require dump/reload\n> > for all AIX users.\n> > 4. De-support AIX.\n> > 5. From above, \"Somehow forcing values that are sometimes 4-byte aligned and\n> > sometimes 8-byte aligned to be 8-byte alignment on all platforms\".\n> > Upgrading to v16 would require dump/reload for all AIX users.\n> > 6. Require -qalign=natural on AIX. Upgrading to v16 would require dump/reload\n> > and possible system library rebuilds for all AIX users.\n> >\n> > I gather (1) isn't at the top of your ranking, or you wouldn't have written\n> > in. What do you think of (2)?\n> \n> (2) pleases me in the sense that it seems to inconvenience very few\n> people, perhaps no one, in order to avoid inconveniencing a larger\n> number of people. However, it doesn't seem sufficient.\n\nHere's a more-verbose description of (2), with additions about what it does\nand doesn't achieve:\n\n2. On systems where double alignment differs from int64 alignment, require\n NAMEDATALEN%8==0. Modify the test from commits 79b716c and c1da0ac to stop\n treating \"name\" fields specially. The test will still fail for AIX\n compatibility violations, but \"name\" columns no longer limit your field\n position candidates like they do today (today == option (1)). Upgrading to\n v16 would require dump/reload for AIX users changing NAMEDATALEN to conform\n to the new restriction. 
(I'm not sure pg_upgrade checks NAMEDATALEN\n compatibility, but it should require at least one of: same NAMEDATALEN, or\n absence of \"name\" columns in user tables.)\n\n> If I understand\n> correctly, even a catalog that includes no NameData column can have a\n> problem.\n\nCorrect.\n\nOn Wed, Jun 22, 2022 at 10:39:20AM -0400, Tom Lane wrote:\n> It appears that what we've got on AIX is that typalign 'd' overstates the\n> actual alignment requirement for 'double', which is safe from the SIGBUS\n> angle.\n\nOn AIX, typalign='d' states the exact alignment requirement for 'double'. It\nunderstates the alignment requirement for int64_t.\n\n> I don't think we want rules exactly, what we need is mechanical verification\n> that the field orderings in use are safe.\n\nCommits 79b716c and c1da0ac did that.\n\n\n",
"msg_date": "Wed, 22 Jun 2022 19:48:24 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
},
{
"msg_contents": "On Wed, Jun 22, 2022 at 10:48 PM Noah Misch <noah@leadboat.com> wrote:\n> Here's a more-verbose description of (2), with additions about what it does\n> and doesn't achieve:\n>\n> 2. On systems where double alignment differs from int64 alignment, require\n> NAMEDATALEN%8==0. Modify the test from commits 79b716c and c1da0ac to stop\n> treating \"name\" fields specially. The test will still fail for AIX\n> compatibility violations, but \"name\" columns no longer limit your field\n> position candidates like they do today (today == option (1)). Upgrading to\n> v16 would require dump/reload for AIX users changing NAMEDATALEN to conform\n> to the new restriction. (I'm not sure pg_upgrade checks NAMEDATALEN\n> compatibility, but it should require at least one of: same NAMEDATALEN, or\n> absence of \"name\" columns in user tables.)\n\nDoing this much seems pretty close to free to me. I doubt anyone\nreally cares about using a NAMEDATALEN value that is not a multiple of\n8 on any platform. I also think there are few people who care about\nAIX. The intersection must be very small indeed, or so I would think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Jun 2022 09:58:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Skipping logical replication transactions on subscriber side"
}
] |
[
{
"msg_contents": "Hello pg-devs,\n\nI have given a go at proposing a replacement for rand48.\n\nPOSIX 1988 (?) rand48 is a LCG PRNG designed to generate 32 bits integers \nor floats based on a 48 bits state on 16 or 32 bits architectures. LCG \ncycles on the low bits, which can be quite annoying. Given that we run on \n64 bits architectures and that we need to generate 64 bits ints or \ndoubles, IMHO it makes very little sense to stick to that.\n\nWe should (probably) want:\n - one reasonable default PRNG for all pg internal uses.\n - NOT to invent a new design!\n - something fast, close to rand48 (which basically does 2 arithmetic\n ops, so it is hard to compete)\n no need for something cryptographic though, which would imply slow\n - to produce 64 bits integers & doubles with a 52 bits mantissa,\n so state size > 64 bits.\n - a small state though, because we might generate quite a few of them\n for different purposes so state size <= 256 or even <= 128 bits\n - the state to be aligned to whatever => 128 bits\n - 64 bits operations for efficiency on modern architectures,\n but not 128 bits operations.\n - not to depend on special hardware for speed (eg MMX/SSE/AES).\n - not something with obvious known and relevant defects.\n - not something with \"rights\" attached.\n\nThese constraints reduce drastically the available options from \nhttps://en.wikipedia.org/wiki/List_of_random_number_generators\n\nThe attached patch removes \"rand48\" and adds a \"pg_prng\" implementation \nbased on xoroshiro128ss, and replaces it everywhere. In pgbench, the non \nportable double-relying code is replaced by hopefully portable ints. The \ninterface makes it easy to replace the underlying PRNG if something else \nis desired.\n\nThanks for your feedback.\n\n-- \nFabien.",
"msg_date": "Mon, 24 May 2021 12:31:29 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "rand48 replacement"
},
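For readers unfamiliar with the generator named in the proposal, the xoroshiro128** step function is tiny. Below is a sketch following the published reference algorithm (the function names are illustrative, not the patch's actual pg_prng API):

```c
#include <stdint.h>

/* 64-bit left rotation, a building block of xoroshiro128** */
static inline uint64_t rotl64(uint64_t x, int k)
{
    return (x << k) | (x >> (64 - k));
}

/* One step of xoroshiro128** 1.0 (Blackman & Vigna): advances the
 * 128-bit state (s0, s1) exactly once and returns 64 bits of output. */
static uint64_t xoroshiro128ss_next(uint64_t *s0, uint64_t *s1)
{
    uint64_t a = *s0;
    uint64_t b = *s1;
    uint64_t result = rotl64(a * 5, 7) * 9;

    b ^= a;
    *s0 = rotl64(a, 24) ^ b ^ (b << 16);
    *s1 = rotl64(b, 37);
    return result;
}
```

Each call costs a handful of shifts, xors, and two multiplies, which is why the overall speed can stay close to rand48's two arithmetic ops.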
{
"msg_contents": "Hi,\n\nOn 5/24/21 12:31 PM, Fabien COELHO wrote:\n> \n> Hello pg-devs,\n> \n> I have given a go at proposing a replacement for rand48.\n> \n\nSo what is the motivation for replacing rand48? Speed, quality of \nproduced random numbers, features rand48 can't provide, or what?\n\n> POSIX 1988 (?) rand48 is a LCG PRNG designed to generate 32 bits \n> integers or floats based on a 48 bits state on 16 or 32 bits \n> architectures. LCG cycles on the low bits, which can be quite annoying. \n> Given that we run on 64 bits architectures and that we need to generate \n> 64 bits ints or doubles, IMHO it makes very little sense to stick to that.\n> \n> We should (probably) want:\n> - one reasonable default PRNG for all pg internal uses.\n> - NOT to invent a new design!\n> - something fast, close to rand48 (which basically does 2 arithmetic\n>   ops, so it is hard to compete)\n>   no need for something cryptographic though, which would imply slow\n> - to produce 64 bits integers & doubles with a 52 bits mantissa,\n>   so state size > 64 bits.\n> - a small state though, because we might generate quite a few of them\n>   for different purposes so state size <= 256 or even <= 128 bits\n> - the state to be aligned to whatever => 128 bits\n> - 64 bits operations for efficiency on modern architectures,\n>   but not 128 bits operations.\n> - not to depend on special hardware for speed (eg MMX/SSE/AES).\n> - not something with obvious known and relevant defects.\n> - not something with \"rights\" attached.\n> \n> These constraints reduce drastically the available options from \n> https://en.wikipedia.org/wiki/List_of_random_number_generators\n> \n> The attached patch removes \"rand48\" and adds a \"pg_prng\" implementation \n> based on xoroshiro128ss, and replaces it everywhere. In pgbench, the non \n> portable double-relying code is replaced by hopefully portable ints. The \n> interface makes it easy to replace the underlying PRNG if something else \n> is desired.\n> \n\nxoroshiro seems reasonable. How does it compare to rand48? Does it need \nmuch less/more state, is it faster/slower, etc.? I'd expect that it \nproduces a better random sequence, considering rand48 is an LCG, which is \na fairly simple, decades-old design.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 May 2021 12:57:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello Tomas,\n\n>> I have given a go at proposing a replacement for rand48.\n>\n> So what is the motivation for replacing rand48? Speed, quality of produced \n> random numbers, features rand48 can't provide, or what?\n\nSpeed can only be near rand48, see below. Quality (eg no trivial cycles, \ndoes not fail too quickly on statistical tests) and soundness (the point \nof generating 52-64 bits of data out of 48 bits, which means that only a \nsmall part of the target space is covered, eludes me). Also, I like having \nan implementation-independent interface (current rand48 tells the name of \nthe algorithm everywhere and the \"uint16[3]\" state type is hardcoded in \nseveral places).\n\n>> POSIX 1988 (?) rand48 is a LCG PRNG designed to generate 32 bits integers \n>> or floats based on a 48 bits state on 16 or 32 bits architectures. LCG \n>> cycles on the low bits, which can be quite annoying. Given that we run on \n>> 64 bits architectures and that we need to generate 64 bits ints or doubles, \n>> IMHO it makes very little sense to stick to that.\n>> \n>> We should (probably) want:\n>> - one reasonable default PRNG for all pg internal uses.\n>> - NOT to invent a new design!\n>> - something fast, close to rand48 (which basically does 2 arithmetic\n>>   ops, so it is hard to compete)\n>>   no need for something cryptographic though, which would imply slow\n>> - to produce 64 bits integers & doubles with a 52 bits mantissa,\n>>   so state size > 64 bits.\n>> - a small state though, because we might generate quite a few of them\n>>   for different purposes so state size <= 256 or even <= 128 bits\n>> - the state to be aligned to whatever => 128 bits\n>> - 64 bits operations for efficiency on modern architectures,\n>>   but not 128 bits operations.\n>> - not to depend on special hardware for speed (eg MMX/SSE/AES).\n>> - not something with obvious known and relevant defects.\n>> - not something with \"rights\" attached.\n>> \n>> These constraints reduce drastically the available options from \n>> https://en.wikipedia.org/wiki/List_of_random_number_generators\n>> \n>> The attached patch removes \"rand48\" and adds a \"pg_prng\" implementation \n>> based on xoroshiro128ss, and replaces it everywhere. In pgbench, the non \n>> portable double-relying code is replaced by hopefully portable ints. The \n>> interface makes it easy to replace the underlying PRNG if something else is \n>> desired.\n>\n> xoroshiro seems reasonable. How does it compare to rand48? Does it need much \n> less/more state, is it faster/slower, etc.?\n\nBasically any PRNG should be slower than or comparable to rand48 because it \nonly does 2 arithmetic ops; you cannot do much less when trying to steer bits. \nHowever, because of the 16-bits unpacking/packing on 64 bits architectures \nthere is some room for additional useful ops, so in the end, from the end \nuser's perspective, the performance is only about 5% lower.\n\nState is 16 bytes vs 6 bytes for rand48. This is ok for generating 8 bytes \nper round and is still quite small.\n\n> I'd expect that it produces a better random sequence, considering rand48 \n> is an LCG, which is a fairly simple, decades-old design.\n\nYep, it does not cycle trivially on low bits compared to an LCG (eg odd -> \neven -> odd -> even -> ...), e.g. if you have the bad idea to do \"% 2\" on \nan LCG to extract a bool you just alternate.\n\nTo summarize:\n - better software engineering\n - similar speed (slightly slower)\n - better statistical quality\n - quite small state\n - soundness\n\n-- \nFabien.",
"msg_date": "Mon, 24 May 2021 13:28:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
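The "% 2" remark can be checked directly from rand48's published constants: both the multiplier 0x5DEECE66D and the increment 0xB are odd, so modulo 2 the recurrence collapses to x' = x + 1 and the low bit flips on every step. A standalone sketch:

```c
#include <stdint.h>

/* One step of the POSIX *rand48 LCG: x' = (a*x + c) mod 2^48,
 * with the standard constants a = 0x5DEECE66D and c = 0xB. */
static uint64_t rand48_step(uint64_t x)
{
    return (UINT64_C(0x5DEECE66D) * x + UINT64_C(0xB))
           & ((UINT64_C(1) << 48) - 1);
}
```

Since a and c are both odd, (a*x + c) mod 2 == (x + 1) mod 2, which is exactly the odd/even/odd/even alternation described above.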
{
"msg_contents": "\n\n> 24 мая 2021 г., в 15:31, Fabien COELHO <coelho@cri.ensmp.fr> написал(а):\n> \n> \n> - NOT to invent a new design!\n\nRadical version of this argument would be to use de-facto standard and ubiquitous MT19937.\nThough, I suspect, it's not optimal solution to the date.\n\nBest regards, Andrey Borodin.\n\n\n\n",
"msg_date": "Mon, 24 May 2021 17:30:16 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hi Fabien,\n\n> To summarize:\n> - better software engineering\n> - similar speed (slightly slower)\n> - better statistical quality\n> - quite small state\n> - soundness\n\nPersonally, I think your patch is great. Speaking of speed, I\nbelieve we should consider the performance of the entire DBMS in\ntypical scenarios, not the performance of a single procedure. I'm\npretty sure in these terms the impact of your patch is negligible\nnow, and almost certainly beneficial in the long term because of\nbetter randomness.\n\nWhile reviewing your patch I noticed that you missed\ntest_integerset.c. Here is an updated patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 24 May 2021 16:08:16 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Andrey,\n\n>> - NOT to invent a new design!\n>\n> Radical version of this argument would be to use de-facto standard and \n> ubiquitous MT19937.\n\nIndeed, I started considering this one for this reason, obviously.\n\n> Though, I suspect, it's not optimal solution to the date.\n\n\"not optimal\" does not do justice to the issues.\n\nThe main one is the huge 2.5 KB state of MT19937, which makes it quite \nimpractical for plenty of internal and temporary uses. In pgbench there \nare many PRNGs needed for reproducibility (eg one global, 3 per thread, one \nper client) plus a temporary one internal to a function call (permute) \nwhich is expected to be reasonably fast, so should not start by \ninitializing 2.5 KB of data. In postgres there are 2 permanent ones (sql \nrandom, C double random) plus some in geqo and in sampling internal \nstructures.\n\nSo standard MT19937 is basically out of the equation. It also happens to \nfail some statistical tests and is not very fast. It has an insanely huge \ncycle, but pg does not need that, and probably nobody does. The only good \npoint is that it is a standard, which IMO is not enough to outweigh the other \nissues.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 24 May 2021 15:08:57 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Aleksander,\n\n>> - better software engineering\n>> - similar speed (slightly slower)\n>> - better statistical quality\n>> - quite small state\n>> - soundness\n>\n> Personally, I think your patch is great.\n\nThanks for having a look!\n\n> Speaking of speed, I believe we should consider the performance of \n> the entire DBMS in typical scenarios, not the performance of a single \n> procedure.\n\nSure. I tested a worst-case pgbench script with only \"\\set i random(1, \n100000000)\" in a loop; the slowdown was a few percent (AFAICR < 5%).\n\n> I'm pretty sure in these terms the impact of your patch is negligible \n> now, and almost certainly beneficial in the long term because of better \n> randomness.\n>\n> While reviewing your patch I noticed that you missed test_integerset.c. \n> Here is an updated patch.\n\nIndeed. Thanks for the catch & the v2 patch!\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 24 May 2021 15:22:58 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Sorry for the duplicate entry on the CF web application",
"msg_date": "Mon, 24 May 2021 14:09:01 +0000",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nAlthough the patch looks OK I would like to keep the status \"Needs review\" for now in case someone would like to join the discussion.",
"msg_date": "Mon, 24 May 2021 14:11:47 +0000",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> Although the patch looks OK I would like to keep the status \"Needs review\" for now in case someone would like to join the discussion.\n\nOk, fine with me.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 24 May 2021 17:35:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Although this patch is marked RFC, the cfbot shows it doesn't\neven compile on Windows. I think you missed updating Mkvcbuild.pm.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Jul 2021 12:20:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "> Although this patch is marked RFC, the cfbot shows it doesn't\n> even compile on Windows. I think you missed updating Mkvcbuild.pm.\n\nIndeed. Here is a blind attempt at fixing the build, I'll check later to \nsee whether it works. It would help me if the cfbot results were \nintegrated into the cf app.\n\n-- \nFabien.",
"msg_date": "Thu, 1 Jul 2021 18:51:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Although this patch is marked RFC, the cfbot shows it doesn't\n>> even compile on Windows. I think you missed updating Mkvcbuild.pm.\n\n> Indeed. Here is a blind attempt at fixing the build, I'll check later to \n> see whether it works. It would help me if the cfbot results were \n> integrated into the cf app.\n\nHmm, not there yet per cfbot, not sure why not.\n\nAnyway, after taking a very quick look at the patch itself, I've\ngot just one main objection: I don't approve of putting this in\nport.h or src/port/. erand48.c is there because we envisioned it\noriginally as an occasionally-used substitute for libc facilities.\nBut this is most certainly not that, so it belongs in src/common/\ninstead. I'd also be inclined to invent a new single-purpose .h\nfile for it.\n\nI see that you probably did that because random.c and srandom.c\ndepend on it, but I wonder why we don't make an effort to flush\nthose altogether. It's surely pretty confusing to newbies that\nwhat appears to be a call of the libc primitives is no such thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Jul 2021 14:41:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Thu, 1 Jul 2021 at 19:41, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Anyway, after taking a very quick look at the patch itself, I've\n> got just one main objection: I don't approve of putting this in\n> port.h or src/port/.\n\nI haven't looked at the patch in detail, but one thing I object to is\nthe code to choose a random integer in an arbitrary range.\n\nCurrently, this is done in pgbench by getrand(), which has its\nproblems. However, this patch seems to be replacing that with a simple\nmodulo operation, which is perhaps the worst possible way to do it.\nThere's plenty of research out there on how to do it better -- see,\nfor example, [1] for a nice summary.\n\nAlso, I'd say that functions to choose random integers in an arbitrary\nrange ought to be part of the common API, as they are in almost every\nlanguage's random API.\n\nRegards,\nDean\n\n[1] https://www.pcg-random.org/posts/bounded-rands.html\n\n\n",
"msg_date": "Thu, 1 Jul 2021 20:45:04 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Tom,\n\n>> Indeed. Here is a blind attempt at fixing the build, I'll check later to\n>> see whether it works. It would help me if the cfbot results were\n>> integrated into the cf app.\n>\n> Hmm, not there yet per cfbot, not sure why not.\n\nI'll investigate.\n\n> Anyway, after taking a very quick look at the patch itself, I've\n> got just one main objection: I don't approve of putting this in\n> port.h or src/port/. erand48.c is there because we envisioned it\n> originally as an occasionally-used substitute for libc facilities.\n> But this is most certainly not that, so it belongs in src/common/\n> instead.\n\nOk, this would make sense.\n\n> I'd also be inclined to invent a new single-purpose .h\n> file for it.\n\nHmmm. Why not.\n\n> I see that you probably did that because random.c and srandom.c\n> depend on it, but I wonder why we don't make an effort to flush\n> those altogether.\n\nOk for removing them. They are used in contrib where they can be replaced. \nI hope that extensions would not depend on that, though.\n\n> It's surely pretty confusing to newbies that what appears to be a call \n> of the libc primitives is no such thing.\n\nI do not understand your point.\n\nIf people believe the current random() implementation to be *the* libc \nprimitive, then my linux doc says \"The random() function uses a nonlinear \nadditive feedback random number generator employing a default table of \nsize 31 long integers to return successive pseudo-random numbers in the \nrange from 0 to RAND_MAX. The period of this random number generator is \nvery large, approximately 16 * ((2^31) - 1).\", which is pretty far from \nthe rand48 implementation provided in port, so ISTM that the confusion is \nalready there?\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 22:36:08 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello Dean,\n\n> I haven't looked at the patch in detail, but one thing I object to is\n> the code to choose a random integer in an arbitrary range.\n\nThanks for bringing up this interesting question!\n\n> Currently, this is done in pgbench by getrand(), which has its\n> problems.\n\nYes. That is one of the motivations for providing something hopefully \nbetter.\n\n> However, this patch seems to be replacing that with a simple\n> modulo operation, which is perhaps the worst possible way to do it.\n\nI did it knowing this issue. Here is why:\n\nThe modulo operation is biased for large ranges close to the limit, sure. \nAlso, the bias is somehow of the same magnitude as the FP multiplication \napproach used previously, so the \"worst\" has not changed much; it is \nreally the same as before.\n\nI thought it was not such an issue because for typical uses we are unlikely \nto be in these conditions, so the one-operation, no-branching approach \nseemed like a good precision vs performance compromise: I'd expect the \ntypical largest ranges to be well below 40 bits (eg a key in a pretty \nlarge table in pgbench), which makes the bias well under 1/2**24, and ISTM \nthat I can live with that. With the initial 48 bits state, obviously the \nsituation was not the same.\n\n> There's plenty of research out there on how to do it better -- see,\n> for example, [1] for a nice summary.\n\nRejection methods include branches, and thus may cost significantly more, as \nshown by the performance figures in the blog.\n\nAlso, it somehow breaks the sequence determinism when using a range, which I \nfound quite annoying: ISTM desirable that when generating a number the \nstate advances once, and just once.\n\nAlso some methods have higher costs depending on the actual range, eg the \nbitmask approach: for range 129 the bitmask is 0xff and you have a nearly \n50% probability of iterating once, nearly 25% of iterating twice, and so \non… I like performance to be uniform, not to depend on actual values.\n\nGiven these arguments I'd be inclined to keep the bias, but I'm open to \nmore discussion.\n\n> Also, I'd say that functions to choose random integers in an arbitrary \n> range ought to be part of the common API, as they are in almost every \n> language's random API.\n\nThat is a good point.\n\n-- \nFabien.",
"msg_date": "Thu, 1 Jul 2021 23:18:44 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
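The magnitude and shape of the modulo bias can be made concrete by exhausting a scaled-down generator — here a hypothetical ideal 8-bit generator (every output 0..255 equally likely) reduced to a range of 200, the analogue of a range close to the 64-bit limit:

```c
#include <stdint.h>

/* Count how often each value in [0, n) is produced when every output of
 * an ideal 8-bit generator (0..255) is reduced with "% n". */
static void modulo_counts(unsigned n, unsigned counts[])
{
    for (unsigned i = 0; i < n; i++)
        counts[i] = 0;
    for (unsigned x = 0; x < 256; x++)
        counts[x % n]++;        /* the 256 % n smallest values get one extra hit */
}
```

For n = 200, values 0..55 are produced twice while 56..199 are produced once, so the over-represented values all sit at the low end of the range; when the range is tiny compared to the generator's word size (eg 40-bit ranges out of 64 bits), the same effect shrinks to the 1/2**24 magnitude mentioned above.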
{
"msg_contents": "On Thu, 1 Jul 2021 at 22:18, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > However, this patch seems to be replacing that with a simple\n> > modulo operation, which is perhaps the worst possible way to do it.\n>\n> The modulo operation is biased for large ranges close to the limit, sure.\n> Also, the bias is somehow of the same magnitude as the FP multiplication\n> approach used previously, so the \"worst\" has not changed much, it is\n> really the same as before.\n>\n\nIt may be true that the bias is of the same magnitude as FP multiply,\nbut it is not of the same nature. With FP multiply, the\nmore-likely-to-be-chosen values are more-or-less evenly distributed\nacross the range, whereas modulo concentrates them all at one end,\nmaking it more likely to bias test results.\n\nIt's worth paying attention to how other languages/libraries implement\nthis, and basically no one chooses the modulo method, which ought to\nraise alarm bells. Of the biased methods, it has the worst kind of\nbias and the worst performance.\n\nIf a biased method is OK, then the biased integer multiply method\nseems to be the clear winner. This requires the high part of a\n64x64-bit product, which is trivial if 128-bit integers are available,\nbut would need a little more work otherwise. There's some code in\ncommon/d2s that might be suitable.\n\nMost other implementations tend to use an unbiased method though, and\nI think it's worth doing the same. It might be a bit slower, or even\nfaster depending on implementation and platform, but in the context of\nthe DB as a whole, I don't think a few extra cycles matters either\nway. The method recommended at the very end of that blog seems to be\npretty good all round, but any of the other commonly used unbiased\nmethods would probably be OK too.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 2 Jul 2021 09:31:41 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
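A sketch of the biased integer-multiply method mentioned above, using the gcc/clang __uint128_t extension for the high half of the 64x64 product (on compilers without it, the high part would have to be composed from 32-bit pieces, as the message notes):

```c
#include <stdint.h>

/* Map a uniform 64-bit draw x into [0, n) by taking the high 64 bits of
 * the 128-bit product x * n. Like "% n" this is biased for large n, but
 * the extra-probability values are spread evenly across [0, n) instead
 * of being concentrated at the low end. */
static uint64_t mul_bounded(uint64_t x, uint64_t n)
{
    return (uint64_t) (((__uint128_t) x * n) >> 64);
}
```

Note that the mapping is order-preserving: small draws land near 0, large draws near n-1, which is why the bias spreads across the whole range.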
{
"msg_contents": "\nHello Dean,\n\n> It may be true that the bias is of the same magnitude as FP multiply, \n> but it is not of the same nature. With FP multiply, the \n> more-likely-to-be-chosen values are more-or-less evenly distributed \n> across the range, whereas modulo concentrates them all at one end, \n> making it more likely to bias test results.\n\nYes, that is true.\n\n> It's worth paying attention to how other languages/libraries implement\n> this, and basically no one chooses the modulo method, which ought to\n> raise alarm bells. Of the biased methods, it has the worst kind of\n> bias and the worst performance.\n\nHmmm. That is not exactly how I interpreted the figures in the blog.\n\n> If a biased method is OK, then the biased integer multiply method\n> seems to be the clear winner. This requires the high part of a\n> 64x64-bit product, which is trivial if 128-bit integers are available,\n> but would need a little more work otherwise. There's some code in\n> common/d2s that might be suitable.\n\nAnd yes, modulo is expensive. If we allowed 128-bit integer operations, I \nwould not choose this PRNG in the first place, I'd take PCG with a 128-bit \nstate. That does not change the discussion about bias, though.\n\n> Most other implementations tend to use an unbiased method though, and I \n> think it's worth doing the same. It might be a bit slower, or even \n> faster depending on implementation and platform, but in the context of \n> the DB as a whole, I don't think a few extra cycles matters either way.\n\nOk ok ok, I surrender!\n\n> The method recommended at the very end of that blog seems to be pretty \n> good all round, but any of the other commonly used unbiased methods \n> would probably be OK too.\n\nThat does not address my other issues with the proposed methods, in \nparticular the fact that the generated sequence is less deterministic, but \nI think I have a simple way around that. I'm hesitating to skip to the \nbitmask method, and give up performance uniformity. I'll try to come up \nwith something over the week-end, and also address Tom's comments in \npassing.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 2 Jul 2021 23:51:42 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
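For concreteness, here is a sketch of the bitmask (rejection) method under discussion, with the underlying generator abstracted behind a callback — an unbiased scheme in which every rejection advances the generator state once more:

```c
#include <stdint.h>

/* Abstract generator: returns the next uniform 64-bit value and
 * advances *state. (Illustrative signature, not the patch's API.) */
typedef uint64_t (*next_fn)(void *state);

/* Unbiased draw in [0, n), n >= 1: mask down to the smallest
 * power-of-two range covering n, then reject and redraw until the
 * value lands inside [0, n). */
static uint64_t bitmask_bounded(next_fn next, void *state, uint64_t n)
{
    /* smallest mask of the form 2^k - 1 with mask >= n - 1 */
    uint64_t mask = n - 1;
    mask |= mask >> 1;  mask |= mask >> 2;  mask |= mask >> 4;
    mask |= mask >> 8;  mask |= mask >> 16; mask |= mask >> 32;

    uint64_t v;
    do
        v = next(state) & mask;
    while (v >= n);
    return v;
}
```

For n = 129 the mask is 0xff, so each draw is rejected with probability 127/256 — the variable iteration count (and the variable number of state advances) that the surrounding messages are debating.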
{
"msg_contents": "Hello Dean & Tom,\n\nHere is a v4, which:\n\n - moves the stuff to common and fully removes random/srandom (Tom)\n - includes a range generation function based on the bitmask method (Dean)\n but iterates with splitmix so that the state always advances once (Me)\n\n-- \nFabien.",
"msg_date": "Sat, 3 Jul 2021 09:06:06 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "> Here is a v4, which:\n>\n> - moves the stuff to common and fully removes random/srandom (Tom)\n> - includes a range generation function based on the bitmask method (Dean)\n> but iterates with splitmix so that the state always advances once (Me)\n\nAnd a v5 where an unused test file does also compile if we insist.\n\n-- \nFabien.",
"msg_date": "Sat, 3 Jul 2021 10:45:43 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO wrote 2021-07-03 11:45:\n> And a v5 where an unused test file does also compile if we insist.\n\nAbout patch:\n1. PostgreSQL source uses `uint64` and `uint32`, but not \n`uint64_t`/`uint32_t`\n2. I don't see why pg_prng_state could not be `typedef uint64 \npg_prng_state[2];`\n3. Then SamplerRandomState and pgbench RandomState could stay.\n   Patch will be a lot shorter.\n   I don't like mix of semantic refactoring and syntactic refactoring in the\n   same patch.\n   While I could agree with replacing `SamplerRandomState => pg_prng_state`,\n   I'd rather see it in a separate commit.\n   And that separate commit could contain the transition:\n   `typedef uint64 pg_prng_state[2];` => `typedef struct { uint64 s0, s1 } pg_prng_state;`\n4. There is no need for ReservoirStateData->randstate_initialized. There could\n   be a macro/function:\n   `bool pg_prng_initiated(state) { return (state[0]|state[1]) != 0; }`\n5. Is there need for 128bit prng at all? At least 2*64bit.\n   There is 2*32bit xoroshiro64 https://prng.di.unimi.it/xoroshiro64starstar.c\n   And there is 4*32bit xoshiro128: https://prng.di.unimi.it/xoshiro128plusplus.c\n   32bit operations are faster on 32bit platforms.\n   But 32bit platforms are quite rare in production these days.\n   Therefore I don't have a strong opinion on this.\n\nregards,\nSokolov Yura.\n\n\n",
"msg_date": "Sat, 03 Jul 2021 13:20:52 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Yura Sokolov <y.sokolov@postgrespro.ru> writes:\n> 2. I don't see why pg_prng_state could not be `typedef uint64 \n> pg_prng_state[2];`\n\nPlease no. That sort of typedef behaves so weirdly that it's\na foot-gun.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 03 Jul 2021 10:14:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
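The foot-gun is concrete: an array typedef silently decays to a pointer at every function boundary, so what looks like pass-by-value quietly aliases the caller's state, while a struct really is copied. A made-up illustration (these names are not the patch's):

```c
#include <stdint.h>

typedef uint64_t prng_state_arr[2];                  /* array typedef */
typedef struct { uint64_t s0, s1; } prng_state_st;   /* struct typedef */

/* The array parameter decays to uint64_t *, so this writes through to
 * the caller's state despite the by-value-looking signature. */
static void arr_touch(prng_state_arr s) { s[0] = 42; }

/* The struct parameter really is a copy; the caller's state is untouched. */
static void st_touch(prng_state_st s) { s.s0 = 42; }
```

The same decay also makes sizeof, assignment, and return-by-value behave differently for the two typedefs, which is presumably why the struct form was preferred.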
{
"msg_contents": "\nHello Yura,\n\n> 1. PostgreSQL source uses `uint64` and `uint32`, but not \n> `uint64_t`/`uint32_t`\n> 2. I don't see why pg_prng_state could not be `typedef uint64 \n> pg_prng_state[2];`\n\nIt could, but I do not see that as desirable. From an API design point of \nview we want something clean and abstract, and for me a struct looks \nbetter for that. It would be a struct with an array inside; I think that \nthe code is more readable by avoiding constant index accesses (s[0] vs \ns0), and we do not need actual indexes.\n\n> 3. Then SamplerRandomState and pgbench RandomState could stay.\n> Patch will be a lot shorter.\n\nYou mean \"typedef pg_prng_state SamplerRandomState\"? One point of the \npatch is to have \"one\" standard PRNG commonly used where one is needed, so \nI'd say we want the name to be used, hence the substitutions.\n\nAlso, I have a thing against objects being named \"Random\" which are not \nrandom, which is highly misleading. A PRNG is purely deterministic. \nRemoving misleading names is also a benefit.\n\nSo if people want to keep the old name it can be done. But I see these \nname changes as desirable.\n\n> I don't like mix of semantic refactoring and syntactic refactoring in \n> the same patch. While I could agree with replacing `SamplerRandomState \n> => pg_prng_state`, I'd rather see it in a separate commit. And that \n> separate commit could contain the transition: `typedef uint64 \n> pg_prng_state[2];` => `typedef struct { uint64 s0, s1 } pg_prng_state;`\n\nI would tend to agree on principle, but separating in two phases here \nlooks pointless: why implement a cleaner rand48 interface, which would \nthen NOT be the rand48 standard, just to upgrade it to something else in \nthe next commit? And the other path is as painful and pointless.\n\nSo I think that the new feature better comes with its associated \nrefactoring, which is an integral part of it.\n\n> 4. There is no need for ReservoirStateData->randstate_initialized. There could\n> be a macro/function:\n> `bool pg_prng_initiated(state) { return (state[0]|state[1]) != 0; }`\n\nIt would work for this particular implementation but not necessarily for \nothers that we may want to substitute later, as it would mean either \nbreaking the interface or adding a boolean in the structure if there is no \nspecial uninitialized state that can be detected, which would impact memory \nusage and alignment.\n\nSo I think it is better to keep it that way: usually the user knows \nwhether their structure has been initialized, and the special case of \nreservoir sampling, where the user does not seem to know, can handle its \nown boolean without impacting the common API or the data structure.\n\n> 5. Is there need for 128 bit prng at all?\n\nThis is a 64 bits PRNG with a 128 bits state. We are generating 64 bits \nvalues, so we want a 64 bits PRNG. A PRNG state must be larger than its \ngenerated value, so we need sizeof(state) > 64 bits, hence at least 128 \nbits if we add 128 bits memory alignment.\n\n> And there is 4*32bit xoshiro128: \n> https://prng.di.unimi.it/xoshiro128plusplus.c\n> 32bit operations are faster on 32bit platforms.\n> But 32bit platforms are quite rare in production these days.\n> Therefore I don't have a strong opinion on this.\n\nI think that 99.9% of the hardware running postgres is 64 bits, so 64 bits \nis the right choice.\n\n-- \nFabien.\n\n\n\n",
"msg_date": "Sat, 3 Jul 2021 17:26:00 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "> 1. PostgreSQL source uses `uint64` and `uint32`, but not \n> `uint64_t`/`uint32_t`\n\nIndeed you are right. Attached v6 does that as well.\n\n-- \nFabien.",
"msg_date": "Sat, 3 Jul 2021 17:36:02 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Sat, 3 Jul 2021 at 08:06, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> Here is a v4, which:\n>\n> - moves the stuff to common and fully removes random/srandom (Tom)\n> - includes a range generation function based on the bitmask method (Dean)\n> but iterates with splitmix so that the state always advances once (Me)\n\nAt the risk of repeating myself: do *not* invent your own scheme.\n\nThe problem with iterating using splitmix is that splitmix is a simple\nshuffling function that takes a single input and returns a mutated\noutput depending only on that input. So let's say for simplicity that\nyou're generating numbers in the range [0,N) with N=2^64-n and n<2^63.\nEach of the n values in [N,2^64) that lie outside the range wanted are\njust mapped in a deterministic way back onto (at most) n values in the\nrange [0,N), making those n values twice as likely to be chosen as the\nother N-n values. So what you've just invented is an algorithm with\nthe complexity of the unbiased bitmask method, but basically the same\nbias as the biased integer multiply method.\n\nI don't understand why you object to advancing the state more than\nonce. Doing so doesn't make the resulting sequence of numbers any less\ndeterministic.\n\nIn fact, I'm pretty sure you *have to* advance the state more than\nonce in *any* unbiased scheme. That's a common characteristic of all\nthe unbiased methods I've seen, and intuitively, I think it has to be\nso.\n\nOtherwise, I'm happy with the use of the bitmask method, as long as\nit's implemented correctly.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 4 Jul 2021 09:47:41 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Dean,\n\n>> - moves the stuff to common and fully removes random/srandom (Tom)\n>> - includes a range generation function based on the bitmask method (Dean)\n>> but iterates with splitmix so that the state always advances once (Me)\n>\n> At the risk of repeating myself: do *not* invent your own scheme.\n\n> The problem with iterating using splitmix is that splitmix is a simple\n> shuffling function that takes a single input and returns a mutated\n> output depending only on that input.\n\nIt also iterates over its 64 bits state in a round robin fashion so that \nthe cycle size is 2^64 (it is a simple add).\n\n> So let's say for simplicity that you're generating numbers in the range \n> [0,N) with N=2^64-n and n<2^63. Each of the n values in [N,2^64) that \n> lie outside the range wanted are just mapped in a deterministic way back \n> onto (at most) n values in the range [0,N), making those n values twice \n> as likely to be chosen as the other N-n values.\n\nI do understand your point. If the value is outside the range, splitmix \niterates over its seed and the extraction functions produces a new number \nwhich is tested again. I do not understand the \"mapped back onto\" part, \nthe out of range value is just discarded and the loops starts over with a \nnew derivation, and why it would imply that some values are more likely to \ncome out.\n\n> So what you've just invented is an algorithm with the complexity of the \n> unbiased bitmask method,\n\nThat is what I am trying to implement.\n\n> but basically the same bias as the biased integer multiply method.\n\nI did not understand.\n\n> I don't understand why you object to advancing the state more than\n> once. 
Doing so doesn't make the resulting sequence of numbers any less\n> deterministic.\n\nIt does, somehow, hence my struggle to try to avoid it.\n\n call seed(0xdeadbeef);\n x1 = somepseudorand();\n x2 = somepseudorand();\n x3 = somepseudorand();\n\nI think we should want x3 to be the same result whatever the previous \ncalls to the API.\n\n> In fact, I'm pretty sure you *have to* advance the state more than\n> once in *any* unbiased scheme. That's a common characteristic of all\n> the unbiased methods I've seen, and intuitively, I think it has to be\n> so.\n\nYes and no. We can advance another state seeded by the root PRNG.\n\n> Otherwise, I'm happy with the use of the bitmask method, as long as\n> it's implemented correctly.\n\nI did not understand why it is not correct.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 4 Jul 2021 11:35:09 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Sun, 4 Jul 2021 at 10:35, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> I did not understand why it is not correct.\n>\n\nWell, to make it easier to visualise, let's imagine our word size is\njust 3 bits instead of 64 bits, and that the basic prng() function\ngenerates numbers in the range [0,8). Similarly, imagine a splitmix3()\nthat operates on 3-bit values. So it might do something like this\n(state offset and return values made up):\n\nsplitmix3(state):\n state=0 -> 5, return 2\n state=1 -> 6, return 5\n state=2 -> 7, return 0\n state=3 -> 0, return 3\n state=4 -> 1, return 6\n state=5 -> 2, return 1\n state=6 -> 3, return 7\n state=7 -> 4, return 4\n\nNow suppose we want a random number in the range [0,6). This is what\nhappens with your algorithm for each of the possible prng() return\nvalues:\n\n prng() returns 0 -- OK\n prng() returns 1 -- OK\n prng() returns 2 -- OK\n prng() returns 3 -- OK\n prng() returns 4 -- OK\n prng() returns 5 -- OK\n prng() returns 6 -- out of range so use splitmix3() with initial state=6:\n state=6 -> 3, return 7 -- still out of range, so repeat\n state=3 -> 0, return 3 -- now OK\n prng() returns 7 -- out of range so use splitmix3() with initial state=7:\n state=7 -> 4, return 4 -- now OK\n\nSo, assuming that prng() chooses each of the 8 possible values with\nequal probability (1/8), the overall result is that the values 0,1,2\nand 5 are returned with a probability of 1/8, whereas 3 and 4 are\nreturned with a probability of 2/8.\n\nUsing the correct implementation of the bitmask algorithm, each\niteration calls prng() again, so in the end no particular return value\nis ever more likely than any other (hence it's unbiased).\n\nAs for determinism, the end result is still fully deterministic. 
For\nexample, let's say that prng() returns the following sequence, for some\ninitial state:\n\n 1,0,3,0,3,7,4,7,6,6,5,3,7,7,7,0,3,6,5,2,3,3,4,0,0,2,7,4,...\n\nthen the bitmask method just returns that sequence with all the 6's\nand 7's removed:\n\n 1,0,3,0,3,4,5,3,0,3,5,2,3,3,4,0,0,2,4,...\n\nand that same sequence will always be returned, when starting from\nthat initial seed.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 4 Jul 2021 12:30:41 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
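Dean's 3-bit walkthrough can be replayed mechanically. The following is a toy sketch, not PostgreSQL code: `splitmix3` hard-codes the made-up state/return table from the message above, and `biased_range` reproduces the criticised scheme of iterating the shuffle of a rejected draw instead of calling prng() again:

```c
#include <assert.h>

/* Made-up 3-bit "splitmix" from the example: from state s it returns
 * sm3_ret[s] and advances the state to (s + 5) % 8. */
static const int sm3_ret[8] = {2, 5, 0, 3, 6, 1, 7, 4};

int splitmix3(int *state)
{
    int r = sm3_ret[*state];

    *state = (*state + 5) % 8;
    return r;
}

/* The criticised scheme: on rejection, iterate splitmix3 seeded with the
 * rejected value rather than drawing a fresh prng() value. */
int biased_range(int first_draw, int range)
{
    int val = first_draw;
    int state = first_draw;

    while (val >= range)
        val = splitmix3(&state);
    return val;
}
```

Sweeping all eight equally likely prng() outputs through `biased_range(v, 6)` yields each of 0, 1, 2 and 5 once, but 3 twice (for v = 3 and v = 6) and 4 twice (for v = 4 and v = 7), which is exactly the 2/8 bias described above.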
{
"msg_contents": "> Now suppose we want a random number in the range [0,6). This is what\n> happens with your algorithm for each of the possible prng() return\n> values:\n>\n> prng() returns 0 -- OK\n> prng() returns 1 -- OK\n> prng() returns 2 -- OK\n> prng() returns 3 -- OK\n> prng() returns 4 -- OK\n> prng() returns 5 -- OK\n> prng() returns 6 -- out of range so use splitmix3() with initial state=6:\n> state=6 -> 3, return 7 -- still out of range, so repeat\n> state=3 -> 0, return 3 -- now OK\n> prng() returns 7 -- out of range so use splitmix3() with initial state=7:\n> state=7 -> 4, return 4 -- now OK\n>\n> So, assuming that prng() chooses each of the 8 possible values with\n> equal probability (1/8), the overall result is that the values 0,1,2\n> and 5 are returned with a probability of 1/8, whereas 3 and 4 are\n> returned with a probability of 2/8.\n\nOk, I got that explanation.\n\n> Using the correct implementation of the bitmask algorithm, each\n> iteration calls prng() again, so in the end no particular return value\n> is ever more likely than any other (hence it's unbiased).\n\nOk, you're taking into account the number of states of the PRNG, so this \nfinite number implies some bias on some values if you actually enumerate \nall possible cases, as you do above.\n\nI was reasoning \"as if\" the splitmix PRNG was an actual random function, \nwhich is obviously false, but is also somehow a usual (false) assumption \nwith PRNGs, and with this false assumption my implementation is perfect:-)\n\nThe defect of the modulo method for range extraction is that even with an \nactual (real) random generator the results would be biased. The bias is in \nthe method itself. Now you are arguing for a bias linked to the internals \nof the PRNG. 
They are not the same in nature, even if the effect is the \nsame.\n\nAlso the bias is significant for close-to-the-limit ranges, which is not \nthe kind of use case I have in mind when looking at pgbench.\n\nIf you consider the PRNG internals, then the splitmix extraction function \ncould also be taken into account. If it is not invertible (I'm unsure), \nthen assuming it is some kind of hash function, about 1/e of output values \nwould not be reachable, which is yet another bias that you could argue \nagainst.\n\nUsing the initial PRNG works better only because the underlying 128-bit \nstate is much larger than the output value. Which is the point for having \na larger state in the first place, anyway.\n\n> As for determinism, the end result is still fully deterministic. For\n> example, let's say that prng() returns the following sequence, for some\n> initial state:\n>\n> 1,0,3,0,3,7,4,7,6,6,5,3,7,7,7,0,3,6,5,2,3,3,4,0,0,2,7,4,...\n>\n> then the bitmask method just returns that sequence with all the 6's\n> and 7's removed:\n>\n> 1,0,3,0,3,4,5,3,0,3,5,2,3,3,4,0,0,2,4,...\n>\n> and that same sequence will always be returned, when starting from\n> that initial seed.\n\nYes and no.\n\nThe result is indeed deterministic if you call the function with the same \nrange. However, if you change the range value in one place then sometimes \nthe state can advance differently, so the determinism is lost, meaning \nthat it depends on actual range values.\n\nAttached a v7 which does as you wish, but also loses the deterministic, \nnon-value-dependent property I was seeking. I would work around that by \nderiving another 128-bit generator instead of splitmix 64-bit, but that is \noverkill.\n\n-- \nFabien.",
"msg_date": "Sun, 4 Jul 2021 18:03:56 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
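The modulo-method bias Fabien mentions can be made concrete on a toy word size. This sketch (illustrative only, not from the patch) exhaustively maps all equally likely 3-bit generator outputs into [0, range) with `%`:

```c
#include <assert.h>

/* How often does (v % range) equal `result` when v sweeps all 8 equally
 * likely outputs of an ideal 3-bit generator? */
int modulo_count(int result, int range)
{
    int count = 0;

    for (int v = 0; v < 8; v++)
        if (v % range == result)
            count++;
    return count;
}
```

With range = 6, the results 0 and 1 each occur twice (from v = 0, 6 and v = 1, 7) while 2..5 occur once, so even a perfectly uniform generator comes out biased; that is the bias in the method itself.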
{
"msg_contents": "On Sun, 4 Jul 2021 at 17:03, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > As for determinism, the end result is still fully deterministic.\n>\n> The result is indeed deterministic of you call the function with the same\n> range. However, if you change the range value in one place then sometimes\n> the state can advance differently, so the determinism is lost, meaning\n> that it depends on actual range values.\n\nAh yes, that's true. I can trivially reproduce that in other languages\ntoo. For example, in python, if I call random.seed(0) and then\nrandom.randrange() with the inputs 10,10,10 then the results are\n6,6,0. But if the randrange() inputs are 10,1000,10 then the results\nare 6,776,6. So the result from the 3rd call changes as a result of\nchanging the 2nd input. That's not entirely surprising. The important\nproperty of determinism is that if I set a seed, and then make an\nidentical set of calls to the random API, the results will be\nidentical every time, so that it's possible to write tests with\npredictable/repeatable results.\n\n> I would work around that by\n> deriving another 128 bit generator instead of splitmix 64 bit, but that is\n> overkill.\n\nNot really relevant now, but I'm pretty sure that's impossible to do.\nYou might try it as an interesting academic exercise, but I believe\nit's a logical impossibility.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sun, 4 Jul 2021 18:35:33 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "> The important property of determinism is that if I set a seed, and then \n> make an identical set of calls to the random API, the results will be \n> identical every time, so that it's possible to write tests with \n> predictable/repeatable results.\n\nHmmm… I like my stronger determinism definition more than this one:-)\n\n>> I would work around that by deriving another 128 bit generator instead \n>> of splitmix 64 bit, but that is overkill.\n>\n> Not really relevant now, but I'm pretty sure that's impossible to do.\n> You might try it as an interesting academic exercise, but I believe\n> it's a logical impossibility.\n\nHmmm… I was simply thinking of seeding a new pg_prng_state from the main \npg_prng_state with some transformation, and then iterate over the second \nPRNG, pretty much like I did with splitmix, but with 128 bits so that your \n#states argument does not apply, i.e. something like:\n\n /* select in a range with bitmask rejection */\n uint64 pg_prng_u64_range(pg_prng_state *state, uint64 range)\n {\n /* always advance state once */\n uint64 next = xoroshiro128ss(state);\n uint64 val;\n\n if (range >= 2)\n {\n uint64 mask = mask_u64(range-1);\n\n val = next & mask;\n\n if (val >= range)\n {\n /* copy and update current prng state */\n pg_prng_state iterator = *state;\n\n iterator.s0 ^= next;\n iterator.s1 += UINT64CONST(0x9E3779B97f4A7C15);\n\n /* iterate till val in [0, range) */\n while ((val = xoroshiro128ss(&iterator) & mask) >= range)\n ;\n }\n }\n else\n val = 0;\n\n return val;\n }\n\nThe initial pseudo-random sequence is left to proceed, and a new PRNG is \nbasically forked for iterating on the mask, if needed.\n\n-- \nFabien.",
"msg_date": "Sun, 4 Jul 2021 22:29:48 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO писал 2021-07-04 23:29:\n>> The important property of determinism is that if I set a seed, and \n>> then make an identical set of calls to the random API, the results \n>> will be identical every time, so that it's possible to write tests \n>> with predictable/repeatable results.\n> \n> Hmmm… I like my stronger determinism definition more than this one:-)\n> \n>>> I would work around that by deriving another 128 bit generator \n>>> instead of splitmix 64 bit, but that is overkill.\n>> \n>> Not really relevant now, but I'm pretty sure that's impossible to do.\n>> You might try it as an interesting academic exercise, but I believe\n>> it's a logical impossibility.\n> \n> Hmmm… I was simply thinking of seeding a new pg_prng_state from the\n> main pg_prng_state with some transformation, and then iterate over the\n> second PRNG, pretty much like I did with splitmix, but with 128 bits\n> so that your #states argument does not apply, i.e. something like:\n> \n> /* select in a range with bitmask rejection */\n> uint64 pg_prng_u64_range(pg_prng_state *state, uint64 range)\n> {\n> /* always advance state once */\n> uint64 next = xoroshiro128ss(state);\n> uint64 val;\n> \n> if (range >= 2)\n> {\n> uint64 mask = mask_u64(range-1);\n> \n> val = next & mask;\n> \n> if (val >= range)\n> {\n> /* copy and update current prng state */\n> pg_prng_state iterator = *state;\n> \n> iterator.s0 ^= next;\n> iterator.s1 += UINT64CONST(0x9E3779B97f4A7C15);\n> \n> /* iterate till val in [0, range) */\n> while ((val = xoroshiro128ss(&iterator) & mask) >= range)\n> ;\n> }\n> }\n> else\n> val = 0;\n> \n> return val;\n> }\n> \n> The initial pseudo-random sequence is left to proceed, and a new PRNG\n> is basically forked for iterating on the mask, if needed.\n\nI believe most \"range\" values are small, much smaller than UINT32_MAX.\nIn this case, according to [1] fastest method is Lemire's one (I'd take\noriginal version from [2])\n\nTherefore combined method 
pg_prng_u64_range could branch on the range value:\n\nuint64 pg_prng_u64_range(pg_prng_state *state, uint64 range)\n{\n uint64 val = xoroshiro128ss(state);\n uint64 m;\n if ((range & (range-1)) == 0) /* handle all power of 2 cases */\n return range != 0 ? val & (range-1) : 0;\n if (likely(range < PG_UINT32_MAX/32))\n {\n /*\n * Daniel Lemire's unbiased range random algorithm based on \nrejection sampling\n * https://lemire.me/blog/2016/06/30/fast-random-shuffling/\n */\n m = (uint32)val * range;\n if ((uint32)m < range)\n {\n uint32 t = -(uint32) range % (uint32) range;\n while ((uint32)m < t)\n m = (uint32)xoroshiro128ss(state) * range;\n }\n return m >> 32;\n }\n /* Apple's mask method */\n m = mask_u64(range-1);\n val &= m;\n while (val >= range)\n val = xoroshiro128ss(state) & m;\n return val;\n}\n\nThe mask method could also be faster when the range is close to the mask.\nFor example, a fast check for \"range is within 1/1024 of mask\" is\n range < (range/512 + (range&(range*2)))\n\nAnd then the method choice could look like:\n if (likely(range < UINT32_MAX/32 && range > (range/512 + \n(range&(range*2)))))\n\nBut I don't know whether the additional condition is worth the difference \nor not.\n\n[1] https://www.pcg-random.org/posts/bounded-rands.html\n[2] https://lemire.me/blog/2016/06/30/fast-random-shuffling/\n\nregards,\nSokolov Yura\n\n\n",
"msg_date": "Mon, 05 Jul 2021 09:36:27 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
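For reference, here is a self-contained sketch of the Lemire rejection technique quoted above, with the parentheses balanced; the `next32` splitmix64-style stepper is an assumption of this sketch standing in for xoroshiro128ss, not the patch's API:

```c
#include <stdint.h>
#include <assert.h>

/* Stand-in 32-bit source: a splitmix64-style stepper (assumed helper,
 * only here to make the sketch runnable). */
uint32_t next32(uint64_t *state)
{
    uint64_t z = (*state += UINT64_C(0x9E3779B97F4A7C15));

    z = (z ^ (z >> 30)) * UINT64_C(0xBF58476D1CE4E5B9);
    z = (z ^ (z >> 27)) * UINT64_C(0x94D049BB133111EB);
    return (uint32_t) (z ^ (z >> 31));
}

/* Lemire's multiply-and-reject method: unbiased value in [0, range). */
uint32_t range32(uint64_t *state, uint32_t range)
{
    uint64_t m = (uint64_t) next32(state) * range;
    uint32_t l = (uint32_t) m;

    if (l < range)
    {
        /* -range wraps to 2^32 - range, so t == 2^32 mod range */
        uint32_t t = -range % range;

        while (l < t)
        {
            m = (uint64_t) next32(state) * range;
            l = (uint32_t) m;
        }
    }
    return (uint32_t) (m >> 32);
}
```

The headache-inducing `t = -range % range` works because `range` is unsigned: the negation wraps modulo 2^32, making `t` the count of low products that must be rejected to keep each result equally likely.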
{
"msg_contents": "\nHello Yura,\n\n> I believe most \"range\" values are small, much smaller than UINT32_MAX.\n> In this case, according to [1] fastest method is Lemire's one (I'd take\n> original version from [2]) [...]\n\nYep.\n\nI share your point that the range is more often 32 bits.\n\nHowever, I'm not enthousiastic at combining two methods depending on the \nrange, the function looks complex enough without that, so I would suggest \nnot to take this option. Also, the decision process adds to the average \ncost, which is undesirable. I would certainly select the unbias multiply \nmethod if we want a u32 range variant.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 08:13:36 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO писал 2021-07-06 09:13:\n> Hello Yura,\n> \n>> I believe most \"range\" values are small, much smaller than UINT32_MAX.\n>> In this case, according to [1] fastest method is Lemire's one (I'd \n>> take\n>> original version from [2]) [...]\n> \n> Yep.\n> \n> I share your point that the range is more often 32 bits.\n> \n> However, I'm not enthousiastic at combining two methods depending on\n> the range, the function looks complex enough without that, so I would\n> suggest not to take this option. Also, the decision process adds to\n> the average cost, which is undesirable.\n\nGiven 99.99% cases will be in the likely case, branch predictor should\neliminate decision cost.\n\nAnd as Dean Rasheed correctly mentioned, mask method will\nhave really bad pattern for branch predictor if range is not just below\nor equal to power of two.\nFor example, rand_range(10000) will have 60% probability to pass through\n`while (val > range)` and 40% probability to go to next loop iteration.\nrand_range(100000) will have 76%/24% probabilities. Branch predictor\ndoesn't like it. Even rand_range(1000000), which is quite close to 2^20,\nwill have 95%/5%, and still not enough to please BP.\n\nBut with unbias multiply method it will be 0.0002%/99.9998% for 10000,\n0,002%/99.998% for 100000 and 0.02%/99.98% for 1000000 - much-much \nbetter.\nBranch predictor will make it almost free (i hope).\n\nAnd __builtin_clzl is not free lunch either, it has latency 3-4 cycles\non modern processor. Well, probably it could run in parallel with some\npart of xoroshiro, but it depends on how compiler will optimize this\nfunction.\n\n> I would certainly select the unbias multiply method if we want a u32 \n> range variant.\n\nThere could be two functions.\n\nregards,\nSokolov Yura.\n\n\n",
"msg_date": "Tue, 06 Jul 2021 10:19:45 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Yura,\n\n>> However, I'm not enthousiastic at combining two methods depending on\n>> the range, the function looks complex enough without that, so I would\n>> suggest not to take this option. Also, the decision process adds to\n>> the average cost, which is undesirable.\n>\n> Given 99.99% cases will be in the likely case, branch predictor should\n> eliminate decision cost.\n\nHmmm. ISTM that a branch predictor should predict that unknown < small \nshould probably be false, so a hint should be given that it is really \ntrue.\n\n> And as Dean Rasheed correctly mentioned, mask method will have really \n> bad pattern for branch predictor if range is not just below or equal to \n> power of two.\n\nOn average the bitmask is the better unbiased method, if the online \nfigures are to be trusted. Also, as already said, I do not really want to \nadd code complexity, especially to get lower average performance, and \nespecially with code like \"threshold = - range % range\", where both \nvariables are unsigned, I have a headache just looking at it:-)\n\n> And __builtin_clzl is not free lunch either, it has latency 3-4 cycles\n> on modern processor.\n\nWell, % is not cheap either.\n\n> Well, probably it could run in parallel with some part of xoroshiro, but \n> it depends on how compiler will optimize this function.\n>\n>> I would certainly select the unbias multiply method if we want a u32 \n>> range variant.\n>\n> There could be two functions.\n\nYep, but do we need them? Who is likely to want 32 bits pseudo random \nints in a range? pgbench needs 64 bits.\n\nSo I'm still inclined to just keep the faster-on-average bitmask method, \ndespite that it may be slower for some ranges. The average cost for the \nworst case in PRNG calls is, if I'm not mistaken:\n\n 1 * 0.5 + 2 * 0.25 + 3 * 0.125 + ... ~ 2\n\nwhich does not look too bad.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 6 Jul 2021 22:49:07 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO писал 2021-07-06 23:49:\n> Hello Yura,\n> \n>>> However, I'm not enthousiastic at combining two methods depending on\n>>> the range, the function looks complex enough without that, so I would\n>>> suggest not to take this option. Also, the decision process adds to\n>>> the average cost, which is undesirable.\n>> \n>> Given 99.99% cases will be in the likely case, branch predictor should\n>> eliminate decision cost.\n> \n> Hmmm. ISTM that a branch predictor should predict that unknown < small\n> should probably be false, so a hint should be given that it is really\n> true.\n\nWhy? Branch predictor is history based: if it were usually true here\nthen it will be true this time either.\nunknown < small is usually true therefore branch predictor will assume\nit is true.\n\nI put `likely` for compiler: compiler then puts `likely` path closer.\n\n> \n>> And as Dean Rasheed correctly mentioned, mask method will have really \n>> bad pattern for branch predictor if range is not just below or equal \n>> to power of two.\n> \n> On average the bitmask is the better unbiased method, if the online\n> figures are to be trusted. Also, as already said, I do not really want\n> to add code complexity, especially to get lower average performance,\n> and especially with code like \"threshold = - range % range\", where\n> both variables are unsigned, I have a headache just looking at it:-)\n\nIf you mention https://www.pcg-random.org/posts/bounded-rands.html then\n1. first graphs are made with not exact Lemire's code.\n Last code from \nhttps://lemire.me/blog/2016/06/30/fast-random-shuffling/\n (which I derived from) performs modulo operation only if `(leftover < \nrange)`.\n Even with `rand_range(1000000)` it is just once in four thousands \nruns.\n2. You can see \"Small-Constant Benchmark\" at that page, Debiased Int is\n 1.5 times faster. And even in \"Small-Shuffle\" benchmark their \nunoptimized\n version is on-par with mask method.\n3. 
If you go to the \"Making Improvements/Faster Threshold-Based Discarding\"\n section, then you'll see the code my version matches. It is twice\n as fast as the mask method in the Small-Shuffle benchmark, and just a bit\n slower in Large-Shuffle.\n\n> \n>> And __builtin_clzl is not free lunch either, it has latency 3-4 cycles\n>> on modern processor.\n> \n> Well, % is not cheap either.\n> \n>> Well, probably it could run in parallel with some part of xoroshiro, \n>> but it depends on how compiler will optimize this function.\n>> \n>>> I would certainly select the unbiased multiply method if we want a u32 \n>>> range variant.\n>> \n>> There could be two functions.\n> \n> Yep, but do we need them? Who is likely to want 32-bit pseudo-random\n> ints in a range? pgbench needs 64 bits.\n> \n> So I'm still inclined to just keep the faster-on-average bitmask\n> method, even though it may be slower for some ranges. The average\n> cost for the worst case in PRNG calls is, if I'm not mistaken:\n> \n> 1 * 0.5 + 2 * 0.25 + 3 * 0.125 + ... ~ 2\n> \n> which does not look too bad.\n\nYou don't count the cost of branch misprediction.\nhttps://stackoverflow.com/questions/11227809/why-is-processing-a-sorted-array-faster-than-processing-an-unsorted-array\nhttps://lemire.me/blog/2019/10/15/mispredicted-branches-can-multiply-your-running-times/\nTherefore the calculation should be at least:\n\n 1 * 0.5 + 0.5 * (3 + 0.5 * (3 + ...)) = 6\n\nBy the way, we have a 64-bit random value. If we use 44 bits of it for range <= \n(1<<20), then\nthe bias will be less than 1/(2**24). Could we just ignore it (given it is \nnot a crypto-strong\nrandom)?\n\nuint64 pg_prng_u64_range(pg_prng_state *state, uint64 range)\n{\n uint64 val = xoroshiro128ss(state);\n uint64 m;\n if ((range & (range-1)) == 0) /* handle all power of 2 cases */\n return range != 0 ? val & (range-1) : 0;\n if (likely(range < (1<<20)))\n /*\n * While the multiply method is biased, the bias will be smaller than \n1/(1<<24) for\n * such small ranges. 
Let's ignore it.\n */\n return ((val >> 20) * range) >> 44;\n /* Apple's mask method */\n m = mask_u64(range-1);\n val &= m;\n while (val >= range)\n val = xoroshiro128ss(state) & m;\n return val;\n}\n\nAnyway, excuse me for heating up this discussion because of such a \nnon-essential issue.\nI'll try to control myself and not pursue it further.\n\nregards,\nSokolov Yura.\n\n\n",
"msg_date": "Wed, 07 Jul 2021 06:00:47 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Wed, 7 Jul 2021 at 04:00, Yura Sokolov <y.sokolov@postgrespro.ru> wrote:\n>\n> Anyway, excuse me for heating this discussion cause of such\n> non-essential issue.\n> I'll try to control myself and don't proceed it further.\n>\n\nWhilst it has been interesting learning and discussing all these\ndifferent techniques, I think it's probably best to stick with the\nbitmask method, rather than making the code too complex and difficult\nto follow. The bitmask method has the advantage of being very simple,\neasy to understand and fast (fastest in many of the benchmarks, and\nclose enough in others to make me think that the difference won't\nmatter for our purposes).\n\nTo test the current patch, I hacked up a simple SQL-callable server\nfunction: random(bigint, bigint) returns bigint, similar to the one in\npgbench. After doing so, I couldn't help thinking that it would be\nuseful to have such a function in core, so maybe that could be a\nfollow-on patch. Anyway, that led to the following observations:\n\nFirstly, there's a bug in the existing mask_u64() code -- if\npg_leftmost_one_pos64(u) returns 63, you end up with a mask equal to\n0, and it breaks down.\n\nSecondly, I think it would be simpler to implement this as a bitshift,\nrather than a bitmask, using the high bits from the random number.\nThat might not make much difference for xoroshiro**, but in general,\nPRNGs tend to be weaker in the lower bits, so it seems preferable on\nthat basis. But also, it makes the code simpler and less error-prone.\n\nFinally, I think it would be better to treat the upper bound of the\nrange as inclusive. Doing so makes the function able to cover all\npossible 64-bit ranges. 
It would then be easy (perhaps in another\nfollow-on patch) to make the pgbench random() function work for all\n64-bit bounds (as long as max >= min), without the weird overflow\nchecking it currently has.\n\nPutting those 3 things together, the code (minus comments) becomes:\n\n if (range > 0)\n {\n int rshift = 63 - pg_leftmost_one_pos64(range);\n\n do\n {\n val = xoroshiro128ss(state) >> rshift;\n }\n while (val > range);\n }\n else\n val = 0;\n\nwhich reduces the complexity a bit.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 7 Jul 2021 11:29:07 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
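Dean's sketch above can be fleshed out into something compilable. The rotations below are the published xoroshiro128** step; the `prng_state` struct, the `prng_u64_range` name and the use of GCC's `__builtin_clzll` in place of `pg_leftmost_one_pos64` are assumptions of this sketch, not the committed API:

```c
#include <stdint.h>
#include <assert.h>

typedef struct { uint64_t s0, s1; } prng_state;  /* mirrors the proposed struct */

static uint64_t rotl64(uint64_t x, int k)
{
    return (x << k) | (x >> (64 - k));
}

/* xoroshiro128** generator step (the state must not be all zeroes) */
uint64_t xoroshiro128ss(prng_state *s)
{
    uint64_t s0 = s->s0, s1 = s->s1;
    uint64_t result = rotl64(s0 * 5, 7) * 9;

    s1 ^= s0;
    s->s0 = rotl64(s0, 24) ^ s1 ^ (s1 << 16);
    s->s1 = rotl64(s1, 37);
    return result;
}

/* Bitshift rejection: uniform value in [0, range], inclusive upper bound.
 * Uses the high bits of the PRNG and has no mask == 0 corner case, since
 * a shift of 0 is fine when the top bit of range is set. */
uint64_t prng_u64_range(prng_state *state, uint64_t range)
{
    uint64_t val = 0;

    if (range > 0)
    {
        /* __builtin_clzll(range) == 63 - pg_leftmost_one_pos64(range) */
        int rshift = __builtin_clzll(range);

        do
        {
            val = xoroshiro128ss(state) >> rshift;
        } while (val > range);
    }
    return val;
}
```

Since the shifted draw spans [0, 2^k) with 2^(k-1) <= range + 1, each iteration accepts with probability above one half, which is where the "average cost ~ 2 calls" estimate earlier in the thread comes from.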
{
"msg_contents": "Hello Dean,\n\n> Whilst it has been interesting learning and discussing all these\n> different techniques, I think it's probably best to stick with the\n> bitmask method, rather than making the code too complex and difficult\n> to follow.\n\nYes.\n\n> The bitmask method has the advantage of being very simple, easy to \n> understand and fast (fastest in many of the benchmarks, and close enough \n> in others to make me think that the difference won't matter for our \n> purposes).\n>\n> To test the current patch, I hacked up a simple SQL-callable server \n> function: random(bigint, bigint) returns bigint, similar to the one in \n> pgbench. After doing so, I couldn't help thinking that it would be \n> useful to have such a function in core, so maybe that could be a \n> follow-on patch.\n\nYep.\n\n> Anyway, that led to the following observations:\n>\n> Firstly, there's a bug in the existing mask_u64() code -- if\n> pg_leftmost_one_pos64(u) returns 63, you end up with a mask equal to\n> 0, and it breaks down.\n\nOops:-(\n\n> Secondly, I think it would be simpler to implement this as a bitshift, \n> rather than a bitmask, using the high bits from the random number. That \n> might not make much difference for xoroshiro**, but in general, PRNGs \n> tend to be weaker in the lower bits, so it seems preferable on that \n> basis. But also, it makes the code simpler and less error-prone.\n\nIndeed, that looks like a good option.\n\n> Finally, I think it would be better to treat the upper bound of the\n> range as inclusive.\n\nThis bothered me as well, but the usual approach seems to use range as the \nnumber of values, so I was hesitant to depart from that. I'm still \nhesitant to go that way.\n\n> Doing so makes the function able to cover all\n> possible 64-bit ranges. 
It would then be easy (perhaps in another\n> follow-on patch) to make the pgbench random() function work for all\n> 64-bit bounds (as long as max >= min), without the weird overflow\n> checking it currently has.\n>\n> Putting those 3 things together, the code (minus comments) becomes:\n>\n> if (range > 0)\n> {\n> int rshift = 63 - pg_leftmost_one_pos64(range);\n>\n> do\n> {\n> val = xoroshiro128ss(state) >> rshift;\n> }\n> while (val > range);\n> }\n> else\n> val = 0;\n>\n> which reduces the complexity a bit.\n\nIndeed.\n\nAttached v9 follows this approach but for the range being inclusive, as \nmost sources I found understand the range as the number of values.\n\n-- \nFabien.",
"msg_date": "Thu, 8 Jul 2021 10:26:23 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Thu, 8 Jul 2021 at 09:26, Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n> > Finally, I think it would be better to treat the upper bound of the\n> > range as inclusive.\n>\n> This bothered me as well, but the usual approach seems to use range as the\n> number of values, so I was hesitant to depart from that. I'm still\n> hesitant to go that way.\n>\n\nYeah, that bothered me too.\n\nFor example, java.util.Random.nextInt(bound) returns a value in the\nrange [0,bound).\n\nBut other implementations are not all like that. For example python's\nrandom.randint(a,b) returns a value in the range [a,b].\n\nPython also has random.randrange(start,stop[,step]), which is designed\nfor compatibility with their range(start,stop[,step]) function, which\ntreats \"stop\" as exclusive.\n\nHowever, Postgres tends to go the other way, and treat the upper bound\nas inclusive, as in, for example, generate_series() and pgbench's\nrandom() function.\n\nI think it makes more sense to do it that way, because then such\nfunctions can work all the way up to and including the limit of the\nbound's datatype, which makes them more general.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 8 Jul 2021 10:08:52 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": ">>> Finally, I think it would be better to treat the upper bound of the\n>>> range as inclusive.\n>>\n>> This bothered me as well, but the usual approach seems to use range as the\n>> number of values, so I was hesitant to depart from that. I'm still\n>> hesitant to go that way.\n>\n> Yeah, that bothered me too.\n>\n> For example, java.util.Random.nextInt(bound) returns a value in the\n> range [0,bound).\n>\n> But other implementations are not all like that. For example python's\n> random.randint(a,b) returns a value in the range [a,b].\n>\n> Python also has random.randrange(start,stop[,step]), which is designed\n> for compatibility with their range(start,stop[,step]) function, which\n> treats \"stop\" as exclusive.\n>\n> However, Postgres tends to go the other way, and treat the upper bound\n> as inclusive, as in, for example, generate_series() and pgbench's\n> random() function.\n>\n> I think it makes more sense to do it that way, because then such\n> functions can work all the way up to and including the limit of the\n> bound's datatype, which makes them more general.\n\nYep. Still, with one argument:\n\n - C#: Random Next is exclusive\n - Go: rand Intn is exclusive\n - Rust: rand gen_range is exclusive\n - Erlang: rand uniform is inclusive, BUT values start from 1\n\nThe rule seems to be: one parameter is usually the number of values, thus \nis exclusive; two parameters describe the range, which is inclusive.\n\nAttached a v10 which is some kind of compromise where the interface uses \ninclusive min and max bounds, so that all values can be reached.\n\n-- \nFabien.",
"msg_date": "Thu, 8 Jul 2021 14:19:38 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Yura,\n\n>>> Given 99.99% cases will be in the likely case, branch predictor should\n>>> eliminate decision cost.\n>> \n>> Hmmm. ISTM that a branch predictor should predict that unknown < small\n>> should probably be false, so a hint should be given that it is really\n>> true.\n>\n> Why? Branch predictor is history based:\n\nHmmm. This means running the compiler with some special options, running \nthe code on significant and representative data, then recompiling based on \ncollected branch stats. This is not the usual way pg is built.\n\n> if it were usually true here then it will be true this time either. \n> unknown < small is usually true therefore branch predictor will assume \n> it is true.\n>\n> I put `likely` for compiler: compiler then puts `likely` path closer.\n\nYes, an explicit hint is needed.\n\n>>> And as Dean Rasheed correctly mentioned, mask method will have really bad \n>>> pattern for branch predictor if range is not just below or equal to power \n>>> of two.\n>> \n>> On average the bitmask is the better unbiased method, if the online\n>> figures are to be trusted. Also, as already said, I do not really want\n>> to add code complexity, especially to get lower average performance,\n>> and especially with code like \"threshold = - range % range\", where\n>> both variables are unsigned, I have a headache just looking at it:-)\n>\n> If you mention https://www.pcg-random.org/posts/bounded-rands.html then\n\nIndeed, these are the figures I was referring to when saying that bitmask \nlooks the best method.\n\n> 1. first graphs are made with not exact Lemire's code.\n> Last code from https://lemire.me/blog/2016/06/30/fast-random-shuffling/\n\nOk, other figures, however there is no comparison with the mask method in \nthis post; it mostly argues against division/modulo.\n\n> By the way, we have 64bit random. If we use 44bit from it for range <= \n> (1<<20), then bias will be less than 1/(2**24). 
Could we just ignore it \n(given it is not crypto strong random)?\n\nThat was my initial opinion, but Dean insists on an unbiased method. I \nagree with Dean that performance, if it is not too bad, does not matter \nthat much, so that I'm trying to keep the code simple as a main objective.\n\nYou do not seem ready to buy this argument. Note that despite that my \nresearch is about compiler optimizations, I did buy it:-)\n\nGiven the overheads involved in pgbench, the performance impact of best vs \nworst case scenario is minimal:\n\n \\set i random(0, 7) -- 8 values, good for mask: 4.515 Mtps\n \\set i random(0, 8) -- 9 values, bad for mask: 4.151 Mtps\n\nso the performance penalty is about 8%.\n\n> if ((range & (range-1)) == 0) /* handle all power 2 cases */\n> return range != 0 ? val & (range-1) : 0;\n> if (likely(range < (1<<20)))\n> /*\n> * While multiply method is biased, bias will be smaller than 1/(1<<24) \n> for\n> * such small ranges. Lets ignore it.\n> */\n> return ((val >> 20) * range) >> 44;\n> /* Apple's mask method */\n> m = mask_u64(range-1);\n> val &= m;\n> while (val >= range)\n> val = xoroshiro128ss(state) & m;\n> return val;\n> }\n\nHmmm. The code looks \"simple\" enough, but I do not think optimizing for \n20-bit values is worth it, especially as the bitmask method seems the best \nanyway. We were discussing 32 bits before.\n\n> Anyway, excuse me for heating this discussion cause of such \n> non-essential issue.\n\nWell, I like to discuss things!\n\n> I'll try to control myself and don't proceed it further.\n\nYep. We have to compromise at some point. The majority opinion seems to be \nthat we want code simplicity more, so the bitmask it is. I've posted a \nv10.\n\nThanks for the interesting discussion and arguments!\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 8 Jul 2021 14:31:16 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hi Fabien,\n\n> Attached a v10 which is some kind of compromise where the interface uses\n> inclusive min and max bounds, so that all values can be reached.\n\nJust wanted to let you know that cfbot [1] doesn't seem to be happy with\nthe patch. Apparently, some tests are failing. To be honest, I didn't\ninvest too much time into investigating this. Hopefully, it's not a big\ndeal.\n\n[1]: http://cfbot.cputube.org/\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 24 Sep 2021 15:50:00 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello Aleksander,\n\n>> Attached a v10 which is some kind of compromise where the interface uses\n>> inclusive min and max bounds, so that all values can be reached.\n>\n> Just wanted to let you know that cfbot [1] doesn't seem to be happy with\n> the patch. Apparently, some tests are failing. To be honest, I didn't\n> invest too much time into investigating this. Hopefully, it's not a big\n> deal.\n>\n> [1]: http://cfbot.cputube.org/\n\nIndeed. I wish that these results would be available from the cf \ninterface.\n\nAttached a v11 which might improve things.\n\nThanks for the ping!\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 09:40:50 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello again,\n\n>> Just wanted to let you know that cfbot [1] doesn't seem to be happy with\n>> the patch. Apparently, some tests are failing. To be honest, I didn't\n>> invest too much time into investigating this. Hopefully, it's not a big\n>> deal.\n>> \n>> [1]: http://cfbot.cputube.org/\n>\n> Indeed. I wish that these results would be available from the cf interface.\n>\n> Attached a v11 which might improve things.\n\nNot enough. Here is a v12 which might improve things further.\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 16:44:52 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "<resent because of list filter>\n\n>>> [1]: http://cfbot.cputube.org/\n>> \n>> Indeed. I wish that these results would be available from the cf interface.\n>> \n>> Attached a v11 which might improve things.\n>\n> Not enough. Here is a v12 which might improve things further.\n\nNot enough. Here is a v13 which might improve things further more.\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 18:26:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": ">>>> [1]: http://cfbot.cputube.org/\n>>> \n>>> Indeed. I wish that these results would be available from the cf \n>>> interface.\n>>> \n>>> Attached a v11 which might improve things.\n>> \n>> Not enough. Here is a v12 which might improve things further.\n>\n> Not enough. Here is a v13 which might improve things further more.\n\nNot enough. Here is a v14 which might improve things further more again. \nSorry for this noise due to blind windows tests.\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 19:04:25 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Not enough. Here is a v14 which might improve things further more again. \n> Sorry for this noise due to blind windows tests.\n\nJust FTR, I strongly object to your removal of process-startup srandom()\ncalls. Those are not only setting the seed for our own use, but also\nensuring that things like random() calls within PL functions or other\nlibraries aren't 100% predictable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Sep 2021 13:23:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello Tom,\n\n> Just FTR, I strongly object to your removal of process-startup srandom()\n> calls.\n\nOk. The point of the patch is to replace and unify the postgres underlying \nPRNG, so there was some logic behind this removal.\n\n> Those are not only setting the seed for our own use, but also ensuring \n> that things like random() calls within PL functions or other libraries \n> aren't 100% predictable.\n\nSure, they shouldn't be predictable.\n\nAttached v15 also does call srandom if it is there, and fixes yet another \nremaining random call.\n\n-- \nFabien.",
"msg_date": "Sat, 25 Sep 2021 22:08:57 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\n>> Just FTR, I strongly object to your removal of process-startup srandom()\n>> calls.\n>\n> Ok. The point of the patch is to replace and unify the postgres underlying \n> PRNG, so there was some logic behind this removal.\n\nFTR, this was triggered by your comment on Jul 1:\n\n>> [...] I see that you probably did that because random.c and srandom.c \n>> depend on it, but I wonder why we don't make an effort to flush those \n>> altogether. It's surely pretty confusing to newbies that what appears \n>> to be a call of the libc primitives is no such thing.\n\nI understood \"flushing s?random.c\" as meaning that it would be a good thing \nto remove their definitions, hence their calls, whereas in the initial patch \nI provided a replacement for srandom & random.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 26 Sep 2021 07:55:22 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\n> Attached v15 also does call srandom if it is there, and fixes yet another \n> remaining random call.\n\nI think that I have now removed all references to \"random\" from pg source. \nHowever, the test still fails on windows, because the linker does not find \na global variable when compiling extensions, but it seems to find the \nfunctions defined in the very same file...\n\nLink:\n 4130 C:\\Program Files (x86)\\Microsoft Visual Studio 12.0\\VC\\bin\\x86_amd64\\link.exe /ERRORREPORT:QUEUE /OUT:\".\\Release\\tablefunc\\tablefunc.dll\" /INCREMENTAL:NO /NOLOGO Release/postgres/postgres.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib /NODEFAULTLIB:libc /DEF:\"./Release/tablefunc/tablefunc.def\" /MANIFEST /MANIFESTUAC:\"level='asInvoker' uiAccess='false'\" /manifest:embed /DEBUG /PDB:\".\\Release\\tablefunc\\tablefunc.pdb\" /SUBSYSTEM:CONSOLE /STACK:\"4194304\" /TLBID:1 /DYNAMICBASE:NO /NXCOMPAT /IMPLIB:\"Release/tablefunc/tablefunc.lib\" /MACHINE:X64 /ignore:4197 /DLL .\\Release\\tablefunc\\win32ver.res\n 4131 .\\Release\\tablefunc\\tablefunc.obj\n 4132 Creating library Release/tablefunc/tablefunc.lib and object Release/tablefunc/tablefunc.exp\n 4133 tablefunc.obj : error LNK2001: unresolved external symbol pg_global_prng_state [C:\\projects\\postgresql\\tablefunc.vcxproj]\n 4134 .\\Release\\tablefunc\\tablefunc.dll : fatal error LNK1120: 1 unresolved externals [C:\\projects\\postgresql\\tablefunc.vcxproj]\n 4135 Done Building Project \"C:\\projects\\postgresql\\tablefunc.vcxproj\" (default targets) -- FAILED.\n\nThe missing symbol is really defined in common/pg_prng.c which AFAICT is \nlinked with postgres.\n\nIf someone experienced with the windows compilation chain could give a \nhint of what is needed, I'd appreciate it!\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 10:23:00 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "On Thu, Sep 30, 2021 at 9:23 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> The missing symbol is really defined in common/pg_prng.c which AFAICT is\n> linked with postgres.\n>\n> If someone experienced with the windows compilation chain could give a\n> hint of what is needed, I'd appreciate it!\n\nI guess the declaration needs PGDLLIMPORT.\n\n\n",
"msg_date": "Thu, 30 Sep 2021 22:31:03 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "> I guess the declaration needs PGDLLIMPORT.\n\nIndeed, thanks!\n\nAttached v16 adds that.\n\n-- \nFabien.",
"msg_date": "Thu, 30 Sep 2021 12:36:47 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hi hackers,\n\n>\n> > I guess the declaration needs PGDLLIMPORT.\n>\n> Indeed, thanks!\n>\n> Attached v16 adds that.\n\nIt looks like the patch is in pretty good shape. I noticed that the\nreturn value of pg_prng_strong_seed() is not checked in several\nplaces, also there was a typo in pg_trgm.c. The corrected patch is\nattached. Assuming the new version will not upset cfbot, I would call\nthe patch \"Ready for Committer\".\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 22 Nov 2021 16:57:56 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> It looks like the patch is in pretty good shape. I noticed that the\n> return value of pg_prng_strong_seed() is not checked in several\n> places, also there was a typo in pg_trgm.c. The corrected patch is\n> attached. Assuming the new version will not upset cfbot, I would call\n> the patch \"Ready for Committer\".\n\nI took a quick look through this. The biggest substantive point\nI found was that you didn't update the configure script. It's\ncertainly not appropriate for configure to continue to do\nAC_REPLACE_FUNCS on random and srandom when you've removed the\nsrc/port files that that would attempt to include.\n\nThe simplest change here is just to delete those entries from the\nlist, but that would also result in not #define'ing HAVE_RANDOM\nor HAVE_SRANDOM, and I see that the patch introduces a dependency\non the latter. I'm inclined to think that's misguided. srandom()\nhas been required by POSIX since SUSv2, and we certainly have not\ngot any non-Windows buildfarm members that lack it. So I don't\nthink we really need a configure check. What we do need is a decision\nabout what to do on Windows. We could write it like\n\n+#ifndef WIN32\n+\tsrandom(pg_prng_i32(&pg_global_prng_state));\n+#endif\n\nbut I have a different modest suggestion: add\n\n#define srandom(seed) srand(seed)\n\nin win32_port.h. As far as I can see from Microsoft's docs [1],\nsrand() is exactly like srandom(), they just had some compulsion\nto not be POSIX-compatible.\n\nBTW, the commentary in InitProcessGlobals is now completely\ninadequate; it's unclear to a reader why we should be bothering\nwith srandom(). I suggest adding a comment right before the\nsrandom() call, along the lines of\n\n /*\n * Also make sure that we've set a good seed for random() (or rand()\n * on Windows). 
Use of those functions is deprecated in core\n * Postgres, but they might get used by extensions.\n */\n\n+/* use Donald Knuth's LCG constants for default state */\n\nHow did Knuth get into this? This algorithm is certainly not his,\nso why are those constants at all relevant?\n\nOther cosmetic/commentary issues:\n\n* I could do without the stream-of-consciousness notes in pg_prng.c.\nI think what's appropriate is to say \"We use thus-and-such a generator\nwhich is documented here\", maybe with a line or two about its properties.\n\n* Function names like these convey practically nothing to readers:\n\n+extern int64 pg_prng_i64(pg_prng_state *state);\n+extern uint32 pg_prng_u32(pg_prng_state *state);\n+extern int32 pg_prng_i32(pg_prng_state *state);\n+extern double pg_prng_f64(pg_prng_state *state);\n+extern bool pg_prng_bool(pg_prng_state *state);\n\nand these functions' header comments add a grand total of zero bits\nof information. What someone generally wants to know first about\na PRNG is (a) is it uniform and (b) what is the range of outputs,\nneither of which are specified anywhere.\n\n+#define FIRST_BIT_MASK UINT64CONST(0x8000000000000000)\n+#define RIGHT_HALF_MASK UINT64CONST(0x00000000FFFFFFFF)\n+#define DMANTISSA_MASK UINT64CONST(0x000FFFFFFFFFFFFF)\n\nI'm not sure that these named constants are any more readable than\nwriting the underlying constant, maybe less so --- in particular\nI think something based on (1<<52)-1 would be more appropriate for\nthe float mantissa operations. We don't need RIGHT_HALF_MASK at\nall, the casts to uint32 or int32 will accomplish that just fine.\n\nBTW, why are we bothering with FIRST_BIT_MASK in the first place,\nrather than returning \"v & 1\" for pg_prng_bool? Is xoroshiro128ss\nless random in the low-order bits than the higher? If so, that would\nbe a pretty important thing to document. 
If it's not, we shouldn't\nmake the code look like it is.\n\n+ * select in a range with bitmask rejection.\n\nWhat is \"bitmask rejection\"? Is it actually important to callers?\nI think this should be documented more like \"Produce a random\ninteger uniformly selected from the range [rmin, rmax).\"\n\n\t\t\tregards, tom lane\n\n[1] https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/srand?view=msvc-170\n\n\n",
"msg_date": "Fri, 26 Nov 2021 13:26:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "I wrote:\n> ... What we do need is a decision\n> about what to do on Windows. We could write it like\n> +#ifndef WIN32\n> +\tsrandom(pg_prng_i32(&pg_global_prng_state));\n> +#endif\n> but I have a different modest suggestion: add\n> #define srandom(seed) srand(seed)\n> in win32_port.h. As far as I can see from Microsoft's docs [1],\n> srand() is exactly like srandom(), they just had some compulsion\n> to not be POSIX-compatible.\n\nOh, wait, I take that back --- rand()/srand() are also in POSIX,\nand in the C99 standard (which presumably is where Microsoft got\nthem from). They're deprecated by POSIX on the grounds that the\nspec only allows them to have 32 bits of state, so they can't be\nterribly random. Given that, I think we should just avert our eyes;\nanybody depending on those functions is destined to lose anyway.\nProbably the \"#ifndef WIN32\" fragment suggested above is enough.\nI suppose we could *also* call srand() but that feels a bit silly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Nov 2021 14:25:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\nHello Tom,\n\nThanks for the feedback.\n\n> +/* use Donald Knuth's LCG constants for default state */\n>\n> How did Knuth get into this? This algorithm is certainly not his,\n> so why are those constants at all relevant?\n\nThey are not more nor less relevant than any other \"random\" constant. The \nstate needs a default initialization. The point of using DK's is that it \nsomehow cannot be some specially crafted value which would have some \nspecial property only known to the purveyor of the constant and could be \nused by them to break the algorithm.\n\n https://en.wikipedia.org/wiki/Dual_EC_DRBG\n\n> * I could do without the stream-of-consciousness notes in pg_prng.c.\n> I think what's appropriate is to say \"We use thus-and-such a generator\n> which is documented here\", maybe with a line or two about its properties.\n\nThe stuff was really written essentially as a \"why this\" for the first \npatch, and to prevent questions about \"why not this other generator\" \nlater, because it could never stop.\n\n> * Function names like these convey practically nothing to readers:\n>\n> +extern int64 pg_prng_i64(pg_prng_state *state);\n> +extern uint32 pg_prng_u32(pg_prng_state *state);\n> +extern int32 pg_prng_i32(pg_prng_state *state);\n> +extern double pg_prng_f64(pg_prng_state *state);\n> +extern bool pg_prng_bool(pg_prng_state *state);\n\nThe intention is obviously \"postgres pseudo-random number generator for \n<type>\". ISTM that it conveys (1) that it is postgres-specific stuff, \n(2) that it is a PRNG (which I find *MUCH* more informative than the \nmisleading statement that something is random when it is not, and it is \nshorter) and (3) about the type it returns, because C does require \nfunctions to have distinct names.\n\nWhat would you suggest?\n\n> and these functions' header comments add a grand total of zero bits\n> of information.\n\nYes, probably. 
I do not like leaving a function without any comment at all.\n\n> What someone generally wants to know first about a PRNG is (a) is it \n> uniform and (b) what is the range of outputs, neither of which are \n> specified anywhere.\n\nISTM (b) is suggested thanks to the type and (a) I'm not sure about a PRNG \nwhich would not at least claim to be uniform. Non-uniform PRNGs are \nusually built on top of a uniform one.\n\nWhat do you suggest as alternate names?\n\n> +#define FIRST_BIT_MASK UINT64CONST(0x8000000000000000)\n> +#define RIGHT_HALF_MASK UINT64CONST(0x00000000FFFFFFFF)\n> +#define DMANTISSA_MASK UINT64CONST(0x000FFFFFFFFFFFFF)\n>\n> I'm not sure that these named constants are any more readable than\n> writing the underlying constant, maybe less so --- in particular\n> I think something based on (1<<52)-1 would be more appropriate for\n> the float mantissa operations. We don't need RIGHT_HALF_MASK at\n> all, the casts to uint32 or int32 will accomplish that just fine.\n\nYep. I did it for uniformity.\n\n> BTW, why are we bothering with FIRST_BIT_MASK in the first place,\n> rather than returning \"v & 1\" for pg_prng_bool?\n\nBecause some PRNGs are very bad in the low bits, not xoroshiro stuff, \nthough.\n\n> Is xoroshiro128ss less random in the low-order bits than the higher? \n> If so, that would be a pretty important thing to document. If it's not, \n> we shouldn't make the code look like it is.\n\nDunno. Why should we prefer low bits?\n\n> + * select in a range with bitmask rejection.\n>\n> What is \"bitmask rejection\"? Is it actually important to callers?\n\nNo, it is important to understand how it does it. That is the name of the \ntechnique which is implemented, which helps if you want to understand what \nis going on by googling it. This point could be moved inside the function.\n\n> I think this should be documented more like \"Produce a random\n> integer uniformly selected from the range [rmin, rmax).\"\n\nSure.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 27 Nov 2021 20:11:34 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> How did Knuth get into this? This algorithm is certainly not his,\n>> so why are those constants at all relevant?\n\n> They are not more nor less relevant than any other \"random\" constant. The \n> state needs a default initialization. The point of using DK's is that it \n> is somehow cannot be some specially crafted value which would have some \n> special property only know to the purveyor of the constant and could be \n> used by them to break the algorithm.\n\nWell, none of that is in the comment, which is probably just as well\nbecause it reads like baseless paranoia. *Any* initialization vector\nshould be as good as any other; if it's not, that's an algorithm fault.\n(OK, I'll give it a pass for zeroes being bad, but otherwise not.)\n\n>> * Function names like these convey practically nothing to readers:\n>> \n>> +extern int64 pg_prng_i64(pg_prng_state *state);\n>> +extern uint32 pg_prng_u32(pg_prng_state *state);\n>> +extern int32 pg_prng_i32(pg_prng_state *state);\n>> +extern double pg_prng_f64(pg_prng_state *state);\n>> +extern bool pg_prng_bool(pg_prng_state *state);\n\n> The intention is obviously \"postgres pseudo-random number generator for \n> <type>\". ISTM that it conveys (1) that it is a postgres-specific stuff, \n> (2) that it is a PRNG (which I find *MUCH* more informative than the \n> misleading statement that something is random when it is not, and it is \n> shorter) and (3) about the type it returns, because C does require \n> functions to have distinct names.\n\n> What would you suggest?\n\nWe have names for these types, and those abbreviations are (mostly)\nnot them. Name-wise I'd be all right with pg_prng_int64 and so on,\nbut I still expect that these functions' header comments should be\nexplicit about uniformity and about the precise output range.\nAs an example, it's far from obvious whether the minimum value\nof pg_prng_int32 should be zero or INT_MIN. 
(Actually, I suspect\nyou ought to provide both of those cases.) And the output range\nof pg_prng_float8 is not merely unobvious, but not very easy\nto deduce from examining the code either; not that users should\nhave to.\n\n>> BTW, why are we bothering with FIRST_BIT_MASK in the first place,\n>> rather than returning \"v & 1\" for pg_prng_bool?\n\n> Because some PRNG are very bad in the low bits, not xoroshiro stuff, \n> though.\n\nGood, but then you shouldn't write associated code as if that's still\na problem, because you'll cause other people to think it's still a\nproblem and write equally contorted code elsewhere. \"v & 1\" is a\ntransparent way of producing a bool, while this code isn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 27 Nov 2021 14:49:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Hello,\n\n>> They are not more nor less relevant than any other \"random\" constant. The\n>> state needs a default initialization. The point of using DK's is that it\n>> somehow cannot be some specially crafted value which would have some\n>> special property only known to the purveyor of the constant and could be\n>> used by them to break the algorithm.\n>\n> Well, none of that is in the comment, which is probably just as well\n> because it reads like baseless paranoia.\n\nSure. Welcome to cryptography:-)\n\n> *Any* initialization vector should be as good as any other; if it's not, \n> that's an algorithm fault.\n\nYep.\n\n> (OK, I'll give it a pass for zeroes being bad, but otherwise not.)\n\nOk. We can use any non-zero constant. What's wrong with constants provided \nby a Turing award computer scientist? I find them more attractive than \nsome stupid 0x0123456789….\n\n>>> * Function names like these convey practically nothing to readers:\n>>>\n>>> +extern int64 pg_prng_i64(pg_prng_state *state); [...]\n>\n>> The intention is obviously \"postgres pseudo-random number generator for\n>> <type>\". [...]\n>\n>> What would you suggest?\n>\n> We have names for these types, and those abbreviations are (mostly)\n> not them. Name-wise I'd be all right with pg_prng_int64 and so on,\n\nOk. You prefer \"uint64\" to \"u64\".\n\n> but I still expect that these functions' header comments should be\n> explicit about uniformity and about the precise output range.\n\nOk.\n\n> As an example, it's far from obvious whether the minimum value\n> of pg_prng_int32 should be zero or INT_MIN.\n> (Actually, I suspect you ought to provide both of those cases.)\n\nI agree that it is not obvious. I added \"p\" for \"positive\" variants. 
I \nfound one place where one could be used.\n\n> And the output range of pg_prng_float8 is not merely unobvious, but not \n> very easy to deduce from examining the code either; not that users \n> should have to.\n\nOk.\n\n>>> BTW, why are we bothering with FIRST_BIT_MASK in the first place,\n>>> rather than returning \"v & 1\" for pg_prng_bool?\n>\n>> Because some PRNG are very bad in the low bits, not xoroshiro stuff,\n>> though.\n>\n> Good, but then you shouldn't write associated code as if that's still\n> a problem, because you'll cause other people to think it's still a\n> problem and write equally contorted code elsewhere. \"v & 1\" is a\n> transparent way of producing a bool, while this code isn't.\n\n\"v & 1\" really produces an integer, not a bool. I'd prefer to actually \ngenerate a boolean and let the compiler optimizer do the cleaning.\n\nSome Xoshiro-family generators have \"linear artifacts in the low bits\". \nAlthough Xoroshiro128** is supposed to be immune, I thought it better to keep \naway from these, and I could not see why the last bit would be better than \nany other bit, so taking the first looked okay to me at least.\n\nI think that the attached v18 addresses most of your concerns.\n\n-- \nFabien.",
"msg_date": "Sun, 28 Nov 2021 10:26:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Good, but then you shouldn't write associated code as if that's still\n>> a problem, because you'll cause other people to think it's still a\n>> problem and write equally contorted code elsewhere. \"v & 1\" is a\n>> transparent way of producing a bool, while this code isn't.\n\n> Some Xoshiro-family generators have \"linear artifacts in the low bits\", \n> Although Xoroshiro128** is supposed to be immune, I thought better to keep \n> away from these, and I could not see why the last bit would be better than \n> any other bit, so taking the first looked okay to me at least.\n\nMeh. If we're going to trust the high bits more than the lower ones,\nwe should do so consistently; it makes no sense to do that in one\npg_prng.c function and not its siblings.\n\nPushed with that change and some others, notably:\n\n* Rewrote a lot of the comments.\n* Refactored so that pg_strong_random() is not called from pg_prng.c.\nAs it stood, that would result in pulling in OpenSSL in programs that\nhave no need of it. (ldexp() still creates a dependency on libm, but\nI figured that was OK.)\n* Changed a number of call sites that were using modulo reduction\nto use pg_prng_uint64_range instead. Given the lengthy discussion\nwe had, it seems silly not to apply the conclusion everywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 28 Nov 2021 21:40:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rand48 replacement"
},
{
"msg_contents": "\n> Pushed with that change and some others, notably:\n\nThanks for the improvements and the push!\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 30 Nov 2021 08:37:57 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: rand48 replacement"
}
] |
[
{
"msg_contents": "Hey,\n\nWhile working on the french translation of the manual, I found that one\ncolumn of pg_stats_ext was on the pg_stats columns' list. Here is a quick\npatch to fix this.\n\nRegards.\n\n\n-- \nGuillaume.",
"msg_date": "Mon, 24 May 2021 15:53:19 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Issue on catalogs.sgml"
},
{
"msg_contents": "Guillaume Lelarge <guillaume@lelarge.info> writes:\n> While working on the french translation of the manual, I found that one\n> column of pg_stats_ext was on the pg_stats columns' list. Here is a quick\n> patch to fix this.\n\nRight you are, and after casting a suspicious eye on the responsible\ncommit, I found another similar error. \"patch\" with the default\namount of context is not too bright about handling our documentation\ntables :-(.\n\nPushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 May 2021 18:05:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue on catalogs.sgml"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Guillaume Lelarge <guillaume@lelarge.info> writes:\n> > While working on the french translation of the manual, I found that one\n> > column of pg_stats_ext was on the pg_stats columns' list. Here is a quick\n> > patch to fix this.\n>\n> Right you are, and after casting a suspicious eye on the responsible\n> commit, I found another similar error. \"patch\" with the default\n> amount of context is not too bright about handling our documentation\n> tables :-(.\n>\n> Pushed.\n>\n>\nThanks.\n\n\n-- \nGuillaume.",
"msg_date": "Tue, 25 May 2021 08:40:58 +0200",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": true,
"msg_subject": "Re: Issue on catalogs.sgml"
}
] |
[
{
"msg_contents": "Hi,\n\nPossible pointer TupleDesc rettupdesc used not initialized?\n\nif (!isNull) at line 4346 taking a true branch, the function\ncheck_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 24 May 2021 21:37:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "\n\n> On May 24, 2021, at 5:37 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> \n> Hi,\n> \n> Possible pointer TupleDesc rettupdesc used not initialized?\n> \n> if (!isNull) at line 4346 taking a true branch, the function check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n\nCare to submit a patch?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Mon, 24 May 2021 18:42:20 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "On Mon, May 24, 2021 at 10:42 PM Mark Dilger <\nmark.dilger@enterprisedb.com> wrote:\n\n>\n>\n> > On May 24, 2021, at 5:37 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Possible pointer TupleDesc rettupdesc used not initialized?\n> >\n> > if (!isNull) at line 4346 taking a true branch, the function\n> check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n>\n> Care to submit a patch?\n>\nHi Mark, sorry but not.\nI examined the code and I can't say what the correct value is for\nrettupdesc.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 24 May 2021 23:21:05 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "On Mon, May 24, 2021 at 7:21 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> On Mon, May 24, 2021 at 10:42 PM Mark Dilger <\n> mark.dilger@enterprisedb.com> wrote:\n>\n>>\n>>\n>> > On May 24, 2021, at 5:37 PM, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> >\n>> > Hi,\n>> >\n>> > Possible pointer TupleDesc rettupdesc used not initialized?\n>> >\n>> > if (!isNull) at line 4346 taking a true branch, the function\n>> check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n>>\n>> Care to submit a patch?\n>>\n> Hi Mark, sorry but not.\n> I examined the code and I can't say what the correct value is for\n> rettupdesc.\n>\n\nHi,\nIt seems the following call would fill up value for rettupdesc :\n\nfunctypclass = get_expr_result_type((Node *) fexpr, NULL, &rettupdesc);\n\nCheers\n\n>\n> regards,\n> Ranier Vilela\n>",
"msg_date": "Mon, 24 May 2021 19:39:17 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "On Mon, May 24, 2021 at 11:35 PM Zhihong Yu <zyu@yugabyte.com>\nwrote:\n\n>\n>\n> On Mon, May 24, 2021 at 7:21 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n>> On Mon, May 24, 2021 at 10:42 PM Mark Dilger <\n>> mark.dilger@enterprisedb.com> wrote:\n>>\n>>>\n>>>\n>>> > On May 24, 2021, at 5:37 PM, Ranier Vilela <ranier.vf@gmail.com>\n>>> wrote:\n>>> >\n>>> > Hi,\n>>> >\n>>> > Possible pointer TupleDesc rettupdesc used not initialized?\n>>> >\n>>> > if (!isNull) at line 4346 taking a true branch, the function\n>>> check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n>>>\n>>> Care to submit a patch?\n>>>\n>> Hi Mark, sorry but not.\n>> I examined the code and I can't say what the correct value is for\n>> rettupdesc.\n>>\n>\n> Hi,\n> It seems the following call would fill up value for rettupdesc :\n>\n> functypclass = get_expr_result_type((Node *) fexpr, NULL, &rettupdesc);\n>\nIn short, do you suggest running half the else?\nTo do this, you need to fill fexpr correctly.\nIt will not always be a trivial solution.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 May 2021 08:51:38 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Possible pointer TupleDesc rettupdesc used not initialized?\n> if (!isNull) at line 4346 taking a true branch, the function\n> check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n\nThis seems to have been introduced by the SQL-function-body patch.\n\nAfter some study, I concluded that the reason we haven't noticed\nis that the case is nearly unreachable: check_sql_fn_retval never\nconsults the rettupdesc unless the function result type is composite\nand the tlist length is more than one --- and we eliminated the latter\ncase earlier in inline_function.\n\nThere is an exception, namely if the single tlist item fails to\nbe coercible to the output type, but that's hard to get to given\nthat it'd have been checked while defining the SQL-body function.\nI did manage to reproduce a problem after turning off\ncheck_function_bodies so I could create a broken function.\n\nIn any case, inline_function has no business assuming that\ncheck_sql_fn_retval doesn't need a valid value.\n\nThe simplest way to fix this seems to be to move the code that\ncreates \"fexpr\" and calls get_expr_result_type, so that we always\ndo that even for SQL-body cases. We could alternatively use some\nother way to obtain a result tupdesc in the SQL-body path; but\ncreating the dummy FuncExpr node is cheap enough that I don't\nthink it's worth contortions to avoid doing it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 12:09:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Possible pointer TupleDesc rettupdesc used not initialized?\n> > if (!isNull) at line 4346 taking a true branch, the function\n> > check_sql_fn_retval at line 4448 can use rettupdesc uninitialized.\n>\n> This seems to have been introduced by the SQL-function-body patch.\n>\n> After some study, I concluded that the reason we haven't noticed\n> is that the case is nearly unreachable: check_sql_fn_retval never\n> consults the rettupdesc unless the function result type is composite\n> and the tlist length is more than one --- and we eliminated the latter\n> case earlier in inline_function.\n>\n> There is an exception, namely if the single tlist item fails to\n> be coercible to the output type, but that's hard to get to given\n> that it'd have been checked while defining the SQL-body function.\n> I did manage to reproduce a problem after turning off\n> check_function_bodies so I could create a broken function.\n>\n> In any case, inline_function has no business assuming that\n> check_sql_fn_retval doesn't need a valid value.\n>\n> The simplest way to fix this seems to be to move the code that\n> creates \"fexpr\" and calls get_expr_result_type, so that we always\n> do that even for SQL-body cases. We could alternatively use some\n> other way to obtain a result tupdesc in the SQL-body path; but\n> creating the dummy FuncExpr node is cheap enough that I don't\n> think it's worth contortions to avoid doing it.\n>\nFollowing the guidelines, I provided a patch.\nBut I did more than requested, removed redundant variable and reduced the\nscope of two.\n\nvcregress check pass fine.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 May 2021 14:26:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Following the guidelines, I provided a patch.\n\nOh, I already pushed a fix, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 13:35:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:35 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Following the guidelines, I provided a patch.\n>\n> Oh, I already pushed a fix, thanks.\n>\nNo problem!\n\nThank you.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 25 May 2021 14:43:51 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Possible pointer var TupleDesc rettupdesc used not initialized\n (src/backend/optimizer/util/clauses.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile working on an output plugin that uses streaming protocol, I hit an\nassertion failure. Further investigations revealed a possible bug in core\nPostgres. This must be new to PG14 since streaming support is new to this\nrelease. I extended the test_decoding regression test to demonstrate the\nfailure. PFA\n\n```\n2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream\nSTATEMENT: SELECT data FROM pg_logical_slot_get_changes('regression_slot',\nNULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '\n1', 'stream-changes', '1');\nTRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line:\n3476, PID: 68321)\n```\n\n From my preliminary analysis, it looks like we fail to adjust the memory\naccounting after streaming toasted tuples. More concretely, after\n`ReorderBufferProcessPartialChange()` processes the in-progress\ntransaction, `ReorderBufferTruncateTXN()` truncates the accumulated\nchanged in the transaction, but fails to adjust the buffer size for toast\nchunks. Maybe we are missing a call to `ReorderBufferToastReset()`\nsomewhere?\n\n From what I see, the assertion only triggers when data is inserted via COPY\n(multi-insert).\n\nLet me know if anything else is needed to reproduce this.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 12:06:38 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:06 PM Pavan Deolasee\n<pavan.deolasee@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on an output plugin that uses streaming protocol, I hit an assertion failure. Further investigations revealed a possible bug in core Postgres. This must be new to PG14 since streaming support is new to this release. I extended the test_decoding regression test to demonstrate the failure. PFA\n>\n> ```\n> 2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream STATEMENT: SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '\n> 1', 'stream-changes', '1');\n> TRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line: 3476, PID: 68321)\n> ```\n>\n> From my preliminary analysis, it looks like we fail to adjust the memory accounting after streaming toasted tuples. More concretely, after `ReorderBufferProcessPartialChange()` processes the in-progress transaction, `ReorderBufferTruncateTXN()` truncates the accumulated changed in the transaction, but fails to adjust the buffer size for toast chunks. Maybe we are missing a call to `ReorderBufferToastReset()` somewhere?\n>\n> From what I see, the assertion only triggers when data is inserted via COPY (multi-insert).\n>\n> Let me know if anything else is needed to reproduce this.\n\nThanks, I will look into this and let you know if need some help.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 12:12:46 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:06:38PM +0530, Pavan Deolasee wrote:\n> While working on an output plugin that uses streaming protocol, I hit an\n> assertion failure. Further investigations revealed a possible bug in core\n> Postgres. This must be new to PG14 since streaming support is new to this\n> release. I extended the test_decoding regression test to demonstrate the\n> failure. PFA\n\nThanks, Pavan. I have added an open item for this one.\n--\nMichael",
"msg_date": "Tue, 25 May 2021 16:21:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 12:06 PM Pavan Deolasee\n> <pavan.deolasee@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While working on an output plugin that uses streaming protocol, I hit an assertion failure. Further investigations revealed a possible bug in core Postgres. This must be new to PG14 since streaming support is new to this release. I extended the test_decoding regression test to demonstrate the failure. PFA\n> >\n> > ```\n> > 2021-05-25 11:32:19.493 IST client backend[68321] pg_regress/stream STATEMENT: SELECT data FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '\n> > 1', 'stream-changes', '1');\n> > TRAP: FailedAssertion(\"txn->size == 0\", File: \"reorderbuffer.c\", Line: 3476, PID: 68321)\n> > ```\n> >\n> > From my preliminary analysis, it looks like we fail to adjust the memory accounting after streaming toasted tuples. More concretely, after `ReorderBufferProcessPartialChange()` processes the in-progress transaction, `ReorderBufferTruncateTXN()` truncates the accumulated changed in the transaction, but fails to adjust the buffer size for toast chunks. Maybe we are missing a call to `ReorderBufferToastReset()` somewhere?\n> >\n> > From what I see, the assertion only triggers when data is inserted via COPY (multi-insert).\n> >\n> > Let me know if anything else is needed to reproduce this.\n>\n> Thanks, I will look into this and let you know if need some help.\n\nI have identified the cause of the issue, basically, the reason is if\nwe are doing a multi insert operation we don't set the toast cleanup\nuntil we get the last tuple of the xl_multi_insert [1]. Now, with\nstreaming, we can process the transaction in between the multi-insert\nbut while doing that the \"change->data.tp.clear_toast_afterwards\" is\nset to false in all the tuples in this stream. 
And due to this we will\nnot clean up the toast.\n\nOne simple fix could be that we can just clean the toast memory if we\nare processing in the streaming mode (as shown in the attached patch).\nBut maybe that is not the best-optimized solution, ideally, we can\nsave the toast until we process the last tuple of multi-insert in the\ncurrent stream, but I think that's not an easy thing to identify.\n\n[1]\n/*\n* Reset toast reassembly state only after the last row in the last\n* xl_multi_insert_tuple record emitted by one heap_multi_insert()\n* call.\n*/\nif (xlrec->flags & XLH_INSERT_LAST_IN_MULTI &&\n(i + 1) == xlrec->ntuples)\nchange->data.tp.clear_toast_afterwards = true;\nelse\nchange->data.tp.clear_toast_afterwards = false;\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 13:26:40 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:26 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n>\n> I have identified the cause of the issue, basically, the reason is if\n> we are doing a multi insert operation we don't set the toast cleanup\n> until we get the last tuple of the xl_multi_insert [1]. Now, with\n> streaming, we can process the transaction in between the multi-insert\n> but while doing that the \"change->data.tp.clear_toast_afterwards\" is\n> set to false in all the tuples in this stream. And due to this we will\n> not clean up the toast.\n>\n\nThanks. That matches my understanding too.\n\n\n>\n> One simple fix could be that we can just clean the toast memory if we\n> are processing in the streaming mode (as shown in the attached patch).\n>\n\nI am not entirely sure if it works correctly. I'd tried something similar,\nbut the downstream node using\nmy output plugin gets NULL values for the toast columns. It's a bit hard\nto demonstrate that with the\ntest_decoding plugin, but if you have some other mechanism to test that\nchange with an actual downstream\nnode receiving and applying changes, it will be useful to test with that.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services",
"msg_date": "Tue, 25 May 2021 13:44:50 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>\n> I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using\n> my output plugin gets NULL values for the toast columns. It's a bit hard to demonstrate that with the\n> test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream\n> node receiving and applying changes, it will be useful to test with that.\n\nOkay, I will test that. Thanks.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 13:49:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com>\n> wrote:\n> >\n> > I am not entirely sure if it works correctly. I'd tried something\n> similar, but the downstream node using\n> > my output plugin gets NULL values for the toast columns. It's a bit\n> hard to demonstrate that with the\n> > test_decoding plugin, but if you have some other mechanism to test that\n> change with an actual downstream\n> > node receiving and applying changes, it will be useful to test with that.\n>\n> Okay, I will test that. Thanks.\n>\n>\nI modified test_decoding to print the tuples (like in the non-streamed\ncase) and included your proposed fix. PFA\n\nWhen the transaction is streamed, I see:\n```\n+ opening a streamed block for transaction\n+ table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb'\ndata[text]:'ccc'\n+ table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd'\ndata[text]:'eee'\n+ table public.toasted: INSERT: id[integer]:9003 other[text]:'bar'\ndata[text]:unchanged-toast-datum\n<snipped>\n```\n\nFor a non-streamed case, the `data[text]` column shows the actual data.\nThat probably manifests into NULL data when downstream handles it.\n\nThanks,\nPavan\n\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 13:59:12 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 1:59 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n\n>\n> On Tue, May 25, 2021 at 1:49 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Tue, May 25, 2021 at 1:45 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n>> >\n>> > I am not entirely sure if it works correctly. I'd tried something similar, but the downstream node using\n>> > my output plugin gets NULL values for the toast columns. It's a bit hard to demonstrate that with the\n>> > test_decoding plugin, but if you have some other mechanism to test that change with an actual downstream\n>> > node receiving and applying changes, it will be useful to test with that.\n>>\n>> Okay, I will test that. Thanks.\n>>\n>\n> I modified test_decoding to print the tuples (like in the non-streamed case) and included your proposed fix. PFA\n>\n> When the transaction is streamed, I see:\n> ```\n> + opening a streamed block for transaction\n> + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'\n> + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'\n> + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum\n> <snipped>\n> ```\n>\n> For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.\n\nYes, I am able to reproduce this, basically, until we get the last\ntuple of the multi insert we can not clear the toast data otherwise we\ncan never form a complete tuple. So the only possible fix I can think\nof is to consider the multi-insert WAL without the final multi-insert\ntuple as partial data then we will avoid streaming until we get the\ncomplete WAL of one multi-insert. I am working on the patch to fix\nthis, I will share that in some time.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 14:34:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > When the transaction is streamed, I see:\n> > ```\n> > + opening a streamed block for transaction\n> > + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'\n> > + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'\n> > + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum\n> > <snipped>\n> > ```\n> >\n> > For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.\n>\n> Yes, I am able to reproduce this, basically, until we get the last\n> tuple of the multi insert we can not clear the toast data otherwise we\n> can never form a complete tuple. So the only possible fix I can think\n> of is to consider the multi-insert WAL without the final multi-insert\n> tuple as partial data then we will avoid streaming until we get the\n> complete WAL of one multi-insert. I am working on the patch to fix\n> this, I will share that in some time.\n\nThe attached patch should fix the issue, now the output is like below\n\n===\nopening a streamed block for transaction\ntable public.toasted: INSERT: id[integer]:9001 other[text]:'bbb'\ndata[text]:'ccc'\n table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd'\ndata[text]:'eee'\n table public.toasted: INSERT: id[integer]:9003 other[text]:'bar'\ndata[text]:'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n<repeat >\n===\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 14:57:18 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n> >\n> > Yes, I am able to reproduce this, basically, until we get the last\n> > tuple of the multi insert we can not clear the toast data otherwise we\n> > can never form a complete tuple. So the only possible fix I can think\n> > of is to consider the multi-insert WAL without the final multi-insert\n> > tuple as partial data then we will avoid streaming until we get the\n> > complete WAL of one multi-insert. I am working on the patch to fix\n> > this, I will share that in some time.\n>\n> The attached patch should fix the issue, now the output is like below\n>\n>\nThanks. This looks fine to me. We should still be able to stream\nmulti-insert transactions (COPY) as and when the copy buffer becomes full\nand is flushed. That seems to be a reasonable restriction to me.\n\nWe should incorporate the regression test in the final patch. I am not\nentirely sure if what I have done is acceptable (or even works in\nall scenarios). We could possibly have a long list of tuples instead of\ndoing the exponential magic. Or we should consider lowering the min value\nfor logical_decoding_work_mem and run these tests with a much lower value.\nIn fact, that's how I caught the problem in the first place. I had\ndeliberately lowered the value to 1kB so that streaming code kicks in very\noften and even for small transactions.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nOn Tue, May 25, 2021 at 2:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Yes, I am able to reproduce this, basically, until we get the last\n> tuple of the multi insert we can not clear the toast data otherwise we\n> can never form a complete tuple. So the only possible fix I can think\n> of is to consider the multi-insert WAL without the final multi-insert\n> tuple as partial data then we will avoid streaming until we get the\n> complete WAL of one multi-insert. 
I am working on the patch to fix\n> this, I will share that in some time.\n\nThe attached patch should fix the issue, now the output is like below\nThanks. This looks fine to me. We should still be able to stream multi-insert transactions (COPY) as and when the copy buffer becomes full and is flushed. That seems to be a reasonable restriction to me.We should incorporate the regression test in the final patch. I am not entirely sure if what I have done is acceptable (or even works in all scenarios). We could possibly have a long list of tuples instead of doing the exponential magic. Or we should consider lowering the min value for logical_decoding_work_mem and run these tests with a much lower value. In fact, that's how I caught the problem in the first place. I had deliberately lowered the value to 1kB so that streaming code kicks in very often and even for small transactions.Thanks,Pavan -- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Tue, 25 May 2021 15:32:59 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 3:33 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n\n>> The attached patch should fix the issue, now the output is like below\n>>\n>\n> Thanks. This looks fine to me. We should still be able to stream multi-insert transactions (COPY) as and when the copy buffer becomes full and is flushed. That seems to be a reasonable restriction to me.\n>\n> We should incorporate the regression test in the final patch. I am not entirely sure if what I have done is acceptable (or even works in all scenarios). We could possibly have a long list of tuples instead of doing the exponential magic. Or we should consider lowering the min value for logical_decoding_work_mem and run these tests with a much lower value. In fact, that's how I caught the problem in the first place. I had deliberately lowered the value to 1kB so that streaming code kicks in very often and even for small transactions.\n\nThanks for confirming, I will come up with the test and add that to\nthe next version of the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 15:41:15 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 2:57 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 2:34 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > > When the transaction is streamed, I see:\n> > > ```\n> > > + opening a streamed block for transaction\n> > > + table public.toasted: INSERT: id[integer]:9001 other[text]:'bbb' data[text]:'ccc'\n> > > + table public.toasted: INSERT: id[integer]:9002 other[text]:'ddd' data[text]:'eee'\n> > > + table public.toasted: INSERT: id[integer]:9003 other[text]:'bar' data[text]:unchanged-toast-datum\n> > > <snipped>\n> > > ```\n> > >\n> > > For a non-streamed case, the `data[text]` column shows the actual data. That probably manifests into NULL data when downstream handles it.\n> >\n> > Yes, I am able to reproduce this, basically, until we get the last\n> > tuple of the multi insert we can not clear the toast data otherwise we\n> > can never form a complete tuple. So the only possible fix I can think\n> > of is to consider the multi-insert WAL without the final multi-insert\n> > tuple as partial data then we will avoid streaming until we get the\n> > complete WAL of one multi-insert.\n\nYeah, that sounds reasonable.\n\n> > I am working on the patch to fix\n> > this, I will share that in some time.\n>\n> The attached patch should fix the issue, now the output is like below\n>\n\nYour patch will fix the reported scenario but I don't like the way\nmulti_insert flag is used to detect incomplete tuple. One problem\ncould be that even when there are no toast inserts, it won't allow to\nstream unless we get the last tuple of multi insert WAL. How about\nchanging the code such that when we are clearing the toast flag, we\nadditionally check 'clear_toast_afterwards' flag?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 May 2021 16:50:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Your patch will fix the reported scenario but I don't like the way\n> multi_insert flag is used to detect incomplete tuple. One problem\n> could be that even when there are no toast inserts, it won't allow to\n> stream unless we get the last tuple of multi insert WAL. How about\n> changing the code such that when we are clearing the toast flag, we\n> additionally check 'clear_toast_afterwards' flag?\n\nYes, that can be done, I will fix this in the next version of the patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 17:46:10 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Your patch will fix the reported scenario but I don't like the way\n> > multi_insert flag is used to detect incomplete tuple. One problem\n> > could be that even when there are no toast inserts, it won't allow to\n> > stream unless we get the last tuple of multi insert WAL. How about\n> > changing the code such that when we are clearing the toast flag, we\n> > additionally check 'clear_toast_afterwards' flag?\n>\n> Yes, that can be done, I will fix this in the next version of the patch.\n\nI have fixed as per the suggestion, and as per the offlist discussion,\nI have merged the TOAST and SPEC insert flag and created a single\nPARTIAL_CHANGE flag.\nI have also added a test case for this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 25 May 2021 18:42:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>\n> I have also added a test case for this.\n>\n>\nIs that test good enough to trigger the original bug? In my experience, I\nhad to add a lot more tuples before the logical_decoding_work_mem\nthreshold was crossed and the streaming kicked in. I would suggest running\nthe test without the fix and check if the assertion hits. If so, we are\ngood to go.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nOn Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\nI have also added a test case for this.\nIs that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and check if the assertion hits. If so, we are good to go.Thanks,Pavan -- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Tue, 25 May 2021 18:50:10 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 6:50 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n\n> Is that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and check if the assertion hits. If so, we are good to go.\n>\nYes, it is reproducing without fix, I already tested it. Basically, I\nam using the \"stream_test\" table in \"copy stream_test to stdout\"\"\ncommand which already has 20 toasted tuples, each of size 6000 bytes\nso that is big enough to cross 64kB.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 May 2021 18:54:50 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 6:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Tue, May 25, 2021 at 6:50 PM Pavan Deolasee <pavan.deolasee@gmail.com>\n> wrote:\n>\n> > Is that test good enough to trigger the original bug? In my experience,\n> I had to add a lot more tuples before the logical_decoding_work_mem\n> threshold was crossed and the streaming kicked in. I would suggest running\n> the test without the fix and check if the assertion hits. If so, we are\n> good to go.\n> >\n> Yes, it is reproducing without fix, I already tested it. Basically, I\n> am using the \"stream_test\" table in \"copy stream_test to stdout\"\"\n> command which already has 20 toasted tuples, each of size 6000 bytes\n> so that is big enough to cross 64kB.\n>\n>\nOk, great! Thanks for confirming.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nOn Tue, May 25, 2021 at 6:55 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:On Tue, May 25, 2021 at 6:50 PM Pavan Deolasee <pavan.deolasee@gmail.com> wrote:\n\n> Is that test good enough to trigger the original bug? In my experience, I had to add a lot more tuples before the logical_decoding_work_mem threshold was crossed and the streaming kicked in. I would suggest running the test without the fix and check if the assertion hits. If so, we are good to go.\n>\nYes, it is reproducing without fix, I already tested it. Basically, I\nam using the \"stream_test\" table in \"copy stream_test to stdout\"\"\ncommand which already has 20 toasted tuples, each of size 6000 bytes\nso that is big enough to cross 64kB.\nOk, great! Thanks for confirming.Thanks,Pavan -- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Tue, 25 May 2021 19:33:31 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Your patch will fix the reported scenario but I don't like the way\n> > > multi_insert flag is used to detect incomplete tuple. One problem\n> > > could be that even when there are no toast inserts, it won't allow to\n> > > stream unless we get the last tuple of multi insert WAL. How about\n> > > changing the code such that when we are clearing the toast flag, we\n> > > additionally check 'clear_toast_afterwards' flag?\n> >\n> > Yes, that can be done, I will fix this in the next version of the patch.\n>\n> I have fixed as per the suggestion, and as per the offlist discussion,\n> I have merged the TOAST and SPEC insert flag and created a single\n> PARTIAL_CHANGE flag.\n> I have also added a test case for this.\n>\n\nWhen I am trying to execute the new test independently in windows, I\nam getting the below error:\n'psql' is not recognized as an internal or external command,\noperable program or batch file.\n2021-05-26 09:09:24.399 IST [3188] ERROR: program \"psql -At -c \"copy\nstream_test to stdout\" contrib_regression\" failed\n2021-05-26 09:09:24.399 IST [3188] DETAIL: child process exited with\nexit code 1\n2021-05-26 09:09:24.399 IST [3188] STATEMENT: COPY stream_test FROM\nprogram 'psql -At -c \"copy stream_test to stdout\" contrib_regression';\n\nI have followed below steps:\n1. Run the server\n2. from command prompt, in test_decoding folder, execute,\npg_regress.exe --bindir=d:/WorkSpace/PostgreSQL/master/installation/bin\n--dbname=contrib_regression stream\n\nI searched and didn't find any similar existing tests. Can we think of\nany other way to test this code path? 
We already have one copy test in\ntoast.sql, isn't it possible to write a similar test here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 May 2021 11:19:33 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n\n>\n>\n>\n> I searched and didn't find any similar existing tests. Can we think of\n> any other way to test this code path? We already have one copy test in\n> toast.sql, isn't it possible to write a similar test here?\n>\n>\nYeah, I wasn't very confident about this either. I just wrote it to reduce\nthe test footprint in the reproducer. I think we can simply include a lot\nmore data and do the copy via stdin.\n\nAlternatively, we can reduce logical_decoding_work_mem minimum value and\nrun the test with a smaller value. But that same GUC is used to decide\nspilling txn to disk as well. So I am not sure if reducing the compile time\ndefault is acceptable or not.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com\n\nOn Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\nI searched and didn't find any similar existing tests. Can we think of\nany other way to test this code path? We already have one copy test in\ntoast.sql, isn't it possible to write a similar test here?\nYeah, I wasn't very confident about this either. I just wrote it to reduce the test footprint in the reproducer. I think we can simply include a lot more data and do the copy via stdin.Alternatively, we can reduce logical_decoding_work_mem minimum value and run the test with a smaller value. But that same GUC is used to decide spilling txn to disk as well. So I am not sure if reducing the compile time default is acceptable or not.Thanks,Pavan-- Pavan DeolaseeEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Wed, 26 May 2021 11:37:00 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 6:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 5:46 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, May 25, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Your patch will fix the reported scenario but I don't like the way\n> > > > multi_insert flag is used to detect incomplete tuple. One problem\n> > > > could be that even when there are no toast inserts, it won't allow to\n> > > > stream unless we get the last tuple of multi insert WAL. How about\n> > > > changing the code such that when we are clearing the toast flag, we\n> > > > additionally check 'clear_toast_afterwards' flag?\n> > >\n> > > Yes, that can be done, I will fix this in the next version of the patch.\n> >\n> > I have fixed as per the suggestion, and as per the offlist discussion,\n> > I have merged the TOAST and SPEC insert flag and created a single\n> > PARTIAL_CHANGE flag.\n> > I have also added a test case for this.\n> >\n>\n> When I am trying to execute the new test independently in windows, I\n> am getting the below error:\n> 'psql' is not recognized as an internal or external command,\n> operable program or batch file.\n> 2021-05-26 09:09:24.399 IST [3188] ERROR: program \"psql -At -c \"copy\n> stream_test to stdout\" contrib_regression\" failed\n> 2021-05-26 09:09:24.399 IST [3188] DETAIL: child process exited with\n> exit code 1\n> 2021-05-26 09:09:24.399 IST [3188] STATEMENT: COPY stream_test FROM\n> program 'psql -At -c \"copy stream_test to stdout\" contrib_regression';\n>\n> I have followed below steps:\n> 1. Run the server\n> 2. from command prompt, in test_decoding folder, execute,\n> pg_regress.exe --bindir=d:/WorkSpace/PostgreSQL/master/installation/bin\n> --dbname=contrib_regression stream\n\nOk\n\n> I searched and didn't find any similar existing tests. 
Can we think of\n> any other way to test this code path? We already have one copy test in\n> toast.sql, isn't it possible to write a similar test here?\n\nI will check that and let you know.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 11:53:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:37 AM Pavan Deolasee\n<pavan.deolasee@gmail.com> wrote:\n>\n\n>\n> Yeah, I wasn't very confident about this either. I just wrote it to reduce the test footprint in the reproducer. I think we can simply include a lot more data and do the copy via stdin.\n\nThat is one way and if we don't find any better way we can do that.\n\n> Alternatively, we can reduce logical_decoding_work_mem minimum value and run the test with a smaller value. But that same GUC is used to decide spilling txn to disk as well. So I am not sure if reducing the compile time default is acceptable or not.\n\nIn the test decoding config, it is already set to a minimum value which is 64k.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 11:55:30 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n\n>\n> > I searched and didn't find any similar existing tests. Can we think of\n> > any other way to test this code path? We already have one copy test in\n> > toast.sql, isn't it possible to write a similar test here?\n>\n> I will check that and let you know.\n\nI have followed this approach to write the test.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 26 May 2021 13:47:41 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n>\n> >\n> > > I searched and didn't find any similar existing tests. Can we think of\n> > > any other way to test this code path? We already have one copy test in\n> > > toast.sql, isn't it possible to write a similar test here?\n> >\n> > I will check that and let you know.\n>\n> I have followed this approach to write the test.\n>\n\nThe changes look good to me. I have made minor modifications in the\ncomments, see attached. Let me know what do you think?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Wed, 26 May 2021 17:28:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Wed, May 26, 2021 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 26, 2021 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, May 26, 2021 at 11:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, May 26, 2021 at 11:19 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> >\n> > >\n> > > > I searched and didn't find any similar existing tests. Can we think of\n> > > > any other way to test this code path? We already have one copy test in\n> > > > toast.sql, isn't it possible to write a similar test here?\n> > >\n> > > I will check that and let you know.\n> >\n> > I have followed this approach to write the test.\n> >\n>\n> The changes look good to me. I have made minor modifications in the\n> comments, see attached. Let me know what do you think?\n\nYour modification looks good to me. Thanks!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 26 May 2021 18:10:00 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:51 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, May 25, 2021 at 12:06:38PM +0530, Pavan Deolasee wrote:\n> > While working on an output plugin that uses streaming protocol, I hit an\n> > assertion failure. Further investigations revealed a possible bug in core\n> > Postgres. This must be new to PG14 since streaming support is new to this\n> > release. I extended the test_decoding regression test to demonstrate the\n> > failure. PFA\n>\n> Thanks, Pavan. I have added an open item for this one.\n>\n\nI have pushed this a few days ago [1] and closed the open item\ncorresponding to it.\n\n[1] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=6f4bdf81529fdaf6744875b0be99ecb9bfb3b7e0\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Jun 2021 08:34:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure while streaming toasted data"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI found this when reading the related code. Here is the scenario:\n\nbool\nRegisterSyncRequest(const FileTag *ftag, SyncRequestType type,\n bool retryOnError)\n\nFor the case retryOnError is true, the function would in loop call\nForwardSyncRequest() until it succeeds, but in ForwardSyncRequest(),\nwe can see if we run into the below branch, RegisterSyncRequest() will\nneed to loop until the checkpointer absorbs the existing requests so\nForwardSyncRequest() might hang for some time until a checkpoint\nrequest is triggered. This scenario seems to be possible in theory\nthough the chance is not high.\n\nForwardSyncRequest():\n\n if (CheckpointerShmem->checkpointer_pid == 0 ||\n (CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&\n !CompactCheckpointerRequestQueue()))\n {\n /*\n * Count the subset of writes where backends have to do their own\n * fsync\n */\n if (!AmBackgroundWriterProcess())\n CheckpointerShmem->num_backend_fsync++;\n LWLockRelease(CheckpointerCommLock);\n return false;\n }\n\nOne fix is to add below similar code in RegisterSyncRequest(), trigger\na checkpoint for the scenario.\n\n// checkpointer_triggered: variable for one trigger only.\nif (!ret && retryOnError && ProcGlobal->checkpointerLatch &&\n!checkpointer_triggered)\n SetLatch(ProcGlobal->checkpointerLatch);\n\nAny comments?\n\nRegards,\nPaul Guo (Vmware)\n\n\n",
"msg_date": "Tue, 25 May 2021 16:39:12 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Tue, May 25, 2021 at 4:39 PM Paul Guo <paulguo@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> I found this when reading the related code. Here is the scenario:\n>\n> bool\n> RegisterSyncRequest(const FileTag *ftag, SyncRequestType type,\n> bool retryOnError)\n>\n> For the case retryOnError is true, the function would in loop call\n> ForwardSyncRequest() until it succeeds, but in ForwardSyncRequest(),\n> we can see if we run into the below branch, RegisterSyncRequest() will\n> need to loop until the checkpointer absorbs the existing requests so\n> ForwardSyncRequest() might hang for some time until a checkpoint\n> request is triggered. This scenario seems to be possible in theory\n> though the chance is not high.\n\nIt seems like a really unlikely scenario, but maybe possible if you\nuse a lot of unlogged tables maybe (as you could eventually\ndirty/evict more than NBuffers buffers without triggering enough WALs\nactivity to trigger a checkpoint with any sane checkpoint\nconfiguration).\n\n> ForwardSyncRequest():\n>\n> if (CheckpointerShmem->checkpointer_pid == 0 ||\n> (CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&\n> !CompactCheckpointerRequestQueue()))\n> {\n> /*\n> * Count the subset of writes where backends have to do their own\n> * fsync\n> */\n> if (!AmBackgroundWriterProcess())\n> CheckpointerShmem->num_backend_fsync++;\n> LWLockRelease(CheckpointerCommLock);\n> return false;\n> }\n>\n> One fix is to add below similar code in RegisterSyncRequest(), trigger\n> a checkpoint for the scenario.\n>\n> // checkpointer_triggered: variable for one trigger only.\n> if (!ret && retryOnError && ProcGlobal->checkpointerLatch &&\n> !checkpointer_triggered)\n> SetLatch(ProcGlobal->checkpointerLatch);\n>\n> Any comments?\n\nIt looks like you intended to set the checkpointer_triggered var but\ndidn't. Also this will wake up the checkpointer but won't force a\ncheckpoint (unlike RequestCheckpoint()). 
It may be a good thing\nthough as it would only absorb the requests and go back to sleep if no\nother threshold is reached. Apart from the implementation details it\nseems like it could help in this unlikely event.\n\n\n",
"msg_date": "Thu, 27 May 2021 19:12:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 7:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Tue, May 25, 2021 at 4:39 PM Paul Guo <paulguo@gmail.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > I found this when reading the related code. Here is the scenario:\n> >\n> > bool\n> > RegisterSyncRequest(const FileTag *ftag, SyncRequestType type,\n> > bool retryOnError)\n> >\n> > For the case retryOnError is true, the function would in loop call\n> > ForwardSyncRequest() until it succeeds, but in ForwardSyncRequest(),\n> > we can see if we run into the below branch, RegisterSyncRequest() will\n> > need to loop until the checkpointer absorbs the existing requests so\n> > ForwardSyncRequest() might hang for some time until a checkpoint\n> > request is triggered. This scenario seems to be possible in theory\n> > though the chance is not high.\n>\n> It seems like a really unlikely scenario, but maybe possible if you\n> use a lot of unlogged tables maybe (as you could eventually\n> dirty/evict more than NBuffers buffers without triggering enough WALs\n> activity to trigger a checkpoint with any sane checkpoint\n> configuration).\n\nRegisterSyncRequest() handles SYNC_UNLINK_REQUEST and\nSYNC_FORGET_REQUEST scenarios, besides the usual SYNC_REQUEST type for\nbuffer sync.\n\n> > ForwardSyncRequest():\n> >\n> > if (CheckpointerShmem->checkpointer_pid == 0 ||\n> > (CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&\n> > !CompactCheckpointerRequestQueue()))\n> > {\n> > /*\n> > * Count the subset of writes where backends have to do their own\n> > * fsync\n> > */\n> > if (!AmBackgroundWriterProcess())\n> > CheckpointerShmem->num_backend_fsync++;\n> > LWLockRelease(CheckpointerCommLock);\n> > return false;\n> > }\n> >\n> > One fix is to add below similar code in RegisterSyncRequest(), trigger\n> > a checkpoint for the scenario.\n> >\n> > // checkpointer_triggered: variable for one trigger only.\n> > if (!ret && retryOnError && ProcGlobal->checkpointerLatch &&\n> 
> !checkpointer_triggered)\n> > SetLatch(ProcGlobal->checkpointerLatch);\n> >\n> > Any comments?\n>\n> It looks like you intended to set the checkpointer_triggered var but\n\nYes this is just pseudo code.\n\n> didn't. Also this will wake up the checkpointer but won't force a\n> checkpoint (unlike RequestCheckpoint()). It may be a good thing\n\nI do not expect an immediate checkpoint. AbsorbSyncRequests()\nis enough since after that RegisterSyncRequest() could finish.\n\n> though as it would only absorb the requests and go back to sleep if no\n> other threshold is reached. Apart from the implementation details it\n> seems like it could help in this unlikely event.\n\n\n\n-- \nPaul Guo (Vmware)\n\n\n",
"msg_date": "Thu, 27 May 2021 21:59:10 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 9:59 PM Paul Guo <paulguo@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 7:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Tue, May 25, 2021 at 4:39 PM Paul Guo <paulguo@gmail.com> wrote:\n> > >\n> > > Hi hackers,\n> > >\n> > > I found this when reading the related code. Here is the scenario:\n> > >\n> > > bool\n> > > RegisterSyncRequest(const FileTag *ftag, SyncRequestType type,\n> > > bool retryOnError)\n> > >\n> > > For the case retryOnError is true, the function would in loop call\n> > > ForwardSyncRequest() until it succeeds, but in ForwardSyncRequest(),\n> > > we can see if we run into the below branch, RegisterSyncRequest() will\n> > > need to loop until the checkpointer absorbs the existing requests so\n> > > ForwardSyncRequest() might hang for some time until a checkpoint\n> > > request is triggered. This scenario seems to be possible in theory\n> > > though the chance is not high.\n> >\n> > It seems like a really unlikely scenario, but maybe possible if you\n> > use a lot of unlogged tables maybe (as you could eventually\n> > dirty/evict more than NBuffers buffers without triggering enough WALs\n> > activity to trigger a checkpoint with any sane checkpoint\n> > configuration).\n>\n> RegisterSyncRequest() handles SYNC_UNLINK_REQUEST and\n> SYNC_FORGET_REQUEST scenarios, besides the usual SYNC_REQUEST type for\n> buffer sync.\n>\n> > > ForwardSyncRequest():\n> > >\n> > > if (CheckpointerShmem->checkpointer_pid == 0 ||\n> > > (CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&\n> > > !CompactCheckpointerRequestQueue()))\n> > > {\n> > > /*\n> > > * Count the subset of writes where backends have to do their own\n> > > * fsync\n> > > */\n> > > if (!AmBackgroundWriterProcess())\n> > > CheckpointerShmem->num_backend_fsync++;\n> > > LWLockRelease(CheckpointerCommLock);\n> > > return false;\n> > > }\n> > >\n> > > One fix is to add below similar code in RegisterSyncRequest(), trigger\n> > > a 
checkpoint for the scenario.\n> > >\n> > > // checkpointer_triggered: variable for one trigger only.\n> > > if (!ret && retryOnError && ProcGlobal->checkpointerLatch &&\n> > > !checkpointer_triggered)\n> > > SetLatch(ProcGlobal->checkpointerLatch);\n> > >\n> > > Any comments?\n> >\n> > It looks like you intended to set the checkpointer_triggered var but\n>\n> Yes this is just pseudo code.\n>\n> > didn't. Also this will wake up the checkpointer but won't force a\n> > checkpoint (unlike RequestCheckpoint()). It may be a good thing\n>\n> I do not expect an immediate checkpoint. AbsorbSyncRequests()\n> is enough since after that RegisterSyncRequest() could finish.\n>\n> > though as it would only absorb the requests and go back to sleep if no\n> > other threshold is reached. Apart from the implementation details it\n> > seems like it could help in this unlikely event.\n>\n\nAlso note that ForwardSyncRequest() does wake up the checkpointer if\nit thinks the requests in shared memory are \"too full\", but does not\nwake up when the request queue is actually full. This does not seem reasonable.\nSee the code below in ForwardSyncRequest():\n\n /* If queue is more than half full, nudge the checkpointer to empty it */\n too_full = (CheckpointerShmem->num_requests >=\n CheckpointerShmem->max_requests / 2);\n\n /* ... but not till after we release the lock */\n if (too_full && ProcGlobal->checkpointerLatch)\n SetLatch(ProcGlobal->checkpointerLatch);\n\n-- \nPaul Guo (Vmware)\n\n\n",
"msg_date": "Thu, 27 May 2021 22:04:53 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 9:59 PM Paul Guo <paulguo@gmail.com> wrote:\n>\n> > It seems like a really unlikely scenario, but maybe possible if you\n> > use a lot of unlogged tables maybe (as you could eventually\n> > dirty/evict more than NBuffers buffers without triggering enough WALs\n> > activity to trigger a checkpoint with any sane checkpoint\n> > configuration).\n>\n> RegisterSyncRequest() handles SYNC_UNLINK_REQUEST and\n> SYNC_FORGET_REQUEST scenarios, besides the usual SYNC_REQUEST type for\n> buffer sync.\n\nI know, but the checkpointer can hold up to NBuffers requests, so I\nhighly doubt that you can end up filling the buffer with those.\n\n> > > ForwardSyncRequest():\n> > >\n> > > if (CheckpointerShmem->checkpointer_pid == 0 ||\n> > > (CheckpointerShmem->num_requests >= CheckpointerShmem->max_requests &&\n> > > !CompactCheckpointerRequestQueue()))\n> > > {\n> > > /*\n> > > * Count the subset of writes where backends have to do their own\n> > > * fsync\n> > > */\n> > > if (!AmBackgroundWriterProcess())\n> > > CheckpointerShmem->num_backend_fsync++;\n> > > LWLockRelease(CheckpointerCommLock);\n> > > return false;\n> > > }\n> > >\n> > > One fix is to add below similar code in RegisterSyncRequest(), trigger\n> > > a checkpoint for the scenario.\n> > >\n> > > // checkpointer_triggered: variable for one trigger only.\n> > > if (!ret && retryOnError && ProcGlobal->checkpointerLatch &&\n> > > !checkpointer_triggered)\n> > > SetLatch(ProcGlobal->checkpointerLatch);\n> > >\n> > > Any comments?\n> >\n> > It looks like you intended to set the checkpointer_triggered var but\n>\n> Yes this is just pseduo code.\n>\n> > didn't. Also this will wake up the checkpointer but won't force a\n> > checkpoint (unlike RequestCheckpoint()). It may be a good thing\n>\n> I do not expect an immediate checkpoint. 
AbsorbSyncRequests()\n> is enough since after that RegisterSyncRequest() could finish.\n\nYou said \"trigger a checkpoint\", which sounded more like forcing a\ncheckpoint rather than waking up the checkpointer so that it can\nabsorb the pending requests, so it seems worth mentioning what it\nwould really do.\n\n\n",
"msg_date": "Thu, 27 May 2021 22:19:57 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "> You said \"trigger a checkpoint\", which sounded more like forcing a\n> checkpoint rather than waking up the checkpointer so that it can\n> absorb the pending requests, so it seems worth mentioning what it\n> would really do.\n\nYes, it was not accurate. Thanks for the clarification.\n\n-- \nPaul Guo (Vmware)\n\n\n",
"msg_date": "Thu, 27 May 2021 22:22:14 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:05 PM Paul Guo <paulguo@gmail.com> wrote:\n>\n> Also note that ForwardSyncRequest() does wake up the checkpointer if\n> it thinks the requests in shared memory are \"too full\", but does not\n> wake up when the request is actually full. This does not seem to be reasonable.\n> See below code in ForwardSyncRequest\n>\n> /* If queue is more than half full, nudge the checkpointer to empty it */\n> too_full = (CheckpointerShmem->num_requests >=\n> CheckpointerShmem->max_requests / 2);\n>\n> /* ... but not till after we release the lock */\n> if (too_full && ProcGlobal->checkpointerLatch)\n> SetLatch(ProcGlobal->checkpointerLatch);\n\nAh indeed. Well, it means that the checkpointer is woken up early\nenough to avoid reaching that point. I'm not sure that it's actually\npossible to reach a point where the list is full and the checkpointer\nis sitting idle.\n\n\n",
"msg_date": "Thu, 27 May 2021 22:23:42 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 10:05 PM Paul Guo <paulguo@gmail.com> wrote:\n> >\n> > Also note that ForwardSyncRequest() does wake up the checkpointer if\n> > it thinks the requests in shared memory are \"too full\", but does not\n> > wake up when the request is actually full. This does not seem to be reasonable.\n> > See below code in ForwardSyncRequest\n> >\n> > /* If queue is more than half full, nudge the checkpointer to empty it */\n> > too_full = (CheckpointerShmem->num_requests >=\n> > CheckpointerShmem->max_requests / 2);\n> >\n> > /* ... but not till after we release the lock */\n> > if (too_full && ProcGlobal->checkpointerLatch)\n> > SetLatch(ProcGlobal->checkpointerLatch);\n>\n> Ah indeed. Well it means that the checkpointer is woken up early\n> enough to avoid reaching that point. I'm not sure that it's actually\n> possible to reach a point where the list is full and the checkpointer\n> is sitting idle.\n\nIn theory this is possible (when the system is under heavy parallel write),\nor else we could remove that part of the code (CompactCheckpointerRequestQueue())\n:-), though the chance is not high.\n\nIf we encounter this issue, the affected queries would suddenly hang\nuntil the next checkpointer wakeup.\n\n\n",
"msg_date": "Fri, 28 May 2021 11:43:09 +0800",
"msg_from": "Paul Guo <paulguo@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: sync request forward function ForwardSyncRequest() might hang for\n some time in a corner case?"
}
] |
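The queue behavior debated in this thread can be sketched in a few lines of Python. This is an illustrative model only; all names here are invented for the sketch, and the real logic lives in ForwardSyncRequest() and CompactCheckpointerRequestQueue() in PostgreSQL's checkpointer code.

```python
# Toy model of the checkpointer request queue discussed above: a bounded
# queue where a full insert first tries to compact away duplicate requests,
# and the consumer is nudged once the queue is at least half full.

class SyncRequestQueue:
    def __init__(self, max_requests):
        self.max_requests = max_requests
        self.requests = []           # pending (file_tag, request_type) pairs
        self.consumer_woken = False  # stands in for SetLatch(checkpointerLatch)

    def _compact(self):
        """Drop duplicate requests; return True if any slots were freed."""
        seen = set()
        compacted = []
        for req in self.requests:
            if req not in seen:
                seen.add(req)
                compacted.append(req)
        freed = len(self.requests) > len(compacted)
        self.requests = compacted
        return freed

    def forward_request(self, req):
        """Mimics ForwardSyncRequest(): fail when full and compaction frees nothing."""
        if len(self.requests) >= self.max_requests and not self._compact():
            return False  # caller must handle the fsync itself, or keep retrying
        self.requests.append(req)
        # Nudge the consumer once the queue is at least half full.
        if len(self.requests) >= self.max_requests // 2:
            self.consumer_woken = True
        return True
```

In this model, a backend retrying forward_request() against a full queue of unique requests spins until the consumer drains it — the corner case described in the thread; waking the consumer on failure, not only at the half-full mark, is the proposed escape hatch.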
[
{
"msg_contents": "In the attached patch, the error message was checking that the \nstructures returned from the parser matched expectations. That's \nsomething we usually use assertions for, not a full user-facing error \nmessage. So I replaced that with an assertion (hidden inside \nlfirst_node()).",
"msg_date": "Tue, 25 May 2021 11:28:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Replace run-time error check with assertion"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> In the attached patch, the error message was checking that the \n> structures returned from the parser matched expectations. That's \n> something we usually use assertions for, not a full user-facing error \n> message. So I replaced that with an assertion (hidden inside \n> lfirst_node()).\n\nWorks for me. It's certainly silly to use a translatable ereport\nrather than elog for this.\n\nLocalizing those variables some more looks sane too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 10:51:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Replace run-time error check with assertion"
}
] |
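The distinction discussed in this thread — assertions for internal invariants versus real errors for user-triggerable conditions — can be transposed into a small, hypothetical Python sketch (set_option and apply_parsed_options are invented names, not PostgreSQL code):

```python
# "Cannot happen" invariants inside our own code are assertions (stripped
# under `python -O`), while conditions a user can actually trigger raise
# real, user-facing errors with a useful message.

def set_option(options, name, value):
    """Apply one option; `options` is a dict of known settings."""
    # User-facing check: bad input from the outside world deserves a real error.
    if name not in options:
        raise ValueError(f"unrecognized option: {name}")
    options[name] = value

def apply_parsed_options(options, parsed):
    for item in parsed:
        # Internal invariant: our own parser promised (name, value) pairs.
        # A violation is a bug in this program, so an assertion suffices --
        # the moral equivalent of the assertion hidden inside lfirst_node().
        assert isinstance(item, tuple) and len(item) == 2, "parser bug"
        name, value = item
        set_option(options, name, value)
```

The same rule of thumb applies in C: a translatable ereport() for conditions users can hit, elog() or Assert() for conditions that only a bug can produce.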
[
{
"msg_contents": "My question can be demonstrated with the below example:\n\ncreate table m1(a int, b int);\nexplain (costs off) select (select count(*) filter (*where true*) from m1\nt1)\nfrom m1 t2 where t2.b % 2 = 1;\n\n QUERY PLAN\n---------------------------------\n Seq Scan on m1 t2\n Filter: ((b % 2) = 1)\n InitPlan 1 (returns $0)\n -> Aggregate\n -> Seq Scan on m1 t1\n(5 rows)\n\nThe above is good to me. The aggregate is run in the subPlan/InitPlan.\n\nexplain (costs off) select (select count(*) filter (*where t2.b = 1*) from\nm1 t1)\nfrom m1 t2 where t2.b % 2 = 1;\n\n QUERY PLAN\n-------------------------------\n Aggregate\n -> Seq Scan on m1 t2\n Filter: ((b % 2) = 1)\n SubPlan 1\n -> Seq Scan on m1 t1\n(5 rows)\n\nThis one is too confusing to me since the Aggregate happens\non t2 rather than t1. What happens here? Would this query\ngenerate 1 row all the time like SELECT aggfunc(a) FROM t?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 25 May 2021 18:28:40 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "How can the Aggregation move to the outer query"
},
{
"msg_contents": "On Tue, 25 May 2021 at 22:28, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> explain (costs off) select (select count(*) filter (where t2.b = 1) from m1 t1)\n> from m1 t2 where t2.b % 2 = 1;\n>\n> QUERY PLAN\n> -------------------------------\n> Aggregate\n> -> Seq Scan on m1 t2\n> Filter: ((b % 2) = 1)\n> SubPlan 1\n> -> Seq Scan on m1 t1\n> (5 rows)\n>\n> This one is too confusing to me since the Aggregate happens\n> on t2 rather than t1. What happens here? Would this query\n> generate 1 row all the time like SELECT aggfunc(a) FROM t?\n\nI think you're misreading the plan. There's a scan on t2 with a\nsubplan then an aggregate on top of that. Because you made the\nsubquery correlated by adding t2.b, it cannot be executed as an\ninitplan.\n\nYou might see what's going on better if you add VERBOSE to the EXPLAIN options.\n\nDavid\n\n\n",
"msg_date": "Tue, 25 May 2021 23:42:44 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How can the Aggregation move to the outer query"
},
{
"msg_contents": "On Tue, May 25, 2021 at 7:42 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 25 May 2021 at 22:28, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > explain (costs off) select (select count(*) filter (where t2.b = 1)\n> from m1 t1)\n> > from m1 t2 where t2.b % 2 = 1;\n> >\n> > QUERY PLAN\n> > -------------------------------\n> > Aggregate\n> > -> Seq Scan on m1 t2\n> > Filter: ((b % 2) = 1)\n> > SubPlan 1\n> > -> Seq Scan on m1 t1\n> > (5 rows)\n> >\n> > This one is too confusing to me since the Aggregate happens\n> > on t2 rather than t1. What happens here? Would this query\n> > generate 1 row all the time like SELECT aggfunc(a) FROM t?\n>\n> I think you're misreading the plan. There's a scan on t2 with a\n> subplan then an aggregate on top of that. Because you made the\n> subquery correlated by adding t2.b, it cannot be executed as an\n> initplan.\n>\n> You might see what's going on better if you add VERBOSE to the EXPLAIN options.\n>\n>\nThanks, VERBOSE does provide more information.\n\n Aggregate\n Output: (SubPlan 1)\n -> Seq Scan on public.m1 t2\n Output: t2.a, t2.b\n Filter: ((t2.b % 2) = 1)\n SubPlan 1\n -> Seq Scan on public.m1 t1\n Output: count(*) FILTER (WHERE (t2.b = 1))\n(8 rows)\n\nI am still confused about SubPlan 1: how can it output a\ncount(*) without an Aggregate under it? (If this is not easy to\nexplain, I can try more by myself later.)\n\nBut after all, I found this case when working on the UniqueKey stuff.\nI have a rule that if (query->hasAgg && !query->groupClause), then\nthere is only 1 row for this query. In the above case, the outer query\n(t2) hasAgg=true and the subplan's hasAgg=false, which does not look right\nto me. I think hasAgg=true should be in the subquery and the outer\nquery should have hasAgg=false.
anything I missed?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Tue, 25 May 2021 22:20:01 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How can the Aggregation move to the outer query"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 25 May 2021 at 22:28, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> explain (costs off) select (select count(*) filter (where t2.b = 1) from m1 t1)\n>> from m1 t2 where t2.b % 2 = 1;\n>> \n>> This one is too confusing to me since the Aggregate happens\n>> on t2 rather than t1. What happens here? Would this query\n>> generate 1 row all the time like SELECT aggfunc(a) FROM t?\n\n> I think you're misreading the plan. There's a scan on t2 with a\n> subplan then an aggregate on top of that. Because you made the\n> subquery correlated by adding t2.b, it cannot be executed as an\n> initplan.\n\nAlso keep in mind that adding that filter clause completely changed\nthe meaning of the aggregate. Aggregates belong to the lowest\nquery level containing any Var used in their arguments, so that\nwhere in your original query the count(*) was an aggregate of the\nsubquery, now it's an aggregate of the outer query (and the subquery\nnow perceives it as a constant outer reference). AFAIR this is per\nSQL spec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 10:23:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How can the Aggregation move to the outer query"
},
{
"msg_contents": "On Tue, May 25, 2021 at 10:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > On Tue, 25 May 2021 at 22:28, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >> explain (costs off) select (select count(*) filter (where t2.b = 1)\n> from m1 t1)\n> >> from m1 t2 where t2.b % 2 = 1;\n> >>\n> >> This one is too confusing to me since the Aggregate happens\n> >> on t2 rather than t1. What happens here? Would this query\n> >> generate 1 row all the time like SELECT aggfunc(a) FROM t?\n>\n> > I think you're misreading the plan. There's a scan on t2 with a\n> > subplan then an aggregate on top of that. Because you made the\n> > subquery correlated by adding t2.b, it cannot be executed as an\n> > initplan.\n>\n> Also keep in mind that adding that filter clause completely changed\n> the meaning of the aggregate. Aggregates belong to the lowest\n> query level containing any Var used in their arguments, so that\n> where in your original query the count(*) was an aggregate of the\n> subquery, now it's an aggregate of the outer query (and the subquery\n> now perceives it as a constant outer reference). AFAIR this is per\n> SQL spec.\n>\n\nWell, finally I know it's an aggregate of the outer query.. Thank you for\nthe explanation! so I would say the result set has 1 row for that query\nall the time.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Wed, 26 May 2021 00:25:41 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How can the Aggregation move to the outer query"
}
] |
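The rule Tom cites — an aggregate belongs to the query level of the Vars used in its arguments — can be modeled with a short Python sketch. This is a simplified illustration under stated assumptions, not PostgreSQL's actual parse_agg.c logic; "varlevelsup" counts how many query levels up a variable's table sits (0 = the query where the aggregate is written, 1 = one level out, and so on).

```python
# Simplified model of aggregate level assignment: the aggregate attaches
# to the innermost query that supplies any of its variables, i.e. the
# smallest "levels up" distance among the Vars it references.  This
# loosely mirrors what PostgreSQL's parser computes as agglevelsup.

def aggregate_level(var_levels):
    """Return how many levels up the aggregate is evaluated.

    var_levels: the varlevelsup of every Var appearing in the aggregate's
    arguments and FILTER clause.  With no Vars at all, the aggregate stays
    at the level where it syntactically appears (0).
    """
    if not var_levels:
        return 0
    return min(var_levels)
```

For Andy's query, count(*) FILTER (WHERE t2.b = 1) is written inside the subquery but references only t2.b, whose varlevelsup there is 1 — so aggregate_level([1]) is 1 and the aggregate is evaluated in the outer query, matching the plan in the thread.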
[
{
"msg_contents": "Hi hackers,\n\nBack in 2016 while being at PostgresPro I developed the ZSON extension [1].\nThe extension introduces the new ZSON type, which is 100% compatible with\nJSONB but uses a shared dictionary of strings most frequently used in given\nJSONB documents for compression. These strings are replaced with integer\nIDs. Afterward, PGLZ (and now LZ4) applies if the document is large enough\nby common PostgreSQL logic. Under certain conditions (many large\ndocuments), this saves disk space, memory and increases the overall\nperformance. More details can be found in README on GitHub.\n\nThe extension was accepted warmly and instantaneously I got several\nrequests to submit it to /contrib/ so people using Amazon RDS and similar\nservices could enjoy it too. Back then I was not sure if the extension is\nmature enough and if it lacks any additional features required to solve the\nreal-world problems of the users. Time showed, however, that people are\nhappy with the extension as it is. There were several minor issues\ndiscovered, but they were fixed back in 2017. The extension never\nexperienced any compatibility problems with the next major release of\nPostgreSQL.\n\nSo my question is if the community may consider adding ZSON to /contrib/.\nIf this is the case I will add this thread to the nearest CF and submit a\ncorresponding patch.\n\n[1]: https://github.com/postgrespro/zson\n\n-- \nBest regards,\nAleksander Alekseev\nOpen-Source PostgreSQL Contributor at Timescale",
"msg_date": "Tue, 25 May 2021 13:55:13 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Add ZSON extension to /contrib/"
},
{
"msg_contents": "On Tue, May 25, 2021 at 12:55 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The extension introduces the new ZSON type, which is 100% compatible with JSONB but uses a shared dictionary of strings most frequently used in given JSONB documents for compression. These strings are replaced with integer IDs. Afterward, PGLZ (and now LZ4) applies if the document is large enough by common PostgreSQL logic. Under certain conditions (many large documents), this saves disk space, memory and increases the overall performance. More details can be found in README on GitHub.\n>\n> The extension was accepted warmly and instantaneously I got several requests to submit it to /contrib/ so people using Amazon RDS and similar services could enjoy it too. Back then I was not sure if the extension is mature enough and if it lacks any additional features required to solve the real-world problems of the users. Time showed, however, that people are happy with the extension as it is. There were several minor issues discovered, but they were fixed back in 2017. The extension never experienced any compatibility problems with the next major release of PostgreSQL.\n>\n> So my question is if the community may consider adding ZSON to /contrib/. If this is the case I will add this thread to the nearest CF and submit a corresponding patch.\n\nIf the extension is mature enough, why make it an extension in\ncontrib, and not instead either enhance the existing jsonb type with\nit or make it a built-in type?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Tue, 25 May 2021 13:32:37 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\nOn 5/25/21 6:55 AM, Aleksander Alekseev wrote:\n> Hi hackers,\n>\n> Back in 2016 while being at PostgresPro I developed the ZSON extension\n> [1]. The extension introduces the new ZSON type, which is 100%\n> compatible with JSONB but uses a shared dictionary of strings most\n> frequently used in given JSONB documents for compression. These\n> strings are replaced with integer IDs. Afterward, PGLZ (and now LZ4)\n> applies if the document is large enough by common PostgreSQL logic.\n> Under certain conditions (many large documents), this saves disk\n> space, memory and increases the overall performance. More details can\n> be found in README on GitHub.\n>\n> The extension was accepted warmly and instantaneously I got several\n> requests to submit it to /contrib/ so people using Amazon RDS and\n> similar services could enjoy it too. Back then I was not sure if the\n> extension is mature enough and if it lacks any additional features\n> required to solve the real-world problems of the users. Time showed,\n> however, that people are happy with the extension as it is. There were\n> several minor issues discovered, but they were fixed back in 2017. The\n> extension never experienced any compatibility problems with the next\n> major release of PostgreSQL.\n>\n> So my question is if the community may consider adding ZSON to\n> /contrib/. If this is the case I will add this thread to the nearest\n> CF and submit a corresponding patch.\n>\n> [1]: https://github.com/postgrespro/zson\n> <https://github.com/postgrespro/zson>\n>\nWe (2ndQuadrant, now part of EDB) made some enhancements to Zson a few years ago, and I have permission to contribute those if this proposal is adopted. From the readme:\n\n1. There is an option to make zson_learn only process object keys,\nrather than field values.\n\n```\nselect zson_learn('{{table1,col1}}',true);\n```\n\n2. 
Strings with an octet-length less than 3 are not processed.\nSince strings are encoded as 2 bytes and then there needs to be\nanother byte with the length of the following skipped bytes, encoding\nvalues less than 3 bytes is going to be a net loss.\n\n3. There is a new function to create a dictionary directly from an\narray of text, rather than using the learning code:\n\n```\nselect zson_create_dictionary(array['word1','word2']::text[]);\n```\n\n4. There is a function to augment the current dictionary from an array of text:\n\n```\nselect zson_extend_dictionary(array['value1','value2','value3']::text[]);\n```\n\nThis is particularly useful for adding common field prefixes or values. A good\nexample of field prefixes is URL values where the first part of the URL is\nfairly constrained but the last part is not.\n\n\ncheers\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 25 May 2021 16:08:24 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, May 25, 2021 at 12:55 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The extension introduces the new ZSON type, which is 100% compatible with JSONB but uses a shared dictionary of strings most frequently used in given JSONB documents for compression.\n\n> If the extension is mature enough, why make it an extension in\n> contrib, and not instead either enhance the existing jsonb type with\n> it or make it a built-in type?\n\nIMO we have too d*mn many JSON types already. If we can find a way\nto shoehorn this optimization into JSONB, that'd be great. Otherwise\nI do not think it's worth the added user confusion.\n\nAlso, even if ZSON was \"100% compatible with JSONB\" back in 2016,\na whole lot of features have been added since then. Having to\nduplicate all that code again for a different data type is not\nsomething I want to see us doing. So that's an independent reason\nfor wanting to hide this under the existing type not make a new one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 16:10:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "On Tue, 25 May 2021 at 13:32, Magnus Hagander <magnus@hagander.net> wrote:\n>\n> On Tue, May 25, 2021 at 12:55 PM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> >\n> > Hi hackers,\n> >\n> > Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The extension introduces the new ZSON type, which is 100% compatible with JSONB but uses a shared dictionary of strings most frequently used in given JSONB documents for compression. These strings are replaced with integer IDs. Afterward, PGLZ (and now LZ4) applies if the document is large enough by common PostgreSQL logic. Under certain conditions (many large documents), this saves disk space, memory and increases the overall performance. More details can be found in README on GitHub.\n> >\n> > The extension was accepted warmly and instantaneously I got several requests to submit it to /contrib/ so people using Amazon RDS and similar services could enjoy it too.\n\nDo note that e.g. postgis is not in contrib, but is available in e.g. RDS.\n\n> > Back then I was not sure if the extension is mature enough and if it lacks any additional features required to solve the real-world problems of the users. Time showed, however, that people are happy with the extension as it is. There were several minor issues discovered, but they were fixed back in 2017. The extension never experienced any compatibility problems with the next major release of PostgreSQL.\n> >\n> > So my question is if the community may consider adding ZSON to /contrib/. If this is the case I will add this thread to the nearest CF and submit a corresponding patch.\n\nI like the idea of the ZSON type, but I'm somewhat disappointed by its\ncurrent limitations:\n\n- There is only one active shared dictionary (as a user I would want\ndistinct dictionaries for each use case, similar to ENUM: each ENUM\ntype has their own limit of 2**31 (?) 
values)\n- There is no provided method to manually specify the dictionary (only\n\"zson_learn\", which constructs a new dictionary)\n- You cannot add to the dictionary (equiv. to ALTER TYPE enum_type ADD\nVALUE), you must create a new one.\n\nApart from that, I noticed the following more technical points, in case\nyou submit it as-is as a patch:\n\n- Each dictionary uses a lot of memory, regardless of the number of\nactual stored keys. For 32-bit systems the base usage of a dictionary\nwithout entries ((sizeof(Word) + sizeof(uint16)) * 2**16) would be\nalmost 1MB, and for 64-bit it would be 1.7MB. That is significantly\nmore than I'd want to install.\n- You call gettimeofday() in both dict_get and in get_current_dict_id.\nThese functions can be called in short and tight loops (for small ZSON\nfields), in which case it would add significant overhead through the\nimplied syscalls.\n- The compression method you've chosen seems to extract most common\nstrings from the JSONB table, and then use that as a pre-built\ndictionary for doing some dictionary encoding on the on-disk format of\nthe jsonb structure. Although I fully understand that this makes the\nsystem quite easy to reason about, it does mean that you're deTOASTing\nthe full ZSON field, and that the stored bytestring will not be\nstructured / doesn't work well with current debuggers.\n\n> If the extension is mature enough, why make it an extension in\n> contrib, and not instead either enhance the existing jsonb type with\n> it or make it a built-in type?\n\nI don't think that this datatype (that supplies a basic but effective\ncompression algorithm over JSONB) is fit for core as-is.\n\nI have also thought about building a similar type, but one that would\nbe more like ENUM: An extension on the JSONB datatype, which has some\nlist of common 'well-known' values that will be substituted, and to\nwhich later more substitutable values can be added (e.g. CREATE TYPE\n... 
AS JSONB_DICTIONARY ('\"commonly_used_key\"',\n'\"very_long_string_that_appears_often\"', '[{\"structure\": \"that\",\n\"appears\": \"often\"}]') or something similar). That would leave JSONB\njust as a JSONB_DICTIONARY type without any substitutable values.\n\nThese specialized JSONB types could then be used as a specification\nfor table columns, custom types, et cetera. Some of the reasons I've\nnot yet built such type is me not being familiar with the jsonb- and\nenum-code (which I suspect to be critical for an efficient\nimplementation of such type), although whilst researching I've noticed\nthat it is possible to use most of the JSONB infrastructure / read\nolder jsonb values, as there are still some JEntry type masks\navailable which could flag such substitutions.\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Tue, 25 May 2021 22:19:52 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\nOn 5/25/21 4:10 PM, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> On Tue, May 25, 2021 at 12:55 PM Aleksander Alekseev\n>> <aleksander@timescale.com> wrote:\n>>> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The extension introduces the new ZSON type, which is 100% compatible with JSONB but uses a shared dictionary of strings most frequently used in given JSONB documents for compression.\n>> If the extension is mature enough, why make it an extension in\n>> contrib, and not instead either enhance the existing jsonb type with\n>> it or make it a built-in type?\n> IMO we have too d*mn many JSON types already. If we can find a way\n> to shoehorn this optimization into JSONB, that'd be great. Otherwise\n> I do not think it's worth the added user confusion.\n>\n> Also, even if ZSON was \"100% compatible with JSONB\" back in 2016,\n> a whole lot of features have been added since then. Having to\n> duplicate all that code again for a different data type is not\n> something I want to see us doing. So that's an independent reason\n> for wanting to hide this under the existing type not make a new one.\n\n\n\nI take your point. However, there isn't really any duplication. It's\nhandled by this:\n\n\n CREATE FUNCTION jsonb_to_zson(jsonb)\n RETURNS zson\n AS 'MODULE_PATHNAME'\n LANGUAGE C STRICT IMMUTABLE;\n\n CREATE FUNCTION zson_to_jsonb(zson)\n RETURNS jsonb\n AS 'MODULE_PATHNAME'\n LANGUAGE C STRICT IMMUTABLE;\n\n CREATE CAST (jsonb AS zson) WITH FUNCTION jsonb_to_zson(jsonb) AS\n ASSIGNMENT;\n CREATE CAST (zson AS jsonb) WITH FUNCTION zson_to_jsonb(zson) AS\n IMPLICIT;\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 25 May 2021 16:24:56 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/25/21 4:10 PM, Tom Lane wrote:\n>> Also, even if ZSON was \"100% compatible with JSONB\" back in 2016,\n>> a whole lot of features have been added since then. Having to\n>> duplicate all that code again for a different data type is not\n>> something I want to see us doing. So that's an independent reason\n>> for wanting to hide this under the existing type not make a new one.\n\n> I take your point. However, there isn't really any duplication. It's\n> handled by [ creating a pair of casts ]\n\nIf that were an adequate solution then nobody would be unhappy about\njson vs jsonb. I don't think it really is satisfactory:\n\n* does nothing for user confusion (except maybe make it worse)\n\n* not terribly efficient\n\n* doesn't cover all cases, notably indexes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 16:31:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I like the idea of the ZSON type, but I'm somewhat disappointed by its\n> current limitations:\n\nI've not read the code, so maybe this thought is completely off-point,\nbut I wonder if anything could be learned from PostGIS. AIUI they\nhave developed the infrastructure needed to have auxiliary info\n(particularly, spatial reference data) attached to a geometry column,\nwithout duplicating it in every value of the column. Seems like that\nis a close analog of what's needed here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 16:35:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\nOn 5/25/21 4:31 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 5/25/21 4:10 PM, Tom Lane wrote:\n>>> Also, even if ZSON was \"100% compatible with JSONB\" back in 2016,\n>>> a whole lot of features have been added since then. Having to\n>>> duplicate all that code again for a different data type is not\n>>> something I want to see us doing. So that's an independent reason\n>>> for wanting to hide this under the existing type not make a new one.\n>> I take your point. However, there isn't really any duplication. It's\n>> handled by [ creating a pair of casts ]\n> If that were an adequate solution then nobody would be unhappy about\n> json vs jsonb. I don't think it really is satisfactory:\n>\n> * does nothing for user confusion (except maybe make it worse)\n>\n> * not terribly efficient\n>\n> * doesn't cover all cases, notably indexes.\n>\n> \t\t\t\n\n\nQuite so. To some extent it's a toy. But at least one of our customers\nhas found it useful, and judging by Aleksander's email they aren't\nalone. Your ideas downthread are probably a useful pointer of how we\nmight fruitfully proceed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 25 May 2021 17:06:29 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > I like the idea of the ZSON type, but I'm somewhat disappointed by its\n> > current limitations:\n> \n> I've not read the code, so maybe this thought is completely off-point,\n> but I wonder if anything could be learned from PostGIS. AIUI they\n> have developed the infrastructure needed to have auxiliary info\n> (particularly, spatial reference data) attached to a geometry column,\n> without duplicating it in every value of the column. Seems like that\n> is a close analog of what's needed here.\n\nErr, not exactly the same- there aren't *that* many SRIDs and therefore\nthey can be stuffed into the typemod (my, probably wrong, recollection\nwas that I actually pushed Paul in that direction due to being\nfrustrated with CHECK constraints they had been using previously..).\n\nNot something you could do with a dictionary as what's contemplated\nhere. I do agree that each jsonb/zson/whatever column should really be\nable to have its own dictionary though and maybe you could shove *which*\nof those dictionaries a given column uses into the typemod for that\ncolumn... In an ideal world, however, we wouldn't make a user have to\nactually do that though and instead we'd just build our own magically\nfor them when they use jsonb.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 25 May 2021 20:49:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Hi hackers,\n\nMany thanks for your feedback, I very much appreciate it!\n\n> If the extension is mature enough, why make it an extension in\n> contrib, and not instead either enhance the existing jsonb type with\n> it or make it a built-in type?\n\n> IMO we have too d*mn many JSON types already. If we can find a way\n> to shoehorn this optimization into JSONB, that'd be great. Otherwise\n> I do not think it's worth the added user confusion.\n\nMagnus, Tom,\n\nMy reasoning is that if the problem can be solved with an extension\nthere is little reason to modify the core. This seems to be in the\nspirit of PostgreSQL. If the community reaches the consensus to modify\nthe core to introduce a similar feature, we could discuss this as\nwell. It sounds like a lot of unnecessary work to me though (see\nbelow).\n\n> * doesn't cover all cases, notably indexes.\n\nTom,\n\nNot sure if I follow. What cases do you have in mind?\n\n> Do note that e.g. postgis is not in contrib, but is available in e.g. RDS.\n\nMatthias,\n\nGood point. I suspect that PostGIS is an exception though...\n\n> I like the idea of the ZSON type, but I'm somewhat disappointed by its\n> current limitations\n\nSeveral people suggested various enhancements right after learning\nabout ZSON. Time showed, however, that none of the real-world users\nreally need e.g. more than one common dictionary per database. I\nsuspect this is because no one has more than 2**16 repeatable unique\nstrings (one dictionary limitation) in their documents. Thus there is\nno benefit in having separate dictionaries and corresponding extra\ncomplexity.\n\n> - Each dictionary uses a lot of memory, regardless of the number of\n> actual stored keys. For 32-bit systems the base usage of a dictionary\n> without entries ((sizeof(Word) + sizeof(uint16)) * 2**16) would be\n> almost 1MB, and for 64-bit it would be 1.7MB. 
That is significantly\n> more than I'd want to install.\n\nYou are probably right on this one, this part could be optimized. I\nwill address this if we agree on submitting the patch.\n\n> - You call gettimeofday() in both dict_get and in get_current_dict_id.\n> These functions can be called in short and tight loops (for small GSON\n> fields), in which case it would add significant overhead through the\n> implied syscalls.\n\nI must admit, I'm not an expert in this area. My understanding is that\ngettimeofday() is implemented as a single virtual memory access on\nmodern operating systems, e.g. VDSO on Linux, thus it's very cheap.\nI'm not that sure about other supported platforms though. Probably\nworth investigating.\n\n> It does mean that you're deTOASTing\n> the full GSON field, and that the stored bytestring will not be\n> structured / doesn't work well with current debuggers.\n\nUnfortunately, I'm not very well aware of debugging tools in this\ncontext. Could you please name the debuggers I should take into\naccount?\n\n> We (2ndQuadrant, now part of EDB) made some enhancements to Zson a few years ago, and I have permission to contribute those if this proposal is adopted.\n\nAndrew,\n\nThat's great, and personally I very much like the enhancements you've\nmade. Purely out of curiosity, did they end up as part of\n2ndQuadrant / EDB products? I will be happy to accept a pull request\nwith these enhancements regardless of how the story with this proposal\nends up.\n\n> Quite so. To some extent it's a toy. But at least one of our customers\n> has found it useful, and judging by Aleksander's email they aren't\n> alone.\n\nIndeed, this is an extremely simple extension, ~500 effective lines of\ncode in C. It addresses a somewhat specific scenario, which, to my\nregret, doesn't seem to be uncommon. A pain-killer of a sort. In an\nideal world, people are supposed to simply normalize their data.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 26 May 2021 13:49:47 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\n\nOn 25.05.2021 13:55, Aleksander Alekseev wrote:\n> Hi hackers,\n>\n> Back in 2016 while being at PostgresPro I developed the ZSON extension \n> [1]. The extension introduces the new ZSON type, which is 100% \n> compatible with JSONB but uses a shared dictionary of strings most \n> frequently used in given JSONB documents for compression. These \n> strings are replaced with integer IDs. Afterward, PGLZ (and now LZ4) \n> applies if the document is large enough by common PostgreSQL logic. \n> Under certain conditions (many large documents), this saves disk \n> space, memory and increases the overall performance. More details can \n> be found in README on GitHub.\n>\n> The extension was accepted warmly and instantaneously I got several \n> requests to submit it to /contrib/ so people using Amazon RDS and \n> similar services could enjoy it too. Back then I was not sure if the \n> extension is mature enough and if it lacks any additional features \n> required to solve the real-world problems of the users. Time showed, \n> however, that people are happy with the extension as it is. There were \n> several minor issues discovered, but they were fixed back in 2017. The \n> extension never experienced any compatibility problems with the next \n> major release of PostgreSQL.\n>\n> So my question is if the community may consider adding ZSON to \n> /contrib/. If this is the case I will add this thread to the nearest \n> CF and submit a corresponding patch.\n>\n> [1]: https://github.com/postgrespro/zson\n>\n> -- \n> Best regards,\n> Aleksander Alekseev\n> Open-Source PostgreSQL Contributor at Timescale\n\n\nYet another approach to the same problem:\n\nhttps://github.com/postgrespro/jsonb_schema\n\nInstead of compressing JSONs we can try to automatically detect the JSON \nschema (names and types of JSON fields) and store it separately from values.\nThis approach is more similar to the one used in schema-less databases. 
It \nis most efficient when there are many JSON records with the same schema \nand the sizes of keys are comparable to the sizes of values. On the IMDB data set \nit reduced the database size by about 1.7 times.\n\n\n\n\n",
"msg_date": "Wed, 26 May 2021 18:11:52 +0300",
"msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "On Wed, 26 May 2021 at 12:49, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> Many thanks for your feedback, I very much appreciate it!\n>\n> > If the extension is mature enough, why make it an extension in\n> > contrib, and not instead either enhance the existing jsonb type with\n> > it or make it a built-in type?\n>\n> > IMO we have too d*mn many JSON types already. If we can find a way\n> > to shoehorn this optimization into JSONB, that'd be great. Otherwise\n> > I do not think it's worth the added user confusion.\n>\n> Magnus, Tom,\n>\n> My reasoning is that if the problem can be solved with an extension\n> there is little reason to modify the core. This seems to be in the\n> spirit of PostgreSQL. If the community reaches the consensus to modify\n> the core to introduce a similar feature, we could discuss this as\n> well. It sounds like a lot of unnecessary work to me though (see\n> below).\n>\n> > * doesn't cover all cases, notably indexes.\n>\n> Tom,\n>\n> Not sure if I follow. What cases do you have in mind?\n>\n> > Do note that e.g. postgis is not in contrib, but is available in e.g. RDS.\n>\n> Matthias,\n>\n> Good point. I suspect that PostGIS is an exception though...\n\nQuite a few other non-/common/ extensions are available in RDS[0],\nsome of which are HLL (from citusdata), pglogical (from 2ndQuadrant)\nand orafce (from Pavel Stehule, orafce et al.).\n\n> > I like the idea of the ZSON type, but I'm somewhat disappointed by its\n> > current limitations\n>\n> Several people suggested various enhancements right after learning\n> about ZSON. Time showed, however, that none of the real-world users\n> really need e.g. more than one common dictionary per database. I\n> suspect this is because no one has more than 2**16 repeatable unique\n> strings (one dictionary limitation) in their documents. 
Thus there is\n> no benefit in having separate dictionaries and corresponding extra\n> complexity.\n\nIMO the main benefit of having different dictionaries is that you\ncould have a small dictionary for small and very structured JSONB\nfields (e.g. some time-series data), and a large one for large /\nunstructured JSONB fields, without having the significant performance\nimpact of having that large and varied dictionary on the\nsmall&structured field. Although a binary search is log(n) and thus\nstill quite cheap even for large dictionaries, the extra size is\ncertainly not free, and you'll be touching more memory in the process.\n\n> > - Each dictionary uses a lot of memory, regardless of the number of\n> > actual stored keys. For 32-bit systems the base usage of a dictionary\n> > without entries ((sizeof(Word) + sizeof(uint16)) * 2**16) would be\n> > almost 1MB, and for 64-bit it would be 1.7MB. That is significantly\n> > more than I'd want to install.\n>\n> You are probably right on this one, this part could be optimized. I\n> will address this if we agree on submitting the patch.\n>\n> > - You call gettimeofday() in both dict_get and in get_current_dict_id.\n> > These functions can be called in short and tight loops (for small GSON\n> > fields), in which case it would add significant overhead through the\n> > implied syscalls.\n>\n> I must admit, I'm not an expert in this area. My understanding is that\n> gettimeofday() is implemented as single virtual memory access on\n> modern operating systems, e.g. VDSO on Linux, thus it's very cheap.\n> I'm not that sure about other supported platforms though. Probably\n> worth investigating.\n\nYes, but vDSO does not necessarily work on all systems: e.g. in 2017,\na lot on EC2 [1] was run using Xen with vDSO not working for\ngettimeofday. 
I'm uncertain if this issue persists for their new\nKVM/Nitro hypervisor.\n\n> > It does mean that you're deTOASTing\n> > the full GSON field, and that the stored bytestring will not be\n> > structured / doesn't work well with current debuggers.\n>\n> Unfortunately, I'm not very well aware of debugging tools in this\n> context. Could you please name the debuggers I should take into\n> account?\n\nHmm, I was mistaken in that regard. I was under the impression that at\nleast one of pageinspect, pg_filedump and pg_hexedit did support\ncolumn value introspection, which they apparently do not. pg_filedump\n(and thus pg_hexedit) have some introspection, but none specialized\nfor jsonb (yet).\n\nThe point I tried to make was that introspection of GSON would be even\nmore difficult due to it adding a non-standard compression method\nwhich makes introspection effectively impossible (the algorithm can\nreplace things other than the strings it should replace, so it will be\ndifficult to retrieve structure from the encoded string).\n\nWith regards,\n\nMatthias van de Meent\n\n[0] https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html#PostgreSQL.Concepts.General.FeatureSupport.Extensions.13x\n[1] https://blog.packagecloud.io/eng/2017/03/08/system-calls-are-much-slower-on-ec2/\n\n\n",
"msg_date": "Wed, 26 May 2021 18:43:37 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "On Tue, May 25, 2021 at 01:55:13PM +0300, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The\n> extension introduces the new ZSON type, which is 100% compatible with JSONB but\n> uses a shared dictionary of strings most frequently used in given JSONB\n> documents for compression. These strings are replaced with integer IDs.\n> Afterward, PGLZ (and now LZ4) applies if the document is large enough by common\n> PostgreSQL logic. Under certain conditions (many large documents), this saves\n> disk space, memory and increases the overall performance. More details can be\n> found in README on GitHub.\n\nI think this is interesting because it is one of the few cases that\nallow compression outside of a single column. Here is a list of\ncompression options:\n\n\thttps://momjian.us/main/blogs/pgblog/2020.html#April_27_2020\n\t\n\t1. single field\n\t2. across rows in a single page\n\t3. across rows in a single column\n\t4. across all columns and rows in a table\n\t5. across tables in a database\n\t6. across databases\n\nWhile standard Postgres does #1, ZSON allows 2-5, assuming the data is\nin the ZSON data type. I think this cross-field compression has great\npotential for cases where the data is not relational, or hasn't had time\nto be structured relationally. It also opens questions of how to do\nthis cleanly in a relational system.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 26 May 2021 17:29:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\nOn 5/26/21 5:29 PM, Bruce Momjian wrote:\n> On Tue, May 25, 2021 at 01:55:13PM +0300, Aleksander Alekseev wrote:\n>> Hi hackers,\n>>\n>> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The\n>> extension introduces the new ZSON type, which is 100% compatible with JSONB but\n>> uses a shared dictionary of strings most frequently used in given JSONB\n>> documents for compression. These strings are replaced with integer IDs.\n>> Afterward, PGLZ (and now LZ4) applies if the document is large enough by common\n>> PostgreSQL logic. Under certain conditions (many large documents), this saves\n>> disk space, memory and increases the overall performance. More details can be\n>> found in README on GitHub.\n> I think this is interesting because it is one of the few cases that\n> allow compression outside of a single column. Here is a list of\n> compression options:\n>\n> \thttps://momjian.us/main/blogs/pgblog/2020.html#April_27_2020\n> \t\n> \t1. single field\n> \t2. across rows in a single page\n> \t3. across rows in a single column\n> \t4. across all columns and rows in a table\n> \t5. across tables in a database\n> \t6. across databases\n>\n> While standard Postgres does #1, ZSON allows 2-5, assuming the data is\n> in the ZSON data type. I think this cross-field compression has great\n> potential for cases where the data is not relational, or hasn't had time\n> to be structured relationally. It also opens questions of how to do\n> this cleanly in a relational system.\n>\n\nI think we're going to get the best bang for the buck on doing 2, 3, and\n4. If it's confined to a single table then we can put a dictionary in\nsomething like a fork. Maybe given partitioning we want to be able to do\nmulti-table dictionaries, but that's less certain.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 26 May 2021 22:15:09 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\n\nOn 5/26/21 6:43 PM, Matthias van de Meent wrote:\n> On Wed, 26 May 2021 at 12:49, Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>>\n>> Hi hackers,\n>>\n>> Many thanks for your feedback, I very much appreciate it!\n>>\n>>> If the extension is mature enough, why make it an extension in\n>>> contrib, and not instead either enhance the existing jsonb type with\n>>> it or make it a built-in type?\n>>\n>>> IMO we have too d*mn many JSON types already. If we can find a way\n>>> to shoehorn this optimization into JSONB, that'd be great. Otherwise\n>>> I do not think it's worth the added user confusion.\n>>\n>> Magnus, Tom,\n>>\n>> My reasoning is that if the problem can be solved with an extension\n>> there is little reason to modify the core. This seems to be in the\n>> spirit of PostgreSQL. If the community reaches the consensus to modify\n>> the core to introduce a similar feature, we could discuss this as\n>> well. It sounds like a lot of unnecessary work to me though (see\n>> below).\n>>\n>>> * doesn't cover all cases, notably indexes.\n>>\n>> Tom,\n>>\n>> Not sure if I follow. What cases do you have in mind?\n>>\n>>> Do note that e.g. postgis is not in contrib, but is available in e.g. RDS.\n>>\n>> Matthias,\n>>\n>> Good point. I suspect that PostGIS is an exception though...\n> \n> Quite a few other non-/common/ extensions are available in RDS[0],\n> some of which are HLL (from citusdata), pglogical (from 2ndQuadrant)\n> and orafce (from Pavel Stehule, orafce et al.).\n> \n>>> I like the idea of the ZSON type, but I'm somewhat disappointed by its\n>>> current limitations\n>>\n>> Several people suggested various enhancements right after learning\n>> about ZSON. Time showed, however, that none of the real-world users\n>> really need e.g. more than one common dictionary per database. I\n>> suspect this is because no one has more than 2**16 repeatable unique\n>> strings (one dictionary limitation) in their documents. 
Thus there is\n>> no benefit in having separate dictionaries and corresponding extra\n>> complexity.\n> \n> IMO the main benefit of having different dictionaries is that you\n> could have a small dictionary for small and very structured JSONB\n> fields (e.g. some time-series data), and a large one for large /\n> unstructured JSONB fields, without having the significant performance\n> impact of having that large and varied dictionary on the\n> small&structured field. Although a binary search is log(n) and thus\n> still quite cheap even for large dictionaries, the extra size is\n> certainly not free, and you'll be touching more memory in the process.\n> \n\nI'm sure we can think of various other arguments for allowing separate\ndictionaries. For example, what if you drop a column? With one huge\ndictionary you're bound to keep the data forever. With per-column dicts\nyou can just drop the dict and free disk space / memory.\n\nI also find it hard to believe that no one needs 2**16 strings. I mean,\n65k is not that much, really. To give an example, I've been toying with\nstoring bitcoin blockchain in a database - one way to do that is storing\neach block as a single JSONB document. But each \"item\" (eg. transaction)\nis identified by a unique hash, so that means (tens of) thousands of\nunique strings *per document*.\n\nYes, it's a bit silly and extreme, and maybe the compression would not\nhelp much in this case. But it shows that 2**16 is damn easy to hit.\n\nIn other words, this seems like a nice example of survivor bias, where\nwe only look at cases for which the existing limitations are acceptable,\nignoring the (many) remaining cases eliminated by those limitations.\n\n>>> - Each dictionary uses a lot of memory, regardless of the number of\n>>> actual stored keys. For 32-bit systems the base usage of a dictionary\n>>> without entries ((sizeof(Word) + sizeof(uint16)) * 2**16) would be\n>>> almost 1MB, and for 64-bit it would be 1.7MB. 
That is significantly\n>>> more than I'd want to install.\n>>\n>> You are probably right on this one, this part could be optimized. I\n>> will address this if we agree on submitting the patch.\n>>\n\nI'm sure it can be optimized, but I also think it's focusing on the base\nmemory usage too much.\n\nWhat I care about is the end result, i.e. how much disk space / memory I\nsave at the end. I don't care if it's 1MB or 1.7MB if using the\ncompression saves me e.g. 50% of disk space. And it's completely\nirrelevant if I can't use the feature because of limitations stemming\nfrom the \"single dictionary\" design (in which case I'll save the\n0.7MB, but I also lose the 50% disk space savings - not a great trade\noff, if you ask me).\n\n>>> - You call gettimeofday() in both dict_get and in get_current_dict_id.\n>>> These functions can be called in short and tight loops (for small GSON\n>>> fields), in which case it would add significant overhead through the\n>>> implied syscalls.\n>>\n>> I must admit, I'm not an expert in this area. My understanding is that\n>> gettimeofday() is implemented as single virtual memory access on\n>> modern operating systems, e.g. VDSO on Linux, thus it's very cheap.\n>> I'm not that sure about other supported platforms though. Probably\n>> worth investigating.\n> \n> Yes, but vDSO does not necessarily work on all systems: e.g. in 2017,\n> a lot on EC2 [1] was run using Xen with vDSO not working for\n> gettimeofday. I'm uncertain if this issue persists for their new\n> KVM/Nitro hypervisor.\n> \n\nYeah. Better not to call gettimeofday very often. I have no idea why\nthe code even does that, though - it seems to be deciding whether it's\nOK to use the cached dictionary based on the timestamp, but that seems\nrather dubious. But it's hard to say, because there are almost no useful\ncomments *anywhere* in the code.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 May 2021 12:35:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\n\nOn 5/26/21 2:49 AM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n>>> I like the idea of the ZSON type, but I'm somewhat disappointed by its\n>>> current limitations:\n>>\n>> I've not read the code, so maybe this thought is completely off-point,\n>> but I wonder if anything could be learned from PostGIS. AIUI they\n>> have developed the infrastructure needed to have auxiliary info\n>> (particularly, spatial reference data) attached to a geometry column,\n>> without duplicating it in every value of the column. Seems like that\n>> is a close analog of what's needed here.\n> \n> Err, not exactly the same- there aren't *that* many SRIDs and therefore\n> they can be stuffed into the typemod (my, probably wrong, recollection\n> was that I actually pushed Paul in that direction due to being\n> frustrated with CHECK constraints they had been using previously..).\n> \n> Not something you could do with a dictionary as what's contempalted\n> here. I do agree that each jsonb/zson/whatever column should really be\n> able to have its own dictionary though and maybe you could shove *which*\n> of those dictionaries a given column uses into the typemod for that\n> column... In an ideal world, however, we wouldn't make a user have to\n> actually do that though and instead we'd just build our own magically\n> for them when they use jsonb.\n> \n\nI think doing this properly will require inventing new infrastructure to\nassociate some custom parameters with a column (and/or data type). 
In\nprinciple it seems quite similar to 911e702077, which introduced opclass\nparameters, which allowed implementing the new BRIN opclasses in PG14.\n\nEven if we eventually decide to not add zson into contrib (or core), it\nseems like this infrastructure would make zson more usable in practice\nwith this capability.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 May 2021 12:43:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "\n\nOn 5/27/21 4:15 AM, Andrew Dunstan wrote:\n> \n> On 5/26/21 5:29 PM, Bruce Momjian wrote:\n>> On Tue, May 25, 2021 at 01:55:13PM +0300, Aleksander Alekseev wrote:\n>>> Hi hackers,\n>>>\n>>> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The\n>>> extension introduces the new ZSON type, which is 100% compatible with JSONB but\n>>> uses a shared dictionary of strings most frequently used in given JSONB\n>>> documents for compression. These strings are replaced with integer IDs.\n>>> Afterward, PGLZ (and now LZ4) applies if the document is large enough by common\n>>> PostgreSQL logic. Under certain conditions (many large documents), this saves\n>>> disk space, memory and increases the overall performance. More details can be\n>>> found in README on GitHub.\n>> I think this is interesting because it is one of the few cases that\n>> allow compression outside of a single column. Here is a list of\n>> compression options:\n>>\n>> \thttps://momjian.us/main/blogs/pgblog/2020.html#April_27_2020\n>> \t\n>> \t1. single field\n>> \t2. across rows in a single page\n>> \t3. across rows in a single column\n>> \t4. across all columns and rows in a table\n>> \t5. across tables in a database\n>> \t6. across databases\n>>\n>> While standard Postgres does #1, ZSON allows 2-5, assuming the data is\n>> in the ZSON data type. I think this cross-field compression has great\n>> potential for cases where the data is not relational, or hasn't had time\n>> to be structured relationally. It also opens questions of how to do\n>> this cleanly in a relational system.\n>>\n> \n> I think we're going to get the best bang for the buck on doing 2, 3, and\n> 4. If it's confined to a single table then we can put a dictionary in\n> something like a fork.\n\nAgreed.\n\n> Maybe given partitioning we want to be able to do multi-table\n> dictionaries, but that's less certain.\n> \n\nYeah. 
I think it'll have many of the same issues/complexity as global\nindexes, and the gains are likely limited. At least assuming the\npartitions are sufficiently large, but tiny partitions are inefficient\nin general, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 May 2021 12:57:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
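The shared-dictionary idea discussed in this thread (frequently occurring strings replaced by small integer IDs before general-purpose compression runs) can be sketched as follows. This is an illustrative Python model only, not ZSON's actual C implementation; the function names and the in-memory dictionary format are assumptions for illustration:

```python
from collections import Counter

def build_dictionary(docs, max_entries=65536):
    """Collect the most frequent strings across sample documents.

    Most common strings get the smallest IDs, mirroring the idea that
    a learned dictionary is built from a sample of existing documents.
    """
    freq = Counter()
    for doc in docs:
        for key, value in doc.items():
            freq[key] += 1
            if isinstance(value, str):
                freq[value] += 1
    return {s: i for i, (s, _) in enumerate(freq.most_common(max_entries))}

def encode(doc, dictionary):
    """Replace known strings with integer IDs; unknown strings pass through."""
    out = {}
    for key, value in doc.items():
        k = dictionary.get(key, key)
        v = dictionary.get(value, value) if isinstance(value, str) else value
        out[k] = v
    return out

def decode(doc, dictionary):
    # NOTE: a real on-disk format must tag substituted IDs so they cannot
    # collide with genuine integer values; this sketch ignores that.
    reverse = {i: s for s, i in dictionary.items()}
    return {reverse.get(k, k): (reverse.get(v, v) if isinstance(v, int) else v)
            for k, v in doc.items()}
```

After this substitution the document is smaller and more repetitive, which is what lets PGLZ/LZ4 compress it better in the cases the thread describes.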
{
"msg_contents": "\nOn 5/28/21 6:35 AM, Tomas Vondra wrote:\n>\n>>\n>> IMO the main benefit of having different dictionaries is that you\n>> could have a small dictionary for small and very structured JSONB\n>> fields (e.g. some time-series data), and a large one for large /\n>> unstructured JSONB fields, without having the significant performance\n>> impact of having that large and varied dictionary on the\n>> small&structured field. Although a binary search is log(n) and thus\n>> still quite cheap even for large dictionaries, the extra size is\n>> certainly not free, and you'll be touching more memory in the process.\n>>\n> I'm sure we can think of various other arguments for allowing separate\n> dictionaries. For example, what if you drop a column? With one huge\n> dictionary you're bound to keep the data forever. With per-column dicts\n> you can just drop the dict and free disk space / memory.\n>\n> I also find it hard to believe that no one needs 2**16 strings. I mean,\n> 65k is not that much, really. To give an example, I've been toying with\n> storing bitcoin blockchain in a database - one way to do that is storing\n> each block as a single JSONB document. But each \"item\" (eg. transaction)\n> is identified by a unique hash, so that means (tens of) thousands of\n> unique strings *per document*.\n>\n> Yes, it's a bit silly and extreme, and maybe the compression would not\n> help much in this case. But it shows that 2**16 is damn easy to hit.\n>\n> In other words, this seems like a nice example of survivor bias, where\n> we only look at cases for which the existing limitations are acceptable,\n> ignoring the (many) remaining cases eliminated by those limitations.\n>\n>\n\nI don't think we should lightly discard the use of 2 byte keys though.\nMaybe we could use a scheme similar to what we use for text lengths,\nwhere the first bit indicates whether we have a 1 byte or 4 byte length\nindicator. 
Many dictionaries will have less than 2^15-1 entries, so they\nwould use exclusively the smaller keys.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 10:22:26 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
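The varlena-style scheme Andrew describes, where a flag bit in the first byte selects a short or long key encoding, might look like the sketch below. The exact widths (a 2-byte short form with a 15-bit payload, a 4-byte long form with a 31-bit payload) are an assumption chosen to match the 2^15-1 figure in the message, not a committed format:

```python
def encode_key(key_id: int) -> bytes:
    """High bit of the first byte: 0 = 2-byte short form, 1 = 4-byte long form."""
    if key_id < 0x8000:                      # fits in 15 bits
        return key_id.to_bytes(2, "big")
    if key_id < 0x80000000:                  # fits in 31 bits
        return (key_id | 0x80000000).to_bytes(4, "big")
    raise ValueError("key id out of range")

def decode_key(buf: bytes, pos: int = 0):
    """Return (key_id, bytes_consumed)."""
    if buf[pos] & 0x80:
        return int.from_bytes(buf[pos:pos + 4], "big") & 0x7FFFFFFF, 4
    return int.from_bytes(buf[pos:pos + 2], "big"), 2
```

Small dictionaries pay only 2 bytes per key, while large ones remain possible, which is the point being argued for.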
{
"msg_contents": "\n\nOn 5/28/21 4:22 PM, Andrew Dunstan wrote:\n> \n> On 5/28/21 6:35 AM, Tomas Vondra wrote:\n>>\n>>>\n>>> IMO the main benefit of having different dictionaries is that you\n>>> could have a small dictionary for small and very structured JSONB\n>>> fields (e.g. some time-series data), and a large one for large /\n>>> unstructured JSONB fields, without having the significant performance\n>>> impact of having that large and varied dictionary on the\n>>> small&structured field. Although a binary search is log(n) and thus\n>>> still quite cheap even for large dictionaries, the extra size is\n>>> certainly not free, and you'll be touching more memory in the process.\n>>>\n>> I'm sure we can think of various other arguments for allowing separate\n>> dictionaries. For example, what if you drop a column? With one huge\n>> dictionary you're bound to keep the data forever. With per-column dicts\n>> you can just drop the dict and free disk space / memory.\n>>\n>> I also find it hard to believe that no one needs 2**16 strings. I mean,\n>> 65k is not that much, really. To give an example, I've been toying with\n>> storing bitcoin blockchain in a database - one way to do that is storing\n>> each block as a single JSONB document. But each \"item\" (eg. transaction)\n>> is identified by a unique hash, so that means (tens of) thousands of\n>> unique strings *per document*.\n>>\n>> Yes, it's a bit silly and extreme, and maybe the compression would not\n>> help much in this case. 
But it shows that 2**16 is damn easy to hit.\n>>\n>> In other words, this seems like a nice example of survivor bias, where\n>> we only look at cases for which the existing limitations are acceptable,\n>> ignoring the (many) remaining cases eliminated by those limitations.\n>>\n>>\n> \n> I don't think we should lightly discard the use of 2 byte keys though.\n> Maybe we could use a scheme similar to what we use for text lengths,\n> where the first bit indicates whether we have a 1 byte or 4 byte length\n> indicator. Many dictionaries will have less that 2^15-1 entries, so they\n> would use exclusively the smaller keys.\n> \n\nI didn't mean to discard that, of course. I'm sure a lot of data sets\nmay be perfectly fine with 64k keys, of course, and it may be worth\noptimizing that as a special case. All I'm saying is that if we start\nfrom the position that this limit is perfectly fine and no one is going\nto hit it in practice, it may be due to people not even trying it on\ndocuments with more keys.\n\nThat being said, I still don't think the 1MB vs. 1.7MB figure is\nparticularly meaningful, because it's for \"empty\" dictionary, which is\nsomething you'll not have in practice. And once you start adding keys,\nthe difference will get less and less significant.\n\nHowever, if we care about efficiency for \"small\" JSON documents, it's\nprobably worth using something like varint [1], which is 1-4B depending\non the value.\n\n[1] https://learnmeabitcoin.com/technical/varint\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 May 2021 18:06:19 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
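The varint encoding Tomas mentions stores small values in fewer bytes. One common variant (LEB128, used for example by Protocol Buffers; the page he links describes Bitcoin's slightly different compact-size format) packs 7 payload bits per byte with a continuation bit, so frequent low-numbered dictionary IDs cost a single byte. A sketch, purely for illustration:

```python
def varint_encode(n: int) -> bytes:
    """LEB128: 7 payload bits per byte; high bit set on all but the last byte."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(buf: bytes, pos: int = 0):
    """Return (value, new_position)."""
    result = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return result, pos
```

Values below 128 take 1 byte, below 16384 take 2, and so on up to 4-5 bytes for full 32-bit IDs, matching the "1-4B depending on the value" behavior described above.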
{
"msg_contents": "On Tue, May 25, 2021, at 22:10, Tom Lane wrote:\n> Magnus Hagander <magnus@hagander.net <mailto:magnus%40hagander.net>> writes:\n> > On Tue, May 25, 2021 at 12:55 PM Aleksander Alekseev\n> > <aleksander@timescale.com <mailto:aleksander%40timescale.com>> wrote:\n> >> Back in 2016 while being at PostgresPro I developed the ZSON extension [1]. The extension introduces the new ZSON type, which is 100% compatible with JSONB but uses a shared dictionary of strings most frequently used in given JSONB documents for compression.\n> \n> > If the extension is mature enough, why make it an extension in\n> > contrib, and not instead either enhance the existing jsonb type with\n> > it or make it a built-in type?\n> \n> IMO we have too d*mn many JSON types already. If we can find a way\n> to shoehorn this optimization into JSONB, that'd be great. Otherwise\n> I do not think it's worth the added user confusion.\n\nI think the json situation is unfortunate.\n\nIf carefully designing the json type from scratch,\nwith all the accumulated experiences over the years from working with json and jsonb,\nI think the end result would probably be quite different.\n\nFor instance, I remember Marko Tiikkaja implemented his own json type many years ago when we worked together at the same company, needing json before PostgreSQL had support for it, I remember I thought some ideas in his interface felt more natural than the built-in json type we later got.\n\nWhile zson improves on efficiency, there are probably lots of other improvements in the interface that could be made as well.\n\nInstead of trying to fix the existing built-in json type, I think it would be better to package the built-in functionality as a \"json\" extension, that would come pre-installed, similar to how \"plpgsql\" comes pre-installed.\n\nUsers who feel they are unhappy with the entire json/jsonb types could then install \"zson\" or some other competing json type instead. 
This would allow the life-cycles of legacy/deprecated versions to overlap with future versions.\n\nUninstallable Pre-Installed Extensions as a concept in general could perhaps be a feasible alternative to shoehorning in functionality/optimizations in general, and also a way to avoid GUCs.\n\nThe biggest downside I see is the risk for confusion among users, since there can then be multiple competing implementations providing the same functionality. It's nice to have built-ins when all users love the built-ins.\n\n/Joel",
"msg_date": "Sun, 30 May 2021 10:17:27 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Hi hackers,\n\nMany thanks for the feedback and all the great suggestions!\n\nI decided to add the patch to the nearest commitfest. You will find it in\nthe attachment.\n\nDifferences from the GitHub version:\n\n- Code formatting changed;\n- More comments added to the code;\n- SGML documentation added;\n- Plus several minor changes;\n\nI very much like the ideas:\n\n- To use varint, as Tomas suggested\n- Make dictionaries variable in size\n- Somehow avoid calling gettimeofday()\n- Improvements by 2ndQuadrant that Andrew named\n\nHowever, I would like to decompose the task into 1) deciding if the\nextension is worth adding to /contrib/ and 2) improving it. Since there are\npeople who already use ZSON, the extension should be backward-compatible\nwith the current ZSON format anyway. Also, every improvement deserves its\nown discussion, testing, and benchmarking. Thus I believe the suggested\napproach will simplify the job for reviewers, and also save us time if the\npatch will be declined. If the patch will be accepted, I will be delighted\nto submit follow-up patches!\n\nIf you have any other ideas on how the extension can be improved in the\nfuture, please don't hesitate to name them in this thread. Also, I would\nappreciate some code review.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 4 Jun 2021 18:09:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "On 04.06.21 17:09, Aleksander Alekseev wrote:\n> I decided to add the patch to the nearest commitfest.\n\nWith respect to the commit fest submission, I don't think there is \nconsensus right now to add this. I think people would prefer that this \ndictionary facility be somehow made available in the existing JSON \ntypes. Also, I sense that there is still some volatility about some of \nthe details of how this extension should work and its scope. I think \nthis is served best as an external extension for now.\n\n\n",
"msg_date": "Sat, 3 Jul 2021 12:34:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "On 7/3/21 12:34 PM, Peter Eisentraut wrote:\n> On 04.06.21 17:09, Aleksander Alekseev wrote:\n>> I decided to add the patch to the nearest commitfest.\n> \n> With respect to the commit fest submission, I don't think there is \n> consensus right now to add this. I think people would prefer that this \n> dictionary facility be somehow made available in the existing JSON \n> types. Also, I sense that there is still some volatility about some of \n> the details of how this extension should work and its scope. I think \n> this is served best as an external extension for now.\n\nI agree there's a lot of open questions to figure out, but I think this \n\"column-level compression\" capability has a lot of potential. Not just \nfor structured documents like JSON, but maybe even for scalar types.\n\nI don't think the question whether this should be built into jsonb, a \nseparate built-in type, contrib type or something external is the one we \nneed to answer first.\n\nThe first thing I'd like to see is some \"proof\" that it's actually \nuseful in practice - there were some claims about people/customers using \nit and being happy with the benefits, but there were no actual examples \nof data sets that are expected to benefit, compression ratios etc. And \nconsidering that [1] went unnoticed for 5 years, I have my doubts about \nit being used very widely. (I may be wrong and maybe people are just not \ncasting jsonb to zson.)\n\nI've tried to use this on the one large non-synthetic JSONB dataset I \nhad at hand at the moment, which is the bitcoin blockchain. That's ~1TB \nwith JSONB, and when I tried using ZSON instead there was no measurable \nbenefit, in fact the database was a bit larger. But I admit btc data is \nrather strange, because it contains a lot of randomness (all the tx and \nblock IDs are random-looking hashes, etc.), and there's a lot of them in \neach document. 
So maybe that's simply a data set that can't benefit from \nzson on principle.\n\nI also suspect the zson_extract_strings() is pretty inefficient and I \nran into various issues with the btc blocks which have very many keys, \noften far more than the 10k limit.\n\nIn any case, I think having a clear example(s) of practical data sets \nthat benefit from using zson would be very useful, both to guide the \ndevelopment and to show what the potential gains are.\n\nThe other thing is that a lot of the stuff seems to be manual (e.g. the \nlearning), and not really well integrated with the core. IMO improving \nthis by implementing the necessary infrastructure would help all the \npossible cases (built-in type, contrib, external extension).\n\n\nregards\n\n[1] \nhttps://github.com/postgrespro/zson/commit/02db084ea3b94d9e68fd912dea97094634fcdea5\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 10 Jul 2021 20:47:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add ZSON extension to /contrib/"
},
{
"msg_contents": "Hi hackers,\n\nMany thanks for all the great feedback!\n\nPlease see the follow-up thread `RFC: compression dictionaries for JSONB`:\n\nhttps://www.postgresql.org/message-id/CAJ7c6TPx7N-bVw0dZ1ASCDQKZJHhBYkT6w4HV1LzfS%2BUUTUfmA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev\nOpen-Source PostgreSQL Contributor at Timescale",
"msg_date": "Fri, 8 Oct 2021 12:51:24 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: Add ZSON extension to /contrib/"
}
] |
[
{
"msg_contents": "When I am understanding the relationship between Query->rtable and\nroot->simple_rte_array, I'd like to assume that Query->rtable should be\nnever used\nwhen root->simple_rte_array is ready. I mainly checked two places,\nmake_one_rel and\ncreate_plan with the below hacks.\n\n{\n List *l = root->parse->rtable;\n root->parse->rtable = NIL;\n make_one_rel.. or create_plan_recurse..\n root->parse->rtable = l;\n}\n\n\nThen I found adjust_appendrel_attrs_mutator and infer_arbiter_indexes still\nuse it. The attached patch fixed it by replacing the rt_fetch with\nplanner_rt_fetch,\nall the tests passed.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Wed, 26 May 2021 02:00:49 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Access root->simple_rte_array instead of Query->rtable for 2 more\n cases."
},
{
"msg_contents": "Andy Fan <zhihui.fan1213@gmail.com> writes:\n> When I am understanding the relationship between Query->rtable and\n> root->simple_rte_array, I'd like to assume that Query->rtable should be\n> never used\n> when root->simple_rte_array is ready.\n\nTBH, now that Lists are really arrays, there's basically no performance\nadvantage to be gained by fooling with this. I've considered ripping\nout simple_rte_array, but haven't felt that the code churn would be\nworth it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 May 2021 14:31:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Access root->simple_rte_array instead of Query->rtable for 2 more\n cases."
}
] |
[
{
"msg_contents": "Hi all,\n\nI got curious with what Justin just told here with\nmax_logical_replication_workers:\nhttps://www.postgresql.org/message-id/20210526001359.GE3676@telsasoft.com\n\nAnd while looking at the full set of GUCs, I noticed much more than\none parameter that needed adjustments in the documentation when these\nare PGC_SIGHUP or PGC_POSTMASTER, leading me to the attached patch.\n\nAny comments or objections?\n\nThanks,\n--\nMichael",
"msg_date": "Wed, 26 May 2021 10:34:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Incorrect GUC descriptions in docs and postgresql.conf.sample"
},
{
"msg_contents": "Your patch adds documentation about GUCs that can only be set at server\nstart/config/commandline.\n\nBut it's not true for any of these, which are all HUP/SUSET.\nPlease double check your logic :)\n\nsrc/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\nsrc/backend/utils/misc/guc.c: {\"remove_temp_files_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\nsrc/backend/utils/misc/guc.c: {\"restart_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\nsrc/backend/utils/misc/guc.c: {\"log_lock_waits\", PGC_SUSET, LOGGING_WHAT,\nsrc/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\nsrc/backend/utils/misc/guc.c: {\"ssl_max_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\nsrc/backend/utils/misc/guc.c: {\"ssl_min_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 25 May 2021 20:43:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect GUC descriptions in docs and postgresql.conf.sample"
},
{
"msg_contents": "On Tue, May 25, 2021 at 08:43:14PM -0500, Justin Pryzby wrote:\n> Your patch adds documentation about GUCs that can only be set at server\n> start/config/commandline.\n\nOh: I realized that I read too quickly and misinterpreted what \"only be set in\nthe config\" means (I know I'm not the only one). Oops.\n\nIn some cases it sounds strange to say that a parameter can \"only\" be set in\nthe config file, since it's dynamically changed at runtime. Which is more\nof a flexibility than a restriction.\n\n> But it's not true for any of these, which are all HUP/SUSET.\n> Please double check your logic :)\n> \n> src/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\n> src/backend/utils/misc/guc.c: {\"remove_temp_files_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n> src/backend/utils/misc/guc.c: {\"restart_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n> src/backend/utils/misc/guc.c: {\"log_lock_waits\", PGC_SUSET, LOGGING_WHAT,\n> src/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\n> src/backend/utils/misc/guc.c: {\"ssl_max_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\n> src/backend/utils/misc/guc.c: {\"ssl_min_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 25 May 2021 21:01:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Incorrect GUC descriptions in docs and postgresql.conf.sample"
},
{
"msg_contents": "On Tue, May 25, 2021 at 09:01:30PM -0500, Justin Pryzby wrote:\n> On Tue, May 25, 2021 at 08:43:14PM -0500, Justin Pryzby wrote:\n>> Your patch adds documentation about GUCs that can only be set at server\n>> start/config/commandline.\n> \n> Oh: I realized that I read too quickly and misinterpretted what \"only be set in\n> the config\" means (I know I'm not the only one). Oops.\n>\n> In some cases it sounds strange to say that a parameter can \"only\" be set in\n> the config file, since it's dynamically changed at runtime. Which is more\n> flexible than restrictive.\n\nThat's the wording used for ages in the documentation, so I would\nstick with that.\n\n>> But it's not true for any of these, which are all HUP/SUSET.\n>> Please double check your logic :)\n>> \n>> src/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\n>> src/backend/utils/misc/guc.c: {\"remove_temp_files_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n>> src/backend/utils/misc/guc.c: {\"restart_after_crash\", PGC_SIGHUP, ERROR_HANDLING_OPTIONS,\n>> src/backend/utils/misc/guc.c: {\"log_lock_waits\", PGC_SUSET, LOGGING_WHAT,\n>> src/backend/utils/misc/guc.c: {\"autovacuum_work_mem\", PGC_SIGHUP, RESOURCES_MEM,\n>> src/backend/utils/misc/guc.c: {\"ssl_max_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\n>> src/backend/utils/misc/guc.c: {\"ssl_min_protocol_version\", PGC_SIGHUP, CONN_AUTH_SSL,\n\nThere is one point where you are right here: log_lock_waits has no\nneed to be changed. Looks like I checked too many things at once.\n--\nMichael",
"msg_date": "Wed, 26 May 2021 11:10:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect GUC descriptions in docs and postgresql.conf.sample"
},
{
"msg_contents": "On Wed, May 26, 2021 at 11:10:22AM +0900, Michael Paquier wrote:\n> There is one point where you are right here: log_lock_waits has no\n> need to be changed. Looks like I checked too many things at once.\n\nFixed that, did one extra round of review, and applied.\n--\nMichael",
"msg_date": "Thu, 27 May 2021 14:59:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Incorrect GUC descriptions in docs and postgresql.conf.sample"
}
] |
[
{
"msg_contents": "It seems that a concurrent UPDATE can restart heap_lock_tuple() even if it's\nnot necessary. Is the attached proposal correct and worth applying?\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com",
"msg_date": "Wed, 26 May 2021 09:03:34 +0200",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Possible optimization of heap_lock_tuple()"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a possible typo in the code comments of heap_multi_insert.\n\n- *\theap_multi_insert\t- insert multiple tuple into a heap\n+ *\theap_multi_insert\t- insert multiple tuples into a heap\n\nAttaching a patch to fix it.\n\nBest regards,\nhouzj",
"msg_date": "Wed, 26 May 2021 07:37:15 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fix typo: multiple tuple => tuples"
},
{
"msg_contents": "On Wed, May 26, 2021 at 07:37:15AM +0000, houzj.fnst@fujitsu.com wrote:\n> I found a possible typo in the code comments of heap_multi_insert.\n> \n> - *\theap_multi_insert\t- insert multiple tuple into a heap\n> + *\theap_multi_insert\t- insert multiple tuples into a heap\n> \n> Attaching a patch to fix it.\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Wed, 26 May 2021 19:54:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo: multiple tuple => tuples"
}
] |
[
{
"msg_contents": "The attached patch makes an optimization to pg_checksums which prevents\nrewriting the block if the checksum is already what we expect. This can\nlead to much faster runs in cases where it is already set (e.g. enabled ->\ndisabled -> enable, external helper process, interrupted runs, future\nparallel processes). There is also an effort to not sync the data directory\nif no changes were written. Finally, added a bit more output on how many\nfiles were actually changed, e.g.:\n\nChecksum operation completed\nFiles scanned: 1236\nBlocks scanned: 23283\nFiles modified: 38\nBlocks modified: 19194\npg_checksums: syncing data directory\npg_checksums: updating control file\nChecksums enabled in cluster\n\nCheers,\nGreg",
"msg_date": "Wed, 26 May 2021 17:23:55 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Speed up pg_checksums in cases where checksum already set"
},
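The skip-if-already-set optimization described above can be modeled roughly like this: compute each block's checksum and rewrite the block only when the stored value differs, while counting what was actually touched. This Python sketch uses a stand-in checksum (pg_checksums itself is C code computing the FNV-based page checksum in the page header), so the names and layout here are illustrative assumptions:

```python
import zlib

def compute_checksum(block: bytes, block_no: int) -> int:
    # Stand-in for pg_checksum_page(); the real algorithm is FNV-based
    # and mixes in the block number, which this imitates crudely.
    return zlib.crc32(block + block_no.to_bytes(4, "big")) & 0xFFFF

def scan_file(blocks, stored):
    """Return (blocks_scanned, blocks_modified, updated stored checksums)."""
    scanned = modified = 0
    out = list(stored)
    for i, block in enumerate(blocks):
        scanned += 1
        expected = compute_checksum(block, i)
        if stored[i] != expected:      # only rewrite when the value differs
            out[i] = expected
            modified += 1
    return scanned, modified, out
```

A second pass over an already-enabled cluster then reports zero blocks modified, and the expensive per-block writes (and, per the patch, the final data-directory sync) can be skipped.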
{
"msg_contents": "On Thu, May 27, 2021 at 5:24 AM Greg Sabino Mullane <htamfids@gmail.com> wrote:\n>\n> The attached patch makes an optimization to pg_checksums which prevents rewriting the block if the checksum is already what we expect. This can lead to much faster runs in cases where it is already set (e.g. enabled -> disabled -> enable, external helper process, interrupted runs, future parallel processes). There is also an effort to not sync the data directory if no changes were written. Finally, added a bit more output on how many files were actually changed, e.g.:\n\nI don't know how often this will actually help as probably people\naren't toggling the checksum state that often, but it seems like a\ngood idea overall. The patch looks sensible to me.\n\n\n",
"msg_date": "Thu, 27 May 2021 09:26:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Wed, May 26, 2021 at 05:23:55PM -0400, Greg Sabino Mullane wrote:\n> The attached patch makes an optimization to pg_checksums which prevents\n> rewriting the block if the checksum is already what we expect. This can\n> lead to much faster runs in cases where it is already set (e.g. enabled ->\n> disabled -> enable, external helper process, interrupted runs, future\n> parallel processes).\n\nMakes sense.\n\n> There is also an effort to not sync the data directory\n> if no changes were written. Finally, added a bit more output on how many\n> files were actually changed, e.g.:\n\n- if (do_sync)\n+ if (do_sync && total_files_modified)\n {\n \t pg_log_info(\"syncing data directory\");\n fsync_pgdata(DataDir, PG_VERSION_NUM);\n\nHere, I am on the edge. It could be an advantage to force a flush of\nthe data folder anyway, no? Say, all the pages have a correct\nchecksum and they are in the OS cache, but they may not have been\nflushed yet. That would emulate what initdb -S does already.\n\n> Checksum operation completed\n> Files scanned: 1236\n> Blocks scanned: 23283\n> Files modified: 38\n> Blocks modified: 19194\n> pg_checksums: syncing data directory\n> pg_checksums: updating control file\n> Checksums enabled in cluster\n\nThe addition of the number of files modified looks like an advantage.\n--\nMichael",
"msg_date": "Thu, 27 May 2021 11:17:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "In one of the checksum patches, there was an understanding that the pages\nshould be written even if the checksum is correct, to handle replicas.\n\n From the v19 patch:\nhttps://www.postgresql.org/message-id/F7AFCFCD-8F77-4546-8D42-C7F675A4B680%40yesql.se\n+ * Mark the buffer as dirty and force a full page write. We have to\n+ * re-write the page to WAL even if the checksum hasn't changed,\n+ * because if there is a replica it might have a slightly different\n+ * version of the page with an invalid checksum, caused by unlogged\n+ * changes (e.g. hintbits) on the master happening while checksums\n+ * were off. This can happen if there was a valid checksum on the page\n+ * at one point in the past, so only when checksums are first on, then\n+ * off, and then turned on again.\n\npg_checksums(1) says:\n\n| When using a replication setup with tools which perform direct copies of relation file blocks (for example pg_rewind(1)), enabling or disabling checksums can lead to page\n| corruptions in the shape of incorrect checksums if the operation is not done consistently across all nodes. When enabling or disabling checksums in a replication setup, it\n| is thus recommended to stop all the clusters before switching them all consistently. Destroying all standbys, performing the operation on the primary and finally recreating\n| the standbys from scratch is also safe.\n\nDoes your patch complicate things for the \"stop all the clusters before\nswitching them all\" case?\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 26 May 2021 21:29:43 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Wed, May 26, 2021 at 09:29:43PM -0500, Justin Pryzby wrote:\n> In one of the checksum patches, there was an understanding that the pages\n> should be written even if the checksum is correct, to handle replicas.\n> \n> From the v19 patch:\n> https://www.postgresql.org/message-id/F7AFCFCD-8F77-4546-8D42-C7F675A4B680%40yesql.se\n> + * Mark the buffer as dirty and force a full page write. We have to\n> + * re-write the page to WAL even if the checksum hasn't changed,\n> + * because if there is a replica it might have a slightly different\n> + * version of the page with an invalid checksum, caused by unlogged\n> + * changes (e.g. hintbits) on the master happening while checksums\n> + * were off. This can happen if there was a valid checksum on the page\n> + * at one point in the past, so only when checksums are first on, then\n> + * off, and then turned on again.\n\nI am not really following the line of argument here. pg_checksums\nrelies on the fact that the cluster has been safely shut down before\nrunning. So, if this comes to standbys, they would have reached a\nconsistent point, and the shutdown makes sure that all pages are\nflushed.\n--\nMichael",
"msg_date": "Thu, 27 May 2021 13:16:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "Thanks for the quick replies, everyone.\n\nOn Wed, May 26, 2021 at 10:17 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n>\n> - if (do_sync)\n> + if (do_sync && total_files_modified)\n> Here, I am on the edge. It could be an advantage to force a flush of\n> the data folder anyway, no?\n\n\nI was originally on the fence about including this as well, but it seems\nlike since the database is shut down and already in a consistent state,\nthere seems no advantage to syncing if we have not made any changes. Things\nare no better or worse than when we arrived. However, the real-world use\ncase of running pg_checksums --enable and getting no changed blocks is\nprobably fairly rare, so if there is a strong objection, I'm happy\nreverting to just (do_sync). (I'm not sure how cheap a sync is, I assume\nit's low impact as the database is shut down, I guess it becomes a \"might\nas well while we are here\"?)\n\nOn Wed, May 26, 2021 at 10:29 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> In one of the checksum patches, there was an understanding that the pages\n> should be written even if the checksum is correct, to handle replicas.\n> ...\n> Does your patch complicate things for the \"stop all the clusters before\n> switching them all\" case?\n>\n\nI cannot imagine how it would, but, like Michael, I'm not really\nunderstanding the reasoning here. We only run when safely shutdown, so no\nWAL or dirty buffers need concern us :). Of course, once the postmaster is\nup and running, fiddling with checksums becomes vastly more complicated, as\nevidenced by that thread. I'm happy sticking to and speeding up the offline\nversion for now.\n\nCheers,\nGreg",
"msg_date": "Thu, 27 May 2021 10:29:14 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:29:14AM -0400, Greg Sabino Mullane wrote:\n> I was originally on the fence about including this as well, but it seems\n> like since the database is shut down and already in a consistent state,\n> there seems no advantage to syncing if we have not made any changes. Things\n> are no better or worse than when we arrived. However, the real-world use\n> case of running pg_checksums --enable and getting no changed blocks is\n> probably fairly rare, so if there is a strong objection, I'm happy\n> reverting to just (do_sync). (I'm not sure how cheap a sync is, I assume\n> it's low impact as the database is shut down, I guess it becomes a \"might\n> as well while we are here\"?)\n\nI understand that this should be rare, but I don't want to take any\nbets either. With this patch, we could finish with cases where some\npages are still in the OS cache but don't get flushed because a\nprevious cancellation let the cluster in a state where all the page\nchecksums have been written out but a portion of the files were not\nsynced. A follow-up run of pg_checksums would see that all the pages\nare correct, but would think that no sync is required, incorrectly.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 14:05:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "Fair enough; thanks for the feedback. Attached is a new version that does\nan unconditional sync (well, unless do_sync is false, a flag I am not\nparticularly fond of).\n\nCheers,\nGreg",
"msg_date": "Wed, 2 Jun 2021 10:21:55 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "Newer version attached that adds a small documentation tweak as well.\n\nCheers,\nGreg",
"msg_date": "Wed, 2 Jun 2021 17:09:36 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 05:09:36PM -0400, Greg Sabino Mullane wrote:\n> Newer version attach that adds a small documentation tweak as well.\n\n- enabling checksums, every file in the cluster is rewritten in-place.\n+ enabling checksums, every file in the cluster with a changed checksum is\n+ rewritten in-place.\n\nThis doc addition is a bit confusing, as it could mean that each file\nhas just one single checksum. We could be more precise, say:\n\"When enabling checksums, each relation file block with a changed\nchecksum is rewritten in place.\"\n\nShould we also mention that the sync happens even if no blocks are\nrewritten based on the reasoning of upthread (aka we'd better do the\nfinal flush as an interrupted pg_checksums may let a portion of the\nfiles as not flushed)?\n--\nMichael",
"msg_date": "Fri, 18 Jun 2021 14:57:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 1:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> This doc addition is a bit confusing, as it could mean that each file\n> has just one single checksum. We could be more precise, say:\n> \"When enabling checksums, each relation file block with a changed\n> checksum is rewritten in place.\"\n>\n\nAgreed, I like that wording. New patch attached.\n\n\n> Should we also mention that the sync happens even if no blocks are\n> rewritten based on the reasoning of upthread (aka we'd better do the\n> final flush as an interrupted pg_checksums may let a portion of the\n> files as not flushed)?\n>\n\nI don't know that we need to bother: the default is already to sync and one\nhas to go out of one's way using the -N argument to NOT sync, so I think\nit's a pretty safe assumption to everyone (except those who read my first\nversion of my patch!) that syncing always happens.\n\nCheers,\nGreg",
"msg_date": "Fri, 18 Jun 2021 20:01:17 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Fri, Jun 18, 2021 at 08:01:17PM -0400, Greg Sabino Mullane wrote:\n> I don't know that we need to bother: the default is already to sync and one\n> has to go out of one's way using the -N argument to NOT sync, so I think\n> it's a pretty safe assumption to everyone (except those who read my first\n> version of my patch!) that syncing always happens.\n\nPerhaps you are right to keep it simple. If people would like to\ndocument that more precisely, it could always be changed if\nnecessary. What you have here sounds pretty much right to me.\n--\nMichael",
"msg_date": "Wed, 23 Jun 2021 09:39:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Wed, Jun 23, 2021 at 09:39:37AM +0900, Michael Paquier wrote:\n> Perhaps you are right to keep it simple. If people would like to\n> document that more precisely, it could always be changed if\n> necessary. What you have here sounds pretty much right to me.\n\nSo, I was looking at this one today, and got confused with the name of\nthe counters once the patch was in place as this leads to having\nthings like \"blocks\" and \"total_blocks_modified\", which is a bit\nconfusing as \"blocks\" stands for the number of blocks scanned,\nincluding new pages. I have simply suffixed \"files\" and \"blocks\" with\n\"_scanned\" to be more consistent with the new counters that are named\n\"_written\", giving the attached. We still need to have the per-file\ncounter in scan_file() with the global counters updated at the end of\na file scan for the sake of the file counter, of course.\n\nDoes that look fine to you?\n--\nMichael",
"msg_date": "Tue, 29 Jun 2021 15:59:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 2:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Does that look fine to you?\n>\n\nLooks great, I appreciate the renaming.\n\nCheers,\nGreg",
"msg_date": "Tue, 29 Jun 2021 09:10:30 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
},
{
"msg_contents": "On Tue, Jun 29, 2021 at 09:10:30AM -0400, Greg Sabino Mullane wrote:\n> Looks great, I appreciate the renaming.\n\nApplied, then.\n--\nMichael",
"msg_date": "Wed, 30 Jun 2021 10:10:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Speed up pg_checksums in cases where checksum already set"
}
] |
[
{
"msg_contents": "The attached patch stems from the conversation at [1];\nI'm starting a new thread to avoid confusing the cfbot.\n\nBriefly, the idea is to allow reverting the change made in\ncommit ab596105b to increase FirstBootstrapObjectId from\n12000 to 13000, by teaching genbki.pl to assign OIDs\nindependently in each catalog rather than from a single\nOID counter. Thus, the OIDs in this range will not be\nglobally unique anymore, but only unique per-catalog.\n\nThe aforesaid commit had to increase FirstBootstrapObjectId\nbecause as of HEAD, genbki.pl needs to consume OIDs up through\n12035, overrunning the old limit of 12000. But moving up that\nlimit seems a bit risky, cf [2]. It'd be better if we could\navoid doing that. Since the OIDs in question are spread across\nseveral catalogs, allocating them per-catalog seems to fix the\nproblem quite effectively. With the attached patch, the ending\nOID counters are\n\nGenbkiNextOid(pg_amop) = 10945\nGenbkiNextOid(pg_amproc) = 10697\nGenbkiNextOid(pg_cast) = 10230\nGenbkiNextOid(pg_opclass) = 10164\n\nso we have quite a lot of daylight before we'll ever approach\n12000 again.\n\nPer-catalog OID uniqueness shouldn't be a problem here, because\nany code that's assuming global uniqueness is broken anyway;\nno such guarantee exists after the OID counter has wrapped\naround.\n\nSo I propose shoehorning this into v14, to avoid shipping a\nrelease where FirstBootstrapObjectId has been bumped up.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/3737988.1618451008%40sss.pgh.pa.us\n\n[2] https://www.postgresql.org/message-id/flat/CAGPqQf3JYTrTB1E1fu_zOGj%2BrG_kwTfa3UcUYPfNZL9o1bcYNw%40mail.gmail.com",
"msg_date": "Wed, 26 May 2021 17:43:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reducing the range of OIDs consumed by genbki.pl"
},
{
"msg_contents": "On Wed, May 26, 2021 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So I propose shoehorning this into v14, to avoid shipping a\n> release where FirstBootstrapObjectId has been bumped up.\n\nJust to repeat on this thread what I said on the other one, I am +1 on\nthis as a concept. I have not reviewed the patch.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 27 May 2021 10:36:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the range of OIDs consumed by genbki.pl"
},
{
"msg_contents": "On Wed, May 26, 2021 at 5:43 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> The attached patch stems from the conversation at [1];\n> I'm starting a new thread to avoid confusing the cfbot.\n\nThe patch looks good to me.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 27 May 2021 13:09:59 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing the range of OIDs consumed by genbki.pl"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> The patch looks good to me.\n\nThanks for reviewing!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 27 May 2021 13:34:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing the range of OIDs consumed by genbki.pl"
}
] |
[
{
"msg_contents": "The RADIUS-related checks in parse_hba_line() did not respect elevel\nand did not fill in *err_msg. Also, verify_option_list_length()\npasted together error messages in an untranslatable way. To fix the\nlatter, remove the function and do the error checking inline. It's a\nbit more verbose but only minimally longer, and it makes fixing the\nfirst two issues straightforward.",
"msg_date": "Thu, 27 May 2021 10:36:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Fix RADIUS error reporting in hba file parsing"
},
{
"msg_contents": "On Thu, May 27, 2021 at 10:36 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n>\n> The RADIUS-related checks in parse_hba_line() did not respect elevel\n> and did not fill in *err_msg. Also, verify_option_list_length()\n> pasted together error messages in an untranslatable way. To fix the\n> latter, remove the function and do the error checking inline. It's a\n> bit more verbose but only minimally longer, and it makes fixing the\n> first two issues straightforward.\n\nLGTM. I agree that the extra code from removing the function is worth\nit if it makes it better for translations.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Mon, 31 May 2021 11:31:07 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: Fix RADIUS error reporting in hba file parsing"
}
] |
[
{
"msg_contents": "Hi,\n\nSince writing SECURITY DEFINER functions securely requires annoying\nincantations[1], wouldn't it be nice if we provided a way for the superuser\nto override the default search path via a GUC in postgresql.conf? That way\nyou can set search_path if you want to override the default, but if you\nleave it out you're not vulnerable, assuming security_definer_search_path\nonly contains secure schemas.\n\n\n.m",
"msg_date": "Thu, 27 May 2021 14:23:35 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "security_definer_search_path GUC"
},
{
"msg_contents": "Glad you bring this problem up for discussion, something should be done to improve the situation.\n\nPersonally, as I really dislike search_path and consider using it an anti-pattern.\nI would rather prefer a GUC to hard-code search_path to a constant default value of just ‘public’ that cannot be changed by anyone or any function. Trying to change it to a different value would raise an exception.\n\nThis would work for me since I always fully-qualify all objects except the ones in public.\n\n/Joel\n\nOn Thu, May 27, 2021, at 13:23, Marko Tiikkaja wrote:\n> Hi,\n> \n> Since writing SECURITY DEFINER functions securely requires annoying incantations[1], wouldn't it be nice if we provided a way for the superuser to override the default search path via a GUC in postgresql.conf? That way you can set search_path if you want to override the default, but if you leave it out you're not vulnerable, assuming security_definer_search_path only contains secure schemas.\n> \n> \n> .m\n\nKind regards,\n\nJoel",
"msg_date": "Sat, 29 May 2021 22:05:34 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Sat, May 29, 2021 at 11:06 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> Glad you bring this problem up for discussion, something should be done to\n> improve the situation.\n>\n> Personally, as I really dislike search_path and consider using it an\n> anti-pattern.\n> I would rather prefer a GUC to hard-code search_path to a constant default\n> value of just ‘public’ that cannot be changed by anyone or any function.\n> Trying to change it to a different value would raise an exception.\n>\n\nThat would work, too! I think it's a nice idea, perhaps even better than\nwhat I proposed. I would be happy to see either one incorporated.\n\n\n.m",
"msg_date": "Sat, 29 May 2021 23:10:44 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Sat, May 29, 2021, at 22:10, Marko Tiikkaja wrote:\n> On Sat, May 29, 2021 at 11:06 PM Joel Jacobson <joel@compiler.org> wrote:\n>> __\n>> Glad you bring this problem up for discussion, something should be done to improve the situation.\n>> \n>> Personally, as I really dislike search_path and consider using it an anti-pattern.\n>> I would rather prefer a GUC to hard-code search_path to a constant default value of just ‘public’ that cannot be changed by anyone or any function. Trying to change it to a different value would raise an exception.\n> \n> That would work, too! I think it's a nice idea, perhaps even better than what I proposed. I would be happy to see either one incorporated.\n\nAnother idea would be to create an extension that removes the search_path feature entirely,\nnot sure though if the current hooks would allow creating such an extension.\n\nMaybe \"extensions\" that only remove unwanted core features could by convention be prefixed with \"no_\"?\n\nCREATE EXTENSION no_search_path;\n\nThat way, a company with a company-wide policy against using search_path,\ncould add this to all their company .control extension files:\n\nrequires = 'no_search_path'\n\nIf some employee would try to `DROP EXTENSION no_search_path` they would get an error:\n\n# DROP EXTENSION no_search_path;\nERROR: cannot drop extension no_search_path because other objects depend on it\nDETAIL: extension acme_inc depends on extension no_search_path\n\nThis would be especially useful when a company has a policy to use some extension,\ninstead of relying on the built-in functionality provided.\nI'm not using \"zson\" myself, but perhaps it could be a good example to illustrate my point:\n\nLet's say a company has decided to use zson instead of json/jsonb,\nthe company would then ensure nothing is using json/jsonb\nvia the top-level .control file for the company's own extension:\n\nrequires = 'no_json, no_jsonb, zson'\n\nOr if not shipping the company's product as an extension,\nthey could instead add this to the company's install script:\n\nCREATE EXTENSION no_json;\nCREATE EXTENSION no_jsonb;\nCREATE EXTENSION zson;\n\nMaybe this is out of scope for extensions, since I guess extensions are supposed to add features?\n\nIf so, how about a new separate command `CREATE REDUCTION` specifically to remove unwanted core features,\nwhich then wouldn't need the \"no_\" prefix since it would be implicit and in a different namespace:\n\nE.g.\n\nCREATE REDUCTION search_path;\n\nand\n\nCREATE REDUCTION json;\nCREATE REDUCTION jsonb;\nCREATE EXTENSION zson;\n\n/Joel",
"msg_date": "Sun, 30 May 2021 08:51:52 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Sun, May 30, 2021, at 08:51, Joel Jacobson wrote:\n> Maybe this is out of scope for extensions, since I guess extensions are supposed to add features?\n> \n> If so, how about a new separate command `CREATE REDUCTION` specifically to remove unwanted core features,\n> which then wouldn't need the \"no_\" prefix since it would be implicit and in a different namespace:\n\nAnother idea would be to extract features that are considered deprecated/legacy into separate extensions,\nand ship them pre-installed for compatibility reasons,\nbut this would allow uninstalling them using DROP EXTENSION,\nsimilar to how e.g. \"plpgsql\" which is a pre-installed extension can be uninstalled.\n\n(Except I wouldn't want to uninstall plpgsql, I think it's great! But I note it's the only pre-installed extension shipped with PostgreSQL, so it's a good example on the concept.)\n\n/Joel\n\nOn Sun, May 30, 2021, at 08:51, Joel Jacobson wrote:Maybe this is out of scope for extensions, since I guess extensions are supposed to add features?If so, how about a new separate command `CREATE REDUCTION` specifically to remove unwanted core features,which then wouldn't need the \"no_\" prefix since it would be implicit and in a different namespace:Another idea would be to extract features that are considered deprecated/legacy into separate extensions,and ship them pre-installed for compatibility reasons,but this would allow uninstalling them using DROP EXTENSION,similar to how e.g. \"plpgsql\" which is a pre-installed extension can be uninstalled.(Except I wouldn't want to uninstall plpgsql, I think it's great! But I note it's the only pre-installed extension shipped with PostgreSQL, so it's a good example on the concept.)/Joel",
"msg_date": "Sun, 30 May 2021 09:30:15 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Sun, May 30, 2021 at 8:52 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sat, May 29, 2021, at 22:10, Marko Tiikkaja wrote:\n>\n> On Sat, May 29, 2021 at 11:06 PM Joel Jacobson <joel@compiler.org> wrote:\n>\n>\n> Glad you bring this problem up for discussion, something should be done to\n> improve the situation.\n>\n> Personally, as I really dislike search_path and consider using it an\n> anti-pattern.\n> I would rather prefer a GUC to hard-code search_path to a constant default\n> value of just ‘public’ that cannot be changed by anyone or any function.\n> Trying to change it to a different value would raise an exception.\n>\n>\n> That would work, too! I think it's a nice idea, perhaps even better than\n> what I proposed. I would be happy to see either one incorporated.\n>\n>\n> Another idea would be to create an extension that removes the search_path\n> feature entirely,\n> not sure though if the current hooks would allow creating such an\n> extension.\n>\n> Maybe \"extensions\" that only removes unwanted core features could be by\n> convention be prefixed with \"no_\"?\n>\n> CREATE EXTENSION no_search_path;\n>\n> That way, a company with a company-wide policy against using search_path,\n> could add this to all their company .control extension files:\n>\n\nMaybe inverted design can work better - there can be GUC -\n\"qualified_names_required\" with a list of schemas without enabled implicit\naccess.\n\nThe one possible value can be \"all\".\n\nThe advantage of this design can be the possibility of work on current\nextensions.\n\nI don't think so search_path can be disabled - but there can be checks that\ndisallow non-qualified names.\n\nPavel\n\n\n\n> requires = 'no_search_path'\n>\n> If some employee would try to `DROP EXTENSION no_search_path` they would\n> get an error:\n>\n> # DROP EXTENSION no_search_path;\n> ERROR: cannot drop extension no_search_path because other objects depend\n> on it\n> DETAIL: extension acme_inc depends on extension no_search_path\n>\n> This would be especially useful when a company has a policy to use some\n> extension,\n> instead of relying on the built-in functionality provided.\n> I'm not using \"zson\" myself, but perhaps it could be a good example to\n> illustrate my point:\n>\n> Let's say a company has decided to use zson instead of json/jsonb,\n> the company would then ensure nothing is using json/jsonb\n> via the top-level .control file for the company's own extension:\n>\n> requires = 'no_json, no_jsonb, zson'\n>\n> Or if not shipping the company's product as an extension,\n> they could instead add this to the company's install script:\n>\n> CREATE EXTENSION no_json;\n> CREATE EXTENSION no_jsonb;\n> CREATE EXTENSION zson;\n>\n> Maybe this is out of scope for extensions, since I guess extensions are\n> supposed to add features?\n>\n> If so, how about a new separate command `CREATE REDUCTION` specifically to\n> remove unwanted core features,\n> which then wouldn't need the \"no_\" prefix since it would be implicit and\n> in a different namespace:\n>\n> E.g.\n>\n> CREATE REDUCTION search_path;\n>\n> and\n>\n> CREATE REDUCTION json;\n> CREATE REDUCTION jsonb;\n> CREATE EXTENSION zson;\n>\n> /Joel\n>",
"msg_date": "Sun, 30 May 2021 09:54:22 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Sun, May 30, 2021, at 09:54, Pavel Stehule wrote:\n> Maybe inverted design can work better - there can be GUC - \"qualified_names_required\" with a list of schemas without enabled implicit access.\n> \n> The one possible value can be \"all\".\n> \n> The advantage of this design can be the possibility of work on current extensions.\n> \n> I don't think so search_path can be disabled - but there can be checks that disallow non-qualified names.\n\nI would prefer a pre-installed search_path-extension that can be uninstalled,\ninstead of yet another GUC, but if that's not an option, I'm happy with a GUC as well.\n\nIMO, the current search_path default behaviour is a minefield.\n\nFor users like myself, who prefer a safer context-free name resolution behaviour, here is how I think it should work:\n\n* The only schemas that don't require fully-qualified schemas are 'pg_catalog' and 'public'\n\n* The $user schema feature is removed, i.e:\n- $user is not part of the search_path\n- objects are not created nor looked for in a $user schema if such a schema exists\n- objects are always created in 'public' if a schema is not explicitly specified\n\n* Temp objects always needs to be fully-qualified using 'pg_temp'\n\n* 'pg_catalog' and 'public' are enforced to be completely disjoint.\nThat is, trying to create an object in 'public' that is in conflict with 'pg_catalog' would raise an error.\n\nMore ideas?\n\n/Joel",
"msg_date": "Tue, 01 Jun 2021 08:59:23 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "út 1. 6. 2021 v 8:59 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Sun, May 30, 2021, at 09:54, Pavel Stehule wrote:\n>\n> Maybe inverted design can work better - there can be GUC -\n> \"qualified_names_required\" with a list of schemas without enabled implicit\n> access.\n>\n> The one possible value can be \"all\".\n>\n> The advantage of this design can be the possibility of work on current\n> extensions.\n>\n> I don't think so search_path can be disabled - but there can be checks\n> that disallow non-qualified names.\n>\n>\n> I would prefer a pre-installed search_path-extension that can be\n> uninstalled,\n> instead of yet another GUC, but if that's not an option, I'm happy with a\n> GUC as well.\n>\n> IMO, the current search_path default behaviour is a minefield.\n>\n> For users like myself, who prefer a safer context-free name resolution\n> behaviour, here is how I think it should work:\n>\n> * The only schemas that don't require fully-qualified schemas are\n> 'pg_catalog' and 'public'\n>\n> * The $user schema feature is removed, i.e:\n> - $user is not part of the search_path\n> - objects are not created nor looked for in a $user schema if such a\n> schema exists\n> - objects are always created in 'public' if a schema is not explicitly\n> specified\n>\n> * Temp objects always needs to be fully-qualified using 'pg_temp'\n>\n> * 'pg_catalog' and 'public' are enforced to be completely disjoint.\n> That is, trying to create an object in 'public' that is in conflict with\n> 'pg_catalog' would raise an error.\n>\n> More ideas?\n>\n\nOperators use schemas too. I cannot imagine any work with operators with\nthe necessity of explicit schemas.\n\nRegards\n\nPavel",
"msg_date": "Tue, 1 Jun 2021 10:44:34 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Tue, Jun 1, 2021, at 10:44, Pavel Stehule wrote:\n> Operators use schemas too. I cannot imagine any work with operators with the necessity of explicit schemas.\n\nI thought operators are mostly installed in the public schema, in which case that wouldn't be a problem, or am I missing something here?\n\n/Joel",
"msg_date": "Tue, 01 Jun 2021 12:52:50 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "út 1. 6. 2021 v 12:53 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Jun 1, 2021, at 10:44, Pavel Stehule wrote:\n>\n> Operators use schemas too. I cannot imagine any work with operators with\n> the necessity of explicit schemas.\n>\n>\n> I thought operators are mostly installed in the public schema, in which\n> case that wouldn't be a problem, or am I missing something here?\n>\n\nIt is inconsistency - if I use schema for almost all, then can be strange\nto store operators just to public.\n\n\n> /Joel\n>",
"msg_date": "Tue, 1 Jun 2021 12:55:56 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Tue, Jun 1, 2021, at 12:55, Pavel Stehule wrote:\n> \n> \n> út 1. 6. 2021 v 12:53 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> On Tue, Jun 1, 2021, at 10:44, Pavel Stehule wrote:\n>>> Operators use schemas too. I cannot imagine any work with operators with the necessity of explicit schemas.\n>> \n>> I thought operators are mostly installed in the public schema, in which case that wouldn't be a problem, or am I missing something here?\n> \n> It is inconsistency - if I use schema for almost all, then can be strange to store operators just to public. \n\nI don't agree. If an extension provides functionality that is supposed to be used by all parts of the system, then I think the 'public' schema is a good choice.\n\nUsing schemas only for the sake of separation, i.e. adding the schemas to the search_path, to make them implicitly available, is IMO an ugly hack, since if just wanting separation without fully-qualifying, then packaging the objects as separate extensions is much cleaner. That way you can easily see what objects are provided by each extension using \\dx+.\n\n/Joel",
"msg_date": "Tue, 01 Jun 2021 13:12:42 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "út 1. 6. 2021 v 13:13 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Jun 1, 2021, at 12:55, Pavel Stehule wrote:\n>\n>\n>\n> út 1. 6. 2021 v 12:53 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> On Tue, Jun 1, 2021, at 10:44, Pavel Stehule wrote:\n>\n> Operators use schemas too. I cannot imagine any work with operators with\n> the necessity of explicit schemas.\n>\n>\n> I thought operators are mostly installed in the public schema, in which\n> case that wouldn't be a problem, or am I missing something here?\n>\n>\n> It is inconsistency - if I use schema for almost all, then can be strange\n> to store operators just to public.\n>\n>\n> I don't agree. If an extension provides functionality that is supposed to\n> be used by all parts of the system, then I think the 'public' schema is a\n> good choice.\n>\n\nI disagree\n\nusual design of extensions (when schema is used) is\n\ncreate schema ...\nset schema ...\n\ncreate table\ncreate function\n...\n\nIt is hard to say if it is good or it is bad. Orafce using my own schema,\nand some things are in public (and some in pg_catalog), and people don't\ntell me, so it was a good choice.\n\nRegards\n\nPavel\n\n\n> Using schemas only for the sake of separation, i.e. adding the schemas to\n> the search_path, to make them implicitly available, is IMO an ugly hack,\n> since if just wanting separation without fully-qualifying, then packaging\n> the objects as separate extensions is much cleaner. That way you can\n> easily see what objects are provided by each extension using \\dx+.\n>\n> /Joel\n>",
"msg_date": "Tue, 1 Jun 2021 14:41:13 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Tue, Jun 1, 2021, at 14:41, Pavel Stehule wrote:\n> út 1. 6. 2021 v 13:13 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> I don't agree. If an extension provides functionality that is supposed to be used by all parts of the system, then I think the 'public' schema is a good choice.\n> \n> I disagree\n> \n> usual design of extensions (when schema is used) is\n> \n> create schema ...\n> set schema ...\n> \n> create table\n> create function\n> \n> It is hard to say if it is good or it is bad.\n\nYes, it's hard, because it's a matter of taste.\nSome prefer convenience, others clarity/safety.\n\n> Orafce using my own schema, and some things are in public (and some in pg_catalog), and people don't tell me, so it was a good choice.\n\nI struggle to understand this last sentence.\nSo your orafce extension installs objects in both public and pg_catalog, right.\nBut what do you mean with \"people don't tell me\"?\nAnd what \"was a good choice\"?\n\nThanks for explaining.\n\n/Joel",
"msg_date": "Tue, 01 Jun 2021 17:56:43 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "út 1. 6. 2021 v 17:57 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Jun 1, 2021, at 14:41, Pavel Stehule wrote:\n>\n> út 1. 6. 2021 v 13:13 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> I don't agree. If an extension provides functionality that is supposed to\n> be used by all parts of the system, then I think the 'public' schema is a\n> good choice.\n>\n>\n> I disagree\n>\n> usual design of extensions (when schema is used) is\n>\n> create schema ...\n> set schema ...\n>\n> create table\n> create function\n>\n> It is hard to say if it is good or it is bad.\n>\n>\n> Yes, it's hard, because it's a matter of taste.\n> Some prefer convenience, others clarity/safety.\n>\n> Orafce using my own schema, and some things are in public (and some in\n> pg_catalog), and people don't tell me, so it was a good choice.\n>\n>\n> I struggle to understand this last sentence.\n> So your orafce extension installs objects in both public and pg_catalog,\n> right.\n> But what do you mean with \"people don't tell me\"?\n> And what \"was a good choice\"?\n>\n\nI learned programming on Orafce, and I didn't expect any success, so I\ndesigned it quickly, and the placing of old Orafce's functions to schemas\nis messy.\n\nI am sure, if I started again, I would never use pg_catalog or public\nschema. I think if somebody uses schema, then it is good to use schema for\nall without exceptions - but it expects usage of search_path. I am not sure\nif using public schema or using search_path are two sides of one thing.\n\nPavel\n\n\n>\n> Thanks for explaining.\n>\n> /Joel\n>",
"msg_date": "Tue, 1 Jun 2021 18:05:50 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Tue, Jun 1, 2021, at 18:05, Pavel Stehule wrote:\n> I learned programming on Orafce, and I didn't expect any success, so I designed it quickly, and the placing of old Orafce's functions to schemas is messy. \n> \n> I am sure, if I started again, I would never use pg_catalog or public schema. I think if somebody uses schema, then it is good to use schema for all without exceptions - but it expects usage of search_path. I am not sure if using public schema or using search_path are two sides of one thing.\n\nI think you're right they both try to provide solutions to the same problem, i.e. when wanting to avoid having to fully-qualify.\n\nHowever, they are very different, and while I think the 'public' schema is a great idea, I think 'search_path' has some serious problems. I'll explain why:\n\n'search_path' is a bit like a global variable in C, that can change the behaviour of the SQL commands executed.\nIt makes unqualified SQL code context-sensitive; you don't know by looking at a piece of code what objects are referred to, you also need to figure out what the active search_path is at this place in the code.\n\n'public' schema if used (without ever changing the default 'search_path'), allows creating unqualified database objects, which I think can be useful in at least three situations:\n\n1) when the application is a monolith inside a company, when there is only one version of the database, i.e. not having to worry about name collision with other objects in some other version, since the application is hidden in the company and the schema design is not exposed to the public\n\n2) when installing a extension that uses schemas, when wanting the convenience of unqualified access to some functions frequently used, instead of adding its schema to the search_path for convenience, one can instead add wrapper-functions in the 'public' schema. 
This way, all internal functions in the extension, that are not meant to be executed by users, are still hidden in its schema and won't bother anyone (i.e. can't cause unexpected conflicts). Of course, access can also be controlled via REVOKE EXECUTE ... FROM PUBLIC for such internal functions, which is probably a good idea as well.\nIn a similar way, specific tables in the extension's schema can be made unqualified as well by adding simple views, installed in the public schema, if insisting on unqualified convenience.\n\nIn conclusion:\nThe main difference is 'public' makes it possible to make *specific* objects unqualified,\nwhile 'search_path' makes *all* objects in such schema(s) unqualified.\n\n/Joel",
"msg_date": "Wed, 02 Jun 2021 08:44:59 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "st 2. 6. 2021 v 8:45 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Tue, Jun 1, 2021, at 18:05, Pavel Stehule wrote:\n>\n> I learned programming on Orafce, and I didn't expect any success, so I\n> designed it quickly, and the placing of old Orafce's functions to schemas\n> is messy.\n>\n> I am sure, if I started again, I would never use pg_catalog or public\n> schema. I think if somebody uses schema, then it is good to use schema for\n> all without exceptions - but it expects usage of search_path. I am not sure\n> if using public schema or using search_path are two sides of one thing.\n>\n>\n> I think you're right they both try to provide solutions to the same\n> problem, i.e. when wanting to avoid having to fully-qualify.\n>\n> However, they are very different, and while I think the 'public' schema is\n> a great idea, I think 'search_path' has some serious problems. I'll explain\n> why:\n>\n> 'search_path' is a bit like a global variable in C, that can change the\n> behaviour of the SQL commands executed.\n> It makes unqualified SQL code context-sensitive; you don't know by looking\n> at a piece of code what objects are referred to, you also need to figure\n> out what the active search_path is at this place in the code.\n>\n\nsometimes this is wanted feature - some sharding is based on this\n\nset search_path = 'custormerx'\n...\n\n\n\n> 'public' schema if used (without ever changing the default 'search_path'),\n> allows creating unqualified database objects, which I think can be useful\n> in at least three situations:\n>\n> 1) when the application is a monolith inside a company, when there is only\n> one version of the database, i.e. 
not having to worry about name collision\n> with other objects in some other version, since the application is hidden\n> in the company and the schema design is not exposed to the public\n>\n> 2) when installing a extension that uses schemas, when wanting the\n> convenience of unqualified access to some functions frequently used,\n> instead of adding its schema to the search_path for convenience, one can\n> instead add wrapper-functions in the 'public' schema. This way, all\n> internal functions in the extension, that are not meant to be executed by\n> users, are still hidden in its schema and won't bother anyone (i.e. can't\n> cause unexpected conflicts). Of course, access can also be controlled via\n> REVOKE EXECUTE ... FROM PUBLIC for such internal functions, which is\n> probably a good idea as well.\n> In a similar way, specific tables in the extension's schema can be made\n> unqualified as well by adding simple views, installed in the public schema,\n> if insisting on unqualified convenience.\n>\n> In conclusion:\n> The main difference is 'public' makes it possible to make *specific*\n> objects unqualified,\n> while 'search_path' makes *all* objects in such schema(s) unqualified.\n>\n\nThese arguments are valid, but I think so it is not all. If you remove\nsearch_path, then the \"public\" schema will be overused. I think we should\nask - who can change the search path and how. Now, there are not any\nlimits. 
I can imagine the situation when search_path can be changed by only\nsome dedicated role - it can be implemented in a security definer function.\nOr another solution, we can fix the search path to one value, or only a few\npossibilities.\n\nMaybe for your purpose is just enough to introduce syntax for defining all\npossibilities of search path:\n\nsearch_path = \"public\" # now, just default\nsearch_path = [\"public\"] # future - define vector of possible values of\nsearch path - in this case, only \"public\" is allowed - and if you want to\nchange it, you should be database owner\n\nor there can be hook for changing search_path, and it can be implemented\ndynamically in extension\n\nPavel\n\n\n>\n> /Joel\n>",
"msg_date": "Wed, 2 Jun 2021 09:07:18 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
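The schema-per-tenant sharding pattern Pavel mentions above can be sketched as follows; the schema and table names are illustrative, not from the thread:

```sql
-- One schema per tenant, each holding identically named tables.
CREATE SCHEMA customer_a;
CREATE SCHEMA customer_b;
CREATE TABLE customer_a.orders (id int PRIMARY KEY, total numeric);
CREATE TABLE customer_b.orders (id int PRIMARY KEY, total numeric);

-- Selecting the active tenant is just a matter of switching search_path:
-- all unqualified references now resolve into customer_a.
SET search_path = customer_a;
SELECT count(*) FROM orders;   -- reads customer_a.orders
```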
{
"msg_contents": "On Wed, Jun 2, 2021, at 09:07, Pavel Stehule wrote:\n> st 2. 6. 2021 v 8:45 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> __'search_path' is a bit like a global variable in C, that can change the behaviour of the SQL commands executed.\n>> It makes unqualified SQL code context-sensitive; you don't know by looking at a piece of code what objects are referred to, you also need to figure out what the active search_path is at this place in the code.\n> \n> sometimes this is wanted feature - some sharding is based on this\n> \n> set search_path = 'custormerx'\n\nOh, interesting, didn't know abou that one. Is that recommended best practise, or more of a hack?\n\nI also think we can never get rid of search_path by default, since so much legacy depend on it.\nBut I think it would be good to provide a way to effectively uninstall the search_path for users who prefer to do so, in databases where it's possible, and where clarity and safety is desired.\n\n> \n>> 'public' schema if used (without ever changing the default 'search_path'), allows creating unqualified database objects, which I think can be useful in at least three situations:\n>> \n>> 1) when the application is a monolith inside a company, when there is only one version of the database, i.e. not having to worry about name collision with other objects in some other version, since the application is hidden in the company and the schema design is not exposed to the public\n>> \n>> 2) when installing a extension that uses schemas, when wanting the convenience of unqualified access to some functions frequently used, instead of adding its schema to the search_path for convenience, one can instead add wrapper-functions in the 'public' schema. This way, all internal functions in the extension, that are not meant to be executed by users, are still hidden in its schema and won't bother anyone (i.e. can't cause unexpected conflicts). Of course, access can also be controlled via REVOKE EXECUTE ... 
FROM PUBLIC for such internal functions, which is probably a good idea as well.\n>> In a similar way, specific tables in the extension's schema can be made unqualified as well by adding simple views, installed in the public schema, if insisting on unqualified convenience.\n>> \n>> In conclusion:\n>> The main difference is 'public' makes it possible to make *specific* objects unqualified,\n>> while 'search_path' makes *all* objects in such schema(s) unqualified.\n> \n> These arguments are valid, but I think so it is not all. If you remove search_path, then the \"public\" schema will be overused.\n\nWhat makes you think that? If a database object is to be accessed unqualified by all users, isn't the 'public' schema a perfect fit for it? How will it be helpful to create different database objects in different schemas, if also adding all such schemas to the search_path so they can be accessed unqualified? In such a scenario you risk unintentionally creating conflicting objects, and whatever schema happened to be first in the search_path will be resolved. Seems insecure and messy to me.\nMuch safer to install objects that you want to access unqualified in 'public', and get an error if you try to create a new object with a conflicting name of an existing one.\n\n> I think we should ask - who can change the search path and how. Now, there are not any limits. I can imagine the situation when search_path can be changed by only some dedicated role - it can be implemented in a security definer function. 
Or another solution, we can fix the search path to one value, or only a few possibilities.\n> \n> Maybe for your purpose is just enough to introduce syntax for defining all possibilities of search path:\n> \n> search_path = \"public\" # now, just default\n> search_path = [\"public\"] # future - define vector of possible values of search path - in this case, only \"public\" is allowed - and if you want to change it, you should be database owner\n> \n> or there can be hook for changing search_path, and it can be implemented dynamically in extension\n\nNot bad ideas. I think they would improve the situation. Maybe it could even be a global immutable constant value, the same for all users, that could only be set upon initdb, similar to how encoding can only be set via initdb.\n\ninitdb --search_path \"pg_catalog, public, pg_temp\" foobar\n\nBut perhaps the search_path as an uninstallable extension is a less invasive idea.\n\nLooking at the code, this seems to be the commit that introduced search_path back in 2002:\n\nI'm not sure how difficult it would be to extract search_path into an extension.\nDoesn't look to be that much code. Here is the initial commit that introduced the concept.\nBut perhaps it's more complex today due to new dependencies.\n\ncommit 838fe25a9532ab2e9b9b7517fec94e804cf3da81\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Mon Apr 1 03:34:27 2002 +0000\n\n Create a new GUC variable search_path to control the namespace search\n path. The default behavior if no per-user schemas are created is that\n all users share a 'public' namespace, thus providing behavior backwards\n compatible with 7.2 and earlier releases. Probably the semantics and\n default setting will need to be fine-tuned, but this is a start.\n\nBut search_path is not the only problem. I think it's also a problem objects with the same identifies can be created in both pg_catalog and public. Can we think of a valid reason why it is a good idea to continue to allow that? 
In what real-life scenario is it needed?\n\n/Joel",
"msg_date": "Wed, 02 Jun 2021 14:46:08 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
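The wrapper-function approach Joel describes might look like this; the extension name `myext` and its functions are hypothetical:

```sql
-- Extension installed into its own schema, e.g.:
--   CREATE EXTENSION myext SCHEMA myext;

-- Unqualified convenience wrapper in public for one frequently used
-- function, instead of putting the whole schema on the search_path:
CREATE FUNCTION public.distance(a point, b point) RETURNS float8
    LANGUAGE sql AS $$ SELECT myext.distance(a, b) $$;

-- Internal helpers stay hidden in the extension's schema, and can be
-- locked down further as suggested:
REVOKE EXECUTE ON FUNCTION myext.internal_helper() FROM PUBLIC;
```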
{
"msg_contents": "On Wed, Jun 2, 2021 at 3:46 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> If a database object is to be accessed unqualified by all users, isn't the\n> 'public' schema a perfect fit for it? How will it be helpful to create\n> different database objects in different schemas, if also adding all such\n> schemas to the search_path so they can be accessed unqualified? In such a\n> scenario you risk unintentionally creating conflicting objects, and\n> whatever schema happened to be first in the search_path will be resolved.\n> Seems insecure and messy to me.\n>\n\nHeh. This is actually exactly what I wanted to do.\n\nThe use case is: version upgrades. I want to be able to have a search_path\nof something like 'pg_catalog, compat, public'. That way we can provide\ncompatibility versions of newer functions in the \"compat\" schema, which get\ntaken over by pg_catalog when running on a newer version. That way all the\ncompatibility crap is clearly separated from the stuff that should be in\n\"public\".\n\n\n.m",
"msg_date": "Wed, 2 Jun 2021 19:36:39 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
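A minimal sketch of the compat-schema upgrade trick Marko describes; the backported function is a hypothetical example. The function lives in `compat` on older servers, and because pg_catalog is searched first, the server's own version silently takes over once it exists there:

```sql
CREATE SCHEMA compat;

-- Hypothetical backport of a function that newer server versions ship
-- in pg_catalog; there it shadows this one, since pg_catalog is
-- searched first.
CREATE FUNCTION compat.gen_random_uuid() RETURNS uuid
    LANGUAGE sql
    AS $$ SELECT md5(random()::text || clock_timestamp()::text)::uuid $$;

SET search_path = pg_catalog, compat, public;
```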
{
"msg_contents": "st 2. 6. 2021 v 14:46 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Wed, Jun 2, 2021, at 09:07, Pavel Stehule wrote:\n>\n> st 2. 6. 2021 v 8:45 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n> 'search_path' is a bit like a global variable in C, that can change the\n> behaviour of the SQL commands executed.\n> It makes unqualified SQL code context-sensitive; you don't know by looking\n> at a piece of code what objects are referred to, you also need to figure\n> out what the active search_path is at this place in the code.\n>\n>\n> sometimes this is wanted feature - some sharding is based on this\n>\n> set search_path = 'custormerx'\n>\n>\n> Oh, interesting, didn't know abou that one. Is that recommended best\n> practise, or more of a hack?\n>\n\nI have not any statistics, but I think it was relatively common until we\nhad good partitioning. I know two big customers from Czech Republic.\n\nSome people use schema as a database - without overhead of system catalogue\nand without necessity of reconnects to other databases.\n\nUsing search_path is very common for applications ported from Oracle.\n\n\n\n> I also think we can never get rid of search_path by default, since so much\n> legacy depend on it.\n> But I think it would be good to provide a way to effectively uninstall the\n> search_path for users who prefer to do so, in databases where it's\n> possible, and where clarity and safety is desired.\n>\n>\n> 'public' schema if used (without ever changing the default 'search_path'),\n> allows creating unqualified database objects, which I think can be useful\n> in at least three situations:\n>\n> 1) when the application is a monolith inside a company, when there is only\n> one version of the database, i.e. 
not having to worry about name collision\n> with other objects in some other version, since the application is hidden\n> in the company and the schema design is not exposed to the public\n>\n> 2) when installing a extension that uses schemas, when wanting the\n> convenience of unqualified access to some functions frequently used,\n> instead of adding its schema to the search_path for convenience, one can\n> instead add wrapper-functions in the 'public' schema. This way, all\n> internal functions in the extension, that are not meant to be executed by\n> users, are still hidden in its schema and won't bother anyone (i.e. can't\n> cause unexpected conflicts). Of course, access can also be controlled via\n> REVOKE EXECUTE ... FROM PUBLIC for such internal functions, which is\n> probably a good idea as well.\n> In a similar way, specific tables in the extension's schema can be made\n> unqualified as well by adding simple views, installed in the public schema,\n> if insisting on unqualified convenience.\n>\n> In conclusion:\n> The main difference is 'public' makes it possible to make *specific*\n> objects unqualified,\n> while 'search_path' makes *all* objects in such schema(s) unqualified.\n>\n>\n> These arguments are valid, but I think so it is not all. If you remove\n> search_path, then the \"public\" schema will be overused.\n>\n>\n> What makes you think that? If a database object is to be accessed\n> unqualified by all users, isn't the 'public' schema a perfect fit for it?\n> How will it be helpful to create different database objects in different\n> schemas, if also adding all such schemas to the search_path so they can be\n> accessed unqualified? In such a scenario you risk unintentionally creating\n> conflicting objects, and whatever schema happened to be first in the\n> search_path will be resolved. 
Seems insecure and messy to me.\n> Much safer to install objects that you want to access unqualified in\n> 'public', and get an error if you try to create a new object with a\n> conflicting name of an existing one.\n>\n\nI think people usually prefer simple solutions - like use for all public\nor use for all schemas.\n\n\n\n> I think we should ask - who can change the search path and how. Now, there\n> are not any limits. I can imagine the situation when search_path can be\n> changed by only some dedicated role - it can be implemented in a security\n> definer function. Or another solution, we can fix the search path to one\n> value, or only a few possibilities.\n>\n> Maybe for your purpose is just enough to introduce syntax for defining all\n> possibilities of search path:\n>\n> search_path = \"public\" # now, just default\n> search_path = [\"public\"] # future - define vector of possible values of\n> search path - in this case, only \"public\" is allowed - and if you want to\n> change it, you should be database owner\n>\n> or there can be hook for changing search_path, and it can be implemented\n> dynamically in extension\n>\n>\n> Not bad ideas. I think they would improve the situation. Maybe it could\n> even be a global immutable constant value, the same for all users, that\n> could only be set upon initdb, similar to how encoding can only be set via\n> initdb.\n>\n> initdb --search_path \"pg_catalog, public, pg_temp\" foobar\n>\n> But perhaps the search_path as an uninstallable extension is a less\n> invasive idea.\n>\n> Looking at the code, this seems to be the commit that introduced\n> search_path back in 2002:\n>\n> I'm not sure how difficult it would be to extract search_path into an\n> extension.\n> Doesn't look to be that much code. 
Here is the initial commit that\n> introduced the concept.\n> But perhaps it's more complex today due to new dependencies.\n>\n> commit 838fe25a9532ab2e9b9b7517fec94e804cf3da81\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Mon Apr 1 03:34:27 2002 +0000\n>\n> Create a new GUC variable search_path to control the namespace search\n> path. The default behavior if no per-user schemas are created is that\n> all users share a 'public' namespace, thus providing behavior backwards\n> compatible with 7.2 and earlier releases. Probably the semantics and\n> default setting will need to be fine-tuned, but this is a start.\n>\n> But search_path is not the only problem. I think it's also a problem\n> objects with the same identifies can be created in both pg_catalog and\n> public. Can we think of a valid reason why it is a good idea to continue to\n> allow that? In what real-life scenario is it needed?\n>\n\nProbably it has not sense, but there is simple implementation - you can use\njust unique index(schema name, object name), and you don't need any other\nlocks and checks\n\nPavel\n\n\n\n> /Joel\n>\n>\n>\n>\n>\n",
"msg_date": "Wed, 2 Jun 2021 18:52:15 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 02:46:08PM +0200, Joel Jacobson wrote:\n> \n> But perhaps the search_path as an uninstallable extension is a less invasive idea.\n\nI don't see that happening any time soon. An extension only adds SQL objects,\nit doesn't impact backend code. You can ship a module with your extension, but\ndropping an extension won't unload the module. And if it were then there's the\n*_preload_libraries that would totally nullify what you want.\n\nOn top of that, it would also mean that the relation resolving could be changed\nby any other extension, which seems like a bad idea.\n\n> But search_path is not the only problem. I think it's also a problem objects\n> with the same identifies can be created in both pg_catalog and public. Can we\n> think of a valid reason why it is a good idea to continue to allow that? In\n> what real-life scenario is it needed?\n\nOne somewhat acceptable use case is to replace catalog access with views to\ngive access to some data e.g. some monitoring users. That's less of a problem\nrecently with the default roles, but still.\n\nThere might be others.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 00:58:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
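The monitoring use case Julien mentions is commonly handled with a view over the catalog granted to a dedicated role; a sketch with illustrative names:

```sql
-- Expose a restricted slice of pg_stat_activity to a monitoring role
-- instead of granting catalog-wide access.
CREATE VIEW public.session_overview AS
    SELECT pid, usename, state, query_start
    FROM pg_catalog.pg_stat_activity;

GRANT SELECT ON public.session_overview TO monitoring_role;
```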
{
"msg_contents": "On 2021-Jun-02, Marko Tiikkaja wrote:\n\n> The use case is: version upgrades. I want to be able to have a search_path\n> of something like 'pg_catalog, compat, public'. That way we can provide\n> compatibility versions of newer functions in the \"compat\" schema, which get\n> taken over by pg_catalog when running on a newer version. That way all the\n> compatibility crap is clearly separated from the stuff that should be in\n> \"public\".\n\nCan't you achieve that with \"ALTER DATABASE .. SET search_path\"?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:20:11 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
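For reference, the database-level default Alvaro suggests is set like this (database and role names are placeholders); it only changes the default for new sessions and can still be overridden by a later SET:

```sql
ALTER DATABASE mydb SET search_path = pg_catalog, compat, public;

-- A per-role variant is also available:
ALTER ROLE app_user IN DATABASE mydb
    SET search_path = pg_catalog, compat, public;
```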
{
"msg_contents": "On Wed, Jun 2, 2021, at 18:36, Marko Tiikkaja wrote:\n> On Wed, Jun 2, 2021 at 3:46 PM Joel Jacobson <joel@compiler.org> wrote:\n>> If a database object is to be accessed unqualified by all users, isn't the 'public' schema a perfect fit for it? How will it be helpful to create different database objects in different schemas, if also adding all such schemas to the search_path so they can be accessed unqualified? In such a scenario you risk unintentionally creating conflicting objects, and whatever schema happened to be first in the search_path will be resolved. Seems insecure and messy to me.\n> \n> Heh. This is actually exactly what I wanted to do.\n> \n> The use case is: version upgrades. I want to be able to have a search_path of something like 'pg_catalog, compat, public'. That way we can provide compatibility versions of newer functions in the \"compat\" schema, which get taken over by pg_catalog when running on a newer version. That way all the compatibility crap is clearly separated from the stuff that should be in \"public\".\n\nThat's a neat trick, probably the best solution in a really old PostgreSQL version, before we had extensions.\n\nBut if running a recent PostgreSQL version, with support for extensions, I think an even cleaner solution\nwould be to package such compatibility versions in a \"compat\" extension, that would just install them into the public schema.\n\nThen, when upgrading, you would just not install the compat extension.\n\nAnd if you wonder what functions in public come from the compat extension, you can use \\dx+.\n\n/Joel",
"msg_date": "Wed, 02 Jun 2021 22:32:13 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 10:20 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2021-Jun-02, Marko Tiikkaja wrote:\n>\n> > The use case is: version upgrades. I want to be able to have a\n> search_path\n> > of something like 'pg_catalog, compat, public'. That way we can provide\n> > compatibility versions of newer functions in the \"compat\" schema, which\n> get\n> > taken over by pg_catalog when running on a newer version. That way all\n> the\n> > compatibility crap is clearly separated from the stuff that should be in\n> > \"public\".\n>\n> Can't you achieve that with \"ALTER DATABASE .. SET search_path\"?\n>\n\nNo, because I have a thousand SECURITY DEFINER functions which have to\noverride search_path or they'd be insecure.\n\n\n.m",
"msg_date": "Thu, 3 Jun 2021 01:50:17 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 11:32 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Wed, Jun 2, 2021, at 18:36, Marko Tiikkaja wrote:\n>\n> The use case is: version upgrades. I want to be able to have a\n> search_path of something like 'pg_catalog, compat, public'. That way we\n> can provide compatibility versions of newer functions in the \"compat\"\n> schema, which get taken over by pg_catalog when running on a newer\n> version. That way all the compatibility crap is clearly separated from the\n> stuff that should be in \"public\".\n>\n>\n> That's a neat trick, probably the best solution in a really old PostgreSQL\n> version, before we had extensions.\n>\n> But if running a recent PostgreSQL version, with support for extensions, I\n> think an even cleaner solution\n> would be to package such compatibility versions in a \"compat\" extension,\n> that would just install them into the public schema.\n>\n\nWriting, verifying and shipping extension upgrade scripts is not pleasant.\nI'd much prefer something that's integrated to the workflow I already have.\n\n\n> And if you wonder what functions in public come from the compat extension,\n> you can use use \\dx+.\n>\n\nThey still show up everywhere when looking at \"public\". So this is only\nslightly better, and a maintenance burden.\n\n\n.m",
"msg_date": "Thu, 3 Jun 2021 01:55:39 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021, at 00:55, Marko Tiikkaja wrote:\n> On Wed, Jun 2, 2021 at 11:32 PM Joel Jacobson <joel@compiler.org> wrote:\n>> __But if running a recent PostgreSQL version, with support for extensions, I think an even cleaner solution\n>> would be to package such compatibility versions in a \"compat\" extension, that would just install them into the public schema.\n> \n> Writing, verifying and shipping extension upgrade scripts is not pleasant. \n\nI agree. Thanks for acknowledging this problem.\n\nI'm experimenting with an idea that I hope can simplify the \"verifying\" part of the problem.\nI hope to have something to show you all soon. \n\n> I'd much prefer something that's integrated to the workflow I already have.\n\nFair point. I guess also the initial switching cost of changing workflow is quite high and difficult to motivate. So even if extension ergonomics are improved, many existing users will not migrate their workflows anyway due to this.\n\n> \n>> And if you wonder what functions in public come from the compat extension, you can use use \\dx+.\n> \n> They still show up everywhere when looking at \"public\". So this is only slightly better, and a maintenance burden.\n\nGood point. I find this annoying as well sometimes.\n\nIt's easy to get a list of all objects for an extension, via \\dx+\n\nBut it's hard to see what objects in a schema are provided by different extensions, via e.g. \\df public.*\n\nWhat about adding a new \"Extension\" column next to \"Schema\" to the relevant commands, such as \\df?\n\n/Joel",
"msg_date": "Thu, 03 Jun 2021 08:13:53 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 8:14, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Thu, Jun 3, 2021, at 00:55, Marko Tiikkaja wrote:\n>\n> On Wed, Jun 2, 2021 at 11:32 PM Joel Jacobson <joel@compiler.org> wrote:\n>\n> But if running a recent PostgreSQL version, with support for extensions, I\n> think an even cleaner solution\n> would be to package such compatibility versions in a \"compat\" extension,\n> that would just install them into the public schema.\n>\n>\n> Writing, verifying and shipping extension upgrade scripts is not pleasant.\n>\n>\n> I agree. Thanks for acknowledging this problem.\n>\n> I'm experimenting with an idea that I hope can simplify the \"verifying\"\n> part of the problem.\n> hope to have something to show you all soon.\n>\n> I'd much prefer something that's integrated to the workflow I already have.\n>\n>\n> Fair point. I guess also the initial switching cost of changing workflow\n> is quite high and difficult to motivate. So even if extensions ergonomics\n> are improved, many existing users will not migrate their workflows anyway\n> due to this.\n>\n>\n>\n> And if you wonder what functions in public come from the compat extension,\n> you can use use \\dx+.\n>\n>\n> They still show up everywhere when looking at \"public\". So this is only\n> slightly better, and a maintenance burden.\n>\n>\n> Good point. I find this annoying as well sometimes.\n>\n> It's easy to get a list of all objects for an extension, via \\dx+\n>\n> But it's hard to see what objects in a schema, that are provided by\n> different extensions, via e.g. \\df public.*\n>\n> What about adding a new \"Extension\" column next to \"Schema\" to the\n> relevant commands, such as \\df?\n>\n\nI think that for \\df+ it can be very useful. I don't think it is important\nenough to be in the short form, but it can be nice in the enhanced form.\n\nPavel\n\n\n> /Joel\n>",
"msg_date": "Thu, 3 Jun 2021 17:51:13 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 9:14 AM Joel Jacobson <joel@compiler.org> wrote:\n\n> On Thu, Jun 3, 2021, at 00:55, Marko Tiikkaja wrote:\n>\n> They still show up everywhere when looking at \"public\". So this is only\n> slightly better, and a maintenance burden.\n>\n>\n> Good point. I find this annoying as well sometimes.\n>\n> It's easy to get a list of all objects for an extension, via \\dx+\n>\n> But it's hard to see what objects in a schema, that are provided by\n> different extensions, via e.g. \\df public.*\n>\n> What about adding a new \"Extension\" column next to \"Schema\" to the\n> relevant commands, such as \\df?\n>\n\nThat's just one part of it. The other part of my original proposal was to\navoid having to SET search_path for all SECURITY DEFINER functions. I\nstill think either being able to lock search_path or the separate prosecdef\nsearch_path is the best option here.\n\n\n.m",
"msg_date": "Thu, 3 Jun 2021 18:54:42 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 17:54, Marko Tiikkaja <marko@joh.to> wrote:\n\n> On Thu, Jun 3, 2021 at 9:14 AM Joel Jacobson <joel@compiler.org> wrote:\n>\n>> On Thu, Jun 3, 2021, at 00:55, Marko Tiikkaja wrote:\n>>\n>> They still show up everywhere when looking at \"public\". So this is only\n>> slightly better, and a maintenance burden.\n>>\n>>\n>> Good point. I find this annoying as well sometimes.\n>>\n>> It's easy to get a list of all objects for an extension, via \\dx+\n>>\n>> But it's hard to see what objects in a schema, that are provided by\n>> different extensions, via e.g. \\df public.*\n>>\n>> What about adding a new \"Extension\" column next to \"Schema\" to the\n>> relevant commands, such as \\df?\n>>\n>\n> That's just one part of it. The other part of my original proposal was to\n> avoid having to SET search_path for all SECURITY DEFINER functions. I\n> still think either being able to lock search_path or the separate prosecdef\n> search_path is the best option here.\n>\n\nI agree that the possibility of locking search_path, or of controlling who\ncan change it and when, would increase security. This should be a\ncore feature. It's maybe a more generic issue - the same functionality can be\nrequired for the work_mem setting, maybe max_parallel_workers_per_gather, and\nother GUCs\n\nRegards\n\nPavel\n\n>\n>\n> .m\n>",
"msg_date": "Thu, 3 Jun 2021 18:03:09 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "\n\n> On Jun 3, 2021, at 9:03 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> I agree so some possibility of locking search_path or possibility to control who and when can change it can increase security. This should be a core feature. It's maybe more generic issue - same functionality can be required for work_mem setting, maybe max_paralel_workers_per_gather, and other GUC\n\nChapman already suggested a mechanism in [1] to allow chaining together additional validators for GUCs.\n\nWhen setting search_path, the check_search_path(char **newval, void **extra, GucSource source) function is invoked. As I understand Chapman's proposal, additional validators could be added to any GUC. You could implement search_path restrictions by defining additional validators that enforce whatever restriction you like.\n\nMarko, does his idea sound workable for your needs? I understood your original proposal as only restricting the value of search_path within security definer functions. This idea would allow you to restrict it everywhere, and not tailored to just that context.\n\n[1] https://www.postgresql.org/message-id/608C9A81.3020006@anastigmatix.net\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 09:30:42 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 7:30 PM Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n> > On Jun 3, 2021, at 9:03 AM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > I agree so some possibility of locking search_path or possibility to\n> control who and when can change it can increase security. This should be a\n> core feature. It's maybe more generic issue - same functionality can be\n> required for work_mem setting, maybe max_paralel_workers_per_gather, and\n> other GUC\n>\n> Chapman already suggested a mechanism in [1] to allow chaining together\n> additional validators for GUCs.\n>\n> When setting search_path, the check_search_path(char **newval, void\n> **extra, GucSource source) function is invoked. As I understand Chapman's\n> proposal, additional validators could be added to any GUC. You could\n> implement search_path restrictions by defining additional validators that\n> enforce whatever restriction you like.\n>\n> Marko, does his idea sound workable for your needs? I understood your\n> original proposal as only restricting the value of search_path within\n> security definer functions. This idea would allow you to restrict it\n> everywhere, and not tailored to just that context.\n>\n\nYeah, that would work for my use case just as well.\n\n\n.m",
"msg_date": "Thu, 3 Jun 2021 19:34:24 +0300",
"msg_from": "Marko Tiikkaja <marko@joh.to>",
"msg_from_op": true,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 18:30, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Jun 3, 2021, at 9:03 AM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > I agree so some possibility of locking search_path or possibility to\n> control who and when can change it can increase security. This should be a\n> core feature. It's maybe more generic issue - same functionality can be\n> required for work_mem setting, maybe max_paralel_workers_per_gather, and\n> other GUC\n>\n> Chapman already suggested a mechanism in [1] to allow chaining together\n> additional validators for GUCs.\n>\n> When setting search_path, the check_search_path(char **newval, void\n> **extra, GucSource source) function is invoked. As I understand Chapman's\n> proposal, additional validators could be added to any GUC. You could\n> implement search_path restrictions by defining additional validators that\n> enforce whatever restriction you like.\n>\n\nThis design looks good for extensions, but I am not sure if it is good for\nusers. Some declarative way, without the necessity to program or install\nan extension, would be nice.\n\nPavel\n\n\n> Marko, does his idea sound workable for your needs? I understood your\n> original proposal as only restricting the value of search_path within\n> security definer functions. This idea would allow you to restrict it\n> everywhere, and not tailored to just that context.\n>\n> [1]\n> https://www.postgresql.org/message-id/608C9A81.3020006@anastigmatix.net\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>",
"msg_date": "Thu, 3 Jun 2021 18:38:23 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "\n\n> On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> This design looks good for extensions, but I am not sure if it is good for users. Some declarative way without necessity to programming or install some extension can be nice.\n\nI agree, though \"some declarative way\" is a bit vague. I've had ideas that perhaps superusers should be able to further restrict the [min,max] fields of int and real GUCs. Since -1 is sometimes used to mean \"disabled\", syntax to allow specifying a set might be necessary, something like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would work, some being required to match and others being required not to match, such as:\n\n\tsearch_path !~ '\\mcustomerx\\M'\n\tsearch_path ~ '^pg_catalog,'\n\nIf we did something like this, we'd need it to play nicely with other filters provided by extensions, because I'm reasonably sure not all filters could be done merely using set notation and regular expression syntax. In fact, I find it hard to convince myself that set notation and regular expression syntax would even be useful in a large enough number of cases to be worth implementing. What are your thoughts on that?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 11:25:13 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "I thought everybody was already doing this, but maybe not. I put the\nfollowing in all my function definitions:\n\n    SET search_path FROM CURRENT\n\n(with the exception of a very few functions which explicitly need to use\nthe caller's search path)\n\nIt seems to me that if this was the default (note: I'm totally ignoring\nbackward compatibility issues for now), then most of these issues wouldn't\nexist. My schema creation scripts start with an appropriate search path\nsetting and that value then gets built into every function they create.\n\nRelated question: how can function compilation work when the behaviour\ndepends on the search path of the caller? In other words, the behaviour of\nthe function can be totally different on each call. Are there any popular\nprogramming environments in which the behaviour of a called function\ndepends on the caller's environment (actually yes: shell scripting, with\n$PATH especially; but besides that and stored procedures)?\n\nI also want to mention that I consider any suggestion to eliminate the\nsearch_path concept as a complete non-starter. It would be no different\nfrom proposing that the next version of a programming language eliminate\n(or stop using) the module system. If I could make it happen easily, I\nwould go in the other direction and allow schemas to be hierarchical (note:\ntotally ignoring all sorts of very important choices which are more than\njust details about how this should work). I would like to be able to have\nan extension or subsystem exist in a single schema, with its objects broken\nup into schemas within the schema. Same reason as most languages have\nhierarchical module systems.\n\nOn Thu, 3 Jun 2021 at 14:25, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > This design looks good for extensions, but I am not sure if it is good\n> for users. Some declarative way without necessity to programming or install\n> some extension can be nice.\n>\n> I agree, though \"some declarative way\" is a bit vague. I've had ideas\n> that perhaps superusers should be able to further restrict the [min,max]\n> fields of int and real GUCs. Since -1 is sometimes used to mean\n> \"disabled\", syntax to allow specifying a set might be necessary, something\n> like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would\n> work, some being required to match and others being required not to match,\n> such as:\n>\n>         search_path !~ '\\mcustomerx\\M'\n>         search_path ~ '^pg_catalog,'\n>\n> If we did something like this, we'd need it to play nicely with other\n> filters provided by extensions, because I'm reasonably sure not all filters\n> could be done merely using set notation and regular expression syntax. In\n> fact, I find it hard to convince myself that set notation and regular\n> expression syntax would even be useful in a large enough number of cases to\n> be worth implementing. What are your thought on that?\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n>\n>",
"msg_date": "Thu, 3 Jun 2021 14:42:04 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 20:25, Mark Dilger <mark.dilger@enterprisedb.com>\nwrote:\n\n>\n>\n> > On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > This design looks good for extensions, but I am not sure if it is good\n> for users. Some declarative way without necessity to programming or install\n> some extension can be nice.\n>\n> I agree, though \"some declarative way\" is a bit vague. I've had ideas\n> that perhaps superusers should be able to further restrict the [min,max]\n> fields of int and real GUCs. Since -1 is sometimes used to mean\n> \"disabled\", syntax to allow specifying a set might be necessary, something\n> like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would\n> work, some being required to match and others being required not to match,\n> such as:\n>\n>         search_path !~ '\\mcustomerx\\M'\n>         search_path ~ '^pg_catalog,'\n>\n> If we did something like this, we'd need it to play nicely with other\n> filters provided by extensions, because I'm reasonably sure not all filters\n> could be done merely using set notation and regular expression syntax. In\n> fact, I find it hard to convince myself that set notation and regular\n> expression syntax would even be useful in a large enough number of cases to\n> be worth implementing. What are your thought on that?\n>\n\nI don't think that for immutable strings we need regular expressions. Maybe\nuse some special keyword\n\nsearch_path only \"pg_catalog\"\n\n\n\n\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>",
"msg_date": "Thu, 3 Jun 2021 21:06:13 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "\n\n> On Jun 3, 2021, at 12:06 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> \n> \n> čt 3. 6. 2021 v 20:25 odesílatel Mark Dilger <mark.dilger@enterprisedb.com> napsal:\n> \n> \n> > On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > \n> > This design looks good for extensions, but I am not sure if it is good for users. Some declarative way without necessity to programming or install some extension can be nice.\n> \n> I agree, though \"some declarative way\" is a bit vague. I've had ideas that perhaps superusers should be able to further restrict the [min,max] fields of int and real GUCs. Since -1 is sometimes used to mean \"disabled\", syntax to allow specifying a set might be necessary, something like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would work, some being required to match and others being required not to match, such as:\n> \n> search_path !~ '\\mcustomerx\\M'\n> search_path ~ '^pg_catalog,'\n> \n> If we did something like this, we'd need it to play nicely with other filters provided by extensions, because I'm reasonably sure not all filters could be done merely using set notation and regular expression syntax. In fact, I find it hard to convince myself that set notation and regular expression syntax would even be useful in a large enough number of cases to be worth implementing. What are your thought on that?\n> \n> I don't think so for immutable strings we need regular expressions. Maybe use some special keyword\n> \n> search_path only \"pg_catalog\" \n\nI think we're trying to solve different problems. I'm trying to allow non-superusers to set GUCs while putting constraints on what values they choose. You appear to be trying to revoke the ability to set a GUC by forcing it to only ever have a single value.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 12:11:25 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "čt 3. 6. 2021 v 21:11 odesílatel Mark Dilger <mark.dilger@enterprisedb.com>\nnapsal:\n\n>\n>\n> > On Jun 3, 2021, at 12:06 PM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> >\n> >\n> > čt 3. 6. 2021 v 20:25 odesílatel Mark Dilger <\n> mark.dilger@enterprisedb.com> napsal:\n> >\n> >\n> > > On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > >\n> > > This design looks good for extensions, but I am not sure if it is good\n> for users. Some declarative way without necessity to programming or install\n> some extension can be nice.\n> >\n> > I agree, though \"some declarative way\" is a bit vague. I've had ideas\n> that perhaps superusers should be able to further restrict the [min,max]\n> fields of int and real GUCs. Since -1 is sometimes used to mean\n> \"disabled\", syntax to allow specifying a set might be necessary, something\n> like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would\n> work, some being required to match and others being required not to match,\n> such as:\n> >\n> > search_path !~ '\\mcustomerx\\M'\n> > search_path ~ '^pg_catalog,'\n> >\n> > If we did something like this, we'd need it to play nicely with other\n> filters provided by extensions, because I'm reasonably sure not all filters\n> could be done merely using set notation and regular expression syntax. In\n> fact, I find it hard to convince myself that set notation and regular\n> expression syntax would even be useful in a large enough number of cases to\n> be worth implementing. What are your thought on that?\n> >\n> > I don't think so for immutable strings we need regular expressions.\n> Maybe use some special keyword\n> >\n> > search_path only \"pg_catalog\"\n>\n> I think we're trying to solve different problems. I'm trying to allow\n> non-superusers to set GUCs while putting constraints on what values they\n> choose. 
You appear to be trying to revoke the ability to set a GUC by\n> forcing it to only ever have a single value.\n>\n\nMy proposal doesn't mean the search_path cannot be changed - it limits\npossible values like your patch. Maybe we can get inspiration from\npg_hba.conf\n\n\n>\n> —\n> Mark Dilger\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n>\n\nčt 3. 6. 2021 v 21:11 odesílatel Mark Dilger <mark.dilger@enterprisedb.com> napsal:\n\n> On Jun 3, 2021, at 12:06 PM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> \n> \n> čt 3. 6. 2021 v 20:25 odesílatel Mark Dilger <mark.dilger@enterprisedb.com> napsal:\n> \n> \n> > On Jun 3, 2021, at 9:38 AM, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> > \n> > This design looks good for extensions, but I am not sure if it is good for users. Some declarative way without necessity to programming or install some extension can be nice.\n> \n> I agree, though \"some declarative way\" is a bit vague. I've had ideas that perhaps superusers should be able to further restrict the [min,max] fields of int and real GUCs. Since -1 is sometimes used to mean \"disabled\", syntax to allow specifying a set might be necessary, something like [-1, 60..600]. For text and enum GUCs, perhaps a set of regexps would work, some being required to match and others being required not to match, such as:\n> \n> search_path !~ '\\mcustomerx\\M'\n> search_path ~ '^pg_catalog,'\n> \n> If we did something like this, we'd need it to play nicely with other filters provided by extensions, because I'm reasonably sure not all filters could be done merely using set notation and regular expression syntax. In fact, I find it hard to convince myself that set notation and regular expression syntax would even be useful in a large enough number of cases to be worth implementing. What are your thought on that?\n> \n> I don't think so for immutable strings we need regular expressions. 
Maybe use some special keyword\n> \n> search_path only \"pg_catalog\" \n\nI think we're trying to solve different problems. I'm trying to allow non-superusers to set GUCs while putting constraints on what values they choose. You appear to be trying to revoke the ability to set a GUC by forcing it to only ever have a single value.My proposal doesn't mean the search_path cannot be changed - it limits possible values like your patch. Maybe we can get inspiration from pg_hba.conf \n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 3 Jun 2021 21:24:31 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Thu, Jun 3, 2021, at 20:42, Isaac Morland wrote:\n> I also want to mention that I consider any suggestion to eliminate the search_path concept as a complete non-starter.\n> \n> It would be no different from proposing that the next version of a programming language eliminate (or stop using) the module system.\n\nI think the suggestion of making it possible (but not a default) to eliminate search_path,\nis very similar to C compiler flags that turn specific language features into hard errors, such as \"-Werror=vla\".\n\nIf you know your C code base doesn't contain vla, you can compile with that compiler flag.\n\nIf you know your SQL code base doesn't makes use of search_path, nor any installed EXTENSIONs,\nI'm suggesting it would be nice to have a way to effectively ensure that stays the case.\n\nI realise \"eliminate\" is not really necessary, it would suffice to just allow setting a a sane default per database, and make that value immutable, then all data structures and code using wouldn't need to change, one would then only need to change the code that can mutate search_path, to prevent that from happening.\n\n> If I could make it happen easily, I would go in the other direction and allow schemas to be hierarchical (note: totally ignoring all sorts of very important choices which are more than just details about how this should work). I would like to be able to have an extension or subsystem exist in a single schema, with its objects broken up into schemas within the schema. Same reason as most languages have hierarchical module systems.\n\nI note we already have a hierarchical extension system; EXTENSIONs can specify their dependencies (parents) via \"requires\" in the .control file. 
The entire hierarchical tree can then can be created/dropped using CASCADE.\n\nI can possibly see some value in hierarchical schemas too, that is completely unrelated to my distaste for search_path.\n\nI never felt I needed more than one namespace level, but I've only worked in companies with <1000 employees, so I can imagine it would be useful if the data needs for >100k employees needs to be organised in one and the same database. Is this how large companies organise their data? Or do they instead break up things into multiple databases?\nDo we have some example of an extension that is complex enough where it would be good to organise it into multiple schema levels?\n\nIf reducing complexity by not using search_path, the complexity budget might afford hierarchical schemas, so I think the two ideas seem very compatible.\n\n/Joel\nOn Thu, Jun 3, 2021, at 20:42, Isaac Morland wrote:I also want to mention that I consider any suggestion to eliminate the search_path concept as a complete non-starter. 
It would be no different from proposing that the next version of a programming language eliminate (or stop using) the module system.I think the suggestion of making it possible (but not a default) to eliminate search_path,is very similar to C compiler flags that turn specific language features into hard errors, such as \"-Werror=vla\".If you know your C code base doesn't contain vla, you can compile with that compiler flag.If you know your SQL code base doesn't makes use of search_path, nor any installed EXTENSIONs,I'm suggesting it would be nice to have a way to effectively ensure that stays the case.I realise \"eliminate\" is not really necessary, it would suffice to just allow setting a a sane default per database, and make that value immutable, then all data structures and code using wouldn't need to change, one would then only need to change the code that can mutate search_path, to prevent that from happening.If I could make it happen easily, I would go in the other direction and allow schemas to be hierarchical (note: totally ignoring all sorts of very important choices which are more than just details about how this should work). I would like to be able to have an extension or subsystem exist in a single schema, with its objects broken up into schemas within the schema. Same reason as most languages have hierarchical module systems.I note we already have a hierarchical extension system; EXTENSIONs can specify their dependencies (parents) via \"requires\" in the .control file. The entire hierarchical tree can then can be created/dropped using CASCADE.I can possibly see some value in hierarchical schemas too, that is completely unrelated to my distaste for search_path.I never felt I needed more than one namespace level, but I've only worked in companies with <1000 employees, so I can imagine it would be useful if the data needs for >100k employees needs to be organised in one and the same database. Is this how large companies organise their data? 
Or do they instead break up things into multiple databases?Do we have some example of an extension that is complex enough where it would be good to organise it into multiple schema levels?If reducing complexity by not using search_path, the complexity budget might afford hierarchical schemas, so I think the two ideas seem very compatible./Joel",
"msg_date": "Fri, 04 Jun 2021 08:37:52 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "Hi\n\n\n>\n> I realise \"eliminate\" is not really necessary, it would suffice to just\n> allow setting a a sane default per database, and make that value immutable,\n> then all data structures and code using wouldn't need to change, one would\n> then only need to change the code that can mutate search_path, to prevent\n> that from happening.\n>\n\nI understand that for some specific cases the search_path can be\nproblematic. On the other hand, the SQL database supports interactive work,\nand then the search_path can save a lot of monkey work.\n\nIt is the same as using the command line without the possibility to\ncustomize the PATH variable. The advantages and disadvantages are exactly\nthe same.\n\nRegards\n\nPavel\n\nHiI realise \"eliminate\" is not really necessary, it would suffice to just allow setting a a sane default per database, and make that value immutable, then all data structures and code using wouldn't need to change, one would then only need to change the code that can mutate search_path, to prevent that from happening.I understand that for some specific cases the search_path can be problematic. On the other hand, the SQL database supports interactive work, and then the search_path can save a lot of monkey work. It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.RegardsPavel",
"msg_date": "Fri, 4 Jun 2021 08:58:15 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:\n> It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.\n\nThe reason why we even have PATH in the *nix world,\nis not because they *wanted* to separate things (like we want with schemas or extensions),\nbut because they *needed* to, because /bin was overflowed:\n\n\"The UNIX shell gave up the Multics idea of a search path and looked for program names that weren’t\nfile names in just one place, /bin. Then in v3 /bin overflowed the small (256K), fast fixed-head drive.\nThus was /usr/bin born, and the idea of a search path reinstated.\" [1]\n\n[1] https://www.cs.dartmouth.edu/~doug/reader.pdf\n\n/Joel\nOn Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.The reason why we even have PATH in the *nix world,is not because they *wanted* to separate things (like we want with schemas or extensions),but because they *needed* to, because /bin was overflowed:\"The UNIX shell gave up the Multics idea of a search path and looked for program names that weren’tfile names in just one place, /bin. Then in v3 /bin overflowed the small (256K), fast fixed-head drive.Thus was /usr/bin born, and the idea of a search path reinstated.\" [1][1] https://www.cs.dartmouth.edu/~doug/reader.pdf/Joel",
"msg_date": "Fri, 04 Jun 2021 11:17:20 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "pá 4. 6. 2021 v 11:17 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> On Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:\n>\n> It is the same as using the command line without the possibility to\n> customize the PATH variable. The advantages and disadvantages are exactly\n> the same.\n>\n>\n> The reason why we even have PATH in the *nix world,\n> is not because they *wanted* to separate things (like we want with schemas\n> or extensions),\n> but because they *needed* to, because /bin was overflowed:\n>\n> \"The UNIX shell gave up the Multics idea of a search path and looked for\n> program names that weren’t\n> file names in just one place, /bin. Then in v3 /bin overflowed the small\n> (256K), fast fixed-head drive.\n> Thus was /usr/bin born, and the idea of a search path reinstated.\" [1]\n>\n> [1] https://www.cs.dartmouth.edu/~doug/reader.pdf\n>\n>\nIt's funny - sometimes too restrictive limits are reason for design of\nlonger living concepts\n\nPavel\n\n\n/Joel\n>\n\npá 4. 6. 2021 v 11:17 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.The reason why we even have PATH in the *nix world,is not because they *wanted* to separate things (like we want with schemas or extensions),but because they *needed* to, because /bin was overflowed:\"The UNIX shell gave up the Multics idea of a search path and looked for program names that weren’tfile names in just one place, /bin. Then in v3 /bin overflowed the small (256K), fast fixed-head drive.Thus was /usr/bin born, and the idea of a search path reinstated.\" [1][1] https://www.cs.dartmouth.edu/~doug/reader.pdfIt's funny - sometimes too restrictive limits are reason for design of longer living conceptsPavel/Joel",
"msg_date": "Fri, 4 Jun 2021 11:45:59 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Fri, Jun 4, 2021, at 11:45, Pavel Stehule wrote:\n> \n> \n> pá 4. 6. 2021 v 11:17 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> __\n>> On Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:\n>>> It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.\n>> \n>> The reason why we even have PATH in the *nix world,\n>> is not because they *wanted* to separate things (like we want with schemas or extensions),\n>> but because they *needed* to, because /bin was overflowed:\n>> \n>> \"The UNIX shell gave up the Multics idea of a search path and looked for program names that weren’t\n>> file names in just one place, /bin. Then in v3 /bin overflowed the small (256K), fast fixed-head drive.\n>> Thus was /usr/bin born, and the idea of a search path reinstated.\" [1]\n>> \n>> [1] https://www.cs.dartmouth.edu/~doug/reader.pdf\n>> \n> \n> It's funny - sometimes too restrictive limits are reason for design of longer living concepts\n> \n> Pavel\n\nYes, it’s funny, I bet there is some English word for this phenomenon?\n\nI just read an article discussing similar problems in *nix and found the extract below very interesting.\n\nMaybe there are takeaways from this article that can inspire us, when thinking about PostgreSQL The Next 50 Years.\n\n”Unix Shell Programming: The Next 50 Years\n…\n2 THE GOOD, THE BAD, AND THE UGLY\n…\n2.2 The Bad\n…\nU4: No support for contemporary deployments. The shell’s core abstractions were designed to facilitate orchestra- tion, management, and processing on a single machine. How- ever, the overabundance of non-solutions—e.g., pssh, GNU parallel, web interfaces—for these classes of computation on today’s distributed environments indicates an impedance mismatch between what the shell provides and the needs of these environments. 
This mismatch is caused by shell programs being pervasively side-effectful, and exacerbated by classic single-system image issues, where configuration scripts, program and library paths, and environment vari- ables are configured ad hoc. The composition primitives do not compose at scale.”\n\nhttps://sigops.org/s/conferences/hotos/2021/papers/hotos21-s06-greenberg.pdf\n\n/Joel\n\nOn Fri, Jun 4, 2021, at 11:45, Pavel Stehule wrote:pá 4. 6. 2021 v 11:17 odesílatel Joel Jacobson <joel@compiler.org> napsal:On Fri, Jun 4, 2021, at 08:58, Pavel Stehule wrote:It is the same as using the command line without the possibility to customize the PATH variable. The advantages and disadvantages are exactly the same.The reason why we even have PATH in the *nix world,is not because they *wanted* to separate things (like we want with schemas or extensions),but because they *needed* to, because /bin was overflowed:\"The UNIX shell gave up the Multics idea of a search path and looked for program names that weren’tfile names in just one place, /bin. Then in v3 /bin overflowed the small (256K), fast fixed-head drive.Thus was /usr/bin born, and the idea of a search path reinstated.\" [1][1] https://www.cs.dartmouth.edu/~doug/reader.pdfIt's funny - sometimes too restrictive limits are reason for design of longer living conceptsPavelYes, it’s funny, I bet there is some English word for this phenomenon?I just read an article discussing similar problems in *nix and found the extract below very interesting.Maybe there are takeaways from this article that can inspire us, when thinking about PostgreSQL The Next 50 Years.”Unix Shell Programming: The Next 50 Years…2 THE GOOD, THE BAD, AND THE UGLY…2.2 The Bad…U4: No support for contemporary deployments. The shell’s core abstractions were designed to facilitate orchestra- tion, management, and processing on a single machine. 
How- ever, the overabundance of non-solutions—e.g., pssh, GNU parallel, web interfaces—for these classes of computation on today’s distributed environments indicates an impedance mismatch between what the shell provides and the needs of these environments. This mismatch is caused by shell programs being pervasively side-effectful, and exacerbated by classic single-system image issues, where configuration scripts, program and library paths, and environment vari- ables are configured ad hoc. The composition primitives do not compose at scale.”https://sigops.org/s/conferences/hotos/2021/papers/hotos21-s06-greenberg.pdf/Joel",
"msg_date": "Fri, 04 Jun 2021 15:18:03 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "Maybe this could work:\nCREATE SCHEMA schema_name UNQUALIFIED;\nWhich would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.\n/Joel\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nMaybe this could work:CREATE SCHEMA schema_name UNQUALIFIED;Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects./Joel",
"msg_date": "Fri, 04 Jun 2021 17:42:29 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n\n> Maybe this could work:\n> CREATE SCHEMA schema_name UNQUALIFIED;\n> Which would explicitly make all the objects created in the schema\n> accessible unqualified, but also enforce there are no conflicts with other\n> objects in existence in all unqualified schemas, upon the creation of new\n> objects.\n>\n\nYes, it can work. I am not sure if \"unqualified\" is the best keyword, but\nthe idea is workable.\n\nRegards\n\nPavel\n\n/Joel\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n\npá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:Maybe this could work:CREATE SCHEMA schema_name UNQUALIFIED;Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but the idea is workable.RegardsPavel /Joel",
"msg_date": "Fri, 4 Jun 2021 18:03:05 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Fri, Jun 4, 2021, at 18:03, Pavel Stehule wrote:\n> \n> \n> pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>> __\n>> Maybe this could work:\n>> CREATE SCHEMA schema_name UNQUALIFIED;\n>> Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.\n> \n> Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but the idea is workable.\n\nSo maybe a combination of some kind of GUC to restrict search_path in some way,\nand a settable/unsettable new boolean pg_namespace column\nto control if the schema should be accessible unqualified or not?\n\nIf we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?\nOr will that be confusing since \"PUBLIC\" is also a role_specification?\n\nI think unqualified=true should mean a schema doesn't need to be part of the search_path, to be accessible unqualified.\n\nThis means, \"pg_catalog\" and \"public\", might have unqualified=false, as their default values.\n\nSetting unqualified=true for \"pg_catalog\" and \"public\" would enforce there are no overlapping objects between the two.\n\nMarko, in your use-case with the \"compat\" schema, do you think it would work to just do\nALTER SCHEMA compat DROP UNQUALIFIED (or whatever the command should be)\nwhen upgrading to the new major version, where compat isn't necessary,\nsimilar to changing the GUC to not include \"compat\"?\n\nIMO, the biggest disadvantage with this idea is that it undeniably increases complexity of name resolution further,\nsince it's then yet another thing to take into account. But maybe it's worth it, if the GUC to restrict search_path,\ncan effectively reduce complexity, when used in combination with this other proposed feature.\n\nI think it's a really difficult question. 
I strongly feel something should be done in this area to improve the situation,\nbut it's far from obvious what the right thing to do is.\n\n/Joel\nOn Fri, Jun 4, 2021, at 18:03, Pavel Stehule wrote:pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:Maybe this could work:CREATE SCHEMA schema_name UNQUALIFIED;Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but the idea is workable.So maybe a combination of some kind of GUC to restrict search_path in some way,and a settable/unsettable new boolean pg_namespace columnto control if the schema should be accessible unqualified or not?If we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?Or will that be confusing since \"PUBLIC\" is also a role_specification?I think unqualified=true should mean a schema doesn't need to be part of the search_path, to be accessible unqualified.This means, \"pg_catalog\" and \"public\", might have unqualified=false, as their default values.Setting unqualified=true for \"pg_catalog\" and \"public\" would enforce there are no overlapping objects between the two.Marko, in your use-case with the \"compat\" schema, do you think it would work to just doALTER SCHEMA compat DROP UNQUALIFIED (or whatever the command should be)when upgrading to the new major version, where compat isn't necessary,similar to changing the GUC to not include \"compat\"?IMO, the biggest disadvantage with this idea is that it undeniably increases complexity of name resolution further,since it's then yet another thing to take into account. But maybe it's worth it, if the GUC to restrict search_path,can effectively reduce complexity, when used in combination with this other proposed feature.I think it's a really difficult question. 
I strongly feel something should be done in this area to improve the situation,but it's far from obvious what the right thing to do is./Joel",
"msg_date": "Mon, 07 Jun 2021 22:54:34 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 9:03 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:\n>\n>> Maybe this could work:\n>> CREATE SCHEMA schema_name UNQUALIFIED;\n>> Which would explicitly make all the objects created in the schema\n>> accessible unqualified, but also enforce there are no conflicts with other\n>> objects in existence in all unqualified schemas, upon the creation of new\n>> objects.\n>>\n>\n> Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but\n> the idea is workable.\n>\n\nSounds like a job for an event trigger listening to CREATE/ALTER schema.\n\nDavid J.\n\nOn Fri, Jun 4, 2021 at 9:03 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:Maybe this could work:CREATE SCHEMA schema_name UNQUALIFIED;Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but the idea is workable.Sounds like a job for an event trigger listening to CREATE/ALTER schema.David J.",
"msg_date": "Mon, 7 Jun 2021 14:09:18 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 2:09 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Fri, Jun 4, 2021 at 9:03 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org>\n>> napsal:\n>>\n>>> Maybe this could work:\n>>> CREATE SCHEMA schema_name UNQUALIFIED;\n>>> Which would explicitly make all the objects created in the schema\n>>> accessible unqualified, but also enforce there are no conflicts with other\n>>> objects in existence in all unqualified schemas, upon the creation of new\n>>> objects.\n>>>\n>>\n>> Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but\n>> the idea is workable.\n>>\n>\n> Sounds like a job for an event trigger listening to CREATE/ALTER schema.\n>\n\nNever mind...I got mixed up a bit on what this all is purporting to do. My\nintent was to try and solve the problem with existing features (event\ntriggers) instead of inventing new ones, which is still desirable.\n\nDavid J.\n\nOn Mon, Jun 7, 2021 at 2:09 PM David G. Johnston <david.g.johnston@gmail.com> wrote:On Fri, Jun 4, 2021 at 9:03 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:pá 4. 6. 2021 v 17:43 odesílatel Joel Jacobson <joel@compiler.org> napsal:Maybe this could work:CREATE SCHEMA schema_name UNQUALIFIED;Which would explicitly make all the objects created in the schema accessible unqualified, but also enforce there are no conflicts with other objects in existence in all unqualified schemas, upon the creation of new objects.Yes, it can work. I am not sure if \"unqualified\" is the best keyword, but the idea is workable.Sounds like a job for an event trigger listening to CREATE/ALTER schema.Never mind...I got mixed up a bit on what this all is purporting to do. My intent was to try and solve the problem with existing features (event triggers) instead of inventing new ones, which is still desirable.David J.",
"msg_date": "Mon, 7 Jun 2021 14:22:36 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 1:55 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> If we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?\n> Or will that be confusing since \"PUBLIC\" is also a role_specification?\n>\n>\nFor me the concept resembles explicitly denoting certain schemas as being\nsimple tags, while the actual \"namespace\" is the GLOBAL namespace. Today\nthere is no global namespace, all schemas generate their own individual\nnamespace in addition to \"tagging\" their objects with a textual label.\n\nAvoiding \"public\" is highly desirable.\n\nTo access a global object you should be able to still specify its schema\ntag. Unqualified means \"use search_path\"; and \"use search_path\" includes\nglobal. But there is a truth table waiting to be created to detail what\ncombinations result in errors (including where those errors occur - runtime\nor creation time).\n\nDavid J.\n\nOn Mon, Jun 7, 2021 at 1:55 PM Joel Jacobson <joel@compiler.org> wrote:If we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?Or will that be confusing since \"PUBLIC\" is also a role_specification?For me the concept resembles explicitly denoting certain schemas as being simple tags, while the actual \"namespace\" is the GLOBAL namespace. Today there is no global namespace, all schemas generate their own individual namespace in addition to \"tagging\" their objects with a textual label.Avoiding \"public\" is highly desirable.To access a global object you should be able to still specify its schema tag. Unqualified means \"use search_path\"; and \"use search_path\" includes global. But there is a truth table waiting to be created to detail what combinations result in errors (including where those errors occur - runtime or creation time).David J.",
"msg_date": "Mon, 7 Jun 2021 14:26:27 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
},
{
"msg_contents": "On Mon, Jun 7, 2021, at 23:26, David G. Johnston wrote:\n> On Mon, Jun 7, 2021 at 1:55 PM Joel Jacobson <joel@compiler.org> wrote:\n>> __\n>> If we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?\n>> Or will that be confusing since \"PUBLIC\" is also a role_specification?\n>> \n> \n> For me the concept resembles explicitly denoting certain schemas as being simple tags, while the actual \"namespace\" is the GLOBAL namespace. Today there is no global namespace, all schemas generate their own individual namespace in addition to \"tagging\" their objects with a textual label.\n> \n> \n> Avoiding \"public\" is highly desirable.\n> \n> To access a global object you should be able to still specify its schema tag. Unqualified means \"use search_path\"; and \"use search_path\" includes global. But there is a truth table waiting to be created to detail what combinations result in errors (including where those errors occur - runtime or creation time).\n\n+1\n\n/Joel\nOn Mon, Jun 7, 2021, at 23:26, David G. Johnston wrote:On Mon, Jun 7, 2021 at 1:55 PM Joel Jacobson <joel@compiler.org> wrote:If we don't like \"UNQUALIFIED\" as a keyword, maybe we could reuse \"PUBLIC\"?Or will that be confusing since \"PUBLIC\" is also a role_specification?For me the concept resembles explicitly denoting certain schemas as being simple tags, while the actual \"namespace\" is the GLOBAL namespace. Today there is no global namespace, all schemas generate their own individual namespace in addition to \"tagging\" their objects with a textual label.Avoiding \"public\" is highly desirable.To access a global object you should be able to still specify its schema tag. Unqualified means \"use search_path\"; and \"use search_path\" includes global. But there is a truth table waiting to be created to detail what combinations result in errors (including where those errors occur - runtime or creation time).+1/Joel",
"msg_date": "Tue, 08 Jun 2021 04:48:20 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: security_definer_search_path GUC"
}
] |
[
{
"msg_contents": "Hi\n\nI think I just found a bug in logical replication. Data couldn't be synchronized while updating toast data. Could anyone take a look at it?\n\nHere is the steps to proceduce the BUG:\n------publisher------\nCREATE TABLE toasted_key (\n id serial,\n toasted_key text PRIMARY KEY,\n toasted_col1 text,\n toasted_col2 text\n);\nCREATE PUBLICATION pub FOR TABLE toasted_key;\n\n------subscriber------\nCREATE TABLE toasted_key (\n id serial,\n toasted_key text PRIMARY KEY,\n toasted_col1 text,\n toasted_col2 text\n);\nCREATE SUBSCRIPTION sub CONNECTION 'dbname=postgres' PUBLICATION pub;\n\n------publisher------\nALTER TABLE toasted_key ALTER COLUMN toasted_key SET STORAGE EXTERNAL;\nALTER TABLE toasted_key ALTER COLUMN toasted_col1 SET STORAGE EXTERNAL;\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('1234567890', 200), repeat('9876543210', 200));\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\n\n------subscriber------\nSELECT count(*) FROM toasted_key WHERE toasted_col2 = toasted_col1;\n\nThe above command is supposed to output \"count = 1\" but in fact it outputs \"count = 0\" which means UPDATE operation failed at the subscriber. Right?\n\nI debugged and found the subscriber could receive message from publisher, and in apply_handle_update_internal function, it invoked FindReplTupleInLocalRel function but failed to find a tuple.\nFYI, I also tested DELETE operation(DELETE FROM toasted_key;), which also invoked FindReplTupleInLocalRel function, and the result is ok.\n\nRegards\nTang\n\n\n",
"msg_date": "Fri, 28 May 2021 05:16:12 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "[BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Friday, May 28, 2021 2:16 PM, tanghy.fnst@fujitsu.com wrote: \n>I think I just found a bug in logical replication. Data couldn't be synchronized while updating toast data. \n>Could anyone take a look at it?\n\nFYI. The problem also occurs in PG-13. I will try to check from which version it got introduced.\n\nRegards,\nTang\n\n\n\n",
"msg_date": "Fri, 28 May 2021 06:01:41 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Friday, May 28, 2021 3:02 PM, tanghy.fnst@fujitsu.com wrote: \n> FYI. The problem also occurs in PG-13. I will try to check from which version it\n> got introduced.\n\nI reproduced it in PG-10,11,12,13.\nI think the problem has been existing since Logical replication introduced in PG-10.\n\nRegards\nTang\n\n\n",
"msg_date": "Fri, 28 May 2021 07:01:06 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, May 28, 2021 at 12:31 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Friday, May 28, 2021 3:02 PM, tanghy.fnst@fujitsu.com wrote:\n> > FYI. The problem also occurs in PG-13. I will try to check from which version it\n> > got introduced.\n>\n> I reproduced it in PG-10,11,12,13.\n> I think the problem has been existing since Logical replication introduced in PG-10.\n\nSeems you did not set the replica identity for updating the tuple.\nTry this before updating, and it should work.\n\nALTER TABLE toasted_key REPLICA IDENTITY USING INDEX toasted_key_pkey;\n\nor\n\nALTER TABLE toasted_key REPLICA IDENTITY FULL.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 May 2021 16:21:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Friday, May 28, 2021 6:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> Seems you did not set the replica identity for updating the tuple.\r\n> Try this before updating, and it should work.\r\n\r\nThanks for your reply. I tried it.\r\n\r\n> ALTER TABLE toasted_key REPLICA IDENTITY USING INDEX toasted_key_pkey;\r\n\r\nThis didn't work.\r\n\r\n> or\r\n> \r\n> ALTER TABLE toasted_key REPLICA IDENTITY FULL.\r\n\r\nIt worked.\r\n\r\nAnd I noticed if the length of PRIMARY KEY (toasted_key) is short, data could be synchronized successfully with default replica identity. \r\nCould you tell me why we need to set replica identity?\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Mon, 31 May 2021 02:34:21 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, May 31, 2021 at 8:04 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Friday, May 28, 2021 6:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Seems you did not set the replica identity for updating the tuple.\n> > Try this before updating, and it should work.\n>\n> Thanks for your reply. I tried it.\n>\n> > ALTER TABLE toasted_key REPLICA IDENTITY USING INDEX toasted_key_pkey;\n>\n> This didn't work.\n>\n> > or\n> >\n> > ALTER TABLE toasted_key REPLICA IDENTITY FULL.\n>\n> It worked.\n>\n> And I noticed if the length of PRIMARY KEY (toasted_key) is short, data could be synchronized successfully with default replica identity.\n> Could you tell me why we need to set replica identity?\n\nLooks like some problem if the replica identity is an index and the\nvalue is stored externally, I will debug this and let you know.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 May 2021 12:20:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, May 31, 2021 at 12:20 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 31, 2021 at 8:04 AM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > On Friday, May 28, 2021 6:51 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > Seems you did not set the replica identity for updating the tuple.\n> > > Try this before updating, and it should work.\n> >\n> > Thanks for your reply. I tried it.\n> >\n> > > ALTER TABLE toasted_key REPLICA IDENTITY USING INDEX toasted_key_pkey;\n> >\n> > This didn't work.\n> >\n> > > or\n> > >\n> > > ALTER TABLE toasted_key REPLICA IDENTITY FULL.\n> >\n> > It worked.\n> >\n> > And I noticed if the length of PRIMARY KEY (toasted_key) is short, data could be synchronized successfully with default replica identity.\n> > Could you tell me why we need to set replica identity?\n>\n> Looks like some problem if the replica identity is an index and the\n> value is stored externally, I will debug this and let you know.\n\n\nThe problem is if the key attribute is not changed we don't log it as\nit should get logged along with the updated tuple, but if it is\nexternally stored then the complete key will never be logged because\nthere is no log from the toast table. For fixing this if the key is\nexternally stored then always log that.\n\nPlease test with the attached patch.\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 31 May 2021 14:41:46 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, May 31, 2021 5:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> The problem is if the key attribute is not changed we don't log it as\r\n> it should get logged along with the updated tuple, but if it is\r\n> externally stored then the complete key will never be logged because\r\n> there is no log from the toast table. For fixing this if the key is\r\n> externally stored then always log that.\r\n> \r\n> Please test with the attached patch.\r\n\r\nThanks for your patch. I tested it and the bug was fixed.\r\nI'm still trying to understand your fix, please allow me to ask more(maybe silly) questions if I found any.\r\n\r\n+\t * if the key hasn't changedand we're only logging the key, we're done.\r\n\r\nI think \"changedand\" should be \"changed and\".\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Mon, 31 May 2021 10:03:54 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, 31 May 2021 at 3:33 PM, tanghy.fnst@fujitsu.com <\ntanghy.fnst@fujitsu.com> wrote:\n\n> On Mon, May 31, 2021 5:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > The problem is if the key attribute is not changed we don't log it as\n> > it should get logged along with the updated tuple, but if it is\n> > externally stored then the complete key will never be logged because\n> > there is no log from the toast table. For fixing this if the key is\n> > externally stored then always log that.\n> >\n> > Please test with the attached patch.\n>\n> Thanks for your patch. I tested it and the bug was fixed.\n>\n\nThanks for confirming this.\n\n\n> I'm still trying to understand your fix, please allow me to ask more(maybe\n> silly) questions if I found any.\n>\n> + * if the key hasn't changedand we're only logging the key, we're\n> done.\n>\n> I think \"changedand\" should be \"changed and\".\n>\n\nOkay, I will fix it. Lets see what others have to say about this fix, if\nwe agree with this then I think we might have to change the test output. I\nwill do that in the next version along with your comment fix.\n\nOn Mon, 31 May 2021 at 3:33 PM, tanghy.fnst@fujitsu.com <tanghy.fnst@fujitsu.com> wrote:On Mon, May 31, 2021 5:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> \n> The problem is if the key attribute is not changed we don't log it as\n> it should get logged along with the updated tuple, but if it is\n> externally stored then the complete key will never be logged because\n> there is no log from the toast table. For fixing this if the key is\n> externally stored then always log that.\n> \n> Please test with the attached patch.\n\nThanks for your patch. I tested it and the bug was fixed.Thanks for confirming this. 
\nI'm still trying to understand your fix, please allow me to ask more(maybe silly) questions if I found any.\n\n+ * if the key hasn't changedand we're only logging the key, we're done.\n\nI think \"changedand\" should be \"changed and\".Okay, I will fix it. Lets see what others have to say about this fix, if we agree with this then I think we might have to change the test output. I will do that in the next version along with your comment fix.",
"msg_date": "Mon, 31 May 2021 15:39:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "Hi\r\n\r\nI have some questions with your patch. Here are two cases I used to check the bug.\r\n\r\nCase1(PK toasted_key is short), data could be synchronized on HEAD.\r\n---------------\r\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES('111', repeat('9876543210', 200));\r\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\r\n---------------\r\n\r\nCase2(PK toasted_key is very long), data couldn’t be synchronized on HEAD.(which is the bug)\r\n---------------\r\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('9876543210', 200), '111');\r\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\r\n---------------\r\n\r\nSo I think the bug is only related with the length of primary key.\r\nI noticed that in case1, ExtractReplicaIdentity function returned NULL on HEAD. But after your fix, it didn’t return NULL. There is no problem with this case on HEAD, but the patch modified its return value. I’m not sure if it would bring new problems. Have you checked it?\r\n\r\nRegards\r\nTang\r\n\n\n\n\n\n\n\n\n\nHi\n \nI have some questions with your patch. Here are two cases I used to check the bug.\n \nCase1(PK toasted_key\r\nis short), data could be synchronized on HEAD.\n---------------\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES('111', repeat('9876543210', 200));\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\n---------------\n \nCase2(PK toasted_key\r\nis very long), data couldn’t be synchronized on HEAD.(which\r\n is the bug)\n---------------\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('9876543210', 200), '111');\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\n---------------\n \nSo I think the bug is only related with the length of primary key.\r\n\nI noticed that in case1, ExtractReplicaIdentity function returned NULL on HEAD. But after your fix, it didn’t return NULL. There\r\n is no problem with this case on HEAD, but the patch modified its return value. 
I’m not sure if it would bring new problems. Have you checked it?\n \nRegards\nTang",
"msg_date": "Tue, 1 Jun 2021 06:59:00 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 12:29 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> Hi\n>\n>\n>\n> I have some questions with your patch. Here are two cases I used to check the bug.\n>\n>\n>\n> Case1(PK toasted_key is short), data could be synchronized on HEAD.\n>\n> ---------------\n>\n> INSERT INTO toasted_key(toasted_key, toasted_col1) VALUES('111', repeat('9876543210', 200));\n>\n> UPDATE toasted_key SET toasted_col2 = toasted_col1;\n>\n> ---------------\n>\n>\n>\n> Case2(PK toasted_key is very long), data couldn’t be synchronized on HEAD.(which is the bug)\n>\n> ---------------\n>\n> INSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('9876543210', 200), '111');\n>\n> UPDATE toasted_key SET toasted_col2 = toasted_col1;\n>\n> ---------------\n>\n>\n>\n> So I think the bug is only related with the length of primary key.\n>\n> I noticed that in case1, ExtractReplicaIdentity function returned NULL on HEAD. But after your fix, it didn’t return NULL. There is no problem with this case on HEAD, but the patch modified its return value. I’m not sure if it would bring new problems. Have you checked it?\n\nGood observation, basically, my check says that any field in the tuple\nis toasted then prepare the key tuple, actually, after that, I should\nrecheck whether the key field specifically toasted or not and if it is\nnot then we can continue returning NULL. I will fix this in the next\nversion.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 1 Jun 2021 15:39:25 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 3:39 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 12:29 PM tanghy.fnst@fujitsu.com\n\n> > I noticed that in case1, ExtractReplicaIdentity function returned NULL on HEAD. But after your fix, it didn’t return NULL. There is no problem with this case on HEAD, but the patch modified its return value. I’m not sure if it would bring new problems. Have you checked it?\n>\n> Good observation, basically, my check says that any field in the tuple\n> is toasted then prepare the key tuple, actually, after that, I should\n> recheck whether the key field specifically toasted or not and if it is\n> not then we can continue returning NULL. I will fix this in the next\n> version.\n\nAttached patch fixes that, I haven't yet added the test case. Once\nsomeone confirms on the approach then I will add a test case to the\npatch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 2 Jun 2021 12:14:23 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Jun 2, 2021 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote: \r\n> Attached patch fixes that, I haven't yet added the test case. Once\r\n> someone confirms on the approach then I will add a test case to the\r\n> patch.\r\n\r\n\tkey_tuple = heap_form_tuple(desc, values, nulls);\r\n\t*copy = true;\r\n...\r\n\t\tkey_tuple = toast_flatten_tuple(oldtup, desc);\r\n \t\theap_freetuple(oldtup);\r\n \t}\r\n+\t/*\r\n+\t * If key tuple doesn't have any external data and key is not changed then\r\n+\t * just free the key tuple and return NULL.\r\n+\t */\r\n+\telse if (!key_changed)\r\n+\t{\r\n+\t\theap_freetuple(key_tuple);\r\n+\t\treturn NULL;\r\n+\t}\r\n \r\n \treturn key_tuple;\r\n }\r\n\r\nI think \"*copy = false\" should be added before return NULL because we don't return a modified copy tuple here. Thoughts?\r\n\r\nRegards\r\nTang \r\n\r\n",
"msg_date": "Wed, 2 Jun 2021 09:07:08 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 2:37 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jun 2, 2021 2:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Attached patch fixes that, I haven't yet added the test case. Once\n> > someone confirms on the approach then I will add a test case to the\n> > patch.\n>\n> key_tuple = heap_form_tuple(desc, values, nulls);\n> *copy = true;\n> ...\n> key_tuple = toast_flatten_tuple(oldtup, desc);\n> heap_freetuple(oldtup);\n> }\n> + /*\n> + * If key tuple doesn't have any external data and key is not changed then\n> + * just free the key tuple and return NULL.\n> + */\n> + else if (!key_changed)\n> + {\n> + heap_freetuple(key_tuple);\n> + return NULL;\n> + }\n>\n> return key_tuple;\n> }\n>\n> I think \"*copy = false\" should be added before return NULL because we don't return a modified copy tuple here. Thoughts?\n\nYes, you are right. I will change it in the next version, along with\nthe test case.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:09:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Yes, you are right. I will change it in the next version, along with\n> the test case.\n>\n+ /*\n+ * if the key hasn't changed and we're only logging the key, we're done.\n+ * But if tuple has external data then we might have to detoast the key.\n+ */\nThis doesn't really mention why we need to detoast the key even when\nthe key remains the same. I guess we can add some more details here.\n\n-- \nThanks & Regards,\nKuntal Ghosh\n\n\n",
"msg_date": "Wed, 2 Jun 2021 19:20:35 +0530",
"msg_from": "Kuntal Ghosh <kuntalghosh.2007@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 7:20 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Yes, you are right. I will change it in the next version, along with\n> > the test case.\n> >\n> + /*\n> + * if the key hasn't changed and we're only logging the key, we're done.\n> + * But if tuple has external data then we might have to detoast the key.\n> + */\n> This doesn't really mention why we need to detoast the key even when\n> the key remains the same. I guess we can add some more details here.\n\nNoted, let's see what others have to say about fixing this, then I\nwill fix this along with one other pending comment and I will also add\nthe test case. Thanks for looking into this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Jun 2021 19:23:16 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 7:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 7:20 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > Yes, you are right. I will change it in the next version, along with\n> > > the test case.\n> > >\n> > + /*\n> > + * if the key hasn't changed and we're only logging the key, we're done.\n> > + * But if tuple has external data then we might have to detoast the key.\n> > + */\n> > This doesn't really mention why we need to detoast the key even when\n> > the key remains the same. I guess we can add some more details here.\n>\n> Noted, let's see what others have to say about fixing this, then I\n> will fix this along with one other pending comment and I will also add\n> the test case. Thanks for looking into this.\n\nI have fixed all the pending issues, I see there is already a test\ncase for this so I have changed the output for that.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Jun 2021 17:15:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Thu, Jun 3, 2021 7:45 PMDilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> \r\n> I have fixed all the pending issues, I see there is already a test\r\n> case for this so I have changed the output for that.\r\n\r\nThanks for your patch. I tested it for all branches(10,11,12,13,HEAD) and all of them passed.(This bug was introduced in PG-10.)\r\nI also tested the scenario where I found this bug, data could be synchronized after your fix.\r\n\r\nRegards\r\nTang\r\n",
"msg_date": "Fri, 4 Jun 2021 02:55:01 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 8:25 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Thu, Jun 3, 2021 7:45 PMDilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have fixed all the pending issues, I see there is already a test\n> > case for this so I have changed the output for that.\n>\n> Thanks for your patch. I tested it for all branches(10,11,12,13,HEAD) and all of them passed.(This bug was introduced in PG-10.)\n> I also tested the scenario where I found this bug, data could be synchronized after your fix.\n\nThanks for verifying this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 4 Jun 2021 09:38:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 5:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 7:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 7:20 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> > >\n> > > On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > Yes, you are right. I will change it in the next version, along with\n> > > > the test case.\n> > > >\n> > > + /*\n> > > + * if the key hasn't changed and we're only logging the key, we're done.\n> > > + * But if tuple has external data then we might have to detoast the key.\n> > > + */\n> > > This doesn't really mention why we need to detoast the key even when\n> > > the key remains the same. I guess we can add some more details here.\n> >\n> > Noted, let's see what others have to say about fixing this, then I\n> > will fix this along with one other pending comment and I will also add\n> > the test case. Thanks for looking into this.\n>\n> I have fixed all the pending issues, I see there is already a test\n> case for this so I have changed the output for that.\n>\n\nIIUC, this issue occurs because we don't log the actual key value for\nunchanged toast key. It is neither logged as part of old_key_tuple nor\nfor new tuple due to which we are not able to find it in the\napply-side when we searched it via FindReplTupleInLocalRel. Now, I\nthink it will work if we LOG the entire unchanged toasted value as you\nhave done in the patch but can we explore some other way to fix it. In\nthe subscriber-side, can we detect that the key column has toasted\nvalue in the new tuple and try to first fetch it and then do the index\nsearch for the fetched toasted value? I am not sure about the\nfeasibility of this but wanted to see if we can someway avoid logging\nunchanged toasted key value as that might save us from additional WAL.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 22 Jul 2021 16:11:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 4:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 3, 2021 at 5:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 7:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jun 2, 2021 at 7:20 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > Yes, you are right. I will change it in the next version, along with\n> > > > > the test case.\n> > > > >\n> > > > + /*\n> > > > + * if the key hasn't changed and we're only logging the key, we're done.\n> > > > + * But if tuple has external data then we might have to detoast the key.\n> > > > + */\n> > > > This doesn't really mention why we need to detoast the key even when\n> > > > the key remains the same. I guess we can add some more details here.\n> > >\n> > > Noted, let's see what others have to say about fixing this, then I\n> > > will fix this along with one other pending comment and I will also add\n> > > the test case. Thanks for looking into this.\n> >\n> > I have fixed all the pending issues, I see there is already a test\n> > case for this so I have changed the output for that.\n> >\n>\n> IIUC, this issue occurs because we don't log the actual key value for\n> unchanged toast key. It is neither logged as part of old_key_tuple nor\n> for new tuple due to which we are not able to find it in the\n> apply-side when we searched it via FindReplTupleInLocalRel. Now, I\n> think it will work if we LOG the entire unchanged toasted value as you\n> have done in the patch but can we explore some other way to fix it. In\n> the subscriber-side, can we detect that the key column has toasted\n> value in the new tuple and try to first fetch it and then do the index\n> search for the fetched toasted value? 
I am not sure about the\n> feasibility of this but wanted to see if we can someway avoid logging\n> unchanged toasted key value as that might save us from additional WAL.\n\nYeah if we can do this then it will be a better approach, I think as\nyou mentioned we can avoid logging unchanged toast key data. I will\ninvestigate this next week and update the thread.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 22 Jul 2021 20:01:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Thu, Jul 22, 2021 at 8:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jul 22, 2021 at 4:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 3, 2021 at 5:15 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Wed, Jun 2, 2021 at 7:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jun 2, 2021 at 7:20 PM Kuntal Ghosh <kuntalghosh.2007@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Jun 2, 2021 at 3:10 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > Yes, you are right. I will change it in the next version, along with\n> > > > > > the test case.\n> > > > > >\n> > > > > + /*\n> > > > > + * if the key hasn't changed and we're only logging the key, we're done.\n> > > > > + * But if tuple has external data then we might have to detoast the key.\n> > > > > + */\n> > > > > This doesn't really mention why we need to detoast the key even when\n> > > > > the key remains the same. I guess we can add some more details here.\n> > > >\n> > > > Noted, let's see what others have to say about fixing this, then I\n> > > > will fix this along with one other pending comment and I will also add\n> > > > the test case. Thanks for looking into this.\n> > >\n> > > I have fixed all the pending issues, I see there is already a test\n> > > case for this so I have changed the output for that.\n> > >\n> >\n> > IIUC, this issue occurs because we don't log the actual key value for\n> > unchanged toast key. It is neither logged as part of old_key_tuple nor\n> > for new tuple due to which we are not able to find it in the\n> > apply-side when we searched it via FindReplTupleInLocalRel. Now, I\n> > think it will work if we LOG the entire unchanged toasted value as you\n> > have done in the patch but can we explore some other way to fix it. 
In\n> > the subscriber-side, can we detect that the key column has toasted\n> > value in the new tuple and try to first fetch it and then do the index\n> > search for the fetched toasted value? I am not sure about the\n> > feasibility of this but wanted to see if we can someway avoid logging\n> > unchanged toasted key value as that might save us from additional WAL.\n>\n> Yeah if we can do this then it will be a better approach, I think as\n> you mentioned we can avoid logging unchanged toast key data. I will\n> investigate this next week and update the thread.\n>\n\nOkay, thanks. I think one point we need to consider here is that on\nthe subscriber side, we use dirtysnapshot to search the key, so we\nneed to ensure that we don't fetch the wrong data. I am not sure what\nwill happen when by the time we try to search the tuple in the\nsubscriber-side for the update, it has been removed and re-inserted\nwith the same values by the user. Do we find the newly inserted tuple\nand update it? If so, can it also happen even if logged the unchanged\nold_key_tuple as the patch is doing currently?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Jul 2021 08:58:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Jul 23, 2021 at 8:58 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Okay, thanks. I think one point we need to consider here is that on\n> the subscriber side, we use dirtysnapshot to search the key, so we\n> need to ensure that we don't fetch the wrong data. I am not sure what\n> will happen when by the time we try to search the tuple in the\n> subscriber-side for the update, it has been removed and re-inserted\n> with the same values by the user. Do we find the newly inserted tuple\n> and update it? If so, can it also happen even if logged the unchanged\n> old_key_tuple as the patch is doing currently?\n>\n\nI was thinking more about this idea, but IMHO, unless we send the key\ntoasted tuple from the publisher how is the subscriber supposed to\nfetch it. Because that is the key value for finding the tuple on the\nsubscriber side and if we haven't sent the key value, how are we\nsupposed to find the tuple on the subscriber side?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Jul 2021 10:45:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jul 26, 2021 at 10:45 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I was thinking more about this idea, but IMHO, unless we send the key\n> toasted tuple from the publisher how is the subscriber supposed to\n> fetch it. Because that is the key value for finding the tuple on the\n> subscriber side and if we haven't sent the key value, how are we\n> supposed to find the tuple on the subscriber side?\n>\n\nI was thinking of using toast pointer but that won't work because it\ncan be different on the subscriber-side. I don't see any better ideas\nto fix this issue. This problem seems to be from the time Logical\nReplication has been introduced, so adding others (who are generally\ninvolved in this area) to see what they think about this bug? I think\npeople might not be using toasted columns for Replica Identity due to\nwhich this problem has been reported yet but I feel this is quite a\nfundamental issue and we should do something about this.\n\nLet me summarize the problem for the ease of others.\n\nThe logical replica can go out of sync for UPDATES when there is a\ntoast column as part of REPLICA IDENTITY. In such cases, updates are\nnot replicated if the key column doesn't change because we don't log\nthe actual key value for the unchanged toast key. It is neither logged\nas part of old_key_tuple nor for new tuple due to which we are not\nable to find the tuple to be updated on the subscriber-side and the\nupdate is ignored on the subscriber-side. We log this in DEBUG1 mode\nbut I don't think the user can do anything about this and the replica\nwill go out-of-sync. This works when the replica identity column value\nis not toasted because then it will be part of the new tuple and we\nuse that to fetch the tuple on the subscriber.\n\nNow, it is not clear if the key-value (for the toast column which is\npart of replica identity) is not present in WAL how we can find the\ntuple to update on subscriber? 
We can't use the toast pointer in the\nnew tuple to fetch the toast information as that can be different on\nsubscribers. The simple way is to WAL LOG the unchanged toasted value\nas part of old_key_tuple, this will be required only for toast\ncolumns.\n\nNote that Delete works because we WAL Log the unchanged key tuple in that case.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 30 Jul 2021 10:21:12 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Jul 30, 2021 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> This problem seems to be from the time Logical\n> Replication has been introduced, so adding others (who are generally\n> involved in this area) to see what they think about this bug? I think\n> people might not be using toasted columns for Replica Identity due to\n> which this problem has been reported yet but I feel this is quite a\n> fundamental issue and we should do something about this.\n>\n> Let me summarize the problem for the ease of others.\n>\n> The logical replica can go out of sync for UPDATES when there is a\n> toast column as part of REPLICA IDENTITY. In such cases, updates are\n> not replicated if the key column doesn't change because we don't log\n> the actual key value for the unchanged toast key. It is neither logged\n> as part of old_key_tuple nor for new tuple due to which we are not\n> able to find the tuple to be updated on the subscriber-side and the\n> update is ignored on the subscriber-side. We log this in DEBUG1 mode\n> but I don't think the user can do anything about this and the replica\n> will go out-of-sync. This works when the replica identity column value\n> is not toasted because then it will be part of the new tuple and we\n> use that to fetch the tuple on the subscriber.\n>\n\nIt seems to me this problem exists from the time we introduced\nwal_level = logical in the commit e55704d8b2 [1], or another\npossibility is that logical replication commit didn't consider\nsomething to make it work. 
Andres, Robert, Petr, can you guys please\ncomment because otherwise, we might miss something here.\n\n[1] -\ncommit e55704d8b2fe522fbc9435acbb5bc59033478bd5\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: Tue Dec 10 18:33:45 2013 -0500\n\nAdd new wal_level, logical, sufficient for logical decoding.\n\nWhen wal_level=logical, we'll log columns from the old tuple as\nconfigured by the REPLICA IDENTITY facility added in commit\n07cacba983ef79be4a84fcd0e0ca3b5fcb85dd65. This makes it possible a\nproperly-configured logical replication solution to correctly\nfollow table updates even if they change the chosen key columns, or,\nwith REPLICA IDENTITY FULL, even if the table has no key at all. Note\nthat updates which do not modify the replica identity column won't log\nanything extra, making the choice of a good key (i.e. one that will\nrarely be changed) important to performance when wal_level=logical is\nconfigured.\n..\nAndres Freund, reviewed in various versions by myself, Heikki\nLinnakangas, KONDO Mitsumasa, and many others.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:50:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On 2021-Jul-30, Amit Kapila wrote:\n\n> I was thinking of using toast pointer but that won't work because it\n> can be different on the subscriber-side. I don't see any better ideas\n> to fix this issue. This problem seems to be from the time Logical\n> Replication has been introduced, so adding others (who are generally\n> involved in this area) to see what they think about this bug? I think\n> people might not be using toasted columns for Replica Identity due to\n> which this problem has been reported yet but I feel this is quite a\n> fundamental issue and we should do something about this.\n\nIn the evening before going offline a week ago I was looking at this and\nmy conclusion was that this was a legitimate problem: the original\nimplementation is faulty in that the full detoasted value is required to\nbe transmitted in order for downstream to be able to read the value.\n\nI am not sure if at the level of logical decoding it is a problem\ntheoretically, but at least for logical replication it is clearly a\npractical problem.\n\nReading Dilip's last posted patch that day, I had some reservations\nabout the API of ExtractReplicaIdentity. The new argument makes for a\nvery strange to explain behavior \"return the key values if they are\nunchanged, *or* if they are toasted\" ... ??? I tried to make sense of\nthat, and tried to find a concept that would make sense to the whole,\nbut couldn't find any obvious angle in the short time I looked at it.\nI haven't looked at it again.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El Maquinismo fue proscrito so pena de cosquilleo hasta la muerte\"\n(Ijon Tichy en Viajes, Stanislaw Lem)\n\n\n",
"msg_date": "Tue, 10 Aug 2021 10:38:48 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 8:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-30, Amit Kapila wrote:\n>\n> Reading Dilip's last posted patch that day, I had some reservations\n> about the API of ExtractReplicaIdentity. The new argument makes for a\n> very strange to explain behavior \"return the key values if they are\n> unchanged, *or* if they are toasted\" ... ???\n>\n\nI think we can say it as \"Return the key values if they are changed\n*or* if they are toasted\". Currently, we have an exception for Deletes\nwhere the caller always passed key_changed as true, so maybe we can\nhave a similar exception when the tuple has toasted values. Can we\nthink of changing the flag to \"key_required\" instead of \"key_changed\"\nand let the caller identify and set its value? For Deletes, it will\nwork the same but for Updates, the caller needs to compute it by\nchecking if any of the key columns are modified or has a toast value.\nWe can try to see if the caller can identify it cheaply along with\ndetermining the modified_attrs as at that time we will anyway check\nreplica key attrs.\n\nCurrently, in proposed patch first, we check that the tuple has any\ntoast values and then it deforms and forms the new key tuple. After\nthat, it checks if the key has any toast values and then only decides\nto return the tuple. If as described in the previous paragraph, we can\ncheaply identify whether the key has toasted values, then we can avoid\ndeform/form cost in some cases. Also, I think we need to change the\nReplica Identity description in the docs[1].\n\n[1] - https://www.postgresql.org/docs/devel/sql-altertable.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 10:30:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:30 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 10, 2021 at 8:08 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2021-Jul-30, Amit Kapila wrote:\n> >\n> > Reading Dilip's last posted patch that day, I had some reservations\n> > about the API of ExtractReplicaIdentity. The new argument makes for a\n> > very strange to explain behavior \"return the key values if they are\n> > unchanged, *or* if they are toasted\" ... ???\n> >\n>\n> I think we can say it as \"Return the key values if they are changed\n> *or* if they are toasted\". Currently, we have an exception for Deletes\n> where the caller always passed key_changed as true, so maybe we can\n> have a similar exception when the tuple has toasted values. Can we\n> think of changing the flag to \"key_required\" instead of \"key_changed\"\n> and let the caller identify and set its value? For Deletes, it will\n> work the same but for Updates, the caller needs to compute it by\n> checking if any of the key columns are modified or has a toast value.\n> We can try to see if the caller can identify it cheaply along with\n> determining the modified_attrs as at that time we will anyway check\n> replica key attrs.\n\nRight\n\n>\n> Currently, in proposed patch first, we check that the tuple has any\n> toast values and then it deforms and forms the new key tuple. After\n> that, it checks if the key has any toast values and then only decides\n> to return the tuple. If as described in the previous paragraph, we can\n> cheaply identify whether the key has toasted values, then we can avoid\n> deform/form cost in some cases. Also, I think we need to change the\n> Replica Identity description in the docs[1].\n\nYeah we can avoid that by detecting any toasted replica identity key\nin HeapDetermineModifiedColumns, check the attached patch.\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Aug 2021 18:14:55 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:45 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Yeah we can avoid that by detecting any toasted replica identity key\n> in HeapDetermineModifiedColumns, check the attached patch.\n>\n\nThe patch applies cleanly, all tests pass, I tried out a few toast\ncombination tests and they seem to be working fine.\nNo review comments, the patch looks good to me.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 2 Sep 2021 21:33:22 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 10:45 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> Yeah we can avoid that by detecting any toasted replica identity key\n> in HeapDetermineModifiedColumns, check the attached patch.\n>\n\nI had a second look at this, and I just had a small doubt. Since the\nconvention is that for UPDATES, the old tuple/key is written to\nWAL only if the one of the columns in the key has changed as part of\nthe update, and we are breaking that convention with this patch by\nalso including\nthe old key if it is toasted and is stored in disk even if it is not changed.\nWhy do we not include the detoasted key as part of the new tuple\nrather than the old tuple? Then we don't really break this convention.\n\nAnd one small typo in the patch:\n\nThe header above ExtractReplicaIdentity()\n\nBefore:\n * key_required should be false if caller knows that no replica identity\n * columns changed value and it doesn't has any external data.\n * It's always true in the DELETE case.\n\nAfter:\n * key_required should be false if caller knows that no replica identity\n * columns changed value and it doesn't have any external data.\n * It's always true in the DELETE case.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 8 Sep 2021 15:56:04 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Sep 8, 2021 at 11:26 AM Ajin Cherian <itsajin@gmail.com> wrote:\n\n> On Wed, Aug 11, 2021 at 10:45 PM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n>\n> > Yeah we can avoid that by detecting any toasted replica identity key\n> > in HeapDetermineModifiedColumns, check the attached patch.\n> >\n>\n> I had a second look at this, and I just had a small doubt. Since the\n> convention is that for UPDATES, the old tuple/key is written to\n> WAL only if the one of the columns in the key has changed as part of\n> the update, and we are breaking that convention with this patch by\n> also including\n> the old key if it is toasted and is stored in disk even if it is not\n> changed.\n> Why do we not include the detoasted key as part of the new tuple\n> rather than the old tuple? Then we don't really break this convention.\n>\n\nThe purpose of including the toasted old data is only for the replica\nidentity, but if you include it in the new tuple then it will affect the\ngeneral wal replay, basically, now you will have large detoasted data in\nyour new tuple which your are directly going to memcpy on the standby while\nreplaying so that will create corruption. 
So basically, you can not\ninclude this in the new tuple without changing a lot of logic around replay\nwhich is completely useless.\n\nSo let this tuple be for a specific purpose and that is replica identity in\nour case.\n\n\n> And one small typo in the patch:\n>\n> The header above ExtractReplicaIdentity()\n>\n> Before:\n> * key_required should be false if caller knows that no replica identity\n> * columns changed value and it doesn't has any external data.\n> * It's always true in the DELETE case.\n>\n> After:\n> * key_required should be false if caller knows that no replica identity\n> * columns changed value and it doesn't have any external data.\n> * It's always true in the DELETE case.\n>\n\nOkay, I will change this.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 8 Sep 2021 11:58:17 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 06:14:55PM +0530, Dilip Kumar wrote:\n> Right\n\nAmit, are you planning to look more at this patch? It has been a\ncouple of months since the last update, and this is still a bug as far\nas I understand.\n\nFWIW, I find the API changes of HeapDetermineModifiedColumns() and\nExtractReplicaIdentity() a bit grotty. Shouldn't we try to flatten\nthe old tuple instead? There are things like\ntoast_flatten_tuple_to_datum() for this purpose if a tuple satisfies\nHeapTupleHasExternal(), or just heap_copy_tuple_as_datum().\n--\nMichael",
"msg_date": "Mon, 24 Jan 2022 12:58:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 9:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Aug 11, 2021 at 06:14:55PM +0530, Dilip Kumar wrote:\n> > Right\n>\n> Amit, are you planning to look more at this patch? It has been a\n> couple of months since the last update, and this is still a bug as far\n> as I understand.\n>\n> FWIW, I find the API changes of HeapDetermineModifiedColumns() and\n> ExtractReplicaIdentity() a bit grotty. Shouldn't we try to flatten\n> the old tuple instead? There are things like\n> toast_flatten_tuple_to_datum() for this purpose if a tuple satisfies\n> HeapTupleHasExternal(), or just heap_copy_tuple_as_datum().\n>\n\nThat can add overhead in cases where we don't need to log the toasted\nvalues of the old tuple. We only need it for the case where we have\nunchanged toasted replica identity columns. In the previous version\n[1], we were doing something like you are suggesting and that seems to\nhave overhead as explained in the second paragraph of the email [2].\nAlso, Alvaro seems to have some reservations about that change. I\ndon't know if there is a better way to fix this but I could be missing\nsomething.\n\n[1] - https://www.postgresql.org/message-id/CAFiTN-sTS4bB7W3UJV3iUm%3DwKdr9EpOwyK97hNr77MzFQm_NBw%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAA4eK1KgZr%3DQSBE_Qh0Qfb2ma1Tc6%2BZxkMaUHO7aC7b9WSCRaw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 17:35:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jan 24, 2022, at 12:58 AM, Michael Paquier wrote:\n> FWIW, I find the API changes of HeapDetermineModifiedColumns() and\n> ExtractReplicaIdentity() a bit grotty. Shouldn't we try to flatten\n> the old tuple instead? There are things like\n> toast_flatten_tuple_to_datum() for this purpose if a tuple satisfies\n> HeapTupleHasExternal(), or just heap_copy_tuple_as_datum().\n> \nI checked v4 and I don't like the HeapDetermineModifiedColumns() and\nheap_tuple_attr_equals() changes either. It seems it is hijacking these\nfunctions to something else. I would suggest to change the signature to\n\nstatic void\nheap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,\n HeapTuple tup1, HeapTuple tup2,\n bool *is_equal, bool *key_has_external);\n\nand\n\nstatic void\nHeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,\n HeapTuple oldtup, HeapTuple newtup,\n Bitmapset *modified_attrs, bool *key_has_external);\n\nI didn't figure out a cheap way to check if the key has external value other\nthan slightly modifying the HeapDetermineModifiedColumns() function and its\nsubroutine heap_tuple_attr_equals(). As Alvaro said I don't think adding\nHeapTupleHasExternal() (as in v3) is a good idea because it does not optimize\ngenuine cases such as a table whose PK is an integer and contains a single\nTOAST column.\n\nAnother suggestion is to keep key_changed and add another attribute\n(key_has_external) to ExtractReplicaIdentity(). If we need key_changed in the\nfuture we'll have to decompose it again. 
You also encapsulate that\noptimization into the function that helps with future improvements/fixes.\n\nstatic HeapTuple\nExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_changed,\n bool key_has_external, bool *copy);\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Mon, 24 Jan 2022 15:55:34 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Aug 10, 2021 at 1:20 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> It seems to me this problem exists from the time we introduced\n> wal_level = logical in the commit e55704d8b2 [1], or another\n> possibility is that logical replication commit didn't consider\n> something to make it work. Andres, Robert, Petr, can you guys please\n> comment because otherwise, we might miss something here.\n\nI'm belatedly getting around to looking at this thread. My\nrecollection of this is:\n\nI think we realized when we were working on the logical decoding stuff\nthat the key columns of the old tuple would have to be detoasted in\norder for the mechanism to work, because I remember worrying about\nwhether it would potentially be a problem that the WAL record would\nend up huge. However, I think we believed that the new tuple wouldn't\nneed to have the detoasted values, because logical decoding is\ndesigned to notice all the TOAST insertions for the new tuple and\nreassemble those separate chunks to get the original value back. And\noff-hand I'm not sure why that logic doesn't apply just as much to the\nkey columns as any others.\n\nBut the evidence does suggest that there's some kind of bug here, so\nevidently there's some flaw in that line of thinking. I'm not sure\noff-hand what it is, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 15:10:05 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "Hi,\n\nOn 2022-01-24 15:10:05 -0500, Robert Haas wrote:\n> I think we realized when we were working on the logical decoding stuff\n> that the key columns of the old tuple would have to be detoasted in\n> order for the mechanism to work, because I remember worrying about\n> whether it would potentially be a problem that the WAL record would\n> end up huge. However, I think we believed that the new tuple wouldn't\n> need to have the detoasted values, because logical decoding is\n> designed to notice all the TOAST insertions for the new tuple and\n> reassemble those separate chunks to get the original value back.\n\nPossibly the root of the problem is that we/I didn't think of cases where the\nprimary key is an external toast datum - in moast scenarios you'd an error\nabout a too wide index tuple. But of course that neglects cases where toasting\nhappens due to SET STORAGE or due to the aggregate tuple width, rather than\nindividual column width.\n\n\n> And off-hand I'm not sure why that logic doesn't apply just as much to the\n> key columns as any others.\n\nThe difference is that it's mostly fine to not have unchanging key columns as\npart of decoded update - you just don't update those columns. But you can't do\nthat without knowing the replica identity...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 24 Jan 2022 13:17:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 4:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Possibly the root of the problem is that we/I didn't think of cases where the\n> primary key is an external toast datum - in moast scenarios you'd an error\n> about a too wide index tuple. But of course that neglects cases where toasting\n> happens due to SET STORAGE or due to the aggregate tuple width, rather than\n> individual column width.\n\nThat seems consistent with what's been described on this thread so\nfar, but I still don't quite understand why the logic that reassembles\nTOAST chunks doesn't solve it. I mean, decoding doesn't know whether\nany changes are happening on the subscriber side, so it's not like we\ncan (a) query for the row and then (b) decide to reassemble TOAST\nchunks if we find it, or something like that. The decoding has to say,\nwell, here's the new tuple and the old key columns, and then the\nsubscriber does whatever it does. I guess it could check whether the\nold and new values are identical to decide whether to drop that column\nout of the result, but it shouldn't compare a TOAST pointer to a\ndetoasted value and go \"yeah, that looks equal\"....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:31:08 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On 2022-01-24 16:31:08 -0500, Robert Haas wrote:\n> That seems consistent with what's been described on this thread so\n> far, but I still don't quite understand why the logic that reassembles\n> TOAST chunks doesn't solve it.\n\nThere are no toast chunks to reassemble if the update didn't change the\nprimary key. So this just hits the path we'd also hit for an unchanged toasted\nnon-key column.\n\n\n",
"msg_date": "Mon, 24 Jan 2022 13:42:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jan 24, 2022 at 4:42 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-24 16:31:08 -0500, Robert Haas wrote:\n> > That seems consistent with what's been described on this thread so\n> > far, but I still don't quite understand why the logic that reassembles\n> > TOAST chunks doesn't solve it.\n>\n> There are no toast chunks to reassemble if the update didn't change the\n> primary key. So this just hits the path we'd also hit for an unchanged toasted\n> non-key column.\n\nOh. Hmm. That's bad.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jan 2022 16:43:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Jan 25, 2022 at 12:26 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Mon, Jan 24, 2022, at 12:58 AM, Michael Paquier wrote:\n>\n> FWIW, I find the API changes of HeapDetermineModifiedColumns() and\n> ExtractReplicaIdentity() a bit grotty. Shouldn't we try to flatten\n> the old tuple instead? There are things like\n> toast_flatten_tuple_to_datum() for this purpose if a tuple satisfies\n> HeapTupleHasExternal(), or just heap_copy_tuple_as_datum().\n>\n> I checked v4 and I don't like the HeapDetermineModifiedColumns() and\n> heap_tuple_attr_equals() changes either. It seems it is hijacking these\n> functions to something else. I would suggest to change the signature to\n>\n> static void\n> heap_tuple_attr_equals(TupleDesc tupdesc, int attrnum,\n> HeapTuple tup1, HeapTuple tup2,\n> bool *is_equal, bool *key_has_external);\n>\n> and\n>\n> static void\n> HeapDetermineModifiedColumns(Relation relation, Bitmapset *interesting_cols,\n> HeapTuple oldtup, HeapTuple newtup,\n> Bitmapset *modified_attrs, bool *key_has_external);\n>\n> I didn't figure out a cheap way to check if the key has external value other\n> than slightly modifying the HeapDetermineModifiedColumns() function and its\n> subroutine heap_tuple_attr_equals().\n>\n\nI am not sure if your proposal is much different compared to v4 or how\nmuch it improves the situation? I see you didn't consider\n'check_external_attr' parameter and I think that is important to know\nif the key has any external toast value. Overall, I see your point\nthat the change of APIs looks a bit ugly. But, I guess that is more\ndue to their names and current purpose. 
I think it could be better if\nwe bring all the code of heap_tuple_attr_equals into its only caller\nHeapDetermineModifiedColumns, or at least the part of the code where we get\nthe attr value and can determine whether the value is stored externally.\nThen change the name of HeapDetermineModifiedColumns to\nHeapDetermineColumnsInfo with additional parameters.\n\n> As Alvaro said I don't think adding\n> HeapTupleHasExternal() (as in v3) is a good idea because it does not optimize\n> genuine cases such as a table whose PK is an integer and contains a single\n> TOAST column.\n>\n> Another suggestion is to keep key_changed and add another attribute\n> (key_has_external) to ExtractReplicaIdentity(). If we need key_changed in the\n> future we'll have to decompose it again.\n>\n\nTrue, but we can make the required changes at that point as well.\nOTOH, we can do what you are suggesting as well but I am not sure if\nthat is required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 11:59:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
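The single-pass refactor sketched in the message above — folding heap_tuple_attr_equals() into its caller and having the renamed HeapDetermineColumnsInfo() report both the modified columns and whether any replica-identity column is stored externally — can be modeled abstractly like this. This is a hedged Python sketch, not the PostgreSQL C code: `is_external()` and the dict-based "tuples" are illustrative stand-ins for VARATT_IS_EXTERNAL() and real heap tuples.

```python
def is_external(value):
    # Stand-in for VARATT_IS_EXTERNAL(): model an externally TOASTed value
    # as a ("TOAST", payload) marker tuple.
    return isinstance(value, tuple) and len(value) > 0 and value[0] == "TOAST"

def determine_columns_info(interesting_cols, key_cols, old_tup, new_tup):
    """Single pass over the interesting columns: build the set of modified
    attributes and, at the same time, note whether any replica-identity
    (key) column of the old tuple is stored externally."""
    modified = set()
    key_has_external = False
    for attnum in interesting_cols:
        old_val, new_val = old_tup.get(attnum), new_tup.get(attnum)
        if old_val != new_val:
            modified.add(attnum)
        # Even an *unchanged* key column may hold a TOAST pointer; the WAL
        # record must then log its detoasted old value, which is why this
        # check is done here instead of in a second scan of the tuple.
        if attnum in key_cols and is_external(old_val):
            key_has_external = True
    return modified, key_has_external

# Key column 1 is unchanged but externally stored; only column 2 changed.
old = {1: ("TOAST", "x" * 4000), 2: "a"}
new = {1: ("TOAST", "x" * 4000), 2: "b"}
print(determine_columns_info({1, 2}, {1}, old, new))  # ({2}, True)
```

The point of the design, as discussed above, is that the tuple is already being walked to compute modified columns, so detecting external key values adds no extra pass.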
{
"msg_contents": "On Tue, Jan 25, 2022 at 11:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 25, 2022 at 12:26 AM Euler Taveira <euler@eulerto.com> wrote:\n> >\n>\n> I am not sure if your proposal is much different compared to v4 or how\n> much it improves the situation? I see you didn't consider\n> 'check_external_attr' parameter and I think that is important to know\n> if the key has any external toast value. Overall, I see your point\n> that the change of APIs looks a bit ugly. But, I guess that is more\n> due to their names and current purpose. I think it could be better if\n> we bring all the code of heap_tuple_attr_equals in its only caller\n> HeapDetermineModifiedColumns or at least part of the code where we get\n> attr value and can determine whether the value is stored externally.\n> Then change name of HeapDetermineModifiedColumns to\n> HeapDetermineColumnsInfo with additional parameters.\n\nI think the best way is to do some refactoring and renaming of the\nfunction, because as part of HeapDetermineModifiedColumns we are\nalready processing the tuple so we can not put extra overhead of\nreprocessing it again. In short I like the idea of renaming the\nHeapDetermineModifiedColumns and moving part of heap_tuple_attr_equals\ncode into the caller. Here is the patch set for the same. I have\ndivided it into two patches which can eventually be merged, 0001- for\nrefactoring 0002- does the actual work.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 28 Jan 2022 12:16:33 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Jan 28, 2022 at 12:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I think the best way is to do some refactoring and renaming of the\n> function, because as part of HeapDetermineModifiedColumns we are\n> already processing the tuple so we can not put extra overhead of\n> reprocessing it again. In short I like the idea of renaming the\n> HeapDetermineModifiedColumns and moving part of heap_tuple_attr_equals\n> code into the caller. Here is the patch set for the same. I have\n> divided it into two patches which can eventually be merged, 0001- for\n> refactoring 0002- does the actual work.\n>\n\n+ /*\n+ * If it's a whole-tuple reference, say \"not equal\". It's not really\n+ * worth supporting this case, since it could only succeed after a\n+ * no-op update, which is hardly a case worth optimizing for.\n+ */\n+ if (attrnum == 0)\n+ continue;\n+\n+ /*\n+ * Likewise, automatically say \"not equal\" for any system attribute\n+ * other than tableOID; we cannot expect these to be consistent in a\n+ * HOT chain, or even to be set correctly yet in the new tuple.\n+ */\n+ if (attrnum < 0)\n+ {\n+ if (attrnum != TableOidAttributeNumber)\n+ continue;\n+ }\n\nThese two cases need to be considered as the corresponding attribute\nis modified, so the attnum needs to be added in the bitmapset of\nmodified attrs.\n\nI have changed this and various other comments in the patch. I have\nmodified the docs as well to reflect the changes. I thought of adding\na test but I think the current test in toast.sql seems sufficient.\nKindly let me know what you think of the attached? I think we should\nbackpatch this till v10. What do you think?\n\nDoes anyone else have better ideas to fix this?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Sat, 29 Jan 2022 15:56:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Sat, Jan 29, 2022 at 3:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jan 28, 2022 at 12:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> + /*\n> + * If it's a whole-tuple reference, say \"not equal\". It's not really\n> + * worth supporting this case, since it could only succeed after a\n> + * no-op update, which is hardly a case worth optimizing for.\n> + */\n> + if (attrnum == 0)\n> + continue;\n> +\n> + /*\n> + * Likewise, automatically say \"not equal\" for any system attribute\n> + * other than tableOID; we cannot expect these to be consistent in a\n> + * HOT chain, or even to be set correctly yet in the new tuple.\n> + */\n> + if (attrnum < 0)\n> + {\n> + if (attrnum != TableOidAttributeNumber)\n> + continue;\n> + }\n>\n> These two cases need to be considered as the corresponding attribute\n> is modified, so the attnum needs to be added in the bitmapset of\n> modified attrs.\n\nYeah right.\n\n>\n> I have changed this and various other comments in the patch. I have\n> modified the docs as well to reflect the changes. I thought of adding\n> a test but I think the current test in toast.sql seems sufficient.\n> Kindly let me know what you think of the attached? I think we should\n> backpatch this till v10. What do you think?\n\nLooks fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 31 Jan 2022 09:02:54 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Jan 31, 2022 at 9:03 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > I have changed this and various other comments in the patch. I have\n> > modified the docs as well to reflect the changes. I thought of adding\n> > a test but I think the current test in toast.sql seems sufficient.\n> > Kindly let me know what you think of the attached? I think we should\n> > backpatch this till v10. What do you think?\n>\n> Looks fine to me.\n>\n\nAttached are the patches for back-branches till v10. I have made two\nmodifications: (a) changed heap_tuple_attr_equals() to\nheap_attr_equals() as we are not passing a tuple to it; (b) changed\nparameter name 'check_external_cols' to 'external_cols' to make it\nsound similar to the existing parameter 'interesting_cols' in the\nHeapDetermine* function.\n\nLet me know what you think of the attached. Do you see any reason not\nto back-patch this fix?\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Fri, 4 Feb 2022 17:45:36 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "I don't have a reason not to commit this patch. I have some suggestions\non the comments and docs though.\n\n> @@ -8359,14 +8408,15 @@ log_heap_new_cid(Relation relation, HeapTuple tup)\n> * Returns NULL if there's no need to log an identity or if there's no suitable\n> * key defined.\n> *\n> - * key_changed should be false if caller knows that no replica identity\n> - * columns changed value. It's always true in the DELETE case.\n> + * key_required should be false if caller knows that no replica identity\n> + * columns changed value and it doesn't has any external data. It's always\n> + * true in the DELETE case.\n> *\n> * *copy is set to true if the returned tuple is a modified copy rather than\n> * the same tuple that was passed in.\n> */\n> static HeapTuple\n> -ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_changed,\n> +ExtractReplicaIdentity(Relation relation, HeapTuple tp, bool key_required,\n\nI find the new comment pretty hard to interpret. I would say something\nlike \"Pass key_required true if any replica identity columns changed\nvalue, or if any of them have external data. DELETE must always pass\ntrue\".\n\n> diff --git a/doc/src/sgml/ref/alter_table.sgml b/doc/src/sgml/ref/alter_table.sgml\n> index dee026e..d67ef7c 100644\n> --- a/doc/src/sgml/ref/alter_table.sgml\n> +++ b/doc/src/sgml/ref/alter_table.sgml\n> @@ -873,8 +873,10 @@ WITH ( MODULUS <replaceable class=\"parameter\">numeric_literal</replaceable>, REM\n> This form changes the information which is written to the write-ahead log\n> to identify rows which are updated or deleted. 
This option has no effect\n> except when logical replication is in use.\n> - In all cases, no old values are logged unless at least one of the columns\n> - that would be logged differs between the old and new versions of the row.\n> + In all cases except toasted values, no old values are logged unless at\n> + least one of the columns that would be logged differs between the old and\n> + new versions of the row. We detoast the unchanged old toast values and log\n> + them.\n\nHere we're patching with a minimal wording change with almost\nincomprehensible results. I think we should patch more extensively.\nI suggest:\n\n\tThis form changes the information which is written to the\n\twrite-ahead log to identify rows which are updated or deleted.\n\n\tIn most cases, the old value of each column is only logged if\n\tit differs from the new value; however, if the old value is\n\tstored externally, it is always logged regardless of whether it\n\tchanged.\n\n\tThis option has no effect unless logical replication is in use.\n\nI didn't get a chance to review the code, but I think this is valuable.\n\n\n\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 4 Feb 2022 12:36:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Feb 4, 2022 at 9:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I don't have a reason not to commit this patch.\n>\n\nIt is not very clear to me from this so just checking again, are you\nfine with back-patching this as well?\n\n>\n> I have some suggestions\n> on the comments and docs though.\n>\n\nThanks, your suggestions look good to me. I'll take care of these in\nthe next version.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 5 Feb 2022 06:10:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On 2022-Feb-05, Amit Kapila wrote:\n\n> On Fri, Feb 4, 2022 at 9:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > I don't have a reason not to commit this patch.\n> >\n> \n> It is not very clear to me from this so just checking again, are you\n> fine with back-patching this as well?\n\nHmm, of course, I never thought it'd be a good idea to leave the bug\nunfixed in back branches.\n\nThanks!\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n\n\n",
"msg_date": "Sat, 5 Feb 2022 09:25:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-04 17:45:36 +0530, Amit Kapila wrote:\n> diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out\n> index cd03e9d..a757e7d 100644\n> --- a/contrib/test_decoding/expected/toast.out\n> +++ b/contrib/test_decoding/expected/toast.out\n> @@ -77,7 +77,7 @@ SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot',\n> table public.toasted_key: INSERT: id[integer]:1 toasted_key[text]:'1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123\n> COMMIT\n> BEGIN\n> - table public.toasted_key: UPDATE: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[text]:'987654321098765432109876543210987654321098765432109\n> + table public.toasted_key: UPDATE: old-key: toasted_key[text]:'123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678\n\nHm. This looks weird. What happened to the change to toasted_col2 that was\nin the \"removed\" line?\n\nThis corresponds to the following statement I think:\n-- test update of a toasted key without changing it\nUPDATE toasted_key SET toasted_col2 = toasted_col1;\nwhich previously was inserted as:\nINSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('1234567890', 200), repeat('9876543210', 200));\n\nso toasted_col2 should have changed from NULL to repeat('9876543210', 200)\n\n\nAm I misreading something?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 5 Feb 2022 15:34:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Sun, Feb 6, 2022 at 5:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-04 17:45:36 +0530, Amit Kapila wrote:\n> > diff --git a/contrib/test_decoding/expected/toast.out b/contrib/test_decoding/expected/toast.out\n> > index cd03e9d..a757e7d 100644\n> > --- a/contrib/test_decoding/expected/toast.out\n> > +++ b/contrib/test_decoding/expected/toast.out\n> > @@ -77,7 +77,7 @@ SELECT substr(data, 1, 200) FROM pg_logical_slot_get_changes('regression_slot',\n> > table public.toasted_key: INSERT: id[integer]:1 toasted_key[text]:'1234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123\n> > COMMIT\n> > BEGIN\n> > - table public.toasted_key: UPDATE: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[text]:'987654321098765432109876543210987654321098765432109\n> > + table public.toasted_key: UPDATE: old-key: toasted_key[text]:'123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678\n>\n> Hm. This looks weird. What happened to the change to toasted_col2 that was\n> in the \"removed\" line?\n>\n> This corresponds to the following statement I think:\n> -- test update of a toasted key without changing it\n> UPDATE toasted_key SET toasted_col2 = toasted_col1;\n> which previously was inserted as:\n> INSERT INTO toasted_key(toasted_key, toasted_col1) VALUES(repeat('1234567890', 200), repeat('9876543210', 200));\n>\n> so toasted_col2 should have changed from NULL to repeat('9876543210', 200)\n>\n\nRight, and it is getting changed. We are just printing the first 200\ncharacters (by using SQL [1]) from the decoded tuple so what is shown\nin the results is the initial 200 bytes. 
The complete decoded data\nafter the patch is as follows:\n\n table public.toasted_key: UPDATE: old-key:\ntoasted_key[text]:'123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345
67890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890'\nnew-tuple: id[integer]:2 toasted_key[text]:unchanged-toast-datum\ntoasted_col1[text]:unchanged-toast-datum\ntoasted_col2[text]:'9876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876
5432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210987654321098765432109876543210'\n\nSo, in the result, the initial 200 bytes contain data of old-key which\nis what we expect.\n\n[1] - SELECT substr(data, 1, 200) FROM\npg_logical_slot_get_changes('regression_slot', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1');\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Feb 2022 08:44:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Sat, Feb 5, 2022 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 4, 2022 at 9:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> >\n> > I have some suggestions\n> > on the comments and docs though.\n> >\n>\n> Thanks, your suggestions look good to me. I'll take care of these in\n> the next version.\n>\n\nAttached please find the modified patches.\n\n-- \nWith Regards,\nAmit Kapila.",
"msg_date": "Mon, 7 Feb 2022 12:25:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Feb 5, 2022 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 4, 2022 at 9:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > >\n> > >\n> > > I have some suggestions\n> > > on the comments and docs though.\n> > >\n> >\n> > Thanks, your suggestions look good to me. I'll take care of these in\n> > the next version.\n> >\n>\n> Attached please find the modified patches.\n\nI have looked into the latest modification and back branch patches and\nthey look fine to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Feb 2022 13:27:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-07 08:44:00 +0530, Amit Kapila wrote:\n> Right, and it is getting changed. We are just printing the first 200\n> characters (by using SQL [1]) from the decoded tuple so what is shown\n> in the results is the initial 200 bytes.\n\nAh, I knew I must have been missing something.\n\n\n> The complete decoded data after the patch is as follows:\n\nHm. I think we should change the way the strings are shortened - otherwise we\ndon't really verify much in that test. Perhaps we could just replace the long\nrepetitive strings with something shorter in the output?\n\nE.g. using something like regexp_replace(data, '(1234567890|9876543210){200}', '\\1{200}','g')\ninside the substr().\n\nWonder if we should deduplicate the number of different toasted strings in the\nfile to something that'd allow us to have a single \"redact_toast\" function or\nsuch. There's too many different ones to have a reasonably simple redaction\nfunction right now. But that's perhaps better done separately. \n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 7 Feb 2022 11:17:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Feb 8, 2022 at 12:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-02-07 08:44:00 +0530, Amit Kapila wrote:\n> > Right, and it is getting changed. We are just printing the first 200\n> > characters (by using SQL [1]) from the decoded tuple so what is shown\n> > in the results is the initial 200 bytes.\n>\n> Ah, I knew I must have been missing something.\n>\n>\n> > The complete decoded data after the patch is as follows:\n>\n> Hm. I think we should change the way the strings are shortened - otherwise we\n> don't really verify much in that test. Perhaps we could just replace the long\n> repetitive strings with something shorter in the output?\n>\n> E.g. using something like regexp_replace(data, '(1234567890|9876543210){200}', '\\1{200}','g')\n> inside the substr().\n\n\nIMHO, in this particular case using regexp_replace as you explained\nwould be a good option as we will be verifying complete data instead\nof just the first 200 characters.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Feb 2022 13:17:47 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Feb 8, 2022 at 12:48 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-02-07 08:44:00 +0530, Amit Kapila wrote:\n> > Right, and it is getting changed. We are just printing the first 200\n> > characters (by using SQL [1]) from the decoded tuple so what is shown\n> > in the results is the initial 200 bytes.\n>\n> Ah, I knew I must have been missing something.\n>\n>\n> > The complete decoded data after the patch is as follows:\n>\n> Hm. I think we should change the way the strings are shortened - otherwise we\n> don't really verify much in that test. Perhaps we could just replace the long\n> repetitive strings with something shorter in the output?\n>\n> E.g. using something like regexp_replace(data, '(1234567890|9876543210){200}', '\\1{200}','g')\n> inside the substr().\n>\n\nThis sounds like a good idea. Shall we do this as part of this patch\nitself or as a separate improvement?\n\n> Wonder if we should deduplicate the number of different toasted strings in the\n> file to something that'd allow us to have a single \"redact_toast\" function or\n> such. There's too many different ones to have a reasonably simple redaction\n> function right now.\n>\n\nI think this is also worth trying.\n\n> But that's perhaps better done separately.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 8 Feb 2022 15:21:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
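The regexp_replace() trick agreed on above is easy to sanity-check outside the test suite. Here is a rough Python equivalent — `re.sub` standing in for PostgreSQL's regexp_replace(), and the `redact_toast` name borrowed from the discussion purely as an illustration — showing how a 200-fold toast payload collapses to a compact form, so the whole decoded value can be compared rather than only its first 200 characters:

```python
import re

def redact_toast(data: str) -> str:
    # \1 refers to the captured repetition unit; the {200} in the
    # replacement is emitted literally, mirroring '\1{200}' in the
    # regexp_replace() proposal above.
    return re.sub(r"(1234567890|9876543210){200}", r"\1{200}", data)

row = "toasted_key[text]:'" + "1234567890" * 200 + "'"
print(redact_toast(row))  # toasted_key[text]:'1234567890{200}'
```

With this shortening, the expected output in toast.out stays readable while still exercising the full decoded tuple, which is the point Andres raises about the substr(data, 1, 200) approach verifying very little.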
{
"msg_contents": "On Mon, Feb 7, 2022 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Sat, Feb 5, 2022 at 6:10 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> >\r\n> > On Fri, Feb 4, 2022 at 9:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n> > >\r\n> > >\r\n> > > I have some suggestions\r\n> > > on the comments and docs though.\r\n> > >\r\n> >\r\n> > Thanks, your suggestions look good to me. I'll take care of these in\r\n> > the next version.\r\n> >\r\n> \r\n> Attached please find the modified patches.\r\n> \r\n\r\nThanks for your patch. I tried it and it works well.\r\nTwo small comments:\r\n\r\n1)\r\n+static Bitmapset *HeapDetermineColumnsInfo(Relation relation,\r\n+\t\t\t\t\t\t\t\t\t\t Bitmapset *interesting_cols,\r\n+\t\t\t\t\t\t\t\t\t\t Bitmapset *external_cols,\r\n+\t\t\t\t\t\t\t\t\t\t HeapTuple oldtup, HeapTuple newtup,\r\n+\t\t\t\t\t\t\t\t\t\t bool *id_has_external);\r\n\r\n+HeapDetermineColumnsInfo(Relation relation,\r\n+\t\t\t\t\t\t Bitmapset *interesting_cols,\r\n+\t\t\t\t\t\t Bitmapset *external_cols,\r\n+\t\t\t\t\t\t HeapTuple oldtup, HeapTuple newtup,\r\n+\t\t\t\t\t\t bool *has_external)\r\n\r\nThe declaration and the definition of this function use different parameter\r\nnames for the last parameter (id_has_external and has_external), it's better to\r\nbe consistent.\r\n\r\n2)\r\n+\t\t/*\r\n+\t\t * Check if the old tuple's attribute is stored externally and is a\r\n+\t\t * member of external_cols.\r\n+\t\t */\r\n+\t\tif (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value1)) &&\r\n+\t\t\tbms_is_member(attrnum - FirstLowInvalidHeapAttributeNumber,\r\n+\t\t\t\t\t\t external_cols))\r\n+\t\t\t*has_external = true;\r\n\r\nIf has_external is already true, it seems we don't need this check, so should we\r\ncheck has_external first?\r\n\r\nRegards,\r\nTang\r\n",
"msg_date": "Wed, 9 Feb 2022 01:18:24 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Feb 8, 2022, at 10:18 PM, tanghy.fnst@fujitsu.com wrote:\n> 2)\n> + /*\n> + * Check if the old tuple's attribute is stored externally and is a\n> + * member of external_cols.\n> + */\n> + if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value1)) &&\n> + bms_is_member(attrnum - FirstLowInvalidHeapAttributeNumber,\n> + external_cols))\n> + *has_external = true;\n> \n> If has_external is already true, it seems we don't need this check, so should we\n> check has_external first?\nIs it worth it? I don't think so. It complicates a non-critical path. In\ngeneral, the condition will be executed once or twice.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Tue, 08 Feb 2022 22:46:33 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Feb 9, 2022 at 7:16 AM Euler Taveira <euler@eulerto.com> wrote:\n>\n> On Tue, Feb 8, 2022, at 10:18 PM, tanghy.fnst@fujitsu.com wrote:\n>\n> 2)\n> + /*\n> + * Check if the old tuple's attribute is stored externally and is a\n> + * member of external_cols.\n> + */\n> + if (VARATT_IS_EXTERNAL((struct varlena *) DatumGetPointer(value1)) &&\n> + bms_is_member(attrnum - FirstLowInvalidHeapAttributeNumber,\n> + external_cols))\n> + *has_external = true;\n>\n> If has_external is already true, it seems we don't need this check, so should we\n> check has_external first?\n>\n> Is it worth it? I don't think so.\n>\n\nI also don't think it is worth adding such a check.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Feb 2022 07:54:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Tue, Feb 8, 2022 3:18 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2022-02-07 08:44:00 +0530, Amit Kapila wrote:\n> > Right, and it is getting changed. We are just printing the first 200\n> > characters (by using SQL [1]) from the decoded tuple so what is shown\n> > in the results is the initial 200 bytes.\n> \n> Ah, I knew I must have been missing something.\n> \n> \n> > The complete decoded data after the patch is as follows:\n> \n> Hm. I think we should change the way the strings are shortened - otherwise we\n> don't really verify much in that test. Perhaps we could just replace the long\n> repetitive strings with something shorter in the output?\n> \n> E.g. using something like regexp_replace(data,\n> '(1234567890|9876543210){200}', '\\1{200}','g')\n> inside the substr().\n> \n> Wonder if we should deduplicate the number of different toasted strings in the\n> file to something that'd allow us to have a single \"redact_toast\" function or\n> such. There's too many different ones to have a reasonbly simple redaction\n> function right now. 
But that's perhaps better done separately.\n> \n\nI tried to make the output shorter using your suggestion like the following SQL, \nplease see the attached patch, which is based on v8 patch[1].\n\nSELECT substr(regexp_replace(data, '(1234567890|9876543210){200}', '\\1{200}','g'), 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n\nNote that some strings are still longer than 200 characters even though they have \nbeen shorter, so they can't be shown entirely.\n\ne.g.\ntable public.toasted_key: UPDATE: old-key: toasted_key[text]:'1234567890{200}' new-tuple: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[te\n\nThe entire string is:\ntable public.toasted_key: UPDATE: old-key: toasted_key[text]:'1234567890{200}' new-tuple: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[text]:'9876543210{200}'\n\nMaybe it's better to change the substr length to 250 to show the entire string, or we \ncan do it as separate HEAD only improvement where we can deduplicate some of the\nother long strings as well. Thoughts?\n\n[1] https://www.postgresql.org/message-id/CAA4eK1L_Z_2LDwMNbGrwoO%2BFc-2Q04YORQSA9UfGUTMQpy2O1Q%40mail.gmail.com\n\nRegards,\nTang",
"msg_date": "Wed, 9 Feb 2022 05:38:08 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Wed, Feb 9, 2022 at 11:08 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Feb 8, 2022 3:18 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-02-07 08:44:00 +0530, Amit Kapila wrote:\n> > > Right, and it is getting changed. We are just printing the first 200\n> > > characters (by using SQL [1]) from the decoded tuple so what is shown\n> > > in the results is the initial 200 bytes.\n> >\n> > Ah, I knew I must have been missing something.\n> >\n> >\n> > > The complete decoded data after the patch is as follows:\n> >\n> > Hm. I think we should change the way the strings are shortened - otherwise we\n> > don't really verify much in that test. Perhaps we could just replace the long\n> > repetitive strings with something shorter in the output?\n> >\n> > E.g. using something like regexp_replace(data,\n> > '(1234567890|9876543210){200}', '\\1{200}','g')\n> > inside the substr().\n> >\n> > Wonder if we should deduplicate the number of different toasted strings in the\n> > file to something that'd allow us to have a single \"redact_toast\" function or\n> > such. There's too many different ones to have a reasonbly simple redaction\n> > function right now. 
But that's perhaps better done separately.\n> >\n>\n> I tried to make the output shorter using your suggestion like the following SQL,\n> please see the attached patch, which is based on v8 patch[1].\n>\n> SELECT substr(regexp_replace(data, '(1234567890|9876543210){200}', '\\1{200}','g'), 1, 200) FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n>\n> Note that some strings are still longer than 200 characters even though they have\n> been shorter, so they can't be shown entirely.\n>\n> e.g.\n> table public.toasted_key: UPDATE: old-key: toasted_key[text]:'1234567890{200}' new-tuple: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[te\n>\n> The entire string is:\n> table public.toasted_key: UPDATE: old-key: toasted_key[text]:'1234567890{200}' new-tuple: id[integer]:1 toasted_key[text]:unchanged-toast-datum toasted_col1[text]:unchanged-toast-datum toasted_col2[text]:'9876543210{200}'\n>\n> Maybe it's better to change the substr length to 250 to show the entire string, or we\n> can do it as separate HEAD only improvement where we can deduplicate some of the\n> other long strings as well. Thoughts?\n>\n\nI think it is better to do this as a separate HEAD-only improvement as\nit can affect other tests results. We can also try to deduplicate some\nof the other long strings used in toast.sql file along with it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Feb 2022 07:44:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Feb 7, 2022 at 1:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Feb 7, 2022 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Attached please find the modified patches.\n>\n> I have looked into the latest modification and back branch patches and\n> they look fine to me.\n>\n\nToday, while looking at this patch again, I think I see one problem\nwith the below change (referring pg10 patch):\n+ if (attrnum < 0)\n+ {\n+ if (attrnum != ObjectIdAttributeNumber &&\n+ attrnum != TableOidAttributeNumber)\n+ {\n+ modified = bms_add_member(modified,\n+ attrnum -\n+ FirstLowInvalidHeapAttributeNumber);\n+ continue;\n+ }\n+ }\n...\n...\n+ /* No need to check attributes that can't be stored externally. */\n+ if (isnull1 || TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1)\n+ continue;\n\nI think it is possible that we use TupleDescAttr for system attribute\n(in this case ObjectIdAttributeNumber/TableOidAttributeNumber) which\nwill be wrong as it contains only user attributes, not system\nattributes. See comments atop TupleDescData.\n\nI think this check should be modified to if (attrnum < 0 || isnull1\n|| TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1). What do you\nthink?\n\n* Another minor comment:\n+ if (!heap_attr_equals(RelationGetDescr(relation), attrnum, value1,\n+ value2, isnull1, isnull2))\n\nI think here we can directly use tupdesc instead of RelationGetDescr(relation).\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Feb 2022 19:04:17 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Thu, Feb 10, 2022 9:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Feb 7, 2022 at 1:27 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\r\n> >\r\n> > On Mon, Feb 7, 2022 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > Attached please find the modified patches.\r\n> >\r\n> > I have looked into the latest modification and back branch patches and\r\n> > they look fine to me.\r\n> >\r\n> \r\n> Today, while looking at this patch again, I think I see one problem\r\n> with the below change (referring pg10 patch):\r\n> + if (attrnum < 0)\r\n> + {\r\n> + if (attrnum != ObjectIdAttributeNumber &&\r\n> + attrnum != TableOidAttributeNumber)\r\n> + {\r\n> + modified = bms_add_member(modified,\r\n> + attrnum -\r\n> + FirstLowInvalidHeapAttributeNumber);\r\n> + continue;\r\n> + }\r\n> + }\r\n> ...\r\n> ...\r\n> + /* No need to check attributes that can't be stored externally. */\r\n> + if (isnull1 || TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1)\r\n> + continue;\r\n> \r\n> I think it is possible that we use TupleDescAttr for system attribute\r\n> (in this case ObjectIdAttributeNumber/TableOidAttributeNumber) which\r\n> will be wrong as it contains only user attributes, not system\r\n> attributes. See comments atop TupleDescData.\r\n> \r\n> I think this check should be modified to if (attrnum < 0 || isnull1\r\n> || TupleDescAttr(tupdesc, attrnum - 1)->attlen != -1). 
What do you\r\n> think?\r\n> \r\n\r\nI agree with you.\r\n\r\n> * Another minor comment:\r\n> + if (!heap_attr_equals(RelationGetDescr(relation), attrnum, value1,\r\n> + value2, isnull1, isnull2))\r\n> \r\n> I think here we can directly use tupdesc instead of RelationGetDescr(relation).\r\n> \r\n\r\n+1.\r\n\r\nAttached the patches which fixed the above two comments and the first comment in\r\nmy previous mail [1], the rest is the same as before.\r\nI ran the tests on all branches, they all passed as expected.\r\n\r\n[1] https://www.postgresql.org/message-id/OS0PR01MB61134DD41BE6D986B9DB80CCFB2E9%40OS0PR01MB6113.jpnprd01.prod.outlook.com\r\n\r\nRegards,\r\nTang",
"msg_date": "Fri, 11 Feb 2022 06:30:45 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Feb 11, 2022 at 12:00 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> Attached the patches which fixed the above two comments and the first comment in\n> my previous mail [1], the rest is the same as before.\n> I ran the tests on all branches, they all passed as expected.\n>\n\nThanks, these look good to me. I'll push these early next week\n(Monday) unless there are any more suggestions or comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Feb 2022 16:27:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Fri, Feb 11, 2022 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 12:00 PM tanghy.fnst@fujitsu.com\n> <tanghy.fnst@fujitsu.com> wrote:\n> >\n> > Attached the patches which fixed the above two comments and the first comment in\n> > my previous mail [1], the rest is the same as before.\n> > I ran the tests on all branches, they all passed as expected.\n> >\n>\n> Thanks, these look good to me. I'll push these early next week\n> (Monday) unless there are any more suggestions or comments.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 14 Feb 2022 14:54:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "On Mon, Feb 14, 2022 at 2:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 11, 2022 at 12:00 PM tanghy.fnst@fujitsu.com\n> > <tanghy.fnst@fujitsu.com> wrote:\n> > >\n> > > Attached the patches which fixed the above two comments and the first comment in\n> > > my previous mail [1], the rest is the same as before.\n> > > I ran the tests on all branches, they all passed as expected.\n> > >\n> >\n> > Thanks, these look good to me. I'll push these early next week\n> > (Monday) unless there are any more suggestions or comments.\n> >\n>\n> Pushed!\n>\n\nThanks!!\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Feb 2022 15:15:26 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
},
{
"msg_contents": "Hi,\n\nOn 2022-02-14 14:54:41 +0530, Amit Kapila wrote:\n> On Fri, Feb 11, 2022 at 4:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Feb 11, 2022 at 12:00 PM tanghy.fnst@fujitsu.com\n> > <tanghy.fnst@fujitsu.com> wrote:\n> > >\n> > > Attached the patches which fixed the above two comments and the first comment in\n> > > my previous mail [1], the rest is the same as before.\n> > > I ran the tests on all branches, they all passed as expected.\n> > >\n> >\n> > Thanks, these look good to me. I'll push these early next week\n> > (Monday) unless there are any more suggestions or comments.\n> >\n> \n> Pushed!\n\nThanks for all the work on this!\n\n\n",
"msg_date": "Mon, 14 Feb 2022 19:20:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG]Update Toast data failure in logical replication"
}
] |
[
{
"msg_contents": "Hi all,\n\nNow that hamerkop has been fixed and that we have some coverage with\nbuilds of GSSAPI on Windows thanks to 02511066, the buildfarm has been\ncomplaining about a build failure on Windows for 12 and 13:\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hamerkop&dt=2021-05-28%2011%3A06%3A18&stg=make\n\nThe logs are hard to decrypt, but I guess that this is caused by the\nuse of setenv() in be-secure-gssapi.c and auth.c, as the tree has no\nimplementation that MSVC could feed on for those branches.\n\nThe recent commit 7ca37fb has changed things so as setenv() is used\ninstead of putenv(), and provides a fallback implementation, which\nexplains why the compilation of be-secure-gssapi.c and auth.c works\nwith MSVC, as reported by hamerkop.\n\nWe can do two things here:\n1) Switch be-secure-gssapi.c and auth.c to use putenv().\n2) Backport into 12 and 13 the fallback implementation of setenv\nintroduced in 7ca37fb, and keep be-secure-gssapi.c as they are now.\n\nIt is worth noting that 860fe27 mentions the use of setenv() in\nbe-secure-gssapi.c but has done nothing for it. I would choose 1), on\nthe ground that adding a new file on back-branches adds an additional\ncost to Windows maintainers if they use their own MSVC scripts (I know\nof one case here that would be impacted), and that does not seem\nmandatory here as putenv() would just work.\n\nThoughts?\n--\nMichael",
"msg_date": "Fri, 28 May 2021 22:18:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "be-secure-gssapi.c and auth.c with setenv() not compatible on Windows"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> We can do two things here:\n> 1) Switch be-secure-gssapi.c and auth.c to use putenv().\n> 2) Backport into 12 and 13 the fallback implementation of setenv\n> introduced in 7ca37fb, and keep be-secure-gssapi.c as they are now.\n\nThere's a lot of value in keeping the branches looking alike.\nOn the other hand, 7ca37fb hasn't survived contact with the\npublic yet, so I'm a bit nervous about it.\n\nIt's not clear to me how much of 7ca37fb you're envisioning\nback-patching in (2). I think it'd be best to back-patch\nonly the addition of pgwin32_setenv, and then let the gssapi\ncode use it. In that way, if there's anything wrong with\npgwin32_setenv, we're only breaking code that never worked\non Windows before anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 May 2021 11:37:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
},
{
"msg_contents": "On Fri, May 28, 2021 at 11:37:22AM -0400, Tom Lane wrote:\n> There's a lot of value in keeping the branches looking alike.\n> On the other hand, 7ca37fb hasn't survived contact with the\n> public yet, so I'm a bit nervous about it.\n\nI don't think this set of complications is worth the risk\ndestabilizing those stable branches.\n\n> It's not clear to me how much of 7ca37fb you're envisioning\n> back-patching in (2). I think it'd be best to back-patch\n> only the addition of pgwin32_setenv, and then let the gssapi\n> code use it. In that way, if there's anything wrong with\n> pgwin32_setenv, we're only breaking code that never worked\n> on Windows before anyway.\n\nJust to be clear, for 2) I was thinking to pick up the minimal parts\nyou have changed in win32env.c and add src/port/setenv.c to add the\nfallback implementation of setenv(), without changing anything else.\nThis also requires grabbing the small changes within pgwin32_putenv(),\nvisibly.\n--\nMichael",
"msg_date": "Sat, 29 May 2021 17:52:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, May 28, 2021 at 11:37:22AM -0400, Tom Lane wrote:\n>> It's not clear to me how much of 7ca37fb you're envisioning\n>> back-patching in (2). I think it'd be best to back-patch\n>> only the addition of pgwin32_setenv, and then let the gssapi\n>> code use it. In that way, if there's anything wrong with\n>> pgwin32_setenv, we're only breaking code that never worked\n>> on Windows before anyway.\n\n> Just to be clear, for 2) I was thinking to pick up the minimal parts\n> you have changed in win32env.c and add src/port/setenv.c to add the\n> fallback implementation of setenv(), without changing anything else.\n> This also requires grabbing the small changes within pgwin32_putenv(),\n> visibly.\n\nWhat I had in mind was to *only* add pgwin32_setenv, not setenv.c.\nThere's no evidence that any other modern platform lacks setenv.\nMoreover, there's no issue in these branches unless your platform\nlacks setenv yet has GSS support.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 May 2021 10:44:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
},
{
"msg_contents": "On Sat, May 29, 2021 at 10:44:14AM -0400, Tom Lane wrote:\n> What I had in mind was to *only* add pgwin32_setenv, not setenv.c.\n> There's no evidence that any other modern platform lacks setenv.\n> Moreover, there's no issue in these branches unless your platform\n> lacks setenv yet has GSS support.\n\nI have been finally able to poke at that, resulting in the attached.\nYou are right that adding only the fallback implementation for\nsetenv() seems to be enough. I cannot get my environment to complain,\nand the code compiles.\n--\nMichael",
"msg_date": "Mon, 31 May 2021 09:14:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
},
{
"msg_contents": "On Mon, May 31, 2021 at 09:14:36AM +0900, Michael Paquier wrote:\n> I have been finally able to poke at that, resulting in the attached.\n> You are right that adding only the fallback implementation for\n> setenv() seems to be enough. I cannot get my environment to complain,\n> and the code compiles.\n\nOkay, applied this stuff to 12 and 13 to take care of the build\nfailures with hamerkop. The ECPG tests should also turn back to green\nthere.\n--\nMichael",
"msg_date": "Tue, 1 Jun 2021 10:14:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
},
{
"msg_contents": "On Tue, Jun 01, 2021 at 10:14:49AM +0900, Michael Paquier wrote:\n> Okay, applied this stuff to 12 and 13 to take care of the build\n> failures with hamerkop. The ECPG tests should also turn back to green\n> there.\n\nhamerkop has reported back, and things are now good on those\nbranches. Now for the remaining issue of HEAD..\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 12:26:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: be-secure-gssapi.c and auth.c with setenv() not compatible on\n Windows"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen trying to upgrade an existing database from version 10 to 13 I came\nacross a degression in some existing code used by clients. Further\ninvestigations showed that performance measures are similar in versions\n11 to 13, while in the original database on version 10 it's around 100\ntimes faster. I could boil it down to perl functions used for sorting.\n\n>From the real data that I don't own, I created a test case that is\nsufficient to observe the degression: http://ix.io/3o7f\n\n\nThese are the numbers on PG 10:\n\n> test=# explain (analyze, verbose, buffers)\n> select attr from tab order by func(attr);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual time=179.374..180.558 rows=9869 loops=1)\n> Output: attr, (func(attr))\n> Sort Key: (func(tab.attr))\n> Sort Method: quicksort Memory: 1436kB\n> Buffers: shared hit=49\n> -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40) (actual time=2.293..169.060 rows=9869 loops=1)\n> Output: attr, func(attr)\n> Buffers: shared hit=49\n> Planning time: 0.318 ms\n> Execution time: 182.061 ms\n> (10 rows)\n> \n> test=# explain (analyze, verbose, buffers)\n> select attr from tab;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> Seq Scan on public.tab (cost=0.00..147.69 rows=9869 width=8) (actual time=0.045..3.975 rows=9869 loops=1)\n> Output: attr\n> Buffers: shared hit=49\n> Planning time: 0.069 ms\n> Execution time: 6.020 ms\n> (5 rows)\n\n\nAnd here we have PG 11:\n\n> test=# explain (analyze, verbose, buffers)\n> select attr from tab order by func(attr);\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual 
time=597.877..599.805 rows=9869 loops=1)\n> Output: attr, (func(attr))\n> Sort Key: (func(tab.attr))\n> Sort Method: quicksort Memory: 1436kB\n> Buffers: shared hit=49\n> -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40) (actual time=0.878..214.188 rows=9869 loops=1)\n> Output: attr, func(attr)\n> Buffers: shared hit=49\n> Planning Time: 0.151 ms\n> Execution Time: 601.767 ms\n> (10 rows)\n> \n> test=# explain (analyze, verbose, buffers)\n> select attr from tab;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> Seq Scan on public.tab (cost=0.00..147.69 rows=9869 width=8) (actual time=0.033..1.628 rows=9869 loops=1)\n> Output: attr\n> Buffers: shared hit=49\n> Planning Time: 0.043 ms\n> Execution Time: 2.581 ms\n> (5 rows)\n\n\nIn the real scenario it's 500ms vs. 50s. The reason is obviously the\nperl function used as sort key. All different versions have been tested\nwith an unmodified config and one tunes with pgtune. Creating a\nfunctional index does not help in the original database as the planner\ndoesn't use it, while it *is* used in the test case. But the question\nwhat causes that noticeable difference in performance is untouched by\nthe fact that it could be circumvented in some cases.\n\nThe perl version used is v5.24.1.\n\nBest\n Johannes\n\n\n\n",
"msg_date": "Fri, 28 May 2021 16:12:33 +0200",
"msg_from": "=?UTF-8?Q?Johannes_Gra=c3=abn?= <johannes@selfnet.de>",
"msg_from_op": true,
"msg_subject": "Degression (PG10 > 11, 12 or 13)"
},
{
"msg_contents": "On 5/28/21 4:12 PM, Johannes Graën wrote:\n> Hi,\n> \n> When trying to upgrade an existing database from version 10 to 13 I came\n> across a degression in some existing code used by clients. Further\n> investigations showed that performance measures are similar in versions\n> 11 to 13, while in the original database on version 10 it's around 100\n> times faster. I could boil it down to perl functions used for sorting.\n> \n>>From the real data that I don't own, I created a test case that is\n> sufficient to observe the degression: http://ix.io/3o7f\n> \n\nThat function is pretty much just a sequence of ~120 regular\nexpressions, doing something similar to unaccent(). I wonder if we're\ncalling the function much more often, perhaps due to some changes in the\nsort code (the function is immutable, but that does not guarantee it's\ncalled just once).\n\nIt'd be interesting to see profiles from perf, both from 10 and 11.\n\nAlso, maybe try materializing the function results before doing the\nsort, perhaps like this:\n\nSELECT * FROM (select attr, func(attr) as fattr from tab offset 0) foo\nORDER BY fattr;\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 May 2021 17:47:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Degression (PG10 > 11, 12 or 13)"
},
{
"msg_contents": "hI\r\n\r\npá 28. 5. 2021 v 16:12 odesílatel Johannes Graën <johannes@selfnet.de>\r\nnapsal:\r\n\r\n> Hi,\r\n>\r\n> When trying to upgrade an existing database from version 10 to 13 I came\r\n> across a degression in some existing code used by clients. Further\r\n> investigations showed that performance measures are similar in versions\r\n> 11 to 13, while in the original database on version 10 it's around 100\r\n> times faster. I could boil it down to perl functions used for sorting.\r\n>\r\n> >From the real data that I don't own, I created a test case that is\r\n> sufficient to observe the degression: http://ix.io/3o7f\r\n>\r\n>\r\n> These are the numbers on PG 10:\r\n>\r\n> > test=# explain (analyze, verbose, buffers)\r\n> > select attr from tab order by func(attr);\r\n> > QUERY PLAN\r\n> >\r\n> ----------------------------------------------------------------------------------------------------------------------\r\n> > Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual\r\n> time=179.374..180.558 rows=9869 loops=1)\r\n> > Output: attr, (func(attr))\r\n> > Sort Key: (func(tab.attr))\r\n> > Sort Method: quicksort Memory: 1436kB\r\n> > Buffers: shared hit=49\r\n> > -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40)\r\n> (actual time=2.293..169.060 rows=9869 loops=1)\r\n> > Output: attr, func(attr)\r\n> > Buffers: shared hit=49\r\n> > Planning time: 0.318 ms\r\n> > Execution time: 182.061 ms\r\n> > (10 rows)\r\n> >\r\n> > test=# explain (analyze, verbose, buffers)\r\n> > select attr from tab;\r\n> > QUERY PLAN\r\n> >\r\n> ------------------------------------------------------------------------------------------------------------\r\n> > Seq Scan on public.tab (cost=0.00..147.69 rows=9869 width=8) (actual\r\n> time=0.045..3.975 rows=9869 loops=1)\r\n> > Output: attr\r\n> > Buffers: shared hit=49\r\n> > Planning time: 0.069 ms\r\n> > Execution time: 6.020 ms\r\n> > (5 rows)\r\n>\r\n>\r\n> And here we have PG 11:\r\n>\r\n> > test=# 
explain (analyze, verbose, buffers)\r\n> > select attr from tab order by func(attr);\r\n> > QUERY PLAN\r\n> >\r\n> ----------------------------------------------------------------------------------------------------------------------\r\n> > Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual\r\n> time=597.877..599.805 rows=9869 loops=1)\r\n> > Output: attr, (func(attr))\r\n> > Sort Key: (func(tab.attr))\r\n> > Sort Method: quicksort Memory: 1436kB\r\n> > Buffers: shared hit=49\r\n> > -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40)\r\n> (actual time=0.878..214.188 rows=9869 loops=1)\r\n> > Output: attr, func(attr)\r\n> > Buffers: shared hit=49\r\n> > Planning Time: 0.151 ms\r\n> > Execution Time: 601.767 ms\r\n> > (10 rows)\r\n> >\r\n> > test=# explain (analyze, verbose, buffers)\r\n> > select attr from tab;\r\n> > QUERY PLAN\r\n> >\r\n> ------------------------------------------------------------------------------------------------------------\r\n> > Seq Scan on public.tab (cost=0.00..147.69 rows=9869 width=8) (actual\r\n> time=0.033..1.628 rows=9869 loops=1)\r\n> > Output: attr\r\n> > Buffers: shared hit=49\r\n> > Planning Time: 0.043 ms\r\n> > Execution Time: 2.581 ms\r\n> > (5 rows)\r\n>\r\n>\r\n> In the real scenario it's 500ms vs. 50s. The reason is obviously the\r\n> perl function used as sort key. All different versions have been tested\r\n> with an unmodified config and one tunes with pgtune. Creating a\r\n> functional index does not help in the original database as the planner\r\n> doesn't use it, while it *is* used in the test case. But the question\r\n> what causes that noticeable difference in performance is untouched by\r\n> the fact that it could be circumvented in some cases.\r\n>\r\n> The perl version used is v5.24.1.\r\n>\r\n\r\n I looked on profile - Postgres 14\r\n\r\n 5,67% libperl.so.5.32.1 [.] Perl_utf8_length\r\n 5,44% libc-2.33.so [.] __strcoll_l\r\n 4,73% libperl.so.5.32.1 [.] 
Perl_pp_subst\r\n 4,33% libperl.so.5.32.1 [.] Perl_re_intuit_start\r\n 3,25% libperl.so.5.32.1 [.] Perl_fbm_instr\r\n 1,92% libperl.so.5.32.1 [.] Perl_regexec_flags\r\n 1,79% libperl.so.5.32.1 [.] Perl_runops_standard\r\n 1,16% libperl.so.5.32.1 [.] Perl_pp_const\r\n 0,97% perf [.] 0x00000000002fcf93\r\n 0,94% libperl.so.5.32.1 [.] Perl_pp_nextstate\r\n 0,68% libperl.so.5.32.1 [.] Perl_do_trans\r\n 0,54% perf [.] 0x00000000003dd0c5\r\n\r\nand Postgres - 10\r\n\r\n 5,45% libperl.so.5.32.1 [.] Perl_utf8_length\r\n 4,78% libc-2.33.so [.] __strcoll_l\r\n 4,15% libperl.so.5.32.1 [.] Perl_re_intuit_start\r\n 3,92% libperl.so.5.32.1 [.] Perl_pp_subst\r\n 2,99% libperl.so.5.32.1 [.] Perl_fbm_instr\r\n 1,77% libperl.so.5.32.1 [.] Perl_regexec_flags\r\n 1,59% libperl.so.5.32.1 [.] Perl_runops_standard\r\n 1,02% libperl.so.5.32.1 [.] Perl_pp_const\r\n 0,99% [kernel] [k] psi_group_change\r\n 0,85% [kernel] [k] switch_mm_irqs_off\r\n\r\nand it doesn't look too strange.\r\n\r\n-- postgres 14\r\npostgres=# explain (analyze, verbose, buffers)\r\n select attr from tab order by func(attr);\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual\r\ntime=246.612..247.292 rows=9869 loops=1) │\r\n│ Output: attr, (func(attr))\r\n │\r\n│ Sort Key: (func(tab.attr))\r\n │\r\n│ Sort Method: quicksort Memory: 1436kB\r\n │\r\n│ Buffers: shared hit=49\r\n │\r\n│ -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40)\r\n(actual time=0.102..201.863 rows=9869 loops=1) │\r\n│ Output: attr, func(attr)\r\n │\r\n│ Buffers: shared hit=49\r\n │\r\n│ Planning Time: 0.057 ms\r\n │\r\n│ Execution Time: 248.386 ms\r\n 
│\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(10 rows)\r\n\r\n-- postgres 10\r\npostgres=# explain (analyze, verbose, buffers)\r\n select attr from tab order by func(attr);\r\n┌──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐\r\n│ QUERY PLAN\r\n │\r\n╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡\r\n│ Sort (cost=3269.68..3294.36 rows=9869 width=40) (actual\r\ntime=267.779..268.366 rows=9869 loops=1) │\r\n│ Output: attr, (func(attr))\r\n │\r\n│ Sort Key: (func(tab.attr))\r\n │\r\n│ Sort Method: quicksort Memory: 1436kB\r\n │\r\n│ Buffers: shared hit=49\r\n │\r\n│ -> Seq Scan on public.tab (cost=0.00..2614.94 rows=9869 width=40)\r\n(actual time=0.278..222.606 rows=9869 loops=1) │\r\n│ Output: attr, func(attr)\r\n │\r\n│ Buffers: shared hit=49\r\n │\r\n│ Planning time: 0.132 ms\r\n │\r\n│ Execution time: 269.258 ms\r\n │\r\n└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘\r\n(10 rows)\r\n\r\nThis is tested on my laptop - both version uses same locale\r\n\r\nAre you sure, so all databases use the same encoding and same locale?\r\n\r\nRegards\r\n\r\nPavel\r\n\r\n\r\n\r\n> Best\r\n> Johannes\r\n>\r\n>\r\n>\r\n>",
"msg_date": "Fri, 28 May 2021 17:48:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Degression (PG10 > 11, 12 or 13)"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> On Fri, 28 May 2021 at 16:12, Johannes Graën <johannes@selfnet.de>\n> wrote:\n>> When trying to upgrade an existing database from version 10 to 13 I came\n>> across a degression in some existing code used by clients. Further\n>> investigations showed that performance measures are similar in versions\n>> 11 to 13, while in the original database on version 10 it's around 100\n>> times faster. I could boil it down to perl functions used for sorting.\n\n> Are you sure, so all databases use the same encoding and same locale?\n\nYeah ... I don't know too much about the performance of Perl regexps,\nbut it'd be plausible that it varies depending on locale setting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 May 2021 12:24:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Degression (PG10 > 11, 12 or 13)"
},
{
"msg_contents": "On 28/05/2021 18.24, Tom Lane wrote:\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> pá 28. 5. 2021 v 16:12 odesílatel Johannes Graën <johannes@selfnet.de>\n>> napsal:\n>>> When trying to upgrade an existing database from version 10 to 13 I came\n>>> across a degression in some existing code used by clients. Further\n>>> investigations showed that performance measures are similar in versions\n>>> 11 to 13, while in the original database on version 10 it's around 100\n>>> times faster. I could boil it down to perl functions used for sorting.\n> \n>> Are you sure, so all databases use the same encoding and same locale?\n> \n> Yeah ... I don't know too much about the performance of Perl regexps,\n> but it'd be plausible that it varies depending on locale setting.\n\nIt probably wasn't Perl at all. Thanks to the hint I checked the initial\ndatabase again and, while encoding and ctype are set to UTF8, the\ncollation is C, which makes a huge difference:\n\n... order by tab(attr) => Execution Time: 51429.875 ms\n... order by tab(attr collate \"C\") => Execution Time: 537.757 ms\n\nin the original database. Any other version yields similar times.\n\n\nOn 28/05/2021 17.47, Tomas Vondra wrote:\n> That function is pretty much just a sequence of ~120 regular\n> expressions, doing something similar to unaccent(). 
I wonder if we're\n> calling the function much more often, perhaps due to some changes in the\n> sort code (the function is immutable, but that does not guarantee it's\n> called just once).\n\n> Also, maybe try materializing the function results before doing the\n> sort, perhaps like this:\n>\n> SELECT * FROM (select attr, func(attr) as fattr from tab offset 0) foo\n> ORDER BY fattr;\n\nI was expecting it to be called once in the process of sorting, and it\nseems that this is actually true for all version and different\ncollations, but sorting for a collation that is not C requires\nconsiderable more resources (that still needs to be shown for other\ncollations, but I see the overhead of having more or less complex\ndefinitions vs. just comparing numbers).\n\nThat being said, I would have used unaccent or, if that wasn't an\noption, maybe have those values calculated by a trigger function when\nthe corresponding rows are changed. But I don't control the code.\n\nNow what keeps me wondering is how the sorting works internally and if\nwe could conclude that using the C collation in order expressions and\nindexes is a general way to speed up queries - if the actual order is of\nless importance.\n\nBest\n Johannes\n\n\n",
"msg_date": "Sat, 29 May 2021 01:19:52 +0200",
"msg_from": "=?UTF-8?Q?Johannes_Gra=c3=abn?= <johannes@selfnet.de>",
"msg_from_op": true,
"msg_subject": "Re: Degression (PG10 > 11, 12 or 13)"
}
] |
[
{
"msg_contents": "[Reposted to the proper list]\n\nI started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4 at \none point), gradually moving to v9.0 w/ replication in 2010. In 2017 I \nmoved my 20GB database to AWS/RDS, gradually upgrading to v9.6, & was \nentirely satisfied with the result.\n\nIn March of this year, AWS announced that v9.6 was nearing end of \nsupport, & AWS would forcibly upgrade everyone to v12 on January 22, \n2022, if users did not perform the upgrade earlier. My first attempt \nwas successful as far as the upgrade itself, but complex queries that \nnormally ran in a couple of seconds on v9.x, were taking minutes in v12.\n\nI didn't have the time in March to diagnose the problem, other than some \nfutile adjustments to server parameters, so I reverted back to a saved \ncopy of my v9.6 data.\n\nOn Sunday, being retired, I decided to attempt to solve the issue in \nearnest. I have now spent five days (about 14 hours a day), trying \nvarious things, including adding additional indexes. Keeping the v9.6 \ndata online for web users, I've \"forked\" the data into new copies, & \nupdated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit \nthe same problem: As you will see below, it appears that versions 10 & \nabove are doing a sequential scan of some of the \"large\" (200K rows) \ntables. Note that the expected & actual run times both differ for v9.6 \n& v13.2, by more than *two orders of magnitude*. Rather than post a huge \neMail (ha ha), I'll start with this one, that shows an \"EXPLAIN ANALYZE\" \nfrom both v9.6 & v13.2, followed by the related table & view \ndefinitions. 
With one exception, table definitions are from the FCC \n(Federal Communications Commission); the view definitions are my own.\n\n*Here's from v9.6:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407.13..407.13 rows=1 width=94) (actual \ntime=348.850..348.859 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual \ntime=7.587..348.732 rows=43 loops=1)\n -> Nested Loop (cost=4.47..394.66 rows=1 width=94) (actual \ntime=5.740..248.149 rows=43 loops=1)\n -> Nested Loop Left Join (cost=4.04..382.20 rows=1 \nwidth=79) (actual time=2.458..107.908 rows=55 loops=1)\n -> Hash Join (cost=3.75..380.26 rows=1 width=86) \n(actual time=2.398..106.990 rows=55 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Nested Loop (cost=0.43..376.46 rows=47 \nwidth=94) (actual time=2.294..106.736 rows=55 loops=1)\n -> Seq Scan on \"_Club\" \n(cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 \nloops=1)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 151\n -> Index Scan using \"_EN_callsign\" on \n\"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 \nrows=1 loops=44)\n Index Cond: (callsign = \n\"_Club\".trustee_callsign)\n -> Hash (cost=1.93..1.93 rows=93 width=7) \n(actual time=0.071..0.071 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" 
\n(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 loops=1)\n -> Nested Loop (cost=0.29..1.93 rows=1 width=7) \n(actual time=0.012..0.014 rows=1 loops=55)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62 rows=1 \nwidth=3) (actual time=0.006..0.006 rows=1 loops=55)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 55\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n (actual time=0.004..0.005 rows=1 loops=55)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 59\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 \nloops=55)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.15..8.17 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=55)\n -> Index Scan using \"_LicStatus_pkey\" on \n\"_LicStatus\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=55)\n Index Cond: (\"_HD\".license_status = \nstatus_id)\n -> Index Scan using \"_AM_pkey\" on \"_AM\" (cost=0.43..4.27 \nrows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: (\"_EN\".callsign = callsign)\n SubPlan 1\n -> Limit (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.007..0.007 rows=1 loops=43)\n -> Index Scan using \"_ApplicantType_pkey\" on \n\"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=43)\n Index Cond: (\"_EN\".applicant_type_code = 
\napp_type_id)\n Planning time: 13.490 ms\n Execution time: 349.182 ms\n(43 rows)\n\n\n*Here's from v13.2:*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=144365.60..144365.60 rows=1 width=94) (actual \ntime=31898.860..31901.922 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94) (actual \ntime=6132.403..31894.233 rows=43 loops=1)\n -> Nested Loop (cost=58055.51..144364.21 rows=1 width=62) \n(actual time=1226.085..30337.921 rows=837792 loops=1)\n -> Nested Loop Left Join (cost=58055.09..144360.38 \nrows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)\n -> Hash Join (cost=58054.80..144359.69 rows=1 \nwidth=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".unique_system_identifier \n= \"_AM\".unique_system_identifier) AND (\"_EN\".callsign = \"_AM\".callsign))\n -> Hash Join (cost=3.33..53349.72 \nrows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)\n Hash Cond: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n -> Seq Scan on \"_EN\" \n(cost=0.00..45288.05 rows=1509005 width=60) (actual time=0.037..2737.054 \nrows=1508736 loops=1)\n -> Hash (cost=1.93..1.93 rows=93 \nwidth=7) (actual time=0.706..1.264 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 12kB\n -> Seq Scan on \"_GovtRegion\" \n(cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 
loops=1)\n -> Hash (cost=28093.99..28093.99 \nrows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 loops=1)\n Buckets: 131072 Batches: 32 Memory \nUsage: 3175kB\n -> Seq Scan on \"_AM\" \n(cost=0.00..28093.99 rows=1506699 width=15) (actual time=0.009..742.774 \nrows=1506474 loops=1)\n -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n(actual time=0.003..0.004 rows=1 loops=1487153)\n Join Filter: (\"_IsoCountry\".iso_alpha2 = \n\"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using \n\"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \nwidth=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n Index Cond: (iso_alpha2 = \n\"_GovtRegion\".country_id)\n Heap Fetches: 1487153\n -> Index Only Scan using \"_Territory_pkey\" \non \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \ntime=0.001..0.001 rows=1 loops=1487153)\n Index Cond: (territory_id = \n\"_GovtRegion\".territory_id)\n Heap Fetches: 1550706\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n(cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 \nloops=1487153)\n Index Cond: (unique_system_identifier = \n\"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND \n(((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n'???'::character varying))::text))::character(1) = 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.00..1.07 rows=1 width=13) \n(actual time=0.001..0.001 rows=1 loops=1487153)\n -> Seq Scan on \"_LicStatus\" \n(cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \nloops=1487153)\n Filter: (\"_HD\".license_status = \nstatus_id)\n Rows Removed by Filter: 1\n -> Index Scan using \"_Club_pkey\" on \"_Club\" (cost=0.14..0.17 \nrows=1 width=35) (actual time=0.002..0.002 rows=0 loops=837792)\n Index Cond: (trustee_callsign = \"_EN\".callsign)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 0\n SubPlan 1\n -> Limit (cost=0.00..1.20 rows=1 
width=15) (actual \ntime=0.060..0.060 rows=1 loops=43)\n -> Seq Scan on \"_ApplicantType\" (cost=0.00..1.20 \nrows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)\n Filter: (\"_EN\".applicant_type_code = app_type_id)\n Rows Removed by Filter: 7\n Planning Time: 173.753 ms\n Execution Time: 31919.601 ms\n(46 rows)\n\n\n*VIEW genclub_multi_:*\n\n=> \\d+ genclub_multi_\n View \"Callsign.genclub_multi_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | 
text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\n club_count | bigint | | | | \nplain |\n extra_count | bigint | | | | \nplain |\n region_count | bigint | | | | \nplain |\nView definition:\n SELECT licjb_.sys_id,\n licjb_.callsign,\n licjb_.fcc_reg_num,\n licjb_.licensee_id,\n licjb_.subgroup_id_num,\n licjb_.applicant_type,\n licjb_.entity_type,\n licjb_.entity_name,\n licjb_.attention,\n licjb_.first_name,\n licjb_.middle_init,\n licjb_.last_name,\n licjb_.name_suffix,\n licjb_.street_address,\n licjb_.po_box,\n licjb_.locality,\n licjb_.locality_,\n licjb_.county,\n licjb_.state,\n licjb_.postal_code,\n licjb_.full_name,\n licjb_._entity_name,\n licjb_._first_name,\n licjb_._last_name,\n licjb_.zip5,\n licjb_.zip_location,\n licjb_.maidenhead,\n licjb_.geo_region,\n licjb_.uls_file_num,\n licjb_.radio_service,\n licjb_.license_status,\n licjb_.grant_date,\n licjb_.effective_date,\n licjb_.cancel_date,\n licjb_.expire_date,\n licjb_.end_date,\n licjb_.available_date,\n licjb_.last_action_date,\n licjb_.uls_region,\n licjb_.callsign_group,\n licjb_.operator_group,\n licjb_.operator_class,\n licjb_.prev_class,\n licjb_.prev_callsign,\n 
licjb_.vanity_type,\n licjb_.is_trustee,\n licjb_.trustee_callsign,\n licjb_.trustee_name,\n licjb_.validity,\n gen.club_count,\n gen.extra_count,\n gen.region_count\n FROM licjb_,\n \"GenLicClub\" gen\n WHERE licjb_.callsign = gen.trustee_callsign AND \nlicjb_.license_status::character(1) = 'A'::bpchar;\n*\n**VIEW GenLicClub:*\n\n=> \\d+ \"GenLicClub\"\n View \"Callsign.GenLicClub\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n trustee_callsign | character(10) | | | | extended |\n club_count | bigint | | | | plain |\n extra_count | bigint | | | | plain |\n region_count | bigint | | | | plain |\nView definition:\n SELECT \"_Club\".trustee_callsign,\n \"_Club\".club_count,\n \"_Club\".extra_count,\n \"_Club\".region_count\n FROM \"GenLic\".\"_Club\";\n\n*TABLE \"GenLic\".\"_Club\":*\n\n=> \\d+ \"GenLic\".\"_Club\"\n Table \"GenLic._Club\"\n Column | Type | Collation | Nullable | Default | \nStorage | Stats target | Description\n------------------+---------------+-----------+----------+---------+----------+--------------+-------------\n trustee_callsign | character(10) | | not null | | extended \n| |\n club_count | bigint | | | | plain \n| |\n extra_count | bigint | | | | plain \n| |\n region_count | bigint | | | | plain \n| |\nIndexes:\n \"_Club_pkey\" PRIMARY KEY, btree (trustee_callsign)\n\n*VIEW licjb_:*\n\n=> \\d+ licjb_\n View \"Callsign.licjb_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | 
\nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\n uls_file_num | character(14) | | | | \nextended |\n radio_service | text | | | | \nextended |\n license_status | text | | | | \nextended |\n grant_date | date | | | | \nplain |\n effective_date | date | | | | \nplain |\n cancel_date | date | | | | \nplain |\n expire_date | date | | | | \nplain |\n end_date | date | | | | \nplain |\n available_date | date | | | | \nplain |\n last_action_date | date | | | | \nplain |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\n validity | integer | | | | \nplain |\nView definition:\n SELECT 
lic_en_.sys_id,\n lic_en_.callsign,\n lic_en_.fcc_reg_num,\n lic_en_.licensee_id,\n lic_en_.subgroup_id_num,\n lic_en_.applicant_type,\n lic_en_.entity_type,\n lic_en_.entity_name,\n lic_en_.attention,\n lic_en_.first_name,\n lic_en_.middle_init,\n lic_en_.last_name,\n lic_en_.name_suffix,\n lic_en_.street_address,\n lic_en_.po_box,\n lic_en_.locality,\n lic_en_.locality_,\n lic_en_.county,\n lic_en_.state,\n lic_en_.postal_code,\n lic_en_.full_name,\n lic_en_._entity_name,\n lic_en_._first_name,\n lic_en_._last_name,\n lic_en_.zip5,\n lic_en_.zip_location,\n lic_en_.maidenhead,\n lic_en_.geo_region,\n lic_hd_.uls_file_num,\n lic_hd_.radio_service,\n lic_hd_.license_status,\n lic_hd_.grant_date,\n lic_hd_.effective_date,\n lic_hd_.cancel_date,\n lic_hd_.expire_date,\n lic_hd_.end_date,\n lic_hd_.available_date,\n lic_hd_.last_action_date,\n lic_am_.uls_region,\n lic_am_.callsign_group,\n lic_am_.operator_group,\n lic_am_.operator_class,\n lic_am_.prev_class,\n lic_am_.prev_callsign,\n lic_am_.vanity_type,\n lic_am_.is_trustee,\n lic_am_.trustee_callsign,\n lic_am_.trustee_name,\n CASE\n WHEN lic_am_.vanity_type::character(1) = ANY \n(ARRAY['A'::bpchar, 'C'::bpchar]) THEN verify_callsign(lic_en_.callsign, \nlic_en_.licensee_id, lic_hd_.grant_date, lic_en_.state::bpchar, \nlic_am_.operator_class::bpchar, lic_en_.applicant_type::bpchar, \nlic_am_.trustee_callsign)\n ELSE NULL::integer\n END AS validity\n FROM lic_en_\n JOIN lic_hd_ USING (sys_id, callsign)\n JOIN lic_am_ USING (sys_id, callsign);\n\n*VIEW lic_en_:*\n\n=> \\d+ lic_en_\n View \"Callsign.lic_en_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended 
|\n applicant_type | text | | | | \nextended |\n entity_type | text | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n locality_ | character varying | | | | \nextended |\n county | character varying | | | | \nextended |\n state | text | | | | \nextended |\n postal_code | text | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n zip_location | \"GeoPosition\" | | | | \nextended |\n maidenhead | bpchar | | | | \nextended |\n geo_region | smallint | | | | \nplain |\nView definition:\n SELECT lic_en.sys_id,\n lic_en.callsign,\n lic_en.fcc_reg_num,\n lic_en.licensee_id,\n lic_en.subgroup_id_num,\n (lic_en.applicant_type::text || ' - '::text) || COALESCE(( SELECT \n\"ApplicantType\".app_type_text\n FROM \"ApplicantType\"\n WHERE lic_en.applicant_type = \"ApplicantType\".app_type_id\n LIMIT 1), '???'::character varying)::text AS applicant_type,\n (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT \n\"EntityType\".entity_text\n FROM \"EntityType\"\n WHERE lic_en.entity_type = \"EntityType\".entity_id\n LIMIT 1), '???'::character varying)::text AS entity_type,\n lic_en.entity_name,\n lic_en.attention,\n lic_en.first_name,\n lic_en.middle_init,\n lic_en.last_name,\n lic_en.name_suffix,\n lic_en.street_address,\n lic_en.po_box,\n lic_en.locality,\n zip_code.locality_text AS locality_,\n \"County\".county_text AS county,\n (territory_id::text 
|| ' - '::text) || \nCOALESCE(govt_region.territory_text, '???'::character varying)::text AS \nstate,\n zip9_format(lic_en.postal_code::text) AS postal_code,\n lic_en.full_name,\n lic_en._entity_name,\n lic_en._first_name,\n lic_en._last_name,\n lic_en.zip5,\n zip_code.zip_location,\n maidenhead(zip_code.zip_location) AS maidenhead,\n govt_region.geo_region\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\n*VIEW lic_en:*\n\n=> \\d+ lic_en\n View \"Callsign.lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT _lic_en.sys_id,\n _lic_en.callsign,\n _lic_en.fcc_reg_num,\n _lic_en.licensee_id,\n _lic_en.subgroup_id_num,\n _lic_en.applicant_type,\n _lic_en.entity_type,\n _lic_en.entity_name,\n _lic_en.attention,\n _lic_en.first_name,\n _lic_en.middle_init,\n _lic_en.last_name,\n _lic_en.name_suffix,\n _lic_en.street_address,\n _lic_en.po_box,\n _lic_en.locality,\n _lic_en.territory_id,\n _lic_en.postal_code,\n _lic_en.full_name,\n _lic_en._entity_name,\n _lic_en._first_name,\n _lic_en._last_name,\n _lic_en.zip5,\n _lic_en.country_id\n FROM _lic_en;\n\n*VIEW _lic_en:*\n\n=> \\d+ _lic_en\n View \"Callsign._lic_en\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n-----------------+------------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n fcc_reg_num | character(10) | | | | \nextended |\n licensee_id | character(9) | | | | \nextended |\n subgroup_id_num | character(3) | | | | \nextended |\n applicant_type | character(1) | | | | \nextended |\n entity_type | character(2) | | | | \nextended |\n entity_name | character varying(200) | | | | \nextended |\n attention | character varying(35) | | | | \nextended |\n first_name | character varying(20) | | | | \nextended |\n middle_init | character(1) | | | | \nextended |\n last_name | character varying(20) | | | | \nextended |\n name_suffix | character(3) | | | | \nextended |\n street_address | character varying(60) | | | | \nextended |\n po_box | text | | | | \nextended |\n locality | character varying | | | | \nextended |\n territory_id | character(2) | | | | \nextended |\n postal_code | character(9) | | | | \nextended |\n full_name | text | | | | \nextended |\n _entity_name | text | | | | \nextended |\n _first_name | text | | | | \nextended |\n _last_name | text | | | | \nextended |\n zip5 | character(5) | | | | \nextended |\n country_id | character(2) | | | | \nextended |\nView 
definition:\n SELECT \"_EN\".unique_system_identifier AS sys_id,\n \"_EN\".callsign,\n \"_EN\".frn AS fcc_reg_num,\n \"_EN\".licensee_id,\n \"_EN\".sgin AS subgroup_id_num,\n \"_EN\".applicant_type_code AS applicant_type,\n \"_EN\".entity_type,\n \"_EN\".entity_name,\n \"_EN\".attention_line AS attention,\n \"_EN\".first_name,\n \"_EN\".mi AS middle_init,\n \"_EN\".last_name,\n \"_EN\".suffix AS name_suffix,\n \"_EN\".street_address,\n po_box_format(\"_EN\".po_box::text) AS po_box,\n \"_EN\".city AS locality,\n \"_EN\".state AS territory_id,\n \"_EN\".zip_code AS postal_code,\n initcap(((COALESCE(\"_EN\".first_name::text || ' '::text, ''::text) \n|| COALESCE(\"_EN\".mi::text || ' '::text, ''::text)) || \n\"_EN\".last_name::text) || COALESCE(' '::text || \"_EN\".suffix::text, \n''::text)) AS full_name,\n initcap(\"_EN\".entity_name::text) AS _entity_name,\n initcap(\"_EN\".first_name::text) AS _first_name,\n initcap(\"_EN\".last_name::text) AS _last_name,\n \"_EN\".zip_code::character(5) AS zip5,\n \"_EN\".country_id\n FROM \"UlsLic\".\"_EN\";\n\n*TABLE \"UlsLic\".\"_EN\"**:*\n\n=> \\d+ \"UlsLic\".\"_EN\"\n Table \"UlsLic._EN\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Description\n--------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | | not \nnull | | extended | |\n unique_system_identifier | integer | | not \nnull | | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n entity_type | character(2) | | \n| | extended | |\n licensee_id | character(9) | | \n| | extended | |\n entity_name | character varying(200) | | \n| | extended | |\n first_name | character varying(20) | | \n| | extended | |\n mi | character(1) | | \n| | extended | |\n last_name | character varying(20) | | \n| | extended | |\n suffix | 
character(3) | | \n| | extended | |\n phone | character(10) | | \n| | extended | |\n fax | character(10) | | \n| | extended | |\n email | character varying(50) | | \n| | extended | |\n street_address | character varying(60) | | \n| | extended | |\n city | character varying | | \n| | extended | |\n state | character(2) | | \n| | extended | |\n zip_code | character(9) | | \n| | extended | |\n po_box | character varying(20) | | \n| | extended | |\n attention_line | character varying(35) | | \n| | extended | |\n sgin | character(3) | | \n| | extended | |\n frn | character(10) | | \n| | extended | |\n applicant_type_code | character(1) | | \n| | extended | |\n applicant_type_other | character(40) | | \n| | extended | |\n status_code | character(1) | | \n| | extended | |\n status_date | \"MySql\".datetime | | \n| | plain | |\n lic_category_code | character(1) | | \n| | extended | |\n linked_license_id | numeric(9,0) | | \n| | main | |\n linked_callsign | character(10) | | \n| | extended | |\n country_id | character(2) | | \n| | extended | |\nIndexes:\n \"_EN_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_EN__entity_name\" btree (initcap(entity_name::text))\n \"_EN__first_name\" btree (initcap(first_name::text))\n \"_EN__last_name\" btree (initcap(last_name::text))\n \"_EN__zip5\" btree ((zip_code::character(5)))\n \"_EN_callsign\" btree (callsign)\n \"_EN_fcc_reg_num\" btree (frn)\n \"_EN_licensee_id\" btree (licensee_id)\nCheck constraints:\n \"_EN_record_type_check\" CHECK (record_type = 'EN'::bpchar)\nForeign-key constraints:\n \"_EN_applicant_type_code_fkey\" FOREIGN KEY (applicant_type_code) \nREFERENCES \"FccLookup\".\"_ApplicantType\"(app_type_id\n)\n \"_EN_entity_type_fkey\" FOREIGN KEY (entity_type) REFERENCES \n\"FccLookup\".\"_EntityType\"(entity_id)\n \"_EN_state_fkey\" FOREIGN KEY (state, country_id) REFERENCES \n\"BaseLookup\".\"_Territory\"(territory_id, country_id)\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) 
REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n\n\n*VIEW lic_hd_:*\n\n=> \\d+ lic_hd_\n View \"Callsign.lic_hd_\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | text | | | | extended |\n license_status | text | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n end_date | date | | | | plain |\n available_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT lic_hd.sys_id,\n lic_hd.callsign,\n lic_hd.uls_file_num,\n (lic_hd.radio_service::text || ' - '::text) || COALESCE(( SELECT \n\"RadioService\".service_text\n FROM \"RadioService\"\n WHERE lic_hd.radio_service = \"RadioService\".service_id\n LIMIT 1), '???'::character varying)::text AS radio_service,\n (lic_hd.license_status::text || ' - '::text) || COALESCE(( SELECT \n\"LicStatus\".status_text\n FROM \"LicStatus\"\n WHERE lic_hd.license_status = \"LicStatus\".status_id\n LIMIT 1), '???'::character varying)::text AS license_status,\n lic_hd.grant_date,\n lic_hd.effective_date,\n lic_hd.cancel_date,\n lic_hd.expire_date,\n LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,\n CASE\n WHEN lic_hd.cancel_date < lic_hd.expire_date THEN \nGREATEST((lic_hd.cancel_date + '2 years'::interval)::date, \nlic_hd.last_action_date + 30)\n WHEN lic_hd.license_status = 'A'::bpchar AND uls_date() > \n(lic_hd.expire_date + '2 years'::interval)::date THEN NULL::date\n ELSE (lic_hd.expire_date + '2 years'::interval)::date\n END + 1 AS available_date,\n lic_hd.last_action_date\n FROM lic_hd;\n\n*VIEW lic_hd:*\n\n=> \\d+ lic_hd\n View \"Callsign.lic_hd\"\n 
Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | character(2) | | | | extended |\n license_status | character(1) | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT _lic_hd.sys_id,\n _lic_hd.callsign,\n _lic_hd.uls_file_num,\n _lic_hd.radio_service,\n _lic_hd.license_status,\n _lic_hd.grant_date,\n _lic_hd.effective_date,\n _lic_hd.cancel_date,\n _lic_hd.expire_date,\n _lic_hd.last_action_date\n FROM _lic_hd;\n\n*VIEW _lic_hd:*\n\n=> \\d+ _lic_hd\n View \"Callsign._lic_hd\"\n Column | Type | Collation | Nullable | Default | \nStorage | Description\n------------------+---------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | plain |\n callsign | character(10) | | | | extended |\n uls_file_num | character(14) | | | | extended |\n radio_service | character(2) | | | | extended |\n license_status | character(1) | | | | extended |\n grant_date | date | | | | plain |\n effective_date | date | | | | plain |\n cancel_date | date | | | | plain |\n expire_date | date | | | | plain |\n last_action_date | date | | | | plain |\nView definition:\n SELECT \"_HD\".unique_system_identifier AS sys_id,\n \"_HD\".callsign,\n \"_HD\".uls_file_number AS uls_file_num,\n \"_HD\".radio_service_code AS radio_service,\n \"_HD\".license_status,\n \"_HD\".grant_date,\n \"_HD\".effective_date,\n \"_HD\".cancellation_date AS cancel_date,\n \"_HD\".expired_date AS expire_date,\n \"_HD\".last_action_date\n FROM \"UlsLic\".\"_HD\";\n\n*TABLE **\"UlsLic\".\"_HD\"**:*\n\n=> \\d+ \"UlsLic\".\"_HD\"\n Table 
\"UlsLic._HD\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Descr\niption\n------------------------------+-----------------------+-----------+----------+---------+----------+--------------+------\n-------\n record_type | character(2) | | not null \n| | extended | |\n unique_system_identifier | integer | | not null \n| | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n license_status | character(1) | | \n| | extended | |\n radio_service_code | character(2) | | \n| | extended | |\n grant_date | date | | \n| | plain | |\n expired_date | date | | \n| | plain | |\n cancellation_date | date | | \n| | plain | |\n eligibility_rule_num | character(10) | | \n| | extended | |\n applicant_type_code_reserved | character(1) | | \n| | extended | |\n alien | character(1) | | \n| | extended | |\n alien_government | character(1) | | \n| | extended | |\n alien_corporation | character(1) | | \n| | extended | |\n alien_officer | character(1) | | \n| | extended | |\n alien_control | character(1) | | \n| | extended | |\n revoked | character(1) | | \n| | extended | |\n convicted | character(1) | | \n| | extended | |\n adjudged | character(1) | | \n| | extended | |\n involved_reserved | character(1) | | \n| | extended | |\n common_carrier | character(1) | | \n| | extended | |\n non_common_carrier | character(1) | | \n| | extended | |\n private_comm | character(1) | | \n| | extended | |\n fixed | character(1) | | \n| | extended | |\n mobile | character(1) | | \n| | extended | |\n radiolocation | character(1) | | \n| | extended | |\n satellite | character(1) | | \n| | extended | |\n developmental_or_sta | character(1) | | \n| | extended | |\n interconnected_service | character(1) | | \n| | extended | |\n certifier_first_name | character varying(20) | | \n| | extended | |\n certifier_mi | character varying | | \n| | extended | |\n 
certifier_last_name | character varying | | \n| | extended | |\n certifier_suffix | character(3) | | \n| | extended | |\n certifier_title | character(40) | | \n| | extended | |\n gender | character(1) | | \n| | extended | |\n african_american | character(1) | | \n| | extended | |\n native_american | character(1) | | \n| | extended | |\n hawaiian | character(1) | | \n| | extended | |\n asian | character(1) | | \n| | extended | |\n white | character(1) | | \n| | extended | |\n ethnicity | character(1) | | \n| | extended | |\n effective_date | date | | \n| | plain | |\n last_action_date | date | | \n| | plain | |\n auction_id | integer | | \n| | plain | |\n reg_stat_broad_serv | character(1) | | \n| | extended | |\n band_manager | character(1) | | \n| | extended | |\n type_serv_broad_serv | character(1) | | \n| | extended | |\n alien_ruling | character(1) | | \n| | extended | |\n licensee_name_change | character(1) | | \n| | extended | |\n whitespace_ind | character(1) | | \n| | extended | |\n additional_cert_choice | character(1) | | \n| | extended | |\n additional_cert_answer | character(1) | | \n| | extended | |\n discontinuation_ind | character(1) | | \n| | extended | |\n regulatory_compliance_ind | character(1) | | \n| | extended | |\n dummy1 | character varying | | \n| | extended | |\n dummy2 | character varying | | \n| | extended | |\n dummy3 | character varying | | \n| | extended | |\n dummy4 | character varying | | \n| | extended | |\nIndexes:\n \"_HD_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_HD_callsign\" btree (callsign)\n \"_HD_grant_date\" btree (grant_date)\n \"_HD_last_action_date\" btree (last_action_date)\n \"_HD_uls_file_num\" btree (uls_file_number)\nCheck constraints:\n \"_HD_record_type_check\" CHECK (record_type = 'HD'::bpchar)\nForeign-key constraints:\n \"_HD_license_status_fkey\" FOREIGN KEY (license_status) REFERENCES \n\"FccLookup\".\"_LicStatus\"(status_id)\n \"_HD_radio_service_code_fkey\" FOREIGN KEY (radio_service_code) 
\nREFERENCES \"FccLookup\".\"_RadioService\"(service_id)\nReferenced by:\n TABLE \"\"UlsLic\".\"_AM\"\" CONSTRAINT \n\"_AM_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_CO\"\" CONSTRAINT \n\"_CO_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_EN\"\" CONSTRAINT \n\"_EN_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_HS\"\" CONSTRAINT \n\"_HS_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_LA\"\" CONSTRAINT \n\"_LA_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_SC\"\" CONSTRAINT \n\"_SC_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n TABLE \"\"UlsLic\".\"_SF\"\" CONSTRAINT \n\"_SF_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFEREN\nCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE \nCASCADE\n\n*VIEW lic_am_:*\n\n=> \\d+ lic_am_\n View \"Callsign.lic_am_\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n 
callsign_group | text | | | | \nextended |\n operator_group | text | | | | \nextended |\n operator_class | text | | | | \nextended |\n prev_class | text | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n vanity_type | text | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\nView definition:\n SELECT lic_am.sys_id,\n lic_am.callsign,\n lic_am.uls_region,\n ( SELECT (\"CallsignGroup\".group_id::text || ' - '::text) || \n\"CallsignGroup\".match_text::text\n FROM \"CallsignGroup\"\n WHERE lic_am.callsign ~ \"CallsignGroup\".pattern::text\n LIMIT 1) AS callsign_group,\n ( SELECT (oper_group.group_id::text || ' - '::text) || \noper_group.group_text::text\n FROM oper_group\n WHERE lic_am.operator_class = oper_group.class_id\n LIMIT 1) AS operator_group,\n (lic_am.operator_class::text || ' - '::text) || COALESCE(( SELECT \n\"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.operator_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS operator_class,\n (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT \n\"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.prev_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS prev_class,\n lic_am.prev_callsign,\n (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT \n\"VanityType\".vanity_text\n FROM \"VanityType\"\n WHERE lic_am.vanity_type = \"VanityType\".vanity_id\n LIMIT 1), '???'::character varying)::text AS vanity_type,\n lic_am.is_trustee,\n lic_am.trustee_callsign,\n lic_am.trustee_name\n FROM lic_am;\n\n*VIEW lic_am:*\n\n=> \\d+ lic_am\n View \"Callsign.lic_am\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | 
| | \nplain |\n callsign | character(10) | | | | \nextended |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n uls_group | character(1) | | | | \nextended |\n operator_class | character(1) | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n prev_class | character(1) | | | | \nextended |\n vanity_type | character(1) | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\nView definition:\n SELECT _lic_am.sys_id,\n _lic_am.callsign,\n _lic_am.uls_region,\n _lic_am.uls_group,\n _lic_am.operator_class,\n _lic_am.prev_callsign,\n _lic_am.prev_class,\n _lic_am.vanity_type,\n _lic_am.is_trustee,\n _lic_am.trustee_callsign,\n _lic_am.trustee_name\n FROM _lic_am;\n\n*VIEW _lic_am:*\n\n=> \\d+ _lic_am\n View \"Callsign._lic_am\"\n Column | Type | Collation | Nullable | \nDefault | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | | | \nplain |\n callsign | character(10) | | | | \nextended |\n uls_region | \"MySql\".tinyint | | | | \nplain |\n uls_group | character(1) | | | | \nextended |\n operator_class | character(1) | | | | \nextended |\n prev_callsign | character(10) | | | | \nextended |\n prev_class | character(1) | | | | \nextended |\n vanity_type | character(1) | | | | \nextended |\n is_trustee | character(1) | | | | \nextended |\n trustee_callsign | character(10) | | | | \nextended |\n trustee_name | character varying(50) | | | | \nextended |\nView definition:\n SELECT \"_AM\".unique_system_identifier AS sys_id,\n \"_AM\".callsign,\n \"_AM\".region_code AS uls_region,\n \"_AM\".group_code AS uls_group,\n \"_AM\".operator_class,\n \"_AM\".previous_callsign AS prev_callsign,\n \"_AM\".previous_operator_class AS prev_class,\n \"_AM\".vanity_callsign_change AS vanity_type,\n \"_AM\".trustee_indicator AS is_trustee,\n 
\"_AM\".trustee_callsign,\n \"_AM\".trustee_name\n FROM \"UlsLic\".\"_AM\";\n\n*TABLE **\"UlsLic\".\"_AM\"**:*\n\n=> \\d+ \"UlsLic\".\"_AM\"\n Table \"UlsLic._AM\"\n Column | Type | Collation | \nNullable | Default | Storage | Stats target | Description\n----------------------------+-----------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | | not \nnull | | extended | |\n unique_system_identifier | integer | | not \nnull | | plain | |\n uls_file_number | character(14) | | \n| | extended | |\n ebf_number | character varying(30) | | \n| | extended | |\n callsign | character(10) | | \n| | extended | |\n operator_class | character(1) | | \n| | extended | |\n group_code | character(1) | | \n| | extended | |\n region_code | \"MySql\".tinyint | | \n| | plain | |\n trustee_callsign | character(10) | | \n| | extended | |\n trustee_indicator | character(1) | | \n| | extended | |\n physician_certification | character(1) | | \n| | extended | |\n ve_signature | character(1) | | \n| | extended | |\n systematic_callsign_change | character(1) | | \n| | extended | |\n vanity_callsign_change | character(1) | | \n| | extended | |\n vanity_relationship | character(12) | | \n| | extended | |\n previous_callsign | character(10) | | \n| | extended | |\n previous_operator_class | character(1) | | \n| | extended | |\n trustee_name | character varying(50) | | \n| | extended | |\nIndexes:\n \"_AM_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_AM_callsign\" btree (callsign)\n \"_AM_prev_callsign\" btree (previous_callsign)\n \"_AM_trustee_callsign\" btree (trustee_callsign)\nCheck constraints:\n \"_AM_record_type_check\" CHECK (record_type = 'AM'::bpchar)\nForeign-key constraints:\n \"_AM_operator_class_fkey\" FOREIGN KEY (operator_class) REFERENCES \n\"FccLookup\".\"_OperatorClass\"(class_id)\n \"_AM_previous_operator_class_fkey\" FOREIGN KEY \n(previous_operator_class) REFERENCES 
\"FccLookup\".\"_OperatorClass\"(cla\nss_id)\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY \n(unique_system_identifier) REFERENCES \"UlsLic\".\"_HD\"(unique_system_i\ndentifier) ON UPDATE CASCADE ON DELETE CASCADE\n \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY \n(vanity_callsign_change) REFERENCES \"FccLookup\".\"_VanityType\"(vanity_i\nd)\n\n\n\n\n\n\n\n [Reposted to the proper list]\n\n I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n at one point), gradually moving to v9.0 w/ replication in 2010. In\n 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to\n v9.6, & was entirely satisfied with the result.\n\n In March of this year, AWS announced that v9.6 was nearing end of\n support, & AWS would forcibly upgrade everyone to v12 on January\n 22, 2022, if users did not perform the upgrade earlier. My first\n attempt was successful as far as the upgrade itself, but complex\n queries that normally ran in a couple of seconds on v9.x, were\n taking minutes in v12.\n\n I didn't have the time in March to diagnose the problem, other than\n some futile adjustments to server parameters, so I reverted back to\n a saved copy of my v9.6 data.\n\n On Sunday, being retired, I decided to attempt to solve the issue in\n earnest. I have now spent five days (about 14 hours a day), trying\n various things, including adding additional indexes. Keeping the\n v9.6 data online for web users, I've \"forked\" the data into new\n copies, & updated them in turn to PostgreSQL v10, v11, v12,\n & v13. All exhibit the same problem: As you will see below, it\n appears that versions 10 & above are doing a sequential scan of\n some of the \"large\" (200K rows) tables. Note that the expected\n & actual run times both differ for v9.6 & v13.2, by more\n than two orders of magnitude. Rather than post a huge eMail\n (ha ha), I'll start with this one, that shows an \"EXPLAIN ANALYZE\"\n from both v9.6 & v13.2, followed by the related table & view\n definitions. 
With one exception, table definitions are from the FCC\n (Federal Communications Commission); the view definitions are my\n own.\n\nHere's from v9.6:\n\n=> EXPLAIN ANALYZE SELECT\n club_count, extra_count, region_count, callsign AS\n trustee_callsign, applicant_type, entity_name, licensee_id AS _lid\n FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count\n DESC, club_count DESC, entity_name;\n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=407.13..407.13 rows=1 width=94) (actual\n time=348.850..348.859 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC,\n \"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual\n time=7.587..348.732 rows=43 loops=1)\n -> Nested Loop (cost=4.47..394.66 rows=1 width=94)\n (actual time=5.740..248.149 rows=43 loops=1)\n -> Nested Loop Left Join (cost=4.04..382.20\n rows=1 width=79) (actual time=2.458..107.908 rows=55 loops=1)\n -> Hash Join (cost=3.75..380.26 rows=1\n width=86) (actual time=2.398..106.990 rows=55 loops=1)\n Hash Cond: ((\"_EN\".country_id =\n \"_GovtRegion\".country_id) AND (\"_EN\".state =\n \"_GovtRegion\".territory_id))\n -> Nested Loop (cost=0.43..376.46\n rows=47 width=94) (actual time=2.294..106.736 rows=55 loops=1)\n -> Seq Scan on \"_Club\" \n (cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101\n rows=44 loops=1)\n Filter: (club_count >=\n 5)\n Rows Removed by Filter: 151\n -> Index Scan using\n \"_EN_callsign\" on \"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual\n time=2.179..2.420 rows=1 loops=44)\n Index Cond: (callsign =\n \"_Club\".trustee_callsign)\n -> Hash (cost=1.93..1.93 rows=93\n width=7) (actual time=0.071..0.071 rows=88 loops=1)\n Buckets: 1024 Batches: 1 Memory\n Usage: 12kB\n -> Seq Scan on \"_GovtRegion\" \n 
(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034\n rows=93 loops=1)\n -> Nested Loop (cost=0.29..1.93 rows=1\n width=7) (actual time=0.012..0.014 rows=1 loops=55)\n Join Filter: (\"_IsoCountry\".iso_alpha2\n = \"_Territory\".country_id)\n Rows Removed by Join Filter: 0\n -> Index Only Scan using\n \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62\n rows=1 width=3) (actual time=0.006..0.006 rows=1 loops=55)\n Index Cond: (iso_alpha2 =\n \"_GovtRegion\".country_id)\n Heap Fetches: 55\n -> Index Only Scan using\n \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1\n width=7)\n (actual time=0.004..0.005 rows=1 loops=55)\n Index Cond: (territory_id =\n \"_GovtRegion\".territory_id)\n Heap Fetches: 59\n -> Index Scan using \"_HD_pkey\" on \"_HD\" \n (cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548\n rows=1 loops=55)\n Index Cond: (unique_system_identifier =\n \"_EN\".unique_system_identifier)\n Filter: ((\"_EN\".callsign = callsign) AND\n (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan\n 2), '???'::character varying))::text))::character(1) =\n 'A'::bpchar))\n Rows Removed by Filter: 0\n SubPlan 2\n -> Limit (cost=0.15..8.17 rows=1\n width=32) (actual time=0.006..0.007 rows=1 loops=55)\n -> Index Scan using\n \"_LicStatus_pkey\" on \"_LicStatus\" (cost=0.15..8.17 rows=1\n width=32) (actual time=0.005..0.005 rows=1 loops=55)\n Index Cond:\n (\"_HD\".license_status = status_id)\n -> Index Scan using \"_AM_pkey\" on \"_AM\" \n (cost=0.43..4.27 rows=1 width=15) (actual time=2.325..2.325 rows=1\n loops=43)\n Index Cond: (unique_system_identifier =\n \"_EN\".unique_system_identifier)\n Filter: (\"_EN\".callsign = callsign)\n SubPlan 1\n -> Limit (cost=0.15..8.17 rows=1 width=32) (actual\n time=0.007..0.007 rows=1 loops=43)\n -> Index Scan using \"_ApplicantType_pkey\" on\n \"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual\n time=0.005..0.005 rows=1 loops=43)\n Index Cond: (\"_EN\".applicant_type_code 
= app_type_id)
 Planning time: 13.490 ms
 Execution time: 349.182 ms
(43 rows)


Here's from v13.2:

=> EXPLAIN ANALYZE SELECT
       club_count, extra_count, region_count, callsign AS trustee_callsign,
       applicant_type, entity_name, licensee_id AS _lid
   FROM genclub_multi_ WHERE club_count >= 5
   ORDER BY extra_count DESC, club_count DESC, entity_name;

                                                                 QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=144365.60..144365.60 rows=1 width=94) (actual time=31898.860..31901.922 rows=43 loops=1)
   Sort Key: "_Club".extra_count DESC, "_Club".club_count DESC, "_EN".entity_name
   Sort Method: quicksort  Memory: 31kB
   ->  Nested Loop  (cost=58055.66..144365.59 rows=1 width=94) (actual time=6132.403..31894.233 rows=43 loops=1)
         ->  Nested Loop  (cost=58055.51..144364.21 rows=1 width=62) (actual time=1226.085..30337.921 rows=837792 loops=1)
               ->  Nested Loop Left Join  (cost=58055.09..144360.38 rows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)
                     ->  Hash Join  (cost=58054.80..144359.69 rows=1 width=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)
                           Hash Cond: (("_EN".unique_system_identifier = "_AM".unique_system_identifier) AND ("_EN".callsign = "_AM".callsign))
                           ->  Hash Join  (cost=3.33..53349.72 rows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)
                                 Hash Cond: (("_EN".country_id = "_GovtRegion".country_id) AND ("_EN".state = "_GovtRegion".territory_id))
                                 ->  Seq Scan on "_EN"  (cost=0.00..45288.05 rows=1509005 width=60) (actual time=0.037..2737.054 rows=1508736 loops=1)
                                 ->  Hash  (cost=1.93..1.93 rows=93 width=7) (actual time=0.706..1.264 rows=88 loops=1)
                                       Buckets: 1024  Batches: 1  Memory Usage: 12kB
                                       ->  Seq Scan on "_GovtRegion"  (cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 loops=1)
                           ->  Hash  (cost=28093.99..28093.99 rows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 loops=1)
                                 Buckets: 131072  Batches: 32  Memory Usage: 3175kB
                                 ->  Seq Scan on "_AM"  (cost=0.00..28093.99 rows=1506699 width=15) (actual time=0.009..742.774 rows=1506474 loops=1)
                     ->  Nested Loop  (cost=0.29..0.68 rows=1 width=7) (actual time=0.003..0.004 rows=1 loops=1487153)
                           Join Filter: ("_IsoCountry".iso_alpha2 = "_Territory".country_id)
                           Rows Removed by Join Filter: 0
                           ->  Index Only Scan using "_IsoCountry_iso_alpha2_key" on "_IsoCountry"  (cost=0.14..0.38 rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)
                                 Index Cond: (iso_alpha2 = "_GovtRegion".country_id)
                                 Heap Fetches: 1487153
                           ->  Index Only Scan using "_Territory_pkey" on "_Territory"  (cost=0.14..0.29 rows=1 width=7) (actual time=0.001..0.001 rows=1 loops=1487153)
                                 Index Cond: (territory_id = "_GovtRegion".territory_id)
                                 Heap Fetches: 1550706
               ->  Index Scan using "_HD_pkey" on "_HD"  (cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 loops=1487153)
                     Index Cond: (unique_system_identifier = "_EN".unique_system_identifier)
                     Filter: (("_EN".callsign = callsign) AND (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), '???'::character varying))::text))::character(1) = 'A'::bpchar))
                     Rows Removed by Filter: 0
                     SubPlan 2
                       ->  Limit  (cost=0.00..1.07 rows=1 width=13) (actual time=0.001..0.001 rows=1 loops=1487153)
                             ->  Seq Scan on "_LicStatus"  (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 loops=1487153)
                                   Filter: ("_HD".license_status = status_id)
                                   Rows Removed by Filter: 1
         ->  Index Scan using "_Club_pkey" on "_Club"  (cost=0.14..0.17 rows=1 width=35) (actual time=0.002..0.002 rows=0 loops=837792)
               Index Cond: (trustee_callsign = "_EN".callsign)
               Filter: (club_count >= 5)
               Rows Removed by Filter: 0
         SubPlan 1
           ->  Limit  (cost=0.00..1.20 rows=1 width=15) (actual time=0.060..0.060 rows=1 loops=43)
                 ->  Seq Scan on "_ApplicantType"  (cost=0.00..1.20 rows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)
                       Filter: ("_EN".applicant_type_code = app_type_id)
                       Rows Removed by Filter: 7
 Planning Time: 173.753 ms
 Execution Time: 31919.601 ms
(46 rows)


VIEW genclub_multi_:

=> \d+ genclub_multi_
                                  View "Callsign.genclub_multi_"
      Column      |          Type          | Collation | Nullable | Default | Storage  | Description
------------------+------------------------+-----------+----------+---------+----------+-------------
 sys_id           | integer                |           |          |         | plain    |
 callsign         | character(10)          |           |          |         | extended |
 fcc_reg_num      | character(10)          |           |          |         | extended |
 licensee_id      | character(9)           |           |          |         | extended |
 subgroup_id_num  | character(3)           |           |          |         | extended |
 applicant_type   | text                   |           |          |         | extended |
 entity_type      | text                   |           |          |         | extended |
 entity_name      | character varying(200) |           |          |         | extended |
 attention        | character varying(35)  |           |          |         | extended |
 first_name       | character varying(20)  |           |          |         | extended |
 middle_init      | character(1)           |           |          |         | extended |
 last_name        | character varying(20)  |           |          |         | extended |
 name_suffix      | character(3)           |           |          |         | extended |
 street_address   | character varying(60)  |           |          |         | extended |
 po_box           | text                   |           |          |         | extended |
 locality         | character varying      |           |          |         | extended |
 locality_        | character varying      |           |          |         | extended |
 county           | character varying      |           |          |         | extended |
 state            | text                   |           |          |         | extended |
 postal_code      | text                   |           |          |         | extended |
 full_name        | text                   |           |          |         | extended |
 _entity_name     | text                   |           |          |         | extended |
 _first_name      | text                   |           |          |         | extended |
 _last_name       | text                   |           |          |         | extended |
 zip5             | character(5)           |           |          |         | extended |
 zip_location     | "GeoPosition"          |           |          |         | extended |
 maidenhead       | bpchar                 |           |          |         | extended |
 geo_region       | smallint               |           |          |         | plain    |
 uls_file_num     | character(14)          |           |          |         | extended |
 radio_service    | text                   |           |          |         | extended |
 license_status   | text                   |           |          |         | extended |
 grant_date       | date                   |           |          |         | plain    |
 effective_date   | date                   |           |          |         | plain    |
 cancel_date      | date                   |           |          |         | plain    |
 expire_date      | date                   |           |          |         | plain    |
 end_date         | date                   |           |          |         | plain    |
 available_date   | date                   |           |          |         | plain    |
 last_action_date | date                   |           |          |         | plain    |
 uls_region       | "MySql".tinyint        |           |          |         | plain    |
 callsign_group   | text                   |           |          |         | extended |
 operator_group   | text                   |           |          |         | extended |
 operator_class   | text                   |           |          |         | extended |
 prev_class       | text                   |           |          |         | extended |
 prev_callsign    | character(10)          |           |          |         | extended |
 vanity_type      | text                   |           |          |         | extended |
 is_trustee       | character(1)           |           |          |         | extended |
 trustee_callsign | character(10)          |           |          |         | extended |
 trustee_name     | character varying(50)  |           |          |         | extended |
 validity         | integer                |           |          |         | plain    |
 club_count       | bigint                 |           |          |         | plain    |
 extra_count      | bigint                 |           |          |         | plain    |
 region_count     | bigint                 |           |          |         | plain    |
View definition:
 SELECT licjb_.sys_id,
    licjb_.callsign,
    licjb_.fcc_reg_num,
    licjb_.licensee_id,
    licjb_.subgroup_id_num,
    licjb_.applicant_type,
    licjb_.entity_type,
    licjb_.entity_name,
    licjb_.attention,
    licjb_.first_name,
    licjb_.middle_init,
    licjb_.last_name,
    licjb_.name_suffix,
    licjb_.street_address,
    licjb_.po_box,
    licjb_.locality,
    licjb_.locality_,
    licjb_.county,
    licjb_.state,
    licjb_.postal_code,
    licjb_.full_name,
    licjb_._entity_name,
    licjb_._first_name,
    licjb_._last_name,
    licjb_.zip5,
    licjb_.zip_location,
    licjb_.maidenhead,
    licjb_.geo_region,
    licjb_.uls_file_num,
    licjb_.radio_service,
    licjb_.license_status,
    licjb_.grant_date,
    licjb_.effective_date,
    licjb_.cancel_date,
    licjb_.expire_date,
    licjb_.end_date,
    licjb_.available_date,
    licjb_.last_action_date,
    licjb_.uls_region,
    licjb_.callsign_group,
    licjb_.operator_group,
    licjb_.operator_class,
    licjb_.prev_class,
    licjb_.prev_callsign,
    licjb_.vanity_type,
    licjb_.is_trustee,
    licjb_.trustee_callsign,
    licjb_.trustee_name,
    licjb_.validity,
    gen.club_count,
    gen.extra_count,
    gen.region_count
   FROM licjb_,
    "GenLicClub" gen
  WHERE licjb_.callsign = gen.trustee_callsign AND licjb_.license_status::character(1) = 'A'::bpchar;

VIEW GenLicClub:

=> \d+ "GenLicClub"
                          View "Callsign.GenLicClub"
      Column      |     Type      | Collation | Nullable | Default | Storage  | Description
------------------+---------------+-----------+----------+---------+----------+-------------
 trustee_callsign | character(10) |           |          |         | extended |
 club_count       | bigint        |           |          |         | plain    |
 extra_count      | bigint        |           |          |         | plain    |
 region_count     | bigint        |           |          |         | plain    |
View definition:
 SELECT "_Club".trustee_callsign,
    "_Club".club_count,
    "_Club".extra_count,
    "_Club".region_count
   FROM "GenLic"."_Club";

TABLE "GenLic"."_Club":

=> \d+ "GenLic"."_Club"
                               Table "GenLic._Club"
      Column      |     Type      | Collation | Nullable | Default | Storage  | Stats target | Description
------------------+---------------+-----------+----------+---------+----------+--------------+-------------
 trustee_callsign | character(10) |           | not null |         | extended |              |
 club_count       | bigint        |           |          |         | plain    |              |
 extra_count      | bigint        |           |          |         | plain    |              |
 region_count     | bigint        |           |          |         | plain    |              |
Indexes:
    "_Club_pkey" PRIMARY KEY, btree (trustee_callsign)

VIEW licjb_:

=> \d+ licjb_
                                      View "Callsign.licjb_"
      Column      |          Type          | Collation | Nullable | Default | Storage  | Description
------------------+------------------------+-----------+----------+---------+----------+-------------
 sys_id           | integer                |           |          |         | plain    |
 callsign         | character(10)          |           |          |         | extended |
 fcc_reg_num      | character(10)          |           |          |         | extended |
 licensee_id      | character(9)           |           |          |         | extended |
 subgroup_id_num  | character(3)           |           |          |         | extended |
 applicant_type   | text                   |           |          |         | extended |
 entity_type      | text                   |           |          |         | extended |
 entity_name      | character varying(200) |           |          |         | extended |
 attention        | character varying(35)  |           |          |         | extended |
 first_name       | character varying(20)  |           |          |         | extended |
 middle_init      | character(1)           |           |          |         | extended |
 last_name        | character varying(20)  |           |          |         | extended |
 name_suffix      | character(3)           |           |          |         | extended |
 street_address   | character varying(60)  |           |          |         | extended |
 po_box           | text                   |           |          |         | extended |
 locality         | character varying      |           |          |         | extended |
 locality_        | character varying      |           |          |         | extended |
 county           | character varying      |           |          |         | extended |
 state            | text                   |           |          |         | extended |
 postal_code      | text                   |           |          |         | extended |
 full_name        | text                   |           |          |         | extended |
 _entity_name     | text                   |           |          |         | extended |
 _first_name      | text                   |           |          |         | extended |
 _last_name       | text                   |           |          |         | extended |
 zip5             | character(5)           |           |          |         | extended |
 zip_location     | "GeoPosition"          |           |          |         | extended |
 maidenhead       | bpchar                 |           |          |         | extended |
 geo_region       | smallint               |           |          |         | plain    |
 uls_file_num     | character(14)          |           |          |         | extended |
 radio_service    | text                   |           |          |         | extended |
 license_status   | text                   |           |          |         | extended |
 grant_date       | date                   |           |          |         | plain    |
 effective_date   | date                   |           |          |         | plain    |
 cancel_date      | date                   |           |          |         | plain    |
 expire_date      | date                   |           |          |         | plain    |
 end_date         | date                   |           |          |         | plain    |
 available_date   | date                   |           |          |         | plain    |
 last_action_date | date                   |           |          |         | plain    |
 uls_region       | "MySql".tinyint        |           |          |         | plain    |
 callsign_group   | text                   |           |          |         | extended |
 operator_group   | text                   |           |          |         | extended |
 operator_class   | text                   |           |          |         | extended |
 prev_class       | text                   |           |          |         | extended |
 prev_callsign    | character(10)          |           |          |         | extended |
 vanity_type      | text                   |           |          |         | extended |
 is_trustee       | character(1)           |           |          |         | extended |
 trustee_callsign | character(10)          |           |          |         | extended |
 trustee_name     | character varying(50)  |           |          |         | extended |
 validity         | integer                |           |          |         | plain    |
View definition:
 SELECT lic_en_.sys_id,
    lic_en_.callsign,
    lic_en_.fcc_reg_num,
    lic_en_.licensee_id,
    lic_en_.subgroup_id_num,
    lic_en_.applicant_type,
    lic_en_.entity_type,
    lic_en_.entity_name,
    lic_en_.attention,
    lic_en_.first_name,
    lic_en_.middle_init,
    lic_en_.last_name,
    lic_en_.name_suffix,
    lic_en_.street_address,
    lic_en_.po_box,
    lic_en_.locality,
    lic_en_.locality_,
    lic_en_.county,
    lic_en_.state,
    lic_en_.postal_code,
    lic_en_.full_name,
    lic_en_._entity_name,
    lic_en_._first_name,
    lic_en_._last_name,
    lic_en_.zip5,
    lic_en_.zip_location,
    lic_en_.maidenhead,
    lic_en_.geo_region,
    lic_hd_.uls_file_num,
    lic_hd_.radio_service,
    lic_hd_.license_status,
    lic_hd_.grant_date,
    lic_hd_.effective_date,
    lic_hd_.cancel_date,
    lic_hd_.expire_date,
    lic_hd_.end_date,
    lic_hd_.available_date,
    lic_hd_.last_action_date,
    lic_am_.uls_region,
    lic_am_.callsign_group,
    lic_am_.operator_group,
    lic_am_.operator_class,
    lic_am_.prev_class,
    lic_am_.prev_callsign,
    lic_am_.vanity_type,
    lic_am_.is_trustee,
    lic_am_.trustee_callsign,
    lic_am_.trustee_name,
        CASE
            WHEN lic_am_.vanity_type::character(1) = ANY (ARRAY['A'::bpchar, 'C'::bpchar]) THEN verify_callsign(lic_en_.callsign, lic_en_.licensee_id, lic_hd_.grant_date, lic_en_.state::bpchar, lic_am_.operator_class::bpchar, lic_en_.applicant_type::bpchar, lic_am_.trustee_callsign)
            ELSE NULL::integer
        END AS validity
   FROM lic_en_
     JOIN lic_hd_ USING (sys_id, callsign)
     JOIN lic_am_ USING (sys_id, callsign);

VIEW lic_en_:

=> \d+ lic_en_
                                     View "Callsign.lic_en_"
     Column      |          Type          | Collation | Nullable | Default | Storage  | Description
-----------------+------------------------+-----------+----------+---------+----------+-------------
 sys_id          | integer                |           |          |         | plain    |
 callsign        | character(10)          |           |          |         | extended |
 fcc_reg_num     | character(10)          |           |          |         | extended |
 licensee_id     | character(9)           |           |          |         | extended |
 subgroup_id_num | character(3)           |           |          |         | extended |
 applicant_type  | text                   |           |          |         | extended |
 entity_type     | text                   |           |          |         | extended |
 entity_name     | character varying(200) |           |          |         | extended |
 attention       | character varying(35)  |           |          |         | extended |
 first_name      | character varying(20)  |           |          |         | extended |
 middle_init     | character(1)           |           |          |         | extended |
 last_name       | character varying(20)  |           |          |         | extended |
 name_suffix     | character(3)           |           |          |         | extended |
 street_address  | character varying(60)  |           |          |         | extended |
 po_box          | text                   |           |          |         | extended |
 locality        | character varying      |           |          |         | extended |
 locality_       | character varying      |           |          |         | extended |
 county          | character varying      |           |          |         | extended |
 state           | text                   |           |          |         | extended |
 postal_code     | text                   |           |          |         | extended |
 full_name       | text                   |           |          |         | extended |
 _entity_name    | text                   |           |          |         | extended |
 _first_name     | text                   |           |          |         | extended |
 _last_name      | text                   |           |          |         | extended |
 zip5            | character(5)           |           |          |         | extended |
 zip_location    | "GeoPosition"          |           |          |         | extended |
 maidenhead      | bpchar                 |           |          |         | extended |
 geo_region      | smallint               |           |          |         | plain    |
View definition:
 SELECT lic_en.sys_id,
    lic_en.callsign,
    lic_en.fcc_reg_num,
    lic_en.licensee_id,
    lic_en.subgroup_id_num,
    (lic_en.applicant_type::text || ' - '::text) || COALESCE(( SELECT "ApplicantType".app_type_text
           FROM "ApplicantType"
          WHERE lic_en.applicant_type = "ApplicantType".app_type_id
         LIMIT 1), '???'::character varying)::text AS applicant_type,
    (lic_en.entity_type::text || ' - '::text) || COALESCE(( SELECT "EntityType".entity_text
           FROM "EntityType"
          WHERE lic_en.entity_type = "EntityType".entity_id
         LIMIT 1), '???'::character varying)::text AS entity_type,
    lic_en.entity_name,
    lic_en.attention,
    lic_en.first_name,
    lic_en.middle_init,
    lic_en.last_name,
    lic_en.name_suffix,
    lic_en.street_address,
    lic_en.po_box,
    lic_en.locality,
    zip_code.locality_text AS locality_,
    "County".county_text AS county,
    (territory_id::text || ' - '::text) || COALESCE(govt_region.territory_text, '???'::character varying)::text AS state,
    zip9_format(lic_en.postal_code::text) AS postal_code,
    lic_en.full_name,
    lic_en._entity_name,
    lic_en._first_name,
    lic_en._last_name,
    lic_en.zip5,
    zip_code.zip_location,
    maidenhead(zip_code.zip_location) AS maidenhead,
    govt_region.geo_region
   FROM lic_en
     JOIN govt_region USING (territory_id, country_id)
     LEFT JOIN zip_code USING (territory_id, country_id, zip5)
     LEFT JOIN "County" USING (territory_id, country_id, fips_county);

VIEW lic_en:

=> \d+ lic_en
                                     View "Callsign.lic_en"
     Column      |          Type          | Collation | Nullable | Default | Storage  | Description
-----------------+------------------------+-----------+----------+---------+----------+-------------
 sys_id          | integer                |           |          |         | plain    |
 callsign        | character(10)          |           |          |         | extended |
 fcc_reg_num     | character(10)          |           |          |         | extended |
 licensee_id     | character(9)           |           |          |         | extended |
 subgroup_id_num | character(3)           |           |          |         | extended |
 applicant_type  | character(1)           |           |          |         | extended |
 entity_type     | character(2)           |           |          |         | extended |
 entity_name     | character varying(200) |           |          |         | extended |
 attention       | character varying(35)  |           |          |         | extended |
 first_name      | character varying(20)  |           |          |         | extended |
 middle_init     | character(1)           |           |          |         | extended |
 last_name       | character varying(20)  |           |          |         | extended |
 name_suffix     | character(3)           |           |          |         | extended |
 street_address  | character varying(60)  |           |          |         | extended |
 po_box          | text                   |           |          |         | extended |
 locality        | character varying      |           |          |         | extended |
 territory_id    | character(2)           |           |          |         | extended |
 postal_code     | character(9)           |           |          |         | extended |
 full_name       | text                   |           |          |         | extended |
 _entity_name    | text                   |           |          |         | extended |
 _first_name     | text                   |           |          |         | extended |
 _last_name      | text                   |           |          |         | extended |
 zip5            | character(5)           |           |          |         | extended |
 country_id      | character(2)           |           |          |         | extended |
View definition:
 SELECT _lic_en.sys_id,
    _lic_en.callsign,
    _lic_en.fcc_reg_num,
    _lic_en.licensee_id,
    _lic_en.subgroup_id_num,
    _lic_en.applicant_type,
    _lic_en.entity_type,
    _lic_en.entity_name,
    _lic_en.attention,
    _lic_en.first_name,
    _lic_en.middle_init,
    _lic_en.last_name,
    _lic_en.name_suffix,
    _lic_en.street_address,
    _lic_en.po_box,
    _lic_en.locality,
    _lic_en.territory_id,
    _lic_en.postal_code,
    _lic_en.full_name,
    _lic_en._entity_name,
    _lic_en._first_name,
    _lic_en._last_name,
    _lic_en.zip5,
    _lic_en.country_id
   FROM _lic_en;

VIEW _lic_en:

=> \d+ _lic_en
                                    View "Callsign._lic_en"
     Column      |          Type          | Collation | Nullable | Default | Storage  | Description
-----------------+------------------------+-----------+----------+---------+----------+-------------
 sys_id          | integer                |           |          |         | plain    |
 callsign        | character(10)          |           |          |         | extended |
 fcc_reg_num     | character(10)          |           |          |         | extended |
 licensee_id     | character(9)           |           |          |         | extended |
 subgroup_id_num | character(3)           |           |          |         | extended |
 applicant_type  | character(1)           |           |          |         | extended |
 entity_type     | character(2)           |           |          |         | extended |
 entity_name     | character varying(200) |           |          |         | extended |
 attention       | character varying(35)  |           |          |         | extended |
 first_name      | character varying(20)  |           |          |         | extended |
 middle_init     | character(1)           |           |          |         | extended |
 last_name       | character varying(20)  |           |          |         | extended |
 name_suffix     | character(3)           |           |          |         | extended |
 street_address  | character varying(60)  |           |          |         | extended |
 po_box          | text                   |           |          |         | extended |
 locality        | character varying      |           |          |         | extended |
 territory_id    | character(2)           |           |          |         | extended |
 postal_code     | character(9)           |           |          |         | extended |
 full_name       | text                   |           |          |         | extended |
 _entity_name    | text                   |           |          |         | extended |
 _first_name     | text                   |           |          |         | extended |
 _last_name      | text                   |           |          |         | extended |
 zip5            | character(5)           |           |          |         | extended |
 country_id      | character(2)           |           |          |         | extended |
View definition:
 SELECT "_EN".unique_system_identifier AS sys_id,
    "_EN".callsign,
    "_EN".frn AS fcc_reg_num,
    "_EN".licensee_id,
    "_EN".sgin AS subgroup_id_num,
    "_EN".applicant_type_code AS applicant_type,
    "_EN".entity_type,
    "_EN".entity_name,
    "_EN".attention_line AS attention,
    "_EN".first_name,
    "_EN".mi AS middle_init,
    "_EN".last_name,
    "_EN".suffix AS name_suffix,
    "_EN".street_address,
    po_box_format("_EN".po_box::text) AS po_box,
    "_EN".city AS locality,
    "_EN".state AS territory_id,
    "_EN".zip_code AS postal_code,
    initcap(((COALESCE("_EN".first_name::text || ' '::text, ''::text) || COALESCE("_EN".mi::text || ' '::text, ''::text)) || "_EN".last_name::text) || COALESCE(' '::text || "_EN".suffix::text, ''::text)) AS full_name,
    initcap("_EN".entity_name::text) AS _entity_name,
    initcap("_EN".first_name::text) AS _first_name,
    initcap("_EN".last_name::text) AS _last_name,
    "_EN".zip_code::character(5) AS zip5,
    "_EN".country_id
   FROM "UlsLic"."_EN";

TABLE "UlsLic"."_EN":

=> \d+ "UlsLic"."_EN"
                                            Table "UlsLic._EN"
          Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description
--------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 record_type              | character(2)           |           | not null |         | extended |              |
 unique_system_identifier | integer                |           | not null |         | plain    |              |
 uls_file_number          | character(14)          |           |          |         | extended |              |
 ebf_number               | character varying(30)  |           |          |         | extended |              |
 callsign                 | character(10)          |           |          |         | extended |              |
 entity_type              | character(2)           |           |          |         | extended |              |
 licensee_id              | character(9)           |           |          |         | extended |              |
 entity_name              | character varying(200) |           |          |         | extended |              |
 first_name               | character varying(20)  |           |          |         | extended |              |
 mi                       | character(1)           |           |          |         | extended |              |
 last_name                | character varying(20)  |           |          |         | extended |              |
 suffix                   | character(3)           |           |          |         | extended |              |
 phone                    | character(10)          |           |          |         | extended |              |
 fax                      | character(10)          |           |          |         | extended |              |
 email                    | character varying(50)  |           |          |         | extended |              |
 street_address           | character varying(60)  |           |          |         | extended |              |
 city                     | character varying      |           |          |         | extended |              |
 state                    | character(2)           |           |          |         | extended |              |
 zip_code                 | character(9)           |           |          |         | extended |              |
 po_box                   | character varying(20)  |           |          |         | extended |              |
 attention_line           | character varying(35)  |           |          |         | extended |              |
 sgin                     | character(3)           |           |          |         | extended |              |
 frn                      | character(10)          |           |          |         | extended |              |
 applicant_type_code      | character(1)           |           |          |         | extended |              |
 applicant_type_other     | character(40)          |           |          |         | extended |              |
 status_code              | character(1)           |           |          |         | extended |              |
 status_date              | "MySql".datetime       |           |          |         | plain    |              |
 lic_category_code        | character(1)           |           |          |         | extended |              |
 linked_license_id        | numeric(9,0)           |           |          |         | main     |              |
 linked_callsign          | character(10)          |           |          |         | extended |              |
 country_id               | character(2)           |           |          |         | extended |              |
Indexes:
    "_EN_pkey" PRIMARY KEY, btree (unique_system_identifier)
    "_EN__entity_name" btree (initcap(entity_name::text))
    "_EN__first_name" btree (initcap(first_name::text))
    "_EN__last_name" btree (initcap(last_name::text))
    "_EN__zip5" btree ((zip_code::character(5)))
    "_EN_callsign" btree (callsign)
    "_EN_fcc_reg_num" btree (frn)
    "_EN_licensee_id" btree (licensee_id)
Check constraints:
    "_EN_record_type_check" CHECK (record_type = 'EN'::bpchar)
Foreign-key constraints:
    "_EN_applicant_type_code_fkey" FOREIGN KEY (applicant_type_code) REFERENCES "FccLookup"."_ApplicantType"(app_type_id)
    "_EN_entity_type_fkey" FOREIGN KEY (entity_type) REFERENCES "FccLookup"."_EntityType"(entity_id)
    "_EN_state_fkey" FOREIGN KEY (state, country_id) REFERENCES "BaseLookup"."_Territory"(territory_id, country_id)
    "_EN_unique_system_identifier_fkey" FOREIGN KEY (unique_system_identifier) REFERENCES "UlsLic"."_HD"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE

VIEW lic_hd_:

=> \d+ lic_hd_
                              View "Callsign.lic_hd_"
      Column      |     Type      | Collation | Nullable | Default | Storage  | Description
------------------+---------------+-----------+----------+---------+----------+-------------
 sys_id           | integer       |           |          |         | plain    |
 callsign         | character(10) |           |          |         | extended |
 uls_file_num     | character(14) |           |          |         | extended |
 radio_service    | text          |           |          |         | extended |
 license_status   | text          |           |          |         | extended |
 grant_date       | date          |           |          |         | plain    |
 effective_date   | date          |           |          |         | plain    |
 cancel_date      | date          |           |          |         | plain    |
 expire_date      | date          |           |          |         | plain    |
 end_date         | date          |           |          |         | plain    |
 available_date   | date          |           |          |         | plain    |
 last_action_date | date          |           |          |         | plain    |
View definition:
 SELECT lic_hd.sys_id,
    lic_hd.callsign,
    lic_hd.uls_file_num,
    (lic_hd.radio_service::text || ' - '::text) || COALESCE(( SELECT "RadioService".service_text
           FROM "RadioService"
          WHERE lic_hd.radio_service = "RadioService".service_id
         LIMIT 1), '???'::character varying)::text AS radio_service,
    (lic_hd.license_status::text || ' - '::text) || COALESCE(( SELECT "LicStatus".status_text
           FROM "LicStatus"
          WHERE lic_hd.license_status = "LicStatus".status_id
         LIMIT 1), '???'::character varying)::text AS license_status,
    lic_hd.grant_date,
    lic_hd.effective_date,
    lic_hd.cancel_date,
    lic_hd.expire_date,
    LEAST(lic_hd.cancel_date, lic_hd.expire_date) AS end_date,
        CASE
            WHEN lic_hd.cancel_date < lic_hd.expire_date THEN GREATEST((lic_hd.cancel_date + '2 years'::interval)::date, lic_hd.last_action_date + 30)
            WHEN lic_hd.license_status = 'A'::bpchar AND uls_date() > (lic_hd.expire_date + '2 years'::interval)::date THEN NULL::date
            ELSE (lic_hd.expire_date + '2 years'::interval)::date
        END + 1 AS available_date,
    lic_hd.last_action_date
   FROM lic_hd;

VIEW lic_hd:

=> \d+ lic_hd
                              View "Callsign.lic_hd"
      Column      |     Type      | Collation | Nullable | Default | Storage  | Description
------------------+---------------+-----------+----------+---------+----------+-------------
 sys_id           | integer       |           |          |         | plain    |
 callsign         | character(10) |           |          |         | extended |
 uls_file_num     | character(14) |           |          |         | extended |
 radio_service    | character(2)  |           |          |         | extended |
 license_status   | character(1)  |           |          |         | extended |
 grant_date       | date          |           |          |         | plain    |
 effective_date   | date          |           |          |         | plain    |
 cancel_date      | date          |           |          |         | plain    |
 expire_date      | date          |           |          |         | plain    |
 last_action_date | date          |           |          |         | plain    |
View definition:
 SELECT _lic_hd.sys_id,
    _lic_hd.callsign,
    _lic_hd.uls_file_num,
    _lic_hd.radio_service,
    _lic_hd.license_status,
    _lic_hd.grant_date,
    _lic_hd.effective_date,
    _lic_hd.cancel_date,
    _lic_hd.expire_date,
    _lic_hd.last_action_date
   FROM _lic_hd;

VIEW _lic_hd:

=> \d+ _lic_hd
                             View "Callsign._lic_hd"
      Column      |     Type      | Collation | Nullable | Default | Storage  | Description
------------------+---------------+-----------+----------+---------+----------+-------------
 sys_id           | integer       |           |          |         | plain    |
 callsign         | character(10) |           |          |         | extended |
 uls_file_num     | character(14) |           |          |         | extended |
 radio_service    | character(2)  |           |          |         | extended |
 license_status   | character(1)  |           |          |         | extended |
 grant_date       | date          |           |          |         | plain    |
 effective_date   | date          |           |          |         | plain    |
 cancel_date      | date          |           |          |         | plain    |
 expire_date      | date          |           |          |         | plain    |
 last_action_date | date          |           |          |         | plain    |
View definition:
 SELECT "_HD".unique_system_identifier AS sys_id,
    "_HD".callsign,
\"_HD\".uls_file_number AS uls_file_num,\n \"_HD\".radio_service_code AS radio_service,\n \"_HD\".license_status,\n \"_HD\".grant_date,\n \"_HD\".effective_date,\n \"_HD\".cancellation_date AS cancel_date,\n \"_HD\".expired_date AS expire_date,\n \"_HD\".last_action_date\n FROM \"UlsLic\".\"_HD\";\n\nTABLE \"UlsLic\".\"_HD\":\n\n=> \\d+ \"UlsLic\".\"_HD\"\n Table\n \"UlsLic._HD\"\n Column | Type | Collation\n | Nullable | Default | Storage | Stats target | Descr\n iption\n------------------------------+-----------------------+-----------+----------+---------+----------+--------------+------\n -------\n record_type | character(2) | \n | not null | | extended | |\n unique_system_identifier | integer | \n | not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n license_status | character(1) | \n | | | extended | |\n radio_service_code | character(2) | \n | | | extended | |\n grant_date | date | \n | | | plain | |\n expired_date | date | \n | | | plain | |\n cancellation_date | date | \n | | | plain | |\n eligibility_rule_num | character(10) | \n | | | extended | |\n applicant_type_code_reserved | character(1) | \n | | | extended | |\n alien | character(1) | \n | | | extended | |\n alien_government | character(1) | \n | | | extended | |\n alien_corporation | character(1) | \n | | | extended | |\n alien_officer | character(1) | \n | | | extended | |\n alien_control | character(1) | \n | | | extended | |\n revoked | character(1) | \n | | | extended | |\n convicted | character(1) | \n | | | extended | |\n adjudged | character(1) | \n | | | extended | |\n involved_reserved | character(1) | \n | | | extended | |\n common_carrier | character(1) | \n | | | extended | |\n non_common_carrier | character(1) | \n | | | extended | |\n private_comm | character(1) | \n | | | extended | |\n fixed | character(1) | \n | | | extended | |\n mobile | 
character(1) | \n | | | extended | |\n radiolocation | character(1) | \n | | | extended | |\n satellite | character(1) | \n | | | extended | |\n developmental_or_sta | character(1) | \n | | | extended | |\n interconnected_service | character(1) | \n | | | extended | |\n certifier_first_name | character varying(20) | \n | | | extended | |\n certifier_mi | character varying | \n | | | extended | |\n certifier_last_name | character varying | \n | | | extended | |\n certifier_suffix | character(3) | \n | | | extended | |\n certifier_title | character(40) | \n | | | extended | |\n gender | character(1) | \n | | | extended | |\n african_american | character(1) | \n | | | extended | |\n native_american | character(1) | \n | | | extended | |\n hawaiian | character(1) | \n | | | extended | |\n asian | character(1) | \n | | | extended | |\n white | character(1) | \n | | | extended | |\n ethnicity | character(1) | \n | | | extended | |\n effective_date | date | \n | | | plain | |\n last_action_date | date | \n | | | plain | |\n auction_id | integer | \n | | | plain | |\n reg_stat_broad_serv | character(1) | \n | | | extended | |\n band_manager | character(1) | \n | | | extended | |\n type_serv_broad_serv | character(1) | \n | | | extended | |\n alien_ruling | character(1) | \n | | | extended | |\n licensee_name_change | character(1) | \n | | | extended | |\n whitespace_ind | character(1) | \n | | | extended | |\n additional_cert_choice | character(1) | \n | | | extended | |\n additional_cert_answer | character(1) | \n | | | extended | |\n discontinuation_ind | character(1) | \n | | | extended | |\n regulatory_compliance_ind | character(1) | \n | | | extended | |\n dummy1 | character varying | \n | | | extended | |\n dummy2 | character varying | \n | | | extended | |\n dummy3 | character varying | \n | | | extended | |\n dummy4 | character varying | \n | | | extended | |\n Indexes:\n \"_HD_pkey\" PRIMARY KEY, btree (unique_system_identifier)\n \"_HD_callsign\" btree 
(callsign)\n \"_HD_grant_date\" btree (grant_date)\n \"_HD_last_action_date\" btree (last_action_date)\n \"_HD_uls_file_num\" btree (uls_file_number)\n Check constraints:\n \"_HD_record_type_check\" CHECK (record_type = 'HD'::bpchar)\n Foreign-key constraints:\n \"_HD_license_status_fkey\" FOREIGN KEY (license_status)\n REFERENCES \"FccLookup\".\"_LicStatus\"(status_id)\n \"_HD_radio_service_code_fkey\" FOREIGN KEY (radio_service_code)\n REFERENCES \"FccLookup\".\"_RadioService\"(service_id)\n Referenced by:\n TABLE \"\"UlsLic\".\"_AM\"\" CONSTRAINT\n \"_AM_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_CO\"\" CONSTRAINT\n \"_CO_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_EN\"\" CONSTRAINT\n \"_EN_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_HS\"\" CONSTRAINT\n \"_HS_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_LA\"\" CONSTRAINT\n \"_LA_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SC\"\" CONSTRAINT\n \"_SC_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n TABLE \"\"UlsLic\".\"_SF\"\" CONSTRAINT\n \"_SF_unique_system_identifier_fkey\" FOREIGN KEY\n (unique_system_identifier) REFEREN\n CES 
\"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON\n DELETE CASCADE\n\nVIEW lic_am_:\n\n => \\d+ lic_am_\n View \"Callsign.lic_am_\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n callsign_group | text | | \n | | extended |\n operator_group | text | | \n | | extended |\n operator_class | text | | \n | | extended |\n prev_class | text | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n vanity_type | text | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT lic_am.sys_id,\n lic_am.callsign,\n lic_am.uls_region,\n ( SELECT (\"CallsignGroup\".group_id::text || ' - '::text) ||\n \"CallsignGroup\".match_text::text\n FROM \"CallsignGroup\"\n WHERE lic_am.callsign ~ \"CallsignGroup\".pattern::text\n LIMIT 1) AS callsign_group,\n ( SELECT (oper_group.group_id::text || ' - '::text) ||\n oper_group.group_text::text\n FROM oper_group\n WHERE lic_am.operator_class = oper_group.class_id\n LIMIT 1) AS operator_group,\n (lic_am.operator_class::text || ' - '::text) || COALESCE((\n SELECT \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.operator_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS\n operator_class,\n (lic_am.prev_class::text || ' - '::text) || COALESCE(( SELECT\n \"OperatorClass\".class_text\n FROM \"OperatorClass\"\n WHERE lic_am.prev_class = \"OperatorClass\".class_id\n LIMIT 1), '???'::character varying)::text AS prev_class,\n lic_am.prev_callsign,\n (lic_am.vanity_type::text || ' - '::text) || COALESCE(( SELECT\n \"VanityType\".vanity_text\n FROM 
\"VanityType\"\n WHERE lic_am.vanity_type = \"VanityType\".vanity_id\n LIMIT 1), '???'::character varying)::text AS vanity_type,\n lic_am.is_trustee,\n lic_am.trustee_callsign,\n lic_am.trustee_name\n FROM lic_am;\n\nVIEW lic_am:\n\n=> \\d+ lic_am\n View \"Callsign.lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT _lic_am.sys_id,\n _lic_am.callsign,\n _lic_am.uls_region,\n _lic_am.uls_group,\n _lic_am.operator_class,\n _lic_am.prev_callsign,\n _lic_am.prev_class,\n _lic_am.vanity_type,\n _lic_am.is_trustee,\n _lic_am.trustee_callsign,\n _lic_am.trustee_name\n FROM _lic_am;\n\nVIEW _lic_am:\n\n=> \\d+ _lic_am\n View \"Callsign._lic_am\"\n Column | Type | Collation | Nullable |\n Default | Storage | Description\n------------------+-----------------------+-----------+----------+---------+----------+-------------\n sys_id | integer | | \n | | plain |\n callsign | character(10) | | \n | | extended |\n uls_region | \"MySql\".tinyint | | \n | | plain |\n uls_group | character(1) | | \n | | extended |\n operator_class | character(1) | | \n | | extended |\n prev_callsign | character(10) | | \n | | extended |\n prev_class | character(1) | | \n | | extended |\n vanity_type | character(1) | | \n | | extended |\n is_trustee | character(1) | | \n | | extended |\n 
trustee_callsign | character(10) | | \n | | extended |\n trustee_name | character varying(50) | | \n | | extended |\n View definition:\n SELECT \"_AM\".unique_system_identifier AS sys_id,\n \"_AM\".callsign,\n \"_AM\".region_code AS uls_region,\n \"_AM\".group_code AS uls_group,\n \"_AM\".operator_class,\n \"_AM\".previous_callsign AS prev_callsign,\n \"_AM\".previous_operator_class AS prev_class,\n \"_AM\".vanity_callsign_change AS vanity_type,\n \"_AM\".trustee_indicator AS is_trustee,\n \"_AM\".trustee_callsign,\n \"_AM\".trustee_name\n FROM \"UlsLic\".\"_AM\";\n\nTABLE \"UlsLic\".\"_AM\":\n\n=> \\d+ \"UlsLic\".\"_AM\"\n Table\n \"UlsLic._AM\"\n Column | Type | Collation |\n Nullable | Default | Storage | Stats target | Description\n----------------------------+-----------------------+-----------+----------+---------+----------+--------------+-------------\n record_type | character(2) | |\n not null | | extended | |\n unique_system_identifier | integer | |\n not null | | plain | |\n uls_file_number | character(14) | \n | | | extended | |\n ebf_number | character varying(30) | \n | | | extended | |\n callsign | character(10) | \n | | | extended | |\n operator_class | character(1) | \n | | | extended | |\n group_code | character(1) | \n | | | extended | |\n region_code | \"MySql\".tinyint | \n | | | plain | |\n trustee_callsign | character(10) | \n | | | extended | |\n trustee_indicator | character(1) | \n | | | extended | |\n physician_certification | character(1) | \n | | | extended | |\n ve_signature | character(1) | \n | | | extended | |\n systematic_callsign_change | character(1) | \n | | | extended | |\n vanity_callsign_change | character(1) | \n | | | extended | |\n vanity_relationship | character(12) | \n | | | extended | |\n previous_callsign | character(10) | \n | | | extended | |\n previous_operator_class | character(1) | \n | | | extended | |\n trustee_name | character varying(50) | \n | | | extended | |\n Indexes:\n \"_AM_pkey\" PRIMARY KEY, btree 
(unique_system_identifier)\n    \"_AM_callsign\" btree (callsign)\n    \"_AM_prev_callsign\" btree (previous_callsign)\n    \"_AM_trustee_callsign\" btree (trustee_callsign)\n Check constraints:\n    \"_AM_record_type_check\" CHECK (record_type = 'AM'::bpchar)\n Foreign-key constraints:\n    \"_AM_operator_class_fkey\" FOREIGN KEY (operator_class) REFERENCES \"FccLookup\".\"_OperatorClass\"(class_id)\n    \"_AM_previous_operator_class_fkey\" FOREIGN KEY (previous_operator_class) REFERENCES \"FccLookup\".\"_OperatorClass\"(class_id)\n    \"_AM_unique_system_identifier_fkey\" FOREIGN KEY (unique_system_identifier) REFERENCES \"UlsLic\".\"_HD\"(unique_system_identifier) ON UPDATE CASCADE ON DELETE CASCADE\n    \"_AM_vanity_callsign_change_fkey\" FOREIGN KEY (vanity_callsign_change) REFERENCES \"FccLookup\".\"_VanityType\"(vanity_id)",
"msg_date": "Fri, 28 May 2021 11:48:28 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> [Reposted to the proper list]\n>\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> at one point), gradually moving to v9.0 w/ replication in 2010. In\n> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> & was entirely satisfied with the result.\n>\n> In March of this year, AWS announced that v9.6 was nearing end of\n> support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> 2022, if users did not perform the upgrade earlier. My first attempt\n> was successful as far as the upgrade itself, but complex queries that\n> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>\n> I didn't have the time in March to diagnose the problem, other than\n> some futile adjustments to server parameters, so I reverted back to a\n> saved copy of my v9.6 data.\n>\n> On Sunday, being retired, I decided to attempt to solve the issue in\n> earnest. I have now spent five days (about 14 hours a day), trying\n> various things, including adding additional indexes. Keeping the v9.6\n> data online for web users, I've \"forked\" the data into new copies, &\n> updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> the same problem: As you will see below, it appears that versions 10\n> & above are doing a sequential scan of some of the \"large\" (200K rows)\n> tables. Note that the expected & actual run times both differ for\n> v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> definitions. 
With one exception, table definitions are from the FCC\n> (Federal Communications Commission); the view definitions are my own.\n>\n>\n>\n\nHave you tried reproducing these results outside RDS, say on an EC2\ninstance running vanilla PostgreSQL?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 15:08:19 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Also, did you check your RDS setting in AWS after upgrading? I run four databases in AWS. I found that the work_mem was set way low after an upgrade. I had to tweak many of my settings.\n\nLance\n\nFrom: Andrew Dunstan <andrew@dunslane.net>\nDate: Friday, May 28, 2021 at 2:08 PM\nTo: Dean Gibson (DB Administrator) <postgresql@mailpen.com>, pgsql-performance@lists.postgresql.org <pgsql-performance@lists.postgresql.org>\nSubject: Re: AWS forcing PG upgrade from v9.6 a disaster\n\nOn 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> [Reposted to the proper list]\n>\n> I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> at one point), gradually moving to v9.0 w/ replication in 2010. In\n> 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> & was entirely satisfied with the result.\n>\n> In March of this year, AWS announced that v9.6 was nearing end of\n> support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> 2022, if users did not perform the upgrade earlier. My first attempt\n> was successful as far as the upgrade itself, but complex queries that\n> normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>\n> I didn't have the time in March to diagnose the problem, other than\n> some futile adjustments to server parameters, so I reverted back to a\n> saved copy of my v9.6 data.\n>\n> On Sunday, being retired, I decided to attempt to solve the issue in\n> earnest. I have now spent five days (about 14 hours a day), trying\n> various things, including adding additional indexes. Keeping the v9.6\n> data online for web users, I've \"forked\" the data into new copies, &\n> updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> the same problem: As you will see below, it appears that versions 10\n> & above are doing a sequential scan of some of the \"large\" (200K rows)\n> tables. 
Note that the expected & actual run times both differ for\n> v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> definitions. With one exception, table definitions are from the FCC\n> (Federal Communications Commission); the view definitions are my own.\n>\n>\n>\n\nHave you tried reproducing these results outside RDS, say on an EC2\ninstance running vanilla PostgreSQL?\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$<https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>",
"msg_date": "Fri, 28 May 2021 19:18:59 +0000",
"msg_from": "\"Campbell, Lance\" <lance@illinois.edu>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Hi Lance,\n\nDid you customize the PG 12 DB Parameter group to be in sync as much as \npossible with the 9.6 RDS version?  Or are you using PG12 default DB \nParameter group?\n\nAre you using the same AWS Instance Class?\n\nDid you vacuum analyze all your tables after the upgrade to 12?\n\nRegards,\nMichael Vitale\n\nCampbell, Lance wrote on 5/28/2021 3:18 PM:\n>\n> Also, did you check your RDS setting in AWS after upgrading?  I run \n> four databases in AWS.  I found that the work_mem was set way low \n> after an upgrade.  I had to tweak many of my settings.\n>\n> Lance\n>\n> *From: *Andrew Dunstan <andrew@dunslane.net>\n> *Date: *Friday, May 28, 2021 at 2:08 PM\n> *To: *Dean Gibson (DB Administrator) <postgresql@mailpen.com>, \n> pgsql-performance@lists.postgresql.org \n> <pgsql-performance@lists.postgresql.org>\n> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>\n>\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> > [Reposted to the proper list]\n> >\n> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> > at one point), gradually moving to v9.0 w/ replication in 2010.  In\n> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> > & was entirely satisfied with the result.\n> >\n> > In March of this year, AWS announced that v9.6 was nearing end of\n> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> > 2022, if users did not perform the upgrade earlier.  My first attempt\n> > was successful as far as the upgrade itself, but complex queries that\n> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n> >\n> > I didn't have the time in March to diagnose the problem, other than\n> > some futile adjustments to server parameters, so I reverted back to a\n> > saved copy of my v9.6 data.\n> >\n> > On Sunday, being retired, I decided to attempt to solve the issue in\n> > earnest.  I have now spent five days (about 14 hours a day), trying\n> > various things, including adding additional indexes.  Keeping the v9.6\n> > data online for web users, I've \"forked\" the data into new copies, &\n> > updated them in turn to PostgreSQL v10, v11, v12, & v13.  All exhibit\n> > the same problem:  As you will see below, it appears that versions 10\n> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n> > tables.  Note that the expected & actual run times both differ for\n> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> > definitions.  With one exception, table definitions are from the FCC\n> > (Federal Communications Commission);  the view definitions are my own.\n> >\n> >\n> >\n>\n> Have you tried reproducing these results outside RDS, say on an EC2\n> instance running vanilla PostgreSQL?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB: \n> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$ \n> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;%21%21DZ3fjg%21tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$> \n>\n>\n>",
"msg_date": "Fri, 28 May 2021 15:38:58 -0400",
"msg_from": "MichaelDBA <MichaelDBA@sqlexec.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "The problem is the plan. The planner massively underestimated the number of\nrows arising from the _EN/_AM join.\n\nUsually postgres is pretty good about running ANALYZE as needed, but it\nmight be a good idea to run it manually to rule that out as a potential\nculprit.\n\n\nOn Fri, May 28, 2021 at 3:19 PM Campbell, Lance <lance@illinois.edu> wrote:\n\n> Also, did you check your RDS setting in AWS after upgrading? I run four\n> databases in AWS. I found that the work_mem was set way low after an\n> upgrade. I had to tweak many of my settings.\n>\n>\n>\n> Lance\n>\n>\n>\n> *From: *Andrew Dunstan <andrew@dunslane.net>\n> *Date: *Friday, May 28, 2021 at 2:08 PM\n> *To: *Dean Gibson (DB Administrator) <postgresql@mailpen.com>,\n> pgsql-performance@lists.postgresql.org <\n> pgsql-performance@lists.postgresql.org>\n> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>\n>\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> > [Reposted to the proper list]\n> >\n> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n> > at one point), gradually moving to v9.0 w/ replication in 2010. In\n> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n> > & was entirely satisfied with the result.\n> >\n> > In March of this year, AWS announced that v9.6 was nearing end of\n> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n> > 2022, if users did not perform the upgrade earlier. My first attempt\n> > was successful as far as the upgrade itself, but complex queries that\n> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n> >\n> > I didn't have the time in March to diagnose the problem, other than\n> > some futile adjustments to server parameters, so I reverted back to a\n> > saved copy of my v9.6 data.\n> >\n> > On Sunday, being retired, I decided to attempt to solve the issue in\n> > earnest. 
I have now spent five days (about 14 hours a day), trying\n> > various things, including adding additional indexes. Keeping the v9.6\n> > data online for web users, I've \"forked\" the data into new copies, &\n> > updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n> > the same problem: As you will see below, it appears that versions 10\n> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n> > tables. Note that the expected & actual run times both differ for\n> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n> > definitions. With one exception, table definitions are from the FCC\n> > (Federal Communications Commission); the view definitions are my own.\n> >\n> >\n> >\n>\n> Have you tried reproducing these results outside RDS, say on an EC2\n> instance running vanilla PostgreSQL?\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n>\n> --\n> Andrew Dunstan\n> EDB:\n> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$\n> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>\n>\n>\n>",
"msg_date": "Fri, 28 May 2021 15:39:23 -0400",
"msg_from": "Ryan Bair <ryandbair@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "The plan is also influenced by cost related and memory related config\nsettings such as random_page_cost and work_mem, right? Hence the questions\nif configs are matching or newer versions are using very conservative\n(default) settings.",
"msg_date": "Fri, 28 May 2021 14:11:09 -0600",
"msg_from": "Michael Lewis <mlewis@entrata.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n\nWhat sticks out for me are these two scans, which balloon from 50-60 \nheap fetches to 1.5M each.\n\n> -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n> (actual time=0.003..0.004 rows=1 loops=1487153)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \n> width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 1487153\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \n> time=0.001..0.001 rows=1 loops=1487153)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 1550706\n\nHow did you load the database? pg_dump -> psql/pg_restore?\n\nIf so, did you perform a VACUUM FREEZE after the load?\n\n\nRegards, Jan\n\n-- \nJan Wieck\nPostgres User since 1994\n\n\n",
"msg_date": "Fri, 28 May 2021 16:23:06 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 12:08, Andrew Dunstan wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> [Reposted to the proper list]\n>>\n>> ...\n>>\n>>\n>> Have you tried reproducing these results outside RDS, say on an EC2 instance running vanilla PostgreSQL?\n>>\n>> cheers, andrew\n>>\n>> --\n>> Andrew Dunstan\n>> EDB: https://www.enterprisedb.com\n\nThat is step #2 of my backup plan:\n\n 1. Create an EC2 instance running community v9.6. Once that is done\n    & running successfully, I'm golden for a long, long time.\n 2. If I am curious (& not worn out), take a snapshot of #1 & update it\n    to v13.\n\n\n-- Dean",
"msg_date": "Fri, 28 May 2021 13:33:18 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 12:18, Campbell, Lance wrote:\n>\n> Also, did you check your RDS setting in AWS after upgrading?  I run \n> four databases in AWS.  I found that the work_mem was set way low \n> after an upgrade.  I had to tweak many of my settings.\n>\n> Lance\n>\n>\n\nI've wondered a lot about work_mem.  The default setting (which I've \ntried) involves a formula, so I have no idea what the actual value is.  \nSince I have a db.t2.micro (now db.t3.micro) instance with only 1GB of \nRAM, I've tried a value of 8000. No difference.",
"msg_date": "Fri, 28 May 2021 13:37:41 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "pá 28. 5. 2021 v 21:39 odesílatel Ryan Bair <ryandbair@gmail.com> napsal:\n\n> The problem is the plan. The planner massively underestimated the number\n> of rows arising from the _EN/_AM join.\n>\n> Usually postgres is pretty good about running ANALYZE as needed, but it\n> might be a good idea to run it manually to rule that out as a potential\n> culprit.\n>\n\nyes\n\nthe very strange is pretty high planning time\n\n Planning Time: 173.753 ms\n\nThis is unusually high number - maybe the server has bad CPU or maybe some\nindexes bloating\n\nRegards\n\nPavel\n\nOn Fri, May 28, 2021 at 3:19 PM Campbell, Lance <lance@illinois.edu> wrote:\n>\n>> Also, did you check your RDS setting in AWS after upgrading? I run four\n>> databases in AWS. I found that the work_mem was set way low after an\n>> upgrade. I had to tweak many of my settings.\n>>\n>>\n>>\n>> Lance\n>>\n>>\n>>\n>> *From: *Andrew Dunstan <andrew@dunslane.net>\n>> *Date: *Friday, May 28, 2021 at 2:08 PM\n>> *To: *Dean Gibson (DB Administrator) <postgresql@mailpen.com>,\n>> pgsql-performance@lists.postgresql.org <\n>> pgsql-performance@lists.postgresql.org>\n>> *Subject: *Re: AWS forcing PG upgrade from v9.6 a disaster\n>>\n>>\n>> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> > [Reposted to the proper list]\n>> >\n>> > I started to use PostgreSQL v7.3 in 2003 on my home Linux systems (4\n>> > at one point), gradually moving to v9.0 w/ replication in 2010. In\n>> > 2017 I moved my 20GB database to AWS/RDS, gradually upgrading to v9.6,\n>> > & was entirely satisfied with the result.\n>> >\n>> > In March of this year, AWS announced that v9.6 was nearing end of\n>> > support, & AWS would forcibly upgrade everyone to v12 on January 22,\n>> > 2022, if users did not perform the upgrade earlier. 
My first attempt\n>> > was successful as far as the upgrade itself, but complex queries that\n>> > normally ran in a couple of seconds on v9.x, were taking minutes in v12.\n>> >\n>> > I didn't have the time in March to diagnose the problem, other than\n>> > some futile adjustments to server parameters, so I reverted back to a\n>> > saved copy of my v9.6 data.\n>> >\n>> > On Sunday, being retired, I decided to attempt to solve the issue in\n>> > earnest. I have now spent five days (about 14 hours a day), trying\n>> > various things, including adding additional indexes. Keeping the v9.6\n>> > data online for web users, I've \"forked\" the data into new copies, &\n>> > updated them in turn to PostgreSQL v10, v11, v12, & v13. All exhibit\n>> > the same problem: As you will see below, it appears that versions 10\n>> > & above are doing a sequential scan of some of the \"large\" (200K rows)\n>> > tables. Note that the expected & actual run times both differ for\n>> > v9.6 & v13.2, by more than *two orders of magnitude*. Rather than post\n>> > a huge eMail (ha ha), I'll start with this one, that shows an \"EXPLAIN\n>> > ANALYZE\" from both v9.6 & v13.2, followed by the related table & view\n>> > definitions. With one exception, table definitions are from the FCC\n>> > (Federal Communications Commission); the view definitions are my own.\n>> >\n>> >\n>> >\n>>\n>> Have you tried reproducing these results outside RDS, say on an EC2\n>> instance running vanilla PostgreSQL?\n>>\n>>\n>> cheers\n>>\n>>\n>> andrew\n>>\n>>\n>>\n>> --\n>> Andrew Dunstan\n>> EDB:\n>> https://urldefense.com/v3/__https://www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$\n>> <https://urldefense.com/v3/__https:/www.enterprisedb.com__;!!DZ3fjg!tiFTfkNeARuU_vwxOHZfrJvVXj8kYMPJqa1tO5Fnv75UbERS8ZAmUoNFl_g2EVyL$>\n>>\n>>\n>>",
"msg_date": "Fri, 28 May 2021 22:38:10 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 4:23 PM, Jan Wieck wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>\n> What sticks out for me are these two scans, which balloon from 50-60\n> heap fetches to 1.5M each.\n>\n>> -> Nested Loop (cost=0.29..0.68 rows=1\n>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n>> \"_Territory\".country_id)\n>> Rows Removed by Join Filter: 0\n>> -> Index Only Scan using\n>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>> Index Cond: (iso_alpha2 =\n>> \"_GovtRegion\".country_id)\n>> Heap Fetches: 1487153\n>> -> Index Only Scan using\n>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>> Index Cond: (territory_id =\n>> \"_GovtRegion\".territory_id)\n>> Heap Fetches: 1550706\n>\n> How did you load the database? pg_dump -> psql/pg_restore?\n>\n> If so, did you perform a VACUUM FREEZE after the load?\n>\n>\n>\n\nJan\n\n\nAIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\nassume you would know better than him or me what it actually does do :-)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 17:15:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021 at 05:15:33PM -0400, Andrew Dunstan wrote:\n> > How did you load the database? pg_dump -> psql/pg_restore?\n> >\n> > If so, did you perform a VACUUM FREEZE after the load?\n> \n> Jan\n> \n> \n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n\nI think it uses pg_upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Fri, 28 May 2021 17:30:52 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "I recently did 20 upgrades from 9.6 to 12.4 and 12.5. No issues and the upgrade process uses pg_upgrade. I don’t know if AWS modified it though. \n\nBob\n\nSent from my PDP11\n\n> On May 28, 2021, at 5:15 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n>> On 5/28/21 4:23 PM, Jan Wieck wrote:\n>> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>> \n>> What sticks out for me are these two scans, which balloon from 50-60\n>> heap fetches to 1.5M each.\n>> \n>>> -> Nested Loop (cost=0.29..0.68 rows=1\n>>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>>> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n>>> \"_Territory\".country_id)\n>>> Rows Removed by Join Filter: 0\n>>> -> Index Only Scan using\n>>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n>>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>>> Index Cond: (iso_alpha2 =\n>>> \"_GovtRegion\".country_id)\n>>> Heap Fetches: 1487153\n>>> -> Index Only Scan using\n>>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n>>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>>> Index Cond: (territory_id =\n>>> \"_GovtRegion\".territory_id)\n>>> Heap Fetches: 1550706\n>> \n>> How did you load the database? pg_dump -> psql/pg_restore?\n>> \n>> If so, did you perform a VACUUM FREEZE after the load?\n>> \n>> \n>> \n> \n> Jan\n> \n> \n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n> \n> \n> cheers\n> \n> \n> andrew\n> \n> \n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n> \n> \n> \n\n\n\n",
"msg_date": "Fri, 28 May 2021 18:09:50 -0400",
"msg_from": "Bob Lunney <bob_lunney@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 13:23, Jan Wieck wrote:\n> On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n>\n> What sticks out for me are these two scans, which balloon from 50-60 \n> heap fetches to 1.5M each.\n>\n>> -> Nested Loop (cost=0.29..0.68 rows=1 \n>> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n>> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n>> \"_Territory\".country_id)\n>> Rows Removed by Join Filter: 0\n>> -> Index Only Scan using \n>> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 \n>> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n>> Index Cond: (iso_alpha2 = \n>> \"_GovtRegion\".country_id)\n>> Heap Fetches: 1487153\n>> -> Index Only Scan using \n>> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) \n>> (actual time=0.001..0.001 rows=1 loops=1487153)\n>> Index Cond: (territory_id = \n>> \"_GovtRegion\".territory_id)\n>> Heap Fetches: 1550706\n>\n> How did you load the database? pg_dump -> psql/pg_restore?\n>\n> If so, did you perform a VACUUM FREEZE after the load?\n>\n> Regards, Jan\n\nIt was RDS's \"upgrade in place\". 
According to the PostgreSQL site, for \nv9.4 & v12: /\"Aggressive freezing is always performed when the table is \nrewritten, so this option is redundant when //|FULL|//is specified.\"/\n\nI did a VACUUM FULL.",
"msg_date": "Fri, 28 May 2021 15:13:58 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021, 17:15 Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 5/28/21 4:23 PM, Jan Wieck wrote:\n> > On 5/28/21 2:48 PM, Dean Gibson (DB Administrator) wrote:\n> >\n> > What sticks out for me are these two scans, which balloon from 50-60\n> > heap fetches to 1.5M each.\n> >\n> >> -> Nested Loop (cost=0.29..0.68 rows=1\n> >> width=7) (actual time=0.003..0.004 rows=1 loops=1487153)\n> >> Join Filter: (\"_IsoCountry\".iso_alpha2 =\n> >> \"_Territory\".country_id)\n> >> Rows Removed by Join Filter: 0\n> >> -> Index Only Scan using\n> >> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38\n> >> rows=1 width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> >> Index Cond: (iso_alpha2 =\n> >> \"_GovtRegion\".country_id)\n> >> Heap Fetches: 1487153\n> >> -> Index Only Scan using\n> >> \"_Territory_pkey\" on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n> >> (actual time=0.001..0.001 rows=1 loops=1487153)\n> >> Index Cond: (territory_id =\n> >> \"_GovtRegion\".territory_id)\n> >> Heap Fetches: 1550706\n> >\n> > How did you load the database? pg_dump -> psql/pg_restore?\n> >\n> > If so, did you perform a VACUUM FREEZE after the load?\n> >\n> >\n> >\n>\n> Jan\n>\n>\n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does do :-)\n>\n\nSince I am not working at AWS I can't tell for sure. ;)\n\nIt used to perform a binary pgupgrade. But that also has issues with xids\nand freezing. 
So I would throw a cluster wide vac-freeze in there for good\nmeasure, Sir.\n\n\nBest Regards, Jan\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Fri, 28 May 2021 22:27:05 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 5/28/21 10:27 PM, Jan Wieck wrote:\n>\n>\n> On Fri, May 28, 2021, 17:15 Andrew Dunstan <andrew@dunslane.net\n> <mailto:andrew@dunslane.net>> wrote:\n>\n>\n>\n>\n> AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> assume you would know better than him or me what it actually does\n> do :-)\n>\n>\n> Since I am not working at AWS I can't tell for sure. ;)\n\n\nApologies, my mistake then.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 28 May 2021 22:41:08 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\n\n> On May 28, 2021, at 14:30, Bruce Momjian <bruce@momjian.us> wrote:\n> I think it uses pg_upgrade.\n\nIt does. It does not, however, do the vacuum analyze step afterwards. A VACUUM (FULL, ANALYZE) should take care of that, and I believe the OP said he had done that after the pg_upgrade.\n\nThe most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n\nThat being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n\nIt might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n\n",
"msg_date": "Fri, 28 May 2021 19:43:23 -0700",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 19:43, Christophe Pettus wrote:\n> ...\n> The most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n>\n> That being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n>\n> It might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n\nI spent quite a bit of time over the past five days experimenting with \nvarious parameter values, to no avail, but I don't mind trying some more.\n\nI have other queries that fail even more spectacularly, & they all seem \nto involve a generated table like the \"club\" one in my example. I have \nan idea that I might try, in effectively changing the order of \nevaluation. I'll have to think about that. Thanks for the suggestion! \nHowever, one \"shouldn't\" have to tinker with the order of stuff in SQL; \nthat's one of the beauties of the language: the \"compiler\" (planner) is \nsupposed to figure that all out. And for me, that's been true for the \npast 15 years with PostgreSQL.\n\nNote that this problem is not unique to v13. It happened with upgrades \nto v10, 11, &12. So, some fundamental change was made back then (at \nleast in the RDS version). Since I need a bulletproof backup past next \nJanuary, I think my next task will be to get an EC2 instance running \nv9.6, where AWS can't try to upgrade it. 
Then, at my leisure, I can \nfiddle with upgrading.",
"msg_date": "Fri, 28 May 2021 21:08:28 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 05/29/21 07:08, Dean Gibson (DB Administrator) wrote:\n> On 2021-05-28 19:43, Christophe Pettus wrote:\n>> ...\n>> The most common reason for this kind of inexplicable stuff after an RDS upgrade is, as others have said, parameter changes, since you get a new default parameter group after the upgrade.\n>>\n>> That being said, this does look like something happened to the planner to cause it to pick a worse plan in v13. The deeply nested views make it kind of hard to pin down, but the core issue appears to be in the \"good\" plan, it evaluates the _Club.club_count > 5 relatively early, which greatly limits the number of rows that it handles elsewhere in the query. Why the plan change, I can't say.\n>>\n>> It might be worth creating a materialized CTE that grabs the \"club_count > 5\" set and uses that, instead of having it at the top level predicates.\n>\n> I spent quite a bit of time over the past five days experimenting with \n> various parameter values, to no avail, but I don't mind trying some more.\n>\n> I have other queries that fail even more spectacularly, & they all \n> seem to involve a generated table like the \"club\" one in my example. \n> I have an idea that I might try, in effectively changing the order of \n> evaluation. I'll have to think about that. Thanks for the \n> suggestion! However, one \"shouldn't\" have to tinker with the order of \n> stuff in SQL; that's one of the beauties of the language: the \n> \"compiler\" (planner) is supposed to figure that all out. And for me, \n> that's been true for the past 15 years with PostgreSQL.\n>\n> Note that this problem is not unique to v13. It happened with \n> upgrades to v10, 11, &12. So, some fundamental change was made back \n> then (at least in the RDS version). Since I need a bulletproof backup \n> past next January, I think my next task will be to get an EC2 instance \n> running v9.6, where AWS can't try to upgrade it. 
Then, at my leisure, \n> I can fiddle with upgrading.\n\nBTW what is the planner reason to not use index in v13.2? Is index in \ncorrupted state? Have you try to reindex index \n\"FccLookup\".\"_LicStatus_pkey\" ?\n\n1.5M of seqscan's are looking really bad.\n\n SubPlan 2\n                           ->  Limit  (cost=0.15..8.17 rows=1 width=32) \n(actual time=0.006..0.007 rows=1 loops=55)\n                                 ->  *Index Scan using \"_LicStatus_pkey\" on \n\"_LicStatus\"*  (cost=0.15..8.17 rows=1 width=32) (actual \ntime=0.005..0.005 rows=1 loops=55)\n                                       Index Cond: (\"_HD\".license_status = \nstatus_id)\n\n\nSubPlan 2\n                           ->  Limit  (cost=0.00..1.07 rows=1 width=13) \n(actual time=0.001..0.001 rows=1 loops=1487153)\n                                 ->  *Seq Scan on \"_LicStatus\"* \n(cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \nloops=1487153)\n                                       Filter: (\"_HD\".license_status = \nstatus_id)\n                                       Rows Removed by Filter: 1",
"msg_date": "Sat, 29 May 2021 08:24:37 +0300",
"msg_from": "Alexey M Boltenkov <padrebolt@yandex.ru>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Fri, May 28, 2021, 22:41 Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 5/28/21 10:27 PM, Jan Wieck wrote:\n> >\n> >\n> > On Fri, May 28, 2021, 17:15 Andrew Dunstan <andrew@dunslane.net\n> > <mailto:andrew@dunslane.net>> wrote:\n> >\n> >\n> >\n> >\n> >     AIUI he did an RDS upgrade. Surely that's not doing a dump/restore? I\n> >     assume you would know better than him or me what it actually does\n> >     do :-)\n> >\n> >\n> > Since I am not working at AWS I can't tell for sure. ;)\n>\n>\n> Apologies, my mistake then.\n>\n\nNo need to apologize, you were correct two months ago.\n\n\nBest Regards, Jan\n\n\n\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Sat, 29 May 2021 07:39:29 -0400",
"msg_from": "Jan Wieck <jan@wi3ck.info>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-28 22:24, Alexey M Boltenkov wrote:\n> On 05/29/21 07:08, Dean Gibson (DB Administrator) wrote: [deleted]\n>\n> BTW what is the planner reason to not use index in v13.2? Is index in \n> corrupted state? Have you try to reindex index \n> \"FccLookup\".\"_LicStatus_pkey\" ?\n>\n> 1.5M of seqscan's are looking really bad.\n>\n> SubPlan 2\n>                            ->  Limit  (cost=0.15..8.17 rows=1 width=32) \n> (actual time=0.006..0.007 rows=1 loops=55)\n>                                  ->  *Index Scan using \"_LicStatus_pkey\" on \n> \"_LicStatus\"*  (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 rows=1 loops=55)\n>                                        Index Cond: (\"_HD\".license_status = \n> status_id)\n>\n>\n> SubPlan 2\n>                            ->  Limit  (cost=0.00..1.07 rows=1 width=13) \n> (actual time=0.001..0.001 rows=1 loops=1487153)\n>                                  ->  *Seq Scan on \"_LicStatus\"* \n> (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \n> loops=1487153)\n>                                        Filter: (\"_HD\".license_status = \n> status_id)\n>                                        Rows Removed by Filter: 1\n>\n\nDoing your REINDEX didn't help.  Now in the process of reindexing the \nentire database.  When that's done, I'll let you know if there is any \nimprovement.",
"msg_date": "Sat, 29 May 2021 13:17:40 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "*SOLVED !!!* Below is the *new* EXPLAIN ANALYZE for *13.2* on AWS RDS \n(with *no changes* to server parameters) along with the prior EXPLAIN \nANALYZE outputs for easy comparison.\n\nWhile I didn't discount the significance & effect of optimizing the \nserver parameters, this problem always seemed to me like a fundamental \ndifference in how the PostgreSQL planner viewed the structure of the \nquery. In particular, I had a usage pattern of writing VIEWS that \nworked very well with v9.6 & prior versions, but which made me suspect a \nroute of attack:\n\nSince the FCC tables contain lots of one-character codes for different \nconditions, to simplify maintenance & displays to humans, I created over \ntwenty tiny lookup tables (a dozen or so entries in each table), to \nrender a human-readable field as a replacement for the original \none-character field in many of the VIEWs. In some cases those \n\"humanized\" fields were used as conditions in SELECT statements. Of \ncourse, fields that are not referenced or selected for output from a \nparticular query, never get looked up (an advantage over using a JOIN \nfor each lookup). In some cases, for ease of handling multiple or \ncomplex lookups, I indeed used a JOIN. All this worked fine until v10.\n\nHere's the FROM clause that bit me:\n\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\nThe first two JOINs are not the problem, & are in fact retained in my \nsolution. The problem is the third JOIN, where \"fips_county\" from \n\"County\" is actually matched with the corresponding field from the \n\"zip_code\" VIEW. Works fine, if you don't mind the performance impact \nin v10 & above. It has now been rewritten, to be a sub-query for an \noutput field. Voila ! Back to sub-second query times.\n\nThis also solved performance issues with other queries as well. 
I also \nnow use lookup values as additional fields in the output, in addition to \nthe original fields, which should help some more (but means some changes \nto some web pages that do queries).\n\n-- Dean\n\nps: I wonder how many other RDS users of v9.6 are going to get a very \nrude awakening *very soon*, as AWS is not allowing new instances of v9.6 \nafter *August 2* (see https://forums.aws.amazon.com/ann.jspa?annID=8499 \n). Whether that milestone affects restores from snapshots, remains to \nbe seen (by others, not by me). In other words, users should plan to be \nup & running on a newer version well before August. Total cost to me? \nI\"m in my *8th day* of dealing with this, & I still have a number of web \npages to update, due to changes in SQL field names to manage this mess. \nThis was certainly not a obvious solution.\n\n*Here's from 13.2 (new):*\n\n=> EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \ncallsign AS trustee_callsign, applicant_type, entity_name, licensee_id \nAS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY extra_count \nDESC, club_count DESC, entity_name;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=457.77..457.77 rows=1 width=64) (actual \ntime=48.737..48.742 rows=43 loops=1)\n Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n\"_EN\".entity_name\n Sort Method: quicksort Memory: 31kB\n -> Nested Loop Left Join (cost=1.57..457.76 rows=1 width=64) \n(actual time=1.796..48.635 rows=43 loops=1)\n -> Nested Loop (cost=1.28..457.07 rows=1 width=71) (actual \ntime=1.736..48.239 rows=43 loops=1)\n Join Filter: ((\"_EN\".country_id = \n\"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n Rows Removed by Join Filter: 1297\n -> Nested Loop (cost=1.28..453.75 rows=1 width=70) \n(actual time=1.720..47.778 rows=43 loops=1)\n Join Filter: 
((\"_HD\".unique_system_identifier = \n\"_EN\".unique_system_identifier) AND (\"_HD\".callsign = \"_EN\".callsign))\n -> Nested Loop (cost=0.85..450.98 rows=1 \nwidth=65) (actual time=1.207..34.912 rows=43 loops=1)\n -> Nested Loop (cost=0.43..376.57 rows=27 \nwidth=50) (actual time=0.620..20.956 rows=43 loops=1)\n -> Seq Scan on \"_Club\" \n(cost=0.00..4.44 rows=44 width=35) (actual time=0.037..0.067 rows=44 \nloops=1)\n Filter: (club_count >= 5)\n Rows Removed by Filter: 151\n -> Index Scan using \"_HD_callsign\" on \n\"_HD\" (cost=0.43..8.45 rows=1 width=15) (actual time=0.474..0.474 \nrows=1 loops=44)\n Index Cond: (callsign = \n\"_Club\".trustee_callsign)\n Filter: (license_status = \n'A'::bpchar)\n Rows Removed by Filter: 0\n -> Index Scan using \"_AM_pkey\" on \"_AM\" \n(cost=0.43..2.75 rows=1 width=15) (actual time=0.323..0.323 rows=1 loops=43)\n Index Cond: (unique_system_identifier \n= \"_HD\".unique_system_identifier)\n Filter: (\"_HD\".callsign = callsign)\n -> Index Scan using \"_EN_pkey\" on \"_EN\" \n(cost=0.43..2.75 rows=1 width=60) (actual time=0.298..0.298 rows=1 loops=43)\n Index Cond: (unique_system_identifier = \n\"_AM\".unique_system_identifier)\n Filter: (\"_AM\".callsign = callsign)\n -> Seq Scan on \"_GovtRegion\" (cost=0.00..1.93 rows=93 \nwidth=7) (actual time=0.002..0.004 rows=31 loops=43)\n -> Nested Loop (cost=0.29..0.68 rows=1 width=7) (actual \ntime=0.008..0.008 rows=1 loops=43)\n -> Index Only Scan using \"_IsoCountry_iso_alpha2_key\" \non \"_IsoCountry\" (cost=0.14..0.38 rows=1 width=3) (actual \ntime=0.004..0.004 rows=1 loops=43)\n Index Cond: (iso_alpha2 = \"_GovtRegion\".country_id)\n Heap Fetches: 43\n -> Index Only Scan using \"_Territory_pkey\" on \n\"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual time=0.003..0.003 \nrows=1 loops=43)\n Index Cond: ((country_id = \n\"_IsoCountry\".iso_alpha2) AND (territory_id = \"_GovtRegion\".territory_id))\n Heap Fetches: 43\n Planning Time: 4.017 ms\n Execution Time: 48.822 
ms\n\n\nOn 2021-05-28 11:48, Dean Gibson (DB Administrator) wrote:\n> ...\n>\n> *Here's from v9.6:*\n>\n> => EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \n> callsign AS trustee_callsign, applicant_type, entity_name, licensee_id \n> AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY \n> extra_count DESC, club_count DESC, entity_name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=407.13..407.13 rows=1 width=94) (actual \n> time=348.850..348.859 rows=43 loops=1)\n> Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n> \"_EN\".entity_name\n> Sort Method: quicksort Memory: 31kB\n> -> Nested Loop (cost=4.90..407.12 rows=1 width=94) (actual \n> time=7.587..348.732 rows=43 loops=1)\n> -> Nested Loop (cost=4.47..394.66 rows=1 width=94) (actual \n> time=5.740..248.149 rows=43 loops=1)\n> -> Nested Loop Left Join (cost=4.04..382.20 rows=1 \n> width=79) (actual time=2.458..107.908 rows=55 loops=1)\n> -> Hash Join (cost=3.75..380.26 rows=1 \n> width=86) (actual time=2.398..106.990 rows=55 loops=1)\n> Hash Cond: ((\"_EN\".country_id = \n> \"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n> -> Nested Loop (cost=0.43..376.46 rows=47 \n> width=94) (actual time=2.294..106.736 rows=55 loops=1)\n> -> Seq Scan on \"_Club\" \n> (cost=0.00..4.44 rows=44 width=35) (actual time=0.024..0.101 rows=44 \n> loops=1)\n> Filter: (club_count >= 5)\n> Rows Removed by Filter: 151\n> -> Index Scan using \"_EN_callsign\" \n> on \"_EN\" (cost=0.43..8.45 rows=1 width=69) (actual time=2.179..2.420 \n> rows=1 loops=44)\n> Index Cond: (callsign = \n> \"_Club\".trustee_callsign)\n> -> Hash (cost=1.93..1.93 rows=93 width=7) \n> (actual time=0.071..0.071 rows=88 loops=1)\n> Buckets: 1024 Batches: 1 Memory \n> Usage: 12kB\n> -> Seq Scan on \"_GovtRegion\" \n> 
(cost=0.00..1.93 rows=93 width=7) (actual time=0.010..0.034 rows=93 \n> loops=1)\n> -> Nested Loop (cost=0.29..1.93 rows=1 width=7) \n> (actual time=0.012..0.014 rows=1 loops=55)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..1.62 rows=1 \n> width=3) (actual time=0.006..0.006 rows=1 loops=55)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 55\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7)\n> (actual time=0.004..0.005 rows=1 loops=55)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 59\n> -> Index Scan using \"_HD_pkey\" on \"_HD\" \n> (cost=0.43..12.45 rows=1 width=15) (actual time=2.548..2.548 rows=1 \n> loops=55)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: ((\"_EN\".callsign = callsign) AND \n> (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n> '???'::character varying))::text))::character(1) = 'A'::bpchar))\n> Rows Removed by Filter: 0\n> SubPlan 2\n> -> Limit (cost=0.15..8.17 rows=1 width=32) \n> (actual time=0.006..0.007 rows=1 loops=55)\n> -> Index Scan using \"_LicStatus_pkey\" on \n> \"_LicStatus\" (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 rows=1 loops=55)\n> Index Cond: (\"_HD\".license_status = \n> status_id)\n> -> Index Scan using \"_AM_pkey\" on \"_AM\" (cost=0.43..4.27 \n> rows=1 width=15) (actual time=2.325..2.325 rows=1 loops=43)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: (\"_EN\".callsign = callsign)\n> SubPlan 1\n> -> Limit (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.007..0.007 rows=1 loops=43)\n> -> Index Scan using \"_ApplicantType_pkey\" on \n> \"_ApplicantType\" (cost=0.15..8.17 rows=1 width=32) (actual \n> time=0.005..0.005 
rows=1 loops=43)\n> Index Cond: (\"_EN\".applicant_type_code = \n> app_type_id)\n> Planning time: 13.490 ms\n> Execution time: 349.182 ms\n> (43 rows)\n>\n>\n> *Here's from v13.2:*\n>\n> => EXPLAIN ANALYZE SELECT club_count, extra_count, region_count, \n> callsign AS trustee_callsign, applicant_type, entity_name, licensee_id \n> AS _lid FROM genclub_multi_ WHERE club_count >= 5 ORDER BY \n> extra_count DESC, club_count DESC, entity_name;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Sort (cost=144365.60..144365.60 rows=1 width=94) (actual \n> time=31898.860..31901.922 rows=43 loops=1)\n> Sort Key: \"_Club\".extra_count DESC, \"_Club\".club_count DESC, \n> \"_EN\".entity_name\n> Sort Method: quicksort Memory: 31kB\n> -> Nested Loop (cost=58055.66..144365.59 rows=1 width=94) (actual \n> time=6132.403..31894.233 rows=43 loops=1)\n> -> Nested Loop (cost=58055.51..144364.21 rows=1 width=62) \n> (actual time=1226.085..30337.921 rows=837792 loops=1)\n> -> Nested Loop Left Join (cost=58055.09..144360.38 \n> rows=1 width=59) (actual time=1062.414..12471.456 rows=1487153 loops=1)\n> -> Hash Join (cost=58054.80..144359.69 rows=1 \n> width=66) (actual time=1061.330..6635.041 rows=1487153 loops=1)\n> Hash Cond: ((\"_EN\".unique_system_identifier \n> = \"_AM\".unique_system_identifier) AND (\"_EN\".callsign = \"_AM\".callsign))\n> -> Hash Join (cost=3.33..53349.72 \n> rows=1033046 width=51) (actual time=2.151..3433.178 rows=1487153 loops=1)\n> Hash Cond: ((\"_EN\".country_id = \n> \"_GovtRegion\".country_id) AND (\"_EN\".state = \"_GovtRegion\".territory_id))\n> -> Seq Scan on \"_EN\" \n> (cost=0.00..45288.05 rows=1509005 width=60) (actual \n> time=0.037..2737.054 rows=1508736 loops=1)\n> -> Hash (cost=1.93..1.93 rows=93 \n> width=7) (actual time=0.706..1.264 rows=88 loops=1)\n> Buckets: 1024 Batches: 1 \n> 
Memory Usage: 12kB\n> -> Seq Scan on \"_GovtRegion\" \n> (cost=0.00..1.93 rows=93 width=7) (actual time=0.013..0.577 rows=93 \n> loops=1)\n> -> Hash (cost=28093.99..28093.99 \n> rows=1506699 width=15) (actual time=1055.587..1055.588 rows=1506474 \n> loops=1)\n> Buckets: 131072 Batches: 32 Memory \n> Usage: 3175kB\n> -> Seq Scan on \"_AM\" \n> (cost=0.00..28093.99 rows=1506699 width=15) (actual \n> time=0.009..742.774 rows=1506474 loops=1)\n> -> Nested Loop (cost=0.29..0.68 rows=1 width=7) \n> (actual time=0.003..0.004 rows=1 loops=1487153)\n> Join Filter: (\"_IsoCountry\".iso_alpha2 = \n> \"_Territory\".country_id)\n> Rows Removed by Join Filter: 0\n> -> Index Only Scan using \n> \"_IsoCountry_iso_alpha2_key\" on \"_IsoCountry\" (cost=0.14..0.38 rows=1 \n> width=3) (actual time=0.001..0.002 rows=1 loops=1487153)\n> Index Cond: (iso_alpha2 = \n> \"_GovtRegion\".country_id)\n> Heap Fetches: 1487153\n> -> Index Only Scan using \"_Territory_pkey\" \n> on \"_Territory\" (cost=0.14..0.29 rows=1 width=7) (actual \n> time=0.001..0.001 rows=1 loops=1487153)\n> Index Cond: (territory_id = \n> \"_GovtRegion\".territory_id)\n> Heap Fetches: 1550706\n> -> Index Scan using \"_HD_pkey\" on \"_HD\" \n> (cost=0.43..3.82 rows=1 width=15) (actual time=0.012..0.012 rows=1 \n> loops=1487153)\n> Index Cond: (unique_system_identifier = \n> \"_EN\".unique_system_identifier)\n> Filter: ((\"_EN\".callsign = callsign) AND \n> (((((license_status)::text || ' - '::text) || (COALESCE((SubPlan 2), \n> '???'::character varying))::text))::character(1) = 'A'::bpchar))\n> Rows Removed by Filter: 0\n> SubPlan 2\n> -> Limit (cost=0.00..1.07 rows=1 width=13) \n> (actual time=0.001..0.001 rows=1 loops=1487153)\n> -> Seq Scan on \"_LicStatus\" \n> (cost=0.00..1.07 rows=1 width=13) (actual time=0.000..0.000 rows=1 \n> loops=1487153)\n> Filter: (\"_HD\".license_status = \n> status_id)\n> Rows Removed by Filter: 1\n> -> Index Scan using \"_Club_pkey\" on \"_Club\" (cost=0.14..0.17 \n> rows=1 width=35) 
(actual time=0.002..0.002 rows=0 loops=837792)\n> Index Cond: (trustee_callsign = \"_EN\".callsign)\n> Filter: (club_count >= 5)\n> Rows Removed by Filter: 0\n> SubPlan 1\n> -> Limit (cost=0.00..1.20 rows=1 width=15) (actual \n> time=0.060..0.060 rows=1 loops=43)\n> -> Seq Scan on \"_ApplicantType\" (cost=0.00..1.20 \n> rows=1 width=15) (actual time=0.016..0.016 rows=1 loops=43)\n> Filter: (\"_EN\".applicant_type_code = app_type_id)\n> Rows Removed by Filter: 7\n> Planning Time: 173.753 ms\n> Execution Time: 31919.601 ms\n> (46 rows)\n>",
"msg_date": "Sun, 30 May 2021 20:07:29 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\n\n> On May 30, 2021, at 20:07, Dean Gibson (DB Administrator) <postgresql@mailpen.com> wrote:\n> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n\nIf, rather than a subquery, you explicitly called out the join criteria with ON, did it have the same performance benefit?\n\n\n\n",
"msg_date": "Sun, 30 May 2021 20:41:28 -0700",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-30 20:41, Christophe Pettus wrote:\n> On May 30, 2021, at 20:07, Dean Gibson (DB Administrator) \n> <postgresql@mailpen.com> wrote:\n>> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n> If, rather than a subquery, you explicitly called out the join criteria with ON, did it have the same performance benefit?\n\nI thought that having a \"USING\" clause, was semantically equivalent to \nan \"ON\" clause with the equalities explicitly stated. So no, I didn't \ntry that.\n\nThe matching that occurred is *exactly *what I wanted. I just didn't \nwant the performance impact.\n\n\n\n\n\n\n\nOn 2021-05-30 20:41, Christophe Pettus\n wrote:\n\nOn\n May 30, 2021, at 20:07, Dean Gibson (DB Administrator)\n <postgresql@mailpen.com> wrote:\n \nThe first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n\n\n\nIf, rather than a subquery, you explicitly called out the join criteria with ON, did it have the same performance benefit?\n\n\n\n I thought that having a \"USING\" clause, was semantically equivalent\n to an \"ON\" clause with the equalities explicitly stated. So no, I\n didn't try that.\n\n The matching that occurred is exactly what I wanted. I\n just didn't want the performance impact.",
"msg_date": "Sun, 30 May 2021 21:23:43 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n> I thought that having a \"USING\" clause, was semantically equivalent to \n> an \"ON\" clause with the equalities explicitly stated. So no, I didn't \n> try that.\n\nUSING is not that, or at least not only that ... read the manual.\n\nI'm wondering if what you saw is some side-effect of the aliasing\nthat USING does.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 00:44:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-30 21:44, Tom Lane wrote:\n> \"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n>> I thought that having a \"USING\" clause, was semantically equivalent to\n>> an \"ON\" clause with the equalities explicitly stated. So no, I didn't\n>> try that.\n> USING is not that, or at least not only that ... read the manual.\n>\n> I'm wondering if what you saw is some side-effect of the aliasing\n> that USING does.\n>\n> \t\t\tregards, tom lane\n\n /|USING ( /|join_column|/ [, ...] )|/\n\n /A clause of the form //|USING ( a, b, ... )|//is shorthand for\n //|ON left_table.a = right_table.a AND left_table.b =\n right_table.b ...|//. Also, //|USING|//implies that only one of\n each pair of equivalent columns will be included in the join\n output, not both./\n\n /\n /\n\n /The //|USING|//clause is a shorthand that allows you to take\n advantage of the specific situation where both sides of the join use\n the same name for the joining column(s). It takes a comma-separated\n list of the shared column names and forms a join condition that\n includes an equality comparison for each one. For example, joining\n //|T1|//and //|T2|//with //|USING (a, b)|//produces the join\n condition //|ON /|T1|/.a = /|T2|/.a AND /|T1|/.b = /|T2|/.b|//./\n\n /Furthermore, the output of //|JOIN USING|//suppresses redundant\n columns: there is no need to print both of the matched columns,\n since they must have equal values. While //|JOIN ON|//produces all\n columns from //|T1|//followed by all columns from //|T2|//, //|JOIN\n USING|//produces one output column for each of the listed column\n pairs (in the listed order), followed by any remaining columns from\n //|T1|//, followed by any remaining columns from //|T2|//./\n\n /Finally, //|NATURAL|//is a shorthand form of //|USING|//: it forms\n a //|USING|//list consisting of all column names that appear in both\n input tables. As with //|USING|//, these columns appear only once in\n the output table. 
If there are no common column names, //|NATURAL\n JOIN|//behaves like //|JOIN ... ON TRUE|//, producing a\n cross-product join./\n\n\nI get that it's like NATURAL, in that only one column is included. Is \nthere some other side-effect? Is the fact that I was using a LEFT JOIN, \nrelevant? Is what I was doing, unusual (or risky)?\n\n\n\n\n\n\n\n\n\nOn 2021-05-30 21:44, Tom Lane wrote:\n\n\n\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n\n\nI thought that having a \"USING\" clause, was semantically equivalent to \nan \"ON\" clause with the equalities explicitly stated. So no, I didn't \ntry that.\n\n\n\nUSING is not that, or at least not only that ... read the manual.\n\nI'm wondering if what you saw is some side-effect of the aliasing\nthat USING does.\n\n\t\t\tregards, tom lane\n\n\n\n\n\nUSING ( join_column [,\n ...] )\n\nA clause of the form USING\n ( a, b, ... ) is shorthand for ON left_table.a = right_table.a AND\n left_table.b = right_table.b .... Also, USING implies that only\n one of each pair of equivalent columns will be included in\n the join output, not both.\n\n\n\n\nThe USING clause\n is a shorthand that allows you to take advantage of the\n specific situation where both sides of the join use the same\n name for the joining column(s). It takes a comma-separated\n list of the shared column names and forms a join condition\n that includes an equality comparison for each one. For\n example, joining T1\n and T2 with\n USING (a, b) produces\n the join condition ON T1.a = T2.a AND T1.b = T2.b.\nFurthermore, the output of JOIN\n USING suppresses redundant columns: there is\n no need to print both of the matched columns, since they must\n have equal values. 
While JOIN ON\n produces all columns from T1\n followed by all columns from T2,\n JOIN USING produces\n one output column for each of the listed column pairs (in the\n listed order), followed by any remaining columns from T1, followed by any\n remaining columns from T2.\nFinally, NATURAL\n is a shorthand form of USING:\n it forms a USING\n list consisting of all column names that appear in both input\n tables. As with USING,\n these columns appear only once in the output table. If there\n are no common column names, NATURAL\n JOIN behaves like JOIN ... ON TRUE, producing a\n cross-product join.\n\n\n\n I get that it's like NATURAL, in that only one column is included. \n Is there some other side-effect? Is the fact that I was using a\n LEFT JOIN, relevant? Is what I was doing, unusual (or risky)?",
"msg_date": "Sun, 30 May 2021 22:24:00 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "> Here's the FROM clause that bit me:\n>\n> FROM lic_en\n> JOIN govt_region USING (territory_id, country_id)\n> LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n> LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\nI'm guessing that there's a dependency/correlation between\nterritory/country/county, and that's probably related to a misestimate causing\na bad plan.\n\n> The first two JOINs are not the problem, & are in fact retained in my\n> solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is\n> actually matched with the corresponding field from the \"zip_code\" VIEW. Works\n> fine, if you don't mind the performance impact in v10 & above. It has now\n> been rewritten, to be a sub-query for an output field. Voila ! Back to\n> sub-second query times.\n\nWhat version of 9.6.X were you upgrading *from* ?\n\nv9.6 added selectivity estimates based on FKs, so it's not surprising if there\nwas a plan change migrating *to* v9.6.\n\n...but there were a number of fixes to that, and it seems possible the plans\nchanged between 9.6.0 and 9.6.22, and anything backpatched to 9.X would also be\nin v10+. So you might've gotten the bad plan on 9.6.22, also.\n\nI found these commits that might be relevant.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1f184426b\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=7fa93eec4\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=770671062\n\nad1c36b07 wasn't backpatched and probably not relevant to your issue.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 31 May 2021 23:16:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-05-31 21:16, Justin Pryzby wrote:\n>> Here's the FROM clause that bit me:\n>>\n>> FROM lic_en\n>> JOIN govt_region USING (territory_id, country_id)\n>> LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n>> LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n> I'm guessing that there's a dependency/correlation between territory/country/county, and that's probably related to a misestimate causing a bad plan.\n>\n>> The first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n> What version of 9.6.X were you upgrading *from* ?\n>\n> v9.6 added selectivity estimates based on FKs, so it's not surprising if there was a plan change migrating *to* v9.6.\n\nI originally upgraded from 9.6.20 to v12.6. When that (otherwise \nsuccessful) upgrade had performance problems, I upgraded the v9.6.20 \ncopy to v9.6.21, & tried again, with the same result.\n\nInterestingly, on v13.2 I have now run into another (similar) \nperformance issue. I've solved it by setting the following to values I \nused with v9.x:\n\njoin_collapse_limit & from_collapse_limit = 16\n\ngeqo_threshold = 32\n\nI pretty sure I tried those settings (on v10 & above) with the earlier \nperformance problem, to no avail. 
However, I now wonder what would have \nbeen the result if I have doubled those values before re-architecting \nsome of my tables (moving from certain JOINs to specific sub-selects).\n\n\n\n\n\n\n\nOn 2021-05-31 21:16, Justin Pryzby\n wrote:\n\n\n\nHere's the FROM clause that bit me:\n\n FROM lic_en\n JOIN govt_region USING (territory_id, country_id)\n LEFT JOIN zip_code USING (territory_id, country_id, zip5)\n LEFT JOIN \"County\" USING (territory_id, country_id, fips_county);\n\n\n\nI'm guessing that there's a dependency/correlation between territory/country/county, and that's probably related to a misestimate causing a bad plan.\n\n\n\nThe first two JOINs are not the problem, & are in fact retained in my solution. The problem is the third JOIN, where \"fips_county\" from \"County\" is actually matched with the corresponding field from the \"zip_code\" VIEW. Works fine, if you don't mind the performance impact in v10 & above. It has now been rewritten, to be a sub-query for an output field. Voila ! Back to sub-second query times.\n\n\n\nWhat version of 9.6.X were you upgrading *from* ?\n\nv9.6 added selectivity estimates based on FKs, so it's not surprising if there was a plan change migrating *to* v9.6.\n\n\n\n I originally upgraded from 9.6.20 to v12.6. When that (otherwise\n successful) upgrade had performance problems, I upgraded the v9.6.20\n copy to v9.6.21, & tried again, with the same result.\n\n Interestingly, on v13.2 I have now run into another (similar)\n performance issue. I've solved it by setting the following to\n values I used with v9.x:\n\n join_collapse_limit & from_collapse_limit = 16\n\n geqo_threshold = 32\n\n I pretty sure I tried those settings (on v10 & above) with the\n earlier performance problem, to no avail. However, I now wonder\n what would have been the result if I have doubled those values\n before re-architecting some of my tables (moving from certain JOINs\n to specific sub-selects).",
"msg_date": "Tue, 1 Jun 2021 10:44:54 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon \nRDS, I wondered, why I am paying AWS for an RDS-based version, when I \nwas forced by their POLICY to go through the effort I did? I'm not one \nof the crowd who thinks, \"It works OK, so I don't update anything\". I'm \nusually one who is VERY quick to apply upgrades, especially when there \nis a fallback ability. However, the initial failure to successfully \nupgrade from v9.6 to any more recent major version, put me in a \ntime-limited box that I really don't like to be in.\n\nIf I'm going to have to deal with maintenance issues, like I easily did \nwhen I ran native PostgreSQL, why not go back to that? So, I've ported \nmy database back to native PostgreSQL v13.3 on an AWS EC2 instance. It \nlooks like I will save about 40% of the cost, which is in accord with \nthis article: https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n\nWhy am I mentioning this here? Because there were minor issues & \nbenefits in porting back to native PostgreSQL, that may be of interest here:\n\nFirst, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a \nsuperuser, & it tries to dump protected stuff. If there is a way around \nthat, I'd like to know it, even though it's not an issue now. pg_dump \nworks OK, but of course you don't get the roles dumped. Fortunately, I \nkept script files that have all the database setup, so I just ran them \nto create all the relationships, & then used the pg_dump output. Worked \nflawlessly.\n\nSecond, I noticed that the compressed (\"-Z6\" level) output from pg-dump \nis less than one-tenth of the disk size of the restored database. \nThat's LOT less than the size of the backups that AWS was charging me for.\n\nThird, once you increase your disk size in RDS, you can never decrease \nit, unless you go through the above port to a brand new instance (RDS or \nnative PostgreSQL). 
RDS backups must be restored to the same size \nvolume (or larger) that they were created for. A VACUUM FULL ANALYZE on \nRDS requires more than doubling the required disk size (I tried with \nless several times). This is easily dealt with on an EC2 Linux \ninstance, requiring only a couple minutes of DB downtime.\n\nFourth, while AWS is forcing customers to upgrade from v9.6, but the \nonly PostgreSQL client tools that AWS currently provides in their \nstandard repository are for v9.6!!! That means when you want to use any \nof their client tools on newer versions, you have problems. psql gives \nyou a warning on each startup, & pg_dump simply (& correctly) won't back \nup a newer DB. If you add their \"optional\" repository, you can use \nv12.6 tools, but v13.3 is only available by hand-editing the repo file \nto include v13 (which I did). For this level of support, I pay extra? \nI don't think so.\n\nFinally, the AWS support forums are effectively \"write-only.\" Most of \nthe questions asked there, never get ANY response from other users, & \nAWS only uses them to post announcements, from what I can tell. I got a \nLOT more help here in this thread, & last I looked, I don't pay anyone here.\n\n\n\n\n\n\n Having now successfully migrated from PostgreSQL v9.6 to v13.2 in\n Amazon RDS, I wondered, why I am paying AWS for an RDS-based\n version, when I was forced by their POLICY to go through the effort\n I did? I'm not one of the crowd who thinks, \"It works OK, so I\n don't update anything\". I'm usually one who is VERY quick to apply\n upgrades, especially when there is a fallback ability. However, the\n initial failure to successfully upgrade from v9.6 to any more recent\n major version, put me in a time-limited box that I really don't like\n to be in.\n\n If I'm going to have to deal with maintenance issues, like I easily\n did when I ran native PostgreSQL, why not go back to that? 
So, I've\n ported my database back to native PostgreSQL v13.3 on an AWS EC2\n instance. It looks like I will save about 40% of the cost, which is\n in accord with this article: \n https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n\n Why am I mentioning this here? Because there were minor issues\n & benefits in porting back to native PostgreSQL, that may be of\n interest here:\n\n First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be\n a superuser, & it tries to dump protected stuff. If there is a\n way around that, I'd like to know it, even though it's not an issue\n now. pg_dump works OK, but of course you don't get the roles\n dumped. Fortunately, I kept script files that have all the database\n setup, so I just ran them to create all the relationships, &\n then used the pg_dump output. Worked flawlessly.\n\n Second, I noticed that the compressed (\"-Z6\" level) output from\n pg-dump is less than one-tenth of the disk size of the restored\n database. That's LOT less than the size of the backups that AWS was\n charging me for.\n\n Third, once you increase your disk size in RDS, you can never\n decrease it, unless you go through the above port to a brand new\n instance (RDS or native PostgreSQL). RDS backups must be restored\n to the same size volume (or larger) that they were created for. A\n VACUUM FULL ANALYZE on RDS requires more than doubling the required\n disk size (I tried with less several times). This is easily dealt\n with on an EC2 Linux instance, requiring only a couple minutes of DB\n downtime.\n\n Fourth, while AWS is forcing customers to upgrade from v9.6, but the\n only PostgreSQL client tools that AWS currently provides in their\n standard repository are for v9.6!!! That means when you want to use\n any of their client tools on newer versions, you have problems. \n psql gives you a warning on each startup, & pg_dump simply\n (& correctly) won't back up a newer DB. 
If you add their\n \"optional\" repository, you can use v12.6 tools, but v13.3 is only\n available by hand-editing the repo file to include v13 (which I\n did). For this level of support, I pay extra? I don't think so. \n\n Finally, the AWS support forums are effectively \"write-only.\" Most\n of the questions asked there, never get ANY response from other\n users, & AWS only uses them to post announcements, from what I\n can tell. I got a LOT more help here in this thread, & last I\n looked, I don't pay anyone here.",
"msg_date": "Wed, 9 Jun 2021 18:50:38 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in\n> Amazon RDS, I wondered, why I am paying AWS for an RDS-based version,\n> when I was forced by their POLICY to go through the effort I did? I'm\n> not one of the crowd who thinks, \"It works OK, so I don't update\n> anything\". I'm usually one who is VERY quick to apply upgrades,\n> especially when there is a fallback ability. However, the initial\n> failure to successfully upgrade from v9.6 to any more recent major\n> version, put me in a time-limited box that I really don't like to be in.\n>\n> If I'm going to have to deal with maintenance issues, like I easily\n> did when I ran native PostgreSQL, why not go back to that? So, I've\n> ported my database back to native PostgreSQL v13.3 on an AWS EC2\n> instance. It looks like I will save about 40% of the cost, which is\n> in accord with this article: \n> https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n>\n> Why am I mentioning this here? Because there were minor issues &\n> benefits in porting back to native PostgreSQL, that may be of interest\n> here:\n>\n> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a\n> superuser, & it tries to dump protected stuff. If there is a way\n> around that, I'd like to know it, even though it's not an issue now. \n> pg_dump works OK, but of course you don't get the roles dumped. \n> Fortunately, I kept script files that have all the database setup, so\n> I just ran them to create all the relationships, & then used the\n> pg_dump output. Worked flawlessly.\n\n\n\nThis was added in release 12 specifically with RDS in mind:\n\n\n pg_dumpall --exclude-database\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 10 Jun 2021 06:29:12 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 6:50 PM Dean Gibson (DB Administrator) <\npostgresql@mailpen.com> wrote:\n\n> Having now successfully migrated from PostgreSQL v9.6 to v13.2 in Amazon\n> RDS, I wondered, why I am paying AWS for an RDS-based version, when I was\n> forced by their POLICY to go through the effort I did? I'm not one of the\n> crowd who thinks, \"It works OK, so I don't update anything\". I'm usually\n> one who is VERY quick to apply upgrades, especially when there is a\n> fallback ability. However, the initial failure to successfully upgrade\n> from v9.6 to any more recent major version, put me in a time-limited box\n> that I really don't like to be in.\n>\n\nRight, and had you deployed on EC2 you would not have been forced to\nupgrade. This is an argument against RDS for this particular problem.\n\n\n>\n> If I'm going to have to deal with maintenance issues, like I easily did\n> when I ran native PostgreSQL, why not go back to that? So, I've ported my\n> database back to native PostgreSQL v13.3 on an AWS EC2 instance. It looks\n> like I will save about 40% of the cost, which is in accord with this\n> article: https://www.iobasis.com/Strategies-to-reduce-Amazon-RDS-Costs/\n>\n\nThat is correct, it is quite a bit less expensive to host your own EC2\ninstances. Where it is not cheaper is when you need to easily configure\nbackups, take a snapshot, or bring up a replica. 
For those in the know,\nputting in some work upfront largely removes the burden that RDS corrects\nbut a lot of people who deploy RDS are *not* DBAs, or even Systems people.\nThey are front end developers.\n\nGlad to see you were able to work things out.\n\nJD\n\n-- \n\n   - Partner, Father, Explorer and Founder.\n   - Founder - https://commandprompt.com/ - 24x7x365 Postgres since 1997\n   - Founder and Co-Chair - https://postgresconf.org/\n   - Founder - https://postgresql.us - United States PostgreSQL\n   - Public speaker, published author, postgresql expert, and people\n   believer.\n   - Host - More than a refresh\n   <https://commandprompt.com/about/more-than-a-refresh/>: A podcast about\n   data and the people who wrangle it.",
"msg_date": "Thu, 10 Jun 2021 07:36:26 -0700",
"msg_from": "Joshua Drake <jd@commandprompt.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 03:29, Andrew Dunstan wrote:\n> On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n>> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. Worked flawlessly.\n> This was added in release 12 specifically with RDS in mind:\n>\n> pg_dumpall --exclude-database\n>\n> cheers, andrew\n\nI guess I don't understand what that option does:\n\n=>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\npg_dump: error: could not write to output file: No space left on device\npg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n\nI expected a tiny file, not 3.5GB. \"MailPen\" is the only database \n(other than what's pre-installed). Do I need quotes on the command line?\n\n\n\n\n\n\n\nOn 2021-06-10 03:29, Andrew Dunstan\n wrote:\n\n\n\nOn 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n\n\nFirst, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. 
Worked flawlessly.\n\n\n\nThis was added in release 12 specifically with RDS in mind:\n\n pg_dumpall --exclude-database\n\ncheers, andrew\n\n\n I guess I don't understand what that option does:\n\n =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n pg_dump: error: could not write to output file: No space left on\n device\n pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n\n I expected a tiny file, not 3.5GB. \"MailPen\" is the only database\n (other than what's pre-installed). Do I need quotes on the command\n line?",
"msg_date": "Thu, 10 Jun 2021 09:07:52 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "Em qui., 10 de jun. de 2021 às 13:08, Dean Gibson (DB Administrator) <\npostgresql@mailpen.com> escreveu:\n\n> On 2021-06-10 03:29, Andrew Dunstan wrote:\n>\n> On 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n>\n> First, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. Worked flawlessly.\n>\n> This was added in release 12 specifically with RDS in mind:\n>\n> pg_dumpall --exclude-database\n>\n> cheers, andrew\n>\n>\n> I guess I don't understand what that option does:\n>\n> =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n> pg_dump: error: could not write to output file: No space left on device\n> pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n>\n> I expected a tiny file, not 3.5GB. \"MailPen\" is the only database (other\n> than what's pre-installed). Do I need quotes on the command line?\n>\nSee at:\nhttps://www.postgresql.org/docs/13/app-pg-dumpall.html\n\nYour cmd lacks =\n=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n\nregards,\nRanier Vilela\n\nEm qui., 10 de jun. de 2021 às 13:08, Dean Gibson (DB Administrator) <postgresql@mailpen.com> escreveu:\n\nOn 2021-06-10 03:29, Andrew Dunstan\n wrote:\n\n\nOn 6/9/21 9:50 PM, Dean Gibson (DB Administrator) wrote:\n\n\nFirst, pg_dumpall (v13.3) errors out, because on RDS, you cannot be a superuser, & it tries to dump protected stuff. If there is a way around that, I'd like to know it, even though it's not an issue now. pg_dump works OK, but of course you don't get the roles dumped. 
Fortunately, I kept script files that have all the database setup, so I just ran them to create all the relationships, & then used the pg_dump output. Worked flawlessly.\n\n\nThis was added in release 12 specifically with RDS in mind:\n\n pg_dumpall --exclude-database\n\ncheers, andrew\n\n\n I guess I don't understand what that option does:\n\n =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n pg_dump: error: could not write to output file: No space left on\n device\n pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n\n I expected a tiny file, not 3.5GB. \"MailPen\" is the only database\n (other than what's pre-installed). Do I need quotes on the command\n line?See at:https://www.postgresql.org/docs/13/app-pg-dumpall.htmlYour cmd lacks =\n=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql regards,Ranier Vilela",
"msg_date": "Thu, 10 Jun 2021 13:54:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 09:54, Ranier Vilela wrote:\n> Em qui., 10 de jun. de 2021 às 13:08, Dean Gibson (DB Administrator) \n> <postgresql@mailpen.com <mailto:postgresql@mailpen.com>> escreveu:\n>\n>\n> I guess I don't understand what that option does:\n>\n> =>pg_dumpall -U Admin --exclude-database MailPen >zzz.sql\n> pg_dump: error: could not write to output file: No space left on\n> device\n> pg_dumpall: error: pg_dump failed on database \"MailPen\", exiting\n>\n> I expected a tiny file, not 3.5GB. \"MailPen\" is the only database\n> (other than what's pre-installed). Do I need quotes on the\n> command line?\n>\n> See at:\n> https://www.postgresql.org/docs/13/app-pg-dumpall.html \n> <https://www.postgresql.org/docs/13/app-pg-dumpall.html>\n>\n> Your cmd lacks =\n> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>\n> regards, Ranier Vilela\n\nI read that before posting, but missed that. Old command line patterns \ndie hard!\n\nHowever, the result was the same: 3.5GB before running out of space.\n\n\n\n\n\n\n\nOn 2021-06-10 09:54, Ranier Vilela\n wrote:\n\n\n\n\n\nEm qui., 10 de jun. de 2021\n às 13:08, Dean Gibson (DB Administrator) <postgresql@mailpen.com>\n escreveu:\n\n\n\n I guess I don't understand what that option does:\n\n =>pg_dumpall -U Admin --exclude-database MailPen\n >zzz.sql\n pg_dump: error: could not write to output file: No space\n left on device\n pg_dumpall: error: pg_dump failed on database \"MailPen\",\n exiting\n\n I expected a tiny file, not 3.5GB. \"MailPen\" is the only\n database (other than what's pre-installed). Do I need\n quotes on the command line?\n\n\nSee at:\nhttps://www.postgresql.org/docs/13/app-pg-dumpall.html\n\n\nYour cmd lacks =\n\n =>pg_dumpall -U Admin --exclude-database=MailPen\n >zzz.sql \n\n\n\nregards, Ranier Vilela\n\n\n\n\n\n I read that before posting, but missed that. Old command line\n patterns die hard!\n\n However, the result was the same: 3.5GB before running out of\n space.",
"msg_date": "Thu, 10 Jun 2021 10:43:13 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n> On 2021-06-10 09:54, Ranier Vilela wrote:\n>> Your cmd lacks =\n>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n\n> I read that before posting, but missed that. Old command line patterns \n> die hard!\n> However, the result was the same: 3.5GB before running out of space.\n\n[ experiments... ] Looks like you gotta do it like this:\n\n\tpg_dumpall '--exclude-database=\"MailPen\"' ...\n\nThis surprises me, as I thought it was project policy not to\ncase-fold command-line arguments (precisely because you end\nup needing weird quoting to prevent that).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Jun 2021 14:00:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/10/21 2:00 PM, Tom Lane wrote:\n> \"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>> Your cmd lacks =\n>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>> I read that before posting, but missed that. Old command line patterns \n>> die hard!\n>> However, the result was the same: 3.5GB before running out of space.\n> [ experiments... ] Looks like you gotta do it like this:\n>\n> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>\n> This surprises me, as I thought it was project policy not to\n> case-fold command-line arguments (precisely because you end\n> up needing weird quoting to prevent that).\n>\n> \t\t\t\n\n\n\nOuch. That looks like a plain old bug. Let's fix it. IIRC I just used\nthe same logic that we use for pg_dump's --exclude-* options, so we need\nto check if they have similar issues.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 10 Jun 2021 14:23:39 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "On 2021-06-10 11:23, Andrew Dunstan wrote:\n> On 6/10/21 2:00 PM, Tom Lane wrote:\n>> \"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n>>> ... Do I need quotes on the command line?\n>>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>>> Your cmd lacks =\n>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>> I read [the manual] before posting, but missed that. Old command line patterns die hard!\n>>> However, the result was the same: 3.5GB before running out of space.\n>> [ experiments... ] Looks like you gotta do it like this:\n>>\n>> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>>\n>> This surprises me, as I thought it was project policy not to case-fold command-line arguments (precisely because you end up needing weird quoting to prevent that).\t\n> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used the same logic that we use for pg_dump's --exclude-* options, so we need to check if they have similar issues.\n>\n> cheers, andrew\n\nThat works! I thought it was a quoting/case issue! I was next going to \ntry single quotes just outside double quotes, & that works as well (& is \na bit more natural):\n\npg_dumpall -U Admin --exclude-database='\"MailPen\"' >zzz.sql\n\nUsing mixed case has bitten me before, but I am not deterred! I run \nphpBB 3.0.14 (very old version) because upgrades to more current \nversions fail on the mixed case of the DB name, as well as the use of \nSCHEMAs to isolate the message board from the rest of the data. Yes, I \nreported it years ago.\n\nI use lower-case for column, VIEW, & function names; mixed (camel) case \nfor table, schema, & database names; & upper-case for SQL keywords. It \nhelps readability (as does murdering a couple semicolons in the prior \nsentence).\n\n\n\n\n\n\n\nOn 2021-06-10 11:23, Andrew Dunstan\n wrote:\n\n\n\nOn 6/10/21 2:00 PM, Tom Lane wrote:\n\n\n\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n\n\n... 
Do I need quotes on the command line?\nOn 2021-06-10 09:54, Ranier Vilela wrote:\n\n\nYour cmd lacks =\n=>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n\n\nI read [the manual] before posting, but missed that. Old command line patterns die hard!\nHowever, the result was the same: 3.5GB before running out of space.\n\n\n[ experiments... ] Looks like you gotta do it like this:\n\n\tpg_dumpall '--exclude-database=\"MailPen\"' ...\n\nThis surprises me, as I thought it was project policy not to case-fold command-line arguments (precisely because you end up needing weird quoting to prevent that).\t\n\n\n\nOuch. That looks like a plain old bug. Let's fix it. IIRC I just used the same logic that we use for pg_dump's --exclude-* options, so we need to check if they have similar issues.\n\ncheers, andrew\n\n\n\n That works! I thought it was a quoting/case issue! I was next\n going to try single quotes just outside double quotes, & that\n works as well (& is a bit more natural):\n\n pg_dumpall -U Admin --exclude-database='\"MailPen\"' >zzz.sql\n\n Using mixed case has bitten me before, but I am not deterred! I run\n phpBB 3.0.14 (very old version) because upgrades to more current\n versions fail on the mixed case of the DB name, as well as the use\n of SCHEMAs to isolate the message board from the rest of the data. \n Yes, I reported it years ago.\n\n I use lower-case for column, VIEW, & function names; mixed\n (camel) case for table, schema, & database names; &\n upper-case for SQL keywords. It helps readability (as does\n murdering a couple semicolons in the prior sentence).",
"msg_date": "Thu, 10 Jun 2021 12:29:05 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "On 2021-06-10 10:43, Dean Gibson (DB Administrator) wrote:\n> On 2021-06-10 09:54, Ranier Vilela wrote:\n>> On Thu, Jun 10, 2021 at 13:08, Dean Gibson (DB Administrator) \n>> <postgresql@mailpen.com> wrote:\n>>\n>>\n>> ... Do I need quotes on the command line?\n>>\n>> See at:\n>> https://www.postgresql.org/docs/13/app-pg-dumpall.html\n>>\n>> Your cmd lacks =\n>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>\n>> regards, Ranier Vilela\n>\n> ...\n>\n> However, the result was the same: 3.5GB before running out of space.\n>\n\nIt turns out the \"=\" is not needed. The double-quoting is (this works):\n\npg_dumpall -U Admin --exclude-database '\"MailPen\"' >zzz.sql",
"msg_date": "Thu, 10 Jun 2021 14:46:27 -0700",
"msg_from": "\"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com>",
"msg_from_op": true,
"msg_subject": "Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
"msg_contents": "\nOn 6/10/21 2:23 PM, Andrew Dunstan wrote:\n> On 6/10/21 2:00 PM, Tom Lane wrote:\n>> \"Dean Gibson (DB Administrator)\" <postgresql@mailpen.com> writes:\n>>> On 2021-06-10 09:54, Ranier Vilela wrote:\n>>>> Your cmd lacks =\n>>>> =>pg_dumpall -U Admin --exclude-database=MailPen >zzz.sql\n>>> I read that before posting, but missed that. Old command line patterns \n>>> die hard!\n>>> However, the result was the same: 3.5GB before running out of space.\n>> [ experiments... ] Looks like you gotta do it like this:\n>>\n>> \tpg_dumpall '--exclude-database=\"MailPen\"' ...\n>>\n>> This surprises me, as I thought it was project policy not to\n>> case-fold command-line arguments (precisely because you end\n>> up needing weird quoting to prevent that).\n>>\n>> \t\t\t\n>\n>\n> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n> the same logic that we use for pg_dump's --exclude-* options, so we need\n> to check if they have similar issues.\n>\n>\n\n\n\nPeter Eisentraut has pointed out to me that this is documented, albeit a\nbit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\n\nNevertheless, it's a bit of a POLA violation as we've seen above, and\nI'd like to get it fixed, if there's agreement, both for this pg_dumpall\noption and for pg_dump's pattern matching options.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:21:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "pg_dumpall --exclude-database case folding, was Re: AWS forcing PG\n upgrade from v9.6 a disaster"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/10/21 2:23 PM, Andrew Dunstan wrote:\n>> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n>> the same logic that we use for pg_dump's --exclude-* options, so we need\n>> to check if they have similar issues.\n\n> Peter Eisentraut has pointed out to me that this is documented, albeit a\n> bit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\nHmm.\n\n> Nevertheless, it's a bit of a POLA violation as we've seen above, and\n> I'd like to get it fixed, if there's agreement, both for this pg_dumpall\n> option and for pg_dump's pattern matching options.\n\n+1, but the -performance list isn't really where to hold that discussion.\nPlease start a thread on -hackers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:32:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_dumpall --exclude-database case folding,\n was Re: AWS forcing PG upgrade from v9.6 a disaster"
},
{
    "msg_contents": "\n[discussion transferred from pgsql-performance]\n\nSummary: pg_dumpall and pg_dump fold non-quoted commandline patterns to\nlower case\n\nTom Lane writes:\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n> On 6/10/21 2:23 PM, Andrew Dunstan wrote:\n>> Ouch. That looks like a plain old bug. Let's fix it. IIRC I just used\n>> the same logic that we use for pg_dump's --exclude-* options, so we need\n>> to check if they have similar issues.\n\n> Peter Eisentraut has pointed out to me that this is documented, albeit a\n> bit obscurely for pg_dumpall. But it is visible on the pg_dump page.\n\nHmm.\n\n> Nevertheless, it's a bit of a POLA violation as we've seen above, and\n> I'd like to get it fixed, if there's agreement, both for this pg_dumpall\n> option and for pg_dump's pattern matching options.\n\n+1, but the -performance list isn't really where to hold that discussion.\nPlease start a thread on -hackers.\n\nregards, tom lane\n\n\n\n",
"msg_date": "Mon, 14 Jun 2021 09:46:06 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "pg_dumpall --exclude-database case folding"
}
] |
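A minimal sketch of the case-folding behavior this thread is about. This is a simplification for illustration only (the real matching lives in pg_dump's pattern-processing code, and `fold_pattern` below is a hypothetical helper): an unquoted name on the command line is folded to lower case, so `MailPen` becomes `mailpen` and matches no database, while the double-quoted form keeps its spelling.

```python
def fold_pattern(name: str) -> str:
    """Fold a command-line object name the way the thread describes
    (hypothetical simplification): a double-quoted name keeps its
    exact spelling, an unquoted one is folded to lower case."""
    if len(name) >= 2 and name[0] == '"' and name[-1] == '"':
        return name[1:-1]          # quoted: strip quotes, preserve case
    return name.lower()            # unquoted: case-folded, like SQL identifiers

# Unquoted MailPen folds to "mailpen", which matches no database,
# so pg_dumpall silently dumps everything; the quoted form matches.
print(fold_pattern('MailPen'))     # mailpen
print(fold_pattern('"MailPen"'))   # MailPen
```

This is why the shell-level quoting `--exclude-database='"MailPen"'` works: the shell strips the single quotes and the double quotes survive into the pattern.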
[
{
"msg_contents": "The accounting used by ANALYZE to count dead tuples in\nacquire_sample_rows() (actually in heapam_scan_analyze_next_tuple()\nthese days) makes some dubious assumptions about how it should count\ndead tuples. This is something that I discussed with Masahiko in the\ncontext of our Postgres 14 work on VACUUM, which ultimately led to\nbetter documentation of the issues (see commit 7136bf34). But I want\nto talk about it again now. This is not a new issue.\n\nThe ANALYZE dead tuple accounting takes a 100% quantitative approach\n-- it is entirely unconcerned about qualitative distinctions about the\nnumber of dead tuples per logical row. Sometimes that doesn't matter,\nbut there are many important cases where it clearly is important. I'll\nshow one such case now. This is a case where the system frequently\nlaunches autovacuum workers that really never manage to do truly\nuseful work:\n\n$ pgbench -i -s 50 -F 80\n ...\n$ pgbench -s 50 -j 4 -c 32 -M prepared -T 300 --rate=15000\n ...\n\nI've made the heap fill factor 80 (with -F). I've also set both\nautovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor to\n0.02 here, which is aggressive but still basically reasonable. I've\nenabled autovacuum logging so we can see exactly what's going on with\nautovacuum when pgbench runs -- that's the interesting part.\n\nThe log output shows that an autovacuum worker was launched and ran\nVACUUM against pgbench_accounts on 11 separate occasions during the 5\nminute pgbench benchmark. All 11 autovacuum log reports show the\ndetails are virtually the same in each case. 
Here is the 11th and\nfinal output concerning the accounts table (I could have used any of\nthe other 10 just as easily):\n\np 593300/2021-05-28 16:16:47 PDT LOG: automatic vacuum of table\n\"regression.public.pgbench_accounts\": index scans: 0\n pages: 0 removed, 102041 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 100300 removed, 5000000 remain, 0 are dead but not yet\nremovable, oldest xmin: 7269905\n buffer usage: 204166 hits, 0 misses, 3586 dirtied\n index scan not needed: 0 pages from table (0.00% of total) had 0\ndead item identifiers removed\n avg read rate: 0.000 MB/s, avg write rate: 11.250 MB/s\n I/O Timings:\n system usage: CPU: user: 2.31 s, system: 0.02 s, elapsed: 2.49 s\n WAL usage: 200471 records, 31163 full page images, 44115415 bytes\n\nNotice that we have 0 LP_DEAD items left behind by pruning -- either\nopportunistic pruning or pruning by VACUUM. Pruning by VACUUM inside\nlazy_scan_prune() does \"remove 100300 dead tuples\", so arguably VACUUM\ndoes some useful work. Though I would argue that we don't -- I think\nthat this is a total waste of cycles. This particular quantitative\nmeasure has little to do with anything that matters to the workload.\nThis workload shouldn't ever need to VACUUM the accounts table (except\nwhen the time comes to freeze its tuples) -- the backends can clean up\nafter themselves opportunistically, without ever faltering (i.e.\nwithout ever failing to keep a HOT chain on the same page).\n\nThe picture we see here seems contradictory, even if you think about\nthe problem in exactly the same way as vacuumlazy.c thinks about the\nproblem. On the one hand autovacuum workers are launched because\nopportunistic cleanup techniques (mainly opportunistic heap page\npruning) don't seem to be able to keep up with the workload. 
On the\nother hand, when VACUUM actually runs we consistently see 0 LP_DEAD\nstub items in heap pages, which is generally an excellent indicator\nthat opportunistic HOT pruning is in fact working perfectly. Only one\nof those statements can be correct.\n\nThe absurdity of autovacuum's behavior with this workload becomes\nundeniable once you tweak just one detail and see what changes. For\nexample, I find that if I repeat the same process but increase\nautovacuum_vacuum_scale_factor from 0.02 to 0.05, everything changes.\nInstead of getting 11 autovacuum runs against pgbench_accounts I get 0\nautovacuum runs! This effect is stable, and won't change if the\nworkload runs for more than 5 minutes. Apparently vacuuming less\naggressively results in less need for vacuuming!\n\nI believe that there is a sharp discontinuity here -- a crossover\npoint for autovacuum_vacuum_scale_factor at which the behavior of the\nsystem *flips*, from very *frequent* autovacuum runs against the\naccounts table, to *zero* runs. This seems like a real problem to me.\nI bet it has real consequences that are hard to model. In any case\nthis simple model seems convincing enough. The dead tuple accounting\nmakes it much harder to set autovacuum_vacuum_scale_factor very\naggressively (say 0.02 or so) -- nobody is going to want to do that as\nlong as it makes the system launch useless autovacuum workers that\nnever end up doing useful work in a subset of tables. Users are\ncurrently missing out on the benefit of very aggressive autovacuums\nagainst tables where it truly makes sense.\n\nThe code in acquire_sample_rows()/heapam_scan_analyze_next_tuple()\ncounts tuples/line pointers on a physical heap page. Perhaps it should\n\"operate against an imaginary version of the page\" instead -- the page\nas it would be just *after* lazy_scan_prune() is called for the page\nduring a future VACUUM. 
More concretely, if there is a HOT chain then\nacquire_sample_rows() could perhaps either count 0 or 1 or the chain's\ntuples as dead tuples. The code might be taught to recognize that a\ntotal absence of LP_DEAD stubs items on the heap page strongly\nindicates that the workload can manage HOT chains via opportunistic\npruning.\n\nI'm just speculating about what alternative design might fix the issue\nat this point. In any case I contend that the current behavior gets\ntoo much wrong, and should be fixed in Postgres 15.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 28 May 2021 17:27:08 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "ANALYZE's dead tuple accounting can get confused"
}
] |
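The sharp crossover Peter describes follows directly from autovacuum's trigger condition: a worker is launched once n_dead_tup exceeds autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples (the test applied in autovacuum.c's relation_needs_vacanalyze). A sketch using roughly the numbers from the log output above (`autovacuum_triggers` is a hypothetical helper; 50 is the default autovacuum_vacuum_threshold):

```python
def autovacuum_triggers(dead_tuples: int, reltuples: int,
                        scale_factor: float, threshold: int = 50) -> bool:
    """Autovacuum launches a worker for a table once its dead-tuple
    count exceeds threshold + scale_factor * reltuples."""
    return dead_tuples > threshold + scale_factor * reltuples

# ~5M live tuples in pgbench_accounts, ~100300 "dead" tuples counted
# between autovacuum runs (per the log report quoted above).
print(autovacuum_triggers(100300, 5_000_000, 0.02))  # True  -> repeated runs
print(autovacuum_triggers(100300, 5_000_000, 0.05))  # False -> zero runs
```

At scale factor 0.02 the cutoff is 100,050 dead tuples, just below the ~100,300 the workload accumulates, so autovacuum fires over and over; at 0.05 the cutoff jumps to 250,050 and it never fires, which is the discontinuity observed in the benchmark.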
[
{
    "msg_contents": "Hi:\n\nI'm always confused about the following codes.\n\nstatic void\ninitscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)\n{\n\tParallelBlockTableScanDesc bpscan = NULL;\n\tbool\t\tallow_strat;\n\tbool\t\tallow_sync;\n\n\t/*\n\t * Determine the number of blocks we have to scan.\n\t *\n\t * It is sufficient to do this once at scan start, since any tuples added\n\t * while the scan is in progress will be invisible to my snapshot anyway.\n\t * (That is not true when using a non-MVCC snapshot. However, we couldn't\n\t * guarantee to return tuples added after scan start anyway, since they\n\t * might go into pages we already scanned. To guarantee consistent\n\t * results for a non-MVCC snapshot, the caller must hold some higher-level\n\t * lock that ensures the interesting tuple(s) won't change.)\n\t */\n\tif (scan->rs_base.rs_parallel != NULL)\n\t{\n\t\tbpscan = (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel;\n\t\tscan->rs_nblocks = bpscan->phs_nblocks;\n\t}\n\telse\n\t\tscan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n\n..\n}\n\n1. Why do we need scan->rs_nblocks =\n   RelationGetNumberOfBlocks(scan->rs_base.rs_rd) for every rescan? This looks\n   mismatched with the comment along the code, and the comment looks\n   reasonable to me.\n2. For the heap scan after an IndexScan, we don't need to know the heap\n   size, so why do we need to get the nblocks for a bitmap heap scan? I think the\n   similarity between the two is that both of them can get a \"valid\"\n   CTID/page number from the index scan. To be clearer, I think for bitmap heap scan\n   we don't even need to check RelationGetNumberOfBlocks in initscan.\n3. If we need to check nblocks every time, why doesn't Parallel Scan change it\n   every time?\n\nShall we remove the RelationGetNumberOfBlocks call for bitmap heap scan entirely,\nand for the rescan of a normal heap scan?\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Sat, 29 May 2021 11:23:31 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Regarding the necessity of RelationGetNumberOfBlocks for every rescan\n / bitmap heap scan."
},
{
    "msg_contents": "> 1. Why do we need scan->rs_nblocks =\n> RelationGetNumberOfBlocks(scan->rs_base.rs_rd) for every rescan, which\n> looks\n> mismatched with the comments along the code. and the comments looks\n> reasonable to me.\n> 2. For the heap scan after an IndexScan, we don't need to know the heap\n> size, then why do we need to get the nblocks for bitmap heap scan? I\n> think the\n> similarity between the 2 is that both of them can get a \"valid\"\n> CTID/pages number\n> from index scan. To be clearer, I think for bitmap heap scan, we even\n> don't\n> need check the RelationGetNumberOfBlocks for the initscan.\n> 3. If we need to check nblocks every time, why Parallel Scan doesn't\n> change it\n> every time?\n>\n> shall we remove the RelationGetNumberOfBlocks for bitmap heap scan totally\n> and the rescan for normal heap scan?\n>\n\nyizhi.fzh@e18c07352 /u/y/g/postgres> git diff\ndiff --git a/src/backend/access/heap/heapam.c\nb/src/backend/access/heap/heapam.c\nindex 6ac07f2fda..6df096fb46 100644\n--- a/src/backend/access/heap/heapam.c\n+++ b/src/backend/access/heap/heapam.c\n@@ -246,7 +246,7 @@ initscan(HeapScanDesc scan, ScanKey key, bool\nkeep_startblock)\n bpscan = (ParallelBlockTableScanDesc)\nscan->rs_base.rs_parallel;\n scan->rs_nblocks = bpscan->phs_nblocks;\n }\n- else\n+ else if (scan->rs_nblocks == -1 && !(scan->rs_base.rs_flags &\nSO_TYPE_BITMAPSCAN))\n scan->rs_nblocks =\nRelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n\n /*\n@@ -1209,6 +1209,7 @@ heap_beginscan(Relation relation, Snapshot snapshot,\n scan->rs_base.rs_flags = flags;\n scan->rs_base.rs_parallel = parallel_scan;\n scan->rs_strategy = NULL; /* set in initscan */\n+ scan->rs_nblocks = -1;\n\n\nI did the above hacks, and all the existing tests passed.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Sun, 30 May 2021 12:51:40 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Regarding the necessity of RelationGetNumberOfBlocks for every\n rescan / bitmap heap scan."
},
{
    "msg_contents": "On Sat, May 29, 2021 at 11:23 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi:\n>\n> I'm always confused about the following codes.\n>\n> static void\n> initscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)\n> {\n> ParallelBlockTableScanDesc bpscan = NULL;\n> bool allow_strat;\n> bool allow_sync;\n>\n> /*\n> * Determine the number of blocks we have to scan.\n> *\n> * It is sufficient to do this once at scan start, since any tuples added\n> * while the scan is in progress will be invisible to my snapshot anyway.\n> * (That is not true when using a non-MVCC snapshot. However, we couldn't\n> * guarantee to return tuples added after scan start anyway, since they\n> * might go into pages we already scanned. To guarantee consistent\n> * results for a non-MVCC snapshot, the caller must hold some higher-level\n> * lock that ensures the interesting tuple(s) won't change.)\n> */\n> if (scan->rs_base.rs_parallel != NULL)\n> {\n> bpscan = (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel;\n> scan->rs_nblocks = bpscan->phs_nblocks;\n> }\n> else\n> scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n>\n>\n> ..\n> }\n>\n> 1. Why do we need scan->rs_nblocks =\n> RelationGetNumberOfBlocks(scan->rs_base.rs_rd) for every rescan, which\n> looks\n> mismatched with the comments along the code. and the comments looks\n> reasonable to me.\n>\n\nTo be more precise, this question can be expressed as whether the relation size\ncan change during rescan. We are sure that the size can be increased due to\nnew data, but we are also sure that the new data is useless for the query. So\nthat case looks OK. And for the file size decreasing, since we have a lock on\nthe relation, the file size would not be reduced either (I have verified\nthis logic on the online vacuum case; other cases should be similar).\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Mon, 31 May 2021 13:46:22 +0800",
"msg_from": "Andy Fan <zhihui.fan1213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Regarding the necessity of RelationGetNumberOfBlocks for every\n rescan / bitmap heap scan."
},
{
    "msg_contents": "+1. This would be a nice improvement: even though lseek is usually fast, it is a system call after all.\n\nBuzhen\n------------------------------------------------------------------\nFrom: Andy Fan <zhihui.fan1213@gmail.com>\nDate: 2021-05-31 13:46:22\nTo: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Regarding the necessity of RelationGetNumberOfBlocks for every rescan / bitmap heap scan.\n\nOn Sat, May 29, 2021 at 11:23 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\nHi:\n\nI'm always confused about the following codes.\n\nstatic void\ninitscan(HeapScanDesc scan, ScanKey key, bool keep_startblock)\n{\n ParallelBlockTableScanDesc bpscan = NULL;\n bool allow_strat;\n bool allow_sync;\n\n /*\n * Determine the number of blocks we have to scan.\n *\n * It is sufficient to do this once at scan start, since any tuples added\n * while the scan is in progress will be invisible to my snapshot anyway.\n * (That is not true when using a non-MVCC snapshot. However, we couldn't\n * guarantee to return tuples added after scan start anyway, since they\n * might go into pages we already scanned. To guarantee consistent\n * results for a non-MVCC snapshot, the caller must hold some higher-level\n * lock that ensures the interesting tuple(s) won't change.)\n */\n if (scan->rs_base.rs_parallel != NULL)\n {\n bpscan = (ParallelBlockTableScanDesc) scan->rs_base.rs_parallel;\n scan->rs_nblocks = bpscan->phs_nblocks;\n }\n else\n scan->rs_nblocks = RelationGetNumberOfBlocks(scan->rs_base.rs_rd);\n\n\n..\n}\n\n1. Why do we need scan->rs_nblocks =\n RelationGetNumberOfBlocks(scan->rs_base.rs_rd) for every rescan, which looks\n mismatched with the comments along the code. and the comments looks\n reasonable to me.\n\nTo be more precise, this question can be expressed as whether the relation size\ncan change during rescan. We are sure that the size can be increased due to\nnew data, but we are also sure that the new data is useless for the query. So\nthat case looks OK. And for the file size decreasing, since we have a lock on\nthe relation, the file size would not be reduced either (I have verified\nthis logic on the online vacuum case; other cases should be similar).\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)",
"msg_date": "Mon, 31 May 2021 13:59:27 +0800",
    "msg_from": "\"陈佳昕(步真)\" <buzhen.cjx@alibaba-inc.com>",
"msg_from_op": false,
    "msg_subject": "Reply: Re: Regarding the necessity of RelationGetNumberOfBlocks for every rescan / bitmap heap scan."
}
] |
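Andy's diff above amounts to memoizing the block count behind a -1 sentinel, so that only the first non-bitmap scan pays for the size lookup (the lseek Buzhen mentions) and rescans reuse the cached value. A standalone sketch of the idea (all names here are hypothetical, modeled loosely on the patched heapam.c, not actual PostgreSQL APIs):

```python
UNSET = -1  # sentinel, like rs_nblocks = -1 in the patch

class FakeRelation:
    """Stand-in for a relation; counts simulated lseek() size lookups."""
    def __init__(self, nblocks: int):
        self.nblocks = nblocks
        self.lookups = 0

    def number_of_blocks(self) -> int:
        self.lookups += 1          # each call models one system call
        return self.nblocks

class ScanState:
    """Stand-in for HeapScanDesc with the lazily cached block count."""
    def __init__(self, relation: FakeRelation):
        self.relation = relation
        self.rs_nblocks = UNSET

    def initscan(self, is_bitmap_scan: bool = False) -> int:
        # Only look up the size once, and never for a bitmap heap scan,
        # which gets valid CTIDs from the index and doesn't need it.
        if self.rs_nblocks == UNSET and not is_bitmap_scan:
            self.rs_nblocks = self.relation.number_of_blocks()
        return self.rs_nblocks

rel = FakeRelation(128)
scan = ScanState(rel)
scan.initscan()            # first scan: one size lookup
scan.initscan()            # rescan: cached, no extra lookup
print(rel.lookups)         # 1
```

The open question in the thread — whether the relation can shrink between rescans while the scan holds its lock — is exactly what decides whether this caching is safe.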
[
{
"msg_contents": "Hi,\n\nI felt inclusion of alias types regpublication and regsubscription will\nhelp the logical replication users. This will also help in [1].\nThe alias types allow simplified lookup of publication oid values for\nobjects. For example, to examine the pg_publication_rel rows, one could\nwrite:\nSELECT prpubid::regpublication, prrelid::regclass FROM pg_publication_rel;\n\nrather than:\nSELECT p.pubname, prrelid::regclass FROM pg_publication_rel pr,\npg_publication p WHERE pr.prpubid = p.oid;\n\nSimilarly in case of subscription:\nFor example, to examine the pg_subscription_rel rows, one could write:\nSELECT srsubid::regsubscription, srrelid::regclass FROM pg_subscription_rel;\n\nrather than:\nSELECT s.subname,srsubid::regclass FROM pg_subscription_rel sr,\npg_subscription s where sr.srsubid = s.oid;\n\nAttached patch has the changes for the same.\nThoughts?\n\n[1] -\nhttps://www.postgresql.org/message-id/flat/CALDaNm0OANxuJ6RXqwZsM1MSY4s19nuH3734j4a72etDwvBETQ%40mail.gmail.com\n\nRegards,\nVignesh",
"msg_date": "Sat, 29 May 2021 20:59:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Addition of alias types regpublication and regsubscription"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> I felt inclusion of alias types regpublication and regsubscription will\n> help the logical replication users.\n\nThis doesn't really seem worth the trouble --- how often would you\nuse these?\n\nIf we had a policy of inventing reg* aliases for every kind of catalog\nobject, that'd be one thing, but we don't. (And the overhead in\ninventing new object kinds is already high enough, so I'm not in favor\nof creating such a policy.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 29 May 2021 11:40:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Addition of alias types regpublication and regsubscription"
},
{
"msg_contents": "On Sat, May 29, 2021 at 9:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > I felt inclusion of alias types regpublication and regsubscription will\n> > help the logical replication users.\n>\n> This doesn't really seem worth the trouble --- how often would you\n> use these?\n>\n> If we had a policy of inventing reg* aliases for every kind of catalog\n> object, that'd be one thing, but we don't. (And the overhead in\n> inventing new object kinds is already high enough, so I'm not in favor\n> of creating such a policy.)\n\nok, Thanks for considering this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 31 May 2021 19:07:04 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Addition of alias types regpublication and regsubscription"
}
] |
[
{
    "msg_contents": "Good day.\n\nA long time ago I played with a proprietary \"compressed storage\"\npatch on a heavily updated table, and found that empty pages (ie cleaned by\nvacuum) are not compressed enough.\n\nWhen a table is stress-updated, pages for new row versions are allocated\nin a round-robin fashion, therefore some 1GB segments contain almost\nno live tuples. Vacuum removes the dead tuples, but the segments remain large\nafter compression (>400MB) as if they were still full.\n\nAfter some investigation I found it is because PageRepairFragmentation and\nPageIndex*Delete* don't clear space that just became empty, so it\nstill contains garbage data. Clearing it with memset greatly increases the\ncompression ratio: some compressed relation segments become 30-60MB just\nafter vacuum removes the tuples in them.\n\nWhile this result does not directly apply to stock PostgreSQL, I believe\npage compression is important for full_page_writes with wal_compression\nenabled. And probably when PostgreSQL is used on a filesystem with\ncompression enabled (ZFS?).\n\nTherefore I propose clearing a page's empty space with zeros in\nPageRepairFragmentation, PageIndexMultiDelete, PageIndexTupleDelete and\nPageIndexTupleDeleteNoCompact.\n\nSorry, didn't measure impact on raw performance yet.\n\nregards,\nYura Sokolov aka funny_falcon",
"msg_date": "Sun, 30 May 2021 03:10:26 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Clear empty space in a page."
},
{
"msg_contents": "\nHello Yura,\n\n> didn't measure impact on raw performance yet.\n\nMust be done. There c/should be a guc to control this behavior if the \nperformance impact is noticeable.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 30 May 2021 07:22:28 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Clear empty space in a page."
},
{
"msg_contents": "Hi,\n\nI happened to be running some postgres on zfs on Linux/aarch64 tests\nand tested this patch.\n\nKernel: 4.18.0-305.el8.aarch64\nCPU: 16x3.0GHz Ampere Altra / Arm Neoverse N1 cores\n\nZFS: 2.1.0-rc6\nZFS options: options spl spl_kmem_cache_slab_limit=65536 (see:\nhttps://github.com/openzfs/zfs/issues/12150)\n\nPostgres: 13.3 with and without the patch\nPostgres config:\n\nfull_page_writes = on\nwal_compression = on\n\nWithout patch:\n\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 32\nduration: 43200 s\nnumber of transactions actually processed: 612557228\nlatency average = 2.257 ms\ntps = 14179.551402 (including connections establishing)\ntps = 14179.553286 (excluding connections establishing)\n\nWith patch:\n\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 100\nquery mode: prepared\nnumber of clients: 32\nnumber of threads: 32\nduration: 43200 s\nnumber of transactions actually processed: 606967295\nlatency average = 2.278 ms\ntps = 14050.164370 (including connections establishing)\ntps = 14050.166007 (excluding connections establishing)\n\nIt does seem to help with on-disk compression but it *might* have\ncaused more fragmentation.\n\nRegards,\nOmar\n\nOn Sat, May 29, 2021 at 10:22 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Yura,\n>\n> > didn't measure impact on raw performance yet.\n>\n> Must be done. There c/should be a guc to control this behavior if the\n> performance impact is noticeable.\n>\n> --\n> Fabien.\n>\n>\n\n\n",
"msg_date": "Sun, 30 May 2021 09:23:32 -0700",
"msg_from": "Omar Kilani <omar.kilani@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clear empty space in a page."
},
{
"msg_contents": "Hi,\n\nOn 2021-05-30 03:10:26 +0300, Yura Sokolov wrote:\n> While this result is not directly applied to stock PostgreSQL, I believe\n> page compression is important for full_page_writes with wal_compression\n> enabled. And probably when PostgreSQL is used on filesystem with\n> compression enabled (ZFS?).\n\nI don't think the former is relevant, because the hole is skipped in wal page\ncompression (at some cost).\n\n\n> Therefore I propose clearing page's empty space with zero in\n> PageRepairFragmentation, PageIndexMultiDelete, PageIndexTupleDelete and\n> PageIndexTupleDeleteNoCompact.\n> \n> Sorry, didn't measure impact on raw performance yet.\n\nI'm worried that this might cause O(n^2) behaviour in some cases, by\nrepeatedly memset'ing the same mostly already zeroed space to 0. Why do we\never need to do memset_hole() instead of accurately just zeroing out the space\nthat was just vacated?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 May 2021 14:07:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Clear empty space in a page."
},
{
"msg_contents": "Hi,\n\nAndres Freund wrote 2021-05-31 00:07:\n> Hi,\n> \n> On 2021-05-30 03:10:26 +0300, Yura Sokolov wrote:\n>> While this result is not directly applied to stock PostgreSQL, I \n>> believe\n>> page compression is important for full_page_writes with \n>> wal_compression\n>> enabled. And probably when PostgreSQL is used on filesystem with\n>> compression enabled (ZFS?).\n> \n> I don't think the former is relevant, because the hole is skipped in \n> wal page\n> compression (at some cost).\n\nAh, forgot about that. Yep, you are right.\n\n>> Therefore I propose clearing page's empty space with zero in\n>> PageRepairFragmentation, PageIndexMultiDelete, PageIndexTupleDelete \n>> and\n>> PageIndexTupleDeleteNoCompact.\n>> \n>> Sorry, didn't measure impact on raw performance yet.\n> \n> I'm worried that this might cause O(n^2) behaviour in some cases, by\n> repeatedly memset'ing the same mostly already zeroed space to 0. Why do \n> we\n> ever need to do memset_hole() instead of accurately just zeroing out \n> the space\n> that was just vacated?\n\nIt is done exactly this way: memset_hole accepts \"old_pd_upper\" and\nzeroes the space between the old pd_upper and the new one.\n\nregards,\nYura\n\n\n",
"msg_date": "Tue, 01 Jun 2021 09:08:11 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Clear empty space in a page."
}
] |
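The zeroing scheme discussed in the thread above, including Yura's answer to Andres's O(n^2) concern, can be sketched as follows. This is a standalone illustration: `FakePage` and the `memset_hole()` signature are simplified stand-ins for PostgreSQL's `PageHeaderData` and the actual patch code.

```c
#include <stdint.h>
#include <string.h>

/*
 * Hedged sketch of the proposal above: after compaction, zero only the
 * bytes that were just vacated, i.e. the region between the old and the
 * new pd_upper.  FakePage and memset_hole() are simplified stand-ins for
 * PostgreSQL's PageHeaderData and the patch's code, not the real thing.
 */
#define FAKE_PAGE_SIZE 8192

typedef struct
{
	uint16_t	pd_lower;	/* offset to start of free space */
	uint16_t	pd_upper;	/* offset to end of free space */
	uint8_t		data[FAKE_PAGE_SIZE];
} FakePage;

/*
 * Zero the bytes freed by compaction: [old_pd_upper, pd_upper).
 * Passing the pre-compaction offset means each call touches only the
 * newly vacated bytes, avoiding the repeated re-zeroing of an already
 * clean hole that the O(n^2) concern is about.
 */
static void
memset_hole(FakePage *page, uint16_t old_pd_upper)
{
	if (page->pd_upper > old_pd_upper)
		memset(page->data + old_pd_upper, 0,
			   page->pd_upper - old_pd_upper);
}
```

Because the caller passes the pre-compaction `pd_upper`, successive vacuums never re-memset space that was already zeroed.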
[
{
"msg_contents": "Hi,\n\nIRIX gave the world O_DIRECT, and then every Unix I've used followed\ntheir lead except Apple's, which gave the world fcntl(fd, F_NOCACHE,\n1). From what I could find in public discussion, this API difference\nmay stem from the caching policy being controlled at the per-file\n(vnode) level in older macOS (and perhaps ancestors), but since 10.4\nit's per file descriptor, so approximately like O_DIRECT on other\nsystems. The precise effects and constraints of O_DIRECT/F_NOCACHE\nare different across operating systems and file systems in some subtle\nand not-so-subtle ways, but the general concept is the same: try to\navoid buffering.\n\nI thought about a few different ways to encapsulate this API\ndifference in PostgreSQL, and toyed with two:\n\n1. We could define our own fake O_DIRECT flag, and translate that to\nthe right thing inside BasicOpenFilePerm(). That seems a bit icky.\nWe'd have to be careful not to collide with system defined flags and\nworry about changes. We do that sort of thing for Windows, though\nthat's a bit different, there we translate *all* the flags from\nPOSIXesque to Windowsian.\n\n2. We could make an extended BasicOpenFilePerm() variant that takes a\nseparate boolean parameter for direct, so that we don't have to hijack\nany flag space, but now we need new interfaces just to tolerate a\nrather niche system.\n\nHere's a draft patch like #2, just for discussion. Better ideas?\n\nThe reason I want to get direct I/O working on this \"client\" OS is\nbecause the AIO project will propose to use direct I/O for the buffer\npool as an option, and I would like Macs to be able to do that\nprimarily for the sake of developers trying out the patch set. 
Based\non memories from the good old days of attending conferences, a decent\npercentage of PostgreSQL developers are on Macs.\n\nAs it stands, the patch only actually has any effect if you set\nwal_level=minimal and max_wal_senders=0, which is a configuration that\nI guess almost no-one uses. Otherwise xlog.c assumes that the\nfilesystem is going to be used for data exchange with replication\nprocesses (something we should replace with WAL buffers in shmem some\ntime soon) so for now it's better to keep the data in page cache since\nit'll be accessed again soon.\n\nUnfortunately, this change makes pg_test_fsync show a very slightly\nlower number for open_data_sync on my ancient Intel Mac, but\npg_test_fsync isn't really representative anymore since minimal\nlogging is by now unusual (I guess pg_test_fsync would ideally do the\ntest with and without direct to make that clearer). Whether this is a\ngood option for the WAL is separate from whether it's a good option\nfor relation data (ie a way to avoid large scale double buffering, but\nhave new, different problems), and later patches will propose new\nseparate GUCs to control that.",
"msg_date": "Sun, 30 May 2021 16:39:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "O_DIRECT on macOS"
},
{
"msg_contents": "On Sun, May 30, 2021 at 04:39:48PM +1200, Thomas Munro wrote:\n\n> +BasicOpenFilePermDirect(const char *fileName, int fileFlags, mode_t fileMode,\n> + bool direct)\n> ...\n> +#if !defined(O_DIRECT) && defined(F_NOCACHE)\n> + /* macOS requires an extra step. */\n> + if (direct && fcntl(fd, F_NOCACHE, 1) < 0)\n> + {\n> + int save_errno = errno;\n> +\n> + close(fd);\n> + errno = save_errno;\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not disable kernel file caching for file \\\"%s\\\": %m\",\n> + fileName)));\n> + }\n> +#endif\n\nShould there be an \"else\" to warn/error in the case that \"direct\" is requested\nbut not supported?\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 30 May 2021 11:19:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "Hi,\n\nThanks for starting the discussion on this!\n\nOn 2021-05-30 16:39:48 +1200, Thomas Munro wrote:\n> I thought about a few different ways to encapsulate this API\n> difference in PostgreSQL, and toyed with two:\n> \n> 1. We could define our own fake O_DIRECT flag, and translate that to\n> the right thing inside BasicOpenFilePerm(). That seems a bit icky.\n> We'd have to be careful not to collide with system defined flags and\n> worry about changes. We do that sort of thing for Windows, though\n> that's a bit different, there we translate *all* the flags from\n> POSIXesque to Windowsian.\n> \n> 2. We could make an extended BasicOpenFilePerm() variant that takes a\n> separate boolean parameter for direct, so that we don't have to hijack\n> any flag space, but now we need new interfaces just to tolerate a\n> rather niche system.\n\nI don't think 2) really covers the problem on its own. It's fine for\nthings that directly use BasicOpenFilePerm(), but what about \"virtual\nfile descriptors\" (PathNameOpenFile())? I.e. what md.c et al use? There\nwe need to store the fact that we want non-buffered IO as part of the\nvfd, otherwise we'll lose that information when re-opening the file\nlater.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 May 2021 13:12:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Mon, May 31, 2021 at 8:12 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-05-30 16:39:48 +1200, Thomas Munro wrote:\n> > I thought about a few different ways to encapsulate this API\n> > difference in PostgreSQL, and toyed with two:\n> >\n> > 1. We could define our own fake O_DIRECT flag, and translate that to\n> > the right thing inside BasicOpenFilePerm(). That seems a bit icky.\n> > We'd have to be careful not to collide with system defined flags and\n> > worry about changes. We do that sort of thing for Windows, though\n> > that's a bit different, there we translate *all* the flags from\n> > POSIXesque to Windowsian.\n> >\n> > 2. We could make an extended BasicOpenFilePerm() variant that takes a\n> > separate boolean parameter for direct, so that we don't have to hijack\n> > any flag space, but now we need new interfaces just to tolerate a\n> > rather niche system.\n>\n> I don't think 2) really covers the problem on its own. It's fine for\n> things that directly use BasicOpenFilePerm(), but what about \"virtual\n> file descriptors\" (PathNameOpenFile())? I.e. what md.c et al use? There\n> we need to store the fact that we want non-buffered IO as part of the\n> vfd, otherwise we'll lose that information when re-opening the file\n> later.\n\nRight, a bit more API perturbation is required to traffic the separate\nflags around for VFDs, which is all a bit unpleasant for a feature\nthat most people don't care about.\n\nFor comparison, here is my sketch of idea #1. I pick an arbitrary\nvalue to use as PG_O_DIRECT (I don't want to define O_DIRECT for fear\nof breaking other code that might see it and try to pass it into\nopen()... for all I know, it might happen to match OS-internal value\nO_NASAL_DEMONS), and statically assert that it doesn't collide with\nstandard flags we're using, and I strip it out of the flags I pass in\nto open(). As I said, a bit icky, but it's a tiny and localised\npatch, which is nice.\n\nI also realised that it probably wasn't right to raise an ERROR, so in\nthis version I return -1 when fcntl() fails.",
"msg_date": "Mon, 31 May 2021 10:29:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Mon, May 31, 2021 at 4:19 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Should there be an \"else\" to warn/error in the case that \"direct\" is requested\n> but not supported?\n\nThe way we use O_DIRECT currently is extremely minimal, it's just \"if\nyou've got it, we'll use it, but otherwise not complain\", and I wasn't\ntrying to change that yet, but you're right that if we add explicit\nGUCs to turn on direct I/O for WAL and data files we should definitely\nnot let you turn them on if we can't do it.\n\n\n",
"msg_date": "Mon, 31 May 2021 10:31:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Mon, May 31, 2021 at 10:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> For comparison, here is my sketch of idea #1. I pick an arbitrary\n> value to use as PG_O_DIRECT (I don't want to define O_DIRECT for fear\n> of breaking other code that might see it and try to pass it into\n> open()... for all I know, it might happen to match OS-internal value\n> O_NASAL_DEMONS), and statically assert that it doesn't collide with\n> standard flags we're using, and I strip it out of the flags I pass in\n> to open(). As I said, a bit icky, but it's a tiny and localised\n> patch, which is nice.\n\nI'm planning to go with that idea (#1), if there are no objections.\n\n\n",
"msg_date": "Tue, 13 Jul 2021 13:25:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "Hi,\n\nOn 2021-07-13 13:25:50 +1200, Thomas Munro wrote:\n> On Mon, May 31, 2021 at 10:29 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > For comparison, here is my sketch of idea #1. I pick an arbitrary\n> > value to use as PG_O_DIRECT (I don't want to define O_DIRECT for fear\n> > of breaking other code that might see it and try to pass it into\n> > open()... for all I know, it might happen to match OS-internal value\n> > O_NASAL_DEMONS), and statically assert that it doesn't collide with\n> > standard flags we're using, and I strip it out of the flags I pass in\n> > to open(). As I said, a bit icky, but it's a tiny and localised\n> > patch, which is nice.\n> \n> I'm planning to go with that idea (#1), if there are no objections.\n\nThe only other viable approach I see is to completely separate our\ninternal flag representation from the OS representation and do the whole\nmapping inside fd.c - but that seems like a too big hammer right now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 12 Jul 2021 18:56:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Tue, Jul 13, 2021 at 1:56 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2021-07-13 13:25:50 +1200, Thomas Munro wrote:\n> > I'm planning to go with that idea (#1), if there are no objections.\n>\n> The only other viable approach I see is to completely separate our\n> internal flag representation from the OS representation and do the whole\n> mapping inside fd.c - but that seems like a too big hammer right now.\n\nAgreed. Pushed!\n\nFor the record, Solaris has directio() that could be handled the same\nway. I'm not planning to look into that myself, but patches welcome.\nIllumos (née OpenSolaris) got with the programme and added O_DIRECT.\nOf our 10-or-so target systems I guess that'd leave just HPUX (judging\nby an old man page found on the web) and OpenBSD with no direct I/O\nsupport.\n\n\n",
"msg_date": "Mon, 19 Jul 2021 12:28:34 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Agreed. Pushed!\n\nprairiedog thinks that Assert is too optimistic about whether all\nthose flags exist.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 00:41:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> prairiedog thinks that Assert is too optimistic about whether all\n> those flags exist.\n\nFixed.\n\n(Huh, I received no -committers email for 2dbe8905.)\n\n\n",
"msg_date": "Mon, 19 Jul 2021 16:54:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Mon, Jul 19, 2021 at 12:55 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n>\n> On Mon, Jul 19, 2021 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > prairiedog thinks that Assert is too optimistic about whether all\n> > those flags exist.\n>\n> Fixed.\n>\n> (Huh, I received no -committers email for 2dbe8905.)\n\nIt didn't show up in the archives, either. Neither did your follow-up\n04cad8f7bc.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Jul 2021 08:15:20 -0400",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Jul 19, 2021 at 4:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> prairiedog thinks that Assert is too optimistic about whether all\n>> those flags exist.\n\n> Fixed.\n\nHmm ... we used to have to avoid putting #if constructs in the arguments\nof macros (such as StaticAssertStmt). Maybe that's not a thing anymore\nwith C99, and in any case this whole stanza is fairly platform-specific\nso we may not run into a compiler that complains. But my hindbrain wants\nto see this done with separate statements, eg\n\n#if defined(O_CLOEXEC)\n StaticAssertStmt((PG_O_DIRECT & O_CLOEXEC) == 0,\n \"PG_O_DIRECT collides with O_CLOEXEC\");\n#endif\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 10:13:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 2:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... we used to have to avoid putting #if constructs in the arguments\n> of macros (such as StaticAssertStmt). Maybe that's not a thing anymore\n> with C99, and in any case this whole stanza is fairly platform-specific\n> so we may not run into a compiler that complains. But my hindbrain wants\n> to see this done with separate statements, eg\n>\n> #if defined(O_CLOEXEC)\n> StaticAssertStmt((PG_O_DIRECT & O_CLOEXEC) == 0,\n> \"PG_O_DIRECT collides with O_CLOEXEC\");\n> #endif\n\nOk, done.\n\nWhile I was here again, I couldn't resist trying to extend this to\nSolaris, since it looked so easy. I don't have access, but I tested\non Illumos by undefining O_DIRECT. Thoughts?",
"msg_date": "Tue, 20 Jul 2021 11:23:48 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> While I was here again, I couldn't resist trying to extend this to\n> Solaris, since it looked so easy. I don't have access, but I tested\n> on Illumos by undefining O_DIRECT. Thoughts?\n\nI can try that on the gcc farm in a bit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 19:43:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> While I was here again, I couldn't resist trying to extend this to\n>> Solaris, since it looked so easy. I don't have access, but I tested\n>> on Illumos by undefining O_DIRECT. Thoughts?\n\n> I can try that on the gcc farm in a bit.\n\nHmm, it compiles cleanly, but something seems drastically wrong,\nbecause performance is just awful. On the other hand, I don't\nknow what sort of storage is underlying this instance, so maybe\nthat's to be expected? If I set fsync = off, the speed seems\ncomparable to what wrasse reports, but with fsync on it's like\n\ntest tablespace ... ok 87990 ms\nparallel group (20 tests, in groups of 1): boolean char name varchar text int2 int4 int8 oid float4 float8 bit numeric txid uuid enum money rangetypes pg_lsn regproc\n boolean ... ok 3229 ms\n char ... ok 2758 ms\n name ... ok 2229 ms\n varchar ... ok 7373 ms\n text ... ok 722 ms\n int2 ... ok 342 ms\n int4 ... ok 1303 ms\n int8 ... ok 1095 ms\n oid ... ok 1086 ms\n float4 ... ok 6360 ms\n float8 ... ok 5224 ms\n bit ... ok 6254 ms\n numeric ... ok 44304 ms\n txid ... ok 377 ms\n uuid ... ok 3946 ms\n enum ... ok 33189 ms\n money ... ok 622 ms\n rangetypes ... ok 17301 ms\n pg_lsn ... ok 798 ms\n regproc ... ok 145 ms\n\n(I stopped running it at that point...)\n\nAlso, the results of pg_test_fsync seem wrong; it refuses to run\ntests for the cases we're interested in:\n\n$ pg_test_fsync \n5 seconds per test\nDIRECTIO_ON supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync n/a*\n fdatasync 8.324 ops/sec 120139 usecs/op\n fsync 0.906 ops/sec 1103936 usecs/op\n fsync_writethrough n/a\n open_sync n/a*\n* This file system and its mount options do not support direct\n I/O, e.g. 
ext4 in journaled mode.\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync is Linux's default)\n open_datasync n/a*\n fdatasync 7.329 ops/sec 136449 usecs/op\n fsync 0.788 ops/sec 1269258 usecs/op\n fsync_writethrough n/a\n open_sync n/a*\n* This file system and its mount options do not support direct\n I/O, e.g. ext4 in journaled mode.\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB in different write\nopen_sync sizes.)\n 1 * 16kB open_sync write n/a*\n 2 * 8kB open_sync writes n/a*\n 4 * 4kB open_sync writes n/a*\n 8 * 2kB open_sync writes n/a*\n 16 * 1kB open_sync writes n/a*\n\nTest if fsync on non-write file descriptor is honored:\n(If the times are similar, fsync() can sync data written on a different\ndescriptor.)\n write, fsync, close 16.388 ops/sec 61020 usecs/op\n write, close, fsync 9.084 ops/sec 110082 usecs/op\n\nNon-sync'ed 8kB writes:\n write 39855.686 ops/sec 25 usecs/op\n\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Jul 2021 20:26:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: O_DIRECT on macOS"
},
{
"msg_contents": "On Tue, Jul 20, 2021 at 12:26 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I can try that on the gcc farm in a bit.\n\nThanks!\n\n> Hmm, it compiles cleanly, but something seems drastically wrong,\n> because performance is just awful. On the other hand, I don't\n> know what sort of storage is underlying this instance, so maybe\n> that's to be expected?\n\nOuch. I assume this was without wal_level=minimal (or it'd have\nreached the new code and failed completely, based on the pg_test_fsync\nresult).\n\n> open_datasync n/a*\n\nI'm waiting for access, but I see from man pages that closed source\nZFS doesn't accept DIRECTIO_ON, so it may not be possible to see it\nwork on an all-ZFS system that you can't mount a new FS on. Hmm.\nWell, many OSes have file systems that can't do it (ext4 journal=data,\netc). One problem is that we don't treat all OSes the same when\nselecting wal_sync_method, even though O_DIRECT is complicated on many\nOSes. It would also be nice if the choice to use direct I/O were\nindependently controlled, and ... [trails off]. Alright, I'll leave\nthis on ice for now.\n\n\n",
"msg_date": "Tue, 20 Jul 2021 17:01:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: O_DIRECT on macOS"
}
] |
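The flag-translation approach discussed above ("idea #1") can be sketched roughly like this. The `PG_O_DIRECT` value, the `open_maybe_direct()` name, and the error handling are illustrative assumptions, not the committed PostgreSQL fd.c code.

```c
#define _GNU_SOURCE				/* for O_DIRECT on Linux */
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Hedged sketch of "idea #1": reserve one flag bit of our own, statically
 * check that it doesn't collide with the O_* flags we actually pass to
 * open(), strip it back out before the open() call, and fall back to
 * fcntl(F_NOCACHE) on macOS, where the caching policy is set per file
 * descriptor after opening.  The 0x40000000 value and the function name
 * are illustrative, not PostgreSQL's committed code.
 */
#define PG_O_DIRECT 0x40000000

#if defined(O_DIRECT)
_Static_assert((PG_O_DIRECT & O_DIRECT) == 0,
			   "PG_O_DIRECT collides with O_DIRECT");
#endif

static int
open_maybe_direct(const char *fileName, int fileFlags, mode_t fileMode)
{
	bool		direct = (fileFlags & PG_O_DIRECT) != 0;
	int			osFlags = fileFlags & ~PG_O_DIRECT;	/* strip our fake flag */
	int			fd;

#if defined(O_DIRECT)
	if (direct)
		osFlags |= O_DIRECT;	/* translate to the real OS flag */
#endif
	fd = open(fileName, osFlags, fileMode);

#if !defined(O_DIRECT) && defined(F_NOCACHE)
	/* macOS requires an extra step after open(). */
	if (fd >= 0 && direct && fcntl(fd, F_NOCACHE, 1) < 0)
	{
		int			save_errno = errno;

		close(fd);
		errno = save_errno;
		return -1;
	}
#endif
	return fd;
}
```

On platforms with neither `O_DIRECT` nor `F_NOCACHE`, the flag is silently stripped, matching the existing "use it if you've got it" policy described upthread.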
[
{
"msg_contents": "While working on something in \"psql/common.c\" I noticed some triplicated \ncode, including a long translatable string. This minor patch refactors \nthis in one function.\n\n-- \nFabien.",
"msg_date": "Sun, 30 May 2021 11:09:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "psql - factor out echo code"
},
{
"msg_contents": ">-----Original Message-----\n>From: Fabien COELHO <coelho@cri.ensmp.fr>\n>Sent: Sunday, May 30, 2021 6:10 PM\n>To: PostgreSQL Developers <pgsql-hackers@lists.postgresql.org>\n>Subject: psql - factor out echo code\n>\n>\n>While working on something in \"psql/common.c\" I noticed some triplicated code,\n>including a long translatable string. This minor patch refactors this in one\n>function.\n>\n>--\n>Fabien.\n\nWouldn't it be better to comment it like any other function?\n\nBest regards,\nShinya Kato\n\n\n",
"msg_date": "Mon, 14 Jun 2021 03:54:30 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": false,
"msg_subject": "RE: psql - factor out echo code"
},
{
"msg_contents": "> Wouldn't it be better to comment it like any other function?\n\nSure. Attached.\n\n-- \nFabien.",
"msg_date": "Mon, 14 Jun 2021 08:57:10 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "RE: psql - factor out echo code"
},
{
"msg_contents": ">> Wouldn't it be better to comment it like any other function?\n>\n>Sure. Attached.\n\nThank you for your revision.\nI think this patch is good, so I will move it to ready for committer.\n\nBest regards,\nShinya Kato\n\n\n",
"msg_date": "Tue, 15 Jun 2021 09:19:57 +0000",
"msg_from": "<Shinya11.Kato@nttdata.com>",
"msg_from_op": false,
"msg_subject": "RE: psql - factor out echo code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> [ psql-echo-2.patch ]\n\nI went to commit this, figuring that it was a trivial bit of code\nconsolidation, but as I looked around in common.c I got rather\nunhappy with the inconsistent behavior of things. Examining\nthe various places that implement \"echo\"-related logic, we have\nthe three places this patch proposes to unify, which log queries\nusing\n\n fprintf(out,\n _(\"********* QUERY **********\\n\"\n \"%s\\n\"\n \"**************************\\n\\n\"), query);\n\nand then we have two more that just do\n\n puts(query);\n\nplus this:\n\n if (!OK && pset.echo == PSQL_ECHO_ERRORS)\n pg_log_info(\"STATEMENT: %s\", query);\n\nSo it's exactly fifty-fifty as to whether we add all that decoration\nor none at all. I think if we're going to touch this logic, we\nought to try to unify the behavior. My vote would be to drop the\ndecoration everywhere, but perhaps there are votes not to?\n\nA different angle is that the identical decoration is used for both\npsql-generated queries that are logged because of ECHO_HIDDEN, and\nuser-entered queries. This seems at best rather unhelpful. If\nwe keep the decoration, should we make it different for those two\ncases? (Maybe \"INTERNAL QUERY\" vs \"QUERY\", for example.) The\ncases with no decoration likewise fall into multiple categories,\nboth user-entered and generated-by-gexec; if we were going with\na decorated approach I'd think it useful to make a distinction\nbetween those, too.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Jul 2021 11:15:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Hello Tom,\n\n> I went to commit this, figuring that it was a trivial bit of code\n> consolidation, but as I looked around in common.c I got rather\n> unhappy with the inconsistent behavior of things. Examining\n> the various places that implement \"echo\"-related logic, we have\n> the three places this patch proposes to unify, which log queries\n> using\n>\n> fprintf(out,\n> _(\"********* QUERY **********\\n\"\n> \"%s\\n\"\n> \"**************************\\n\\n\"), query);\n>\n> and then we have two more that just do\n>\n> puts(query);\n>\n> plus this:\n>\n> if (!OK && pset.echo == PSQL_ECHO_ERRORS)\n> pg_log_info(\"STATEMENT: %s\", query);\n>\n> So it's exactly fifty-fifty as to whether we add all that decoration\n> or none at all. I think if we're going to touch this logic, we\n> ought to try to unify the behavior.\n\n+1.\n\nI did not go this way because I wanted it to be a simple restructuring \npatch so that it could go through without much ado, but I agree with \nimproving the current status. I'm not sure we want too much ascii-art.\n\n> My vote would be to drop the decoration everywhere, but perhaps there \n> are votes not to?\n\nNo, I'd be ok with removing the decoration, or at least simplifying them,\nor, as you suggest below, making them have a useful semantics.\n\n> A different angle is that the identical decoration is used for both\n> psql-generated queries that are logged because of ECHO_HIDDEN, and\n> user-entered queries. This seems at best rather unhelpful.\n\nIndeed.\n\n> If we keep the decoration, should we make it different for those two \n> cases? (Maybe \"INTERNAL QUERY\" vs \"QUERY\", for example.) The cases \n> with no decoration likewise fall into multiple categories, both \n> user-entered and generated-by-gexec; if we were going with a decorated \n> approach I'd think it useful to make a distinction between those, too.\n>\n> Thoughts?\n\nYes. Maybe decorations should be SQL comments, and the purpose/origin of \nthe query could be made clear as you suggest, eg something like markdown \nin a comment:\n\n \"-- # <whatever> QUERY\\n%s\\n\\n\"\n\nwith <whatever> in USER DESCRIPTION COMPLETION GEXEC…\n\n-- \nFabien.",
"msg_date": "Fri, 2 Jul 2021 21:53:29 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Yes. Maybe decorations should be SQL comments, and the purpose/origin of \n> the query could be made clear as you suggest, eg something like markdown \n> in a comment:\n> \"-- # <whatever> QUERY\\n%s\\n\\n\"\n\nIf we keep the decoration, I'd agree with dropping all the asterisks.\nI'd vote for something pretty minimalistic, like\n\n\t-- INTERNAL QUERY:\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Jul 2021 16:56:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
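A consolidated helper along the minimalistic lines suggested above might look like the sketch below; the `echoQuery()` name and the exact decoration strings are assumptions for illustration, not psql's actual code.

```c
#define _POSIX_C_SOURCE 200809L
#include <stdbool.h>
#include <stdio.h>

/*
 * Hypothetical consolidation of the duplicated echo sites in
 * psql/common.c into one helper, using a minimal comment-style
 * decoration.  The function name and the exact strings are
 * illustrative assumptions, not psql's committed code.
 */
static void
echoQuery(FILE *out, bool is_internal, const char *query)
{
	fprintf(out, "-- %s\n%s\n\n",
			is_internal ? "INTERNAL QUERY" : "QUERY", query);
}
```

The three duplicated `fprintf` banner sites and the bare `puts(query)` calls could then all route through this one function, with `is_internal` distinguishing ECHO_HIDDEN queries from user-entered ones.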
{
"msg_contents": "On 2021-Jul-02, Tom Lane wrote:\n\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> > Yes. Maybe decorations should be SQL comments, and the purpose/origin of \n> > the query could be made clear as you suggest, eg something like markdown \n> > in a comment:\n> > \"-- # <whatever> QUERY\\n%s\\n\\n\"\n> \n> If we keep the decoration, I'd agree with dropping all the asterisks.\n> I'd vote for something pretty minimalistic, like\n> \n> \t-- INTERNAL QUERY:\n\nI think the most interesting case for decoration is the \"step by step\"\nmode, where you want the \"title\" that precedes each query be easily\nvisible. I think two uppercase words are not sufficient for that ...\nand Markdown format which would force you to convert to HTML before you\ncan notice where it is, are not sufficient either. The line with a few\nasterisks seems fine to me for that. Removing the asterisks in the\nother case seems fine. I admit I don't use the step-by-step mode all\nthat much, though.\n\nAlso: one place that prints queries that wasn't mentioned before is\nexec_command_print() in command.c.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"Ed is the standard text editor.\"\n http://groups.google.com/group/alt.religion.emacs/msg/8d94ddab6a9b0ad3\n\n\n",
"msg_date": "Fri, 2 Jul 2021 17:07:50 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I think the most interesting case for decoration is the \"step by step\"\n> mode, where you want the \"title\" that precedes each query be easily\n> visible.\n\nI'm okay with leaving the step-by-step prompt as-is, personally.\nIt's the inconsistency of the other ones that bugs me.\n\n> Also: one place that prints queries that wasn't mentioned before is\n> exec_command_print() in command.c.\n\nAh, I was wondering if anyplace outside common.c did so. But that\none seems to me to be a different animal -- it's not logging\nqueries-about-to-be-executed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Jul 2021 17:33:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "> \"-- # <whatever> QUERY\\n%s\\n\\n\"\n\nAttached an attempt along those lines. I found another duplicate of the \nascii-art printing in another function.\n\nCompletion queries seems to be out of the echo/echo hidden feature.\n\nIncredible, there is a (small) impact on regression tests for the \\gexec \ncase. All other changes have no impact, because they are not tested:-(\n\n-- \nFabien.",
"msg_date": "Fri, 2 Jul 2021 23:37:24 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "On Sat, Jul 3, 2021 at 3:07 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> > \"-- # <whatever> QUERY\\n%s\\n\\n\"\n>\n> Attached an attempt along those lines. I found another duplicate of the\n> ascii-art printing in another function.\n>\n> Completion queries seems to be out of the echo/echo hidden feature.\n>\n> Incredible, there is a (small) impact on regression tests for the \\gexec\n> case. All other changes have no impact, because they are not tested:-(\n\nI am changing the status to \"Needs review\" as the review is not\ncompleted for this patch and also there are some tests failing, that\nneed to be fixed:\ntest test_extdepend ... FAILED 50 ms\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 10 Jul 2021 20:19:13 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Hello Vignesh,\n\n> I am changing the status to \"Needs review\" as the review is not\n> completed for this patch and also there are some tests failing, that\n> need to be fixed:\n> test test_extdepend ... FAILED 50 ms\n\nIndeed,\n\nAttached v4 simplifies the format and fixes this one.\nI ran check-world, this time.\n\n-- \nFabien.",
"msg_date": "Sat, 10 Jul 2021 18:55:36 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "On Sat, Jul 10, 2021 at 10:25 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Vignesh,\n>\n> > I am changing the status to \"Needs review\" as the review is not\n> > completed for this patch and also there are some tests failing, that\n> > need to be fixed:\n> > test test_extdepend ... FAILED 50 ms\n>\n> Indeed,\n>\n> Attached v4 simplifies the format and fixes this one.\n> I ran check-world, this time.\n\nThanks for posting an updated patch, the tests are passing now. I have\nchanged the status back to Ready For Committer.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 11 Jul 2021 18:36:19 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> Attached v4 simplifies the format and fixes this one.\n\nI think this goes way way overboard in terms of invasiveness.\nThere's no need to identify individual call sites of PSQLexec.\nWe didn't have anything like that level of detail before, and\nthere has been no field demand for it either. What I had\nin mind was basically to identify the call sites of echoQuery,\nie distinguish user commands from psql-generated commands\nwith labels like \"QUERY:\" vs \"INTERNAL QUERY:\". We don't\nneed to change the APIs of existing functions, I don't think.\n\nIt also looks like a mess from the translatibility standpoint.\nYou can't expect \"%s QUERY\" to be a useful thing for translators.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 13 Jul 2021 18:16:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": ">> Attached v4 simplifies the format and fixes this one.\n>\n> I think this goes way way overboard in terms of invasiveness. There's no \n> need to identify individual call sites of PSQLexec. [...]\n\nISTM that having the information was useful for the user who actually \nasked for psql to show hidden queries, and pretty simple to get, although \nsomehow invasive.\n\n> It also looks like a mess from the translatibility standpoint.\n> You can't expect \"%s QUERY\" to be a useful thing for translators.\n\nSure. Maybe I should have used an enum have a explicit switch in \nechoQuery, but I do not like writing this kind of code.\n\nAttached a v5 without hinting at the origin of the query beyond internal \nor not.\n\n-- \nFabien.",
"msg_date": "Wed, 14 Jul 2021 09:57:54 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "Hi\n\n\nne 24. 7. 2022 v 21:39 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> >> Attached v4 simplifies the format and fixes this one.\n> >\n> > I think this goes way way overboard in terms of invasiveness. There's no\n> > need to identify individual call sites of PSQLexec. [...]\n>\n> ISTM that having the information was useful for the user who actually\n> asked for psql to show hidden queries, and pretty simple to get, although\n> somehow invasive.\n>\n> > It also looks like a mess from the translatibility standpoint.\n> > You can't expect \"%s QUERY\" to be a useful thing for translators.\n>\n> Sure. Maybe I should have used an enum have a explicit switch in\n> echoQuery, but I do not like writing this kind of code.\n>\n> Attached a v5 without hinting at the origin of the query beyond internal\n> or not.\n>\n\n\nI had just one question - with this patch, the format of output of modes\nECHO ALL and ECHO QUERIES will be different, and that can be a little bit\nmessy. On second hand, the prefix --QUERY can be disturbing in echo queries\nmode. It is not a problem in echo all mode, because queries and results are\nmixed together. So in the end, I think the current design can work.\n\nAll tests passed, this is trivial patch without impacts on users\n\nI'll mark this patch as ready for committer\n\nRegards\n\nPavel",
"msg_date": "Sun, 24 Jul 2022 22:23:39 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "On Sun, Jul 24, 2022 at 10:23:39PM +0200, Pavel Stehule wrote:\n> I had just one question - with this patch, the format of output of modes\n> ECHO ALL and ECHO QUERIES will be different, and that can be a little bit\n> messy. On second hand, the prefix --QUERY can be disturbing in echo queries\n> mode. It is not a problem in echo all mode, because queries and results are\n> mixed together. So in the end, I think the current design can work.\n> \n> All tests passed, this is trivial patch without impacts on users\n> \n> I'll mark this patch as ready for committer\n\nHmm. The refactoring is worth it as much as the differentiation\nbetween QUERY and INTERNAL QUERY as the same pattern is repeated 5\ntimes.\n\nNow some of the output generated by test_extdepend gets a bit\nconfusing:\n+-- QUERY:\n+\n+\n+-- QUERY:\n\nThat's not entirely this patch's fault. Still that's not really\nintuitive to see the output of a query that's just a blank spot..\n--\nMichael",
"msg_date": "Wed, 30 Nov 2022 16:45:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "\n> Hmm. The refactoring is worth it as much as the differentiation\n> between QUERY and INTERNAL QUERY as the same pattern is repeated 5\n> times.\n>\n> Now some of the output generated by test_extdepend gets a bit\n> confusing:\n> +-- QUERY:\n> +\n> +\n> +-- QUERY:\n>\n> That's not entirely this patch's fault. Still that's not really\n> intuitive to see the output of a query that's just a blank spot..\n\nHmmm.\n\nWhat about adding an explicit \\echo before these empty outputs to mitigate \nthe possible induced confusion?\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 30 Nov 2022 10:24:20 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": ">> Now some of the output generated by test_extdepend gets a bit\n>> confusing:\n>> +-- QUERY:\n>> +\n>> +\n>> +-- QUERY:\n>> \n>> That's not entirely this patch's fault. Still that's not really\n>> intuitive to see the output of a query that's just a blank spot..\n>\n> Hmmm.\n>\n> What about adding an explicit \\echo before these empty outputs to mitigate \n> the possible induced confusion?\n\n\\echo is not possible.\n\nAttached an attempt to improve the situation by replacing empty lines with \ncomments in this test.\n\n-- \nFabien.",
"msg_date": "Wed, 30 Nov 2022 10:43:09 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "st 30. 11. 2022 v 10:43 odesílatel Fabien COELHO <coelho@cri.ensmp.fr>\nnapsal:\n\n>\n> >> Now some of the output generated by test_extdepend gets a bit\n> >> confusing:\n> >> +-- QUERY:\n> >> +\n> >> +\n> >> +-- QUERY:\n> >>\n> >> That's not entirely this patch's fault. Still that's not really\n> >> intuitive to see the output of a query that's just a blank spot..\n> >\n> > Hmmm.\n> >\n> > What about adding an explicit \\echo before these empty outputs to\n> mitigate\n> > the possible induced confusion?\n>\n> \\echo is not possible.\n>\n> Attached an attempt to improve the situation by replacing empty lines with\n> comments in this test.\n>\n\nI can confirm so all regress tests passed\n\nRegards\n\nPavel\n\n\n>\n> --\n> Fabien.",
"msg_date": "Thu, 1 Dec 2022 08:27:46 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "On 01.12.22 08:27, Pavel Stehule wrote:\n> st 30. 11. 2022 v 10:43 odesílatel Fabien COELHO <coelho@cri.ensmp.fr \n> <mailto:coelho@cri.ensmp.fr>> napsal:\n> \n> \n> >> Now some of the output generated by test_extdepend gets a bit\n> >> confusing:\n> >> +-- QUERY:\n> >> +\n> >> +\n> >> +-- QUERY:\n> >>\n> >> That's not entirely this patch's fault. Still that's not really\n> >> intuitive to see the output of a query that's just a blank spot..\n> >\n> > Hmmm.\n> >\n> > What about adding an explicit \\echo before these empty outputs to\n> mitigate\n> > the possible induced confusion?\n> \n> \\echo is not possible.\n> \n> Attached an attempt to improve the situation by replacing empty\n> lines with\n> comments in this test.\n> \n> \n> I can confirm so all regress tests passed\n\nI think this patch requires an up-to-date summary and explanation. The \nthread is over a year old and the patch has evolved quite a bit. There \nare some test changes that are not explained. Please provide more \ndetail so that the patch can be considered.\n\n\n\n",
"msg_date": "Mon, 13 Feb 2023 11:41:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
},
{
"msg_contents": "On Mon, 13 Feb 2023 at 05:41, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> I think this patch requires an up-to-date summary and explanation. The\n> thread is over a year old and the patch has evolved quite a bit. There\n> are some test changes that are not explained. Please provide more\n> detail so that the patch can be considered.\n\nGiven this feedback I'm going to mark this Returned with Feedback. I\nthink it'll be clearer to start with a new thread explaining the\nintent of the patch as it is now.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 3 Apr 2023 16:03:40 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: psql - factor out echo code"
}
] |
[
{
"msg_contents": "I got interested in $SUBJECT as a result of the thread at [1].\nIt turns out that the existing implementation in inval.c is quite\ninefficient when a lot of individual commands each register just\na few invalidations --- but a few invalidations per command is\npretty typical. As an example, consider\n\nDO $do$\n BEGIN\n FOR i IN 1..200000000 LOOP\n execute 'create function foo' || i || '() returns int language sql as $$select 1$$';\n if (i % 100000 = 0) then\n raise notice '% loops done', i;\n end if;\n END LOOP;\n END\n$do$;\n\nEach CREATE FUNCTION registers three invalidation events, which\nminimally would require 48 bytes ... but the current code actually\neats about 2kB per iteration, because we allocate a pair of new\n\"chunks\" for each command. The chunks themselves are intended\nto hold 32 entries which'd take 512 bytes --- but there's some\noverhead, causing aset.c to round up to 1024 bytes. Ouch.\n\nIt gets worse though. If you wrap the commands in subtransactions:\n\nDO $do$\n BEGIN\n FOR i IN 1..200000000 LOOP\n begin\n execute 'create function foo' || i || '() returns int language sql as $$select 1$$';\n if (i % 100000 = 0) then\n raise notice '% loops done', i;\n end if;\n exception when division_by_zero then null;\n end;\n END LOOP;\n END\n$do$;\n\nthe space consumption balloons to about 8kB per iteration, because the\nchunks are allocated in the per-subtransaction CurTransactionContext,\nwhich is given 8kB right off the bat. In common cases this'll be the\n*only* allocation in that context.\n\nWe can do a lot better, by exploiting what we know about the usage\npatterns of invalidation requests. New requests are always added to\nthe latest sublist, and the only management actions are (1) merge\nlatest sublist into next-to-latest sublist, or (2) drop latest\nsublist, if a subtransaction aborts. 
This means we could perfectly\nwell keep all the requests in a single, densely packed array in\nTopTransactionContext, and replace the \"list\" control structures\nwith indexes into that array. The attached patch does that.\n\nI don't see any particular speed differential with this (unsurprising,\nsince the other actions that an inval event logs and then triggers\nwill surely swamp inval.c's management overhead). But the space\nconsumption decreases gratifyingly.\n\nThere is one notable new assumption I had to make for this. At end\nof a subtransaction, we have to merge its inval events into the\n\"PriorCmd\" list of the parent subtransaction. (It has to be the\nPriorCmd list, not the CurrentCmd list, because these events have\nalready been processed locally; we don't want to do that again.)\nThis means the parent's CurrentCmd list has to be empty at that\ninstant, else we'd be trying to merge sublists that aren't adjacent\nin the array. As far as I can tell, this is always true: the patch's\ncheck for it doesn't trigger in a check-world run. And there's an\nargument that it must be true for semantic consistency (see comments\nin patch). So if that check ever fails, it probably means there is a\nmissing CommandCounterIncrement somewhere. Still, this could use more\nreview and testing.\n\nBTW, I noted with some amusement that this comment in\nxactGetCommittedInvalidationMessages:\n\n * ... Maintain the order that they\n * would be processed in by AtEOXact_Inval(), to ensure emulated behaviour\n * in redo is as similar as possible to original. We want the same bugs,\n * if any, not new ones.\n\nis making a claim that the existing code there actually does not\nsatisfy. In particular it fails to maintain the correct ordering of\ncatcache vs. relcache events. The patch fixes that, but I wonder\nwhether there is anything we need to do in the back branches. 
I'm\ninclined to think that it doesn't matter beyond the small efficiency\nrisk inherent in doing (some) relcache flushes before catcache\nflushes. The code already says that the order of events within any\none list isn't supposed to matter.\n\nAnyway, I'll add this to the next CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/88986113-6b01-452b-89d0-9492b6a79e33%40www.fastmail.com",
"msg_date": "Sun, 30 May 2021 13:22:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Reducing memory consumption for pending inval messages"
},
{
"msg_contents": "I wrote:\n> It turns out that the existing implementation in inval.c is quite\n> inefficient when a lot of individual commands each register just\n> a few invalidations --- but a few invalidations per command is\n> pretty typical.\n\nPer the cfbot, here's a rebase over 3788c6678 (actually just\nundoing its effects on inval.c, since that code is removed here).\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 13 Jul 2021 16:21:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing memory consumption for pending inval messages"
},
{
"msg_contents": "On 5/30/21, 10:22 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> We can do a lot better, by exploiting what we know about the usage\r\n> patterns of invalidation requests. New requests are always added to\r\n> the latest sublist, and the only management actions are (1) merge\r\n> latest sublist into next-to-latest sublist, or (2) drop latest\r\n> sublist, if a subtransaction aborts. This means we could perfectly\r\n> well keep all the requests in a single, densely packed array in\r\n> TopTransactionContext, and replace the \"list\" control structures\r\n> with indexes into that array. The attached patch does that.\r\n\r\nI spent some time looking through this patch, and it seems reasonable\r\nto me.\r\n\r\n> There is one notable new assumption I had to make for this. At end\r\n> of a subtransaction, we have to merge its inval events into the\r\n> \"PriorCmd\" list of the parent subtransaction. (It has to be the\r\n> PriorCmd list, not the CurrentCmd list, because these events have\r\n> already been processed locally; we don't want to do that again.)\r\n> This means the parent's CurrentCmd list has to be empty at that\r\n> instant, else we'd be trying to merge sublists that aren't adjacent\r\n> in the array. As far as I can tell, this is always true: the patch's\r\n> check for it doesn't trigger in a check-world run. And there's an\r\n> argument that it must be true for semantic consistency (see comments\r\n> in patch). So if that check ever fails, it probably means there is a\r\n> missing CommandCounterIncrement somewhere. Still, this could use more\r\n> review and testing.\r\n\r\nI didn't discover any problems with this assumption in my testing,\r\neither. Perhaps it'd be good to commit something like this sooner in\r\nthe v15 development cycle to maximize the amount of coverage it gets.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 16 Aug 2021 20:14:25 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Reducing memory consumption for pending inval messages"
},
{
"msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 5/30/21, 10:22 AM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> We can do a lot better, by exploiting what we know about the usage\n>> patterns of invalidation requests.\n\n> I spent some time looking through this patch, and it seems reasonable\n> to me.\n\nThanks for reviewing!\n\n>> There is one notable new assumption I had to make for this. At end\n>> of a subtransaction, we have to merge its inval events into the\n>> \"PriorCmd\" list of the parent subtransaction. (It has to be the\n>> PriorCmd list, not the CurrentCmd list, because these events have\n>> already been processed locally; we don't want to do that again.)\n>> This means the parent's CurrentCmd list has to be empty at that\n>> instant, else we'd be trying to merge sublists that aren't adjacent\n>> in the array. As far as I can tell, this is always true: the patch's\n>> check for it doesn't trigger in a check-world run. And there's an\n>> argument that it must be true for semantic consistency (see comments\n>> in patch). So if that check ever fails, it probably means there is a\n>> missing CommandCounterIncrement somewhere. Still, this could use more\n>> review and testing.\n\n> I didn't discover any problems with this assumption in my testing,\n> either. Perhaps it'd be good to commit something like this sooner in\n> the v15 development cycle to maximize the amount of coverage it gets.\n\nYeah, that's a good point. I'll go push this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Aug 2021 16:18:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Reducing memory consumption for pending inval messages"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile looking at the other thread related to postgres_fdw batching [1]\nand testing with very large batches, I noticed this disappointing\nbehavior when inserting 1M rows (just integers, nothing fancy):\n\nno batching: 64782 ms\n100 rows: 2118 ms\n32767 rows: 41115 ms\n\nPretty nice improvement when batching 100 rows, but then it all goes\nwrong for some reason.\n\nThe problem is pretty obvious from a perf profile:\n\n\n --100.00%--ExecModifyTable\n |\n --99.70%--ExecInsert\n |\n |--50.87%--MakeSingleTupleTableSlot\n | |\n | --50.85%--MakeTupleTableSlot\n | |\n | --50.70%--IncrTupleDescRefCount\n | |\n | --50.69%--ResourceOwnerRememberTupleDesc\n | |\n | --50.69%--ResourceArrayAdd\n |\n |--48.18%--ExecBatchInsert\n | |\n | --47.92%--ExecDropSingleTupleTableSlot\n | |\n | |--47.17%--DecrTupleDescRefCount\n | | |\n | | --47.15%--ResourceOwnerForgetTupleDesc\n | | |\n | | --47.14%--ResourceArrayRemove\n | |\n | --0.53%--ExecClearTuple\n |\n --0.60%--ExecCopySlot\n\n\nThere are two problems at play, here. Firstly, the way it's coded now\nthe slots are pretty much re-created for each batch. So with 1M rows and\nbatches of 32k rows, that's ~30x drop/create. That seems a bit wasteful,\nand it shouldn't be too difficult to keep the slots across batches. 
(We\ncan't initialize all the slots in advance, because we don't know how\nmany will be needed, but we don't have to release them between batches.)\n\nThe other problem is that ResourceArrayAdd/Remove seem to behave a bit\npoorly with very many elements - I'm not sure if it's O(N^2) or worse,\nbut growing the array and linear searches seem to be a bit expensive.\n\nI'll take a look at fixing the first point, but I'm not entirely sure\nhow much will that improve the situation.\n\n\nregards\n\n\n[1]\nhttps://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 30 May 2021 22:22:10 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-30 22:22:10 +0200, Tomas Vondra wrote:\n> There are two problems at play, here. Firstly, the way it's coded now\n> the slots are pretty much re-created for each batch. So with 1M rows and\n> batches of 32k rows, that's ~30x drop/create. That seems a bit wasteful,\n> and it shouldn't be too difficult to keep the slots across batches. (We\n> can't initialize all the slots in advance, because we don't know how\n> many will be needed, but we don't have to release them between batches.)\n\nYea, that sounds like an obvious improvement.\n\n\n> I'll take a look at fixing the first point, but I'm not entirely sure\n> how much will that improve the situation.\n\nHm - I'd not expect this to still show up in the profile afterwards,\nwhen you insert >> 32k rows. Still annoying when a smaller number is\ninserted, of course.\n\n\n> The other problem is that ResourceArrayAdd/Remove seem to behave a bit\n> poorly with very many elements - I'm not sure if it's O(N^2) or worse,\n> but growing the array and linear searches seem to be a bit expensive.\n\nHm. I assume this is using the hashed representation of a resowner array\nmost of the time, not the array one? I suspect the problem is that\npretty quickly the ResourceArrayRemove() degrades to a linear search,\nbecause all of the resowner entries are the same, so the hashing doesn't\nhelp us at all. The peril of a simplistic open-coded hash table :(\n\nI think in this specific situation the easiest workaround is to use a\ncopy of the tuple desc, instead of the one in the relcache - the copy\nwon't be refcounted.\n\nThe whole tupledesc refcount / resowner stuff is a mess. We don't really\nutilize it much, and pay a pretty steep price for maintaining it.\n\nThis'd be less of an issue if we didn't store one resowner item for each\nreference, but kept track of the refcount one tupdesc resowner item\nhas. 
But there's no space to store that right now, nor is it easy to\nmake space, due to the way comparisons work for resowner.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 May 2021 13:58:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-30 22:22:10 +0200, Tomas Vondra wrote:\n>> The other problem is that ResourceArrayAdd/Remove seem to behave a bit\n>> poorly with very many elements - I'm not sure if it's O(N^2) or worse,\n>> but growing the array and linear searches seem to be a bit expensive.\n\n> Hm. I assume this is using the hashed representation of a resowner array\n> most of the time, not the array one? I suspect the problem is that\n> pretty quickly the ResourceArrayRemove() degrades to a linear search,\n> because all of the resowner entries are the same, so the hashing doesn't\n> help us at all. The peril of a simplistic open-coded hash table :(\n\nNot only does ResourceArrayRemove degrade, but so does ResourceArrayAdd.\n\n> I think in this specific situation the easiest workaround is to use a\n> copy of the tuple desc, instead of the one in the relcache - the copy\n> won't be refcounted.\n\nProbably. There's no obvious reason why these transient slots need\na long-lived tupdesc. But it does seem like the hashing scheme somebody\nadded to resowners is a bit too simplistic. It ought to be able to\ncope with lots of refs to the same object, or at least not be extra-awful\nfor that case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 May 2021 17:10:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-30 17:10:59 -0400, Tom Lane wrote:\n> But it does seem like the hashing scheme somebody added to resowners\n> is a bit too simplistic. It ought to be able to cope with lots of\n> refs to the same object, or at least not be extra-awful for that case.\n\nIt's not really the hashing that's the problem, right? The array\nrepresentation would have nearly the same problem, I think?\n\nIt doesn't seem trivial to improve it without making resowner.c's\nrepresentation a good bit more complicated. Right now there's no space\nto store a 'per resowner & tupdesc refcount'. We can't even just make\nthe tuple desc reference a separate allocation (of (tupdesc, refcount)),\nbecause ResourceArrayRemove() relies on testing for equality with ==.\n\nI think we'd basically need an additional version of ResourceArray (type\n+ functions) which can store some additional data for each entry?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 30 May 2021 14:26:16 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-05-30 17:10:59 -0400, Tom Lane wrote:\n>> But it does seem like the hashing scheme somebody added to resowners\n>> is a bit too simplistic. It ought to be able to cope with lots of\n>> refs to the same object, or at least not be extra-awful for that case.\n\n> It's not really the hashing that's the problem, right? The array\n> representation would have nearly the same problem, I think?\n\nResourceArrayAdd would have zero problem. ResourceArrayRemove is\nO(1) as long as resources are removed in reverse order ... which\nis effectively true if they're all the same resource. So while\nI've not tested, I believe that this particular case would have\nno issue at all with the old resowner implementation, stupid\nthough that was.\n\n> It doesn't seem trivial to improve it without making resowner.c's\n> representation a good bit more complicated.\n\nDunno, I have not studied the new version at all.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 30 May 2021 19:17:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Hi,\n\nHere's two WIP patches that fixes the regression for me. The first part\nis from [1], so make large batches work, 0002 just creates a copy of the\ntupledesc to not cause issues in resource owner, 0003 ensures we only\ninitialize the slots once (not per batch).\n\nWith the patches applied, the timings look like this:\n\n batch timing\n ----------------------\n 1 64194.942 ms\n 10 7233.785 ms\n 100 2244.255 ms\n 32k 1372.175 ms\n\nwhich seems fine. I still need to get this properly tested etc. and make\nsure nothing is left over.\n\nregards\n\n\n[1]\nhttps://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 4 Jun 2021 13:48:13 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Argh! I forgot the attachments, of course.\n\nOn 6/4/21 1:48 PM, Tomas Vondra wrote:\n> Hi,\n> \n> Here's two WIP patches that fixes the regression for me. The first part\n> is from [1], so make large batches work, 0002 just creates a copy of the\n> tupledesc to not cause issues in resource owner, 0003 ensures we only\n> initialize the slots once (not per batch).\n> \n> With the patches applied, the timings look like this:\n> \n> batch timing\n> ----------------------\n> 1 64194.942 ms\n> 10 7233.785 ms\n> 100 2244.255 ms\n> 32k 1372.175 ms\n> \n> which seems fine. I still need to get this properly tested etc. and make\n> sure nothing is left over.\n> \n> regards\n> \n> \n> [1]\n> https://postgr.es/m/OS0PR01MB571603973C0AC2874AD6BF2594299%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 4 Jun 2021 13:52:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Hi,\n\nHere's a v2 fixing a silly bug with reusing the same variable in two \nnested loops (worked for simple postgres_fdw cases, but \"make check\" \nfailed).\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 9 Jun 2021 12:30:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Here's a v2 fixing a silly bug with reusing the same variable in two\n> nested loops (worked for simple postgres_fdw cases, but \"make check\"\n> failed).\n\nI applied these patches and ran make check in postgres_fdw contrib\nmodule, I saw a server crash. Is it the same failure you were saying\nabove?\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:20:49 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On 6/9/21 12:50 PM, Bharath Rupireddy wrote:\n> On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> Here's a v2 fixing a silly bug with reusing the same variable in two\n>> nested loops (worked for simple postgres_fdw cases, but \"make check\"\n>> failed).\n> \n> I applied these patches and ran make check in postgres_fdw contrib\n> module, I saw a server crash. Is it the same failure you were saying\n> above?\n> \n\nNope, that was causing infinite loop. This is jut a silly mistake on my \nside - I forgot to replace the i/j variable inside the loop. Here's v3.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 9 Jun 2021 13:08:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 4:38 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 6/9/21 12:50 PM, Bharath Rupireddy wrote:\n> > On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> Here's a v2 fixing a silly bug with reusing the same variable in two\n> >> nested loops (worked for simple postgres_fdw cases, but \"make check\"\n> >> failed).\n> >\n> > I applied these patches and ran make check in postgres_fdw contrib\n> > module, I saw a server crash. Is it the same failure you were saying\n> > above?\n> >\n>\n> Nope, that was causing infinite loop. This is jut a silly mistake on my\n> side - I forgot to replace the i/j variable inside the loop. Here's v3.\n\nThanks. The postgres_fdw regression test execution time is not\nincreased too much with the patches even with the test case added by\nthe below commit. With and without the patches attached in this\nthread, the execution times are 5 sec and 17 sec respectively. So,\nessentially these patches are reducing the execution time for the test\ncase added by the below commit.\n\ncommit cb92703384e2bb3fa0a690e5dbb95ad333c2b44c\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Tue Jun 8 20:22:18 2021 +0200\n\n Adjust batch size in postgres_fdw to not use too many parameters\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:50:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On 6/9/21 1:08 PM, Tomas Vondra wrote:\n> \n> \n> On 6/9/21 12:50 PM, Bharath Rupireddy wrote:\n>> On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> Here's a v2 fixing a silly bug with reusing the same variable in two\n>>> nested loops (worked for simple postgres_fdw cases, but \"make check\"\n>>> failed).\n>>\n>> I applied these patches and ran make check in postgres_fdw contrib\n>> module, I saw a server crash. Is it the same failure you were saying\n>> above?\n>>\n> \n> Nope, that was causing infinite loop. This is jut a silly mistake on my\n> side - I forgot to replace the i/j variable inside the loop. Here's v3.\n> \n> regards\n> \n\nFWIW I've pushed this, after improving the comments a little bit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 11 Jun 2021 23:01:56 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "Tomas Vondra писал 2021-06-12 00:01:\n> On 6/9/21 1:08 PM, Tomas Vondra wrote:\n>> \n>> \n>> On 6/9/21 12:50 PM, Bharath Rupireddy wrote:\n>>> On Wed, Jun 9, 2021 at 4:00 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>> \n>>>> Hi,\n>>>> \n>>>> Here's a v2 fixing a silly bug with reusing the same variable in two\n>>>> nested loops (worked for simple postgres_fdw cases, but \"make check\"\n>>>> failed).\n>>> \n>>> I applied these patches and ran make check in postgres_fdw contrib\n>>> module, I saw a server crash. Is it the same failure you were saying\n>>> above?\n>>> \n>> \n>> Nope, that was causing infinite loop. This is jut a silly mistake on \n>> my\n>> side - I forgot to replace the i/j variable inside the loop. Here's \n>> v3.\n>> \n>> regards\n>> \n> \n> FWIW I've pushed this, after improving the comments a little bit.\n> \n> \n> regards\n\nHi.\nIt seems this commit\n\ncommit b676ac443b6a83558d4701b2dd9491c0b37e17c4\nAuthor: Tomas Vondra <tomas.vondra@postgresql.org>\nDate: Fri Jun 11 20:19:48 2021 +0200\n\n Optimize creation of slots for FDW bulk inserts\n\nhas broken batch insert for partitions with unique indexes.\n\nEarlier the case worked as expected, inserting 1000 tuples. 
Now it exits \nwith\n\nERROR: duplicate key value violates unique constraint \"p0_pkey\"\nDETAIL: Key (x)=(1) already exists.\nCONTEXT: remote SQL command: INSERT INTO public.batch_table_p0(x, \nfield1, field2) VALUES ($1, $2, $3), ($4, $5, $6), ($7, $8, $9), ($10, \n$11, $12), ($13, $14, $15), ($16, $17, $18), ($19, $20, $21), ($22, $23, \n$24), ($25, $26, $27), ($28, $29, $30), ($31, $32, $33), ($34, $35, \n$36), ($37, $38, $39), ($40, $41, $42), ($43, $44, $45), ($46, $47, \n$48), ($49, $50, $51), ($52, $53, $54), ($55, $56, $57), ($58, $59, \n$60), ($61, $62, $63), ($64, $65, $66), ($67, $68, $69), ($70, $71, \n$72), ($73, $74, $75), ($76, $77, $78), ($79, $80, $81), ($82, $83, \n$84), ($85, $86, $87), ($88, $89, $90), ($91, $92, $93), ($94, $95, \n$96), ($97, $98, $99), ($100, $101, $102), ($103, $104, $105), ($106, \n$107, $108), ($109, $110, $111), ($112, $113, $114), ($115, $116, $117), \n($118, $119, $120), ($121, $122, $123), ($124, $125, $126), ($127, $128, \n$129), ($130, $131, $132), ($133, $134, $135), ($136, $137, $138), \n($139, $140, $141), ($142, $143, $144), ($145, $146, $147), ($148, $149, \n$150), ($151, $152, $153), ($154, $155, $156), ($157, $158, $159), \n($160, $161, $162), ($163, $164, $165), ($166, $167, $168), ($169, $170, \n$171), ($172, $173, $174), ($175, $176, $177), ($178, $179, $180), \n($181, $182, $183), ($184, $185, $186), ($187, $188, $189), ($190, $191, \n$192), ($193, $194, $195), ($196, $197, $198), ($199, $200, $201), \n($202, $203, $204), ($205, $206, $207), ($208, $209, $210), ($211, $212, \n$213), ($214, $215, $216), ($217, $218, $219), ($220, $221, $222), \n($223, $224, $225), ($226, $227, $228), ($229, $230, $231), ($232, $233, \n$234), ($235, $236, $237), ($238, $239, $240), ($241, $242, $243), \n($244, $245, $246), ($247, $248, $249), ($250, $251, $252), ($253, $254, \n$255), ($256, $257, $258), ($259, $260, $261), ($262, $263, $264), \n($265, $266, $267), ($268, $269, $270), ($271, $272, $273), ($274, $275, \n$276), 
($277, $278, $279), ($280, $281, $282), ($283, $284, $285), \n($286, $287, $288), ($289, $290, $291), ($292, $293, $294), ($295, $296, \n$297), ($298, $299, $300)\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 16 Jun 2021 15:36:13 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On 6/16/21 2:36 PM, Alexander Pyhalov wrote:\n> \n> Hi.\n> It seems this commit\n> \n> commit b676ac443b6a83558d4701b2dd9491c0b37e17c4\n> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Fri Jun 11 20:19:48 2021 +0200\n> \n> Optimize creation of slots for FDW bulk inserts\n> \n> has broken batch insert for partitions with unique indexes.\n> \n\nThanks for the report and reproducer!\n\nTurns out this is a mind-bogglingly silly bug I made in b676ac443b :-( \nThe data is copied into the slots only in the branch that initializes \nthem, so the subsequent batches just insert the same data over and over.\n\nThe attached patch fixes that, and adds a regression test (a bit smaller \nversion of your reproducer). I'll get this committed shortly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 16 Jun 2021 16:23:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
},
{
"msg_contents": "On 6/16/21 4:23 PM, Tomas Vondra wrote:\n> On 6/16/21 2:36 PM, Alexander Pyhalov wrote:\n>>\n>> Hi.\n>> It seems this commit\n>>\n>> commit b676ac443b6a83558d4701b2dd9491c0b37e17c4\n>> Author: Tomas Vondra <tomas.vondra@postgresql.org>\n>> Date: Fri Jun 11 20:19:48 2021 +0200\n>>\n>> Optimize creation of slots for FDW bulk inserts\n>>\n>> has broken batch insert for partitions with unique indexes.\n>>\n> \n> Thanks for the report and reproducer!\n> \n> Turns out this is a mind-bogglingly silly bug I made in b676ac443b :-( \n> The data is copied into the slots only in the branch that initializes \n> them, so the subsequent batches just insert the same data over and over.\n> \n> The attached patch fixes that, and adds a regression test (a bit smaller \n> version of your reproducer). I'll get this committed shortly.\n> \n\nPushed, after a bit more cleanup and testing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 16 Jun 2021 23:55:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw batching vs. (re)creating the tuple slots"
}
]
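As a side note to the thread above: the batching being benchmarked is controlled by postgres_fdw's batch_size option (settable at the server or foreign-table level since PostgreSQL 14). A minimal sketch of such a setup — the server, table, and column names here are illustrative, not taken from the thread:

```sql
-- Sketch of a postgres_fdw bulk-insert setup; all object names are hypothetical.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (dbname 'postgres', batch_size '100');  -- rows per remote INSERT
CREATE USER MAPPING FOR CURRENT_USER SERVER loopback;
CREATE FOREIGN TABLE batch_table (x int) SERVER loopback
    OPTIONS (table_name 'batch_table_remote');
-- With batch_size = 100, these 1000 rows travel in ~10 round trips
-- instead of 1000 single-row INSERTs.
INSERT INTO batch_table SELECT i FROM generate_series(1, 1000) AS i;
```

The thread's timings (64 s at batch size 1 down to about 1.4 s at 32k) come from varying exactly this option.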
[
{
"msg_contents": "Our team uses postgresql as the database, but we have some problem on grant and revoke.\n\nimagine the following sequence of operations:\n\ncreateuser test;\nCREATETABLE sales (trans_id int,datedate, amount int)\nPARTITIONBYRANGE(date);\nCREATETABLE sales_1 PARTITION OF sales\n FORVALUESFROM('2001-01-01')TO('2002-01-01')\n PARTITIONBYRANGE(amount);\nCREATETABLE sales_1 PARTITION OF sales\n FORVALUESFROM('2002-01-01')TO('2003-01-01')\n PARTITIONBYRANGE(amount);\n \nGRANTSELECTON sales TO test;\n\nset role test;\n\nSELECT*FROM sales;\n-- error, because test don't have select authority on sales_1\nSELECT*FROM sales_1;\n\nIn this example, the role test only has the select permission for sales and cannot access sales_1, which is very inconvenient.\n\nIn most scenarios, we want to assign permissions to a table and partition table to a user, but in postgresql, permissions are not recursive, so we need to spend extra energy to do this. So let's ask the postgresql team, why is the permission granted in a non-recursive way and what are the benefits?\n\nIf it is in a recursive way, when I grant select on parent table to user, the user also have permission on child table. It is very convenient.\n\nIn postgresql, we already have the Inheritance. If the table child inherits the table parent, every query command to the parent will recurse to the child. 
If the user does not want to recurse, you can use only keyword to do this, then why the partition is not consistent with the inheritite feature?\nOur team uses postgresql as the database, but we have some problem on grant and revoke.imagine the following sequence of operations:create user test;CREATE TABLE sales (trans_id int, date date, amount int) PARTITION BY RANGE (date);CREATE TABLE sales_1 PARTITION OF sales FOR VALUES FROM ('2001-01-01') TO ('2002-01-01') PARTITION BY RANGE (amount);CREATE TABLE sales_1 PARTITION OF sales FOR VALUES FROM ('2002-01-01') TO ('2003-01-01') PARTITION BY RANGE (amount); GRANT SELECT ON sales TO test;set role test;SELECT * FROM sales;-- error, because test don't have select authority on sales_1SELECT * FROM sales_1;In this example, the role test only has the select permission for sales and cannot access sales_1, which is very inconvenient.In most scenarios, we want to assign permissions to a table and partition table to a user, but in postgresql, permissions are not recursive, so we need to spend extra energy to do this. So let's ask the postgresql team, why is the permission granted in a non-recursive way and what are the benefits? If it is in a recursive way, when I grant select on parent table to user, the user also have permission on child table. It is very convenient.In postgresql, we already have the Inheritance. If the table child inherits the table parent, every query command to the parent will recurse to the child. If the user does not want to recurse, you can use only keyword to do this, then why the partition is not consistent with the inheritite feature?",
"msg_date": "Mon, 31 May 2021 15:19:15 +0800 (GMT+08:00)",
"msg_from": "mzj1996@mail.ustc.edu.cn",
"msg_from_op": true,
"msg_subject": "why is the permission granted in a non-recursive way and what are\n the benefits"
},
{
"msg_contents": "On Mon, May 31, 2021 at 12:19 AM <mzj1996@mail.ustc.edu.cn> wrote:\n\n> Our team uses postgresql as the database, but we have some problem on\n> grant and revoke.\n>\n> imagine the following sequence of operations:\n>\n> create user test;\n> CREATE TABLE sales (trans_id int, date date, amount int)\n> PARTITION BY RANGE (date);\n> CREATE TABLE sales_1 PARTITION OF sales\n> FOR VALUES FROM ('2001-01-01') TO ('2002-01-01')\n> PARTITION BY RANGE (amount);\n> CREATE TABLE sales_1 PARTITION OF sales\n> FOR VALUES FROM ('2002-01-01') TO ('2003-01-01')\n> PARTITION BY RANGE (amount);\n>\n> GRANT SELECT ON sales TO test;\n>\n> set role test;\n>\n> SELECT * FROM sales;\n> -- error, because test don't have select authority on sales_1\n> SELECT * FROM sales_1;\n>\n> In this example, the role test only has the select permission for sales\n> and cannot access sales_1, which is very inconvenient.\n>\n> In most scenarios, we want to assign permissions to a table and partition\n> table to a user, but in postgresql, permissions are not recursive, so we\n> need to spend extra energy to do this. *So let's ask the postgresql team,\n> why is the permission granted in a non-recursive way and what are the\n> benefits?*\n>\n> If it is in a recursive way, when I grant select on parent table to user,\n> the user also have permission on child table. It is very convenient.\n>\n> In postgresql, we already have the *Inheritance*. If the table child\n> inherits the table parent, every query command to the parent will recurse\n> to the child. 
If the user does not want to recurse, you can use *only*\n> keyword to do this, *then why the partition is not consistent with the\n> inheritite feature?*\n>\nHi,\nIn your example, the second 'CREATE TABLE sales_1' should be 'CREATE TABLE\nsales_2'.\n\nWhat is the expected behavior if sales_2 is created after the 'GRANT SELECT\nON sales TO test' statement ?\nShould permission on sales_2 be granted to test ?\n\nCheers",
"msg_date": "Mon, 31 May 2021 01:36:22 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: why is the permission granted in a non-recursive way and what are\n the benefits"
},
{
"msg_contents": "mzj1996@mail.ustc.edu.cn writes:\n> In most scenarios, we want to assign permissions to a table and partition table to a user, but in postgresql, permissions are not recursive, so we need to spend extra energy to do this. So let's ask the postgresql team, why is the permission granted in a non-recursive way and what are the benefits?\n\nIt's intentional, because you might not wish to allow users of the\npartitioned table to mess with the partitions directly. Since only\nthe table directly named in the query is permission-checked, it's\nnot necessary for users of the partitioned table to have such child\npermissions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 09:44:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: why is the permission granted in a non-recursive way and what are\n the benefits"
}
]
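Given Tom Lane's explanation above that GRANT on a partitioned table is intentionally non-recursive, a user who does want partition-level access can grant on every member of the partition tree explicitly. A sketch using pg_partition_tree (available since PostgreSQL 12); the sales/test names follow the example in the thread:

```sql
-- Sketch: grant SELECT on the parent and all of its partitions to role test.
DO $$
DECLARE
    part regclass;
BEGIN
    FOR part IN SELECT relid FROM pg_partition_tree('sales')
    LOOP
        EXECUTE format('GRANT SELECT ON %s TO test', part);
    END LOOP;
END;
$$;
```

Note that partitions created later still need their own GRANT, which is the follow-up question raised in the thread.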
[
{
"msg_contents": "Dear all\n\nAny idea how to disable the autovacuum during the regression and coverage\ntests for the MobilityDB extension ?\n\nI have tried\nalter system set autovacuum = off;\nbut it does not seem to work.\n\nAny suggestions are much appreciated.\n\nEsteban\n\nDear allAny idea how to disable the autovacuum during the regression and coverage tests for the MobilityDB extension ?I have tried alter system set autovacuum = off;but it does not seem to work.Any suggestions are much appreciated.Esteban",
"msg_date": "Mon, 31 May 2021 09:29:42 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "How to disable the autovacuum ?"
},
{
"msg_contents": "## Esteban Zimanyi (ezimanyi@ulb.ac.be):\n\n> I have tried\n> alter system set autovacuum = off;\n> but it does not seem to work.\n\nDid you reload the configuration (\"SELECT pg_reload_conf()\" etc) after\nthat? If not, that's your problem right there.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Mon, 31 May 2021 10:47:11 +0200",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: How to disable the autovacuum ?"
},
{
"msg_contents": "Dear Christoph\n\nMany thanks for your prompt reply !\n\nIs there a step-by-step procedure specified somewhere?\n\nFor example, before launching the tests there is a load.sql file that loads\nall the test tables. The file starts as follows\n\nSET statement_timeout = 0;\nSET lock_timeout = 0;\nSET idle_in_transaction_session_timeout = 0;\nSET client_encoding = 'UTF8';\nSET standard_conforming_strings = on;\nSELECT pg_catalog.set_config('search_path', '', false);\nSET check_function_bodies = false;\nSET client_min_messages = warning;\nSET row_security = off;\nSET default_tablespace = '';\nSET default_with_oids = false;\n\n--\n-- Name: tbl_tbool; Type: TABLE; Schema: public; Owner: -\n--\n\nDROP TABLE IF EXISTS public.tbl_tbool;\nCREATE TABLE public.tbl_tbool (\n k integer,\n temp tbool\n);\nALTER TABLE tbl_tbool SET (autovacuum_enabled = false);\n\n[... many more table definitions added after which the load of these tables\nstarts ...]\n\nCOPY public.tbl_tbool (k,temp) FROM stdin;\n1 f@2001-05-31 20:25:00+02\n2 f@2001-06-13 00:50:00+02\n[...]\n\\.\n\n[... load of the other tables ...]\n\nI wonder whether this is the best way to do it, or whether it is better to\ndisable the autovacuum at the beginning for all the tests\n\nThanks for your help !\n\nOn Mon, May 31, 2021 at 10:47 AM Christoph Moench-Tegeder <\ncmt@burggraben.net> wrote:\n\n> ## Esteban Zimanyi (ezimanyi@ulb.ac.be):\n>\n> > I have tried\n> > alter system set autovacuum = off;\n> > but it does not seem to work.\n>\n> Did you reload the configuration (\"SELECT pg_reload_conf()\" etc) after\n> that? If not, that's your problem right there.\n>\n> Regards,\n> Christoph\n>\n> --\n> Spare Space\n>\n\nDear ChristophMany thanks for your prompt reply !Is there a step-by-step procedure specified somewhere?For example, before launching the tests there is a load.sql file that loads all the test tables. 
The file starts as followsSET statement_timeout = 0;SET lock_timeout = 0;SET idle_in_transaction_session_timeout = 0;SET client_encoding = 'UTF8';SET standard_conforming_strings = on;SELECT pg_catalog.set_config('search_path', '', false);SET check_function_bodies = false;SET client_min_messages = warning;SET row_security = off;SET default_tablespace = '';SET default_with_oids = false;---- Name: tbl_tbool; Type: TABLE; Schema: public; Owner: ---DROP TABLE IF EXISTS public.tbl_tbool;CREATE TABLE public.tbl_tbool ( k integer, temp tbool);ALTER TABLE tbl_tbool SET (autovacuum_enabled = false); [... many more table definitions added after which the load of these tables starts ...]COPY public.tbl_tbool (k,temp) FROM stdin;1 f@2001-05-31 20:25:00+022 f@2001-06-13 00:50:00+02[...]\\.[... load of the other tables ...] I wonder whether this is the best way to do it, or whether it is better to disable the autovacuum at the beginning for all the testsThanks for your help !On Mon, May 31, 2021 at 10:47 AM Christoph Moench-Tegeder <cmt@burggraben.net> wrote:## Esteban Zimanyi (ezimanyi@ulb.ac.be):\n\n> I have tried\n> alter system set autovacuum = off;\n> but it does not seem to work.\n\nDid you reload the configuration (\"SELECT pg_reload_conf()\" etc) after\nthat? If not, that's your problem right there.\n\nRegards,\nChristoph\n\n-- \nSpare Space",
"msg_date": "Mon, 31 May 2021 11:32:19 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: How to disable the autovacuum ?"
},
{
"msg_contents": "Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> Any idea how to disable the autovacuum during the regression and coverage\n> tests for the MobilityDB extension ?\n\nTBH, this seems like a pretty bad idea. If your extension doesn't\nbehave stably with autovacuum it's not going to be much use in the\nreal world.\n\nIn the core tests, we sometimes disable autovac for individual\ntables using a per-table storage option, but that's a last resort.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 31 May 2021 09:49:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: How to disable the autovacuum ?"
},
{
"msg_contents": "Many thanks Tom for your feedback. I appreciate it.\n\nActually the tests work in parallel with autovacuum, I just wanted to\nminimize the test time since the autovacuum launches in the middle of the\nmany regression and robustness tests. But then I follow your advice.\n\nRegards\n\nEsteban\n\n\nOn Mon, May 31, 2021 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> > Any idea how to disable the autovacuum during the regression and coverage\n> > tests for the MobilityDB extension ?\n>\n> TBH, this seems like a pretty bad idea. If your extension doesn't\n> behave stably with autovacuum it's not going to be much use in the\n> real world.\n>\n> In the core tests, we sometimes disable autovac for individual\n> tables using a per-table storage option, but that's a last resort.\n>\n> regards, tom lane\n>\n\nMany thanks Tom for your feedback. I appreciate it.Actually the tests work in parallel with autovacuum, I just wanted to minimize the test time since the autovacuum launches in the middle of the many regression and robustness tests. But then I follow your advice.RegardsEstebanOn Mon, May 31, 2021 at 3:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Esteban Zimanyi <ezimanyi@ulb.ac.be> writes:\n> Any idea how to disable the autovacuum during the regression and coverage\n> tests for the MobilityDB extension ?\n\nTBH, this seems like a pretty bad idea. If your extension doesn't\nbehave stably with autovacuum it's not going to be much use in the\nreal world.\n\nIn the core tests, we sometimes disable autovac for individual\ntables using a per-table storage option, but that's a last resort.\n\n regards, tom lane",
"msg_date": "Mon, 31 May 2021 16:10:41 +0200",
"msg_from": "Esteban Zimanyi <ezimanyi@ulb.ac.be>",
"msg_from_op": true,
"msg_subject": "Re: How to disable the autovacuum ?"
},
{
"msg_contents": "## Esteban Zimanyi (ezimanyi@ulb.ac.be):\n\n> Is there a step-by-step procedure specified somewhere?\n\nThe first step is not to disable autovacuum... (why would you want to\ndo that?).\n\n> For example, before launching the tests there is a load.sql file that loads\n> all the test tables. The file starts as follows\n> \n> SET statement_timeout = 0;\n\nThat's all session parameters.\nThe general principle here is: We have parameters which can be set\ninside a session - \"SET ...\" - or even inside a transaction - \"SET LOCAL\"\n- and reset again (\"RESET ...\"). These parameters are also in the\nserver configuration - I like to think of those settings as the defaults\nfor new sessions (except when overridden on a user/database/function\nlevel, etc.).\nOther parameters have to be set on the server level - that is, added\nto the configuration file(s). (\"ALTER SYSTEM\" is just a way to add\nconfiguration directives via the postgresql.auto.conf file). Changes\nthe the configuration files become active after a server reload (or\nrestart, of course).\nSome of these settings can only be set on server start (not reload)\nor even have to match the data directory.\n\nIn any case, the documentation is very clear if a restart is\nrequired for changing a parameter, e.g. in\n https://www.postgresql.org/docs/13/runtime-config-autovacuum.html\n \"This parameter can only be set in the postgresql.conf file or on the\n server command line\"\n\nAnd some parameters can be set as storage parameters on a per-object\n(table, index, ...) base, but that's the next can of worms.\n\nThe takeaways of this are:\n1. 
ALTER SYSTEM only edits the configuration, reloads and restarts have\n to be handled by the operator as usual.\n Documentation: https://www.postgresql.org/docs/13/sql-altersystem.html\n \"Values set with ALTER SYSTEM will be effective after the next server\n configuration reload, or after the next server restart in the case of\n parameters that can only be changed at server start. A server\n configuration reload can be commanded by calling the SQL function\n pg_reload_conf(), running pg_ctl reload, or sending a SIGHUP signal\n to the main server process.\"\n2. Different parameters can have different contexts, which you should\n be aware of when changing them. Session and object (\"storage\")\n parameters take their defaults from the server configuration.\n Check parameter documentation and the pg_settings view (documented\n here: https://www.postgresql.org/docs/13/view-pg-settings.html )\n for parameter contexts.\n3. Don't disable autovacuum unless you really know what you're doing.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Mon, 31 May 2021 16:21:49 +0200",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: How to disable the autovacuum ?"
}
]
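The configuration mechanics discussed in this thread can be summarized in a short SQL sketch (the table name tbl_tbool follows the load.sql excerpt above):

```sql
-- Instance-wide: takes effect only after a configuration reload.
ALTER SYSTEM SET autovacuum = off;
SELECT pg_reload_conf();

-- Per-table storage parameter: the "last resort" Tom Lane mentions.
ALTER TABLE public.tbl_tbool SET (autovacuum_enabled = false);

-- Undo both afterwards.
ALTER SYSTEM RESET autovacuum;
SELECT pg_reload_conf();
ALTER TABLE public.tbl_tbool RESET (autovacuum_enabled);
```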
[
{
"msg_contents": "The comparison predicates IS [NOT] TRUE/FALSE/UNKNOWN were not\nrecognised by postgres_fdw, so they were not pushed down to the remote\nserver. The attached patch adds support for them.\n\nI am adding this to the commitfest 2021-07.",
"msg_date": "Mon, 31 May 2021 11:03:05 +0300",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "Hi Emre,\nThis looks like a good improvement.\n\nPlease add this patch to the commitfest so that it's not forgotten. It\nwill be considered as a new feature so will be considered for commit\nafter the next commitfest.\n\nMean time here are some comments.\n+/*\n+ * Deparse IS [NOT] TRUE/FALSE/UNKNOWN expression.\n+ */\n+static void\n+deparseBooleanTest(BooleanTest *node, deparse_expr_cxt *context)\n+{\n+ StringInfo buf = context->buf;\n+\n+ switch (node->booltesttype)\n+ {\n\n+ case IS_NOT_TRUE:\n+ appendStringInfoString(buf, \"(NOT \");\n+ deparseExpr(node->arg, context);\n+ appendStringInfoString(buf, \" OR \");\n+ deparseExpr(node->arg, context);\n+ appendStringInfoString(buf, \" IS NULL)\");\n+ break;\n\n+}\n\nI don't understand why we need to complicate the expressions when\nsending those to the foreign nodes. Why do we want to send (xyz IS\nFALSE) (NOT (xyz) OR (xyz IS NULL)) and not as just (xyz IS FALSE).\nThe latter is much more readable and less error-prone. That true for\nall the BooleanTest deparsing.\n\n+EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE (c1 = 100) IS\nTRUE; -- BooleanTest\n\nAlso test a boolean column?\n\nOn Mon, May 31, 2021 at 1:33 PM Emre Hasegeli <emre@hasegeli.com> wrote:\n>\n> The comparison predicates IS [NOT] TRUE/FALSE/UNKNOWN were not\n> recognised by postgres_fdw, so they were not pushed down to the remote\n> server. The attached patch adds support for them.\n>\n> I am adding this to the commitfest 2021-07.\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 31 May 2021 17:38:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "> Please add this patch to the commitfest so that it's not forgotten. It\n> will be considered as a new feature so will be considered for commit\n> after the next commitfest.\n\nI did [1]. You can add yourself as a reviewer.\n\n> I don't understand why we need to complicate the expressions when\n> sending those to the foreign nodes. Why do we want to send\n> (NOT xyz OR xyz IS NULL) and not as just (xyz IS FALSE).\n> The latter is much more readable and less error-prone. That true for\n> all the BooleanTest deparsing.\n\n= true/false conditions are normalised. I thought similar behaviour\nwould be expected here.\n\n> +EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE (c1 = 100) IS\n> TRUE; -- BooleanTest\n>\n> Also test a boolean column?\n\nThere isn't a boolean column on the test table currently.\n\n[1] https://commitfest.postgresql.org/33/3144/\n\n\n",
"msg_date": "Mon, 31 May 2021 19:51:57 +0300",
"msg_from": "Emre Hasegeli <emre@hasegeli.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "Le lundi 31 mai 2021, 18:51:57 CEST Emre Hasegeli a écrit :\n> > Please add this patch to the commitfest so that it's not forgotten. It\n> > will be considered as a new feature so will be considered for commit\n> > after the next commitfest.\n> \n> I did [1]. You can add yourself as a reviewer.\n> \n> > I don't understand why we need to complicate the expressions when\n> > sending those to the foreign nodes. Why do we want to send\n> > (NOT xyz OR xyz IS NULL) and not as just (xyz IS FALSE).\n> > The latter is much more readable and less error-prone. That true for\n> > all the BooleanTest deparsing.\n> \n> = true/false conditions are normalised. I thought similar behaviour\n> would be expected here.\n\nI agree with Ashutosh, since IS NOT TRUE / FALSE is already a way of \nnormalizing it I don't really see what this brings.\n\n> \n> > +EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE (c1 = 100) IS\n> > TRUE; -- BooleanTest\n> > \n> > Also test a boolean column?\n> \n> There isn't a boolean column on the test table currently.\n\nWe should probably add one then. \n\n\n\n-- \nRonan Dunklau\n\n\n\n\n",
"msg_date": "Thu, 22 Jul 2021 09:40:30 +0200",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nHello\n\nI tried to apply the patch to master branch and got a couple of errors, so I think the patch needs a rebase. \n\nI also agree with Ashutosh that the \"IS NOT TRUE\" case can be simplified to just \"IS FALSE\". it's simpler to understand.\n\nbased on this, I think we should restructure the switch-case statement in deparseBooleanTest because some of the cases in there evaluate to the same result but handles differently. \n\nFor example, \"IS TRUE\" and \"IS NOT FALSE\" both evaluate to true, so can be handled in the same way\n\nsomething like:\nswitch (node->booltesttype)\n{\n\tcase IS_TRUE:\n\tcase IS_NOT_FALSE:\n\t\tappendStringInfoChar(buf, '(');\n\t\tdeparseExpr(node->arg, context);\n\t\tappendStringInfoString(buf, \")\");\n\t\tbreak;\n\tcase IS_FALSE:\n\tcase IS_NOT_TRUE:\n\t\tappendStringInfoChar(buf, '(');\n\t\tdeparseExpr(node->arg, context);\n\t\tappendStringInfoString(buf, \" IS FALSE)\");\n\t\tbreak;\n\tcase IS_UNKNOWN:\n\t\tappendStringInfoChar(buf, '(');\n\t\tdeparseExpr(node->arg, context);\n\t\tappendStringInfoString(buf, \" IS NULL)\");\n\t\tbreak;\n\tcase IS_NOT_UNKNOWN:\n\t\tappendStringInfoChar(buf, '(');\n\t\tdeparseExpr(node->arg, context);\n\t\tappendStringInfoString(buf, \" IS NOT NULL)\");\n\t\tbreak;\n}\n\njust a thought\nthanks!\n\n-------------------------------\nCary Huang\nHighGo Software Canada\nwww.highgo.ca",
"msg_date": "Fri, 20 Aug 2021 19:33:12 +0000",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "Cary Huang <cary.huang@highgo.ca> writes:\n> I also agree with Ashutosh that the \"IS NOT TRUE\" case can be simplified to just \"IS FALSE\". it's simpler to understand.\n\nUh ... surely that's just wrong?\n\nregression=# select null is not true;\n ?column? \n----------\n t\n(1 row)\n\nregression=# select null is false; \n ?column? \n----------\n f\n(1 row)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Aug 2021 16:06:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "> On 31 May 2021, at 18:51, Emre Hasegeli <emre@hasegeli.com> wrote:\n> \n>> Please add this patch to the commitfest so that it's not forgotten. It\n>> will be considered as a new feature so will be considered for commit\n>> after the next commitfest.\n> \n> I did [1].\n\nThe patch no longer applies to HEAD, can you please submit a rebased version?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 1 Sep 2021 13:15:27 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
},
{
"msg_contents": "> On 1 Sep 2021, at 13:15, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 31 May 2021, at 18:51, Emre Hasegeli <emre@hasegeli.com> wrote:\n>> \n>>> Please add this patch to the commitfest so that it's not forgotten. It\n>>> will be considered as a new feature so will be considered for commit\n>>> after the next commitfest.\n>> \n>> I did [1].\n> \n> The patch no longer applies to HEAD, can you please submit a rebased version?\n\nSince the commitfest is now ending, I'm marking this Returned with Feedback.\nPlease resubmit a rebased version for the next CF if you are still interested\nin pursuing this patch.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 1 Oct 2021 09:10:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw: Handle boolean comparison predicates"
}
] |
[
{
"msg_contents": "Hi.\n\nThere's issue with join pushdown after\n\ncommit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Wed Mar 31 11:52:34 2021 -0400\n\n Rework planning and execution of UPDATE and DELETE\n\nTo make sure that join pushdown path selected, one can patch\ncontrib/postgres_fdw/postgres_fdw.c in the following way:\n\ndiff --git a/contrib/postgres_fdw/postgres_fdw.c \nb/contrib/postgres_fdw/postgres_fdw.c\nindex c48a421e88b..c2bf6833050 100644\n--- a/contrib/postgres_fdw/postgres_fdw.c\n+++ b/contrib/postgres_fdw/postgres_fdw.c\n@@ -5959,6 +5959,8 @@ postgresGetForeignJoinPaths(PlannerInfo *root,\n /* Estimate costs for bare join relation */\n estimate_path_cost_size(root, joinrel, NIL, NIL, NULL,\n &rows, &width, \n&startup_cost, &total_cost);\n+\n+ startup_cost = total_cost = 0;\n /* Now update this information in the joinrel */\n joinrel->rows = rows;\n joinrel->reltarget->width = width;\n\nNow, this simple test shows the issue:\n\ncreate extension postgres_fdw;\n\nDO $d$\n BEGIN\n EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER \npostgres_fdw\n OPTIONS (dbname '$$||current_database()||$$',\n port '$$||current_setting('port')||$$')$$;\n END;\n$d$;\n\nCREATE USER MAPPING FOR CURRENT_USER SERVER loopback;\n\nCREATE TABLE base_tbl (a int, b int);\nCREATE FOREIGN TABLE remote_tbl (a int, b int)\n SERVER loopback OPTIONS (table_name 'base_tbl');\n\ninsert into remote_tbl select generate_series(1,100), \ngenerate_series(1,100);\n\nexplain verbose update remote_tbl d set a= case when current_timestamp> \n'2012-02-02'::timestamp then 5 else 6 end FROM remote_tbl AS t (a, b) \nWHERE d.a = (t.a);\n \n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on public.remote_tbl d (cost=0.00..42.35 rows=0 width=0)\n Remote SQL: UPDATE public.base_tbl SET a = $2 WHERE ctid = $1\n -> Foreign Scan (cost=0.00..42.35 rows=8470 width=74)\n Output: CASE WHEN (CURRENT_TIMESTAMP > '2012-02-02 \n00:00:00'::timestamp without time zone) THEN 5 ELSE 6 END, d.ctid, d.*, \nt.*\n Relations: (public.remote_tbl d) INNER JOIN (public.remote_tbl \nt)\n Remote SQL: SELECT r1.ctid, CASE WHEN (r1.*)::text IS NOT NULL \nTHEN ROW(r1.a, r1.b) END, CASE WHEN (r2.*)::text IS NOT NULL THEN \nROW(r2.a, r2.b) END FROM (public.base_tbl r1 INNER JOIN public.base_tbl \nr2 ON (((r1.a = r2.a)))) FOR UPDATE OF r1\n -> Merge Join (cost=433.03..566.29 rows=8470 width=70)\n Output: d.ctid, d.*, t.*\n Merge Cond: (d.a = t.a)\n -> Sort (cost=211.00..214.10 rows=1241 width=42)\n Output: d.ctid, d.*, d.a\n Sort Key: d.a\n -> Foreign Scan on public.remote_tbl d \n(cost=100.00..147.23 rows=1241 width=42)\n Output: d.ctid, d.*, d.a\n Remote SQL: SELECT a, b, ctid FROM \npublic.base_tbl FOR UPDATE\n -> Sort (cost=222.03..225.44 rows=1365 width=36)\n Output: t.*, t.a\n Sort Key: t.a\n -> Foreign Scan on public.remote_tbl t \n(cost=100.00..150.95 rows=1365 width=36)\n Output: t.*, t.a\n Remote SQL: SELECT a, b FROM public.base_tbl\nupdate remote_tbl d set a= case when current_timestamp> \n'2012-02-02'::timestamp then 5 else 6 end FROM remote_tbl AS t (a, b) \nWHERE d.a = (t.a);\n\nYou'll get\nERROR: input of anonymous composite types is not implemented\nCONTEXT: whole-row reference to foreign table \"remote_tbl\"\n\nmake_tuple_from_result_row() (called by fetch_more_data()), will try to \ncall InputFunctionCall() for ROW(r1.a, r1.b) and will get error in \nrecord_in().\n\nHere ROW(r2.a, r2.b) would have attribute type id, corresponding to \nremote_tbl, but ROW(r1.a, r1.b) would have atttypid 2249 (RECORD).\n\nBefore 86dc90056dfdbd9d1b891718d2e5614e3e432f35 the plan would be \ndifferent and looked like\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on public.remote_tbl d (cost=0.00..73.54 rows=14708 width=46)\n Remote SQL: UPDATE public.base_tbl SET a = $2 WHERE ctid = $1\n -> Foreign Scan (cost=0.00..73.54 rows=14708 width=46)\n Output: CASE WHEN (CURRENT_TIMESTAMP > '2012-02-02 \n00:00:00'::timestamp without time zone) THEN d.a ELSE 6 END, d.b, \nd.ctid, t.*\n Relations: (public.remote_tbl d) INNER JOIN (public.remote_tbl \nt)\n Remote SQL: SELECT r1.a, r1.b, r1.ctid, CASE WHEN (r2.*)::text \nIS NOT NULL THEN ROW(r2.a, r2.b) END FROM (public.base_tbl r1 INNER JOIN \npublic.base_tbl r2 ON (((r1.a = r2.a)))) FOR UPDATE OF r1\n -> Merge Join (cost=516.00..747.39 rows=14708 width=46)\n Output: d.a, d.b, d.ctid, t.*\n Merge Cond: (d.a = t.a)\n -> Sort (cost=293.97..299.35 rows=2155 width=14)\n Output: d.a, d.b, d.ctid\n Sort Key: d.a\n -> Foreign Scan on public.remote_tbl d \n(cost=100.00..174.65 rows=2155 width=14)\n Output: d.a, d.b, d.ctid\n Remote SQL: SELECT a, b, ctid FROM \npublic.base_tbl FOR UPDATE\n -> Sort (cost=222.03..225.44 rows=1365 width=36)\n Output: t.*, t.a\n Sort Key: t.a\n -> Foreign Scan on public.remote_tbl t \n(cost=100.00..150.95 rows=1365 width=36)\n Output: t.*, t.a\n Remote SQL: SELECT a, b FROM public.base_tbl\n\nHere ROW(r2.a, r2.b) would have attribute type id, corresponding to \nremote_tbl.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Mon, 31 May 2021 15:39:41 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "join pushdown and issue with foreign update"
},
{
"msg_contents": "Alexander Pyhalov wrote 2021-05-31 15:39:\n> Hi.\n> \n> There's issue with join pushdown after\n> \n> commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Wed Mar 31 11:52:34 2021 -0400\n> \n...\n> You'll get\n> ERROR: input of anonymous composite types is not implemented\n> CONTEXT: whole-row reference to foreign table \"remote_tbl\"\n> \n> make_tuple_from_result_row() (called by fetch_more_data()), will try\n> to call InputFunctionCall() for ROW(r1.a, r1.b) and will get error in\n> record_in().\n> \n> Here ROW(r2.a, r2.b) would have attribute type id, corresponding to\n> remote_tbl, but ROW(r1.a, r1.b) would have atttypid 2249 (RECORD).\n> \n\nThe issue seems to be that add_row_identity_columns() adds RECORD var to \nthe query.\nAdding var with table's relation type fixes this issue, but breaks \nupdate of\npartitioned tables, as we add \"wholerow\" with type of one child relation \nand then\ntry to use it with another child (of different table type).\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 31 May 2021 19:04:17 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jun 1, 2021 at 1:04 AM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n>\n> Alexander Pyhalov писал 2021-05-31 15:39:\n> > Hi.\n> >\n> > There's issue with join pushdown after\n> >\n> > commit 86dc90056dfdbd9d1b891718d2e5614e3e432f35\n> > Author: Tom Lane <tgl@sss.pgh.pa.us>\n> > Date: Wed Mar 31 11:52:34 2021 -0400\n> >\n> ...\n> > You'll get\n> > ERROR: input of anonymous composite types is not implemented\n> > CONTEXT: whole-row reference to foreign table \"remote_tbl\"\n\nInteresting, thanks for reporting this. This sounds like a regression\non 86dc90056's part.\n\n> > make_tuple_from_result_row() (called by fetch_more_data()), will try\n> > to call InputFunctionCall() for ROW(r1.a, r1.b) and will get error in\n> > record_in().\n> >\n> > Here ROW(r2.a, r2.b) would have attribute type id, corresponding to\n> > remote_tbl, but ROW(r1.a, r1.b) would have atttypid 2249 (RECORD).\n> >\n>\n> The issue seems to be that add_row_identity_columns() adds RECORD var to\n> the query.\n> Adding var with table's relation type fixes this issue, but breaks\n> update of\n> partitioned tables, as we add \"wholerow\" with type of one child relation\n> and then\n> try to use it with another child (of different table type).\n\nPerhaps, we can get away with adding the wholerow Var with the target\nrelation's reltype when the target foreign table is not a \"child\"\nrelation, but the root target relation itself. Maybe like the\nattached?\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 1 Jun 2021 21:47:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Amit Langote wrote 2021-06-01 15:47:\n\n> Perhaps, we can get away with adding the wholerow Var with the target\n> relation's reltype when the target foreign table is not a \"child\"\n> relation, but the root target relation itself. Maybe like the\n> attached?\n> \n\nHi.\n\nI think the patch fixes this issue, but it still preserves chances to \nget RECORD in fetch_more_data()\n(at least with combination with asymmetric partition-wise join).\n\nWhat about the following patch?\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 01 Jun 2021 19:00:55 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n> What about the following patch?\n\nISTM that using a specific rowtype rather than RECORD would be\nquite disastrous from the standpoint of bloating the number of\ndistinct resjunk columns we need for a partition tree with a\nlot of children. Maybe we'll have to go that way, but it seems\nlike an absolute last resort.\n\nI think a preferable fix involves making sure that the correct\nrecord-type typmod is propagated to record_in in this context.\nAlternatively, maybe we could insert the foreign table's rowtype\nduring execution of the input operation, without touching the\nplan as such.\n\nCould we start by creating a test case that doesn't involve\nuncommittable hacks to the source code?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 14:19:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Tom Lane wrote 2021-06-01 21:19:\n> Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n>> What about the following patch?\n> \n> ISTM that using a specific rowtype rather than RECORD would be\n> quite disastrous from the standpoint of bloating the number of\n> distinct resjunk columns we need for a partition tree with a\n> lot of children. Maybe we'll have to go that way, but it seems\n> like an absolute last resort.\n\nWhy do you think they are distinct?\nIn suggested patch all of them will have type of the common ancestor \n(root of the partition tree).\n\n> \n> I think a preferable fix involves making sure that the correct\n> record-type typmod is propagated to record_in in this context.\n> Alternatively, maybe we could insert the foreign table's rowtype\n> during execution of the input operation, without touching the\n> plan as such.\n> \n> Could we start by creating a test case that doesn't involve\n> uncommittable hacks to the source code?\n\nYes, it seems the following works fine to reproduce the issue.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 01 Jun 2021 21:47:58 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Alexander Pyhalov <a.pyhalov@postgrespro.ru> writes:\n> Tom Lane писал 2021-06-01 21:19:\n>> ISTM that using a specific rowtype rather than RECORD would be\n>> quite disastrous from the standpoint of bloating the number of\n>> distinct resjunk columns we need for a partition tree with a\n>> lot of children. Maybe we'll have to go that way, but it seems\n>> like an absolute last resort.\n\n> Why do you think they are distinct?\n> In suggested patch all of them will have type of the common ancestor \n> (root of the partition tree).\n\nSeems moderately unlikely that that will work in cases where the\npartition children have rowtypes different from the ancestor\n(different column order etc). It'll also cause the problem we\noriginally sought to avoid for selects across traditional inheritance\ntrees, where there isn't a common partition ancestor.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 16:01:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "I wrote:\n> I think a preferable fix involves making sure that the correct\n> record-type typmod is propagated to record_in in this context.\n> Alternatively, maybe we could insert the foreign table's rowtype\n> during execution of the input operation, without touching the\n> plan as such.\n\nHere's a draft-quality patch based on that idea. It resolves\nthe offered test case, but I haven't beat on it beyond that.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 01 Jun 2021 17:32:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "On 2/6/21 02:32, Tom Lane wrote:\n> I wrote:\n>> I think a preferable fix involves making sure that the correct\n>> record-type typmod is propagated to record_in in this context.\n>> Alternatively, maybe we could insert the foreign table's rowtype\n>> during execution of the input operation, without touching the\n>> plan as such.\n> \n> Here's a draft-quality patch based on that idea. It resolves\n> the offered test case, but I haven't beat on it beyond that.\n> \n> \t\t\tregards, tom lane\n> \nI played with your patch and couldn't find any errors. But what if ROW \noperation were allowed to be pushed to a foreign server?\nPotentially, I can imagine pushed-down JOIN with arbitrary ROW function \nin its target list.\nAmit's approach looks more safe for me.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 2 Jun 2021 12:39:37 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 6:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > I think a preferable fix involves making sure that the correct\n> > record-type typmod is propagated to record_in in this context.\n> > Alternatively, maybe we could insert the foreign table's rowtype\n> > during execution of the input operation, without touching the\n> > plan as such.\n>\n> Here's a draft-quality patch based on that idea.\n\nThis looks good to me. Yeah, I agree that reversing our decision to\nmark row-id wholerow Vars in as RECORD rather than a specific reltype\nwill have to wait until we hear more complaints than just this one,\nwhich seems fixable with a patch like this.\n\n> It resolves\n> the offered test case, but I haven't beat on it beyond that.\n\nGiven that we don't (no longer) support pushing down the join of child\ntarget relations with other relations, I don't think we have other\ncases that are affected at this point. I have a feeling that your\npatch will have fixed things enough that the same problem will not\noccur when we have join pushdown under UPDATE occurring in more cases.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Jun 2021 16:43:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Tom Lane wrote 2021-06-02 00:32:\n> I wrote:\n>> I think a preferable fix involves making sure that the correct\n>> record-type typmod is propagated to record_in in this context.\n>> Alternatively, maybe we could insert the foreign table's rowtype\n>> during execution of the input operation, without touching the\n>> plan as such.\n> \n> Here's a draft-quality patch based on that idea. It resolves\n> the offered test case, but I haven't beat on it beyond that.\n> \n> \t\t\tregards, tom lane\n\nHi.\nThe patch seems to work fine for mentioned case.\nFor now I'm working on function pushdown. When record-returning function \n(like unnest())\nis pushed down, on this stage we've already lost any type information, \nso get the issue again.\nSo far I'm not sure how to fix the issue, perhaps just avoid pushing \nforeign join if we have\nrecord, corresponding to function RTE var in joinrel->reltarget?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 02 Jun 2021 11:35:09 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 4:39 PM Andrey Lepikhov\n<a.lepikhov@postgrespro.ru> wrote:\n> On 2/6/21 02:32, Tom Lane wrote:\n> > I wrote:\n> >> I think a preferable fix involves making sure that the correct\n> >> record-type typmod is propagated to record_in in this context.\n> >> Alternatively, maybe we could insert the foreign table's rowtype\n> >> during execution of the input operation, without touching the\n> >> plan as such.\n> >\n> > Here's a draft-quality patch based on that idea. It resolves\n> > the offered test case, but I haven't beat on it beyond that.\n> >\n> I played with your patch and couldn't find any errors. But what if ROW\n> operation were allowed to be pushed to a foreign server?\n>\n> Potentially, I can imagine pushed-down JOIN with arbitrary ROW function\n> in its target list.\n\nAre you saying that a pushed down ROW() expression may not correspond\nwith the Var chosen by the following code?\n\n+ /*\n+ * If we can't identify the referenced table, do nothing. This'll\n+ * likely lead to failure later, but perhaps we can muddle through.\n+ */\n+ var = (Var *) list_nth_node(TargetEntry, fsplan->fdw_scan_tlist,\n+ i)->expr;\n+ if (!IsA(var, Var))\n+ continue;\n+ rte = list_nth(estate->es_range_table, var->varno - 1);\n+ if (rte->rtekind != RTE_RELATION)\n+ continue;\n+ reltype = get_rel_type_id(rte->relid);\n+ if (!OidIsValid(reltype))\n+ continue;\n+ att->atttypid = reltype;\n\nThat may be a valid concern. I wonder if it would make sense to also\ncheck varattno == 0 here somewhere for good measure.\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 2 Jun 2021 17:40:32 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Jun 2, 2021 at 4:39 PM Andrey Lepikhov\n> <a.lepikhov@postgrespro.ru> wrote:\n>> I played with your patch and couldn't find any errors. But what if ROW\n>> operation were allowed to be pushed to a foreign server?\n>> Potentially, I can imagine pushed-down JOIN with arbitrary ROW function\n>> in its target list.\n\nI thought about this for awhile and I don't think it's a real concern.\nThere's nothing stopping us from pushing an expression of the form\n\"func(row(...))\" or \"row(...) op row(...)\", because we're not asking\nto retrieve the value of the ROW() expression. Whether the remote\nserver can handle that is strictly its concern. (Probably, it's\ngoing to do something involving a locally-assigned typmod to keep\ntrack of the rowtype, but it's not our problem.) Where things get\nsticky is if we try to *retrieve the value* of a ROW() expression.\nAnd except in this specific context, I don't see why we'd do that.\nThere's no advantage compared to retrieving the component Vars\nor expressions.\n\n> ... I wonder if it would make sense to also\n> check varattno == 0 here somewhere for good measure.\n\nYeah, I considered doing that but left it off in this version.\nIt's not clear to me how there could be a table column of type RECORD,\nso it seemed unnecessary. On the other hand, it's also cheap\ninsurance, so I'll put it back.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Jun 2021 19:23:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: join pushdown and issue with foreign update"
}
] |
[
{
"msg_contents": "I've been thinking about rationalizing some of the buildfarm code, which\nhas grown somewhat like Topsy over the years. One useful thing would be\nto run all the \"make\" and \"install\" pieces together. When the buildfarm\nstarted we didn't have world targets, but they are now almost ancient\nhistory themselves, so it would be nice to leverage them.\n\nHowever, not all buildfarm animals are set up to build the docs, and not\nall owners necessarily want to. Moreover, we have provision for testing\nvarious docs formats (PDF, epub etc). So I'd like to be able to build\nand install all the world EXCEPT the docs. Rather than specify yet more\ntargets in the Makefile, it seemed to me a better way would be to\nprovide a SKIPDOCS option that could be set on the command line like this:\n\n make SKIPDOCS=1 world\n make SKIPDOCS=1 install-world\n\nThe attached very small patch is intended to provide for that.\nIncidentally, this is exactly what the MSVC build system's 'build.bat'\nand 'install.bat' do.\n\nI should add that quite apart from the buildfarm considerations this is\nsomething I've long wanted, and I suspect other developers would find it\nuseful too.\n\nObviously to be useful to the buildfarm it would need to be backpatched.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 31 May 2021 10:16:11 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "make world and install-world without docs"
},
{
"msg_contents": "On Mon, May 31, 2021, at 16:16, Andrew Dunstan wrote:\n> However, not all buildfarm animals are set up to build the docs, and not\n> all owners necessarily want to. Moreover, we have provision for testing\n> various docs formats (PDF, epub etc). So I'd like to be able to build\n> and install all the world EXCEPT the docs.\n\nWhy would someone not always want to test building the docs?\nWhat makes the docs special?\n\n/Joel",
"msg_date": "Mon, 31 May 2021 21:32:34 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "\"Joel Jacobson\" <joel@compiler.org> writes:\n> On Mon, May 31, 2021, at 16:16, Andrew Dunstan wrote:\n>> However, not all buildfarm animals are set up to build the docs, and not\n>> all owners necessarily want to. Moreover, we have provision for testing\n>> various docs formats (PDF, epub etc). So I'd like to be able to build\n>> and install all the world EXCEPT the docs.\n\n> Why would someone not always want to test building the docs?\n> What makes the docs special?\n\nToolchain requirements, cf [1]. Per Andrew's comment, requiring all\nthat stuff to be installed would move the goalposts quite a ways for\nbuildfarm owners, and not all of the older systems we have in the farm\nwould be able to do it easily. (If you don't have access to prebuilt\npackages, you're looking at a lot of work to get that stuff\ninstalled.)\n\nIt was a good deal worse when we used the TeX-based toolchain\nto make PDFs, but it's still not something I want to foist on\nbuildfarm owners. Especially since there's no real reason\nto think that there are platform dependencies that would make\nit valuable to run such builds on a spectrum of machines.\nWe do have a couple of machines that have opted-in to building\nthe docs, and that seems sufficient. I feel no urge to make\nit be opt-out instead.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/docguide-toolsets.html\n\n\n",
"msg_date": "Mon, 31 May 2021 16:07:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 2021-May-31, Andrew Dunstan wrote:\n\n> However, not all buildfarm animals are set up to build the docs, and not\n> all owners necessarily want to. Moreover, we have provision for testing\n> various docs formats (PDF, epub etc). So I'd like to be able to build\n> and install all the world EXCEPT the docs. Rather than specify yet more\n> targets in the Makefile, it seemed to me a better way would be to\n> provide a SKIPDOCS option that could be set on the command line like this:\n> \n> make SKIPDOCS=1 world\n> make SKIPDOCS=1 install-world\n\nI could use this feature. +1\n\n> +ifndef SKIPDOCS\n> $(call recurse,world,doc src config contrib,all)\n> world:\n> \t+@echo \"PostgreSQL, contrib, and documentation successfully made. Ready to install.\"\n> +else\n> +$(call recurse,world,src config contrib,all)\n> +world:\n> +\t+@echo \"PostgreSQL and contrib successfully made. Ready to install.\"\n> +endif\n\nI was going to suggest that instead of repeating the $(call) line you\ncould do something like\n\n$(call recurse,world,src config contrib,all)\nifndef SKIPDOCS\n$(call recurse,world,doc,all)\nendif\n\n... however, this makes the echoed string be wrong, and the whole thing\nlooks uglier if you use a second \"ifndef\" to generate the string, so I\nthink your proposal is okay.\n\nI do wonder if these echoed strings are really all that necessary.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n",
"msg_date": "Mon, 31 May 2021 20:06:46 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 31.05.21 16:16, Andrew Dunstan wrote:\n> make SKIPDOCS=1 world\n> make SKIPDOCS=1 install-world\n\nMaybe this should be configure option? That's generally where you set \nwhat you want to build or not build. (That might also make the \nbuildfarm integration easier, since there are already facilities to \nspecify and report configure options.)\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 00:20:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 31.05.21 16:16, Andrew Dunstan wrote:\n>> make SKIPDOCS=1 world\n>> make SKIPDOCS=1 install-world\n\n> Maybe this should be configure option? That's generally where you set \n> what you want to build or not build. (That might also make the \n> buildfarm integration easier, since there are already facilities to \n> specify and report configure options.)\n\nHmm, I think I prefer Andrew's way. The fact that I don't want\nto build the docs right now doesn't mean I won't want to do so\nlater --- in fact, that sequence is pretty exactly what I do\nwhenever I'm working on a patch. It'd be annoying to have\nto re-configure to make that work.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 01 Jun 2021 18:23:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "\nOn 6/1/21 6:23 PM, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 31.05.21 16:16, Andrew Dunstan wrote:\n>>> make SKIPDOCS=1 world\n>>> make SKIPDOCS=1 install-world\n>> Maybe this should be configure option? That's generally where you set \n>> what you want to build or not build. (That might also make the \n>> buildfarm integration easier, since there are already facilities to \n>> specify and report configure options.)\n> Hmm, I think I prefer Andrew's way. The fact that I don't want\n> to build the docs right now doesn't mean I won't want to do so\n> later --- in fact, that sequence is pretty exactly what I do\n> whenever I'm working on a patch. It'd be annoying to have\n> to re-configure to make that work.\n>\n> \t\t\t\n\n\n\nYes, agreed. If you don't like the SKIPDOCS=1 mechanism, let's just\ninvent a couple of new targets instead, say `world-bin` and\n`install-world-bin`.\n\n\nI'm inclined to agree with Alvaro that the messages are at best an\noddity. Standard Unix practice is to be silent on success.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:47:41 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I'm inclined to agree with Alvaro that the messages are at best an\n> oddity. Standard Unix practice is to be silent on success.\n\nWe've been steadily moving towards less chatter during builds.\nI'd be good with dropping these messages in HEAD, but doing so\nin the back branches might be inadvisable.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 16:21:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 6/2/21 4:21 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I'm inclined to agree with Alvaro that the messages are at best an\n>> oddity. Standard Unix practice is to be silent on success.\n> We've been steadily moving towards less chatter during builds.\n> I'd be good with dropping these messages in HEAD, but doing so\n> in the back branches might be inadvisable.\n>\n> \t\t\n\n\n\nOK, I think on reflection new targets will be cleaner. What I suggest is\nthe attached, applied to all branches, followed by removal of the four\nnoise messages in just HEAD.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 1 Jul 2021 10:47:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, I think on reflection new targets will be cleaner. What I suggest is\n> the attached, applied to all branches, followed by removal of the four\n> noise messages in just HEAD.\n\nShouldn't these new targets be documented somewhere?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Jul 2021 10:50:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 7/1/21 10:50 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> OK, I think on reflection new targets will be cleaner. What I suggest is\n>> the attached, applied to all branches, followed by removal of the four\n>> noise messages in just HEAD.\n> Shouldn't these new targets be documented somewhere?\n>\n> \t\t\t\n\n\n\nGood point. See attached.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 1 Jul 2021 11:23:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n>> Shouldn't these new targets be documented somewhere?\n\n> Good point. See attached.\n\n+1, but spell check: \"documantation\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Jul 2021 11:46:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 01.07.21 16:47, Andrew Dunstan wrote:\n> \n> On 6/2/21 4:21 PM, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> I'm inclined to agree with Alvaro that the messages are at best an\n>>> oddity. Standard Unix practice is to be silent on success.\n>> We've been steadily moving towards less chatter during builds.\n>> I'd be good with dropping these messages in HEAD, but doing so\n>> in the back branches might be inadvisable.\n\n> OK, I think on reflection new targets will be cleaner. What I suggest is\n> the attached, applied to all branches, followed by removal of the four\n> noise messages in just HEAD.\n\nThis naming approach is a bit problematic. For example, we have \n\"install-bin\" in src/backend/, which is specifically for only installing \nbinaries, not data files etc. (hence the name). Your proposal would \nconfuse this scheme.\n\nI think we should also take a step back here and consider: We had \"all\", \nwhich wasn't \"all\" enough, then we had \"world\", now we have \n\"world-minus-a-bit\", but it's still more than \"all\". It's like we are \ntrying to prove the continuum hypothesis here.\n\nI think we had consensus on the make variable approach, so I'm confused \nwhy a different solution was committed and backpatched without discussion.\n\n\n",
"msg_date": "Thu, 1 Jul 2021 21:39:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "\nOn 7/1/21 3:39 PM, Peter Eisentraut wrote:\n> On 01.07.21 16:47, Andrew Dunstan wrote:\n>>\n>> On 6/2/21 4:21 PM, Tom Lane wrote:\n>>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>> I'm inclined to agree with Alvaro that the messages are at best an\n>>>> oddity. Standard Unix practice is to be silent on success.\n>>> We've been steadily moving towards less chatter during builds.\n>>> I'd be good with dropping these messages in HEAD, but doing so\n>>> in the back branches might be inadvisable.\n>\n>> OK, I think on reflection new targets will be cleaner. What I suggest is\n>> the attached, applied to all branches, followed by removal of the four\n>> noise messages in just HEAD.\n>\n> This naming approach is a bit problematic. For example, we have\n> \"install-bin\" in src/backend/, which is specifically for only\n> installing binaries, not data files etc. (hence the name). Your\n> proposal would confuse this scheme.\n>\n> I think we should also take a step back here and consider: We had\n> \"all\", which wasn't \"all\" enough, then we had \"world\", now we have\n> \"world-minus-a-bit\", but it's still more than \"all\". It's like we are\n> trying to prove the continuum hypothesis here.\n>\n> I think we had consensus on the make variable approach, so I'm\n> confused why a different solution was committed and backpatched\n> without discussion.\n\n\nIn fact the names and approach were suggested in my email of June 21st.\n\nThe make variable approach just felt klunky in the end.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 1 Jul 2021 16:22:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "On 01.07.21 22:22, Andrew Dunstan wrote:\n> \n> On 7/1/21 3:39 PM, Peter Eisentraut wrote:\n>> On 01.07.21 16:47, Andrew Dunstan wrote:\n>>>\n>>> On 6/2/21 4:21 PM, Tom Lane wrote:\n>>>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>>> I'm inclined to agree with Alvaro that the messages are at best an\n>>>>> oddity. Standard Unix practice is to be silent on success.\n>>>> We've been steadily moving towards less chatter during builds.\n>>>> I'd be good with dropping these messages in HEAD, but doing so\n>>>> in the back branches might be inadvisable.\n>>\n>>> OK, I think on reflection new targets will be cleaner. What I suggest is\n>>> the attached, applied to all branches, followed by removal of the four\n>>> noise messages in just HEAD.\n>>\n>> This naming approach is a bit problematic. For example, we have\n>> \"install-bin\" in src/backend/, which is specifically for only\n>> installing binaries, not data files etc. (hence the name). Your\n>> proposal would confuse this scheme.\n>>\n>> I think we should also take a step back here and consider: We had\n>> \"all\", which wasn't \"all\" enough, then we had \"world\", now we have\n>> \"world-minus-a-bit\", but it's still more than \"all\". It's like we are\n>> trying to prove the continuum hypothesis here.\n>>\n>> I think we had consensus on the make variable approach, so I'm\n>> confused why a different solution was committed and backpatched\n>> without discussion.\n> \n> \n> In fact the names and approach were suggested in my email of June 21st.\n\nAFAICT this thread contains no email from June 21st or thereabouts.\n\nhttps://www.postgresql.org/message-id/flat/6a421136-d462-b043-a8eb-e75b2861f3df%40dunslane.net\n\n\n",
"msg_date": "Thu, 1 Jul 2021 22:29:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: make world and install-world without docs"
},
{
"msg_contents": "\nOn 7/1/21 4:29 PM, Peter Eisentraut wrote:\n> On 01.07.21 22:22, Andrew Dunstan wrote:\n>>\n>> On 7/1/21 3:39 PM, Peter Eisentraut wrote:\n>>> On 01.07.21 16:47, Andrew Dunstan wrote:\n>>>>\n>>>> On 6/2/21 4:21 PM, Tom Lane wrote:\n>>>>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>>>>> I'm inclined to agree with Alvaro that the messages are at best an\n>>>>>> oddity. Standard Unix practice is to be silent on success.\n>>>>> We've been steadily moving towards less chatter during builds.\n>>>>> I'd be good with dropping these messages in HEAD, but doing so\n>>>>> in the back branches might be inadvisable.\n>>>\n>>>> OK, I think on reflection new targets will be cleaner. What I\n>>>> suggest is\n>>>> the attached, applied to all branches, followed by removal of the four\n>>>> noise messages in just HEAD.\n>>>\n>>> This naming approach is a bit problematic. For example, we have\n>>> \"install-bin\" in src/backend/, which is specifically for only\n>>> installing binaries, not data files etc. (hence the name). Your\n>>> proposal would confuse this scheme.\n>>>\n>>> I think we should also take a step back here and consider: We had\n>>> \"all\", which wasn't \"all\" enough, then we had \"world\", now we have\n>>> \"world-minus-a-bit\", but it's still more than \"all\". It's like we are\n>>> trying to prove the continuum hypothesis here.\n>>>\n>>> I think we had consensus on the make variable approach, so I'm\n>>> confused why a different solution was committed and backpatched\n>>> without discussion.\n>>\n>>\n>> In fact the names and approach were suggested in my email of June 21st.\n>\n> AFAICT this thread contains no email from June 21st or thereabouts.\n>\n> https://www.postgresql.org/message-id/flat/6a421136-d462-b043-a8eb-e75b2861f3df%40dunslane.net\n>\n\n\nApologies. June 2nd. 
One day American style dates will stop playing\nhavoc with my head - it's only been 25 years or so.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 1 Jul 2021 17:08:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: make world and install-world without docs"
}
] |
[
{
"msg_contents": "I noticed that 428b260f87e8 (v12) broke the cases where a parent\nforeign table has row marks assigned. Specifically, the following\nAssert added to expand_inherited_rtentry() by that commit looks bogus\nin this regard:\n\n/* The old PlanRowMark should already have necessitated adding TID */\nAssert(old_allMarkTypes & ~(1 << ROW_MARK_COPY));\n\nThe Assert appears to have been written based on the assumption that\nthe root parent would always be a local heap relation, but given that\nwe allow foreign tables also to be inheritance parents, that\nassumption is false.\n\nProblem cases:\n\ncreate extension postgres_fdw ;\ncreate server loopback foreign data wrapper postgres_fdw;\ncreate user mapping for current_user server loopback ;\ncreate table loct1 (a int);\ncreate foreign table ft_parent (a int) server loopback options\n(table_name 'loct1');\ncreate table loct2 (a int);\ncreate foreign table ft_child () inherits (ft_parent) server loopback\noptions (table_name 'loct2');\nexplain (verbose) select * from ft_parent FOR UPDATE;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!?> \\q\n\nJust commenting out that Assert will let the above work, but that's\nnot enough, because any child tables that are local can't access ctidN\njunk columns that should have been added before reaching\nexpand_inherited_rtentry(), but wouldn't because the parent is a\nforeign table.\n\ncreate table loct3 () inherits (ft_parent);\nexplain (verbose) select * from ft_parent FOR UPDATE;\nERROR: could not find junk ctid1 column\n\nThe right thing would have been to have the same code as in\npreprocess_targetlist() to add a TID row marking junk column if\nneeded. 
Attached a patch for that, which also adds the test cases.\nActually, I had to make a separate version of the patch to apply to\nthe v12 branch, because EXPLAIN outputs relation aliases a bit\ndifferently starting in v13, which is attached too.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 1 Jun 2021 20:55:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "parent foreign tables and row marks"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I noticed that 428b260f87e8 (v12) broke the cases where a parent\n> foreign table has row marks assigned.\n\nIndeed :-(. Fix pushed. I tweaked the comments and test case slightly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 14:39:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: parent foreign tables and row marks"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 3:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > I noticed that 428b260f87e8 (v12) broke the cases where a parent\n> > foreign table has row marks assigned.\n>\n> Indeed :-(. Fix pushed. I tweaked the comments and test case slightly.\n\nThank you.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Jun 2021 10:07:57 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parent foreign tables and row marks"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 10:07 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Thu, Jun 3, 2021 at 3:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > I noticed that 428b260f87e8 (v12) broke the cases where a parent\n> > > foreign table has row marks assigned.\n> >\n> > Indeed :-(. Fix pushed. I tweaked the comments and test case slightly.\n>\n> Thank you.\n\nAh, I had forgotten to propose that we replace the following in the\npreprocess_targetlist()'s row marks loop:\n\n /* child rels use the same junk attrs as their parents */\n if (rc->rti != rc->prti)\n continue;\n\nby an Assert as follows:\n\n+ /* No child row marks yet. */\n+ Assert (rc->rti == rc->prti);\n\nI think the only place that sets prti that is != rti of a row mark is\nexpand_single_inheritance_child() and we can be sure that that\nfunction now always runs after preprocess_targetlist() has run.\nAttached a patch.\n\nThoughts?\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 3 Jun 2021 22:08:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parent foreign tables and row marks"
}
] |
[
{
"msg_contents": "Hi,\nIt seems error code checking in pgtls_init() should follow the same\nconvention as PG codebase adopts - i.e. the non-zero error code should be\nreturned (instead of hard coded -1).\n\nPlease see the attached patch.\n\nThanks",
"msg_date": "Tue, 1 Jun 2021 10:32:59 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "return correct error code from pgtls_init"
},
{
"msg_contents": "On Tue, Jun 01, 2021 at 10:32:59AM -0700, Zhihong Yu wrote:\n> It seems error code checking in pgtls_init() should follow the same\n> convention as PG codebase adopts - i.e. the non-zero error code should be\n> returned (instead of hard coded -1).\n> \n> Please see the attached patch.\n\nI don't see the point of changing this. First, other areas of\nfe-secure-openssl.c use a harcoded value of -1 as error codes, so the\ncurrent style is more consistent. Second, if we were to change that,\nwhy are you not changing one call of pthread_mutex_lock()?\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 10:14:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: return correct error code from pgtls_init"
},
{
"msg_contents": "On Tue, Jun 1, 2021 at 6:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Jun 01, 2021 at 10:32:59AM -0700, Zhihong Yu wrote:\n> > It seems error code checking in pgtls_init() should follow the same\n> > convention as PG codebase adopts - i.e. the non-zero error code should be\n> > returned (instead of hard coded -1).\n> >\n> > Please see the attached patch.\n>\n> I don't see the point of changing this. First, other areas of\n> fe-secure-openssl.c use a harcoded value of -1 as error codes, so the\n> current style is more consistent. Second, if we were to change that,\n> why are you not changing one call of pthread_mutex_lock()?\n> --\n> Michael\n>\n\nHi,\nLooking at the -1 return, e.g.\n\n pq_lockarray = malloc(sizeof(pthread_mutex_t) *\nCRYPTO_num_locks());\n\nwhen pq_lockarray is NULL. We can return errno.\n\nI didn't change call to pthread_mutex_lock() because PGTHREAD_ERROR() is\nused which aborts.\n\nCheers\n\nOn Tue, Jun 1, 2021 at 6:14 PM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Jun 01, 2021 at 10:32:59AM -0700, Zhihong Yu wrote:\n> It seems error code checking in pgtls_init() should follow the same\n> convention as PG codebase adopts - i.e. the non-zero error code should be\n> returned (instead of hard coded -1).\n> \n> Please see the attached patch.\n\nI don't see the point of changing this. First, other areas of\nfe-secure-openssl.c use a harcoded value of -1 as error codes, so the\ncurrent style is more consistent. Second, if we were to change that,\nwhy are you not changing one call of pthread_mutex_lock()?\n--\nMichaelHi,Looking at the -1 return, e.g. pq_lockarray = malloc(sizeof(pthread_mutex_t) * CRYPTO_num_locks()); when pq_lockarray is NULL. We can return errno.I didn't change call to pthread_mutex_lock() because PGTHREAD_ERROR() is used which aborts.Cheers",
"msg_date": "Tue, 1 Jun 2021 18:56:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: return correct error code from pgtls_init"
},
{
"msg_contents": "On Tue, Jun 01, 2021 at 06:56:42PM -0700, Zhihong Yu wrote:\n> Looking at the -1 return, e.g.\n> \n> pq_lockarray = malloc(sizeof(pthread_mutex_t) *\n> CRYPTO_num_locks());\n> \n> when pq_lockarray is NULL. We can return errno.\n> \n> I didn't change call to pthread_mutex_lock() because PGTHREAD_ERROR() is\n> used which aborts.\n\nI am not sure what you mean here, and there is nothing wrong with this\ncode as far as I know, as we would let the caller of pgtls_init() know\nthat something is wrong.\n--\nMichael",
"msg_date": "Tue, 17 Aug 2021 13:13:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: return correct error code from pgtls_init"
}
] |
[
{
"msg_contents": "Hi.\n\nThe documentation of ALTER SUBSCRIPTION REFRESH PUBLICATION [1] says:\n\n----------\n\nREFRESH PUBLICATION\n\nFetch missing table information from publisher. This will start\nreplication of tables that were added to the subscribed-to\npublications since the last invocation of REFRESH PUBLICATION or since\nCREATE SUBSCRIPTION.\n\nrefresh_option specifies additional options for the refresh operation.\nThe supported options are:\n\ncopy_data (boolean)\n\nSpecifies whether the existing data in the publications that are being\nsubscribed to should be copied once the replication starts. The\ndefault is true. (Previously subscribed --tables are not copied.)\n\n----------\n\nBut I found that default copy_data = true to be unintuitive.\n\ne.g. When I had previously done the CREATE SUBSCRIPTION using\ncopy_data = false, then I assumed (wrongly) that the subscription\ndefault would remain as copy_data = false even when doing the REFRESH\nPUBLICATION.\n\nIs that a deliberate functionality, or is it a quirk / bug?\n\n------\n[1] https://www.postgresql.org/docs/devel/sql-altersubscription.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Wed, 2 Jun 2021 11:10:09 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "ALTER SUBSCRIPTION REFRESH PUBLICATION has default copy_data = true?"
},
{
"msg_contents": "On 02.06.21 03:10, Peter Smith wrote:\n> The documentation of ALTER SUBSCRIPTION REFRESH PUBLICATION [1] says:\n\n> But I found that default copy_data = true to be unintuitive.\n> \n> e.g. When I had previously done the CREATE SUBSCRIPTION using\n> copy_data = false, then I assumed (wrongly) that the subscription\n> default would remain as copy_data = false even when doing the REFRESH\n> PUBLICATION.\n> \n> Is that a deliberate functionality, or is it a quirk / bug?\n\ncopy_data is an option of the action, not a property of the \nsubscription. The difference between those two things is admittedly not \n clearly (at all?) documented.\n\nHowever, I'm not sure whether creating a subscription that always \ndefaults to copy_data=false for tables added in the future is useful \nfunctionality, so I think the current behavior is okay.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 16:52:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER SUBSCRIPTION REFRESH PUBLICATION has default copy_data =\n true?"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 6:40 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi.\n>\n> The documentation of ALTER SUBSCRIPTION REFRESH PUBLICATION [1] says:\n>\n> ----------\n>\n> REFRESH PUBLICATION\n>\n> Fetch missing table information from publisher. This will start\n> replication of tables that were added to the subscribed-to\n> publications since the last invocation of REFRESH PUBLICATION or since\n> CREATE SUBSCRIPTION.\n>\n> refresh_option specifies additional options for the refresh operation.\n> The supported options are:\n>\n> copy_data (boolean)\n>\n> Specifies whether the existing data in the publications that are being\n> subscribed to should be copied once the replication starts. The\n> default is true. (Previously subscribed --tables are not copied.)\n>\n> ----------\n>\n> But I found that default copy_data = true to be unintuitive.\n>\n> e.g. When I had previously done the CREATE SUBSCRIPTION using\n> copy_data = false, then I assumed (wrongly) that the subscription\n> default would remain as copy_data = false even when doing the REFRESH\n> PUBLICATION.\n\nThe fact is that the options copy_data, create_slot and connect are\nnot stored in the catalog pg_subscription (actually they don't need to\nbe). Among these the copy_data option can be specified in the ALTER\n... SUBSCRIPTION.\n\n> Is that a deliberate functionality, or is it a quirk / bug?\n\nI don't think it's a bug. Maybe adding a note in the docs about the\noptions which are stored/not stored in the pg_subscription catalog\nwould help.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 20:23:30 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ALTER SUBSCRIPTION REFRESH PUBLICATION has default copy_data =\n true?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 12:52 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> > Is that a deliberate functionality, or is it a quirk / bug?\n>\n> copy_data is an option of the action, not a property of the\n> subscription. The difference between those two things is admittedly not\n> clearly (at all?) documented.\n>\n> However, I'm not sure whether creating a subscription that always\n> defaults to copy_data=false for tables added in the future is useful\n> functionality, so I think the current behavior is okay.\n\n...\n\nOn Thu, Jun 3, 2021 at 12:53 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Is that a deliberate functionality, or is it a quirk / bug?\n>\n> I don't think it's a bug....\n\n...\n\nOK. Thanks to both of you for sharing your thoughts about it.\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 7 Jun 2021 11:05:22 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ALTER SUBSCRIPTION REFRESH PUBLICATION has default copy_data =\n true?"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nIn the latest HEAD branch, I found some places were using\r\nappendStringInfo/appendPQExpBuffer() when they could have been using\r\nappendStringInfoString/ appendPQExpBufferStr() instead. I think we'd better\r\nfix these places in case other developers will use these codes as a reference,\r\nthough, it seems will not bring noticeable performance gain.\r\n\r\nAttaching a patch to fix these places.\r\n\r\nBest regards,\r\nhouzj",
"msg_date": "Wed, 2 Jun 2021 01:37:51 +0000",
"msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 01:37:51AM +0000, houzj.fnst@fujitsu.com wrote:\n> In the latest HEAD branch, I found some places were using\n> appendStringInfo/appendPQExpBuffer() when they could have been using\n> appendStringInfoString/ appendPQExpBufferStr() instead. I think we'd better\n> fix these places in case other developers will use these codes as a reference,\n> though, it seems will not bring noticeable performance gain.\n\nIndeed, that's the same thing as 110d817 to make all those calls\ncheaper. No objections from me to do those changes now rather than\nlater on HEAD.\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 13:29:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On 2021-Jun-02, houzj.fnst@fujitsu.com wrote:\n\n> Hi,\n> \n> In the latest HEAD branch, I found some places were using\n> appendStringInfo/appendPQExpBuffer() when they could have been using\n> appendStringInfoString/ appendPQExpBufferStr() instead. I think we'd better\n> fix these places in case other developers will use these codes as a reference,\n> though, it seems will not bring noticeable performance gain.\n\nhmm why didn't we get warnings about the PENDING DETACH one? Maybe we\nneed some decorator in PQExpBuffer.\n\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Wed, 2 Jun 2021 06:57:01 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "\nOn 02.06.21 12:57, Alvaro Herrera wrote:\n>> In the latest HEAD branch, I found some places were using\n>> appendStringInfo/appendPQExpBuffer() when they could have been using\n>> appendStringInfoString/ appendPQExpBufferStr() instead. I think we'd better\n>> fix these places in case other developers will use these codes as a reference,\n>> though, it seems will not bring noticeable performance gain.\n> \n> hmm why didn't we get warnings about the PENDING DETACH one? Maybe we\n> need some decorator in PQExpBuffer.\n\nI don't think there is anything wrong with the existing code there. \nIt's just like using printf() when you could use puts().\n\n(I'm not against the proposed patch, just answering this question.)\n\n\n",
"msg_date": "Wed, 2 Jun 2021 16:55:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Wed, 2 Jun 2021 at 16:29, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jun 02, 2021 at 01:37:51AM +0000, houzj.fnst@fujitsu.com wrote:\n> > In the latest HEAD branch, I found some places were using\n> > appendStringInfo/appendPQExpBuffer() when they could have been using\n> > appendStringInfoString/ appendPQExpBufferStr() instead. I think we'd better\n> > fix these places in case other developers will use these codes as a reference,\n> > though, it seems will not bring noticeable performance gain.\n>\n> Indeed, that's the same thing as 110d817 to make all those calls\n> cheaper. No objections from me to do those changes now rather than\n> later on HEAD.\n\nI think it would be good to fix at least the instances that are new\ncode in PG14 before we branch for PG15. They all seem low enough risk\nand worth keeping the new-to-PG14 code as close to the same as\npossible between major versions. It seems more likely that newer code\nwill need bug fixes in the future so having the code as similar as\npossible in each branch makes backpatching easier.\n\nFor the code that's not new to PG14, I feel less strongly about those.\nIn the patch there's just 2 instances of these; one in\ncontrib/sepgsql/schema.c and another in\nsrc/backend/postmaster/postmaster.c. I've tried to push for these\nsorts of things to be fixed at around this time of year in the past,\nbut there have been other people thinking we should wait until we\nbranch. For example [1] and [2].\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAKJS1f9APLTZRomOSndx_nFcFNfUxncz%3Dp2_-1wr0hrzT4ELKg%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/4a84839e-afe4-ea27-6823-23372511dcbf%402ndquadrant.com\n\n\n",
"msg_date": "Thu, 3 Jun 2021 13:53:34 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Wed, 2 Jun 2021 at 16:29, Michael Paquier <michael@paquier.xyz> wrote:\n>> Indeed, that's the same thing as 110d817 to make all those calls\n>> cheaper. No objections from me to do those changes now rather than\n>> later on HEAD.\n\n> I think it would be good to fix at least the instances that are new\n> code in PG14 before we branch for PG15. They all seem low enough risk\n> and worth keeping the new-to-PG14 code as close to the same as\n> possible between major versions.\n\n+1 for fixing this sort of thing in new code before we branch.\n\nI'm less interested in changing code that already exists in back\nbranches. I think the risk of causing headaches for back-patches\nmay outweigh any benefit of such micro-optimizations.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 22:56:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 01:53:34PM +1200, David Rowley wrote:\n> I think it would be good to fix at least the instances that are new\n> code in PG14 before we branch for PG15. They all seem low enough risk\n> and worth keeping the new-to-PG14 code as close to the same as\n> possible between major versions. It seems more likely that newer code\n> will need bug fixes in the future so having the code as similar as\n> possible in each branch makes backpatching easier.\n\n> For the code that's not new to PG14, I feel less strongly about those.\n> In the patch there's just 2 instances of these; one in\n> contrib/sepgsql/schema.c and another in\n> src/backend/postmaster/postmaster.c. I've tried to push for these\n> sorts of things to be fixed at around this time of year in the past,\n> but there have been other people thinking we should wait until we\n> branch. For example [1] and [2].\n\nNo objections to those arguments, makes sense. I don't see an issue\nwith changing the new code before branching, FWIW. As you already did\n110d817, perhaps you would prefer taking care of it?\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 12:00:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 15:01, Michael Paquier <michael@paquier.xyz> wrote:\n> As you already did\n> 110d817, perhaps you would prefer taking care of it?\n\nOk. I'll take care of it.\n\nThanks\n\nDavid\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:06:57 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 15:06, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 3 Jun 2021 at 15:01, Michael Paquier <michael@paquier.xyz> wrote:\n> > As you already did\n> > 110d817, perhaps you would prefer taking care of it?\n>\n> Ok. I'll take care of it.\n\nI looked at this and couldn't help but notice how the following used\nDatumGetPointer() instead of DatumGetCString():\n\nappendStringInfo(&str, \"%s ... %s\",\n DatumGetPointer(a),\n DatumGetPointer(b));\n\nHowever, looking a bit further it looks like instead of using\nFunctionCall1 to call the type's output function, that the code should\nuse OutputFunctionCall and get a char * directly.\n\ne.g the attached.\n\nThere are quite a few other places in that file that should be using\nDatumGetCString() instead of DatumGetPointer().\n\nShould we fix those too for PG14?\n\nIn the meantime, I'll push a version of this with just the StringInfo\nfixes first. If we do anything else it can be done as a separate\ncommit.\n\nDavid",
"msg_date": "Thu, 3 Jun 2021 15:51:48 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> There are quite a few other places in that file that should be using\n> DatumGetCString() instead of DatumGetPointer().\n> Should we fix those too for PG14?\n\n+1. I'm surprised we are not getting compiler warnings.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 00:17:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 15:06, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 3 Jun 2021 at 15:01, Michael Paquier <michael@paquier.xyz> wrote:\n> > As you already did\n> > 110d817, perhaps you would prefer taking care of it?\n>\n> Ok. I'll take care of it.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:39:06 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 16:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > There are quite a few other places in that file that should be using\n> > DatumGetCString() instead of DatumGetPointer().\n> > Should we fix those too for PG14?\n>\n> +1. I'm surprised we are not getting compiler warnings.\n\nI've attached a patch to fix those.\n\nI did end up getting in a little deeper than I'd have liked as I also\nfound a few typos along the way.\n\nAlso, going by my calendar, the copyright year was incorrect.\n\nTomas, any chance you could look over this? I didn't really take the\ntime to understand the code, so some of my comment adjustments might\nbe incorrect.\n\nDavid",
"msg_date": "Thu, 3 Jun 2021 18:51:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On 03.06.21 06:17, Tom Lane wrote:\n> David Rowley <dgrowleyml@gmail.com> writes:\n>> There are quite a few other places in that file that should be using\n>> DatumGetCString() instead of DatumGetPointer().\n>> Should we fix those too for PG14?\n> \n> +1. I'm surprised we are not getting compiler warnings.\n\nWell, DatumGetPointer() returns Pointer, and Pointer is char *.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:03:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 16:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > There are quite a few other places in that file that should be using\n> > DatumGetCString() instead of DatumGetPointer().\n> > Should we fix those too for PG14?\n>\n> +1. I'm surprised we are not getting compiler warnings.\n\nI pushed a fix for this.\n\nI did happen to find one other in mcv.c which dates back to 2019. I\nwas wondering if we should bother with that one since it's already out\nthere in PG13.\n\nDavid",
"msg_date": "Fri, 4 Jun 2021 22:53:58 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I did happen to find one other in mcv.c which dates back to 2019. I\n> was wondering if we should bother with that one since it's already out\n> there in PG13.\n\nMaybe not. Per Peter's point, it's just cosmetic really.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Jun 2021 09:30:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fixup some appendStringInfo and appendPQExpBuffer calls"
}
]
[
{
"msg_contents": "Hi all,\n\nWhile looking at a separate issue, I have noticed that TestLib.pm is\nlagging behind in terms of environment variables it had better mask to\navoid failures:\nhttps://www.postgresql.org/message-id/YLXjFOV3teAPirmS@paquier.xyz\n\nOnce I began playing with the variables not covered yet, and tested\nfancy cases with junk values, I have been able to see various failures\nin the TAP tests, mainly with authentication and SSL.\n\nAttached is a patch to strengthen all that, which I think we'd better\nbackpatch.\n\nAny objections to that?\n--\nMichael",
"msg_date": "Wed, 2 Jun 2021 10:49:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "TAP tests still sensitive to various PG* environment variables "
},
{
"msg_contents": "> On 2 Jun 2021, at 03:49, Michael Paquier <michael@paquier.xyz> wrote:\n\n> Attached is a patch to strengthen all that, which I think we'd better\n> backpatch.\n\n+1\n\n> Any objections to that?\n\nSeems like a good idea, to keep test invocation stable across branches, minus\nPGSSLCRLDIR and PGSSLSNI which are only available in HEAD etc.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 10:39:56 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests still sensitive to various PG* environment variables "
},
{
"msg_contents": "\nOn 6/1/21 9:49 PM, Michael Paquier wrote:\n> Hi all,\n>\n> While looking at a separate issue, I have noticed that TestLib.pm is\n> lagging behind in terms of environment variables it had better mask to\n> avoid failures:\n> https://www.postgresql.org/message-id/YLXjFOV3teAPirmS@paquier.xyz\n>\n> Once I began playing with the variables not covered yet, and tested\n> fancy cases with junk values, I have been able to see various failures\n> in the TAP tests, mainly with authentication and SSL.\n>\n> Attached is a patch to strengthen all that, which I think we'd better\n> backpatch.\n>\n> Any objections to that?\n\n\n\nThis is a bit gruesome:\n\n\n +��� delete $ENV{PGCHANNELBINDING};\n +��� delete $ENV{PGCLIENTENCODING};\n ���� delete $ENV{PGCONNECT_TIMEOUT};\n ���� delete $ENV{PGDATA};\n ���� delete $ENV{PGDATABASE};\n +��� delete $ENV{PGGSSENCMODE};\n +��� delete $ENV{PGGSSLIB};\n ���� delete $ENV{PGHOSTADDR};\n +��� delete $ENV{PGKRBSRVNAME};\n +��� delete $ENV{PGPASSFILE};\n +��� delete $ENV{PGPASSWORD};\n +��� delete $ENV{PGREQUIREPEER};\n ���� delete $ENV{PGREQUIRESSL};\n ���� delete $ENV{PGSERVICE};\n +��� delete $ENV{PGSERVICEFILE};\n +��� delete $ENV{PGSSLCERT};\n +��� delete $ENV{PGSSLCRL};\n +��� delete $ENV{PGSSLCRLDIR};\n +��� delete $ENV{PGSSLKEY};\n +��� delete $ENV{PGSSLMAXPROTOCOLVERSION};\n +��� delete $ENV{PGSSLMINPROTOCOLVERSION};\n ���� delete $ENV{PGSSLMODE};\n +��� delete $ENV{PGSSLROOTCERT};\n +��� delete $ENV{PGSSLSNI};\n ���� delete $ENV{PGUSER};\n ���� delete $ENV{PGPORT};\n ���� delete $ENV{PGHOST};\n\n\n\nLet's change it to something like:\n\n\n my @scrubkeys = qw ( PGCHANNELBINDING\n\n �� PGCLIENTENCODING PGCONNECT_TIMEOUT PGDATA\n\n �� ...\n\n ��� );\n\n delete @ENV{@scrubkeys};\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:43:46 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests still sensitive to various PG* environment variables"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 03:43:46PM -0400, Andrew Dunstan wrote:\n> Let's change it to something like:\n>\n> my @scrubkeys = qw ( PGCHANNELBINDING\n> PGCLIENTENCODING PGCONNECT_TIMEOUT PGDATA\n> ...\n> );\n> delete @ENV{@scrubkeys};\n\nGood idea. I have used that. Thanks.\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 12:03:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests still sensitive to various PG* environment variables"
},
{
"msg_contents": "On Wed, Jun 02, 2021 at 10:39:56AM +0200, Daniel Gustafsson wrote:\n> Seems like a good idea, to keep test invocation stable across branches, minus\n> PGSSLCRLDIR and PGSSLSNI which are only available in HEAD etc.\n\nRight. This took me a couple of hours to make consistent across\nall the branches. After more review, I have found also about\nPGTARGETSESSIONATTRS that would take down the recovery tests as of\n10~. Fun.\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 12:10:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests still sensitive to various PG* environment variables"
}
]
[
{
"msg_contents": "Add regression test for recovery pause.\n\nPreviously there was no regression test for recovery pause feature.\nThis commit adds the test that checks\n\n- recovery can be paused or resumed expectedly\n- pg_get_wal_replay_pause_state() reports the correct pause state\n- the paused state ends and promotion continues if a promotion\n is triggered while recovery is paused\n\nSuggested-by: Michael Paquier\nAuthor: Fujii Masao\nReviewed-by: Kyotaro Horiguchi, Dilip Kumar\nDiscussion: https://postgr.es/m/YKNirzqM1HYyk5h4@paquier.xyz\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/6bbc5c5e96b08f6b8c7d28d10ed8dfe6c49dca30\n\nModified Files\n--------------\nsrc/test/recovery/t/005_replay_delay.pl | 59 +++++++++++++++++++++++++++++++--\n1 file changed, 57 insertions(+), 2 deletions(-)",
"msg_date": "Wed, 02 Jun 2021 03:20:51 +0000",
"msg_from": "Fujii Masao <fujii@postgresql.org>",
"msg_from_op": true,
"msg_subject": "pgsql: Add regression test for recovery pause."
},
{
"msg_contents": "Fujii Masao <fujii@postgresql.org> writes:\n> Add regression test for recovery pause.\n\nBuildfarm member jacana doesn't like this patch:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-06-02%2012%3A00%3A44\n\nthe symptom being\n\nJun 02 09:05:17 t/005_replay_delay..................# poll_query_until timed out executing this query:\nJun 02 09:05:17 # SELECT '0/3002A20'::pg_lsn < pg_last_wal_receive_lsn()\nJun 02 09:05:17 # expecting this output:\nJun 02 09:05:17 # t\nJun 02 09:05:17 # last actual query output:\nJun 02 09:05:17 # \nJun 02 09:05:17 # with stderr:\nJun 02 09:05:17 # ERROR: syntax error at or near \"pg_lsn\"\nJun 02 09:05:17 # LINE 1: SELECT '0\\\\3002A20';pg_lsn < pg_last_wal_receive_lsn()\nJun 02 09:05:17 # ^\n\nChecking the postmaster log confirms that what the backend is getting is\n\n2021-06-02 08:58:01.073 EDT [60b78059.f84:4] 005_replay_delay.pl ERROR: syntax error at or near \"pg_lsn\" at character 20\n2021-06-02 08:58:01.073 EDT [60b78059.f84:5] 005_replay_delay.pl STATEMENT: SELECT '0\\\\3002A20';pg_lsn < pg_last_wal_receive_lsn()\n\nIt sort of looks like something has decided that the pg_lsn constant\nis a search path and made a lame attempt to convert it to Windows\nstyle. I doubt our own code is doing that, so I'm inclined to blame\nIPC::Run thinking it can mangle the command string it's given.\nI wonder whether jacana has got a freshly-installed version of IPC::Run.\n\nAnother interesting question is how come we managed to get this far\nin the tests. 
There is a nearly, but not quite, identical delay\nquery in 002_archiving.pl, which already ran successfully:\n\n# Wait until necessary replay has been done on standby\nmy $caughtup_query =\n \"SELECT '$current_lsn'::pg_lsn <= pg_last_wal_replay_lsn()\";\n$node_standby->poll_query_until('postgres', $caughtup_query)\n or die \"Timed out while waiting for standby to catch up\";\n\nI wonder whether the fact that 002 uses '<=' not '<' could be\nat all related. (I also wonder which one is correct as a means\nof waiting for replay; they are not both correct.)\n\nIn any case, letting IPC::Run munge SQL commands seems completely\nunacceptable. We can't plan on working around that every time.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 17:26:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add regression test for recovery pause."
},
{
"msg_contents": "\nOn 6/2/21 5:26 PM, Tom Lane wrote:\n> Fujii Masao <fujii@postgresql.org> writes:\n>> Add regression test for recovery pause.\n> Buildfarm member jacana doesn't like this patch:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2021-06-02%2012%3A00%3A44\n>\n> the symptom being\n>\n> Jun 02 09:05:17 t/005_replay_delay..................# poll_query_until timed out executing this query:\n> Jun 02 09:05:17 # SELECT '0/3002A20'::pg_lsn < pg_last_wal_receive_lsn()\n> Jun 02 09:05:17 # expecting this output:\n> Jun 02 09:05:17 # t\n> Jun 02 09:05:17 # last actual query output:\n> Jun 02 09:05:17 # \n> Jun 02 09:05:17 # with stderr:\n> Jun 02 09:05:17 # ERROR: syntax error at or near \"pg_lsn\"\n> Jun 02 09:05:17 # LINE 1: SELECT '0\\\\3002A20';pg_lsn < pg_last_wal_receive_lsn()\n> Jun 02 09:05:17 # ^\n>\n> Checking the postmaster log confirms that what the backend is getting is\n>\n> 2021-06-02 08:58:01.073 EDT [60b78059.f84:4] 005_replay_delay.pl ERROR: syntax error at or near \"pg_lsn\" at character 20\n> 2021-06-02 08:58:01.073 EDT [60b78059.f84:5] 005_replay_delay.pl STATEMENT: SELECT '0\\\\3002A20';pg_lsn < pg_last_wal_receive_lsn()\n>\n> It sort of looks like something has decided that the pg_lsn constant\n> is a search path and made a lame attempt to convert it to Windows\n> style. I doubt our own code is doing that, so I'm inclined to blame\n> IPC::Run thinking it can mangle the command string it's given.\n> I wonder whether jacana has got a freshly-installed version of IPC::Run.\n>\n> Another interesting question is how come we managed to get this far\n> in the tests. 
There is a nearly, but not quite, identical delay\n> query in 002_archiving.pl, which already ran successfully:\n>\n> # Wait until necessary replay has been done on standby\n> my $caughtup_query =\n> \"SELECT '$current_lsn'::pg_lsn <= pg_last_wal_replay_lsn()\";\n> $node_standby->poll_query_until('postgres', $caughtup_query)\n> or die \"Timed out while waiting for standby to catch up\";\n>\n> I wonder whether the fact that 002 uses '<=' not '<' could be\n> at all related. (I also wonder which one is correct as a means\n> of waiting for replay; they are not both correct.)\n>\n> In any case, letting IPC::Run munge SQL commands seems completely\n> unacceptable. We can't plan on working around that every time.\n>\n> \t\t\t\n\n\n\nLooks to me like we're getting munged by the msys shell, and unlike on\nmsys2 there isn't a way to disable it:\nhttps://stackoverflow.com/questions/7250130/how-to-stop-mingw-and-msys-from-mangling-path-names-given-at-the-command-line\n\n\nc.f. commit 73ff3a0abbb\n\n\nMaybe a robust solution would be to have the query piped to psql on its\nstdin rather than on the command line. poll_query_until looks on a quick\ncheck like the only place in PostgresNode where we use \"psql -c\"\n\n\nI'll experiment a bit tomorrow.\n\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 18:25:40 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add regression test for recovery pause."
},
{
"msg_contents": "\nOn 6/2/21 6:25 PM, Andrew Dunstan wrote:\n>\n>\n> Looks to me like we're getting munged by the msys shell, and unlike on\n> msys2 there isn't a way to disable it:\n> https://stackoverflow.com/questions/7250130/how-to-stop-mingw-and-msys-from-mangling-path-names-given-at-the-command-line\n>\n>\n> c.f. commit 73ff3a0abbb\n>\n>\n> Maybe a robust solution would be to have the query piped to psql on its\n> stdin rather than on the command line. poll_query_until looks on a quick\n> check like the only place in PostgresNode where we use \"psql -c\"\n>\n>\n> I'll experiment a bit tomorrow.\n>\n>\n>\n\n\nMy suspicion was correct. Fix pushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:41:53 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add regression test for recovery pause."
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> My suspicion was correct. Fix pushed.\n\nGreat, thanks.\n\nDo we need to worry about back-patching that? It seems only\naccidental if no existing back-branch test cases hit this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 16:45:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add regression test for recovery pause."
},
{
"msg_contents": "\nOn 6/3/21 4:45 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> My suspicion was correct. Fix pushed.\n> Great, thanks.\n>\n> Do we need to worry about back-patching that? It seems only\n> accidental if no existing back-branch test cases hit this.\n\n\nWell, we haven't had breakage, but its also useful to keep things in\nsync as much as possible. Ill do it shortly.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 17:10:25 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add regression test for recovery pause."
}
]
[
{
"msg_contents": "Hi\n\nAttached a patch to support tab completion for CREATE TYPE ... SUBSCRIPT introduced at c7aba7c14e.\n\nRegards,\nTang",
"msg_date": "Wed, 2 Jun 2021 09:50:51 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "tab-complete for CREATE TYPE ... SUBSCRIPT"
},
{
"msg_contents": "On Wednesday, June 2, 2021 6:51 PM, tanghy.fnst@fujitsu.com wrote:\n\n>Attached a patch to support tab completion for CREATE TYPE ... SUBSCRIPT introduced at c7aba7c14e.\n\nOops, comma forgot. patch Updated.\n\nRegards,\nTang",
"msg_date": "Wed, 2 Jun 2021 11:06:41 +0000",
"msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: tab-complete for CREATE TYPE ... SUBSCRIPT"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 4:37 PM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> On Wednesday, June 2, 2021 6:51 PM, tanghy.fnst@fujitsu.com wrote:\n>\n> >Attached a patch to support tab completion for CREATE TYPE ... SUBSCRIPT introduced at c7aba7c14e.\n>\n> Oops, comma forgot. patch Updated.\n\nv2 patch LGTM.\n\nWith the patch:\npostgres=# create type mytype(\nALIGNMENT DEFAULT INTERNALLENGTH PREFERRED SUBSCRIPT\nANALYZE DELIMITER LIKE RECEIVE TYPMOD_IN\nCATEGORY ELEMENT OUTPUT SEND TYPMOD_OUT\nCOLLATABLE INPUT PASSEDBYVALUE STORAGE\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 19:51:25 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: tab-complete for CREATE TYPE ... SUBSCRIPT"
},
{
"msg_contents": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com> writes:\n> Attached a patch to support tab completion for CREATE TYPE ... SUBSCRIPT introduced at c7aba7c14e.\n\nHuh ... I had no idea anyone had taught tab-complete about the\nindividual fields of CREATE TYPE. Experimenting with it,\nI see that the multirange patch missed this too. Fix pushed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 10:45:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: tab-complete for CREATE TYPE ... SUBSCRIPT"
}
]
[
{
"msg_contents": "Hi,\n\nWhile experimenting with parallel index builds, I've noticed a somewhat \nstrange behavior of pg_stat_progress_create_index when a btree index is \nbuilt with parallel workers - some of the phases seem to be missing.\n\nIn serial (no parallelism) mode, the progress is roughly this (it's \nalways the first/last timestamp of each phase):\n\n | command | phase\n-------------+--------------+----------------------------------------\n 12:56:01 AM | CREATE INDEX | building index: scanning table\n ...\n 01:06:22 AM | CREATE INDEX | building index: scanning table\n 01:06:23 AM | CREATE INDEX | building index: sorting live tuples\n ...\n 01:13:10 AM | CREATE INDEX | building index: sorting live tuples\n 01:13:11 AM | CREATE INDEX | building index: loading tuples in tree\n ...\n 01:24:02 AM | CREATE INDEX | building index: loading tuples in tree\n\nSo it goes through three phases:\n\n1) scanning tuples\n2) sorting live tuples\n3) loading tuples in tree\n\nBut with parallel build index build, it changes to:\n\n | command | phase\n-------------+--------------+----------------------------------------\n 11:40:48 AM | CREATE INDEX | building index: scanning table\n ...\n 11:47:24 AM | CREATE INDEX | building index: scanning table (scan\n complete)\n 11:56:22 AM | CREATE INDEX | building index: scanning table\n 11:56:23 AM | CREATE INDEX | building index: loading tuples in tree\n ...\n 12:05:33 PM | CREATE INDEX | building index: loading tuples in tree\n\nThat is, the \"sorting live tuples\" phase disappeared, and instead it \nseems to be counted in the \"scanning table\" one, as if there was an \nupdate of the phase missing.\n\nI've only tried this on master, but I assume it behaves like this in the \nolder releases too. I wonder if this is intentional - it sure is a bit \nmisleading.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Jun 2021 13:56:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On Wed, 2 Jun 2021 at 13:57, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> While experimenting with parallel index builds, I've noticed a somewhat\n> strange behavior of pg_stat_progress_create_index when a btree index is\n> built with parallel workers - some of the phases seem to be missing.\n>\n> In serial (no parallelism) mode, the progress is roughly this (it's\n> always the first/last timestamp of each phase):\n>\n> | command | phase\n> -------------+--------------+----------------------------------------\n> 12:56:01 AM | CREATE INDEX | building index: scanning table\n> ...\n> 01:06:22 AM | CREATE INDEX | building index: scanning table\n> 01:06:23 AM | CREATE INDEX | building index: sorting live tuples\n> ...\n> 01:13:10 AM | CREATE INDEX | building index: sorting live tuples\n> 01:13:11 AM | CREATE INDEX | building index: loading tuples in tree\n> ...\n> 01:24:02 AM | CREATE INDEX | building index: loading tuples in tree\n>\n> So it goes through three phases:\n>\n> 1) scanning tuples\n> 2) sorting live tuples\n> 3) loading tuples in tree\n>\n> But with parallel build index build, it changes to:\n>\n> | command | phase\n> -------------+--------------+----------------------------------------\n> 11:40:48 AM | CREATE INDEX | building index: scanning table\n> ...\n> 11:47:24 AM | CREATE INDEX | building index: scanning table (scan\n> complete)\n> 11:56:22 AM | CREATE INDEX | building index: scanning table\n> 11:56:23 AM | CREATE INDEX | building index: loading tuples in tree\n> ...\n> 12:05:33 PM | CREATE INDEX | building index: loading tuples in tree\n>\n> That is, the \"sorting live tuples\" phase disappeared, and instead it\n> seems to be counted in the \"scanning table\" one, as if there was an\n> update of the phase missing.\n\n> I've only tried this on master, but I assume it behaves like this in the\n> older releases too. 
I wonder if this is intentional - it sure is a bit\n> misleading.\n\nThis was a suprise to me as well. According to documentation in\nsortsupport.h (line 125-129) the parallel workers produce pre-sorted\nsegments during the scanning phase, which are subsequently merged by\nthe leader. This might mean that the 'sorting' phase is already\nfinished during the 'scanning' phase by waiting for the parallel\nworkers; I haven't looked further if this is the case and whether it\ncould be changed to also produce the sorting metrics, but seeing as it\nis part of the parallel workers API of tuplesort, I think fixing it in\ncurrent releases is going to be difficult.\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:03:41 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "\n\nOn 6/2/21 3:03 PM, Matthias van de Meent wrote:\n> On Wed, 2 Jun 2021 at 13:57, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Hi,\n>>\n>> While experimenting with parallel index builds, I've noticed a somewhat\n>> strange behavior of pg_stat_progress_create_index when a btree index is\n>> built with parallel workers - some of the phases seem to be missing.\n>>\n>> In serial (no parallelism) mode, the progress is roughly this (it's\n>> always the first/last timestamp of each phase):\n>>\n>> | command | phase\n>> -------------+--------------+----------------------------------------\n>> 12:56:01 AM | CREATE INDEX | building index: scanning table\n>> ...\n>> 01:06:22 AM | CREATE INDEX | building index: scanning table\n>> 01:06:23 AM | CREATE INDEX | building index: sorting live tuples\n>> ...\n>> 01:13:10 AM | CREATE INDEX | building index: sorting live tuples\n>> 01:13:11 AM | CREATE INDEX | building index: loading tuples in tree\n>> ...\n>> 01:24:02 AM | CREATE INDEX | building index: loading tuples in tree\n>>\n>> So it goes through three phases:\n>>\n>> 1) scanning tuples\n>> 2) sorting live tuples\n>> 3) loading tuples in tree\n>>\n>> But with parallel build index build, it changes to:\n>>\n>> | command | phase\n>> -------------+--------------+----------------------------------------\n>> 11:40:48 AM | CREATE INDEX | building index: scanning table\n>> ...\n>> 11:47:24 AM | CREATE INDEX | building index: scanning table (scan\n>> complete)\n>> 11:56:22 AM | CREATE INDEX | building index: scanning table\n>> 11:56:23 AM | CREATE INDEX | building index: loading tuples in tree\n>> ...\n>> 12:05:33 PM | CREATE INDEX | building index: loading tuples in tree\n>>\n>> That is, the \"sorting live tuples\" phase disappeared, and instead it\n>> seems to be counted in the \"scanning table\" one, as if there was an\n>> update of the phase missing.\n> \n>> I've only tried this on master, but I assume it behaves like this in the\n>> older releases 
too. I wonder if this is intentional - it sure is a bit\n>> misleading.\n> \n> This was a suprise to me as well. According to documentation in\n> sortsupport.h (line 125-129) the parallel workers produce pre-sorted\n> segments during the scanning phase, which are subsequently merged by\n> the leader. This might mean that the 'sorting' phase is already\n> finished during the 'scanning' phase by waiting for the parallel\n> workers; I haven't looked further if this is the case and whether it\n> could be changed to also produce the sorting metrics, but seeing as it\n> is part of the parallel workers API of tuplesort, I think fixing it in\n> current releases is going to be difficult.\n> \n\nMaybe. Perhaps it's more complicated to decide when to switch between \nphases with parallel workers. Still, the table scan is done after ~8 \nminutes (based on blocks_total vs. blocks_done), yet we keep that phase \nfor another ~9 minutes. It seems this is where the workers do the sort, \nso \"sorting live tuples\" seems like a more natural phase for this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:23:23 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On Wed, 2 Jun 2021 at 15:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 6/2/21 3:03 PM, Matthias van de Meent wrote:\n> > On Wed, 2 Jun 2021 at 13:57, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> While experimenting with parallel index builds, I've noticed a somewhat\n> >> strange behavior of pg_stat_progress_create_index when a btree index is\n> >> built with parallel workers - some of the phases seem to be missing.\n> >>\n> >> In serial (no parallelism) mode, the progress is roughly this (it's\n> >> always the first/last timestamp of each phase):\n> >>\n> >> | command | phase\n> >> -------------+--------------+----------------------------------------\n> >> 12:56:01 AM | CREATE INDEX | building index: scanning table\n> >> ...\n> >> 01:06:22 AM | CREATE INDEX | building index: scanning table\n> >> 01:06:23 AM | CREATE INDEX | building index: sorting live tuples\n> >> ...\n> >> 01:13:10 AM | CREATE INDEX | building index: sorting live tuples\n> >> 01:13:11 AM | CREATE INDEX | building index: loading tuples in tree\n> >> ...\n> >> 01:24:02 AM | CREATE INDEX | building index: loading tuples in tree\n> >>\n> >> So it goes through three phases:\n> >>\n> >> 1) scanning tuples\n> >> 2) sorting live tuples\n> >> 3) loading tuples in tree\n> >>\n> >> But with parallel build index build, it changes to:\n> >>\n> >> | command | phase\n> >> -------------+--------------+----------------------------------------\n> >> 11:40:48 AM | CREATE INDEX | building index: scanning table\n> >> ...\n> >> 11:47:24 AM | CREATE INDEX | building index: scanning table (scan\n> >> complete)\n> >> 11:56:22 AM | CREATE INDEX | building index: scanning table\n> >> 11:56:23 AM | CREATE INDEX | building index: loading tuples in tree\n> >> ...\n> >> 12:05:33 PM | CREATE INDEX | building index: loading tuples in tree\n> >>\n> >> That is, the \"sorting live tuples\" phase disappeared, and instead it\n> >> seems to be counted in the 
\"scanning table\" one, as if there was an\n> >> update of the phase missing.\n> >\n> >> I've only tried this on master, but I assume it behaves like this in the\n> >> older releases too. I wonder if this is intentional - it sure is a bit\n> >> misleading.\n> >\n> > This was a suprise to me as well. According to documentation in\n> > sortsupport.h (line 125-129) the parallel workers produce pre-sorted\n> > segments during the scanning phase, which are subsequently merged by\n> > the leader. This might mean that the 'sorting' phase is already\n> > finished during the 'scanning' phase by waiting for the parallel\n> > workers; I haven't looked further if this is the case and whether it\n> > could be changed to also produce the sorting metrics, but seeing as it\n> > is part of the parallel workers API of tuplesort, I think fixing it in\n> > current releases is going to be difficult.\n> >\n>\n> Maybe. Perhaps it's more complicated to decide when to switch between\n> phases with parallel workers. Still, the table scan is done after ~8\n> minutes (based on blocks_total vs. blocks_done), yet we keep that phase\n> for another ~9 minutes. It seems this is where the workers do the sort,\n> so \"sorting live tuples\" seems like a more natural phase for this.\n\nAfter looking at it a bit more, it seems like a solution was actually\neasier than I'd expected. PFA a prototype (unvalidated, but\ncheck-world -ed) patch that would add these subphases of progress\nreporting, which can be backpatched down to 12.\n\nDo note that this is a partial fix, as it only fixes it when the\nleader participates; but I don't think that limitation is too much of\na problem because only on builds which explicitly define the\nnon-standard DISABLE_LEADER_PARTICIPATION this will happen, and in\nsuch cases the progress reporting for the loading phase will fail as\nwell.\n\nWith regards,\n\nMatthias van de Meent",
"msg_date": "Wed, 2 Jun 2021 16:54:06 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On 6/2/21 4:54 PM, Matthias van de Meent wrote:\n> On Wed, 2 Jun 2021 at 15:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> ...\n>>\n >\n> After looking at it a bit more, it seems like a solution was actually\n> easier than I'd expected. PFA a prototype (unvalidated, but\n> check-world -ed) patch that would add these subphases of progress\n> reporting, which can be backpatched down to 12.\n> \n\nNice. I gave it a try on the database I'm experimenting with, and it \nseems to be working fine. Please add it to the next CF.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Jun 2021 17:42:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On Wed, 2 Jun 2021 at 17:42, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>\n> On 6/2/21 4:54 PM, Matthias van de Meent wrote:\n> > On Wed, 2 Jun 2021 at 15:23, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> ...\n> >>\n> >\n> > After looking at it a bit more, it seems like a solution was actually\n> > easier than I'd expected. PFA a prototype (unvalidated, but\n> > check-world -ed) patch that would add these subphases of progress\n> > reporting, which can be backpatched down to 12.\n> >\n>\n> Nice. I gave it a try on the database I'm experimenting with, and it\n> seems to be working fine. Please add it to the next CF.\n\nThanks, cf available here: https://commitfest.postgresql.org/33/3149/\n\nWith regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 2 Jun 2021 17:48:38 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On 2021-Jun-02, Tomas Vondra wrote:\n\n> Hi,\n> \n> While experimenting with parallel index builds, I've noticed a somewhat\n> strange behavior of pg_stat_progress_create_index when a btree index is\n> built with parallel workers - some of the phases seem to be missing.\n\nHmm, that's odd. I distinctly recall testing the behavior with parallel\nworkers, and it is mentioned by Rahila in the original thread, and I\nthink we tried to ensure that it was sane. I am surprised to learn that\nthere's such a large gap.\n\nI'll go have a deeper look at the provided patch and try to get it\nbackpatched.\n\nI think it would be valuable to have some kind of test mode where the\nprogress reporting APIs would make some noise (perhaps with a bespoke\nGUC option) so that we can test things in some automated manner ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"In fact, the basic problem with Perl 5's subroutines is that they're not\ncrufty enough, so the cruft leaks out into user-defined code instead, by\nthe Conservation of Cruft Principle.\" (Larry Wall, Apocalypse 6)\n\n\n",
"msg_date": "Wed, 2 Jun 2021 12:38:53 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "\n\nOn 6/2/21 6:38 PM, Alvaro Herrera wrote:\n> On 2021-Jun-02, Tomas Vondra wrote:\n> \n>> Hi,\n>>\n>> While experimenting with parallel index builds, I've noticed a somewhat\n>> strange behavior of pg_stat_progress_create_index when a btree index is\n>> built with parallel workers - some of the phases seem to be missing.\n> \n> Hmm, that's odd. I distinctly recall testing the behavior with parallel\n> workers, and it is mentioned by Rahila in the original thread, and I\n> think we tried to ensure that it was sane. I am surprised to learn that\n> there's such a large gap.\n> \n\nYeah, I quickly skimmed [1] which I think is the thread you're referring\nto, and there is some discussion about parallel workers. I haven't read\nit in detail, though.\n\n[1]\nhttps://www.postgresql.org/message-id/20181220220022.mg63bhk26zdpvmcj%40alvherre.pgsql\n\n> I'll go have a deeper look at the provided patch and try to get it\n> backpatched.\n> \n> I think it would be valuable to have some kind of test mode where the\n> progress reporting APIs would make some noise (perhaps with a bespoke\n> GUC option) so that we can test things in some automated manner ...\n> \n\nTrue, but how would that GUC work? Would it add something into the\nsystem view, or just log something?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 2 Jun 2021 22:20:11 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On 2021-Jun-02, Tomas Vondra wrote:\n\n> On 6/2/21 6:38 PM, Alvaro Herrera wrote:\n\n> > Hmm, that's odd. I distinctly recall testing the behavior with parallel\n> > workers, and it is mentioned by Rahila in the original thread, and I\n> > think we tried to ensure that it was sane. I am surprised to learn that\n> > there's such a large gap.\n> \n> Yeah, I quickly skimmed [1] which I think is the thread you're referring\n> to, and there is some discussion about parallel workers. I haven't read\n> it in detail, though.\n> \n> [1]\n> https://www.postgresql.org/message-id/20181220220022.mg63bhk26zdpvmcj%40alvherre.pgsql\n\nWell, it is quite possible that we found *some* problems with parallel\nworkers but not all of them :-)\n\n> > I think it would be valuable to have some kind of test mode where the\n> > progress reporting APIs would make some noise (perhaps with a bespoke\n> > GUC option) so that we can test things in some automated manner ...\n> \n> True, but how would that GUC work? Would it add something into the\n> system view, or just log something?\n\nWith the GUC turned on, emit some sort of message (maybe at INFO level)\nwhenever some subset of the progress parameters changes. This makes it\neasy to compare the progress of any command with the expected set of\nmessages.\n\nHowever, it's not very clear which parameters are observed\nfor changes (you can't do it for all params, because you'd get one for\neach block in some cases, and that's unworkable). The way we have #defined\nthe parameters makes it difficult to annotate parameters with flag bits;\nwe could have something \n\n#ifdef USE_ASSERT_CHECKING\n#define PROGRESS_LOG_CHANGES 0x70000000\n#else\n#define PROGRESS_LOG_CHANGES 0x0\n#endif\n#define PROGRESS_CLUSTER_PHASE (1 | PROGRESS_LOG_CHANGES)\n\nand the progress-reporting knows to mask-out the LOG_CHANGES bit before\nstoring the value in memory, but also knows to emit the log output if\nthat's enabled and the LOG_CHANGES bit is present. 
(The assertion flag\nwould be tested at compile time to avoid a performance hit in production\nbuilds.)\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)\n\n\n",
"msg_date": "Wed, 2 Jun 2021 16:33:54 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 1:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Wed, 2 Jun 2021 at 17:42, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Nice. I gave it a try on the database I'm experimenting with, and it\n> > seems to be working fine. Please add it to the next CF.\n>\n> Thanks, cf available here: https://commitfest.postgresql.org/33/3149/\n>\n\nThe patch looks OK to me. It seems apparent that the lines added by\nthe patch are missing from the current source in the parallel case.\n\nI tested with and without the patch, using the latest PG14 source as\nof today, and can confirm that without the patch applied, the \"sorting\nlive tuples\" phase is not reported in the parallel-case, but with the\npatch applied it then does get reported in that case. I also confirmed\nthat, as you said, the patch only addresses the usual case where the\nparallel leader participates in the parallel operation.\nWhat is slightly puzzling to me (and perhaps digging deeper will\nreveal it) is why this \"sorting live tuples\" phase seems so short in\nthe serial case compared to the parallel case?\nFor example, in my test I created an index on a column of a table\nhaving 10 million records, and it took about 40 seconds, during which\nthe \"sorting live tuples\" phase seemed to take about 8 seconds. Yet\nfor the serial case, index creation took about 75 seconds, during\nwhich the \"sorting live tuples\" phase seemed to take about 1 second.\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 4 Jun 2021 17:25:53 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 5:25 PM Greg Nancarrow <gregn4422@gmail.com> wrote:\n>\n> What is slightly puzzling to me (and perhaps digging deeper will\n> reveal it) is why this \"sorting live tuples\" phase seems so short in\n> the serial case compared to the parallel case?\n> For example, in my test I created an index on a column of a table\n> having 10 million records, and it took about 40 seconds, during which\n> the \"sorting live tuples\" phase seemed to take about 8 seconds. Yet\n> for the serial case, index creation took about 75 seconds, during\n> which the \"sorting live tuples\" phase seemed to take about 1 second.\n>\n\nSeems to be because in the serial case, the sort occurs after the scan\nis complete (obviously) but in the parallel case, the scan and sort\nare combined, so (after patch application) a portion of the then\nreported \"sorting live tuples\" phase is actually \"scanning table\".\n\nRegards,\nGreg Nancarrow\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 4 Jun 2021 23:12:37 +1000",
"msg_from": "Greg Nancarrow <gregn4422@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On 2021-Jun-04, Greg Nancarrow wrote:\n\n> I tested with and without the patch, using the latest PG14 source as\n> of today, and can confirm that without the patch applied, the \"sorting\n> live tuples\" phase is not reported in the parallel-case, but with the\n> patch applied it then does get reported in that case. I also confirmed\n> that, as you said, the patch only addresses the usual case where the\n> parallel leader participates in the parallel operation.\n> What is slightly puzzling to me (and perhaps digging deeper will\n> reveal it) is why this \"sorting live tuples\" phase seems so short in\n> the serial case compared to the parallel case?\n> For example, in my test I created an index on a column of a table\n> having 10 million records, and it took about 40 seconds, during which\n> the \"sorting live tuples\" phase seemed to take about 8 seconds. Yet\n> for the serial case, index creation took about 75 seconds, during\n> which the \"sorting live tuples\" phase seemed to take about 1 second.\n\nI think the reason is that scanning the table is not just scanning the\ntable -- it is also feeding tuples to tuplesort, which internally is\nalready sorting them as it receives them. So by the time you're done\nscanning the relation, some (large) fraction of the sorting work is\nalready done, which is why the \"sorting\" phase is so short.\n\n\nTracing sort is not easy. We discussed this earlier; see\nhttps://postgr.es/m/20181218210159.xtkltzm7flrwsm55@alvherre.pgsql\nfor example.\n\n-- \nÁlvaro Herrera Valdivia, Chile\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:53:09 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
},
{
"msg_contents": "On 2021-Jun-04, Greg Nancarrow wrote:\n\n> On Thu, Jun 3, 2021 at 1:49 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Wed, 2 Jun 2021 at 17:42, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > >\n> > > Nice. I gave it a try on the database I'm experimenting with, and it\n> > > seems to be working fine. Please add it to the next CF.\n> >\n> > Thanks, cf available here: https://commitfest.postgresql.org/33/3149/\n> \n> The patch looks OK to me. It seems apparent that the lines added by\n> the patch are missing from the current source in the parallel case.\n> \n> I tested with and without the patch, using the latest PG14 source as\n> of today, and can confirm that without the patch applied, the \"sorting\n> live tuples\" phase is not reported in the parallel-case, but with the\n> patch applied it then does get reported in that case. I also confirmed\n> that, as you said, the patch only addresses the usual case where the\n> parallel leader participates in the parallel operation.\n\nSo, with Matthias' patch applied and some instrumentation to log (some)\nparameter updates, this is what I get on building an index in parallel.\nThe \"subphase\" is parameter 10:\n\n2021-06-09 17:04:30.692 -04 19194 WARNING: updating param 0 to 1\n2021-06-09 17:04:30.692 -04 19194 WARNING: updating param 6 to 0\n2021-06-09 17:04:30.692 -04 19194 WARNING: updating param 8 to 403\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 9 to 2\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 10 to 1\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 11 to 0\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 15 to 0\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 10 to 2\n2021-06-09 17:04:30.696 -04 19194 WARNING: updating param 15 to 486726\n2021-06-09 17:04:37.418 -04 19194 WARNING: updating param 10 to 3\t<-- this one is new\n2021-06-09 17:04:42.215 -04 19194 WARNING: updating param 11 to 
110000000\n2021-06-09 17:04:42.215 -04 19194 WARNING: updating param 15 to 0\n2021-06-09 17:04:42.215 -04 19194 WARNING: updating param 10 to 3\n2021-06-09 17:04:42.237 -04 19194 WARNING: updating param 10 to 5\n\nThe thing to note is that we set subphase to 3 twice. The first of\nthose is added by the patch to _bt_parallel_scan_and_sort. The second\nis in _bt_leafbuild, just before setting the subphase to LEAF_LOAD. So\nthe change is that we set the subphase to \"sorting live tuples\" five\nseconds ahead of what we were doing previously. Seems ok. (We could\nalternatively skip the progress update call in _bt_leafbuild; but those\ncalls are so cheap that adding a conditional jump is almost as\nexpensive.)\n\n(The other potential problem might be to pointlessly invoke the progress\nupdate calls when in a worker. But that's already covered because only\nthe leader passes progress=true to _bt_parallel_scan_and_sort.)\n\nI'll push now.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n",
"msg_date": "Fri, 11 Jun 2021 17:24:47 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_progress_create_index vs. parallel index builds"
}
] |
[
{
"msg_contents": "Working with users over the years, some have large libraries of server\nside code sometimes consisting of 100k+ lines of code over 1000+ functions\nand procedures. This usually comes from a migration of a commercial\ndatabase like Oracle where it was best practice to put all of your\nbusiness logic into stored procedures. In these types of apps, just\nmanaging the code is a challenge. To help classify objects, schemas\nare used, but you are at the mercy of a naming convention to show\nassociation. For example, a frequent naming convention would be having\nrelated schemas with the names of foo_bar and foo_baz. For devs, that's\nakin to keeping a file like xlog.c in a directory structure like\nbackend_access_transam instead of backend/access/transam. IMHO, having\na hierarchy makes it simpler to reason about related code bits.\n\nThe SQL spec does have a concept of modules that help address this. It's\ndefined as a persistent object within a schema that contains one or more\nroutines. It also defines other things like local temporary tables and\npath specifications. There are other databases like DB2 that have\nimplemented module support each with their own way of defining the\nroutines within the module. The spec doesn't really give guidance on\nhow to manipulate the objects within the module.\n\nAttached is a POC patch for modules. I modeled it as a sub-schema because\nthat is more what it seems like to me. It adds additional columns to\npg_namespace and allows for 3-part (or 4 with the database name) naming\nof objects within the module. 
This simple example works with the patch.\n\nCREATE SCHEMA foo;\nCREATE MODULE foo.bar\n CREATE FUNCTION hello() RETURNS text\n LANGUAGE sql\n RETURN 'hello'\n CREATE FUNCTION world() RETURNS text\n LANGUAGE sql\n RETURN 'world';\nSELECT foo.bar.hello();\n\nQuestions\n- Do we want to add module support?\n\n- If we do, should it be implemented as a type of namespace or should it\n be its own object type that lives in something like pg_module?\n\n- How should users interact with objects within a module? They could be\n mostly independent like the current POC or we can introduce a path like\n ALTER MODULE foo ADD FUNCTION blah\n\n--Jim",
"msg_date": "Wed, 2 Jun 2021 09:38:39 -0400",
"msg_from": "Jim Mlodgenski <jimmy76@gmail.com>",
"msg_from_op": true,
"msg_subject": "Support for CREATE MODULE?"
},
{
"msg_contents": "Jim Mlodgenski <jimmy76@gmail.com> writes:\n> Questions\n> - Do we want to add module support?\n\nCertainly many people have asked for that, or things like that.\n\n> - If we do, should it be implemented as a type of namespace or should it\n> be its own object type that lives in something like pg_module?\n\nWhile I didn't read the actual patch, your sketch just above this makes\nme want to run away screaming. In the first place, what do you think\nthe primary key of pg_namespace is now? But the bigger problem is that\nsub-namespaces just do not work in SQL syntax. Back when we first added\nschema support, I had some ambitions towards allowing nested schemas,\nwhich is a big part of the reason why pg_namespace is named that and not\npg_schema. But the idea fell apart after I understood the syntactic\nambiguities it'd introduce. It's already quite hard to tell which part\nof a multiply.qualified.name is which, given that SQL says that you can\noptionally put a \"catalog\" (database) name in front of the others.\nI really doubt there is a way to shoehorn sub-schemas in there without\ncreating terrible ambiguities. Is \"a.b.c\" a reference to object c in\nschema b in database a, or is it a reference to object c in sub-schema b\nin schema a? This is why we've ended up with bastard syntax like\n(table.column).subcolumn.\n\n> - How should users interact with objects within a module? They could be\n> mostly independent like the current POC or we can introduce a path like\n> ALTER MODULE foo ADD FUNCTION blah\n\nI wonder whether it'd be better to consider modules as a kind of\nextension, or at least things with the same sort of ownership relations\nas extensions have.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 09:58:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 9:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> In the first place, what do you think the primary key of pg_namespace is now?\n\nIn the patch the unique constraint is (nspname, nspnamespace) which is\ncertainly awkward. I initially went down the pg_module route to avoid\nadding another catalog, but in retrospect, that may be a cleaner way.\n\n\n> It's already quite hard to tell which part\n> of a multiply.qualified.name is which, given that SQL says that you can\n> optionally put a \"catalog\" (database) name in front of the others.\n> I really doubt there is a way to shoehorn sub-schemas in there without\n> creating terrible ambiguities. Is \"a.b.c\" a reference to object c in\n> schema b in database a, or is it a reference to object c in sub-schema b\n> in schema a?\n\nThat was the area I had the most difficulty reasoning about. I tried to make\nsome simplifying assumptions by checking if \"a\" was the current database.\nSince we don't support cross database access, if it was not, I assumed \"a\"\nwas a schema. I'm not sure if that would be valid, but it did scope things\nto a more manageable problem.\n\n> I wonder whether it'd be better to consider modules as a kind of\n> extension, or at least things with the same sort of ownership relations\n> as extensions have.\n\nThat would solve the problem of associating objects which is the larger\nproblem for users today. The objects can all live in their respective\nschemas with the module tying them all together.\n\n\n",
"msg_date": "Wed, 2 Jun 2021 10:43:10 -0400",
"msg_from": "Jim Mlodgenski <jimmy76@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On Wed, Jun 2, 2021 at 10:43:10AM -0400, Jim Mlodgenski wrote:\n> On Wed, Jun 2, 2021 at 9:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > In the first place, what do you think the primary key of pg_namespace is now?\n> \n> In the patch the unique constraint is (nspname, nspnamespace) which is\n> certainly awkward. I initially went down the pg_module route to avoid\n> adding another catalog, but in retrospect, that may be a cleaner way.\n> \n> \n> > It's already quite hard to tell which part\n> > of a multiply.qualified.name is which, given that SQL says that you can\n> > optionally put a \"catalog\" (database) name in front of the others.\n> > I really doubt there is a way to shoehorn sub-schemas in there without\n> > creating terrible ambiguities. Is \"a.b.c\" a reference to object c in\n> > schema b in database a, or is it a reference to object c in sub-schema b\n> > in schema a?\n> \n> That was the area I had the most difficult part to reason about. I tried to make\n> some simplifying assumptions by checking if \"a\" was the current database.\n> Since we don't support cross database access, if it was not, I assumed \"a\"\n> was a schema. I not sure if that would be valid, but it did scope things\n> to a more manageable problem.\n\nIf we go in this direction, I assume we would just disallow a schema\nname matching the database name. CREATE DATABASE with TEMPLATE would\nhave to check that. Also the common case where you create a database\nname to match the user name, and also a schema inside to match the\nusername, would have to be disallowed, e.g. creating a 'postgres' schema\nto match the 'postgres' user in the 'postgres' database.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 11:07:30 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On 6/2/21 10:43 AM, Jim Mlodgenski wrote:\n> On Wed, Jun 2, 2021 at 9:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder whether it'd be better to consider modules as a kind of\n>> extension, or at least things with the same sort of ownership relations\n>> as extensions have.\n> \n> That would solve the problem of associating objects which is the larger\n> problem for users today. The objects can all live in their respective\n> schemas with the module tying them all together.\n\n\nMaybe something similar to \"CREATE EXTENSION ... FROM unpackaged\"?\n\nSomething like:\nCREATE EXTENSION myfoo; /* shell extension */\nALTER EXTENSION myfoo ADD type ...;\nALTER EXTENSION myfoo ADD function ...;\n...\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n",
"msg_date": "Wed, 2 Jun 2021 11:11:42 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> If we go in this direction, I assume we would just disallow a schema\n> name matching the database name.\n\nThat seems quite impossible to enforce.\n\nregression=# create database d1;\nCREATE DATABASE\nregression=# alter database d1 rename to d2;\nALTER DATABASE\n\nThe system had no way to know that d1 doesn't contain a schema named d2.\nAnd you can't fix that by restricting the ALTER to be done on the\ncurrent database:\n\nregression=# \\c d2\nYou are now connected to database \"d2\" as user \"postgres\".\nd2=# alter database d2 rename to d3;\nERROR: current database cannot be renamed\n\nBetween that and the point that this restriction would certainly break\nexisting installations, this is a non-starter.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 11:14:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On Wed, Jun 2, 2021, at 16:43, Jim Mlodgenski wrote:\n> On Wed, Jun 2, 2021 at 9:58 AM Tom Lane <tgl@sss.pgh.pa.us <mailto:tgl%40sss.pgh.pa.us>> wrote:\n> > I wonder whether it'd be better to consider modules as a kind of\n> > extension, or at least things with the same sort of ownership relations\n> > as extensions have.\n> \n> That would solve the problem of associating objects which is the larger\n> problem for users today. The objects can all live in their respective\n> schemas with the module tying them all together.\n\nI like the idea of somehow using extensions.\n\nRight now, extensions can only be added from the command-line, via `make install`.\n\nBut maybe a new extension could be packaged from the SQL prompt, out of existing database objects that are not already part of an extension?\n\nMaybe the interface could be:\n\ninit_new_extension(extension_name text) function, to register a new empty extension.\nadd_object_to_extension(extension_name text, type text, object_names text[], object_args text[])\n\nThen, if dropping the extension, all objects would be dropped, and if creating the extension, all objects would be restored.\n\nI don't have an idea on how to handle update scripts, but since it's not mandatory to provide extension update scripts, maybe that's not a problem.\n\n/Joel",
"msg_date": "Wed, 02 Jun 2021 17:22:11 +0200",
"msg_from": "\"Joel Jacobson\" <joel@compiler.org>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On 2021-Jun-02, Jim Mlodgenski wrote:\n\n> Attached is a POC patch for modules. I modeled it as a sub-schema because\n> that is more what it seems like to me. It adds additional columns to\n> pg_namespace and allows for 3-part (or 4 with the database name) naming\n> of objects within the module. This simple example works with the patch.\n\nGiven the downthread discussion, this idea doesn't seem workable.\nPeople are now discussing \"what if the module is some kind of\nextension\". But to me that seems to go against the grain; you'd have to\nimplement a ton of stuff in order to let \"extension-modules\" be\ninstalled without on-disk foo.control files.\n\nBut what if the module is just a particular kind of *namespace*? I\nmean, what if CREATE MODULE is implemented by creating a row in\npg_namespace with nspkind='m'? So a pg_namespace row can refer to\neither a regular schema (nspkind='s') or a module. In a schema you can\ncreate objects of any kind just like today, but in a module you're\nrestricted to having only functions (and maybe also operators? other\ntypes of objects?).\n\nThen, a qualified object name foo.bar() can refer to either the routine\nbar() in schema foo, or routine bar in module foo. To the low-level\ncode it's pretty much the same thing (look the namespace in pg_namespace\njust as today).\n\nWhat other properties do you want modules to have? Are there \"private\"\nfunctions? (What *is* a private function in this context? I mean, how\ndoes \"being in a module\" interact with object lookup rules? Does\nplpgsql have to be aware that a routine is in a module?)\nAre there module-scoped variables? (If so, you probably want Pavel\nStehule's variable patch pushed ahead of time).\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W\n\n\n",
"msg_date": "Wed, 2 Jun 2021 15:08:44 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On 02.06.21 16:43, Jim Mlodgenski wrote:\n>> It's already quite hard to tell which part\n>> of a multiply.qualified.name is which, given that SQL says that you can\n>> optionally put a \"catalog\" (database) name in front of the others.\n>> I really doubt there is a way to shoehorn sub-schemas in there without\n>> creating terrible ambiguities. Is \"a.b.c\" a reference to object c in\n>> schema b in database a, or is it a reference to object c in sub-schema b\n>> in schema a?\n> That was the area I had the most difficult part to reason about. I tried to make\n> some simplifying assumptions by checking if \"a\" was the current database.\n> Since we don't support cross database access, if it was not, I assumed \"a\"\n> was a schema. I not sure if that would be valid, but it did scope things\n> to a more manageable problem.\n\nGiven that, as you said, the concept of modules is in the SQL standard, \nthere is surely some guidance in there about how this is supposed to \naffect name resolution. So let's start with that. Maybe we won't like \nit in the end or whatever, but we should surely look there first.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 14:49:35 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 8:49 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> Given that, as you said, the concept of modules is in the SQL standard,\n> there is surely some guidance in there about how this is supposed to\n> affect name resolution. So let's start with that. Maybe we won't like\n> it in the end or whatever, but we should surely look there first.\n\nStudying the spec further, catalog/schema/module are all used to\nidentify a module-level routine. I don't see it spelled out that\nis needs to be in that format of catalog.schema.module.routine to\nfully qualify the routine, but it would likely be awkward for users\nto come up with an alternative syntax like\n(catalog.schema.module).routine or catalog.scheme.module->routine\n\nThe way the spec is worded, I read it as that schemas take precedence\nover modules regarding path resolution. So for example with 2-level\nnaming if there is a schema 'foo' and a module 'public.foo' both with\nfunctions 'bar' 'foo.bar' would refer to the schema-level function not\nthe module-level function. I've not found guidance on throwing catalog\ninto the mix and 3-level naming. Say we had a catalog 'postgres' with a\nschema 'foo' with a function 'bar' and a schema 'postgres' with a module\n'foo' with a function 'bar'. What would 'postgres.foo.bar' refer to? If\nthe SQL was executed from a catalog other than 'postgres', we'd have no\nway of knowing if 'foo.bar' existed there. So if it's implementation\ndependent, saying schemas take precedence over catalogs may make sense\nand 'postgres.foo.bar' refers to the module-level function in the\n'postgres' schema.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 15:31:54 -0400",
"msg_from": "Jim Mlodgenski <jimmy76@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Support for CREATE MODULE?"
},
{
"msg_contents": "Hi\n\nst 2. 6. 2021 v 15:39 odesílatel Jim Mlodgenski <jimmy76@gmail.com> napsal:\n\n> Working with users over the years, some have large libraries of server\n> side code sometimes consisting of 100k+ lines of code over 1000+ functions\n> and procedures. This usually comes from a migration of a commercial\n> database like Oracle where it was best practice to put all of your\n> business logic into stored procedures. In these types of apps, just\n> managing the code is a challenge. To help classify objects, schemas\n> are used, but you are at the mercy of a naming convention to show\n> association. For example, a frequent naming convention would be having\n> related schemas with the names of foo_bar and foo_baz. For devs, that's\n> akin to keeping a file like xlog.c in a directory structure like\n> backend_access_transam instead of backend/access/transam. IMHO, having\n> a hierarchy makes it simpler to reason about related code bits.\n>\n> The SQL spec does have a concept of modules that help address this. It's\n> defined as a persistent object within a schema that contains one or more\n> routines. It also defines other things like local temporary tables and\n> path specifications. There are other databases like DB2 that have\n> implemented module support each with their own way of defining the\n> routines within the module. The spec doesn't really give guidance on\n> how to manipulate the objects within the module.\n>\n> Attached is a POC patch for modules. I modeled it as a sub-schema because\n> that is more what it seems like to me. It adds additional columns to\n> pg_namespace and allows for 3-part (or 4 with the database name) naming\n> of objects within the module. 
This simple example works with the patch.\n>\n> CREATE SCHEMA foo;\n> CREATE MODULE foo.bar\n> CREATE FUNCTION hello() RETURNS text\n> LANGUAGE sql\n> RETURN 'hello'\n> CREATE FUNCTION world() RETURNS text\n> LANGUAGE sql\n> RETURN 'world';\n> SELECT foo.bar.hello();\n>\n> Questions\n> - Do we want to add module support?\n>\n> - If we do, should it be implemented as a type of namespace or should it\n> be its own object type that lives in something like pg_module?\n>\n> - How should users interact with objects within a module? They could be\n> mostly independent like the current POC or we can introduce a path like\n> ALTER MODULE foo ADD FUNCTION blah\n>\n\nI never liked the SQL/PSM concept of modules. The possibility to assign\ndatabase objects to schema or to modules looks like schizophrenia.\n\nThere are only two advantages of modules - a) possibility to define private\nobjects, b) local scope - the objects from modules shadows external objects\nwithout dependency of search_path.\n\nBut both these features are pretty hard to implement in PL/pgSQL - where\nexpression executor is SQL executor.\n\nWithout these features I don't see strong benefits for modules.\n\nRegards\n\nPavel\n\n\n\n>\n> --Jim\n>",
"msg_date": "Fri, 4 Jun 2021 22:09:26 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Support for CREATE MODULE?"
}
]
[
{
"msg_contents": "I just had a case where a new user was slightly confused by our\ninstallation \"Short Version\" instructions. I think the confusion would\nbe lessened by adding a couple of comments, as in the attached patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 2 Jun 2021 16:43:42 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "improve installation short version"
},
{
"msg_contents": "\n\n> On Jun 2, 2021, at 1:43 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> <short-version-comments.patch>\n\nIt's not the fault of your patch, but the docs seem to be misleading in ways that your comments don't fix. (Or perhaps my Windows knowledge is just too lacking to realize why these instructions are ok?)\n\nPrior to where your patch makes changes, the docs say, \"If you are building PostgreSQL for Microsoft Windows, read this chapter if you intend to build with MinGW or Cygwin\". I think going on to tell the users to use `su` is a bit odd. Does that exist in standard MinGW and Cygwin environments? I thought \"run as\" was the Windows option for this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 13:58:18 -0700",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: improve installation short version"
},
{
"msg_contents": "\nOn 6/2/21 4:58 PM, Mark Dilger wrote:\n>\n>> On Jun 2, 2021, at 1:43 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> <short-version-comments.patch>\n> It's not the fault of your patch, but the docs seem to be misleading in ways that your comments don't fix. (Or perhaps my Windows knowledge is just too lacking to realize why these instructions are ok?)\n>\n> Prior to where your patch makes changes, the docs say, \"If you are building PostgreSQL for Microsoft Windows, read this chapter if you intend to build with MinGW or Cygwin\". I think going on to tell the users to use `su` is a bit odd. Does that exist in standard MinGW and Cygwin environments? I thought \"run as\" was the Windows option for this.\n>\n\nYes, good point. We should fix that. Yes, \"runas\" is a sort of su.\nThere's no adduser either.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 2 Jun 2021 17:27:55 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: improve installation short version"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 6/2/21 4:58 PM, Mark Dilger wrote:\n>> Prior to where your patch makes changes, the docs say, \"If you are building PostgreSQL for Microsoft Windows, read this chapter if you intend to build with MinGW or Cygwin\". I think going on to tell the users to use `su` is a bit odd. Does that exist in standard MinGW and Cygwin environments? I thought \"run as\" was the Windows option for this.\n\n> Yes, good point. We should fix that. Yes, \"runas\" is a sort of su.\n> There's no adduser either.\n\nThere's a whole lot of Unix systems that don't spell that command\nas \"adduser\", either. That whole recipe has to be understood as\na guide, not something you can blindly copy-and-paste.\n\nMaybe what we really need is an initial disclaimer saying something\nalong the lines of \"Here's approximately what you need to do; adapt\nthese commands per local requirements.\"\n\nAnd then, perhaps, change the last line to \"For more detail, see\nthe rest of this chapter\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 02 Jun 2021 17:36:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: improve installation short version"
},
{
"msg_contents": "On 6/2/21 5:36 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 6/2/21 4:58 PM, Mark Dilger wrote:\n>>> Prior to where your patch makes changes, the docs say, \"If you are building PostgreSQL for Microsoft Windows, read this chapter if you intend to build with MinGW or Cygwin\". I think going on to tell the users to use `su` is a bit odd. Does that exist in standard MinGW and Cygwin environments? I thought \"run as\" was the Windows option for this.\n>> Yes, good point. We should fix that. Yes, \"runas\" is a sort of su.\n>> There's no adduser either.\n> There's a whole lot of Unix systems that don't spell that command\n> as \"adduser\", either. That whole recipe has to be understood as\n> a guide, not something you can blindly copy-and-paste.\n>\n> Maybe what we really need is an initial disclaimer saying something\n> along the lines of \"Here's approximately what you need to do; adapt\n> these commands per local requirements.\"\n>\n> And then, perhaps, change the last line to \"For more detail, see\n> the rest of this chapter\".\n>\n> \t\t\t\n\n\n\nOk, patch attached\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 3 Jun 2021 08:33:05 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: improve installation short version"
},
{
"msg_contents": "On 02.06.21 23:27, Andrew Dunstan wrote:\n>>> <short-version-comments.patch>\n>> It's not the fault of your patch, but the docs seem to be misleading in ways that your comments don't fix. (Or perhaps my Windows knowledge is just too lacking to realize why these instructions are ok?)\n>>\n>> Prior to where your patch makes changes, the docs say, \"If you are building PostgreSQL for Microsoft Windows, read this chapter if you intend to build with MinGW or Cygwin\". I think going on to tell the users to use `su` is a bit odd. Does that exist in standard MinGW and Cygwin environments? I thought \"run as\" was the Windows option for this.\n>>\n> \n> Yes, good point. We should fix that. Yes, \"runas\" is a sort of su.\n> There's no adduser either.\n\nI think those instructions were written before \"sudo\" became common. \nMaybe we should update them a bit.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:01:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: improve installation short version"
}
]
[
{
"msg_contents": "Commit 19890a064 changed pg_create_logical_replication_slot() to allow\ndecoding of two-phase transactions, but did not extend the\nCREATE_REPLICATION_SLOT command to support it. Strangely, it does\nextend the CreateReplicationSlotCmd struct to add a \"two_phase\" field,\nbut doesn't set it anywhere.\n\nThere were patches[1] from around the time of the commit to support\nCREATE_REPLICATION_SLOT as well.\n\nIs there a reason to support two-phase decoding, but not with the\nreplication protocol? If so, why change the CreateReplicationSlotCmd\nstructure as though we will support it?\n\nRegards,\n\tJeff Davis\n\n[1] \nhttps://www.postgresql.org/message-id/CAFPTHDZ2rigOf0oM0OBhv1yRmyMO5-SQfT9FCLYj-Jp9ShXB3A@mail.gmail.com\n\n\n\n\n",
"msg_date": "Wed, 02 Jun 2021 16:17:58 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 4:48 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> Commit 19890a064 changed pg_create_logical_replication_slot() to allow\n> decoding of two-phase transactions, but did not extend the\n> CREATE_REPLICATION_SLOT command to support it. Strangely, it does\n> extend the CreateReplicationSlotCmd struct to add a \"two_phase\" field,\n> but doesn't set it anywhere.\n>\n> There were patches[1] from around the time of the commit to support\n> CREATE_REPLICATION_SLOT as well.\n>\n> Is there a reason to support two-phase decoding, but not with the\n> replication protocol? If so, why change the CreateReplicationSlotCmd\n> structure as though we will support it?\n>\n\nThe idea is to support two_phase via protocol with a subscriber-side\nwork where we can test it as well. The code to support it via\nreplication protocol is present in the first patch of subscriber-side\nwork [1] which uses that code as well. Basically, we don't have a good\nway to test it without subscriber-side work so decided to postpone it\ntill the corresponding work is done. I think we can remove the change\nin CreateReplicationSlotCmd, that is a leftover. If we have to support\nit via protocol, then at the minimum, we need to enhance\npg_recvlogical so that the same can be tested.\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPt7wnctZpfhaLyuPA0mXDAtuw7DsMUfw3TePJLxqTArjA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 09:29:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Thu, 2021-06-03 at 09:29 +0530, Amit Kapila wrote:\n> The idea is to support two_phase via protocol with a subscriber-side\n> work where we can test it as well. The code to support it via\n> replication protocol is present in the first patch of subscriber-side\n> work [1] which uses that code as well. Basically, we don't have a\n> good\n> way to test it without subscriber-side work so decided to postpone it\n> till the corresponding work is done.\n\nThank you for clarifying.\n\nRight now, it feels a bit incomplete. If it's not much work, I\nrecommend breaking out the CREATE_REPLICATION_SLOT changes and updating\npg_recvlogical, so that it can go in v14 (and\npg_create_logical_replication_slot() will match\nCREATE_REPLICATION_SLOT). But if that's complicated or controversial,\nthen I'm fine waiting for the other work to complete.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 03 Jun 2021 09:38:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 10:08 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2021-06-03 at 09:29 +0530, Amit Kapila wrote:\n> > The idea is to support two_phase via protocol with a subscriber-side\n> > work where we can test it as well. The code to support it via\n> > replication protocol is present in the first patch of subscriber-side\n> > work [1] which uses that code as well. Basically, we don't have a\n> > good\n> > way to test it without subscriber-side work so decided to postpone it\n> > till the corresponding work is done.\n>\n> Thank you for clarifying.\n>\n> Right now, it feels a bit incomplete. If it's not much work, I\n> recommend breaking out the CREATE_REPLICATION_SLOT changes and updating\n> pg_recvlogical, so that it can go in v14 (and\n> pg_create_logical_replication_slot() will match\n> CREATE_REPLICATION_SLOT). But if that's complicated or controversial,\n> then I'm fine waiting for the other work to complete.\n>\n\nI think we can try but not sure if we can get it by then. So, here is\nmy suggestion:\na. remove the change in CreateReplicationSlotCmd\nb. prepare the patches for protocol change and pg_recvlogical. This\nwill anyway include the change we removed as part of (a).\n\nDoes that sound reasonable?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 08:36:26 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 1:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> I think we can try but not sure if we can get it by then. So, here is\n> my suggestion:\n> a. remove the change in CreateReplicationSlotCmd\n> b. prepare the patches for protocol change and pg_recvlogical. This\n> will anyway include the change we removed as part of (a).\n\n\nAttaching two patches:\n1. Removes two-phase from CreateReplicationSlotCmd\n2. Adds two-phase option in CREATE_REPLICATION_SLOT command.\n\nI will send a patch to update pg_recvlogical next week.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Fri, 4 Jun 2021 18:59:35 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Fri, 2021-06-04 at 08:36 +0530, Amit Kapila wrote:\n> I think we can try but not sure if we can get it by then. So, here is\n> my suggestion:\n> a. remove the change in CreateReplicationSlotCmd\n> b. prepare the patches for protocol change and pg_recvlogical. This\n> will anyway include the change we removed as part of (a).\n\nYes, sounds good.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Fri, 04 Jun 2021 11:09:41 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 2:29 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> On Fri, Jun 4, 2021 at 1:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > I think we can try but not sure if we can get it by then. So, here is\n> > my suggestion:\n> > a. remove the change in CreateReplicationSlotCmd\n> > b. prepare the patches for protocol change and pg_recvlogical. This\n> > will anyway include the change we removed as part of (a).\n>\n>\n> Attaching two patches:\n> 1. Removes two-phase from CreateReplicationSlotCmd\n>\n\nPushed the above patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 10:46:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 3:17 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> Pushed the above patch.\n\nHere's an updated patchset that adds back in the option for two-phase\nin CREATE_REPLICATION_SLOT command and a second patch that adds\nsupport for\ntwo-phase decoding in pg_recvlogical.\n\nregards,\nAjin Cherian",
"msg_date": "Tue, 8 Jun 2021 17:41:06 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Tue, 2021-06-08 at 17:41 +1000, Ajin Cherian wrote:\n> Here's an updated patchset that adds back in the option for two-phase\n> in CREATE_REPLICATION_SLOT command and a second patch that adds\n> support for\n> two-phase decoding in pg_recvlogical.\n\nA few things:\n\n* I suggest putting the TWO_PHASE keyword after the LOGICAL keyword\n* Document the TWO_PHASE keyword in doc/src/sgml/protocol.sgml\n* Cross check that --two-phase is specified only if --create-slot is\nspecified\n* Maybe an Assert(!(two_phase && is_physical)) in\nCreateReplicationSlot()?\n\nOther than that, it looks good, and it works as I expect it to.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 08 Jun 2021 13:23:45 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 6:23 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, 2021-06-08 at 17:41 +1000, Ajin Cherian wrote:\n> > Here's an updated patchset that adds back in the option for two-phase\n> > in CREATE_REPLICATION_SLOT command and a second patch that adds\n> > support for\n> > two-phase decoding in pg_recvlogical.\n>\n> A few things:\n>\n> * I suggest putting the TWO_PHASE keyword after the LOGICAL keyword\n> * Document the TWO_PHASE keyword in doc/src/sgml/protocol.sgml\n> * Cross check that --two-phase is specified only if --create-slot is\n> specified\n> * Maybe an Assert(!(two_phase && is_physical)) in\n> CreateReplicationSlot()?\n>\n> Other than that, it looks good, and it works as I expect it to.\n\n\nUpdated. Do have a look.\n\nthanks,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Wed, 9 Jun 2021 20:46:13 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 1:53 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Tue, 2021-06-08 at 17:41 +1000, Ajin Cherian wrote:\n> > Here's an updated patchset that adds back in the option for two-phase\n> > in CREATE_REPLICATION_SLOT command and a second patch that adds\n> > support for\n> > two-phase decoding in pg_recvlogical.\n>\n> A few things:\n>\n> * I suggest putting the TWO_PHASE keyword after the LOGICAL keyword\n>\n\nIsn't it better to add it after LOGICAL IDENT? In docs [1], we expect\nthat way. Also, see code in libpqrcv_create_slot where we expect them\nto be together but we can change that if you still prefer to add it\nafter LOGICAL. BTW, can't we consider it to be part of\ncreate_slot_opt_list?\n\nAlso, it might be good if we can add a test in\nsrc/bin/pg_basebackup/t/030_pg_recvlogical\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-walsender.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 16:50:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 1:53 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Tue, 2021-06-08 at 17:41 +1000, Ajin Cherian wrote:\n> > > Here's an updated patchset that adds back in the option for two-phase\n> > > in CREATE_REPLICATION_SLOT command and a second patch that adds\n> > > support for\n> > > two-phase decoding in pg_recvlogical.\n> >\n> > A few things:\n> >\n> > * I suggest putting the TWO_PHASE keyword after the LOGICAL keyword\n> >\n>\n> Isn't it better to add it after LOGICAL IDENT? In docs [1], we expect\n> that way. Also, see code in libpqrcv_create_slot where we expect them\n> to be together but we can change that if you still prefer to add it\n> after LOGICAL. BTW, can't we consider it to be part of\n> create_slot_opt_list?\n>\n> Also, it might be good if we can add a test in\n> src/bin/pg_basebackup/t/030_pg_recvlogical\n>\n\nSome more points:\n1. pg_recvlogical can only send two_phase option if\n(PQserverVersion(conn) >= 140000), otherwise, it won't work for older\nversions of the server.\n2. In the main patch [1], we do send two_phase option even during\nSTART_REPLICATION for the very first time when the two_phase can be\nenabled. There are reasons as described in the worker.c why we can't\nenable it during CREATE_REPLICATION_SLOT. Now, if we want to do\nprotocol changes, I wonder why only do some changes and leave the rest\nfor the next version?\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPt7wnctZpfhaLyuPA0mXDAtuw7DsMUfw3TePJLxqTArjA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 17:27:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Wed, 2021-06-09 at 16:50 +0530, Amit Kapila wrote:\n> BTW, can't we consider it to be part of\n> create_slot_opt_list?\n\nYes, that would be better. It looks like the physical and logical slot\noptions are mixed together -- should they be separated in the grammar\nso that using an option with the wrong kind of slot would be a parse\nerror?\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 09 Jun 2021 15:12:57 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "On Wed, 2021-06-09 at 17:27 +0530, Amit Kapila wrote:\n> 2. In the main patch [1], we do send two_phase option even during\n> START_REPLICATION for the very first time when the two_phase can be\n> enabled. There are reasons as described in the worker.c why we can't\n> enable it during CREATE_REPLICATION_SLOT. \n\nI'll have to catch up on the thread to digest that reasoning and how it\napplies to decoding vs. replication. But there don't seem to be changes\nto START_REPLICATION for twophase, so I don't quite follow your point.\n\nAre you saying that we should not be able to create slots with twophase\nat all until the rest of the changes go in?\n\n> Now, if we want to do\n> protocol changes, I wonder why only do some changes and leave the\n> rest\n> for the next version?\n\nI started this thread because it's possible to create a slot a certain\nway using the SQL function create_logical_replication_slot(), but it's\nimpossible over the replication protocol. That seems inconsistent to\nme.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Wed, 09 Jun 2021 15:43:14 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Wed, 2021-06-09 at 16:50 +0530, Amit Kapila wrote:\n>> BTW, can't we consider it to be part of\n>> create_slot_opt_list?\n\n> Yes, that would be better. It looks like the physical and logical slot\n> options are mixed together -- should they be separated in the grammar\n> so that using an option with the wrong kind of slot would be a parse\n> error?\n\nThat sort of parse error is usually pretty unfriendly to users who\nmay not quite remember which options are for what; all they'll get\nis \"syntax error\" which won't illuminate anything. I'd rather let\nthe grammar accept both, and throw an appropriate error further\ndownstream.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 09 Jun 2021 18:47:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 4:13 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Wed, 2021-06-09 at 17:27 +0530, Amit Kapila wrote:\n> > 2. In the main patch [1], we do send two_phase option even during\n> > START_REPLICATION for the very first time when the two_phase can be\n> > enabled. There are reasons as described in the worker.c why we can't\n> > enable it during CREATE_REPLICATION_SLOT.\n>\n> I'll have to catch up on the thread to digest that reasoning and how it\n> applies to decoding vs. replication. But there don't seem to be changes\n> to START_REPLICATION for twophase, so I don't quite follow your point.\n>\n\nI think it is because we pass it there as an option as I have suggested\ndoing in the case of CREATE_REPLICATION_SLOT.\n\n> Are you saying that we should not be able to create slots with twophase\n> at all until the rest of the changes go in?\n>\n\nNo, the slots will be created but two_phase option will be enabled\nonly after the initial tablesync is complete.\n\n> > Now, if we want to do\n> > protocol changes, I wonder why only do some changes and leave the\n> > rest\n> > for the next version?\n>\n> I started this thread because it's possible to create a slot a certain\n> way using the SQL function create_logical_replication_slot(), but it's\n> impossible over the replication protocol. That seems inconsistent to\n> me.\n>\n\nRight, I understand that but on the protocol side, there are a few more\nthings to be considered to allow subscribers to enable two_phase.\nHowever, maybe, for now, we can do it just for create_replication_slot\nand the start_replication stuff required for subscribers can be done\nlater. I was not completely sure if that is a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 10 Jun 2021 08:38:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 9:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 4:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jun 9, 2021 at 1:53 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > >\n> > > On Tue, 2021-06-08 at 17:41 +1000, Ajin Cherian wrote:\n> > > > Here's an updated patchset that adds back in the option for two-phase\n> > > > in CREATE_REPLICATION_SLOT command and a second patch that adds\n> > > > support for\n> > > > two-phase decoding in pg_recvlogical.\n> > >\n> > > A few things:\n> > >\n> > > * I suggest putting the TWO_PHASE keyword after the LOGICAL keyword\n> > >\n> >\n> > Isn't it better to add it after LOGICAL IDENT? In docs [1], we expect\n> > that way. Also, see code in libpqrcv_create_slot where we expect them\n> > to be together but we can change that if you still prefer to add it\n> > after LOGICAL. BTW, can't we consider it to be part of\n> > create_slot_opt_list?\n\nChanged accordingly.\n\n> Some more points:\n> 1. pg_recvlogical can only send two_phase option if\n> (PQserverVersion(conn) >= 140000), otherwise, it won't work for older\n> versions of the server.\n\nUpdated accordingly.\n\nI've also modified the pg_recvlogical test case with the new option.\n\nregards,\nAjin Cherian",
"msg_date": "Thu, 10 Jun 2021 18:34:15 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Thu, Jun 10, 2021 at 2:04 PM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n\nThe new patches look mostly good apart from the below cosmetic issues.\nI think the question is whether we want to do these for PG-14 or\npostpone them till PG-15. I think these don't appear to be risky\nchanges so we can get them in PG-14 as that might help some outside\ncore solutions as appears to be the case for Jeff. The changes related\nto start_replication are too specific to the subscriber-side solution\nso we can postpone those along with the subscriber-side 2PC work.\nJeff, Ajin, what do you think?\n\nAlso, I can take care of the below cosmetic issues before committing\nif we decide to do this for PG-14.\n\nFew cosmetic issues:\n==================\n1. git diff --check shows\nsrc/bin/pg_basebackup/t/030_pg_recvlogical.pl:109: new blank line at EOF.\n\n2.\n+\n <para>\n The following example shows SQL interface that can be used to decode prepared\n transactions. Before you use two-phase commit commands, you must set\n\nSpurious line addition.\n\n3.\n/* Build query */\n appendPQExpBuffer(query, \"CREATE_REPLICATION_SLOT \\\"%s\\\"\", slot_name);\n if (is_temporary)\n appendPQExpBufferStr(query, \" TEMPORARY\");\n+\n if (is_physical)\n\nSpurious line addition.\n\n4.\n appendPQExpBuffer(query, \" LOGICAL \\\"%s\\\"\", plugin);\n+ if (two_phase && PQserverVersion(conn) >= 140000)\n+ appendPQExpBufferStr(query, \" TWO_PHASE\");\n+\n if (PQserverVersion(conn) >= 100000)\n /* pg_recvlogical doesn't use an exported snapshot, so suppress */\n appendPQExpBufferStr(query, \" NOEXPORT_SNAPSHOT\");\n\nI think it might be better to append TWO_PHASE after NOEXPORT_SNAPSHOT\nbut it doesn't matter much.\n\n5.\n+$node->safe_psql('postgres',\n+ \"BEGIN;INSERT INTO test_table values (11); PREPARE TRANSACTION 'test'\");\n\nThere is no space after BEGIN but there is a space after INSERT. For\nconsistency-sake, I will have space after BEGIN as well.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 11 Jun 2021 15:43:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 2:04 PM Ajin Cherian <itsajin@gmail.com> wrote:\n> >\n>\n> The new patches look mostly good apart from the below cosmetic issues.\n> I think the question is whether we want to do these for PG-14 or\n> postpone them till PG-15. I think these don't appear to be risky\n> changes so we can get them in PG-14 as that might help some outside\n> core solutions as appears to be the case for Jeff. The changes related\n> to start_replication are too specific to the subscriber-side solution\n> so we can postpone those along with the subscriber-side 2PC work.\n> Jeff, Ajin, what do you think?\n>\n\nSince we are exposing two-phase decoding using the\npg_create_replication_slot API, I think\nit is reasonable to expose CREATE_REPLICATION_SLOT as well. We can\nleave the subscriber side changes\nfor PG-15.\n\n> Also, I can take care of the below cosmetic issues before committing\n> if we decide to do this for PG-14.\n\nThanks,\n\nRegards,\nAjin Cherian\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 11 Jun 2021 20:26:02 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Fri, 2021-06-11 at 15:43 +0530, Amit Kapila wrote:\n> The new patches look mostly good apart from the below cosmetic\n> issues.\n> I think the question is whether we want to do these for PG-14 or\n> postpone them till PG-15. I think these don't appear to be risky\n> changes so we can get them in PG-14 as that might help some outside\n> core solutions as appears to be the case for Jeff. \n\nMy main interest here is that I'm working on replication protocol\nsupport in a rust library. While doing that, I've encountered a lot of\nrough edges (as you may have seen in my recent posts), and this patch\nfixes one of them.\n\nBut at the same time, several small changes to the protocol spread\nacross several releases introduces more opportunity for confusion.\n\nIf we are confident this is the right direction, then v14 makes sense\nfor consistency. But if waiting for v15 might result in a better\nchange, then we should probably just get it into the July CF for v15.\n\n(My apologies if my opinion has drifted a bit since this thread began.)\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 11 Jun 2021 12:26:13 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Decoding of two-phase xacts missing from\n CREATE_REPLICATION_SLOT command"
},
{
"msg_contents": "On Sat, Jun 12, 2021 at 12:56 AM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2021-06-11 at 15:43 +0530, Amit Kapila wrote:\n> > The new patches look mostly good apart from the below cosmetic\n> > issues.\n> > I think the question is whether we want to do these for PG-14 or\n> > postpone them till PG-15. I think these don't appear to be risky\n> > changes so we can get them in PG-14 as that might help some outside\n> > core solutions as appears to be the case for Jeff.\n>\n> My main interest here is that I'm working on replication protocol\n> support in a rust library. While doing that, I've encountered a lot of\n> rough edges (as you may have seen in my recent posts), and this patch\n> fixes one of them.\n>\n> But at the same time, several small changes to the protocol spread\n> across several releases introduces more opportunity for confusion.\n>\n> If we are confident this is the right direction, then v14 makes sense\n> for consistency. But if waiting for v15 might result in a better\n> change, then we should probably just get it into the July CF for v15.\n>\n\nIf that is the case, I would prefer July CF v15. The patch is almost\nready, so I'll try to get it early in the July CF. Ajin, feel free to\npost the patch after addressing some minor comments raised by me\nyesterday.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 12 Jun 2021 12:52:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Fri, Jun 11, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n\n> Also, I can take care of the below cosmetic issues before committing\n> if we decide to do this for PG-14.\n>\n> Few cosmetic issues:\n> ==================\n> 1. git diff --check shows\n> src/bin/pg_basebackup/t/030_pg_recvlogical.pl:109: new blank line at EOF.\n>\n> 2.\n> +\n> <para>\n> The following example shows SQL interface that can be used to decode prepared\n> transactions. Before you use two-phase commit commands, you must set\n>\n> Spurious line addition.\n>\n\nFixed.\n\n> 3.\n> /* Build query */\n> appendPQExpBuffer(query, \"CREATE_REPLICATION_SLOT \\\"%s\\\"\", slot_name);\n> if (is_temporary)\n> appendPQExpBufferStr(query, \" TEMPORARY\");\n> +\n> if (is_physical)\n>\n> Spurious line addition.\n>\n\nFixed.\n\n> 4.\n> appendPQExpBuffer(query, \" LOGICAL \\\"%s\\\"\", plugin);\n> + if (two_phase && PQserverVersion(conn) >= 140000)\n> + appendPQExpBufferStr(query, \" TWO_PHASE\");\n> +\n> if (PQserverVersion(conn) >= 100000)\n> /* pg_recvlogical doesn't use an exported snapshot, so suppress */\n> appendPQExpBufferStr(query, \" NOEXPORT_SNAPSHOT\");\n>\n> I think it might be better to append TWO_PHASE after NOEXPORT_SNAPSHOT\n> but it doesn't matter much.\n>\n\nI haven't changed this, I like to keep it this way.\n\n> 5.\n> +$node->safe_psql('postgres',\n> + \"BEGIN;INSERT INTO test_table values (11); PREPARE TRANSACTION 'test'\");\n>\n> There is no space after BEGIN but there is a space after INSERT. For\n> consistency-sake, I will have space after BEGIN as well.\n\nChanged this.\n\nregards,\nAjin Cherian\nFujitsu Australia",
"msg_date": "Tue, 15 Jun 2021 17:34:21 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
},
{
"msg_contents": "On Tue, Jun 15, 2021 at 5:34 PM Ajin Cherian <itsajin@gmail.com> wrote:\n\nSince we've decided to not commit this for PG-14, I've added these\npatches along with the larger patch-set for\nsubscriber side 2pc in thread [1]\n\n[1] - https://www.postgresql.org/message-id/CAHut+PuJKTNRjFre0VBufWMz9BEScC__nT+PUhbSaUNW2biPow@mail.gmail.com\n\nregards,\nAjin Cherian\n\n\n",
"msg_date": "Fri, 18 Jun 2021 13:11:52 +1000",
"msg_from": "Ajin Cherian <itsajin@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decoding of two-phase xacts missing from CREATE_REPLICATION_SLOT\n command"
}
] |
[
{
"msg_contents": "Hi all,\n\nI have noticed that the documentation for PGSSLCRLDIR is missing.\nThat seems like an oversight in f5465fa.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 12:13:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Documentation missing for PGSSLCRLDIR"
},
{
"msg_contents": "At Thu, 3 Jun 2021 12:13:22 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> I have noticed that the documentation for PGSSLCRLDIR is missing.\n> That seems like an oversight in f5465fa.\n> \n> Thoughts?\n\nUgg.. Thanks for finding that. I don't find a similar mistake in the\nsame page.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 03 Jun 2021 13:42:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Documentation missing for PGSSLCRLDIR"
},
{
"msg_contents": "> On 3 Jun 2021, at 05:13, Michael Paquier <michael@paquier.xyz> wrote:\n\n> I have noticed that the documentation for PGSSLCRLDIR is missing.\n> That seems like an oversight in f5465fa.\n\n+1 on applying this.\n\nWhile looking at this I found another nearby oversight which needs a backport\ndown to 13 where it was introduced. The PGSSLMAXPROTOCOLVERSION documentation\nis linking to the minimum protocol version docs. Fixed in the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 3 Jun 2021 14:08:02 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Documentation missing for PGSSLCRLDIR"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 02:08:02PM +0200, Daniel Gustafsson wrote:\n> While looking at this I found another nearby oversight which needs a backport\n> down to 13 where it was introduced. The PGSSLMAXPROTOCOLVERSION documentation\n> is linking to the minimum protocol version docs. Fixed in the attached.\n\nThanks, fixed this bit.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 09:45:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Documentation missing for PGSSLCRLDIR"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 01:42:20PM +0900, Kyotaro Horiguchi wrote:\n> Ugg.. Thanks for finding that. I don't find a similar mistake in the\n> same page.\n\nThanks for double-checking. Applied.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 09:50:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Documentation missing for PGSSLCRLDIR"
}
] |
[
{
"msg_contents": "Hi all,\n\nserinus has been complaining about the new gcd functions in 13~:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2021-06-03%2003%3A44%3A14\n\nThe overflow detection is going wrong the way up and down, like here:\n SELECT gcd((-9223372036854775808)::int8, (-9223372036854775808)::int8); -- overflow\n-ERROR: bigint out of range\n+ gcd\n+----------------------\n+ -9223372036854775808\n+(1 row)\n\nThat seems like a compiler bug to me as this host uses recent GCC\nsnapshots, and I cannot see a problem in GCC 10.2 on my own dev box.\nBut perhaps I am missing something?\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 3 Jun 2021 16:26:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 08:26, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> serinus has been complaining about the new gcd functions in 13~:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2021-06-03%2003%3A44%3A14\n>\n> The overflow detection is going wrong the way up and down, like here:\n> SELECT gcd((-9223372036854775808)::int8, (-9223372036854775808)::int8); -- overflow\n> -ERROR: bigint out of range\n> + gcd\n> +----------------------\n> + -9223372036854775808\n> +(1 row)\n>\n> That seems like a compiler bug to me as this host uses recent GCC\n> snapshots, and I cannot see a problem in GCC 10.2 on my own dev box.\n> But perhaps I am missing something?\n>\n\nHuh, yeah. The code is pretty clear that that should throw an error:\n\n if (arg1 == PG_INT64_MIN)\n {\n if (arg2 == 0 || arg2 == PG_INT64_MIN)\n ereport(ERROR,\n (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),\n errmsg(\"bigint out of range\")));\n\nand FWIW it works OK on my dev box with gcc 10.2.1 and the same cflags.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 3 Jun 2021 09:28:08 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Hello\n\nI build gcc version 12.0.0 20210603 (experimental) locally, then tried REL_13_STABLE with same CFLAGS as serinus\n./configure --prefix=/home/melkij/tmp/pgdev/inst CFLAGS=\"-O3 -ggdb -g3 -Wall -Wextra -Wno-unused-parameter -Wno-sign-compare -Wno-missing-field-initializers\" --enable-tap-tests --enable-cassert --enable-debug\ncheck-world passed.\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 03 Jun 2021 12:34:26 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> serinus has been complaining about the new gcd functions in 13~:\n\nmoonjelly, which also runs a bleeding-edge gcc, started to fail the same\nway at about the same time. Given that none of our code in that area\nhas changed, it's hard to think it's anything but a broken compiler.\nMaybe somebody should report that to gcc upstream?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 09:45:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "\n>> serinus has been complaining about the new gcd functions in 13~:\n>\n> moonjelly, which also runs a bleeding-edge gcc, started to fail the same\n> way at about the same time. Given that none of our code in that area\n> has changed, it's hard to think it's anything but a broken compiler.\n\n> Maybe somebody should report that to gcc upstream?\n\nYes.\n\nI will isolate a small case (hopefully) and do a report over week-end, \nafter checking that the latest version is still broken.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 15:03:36 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "\n>>> serinus has been complaining about the new gcd functions in 13~:\n>> \n>> moonjelly, which also runs a bleeding-edge gcc, started to fail the same\n>> way at about the same time. Given that none of our code in that area\n>> has changed, it's hard to think it's anything but a broken compiler.\n>\n>> Maybe somebody should report that to gcc upstream?\n>\n> Yes.\n>\n> I will isolate a small case (hopefully) and do a report over week-end, after \n> checking that the latest version is still broken.\n\nNot needed in the end, the problem has disappeared with today's \ngcc recompilation.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 5 Jun 2021 16:31:32 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "On 2021-Jun-03, Michael Paquier wrote:\n\n> Hi all,\n> \n> serinus has been complaining about the new gcd functions in 13~:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2021-06-03%2003%3A44%3A14\n\nHello, this problem is still happening; serinus' configure output says\nit's running a snapshot from 20210527, and Fabien mentioned downthread\nthat his compiler stopped complaining on 2021-06-05. Andres, maybe the\ncompiler in serinus is due for an update too?\n\nThanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php\n\n\n",
"msg_date": "Fri, 18 Jun 2021 14:38:21 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> Hello, this problem is still happening; serinus' configure output says\n> it's running a snapshot from 20210527, and Fabien mentioned downthread\n> that his compiler stopped complaining on 2021-06-05. Andres, maybe the\n> compiler in serinus is due for an update too?\n\nYeah, serinus is visibly still running one of the broken revisions:\n\nconfigure: using compiler=gcc (Debian 20210527-1) 12.0.0 20210527 (experimental) [master revision 262e75d22c3:7bb6b9b2f47:9d3a953ec4d2695e9a6bfa5f22655e2aea47a973]\n\nIt'd sure be nice if seawasp stopped spamming the buildfarm failure log,\ntoo. That seems to be a different issue:\n\nProgram terminated with signal SIGABRT, Aborted.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n50\t../sysdeps/unix/sysv/linux/raise.c: No such file or directory.\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007f3827947859 in __GI_abort () at abort.c:79\n#2 0x00007f3827947729 in __assert_fail_base (fmt=0x7f3827add588 \"%s%s%s:%u: %s%sAssertion `%s' failed.\\\\n%n\", assertion=0x7f381c3ce4c8 \"S->getValue() && \\\\\"Releasing SymbolStringPtr with zero ref count\\\\\"\", file=0x7f381c3ce478 \"/home/fabien/llvm-src/llvm/include/llvm/ExecutionEngine/Orc/SymbolStringPool.h\", line=91, function=<optimized out>) at assert.c:92\n#3 0x00007f3827958f36 in __GI___assert_fail (assertion=0x7f381c3ce4c8 \"S->getValue() && \\\\\"Releasing SymbolStringPtr with zero ref count\\\\\"\", file=0x7f381c3ce478 \"/home/fabien/llvm-src/llvm/include/llvm/ExecutionEngine/Orc/SymbolStringPool.h\", line=91, function=0x7f381c3ce570 \"llvm::orc::SymbolStringPtr::~SymbolStringPtr()\") at assert.c:101\n#4 0x00007f381c23c98d in llvm::orc::SymbolStringPtr::~SymbolStringPtr (this=0x277a8b0, __in_chrg=<optimized out>) at /home/fabien/llvm-src/llvm/include/llvm/ExecutionEngine/Orc/SymbolStringPool.h:91\n#5 0x00007f381c24f879 in std::_Destroy<llvm::orc::SymbolStringPtr> (__pointer=0x277a8b0) at /home/fabien/gcc-10-bin/include/c++/10.3.1/bits/stl_construct.h:140\n#6 0x00007f381c24d10c in std::_Destroy_aux<false>::__destroy<llvm::orc::SymbolStringPtr*> (__first=0x277a8b0, __last=0x277a998) at /home/fabien/gcc-10-bin/include/c++/10.3.1/bits/stl_construct.h:152\n#7 0x00007f381c2488a6 in std::_Destroy<llvm::orc::SymbolStringPtr*> (__first=0x277a8b0, __last=0x277a998) at /home/fabien/gcc-10-bin/include/c++/10.3.1/bits/stl_construct.h:185\n#8 0x00007f381c243c51 in std::_Destroy<llvm::orc::SymbolStringPtr*, llvm::orc::SymbolStringPtr> (__first=0x277a8b0, __last=0x277a998) at /home/fabien/gcc-10-bin/include/c++/10.3.1/bits/alloc_traits.h:738\n#9 0x00007f381c23f1c3 in std::vector<llvm::orc::SymbolStringPtr, std::allocator<llvm::orc::SymbolStringPtr> >::~vector (this=0x7ffc73440a10, __in_chrg=<optimized out>) at /home/fabien/gcc-10-bin/include/c++/10.3.1/bits/stl_vector.h:680\n#10 0x00007f381c26112c in llvm::orc::JITDylib::removeTracker (this=0x18b4240, RT=...) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:1464\n#11 0x00007f381c264cb9 in operator() (__closure=0x7ffc73440d00) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:2054\n#12 0x00007f381c264d29 in llvm::orc::ExecutionSession::runSessionLocked<llvm::orc::ExecutionSession::removeResourceTracker(llvm::orc::ResourceTracker&)::<lambda()> >(struct {...} &&) (this=0x187d110, F=...) at /home/fabien/llvm-src/llvm/include/llvm/ExecutionEngine/Orc/Core.h:1291\n#13 0x00007f381c264ebc in llvm::orc::ExecutionSession::removeResourceTracker (this=0x187d110, RT=...) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:2051\n#14 0x00007f381c25734c in llvm::orc::ResourceTracker::remove (this=0x1910c30) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:53\n#15 0x00007f381c25a9c1 in llvm::orc::JITDylib::clear (this=0x18b4240) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:622\n#16 0x00007f381c26305e in llvm::orc::ExecutionSession::endSession (this=0x187d110) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/Core.cpp:1777\n#17 0x00007f381c3373a3 in llvm::orc::LLJIT::~LLJIT (this=0x18a73b0, __in_chrg=<optimized out>) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/LLJIT.cpp:1002\n#18 0x00007f381c38af48 in LLVMOrcDisposeLLJIT (J=0x18a73b0) at /home/fabien/llvm-src/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp:561\n#19 0x00007f381c596612 in llvm_shutdown (code=<optimized out>, arg=140722242323824) at llvmjit.c:892\n#20 0x00000000007d4385 in proc_exit_prepare (code=code@entry=0) at ipc.c:209\n#21 0x00000000007d4288 in proc_exit (code=code@entry=0) at ipc.c:107\n\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Jun 2021 14:51:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "> It'd sure be nice if seawasp stopped spamming the buildfarm failure log,\n> too.\n\nThere was a silent API breakage (same API, different behavior, how nice…) \nin llvm main that Andres figured out, which will have to be fixed at some \npoint, so this is reminder that it is still a todo… Not sure when a fix is \nplanned, though. I'm afraid portability may require that different code is \nexecuted depending on llvm version. Or maybe we should wrestle a revert on \nllvm side? Hmmm…\n\nSo I'm not very confident that the noise will go away quickly, sorry.\n\n-- \nFabien.",
"msg_date": "Fri, 18 Jun 2021 23:31:29 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> It'd sure be nice if seawasp stopped spamming the buildfarm failure log,\n>> too.\n\n> There was a silent API breakage (same API, different behavior, how nice…) \n> in llvm main that Andres figured out, which will have to be fixed at some \n> point, so this is reminder that it is still a todo…\n\nIf it were *our* todo, that would be one thing; but it isn't.\n\n> Not sure when a fix is \n> planned, though. I'm afraid portability may require that different code is \n> executed depending on llvm version. Or maybe we should wrestle a revert on \n> llvm side? Hmmm…\n\n> So I'm not very confident that the noise will go away quickly, sorry.\n\nCould you please just shut down the animal until that's dealt with?\nIt's extremely unpleasant to have to root through a lot of useless\nfailures to find the ones that might be of interest. Right now\nserinus and seawasp are degrading this report nearly to uselessness:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_failures.pl\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Jun 2021 17:46:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Hello Tom,\n\n>> So I'm not very confident that the noise will go away quickly, sorry.\n>\n> Could you please just shut down the animal until that's dealt with?\n\nHmmm… Obviously I can.\n\nHowever, please note that the underlying logic of \"a test is failing, \nlet's just remove it\" does not sound right to me at all:-(\n\nThe test is failing because there is a problem, and shutting down the test \nto improve a report does not in any way help to fix it, it just helps to \nhide it.\n\n> It's extremely unpleasant to have to root through a lot of useless\n> failures\n\nI do not understand how they are useless. Pg does not work properly with \ncurrent LLVM, and keeps on not working. I think that this information is \nworthy, even if I do not like it and would certainly prefer a quick fix.\n\n> to find the ones that might be of interest. Right now serinus and \n> seawasp are degrading this report nearly to uselessness:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_failures.pl\n\nIMHO, the report should be improved, not the test removed.\n\nIf you insist I will shut down the animal, but I'd prefer not to.\n\nI think that the reminder has value, and just because some report is not \ndesigned to handle this nicely does not seem like a good reason to do \nthat.\n\n-- \nFabien.",
"msg_date": "Sat, 19 Jun 2021 00:17:20 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Could you please just shut down the animal until that's dealt with?\n\n> The test is failing because there is a problem, and shutting down the test \n> to improve a report does not in any way help to fix it, it just helps to \n> hide it.\n\nOur buildfarm is run for the use of the Postgres project, not the LLVM\nproject. I'm not really happy that it contains any experimental-compiler\nanimals at all, but as long as they're unobtrusive I can stand it.\nserinus and seawasp are being the opposite of unobtrusive.\n\nIf you don't want to shut it down entirely, maybe backing it off to\nrun only once a week would be an acceptable compromise. Since you\nonly update its compiler version once a week, I doubt we learn much\nfrom runs done more often than that anyway.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 18 Jun 2021 18:26:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "Hello Tom,\n\n>>> Could you please just shut down the animal until that's dealt with?\n>\n>> The test is failing because there is a problem, and shuting down the test\n>> to improve a report does not in any way help to fix it, it just helps to\n>> hide it.\n>\n> Our buildfarm is run for the use of the Postgres project, not the LLVM\n> project.\n\nThe point of these animals is to have early warning of upcoming compiler \nchanges. Given the release cycle of the project and the fact that a \nversion is expected to work for 5 years, this is a clear benefit for \npostgres, IMO. When the compiler is broken, it is noisy, too bad.\n\nIn this instance the compiler is not broken, but postgres is.\n\nIf the consensus is that these animals are useless, I'll remove them, and \nbe sad that the community is not able to see their value.\n\n> I'm not really happy that it contains any experimental-compiler\n> animals at all, but as long as they're unobtrusive I can stand it.\n> serinus and seawasp are being the opposite of unobtrusive.\n\nI think that the problem is the report, not the failing animal.\n\nIn French we say \"ce n’est pas en cassant le thermomètre qu’on fait tomber \nla fièvre\", which is an equivalent of \"don't shoot the messenger\".\n\n> If you don't want to shut it down entirely, maybe backing it off to\n> run only once a week would be an acceptable compromise. Since you\n> only update its compiler version once a week, I doubt we learn much\n> from runs done more often than that anyway.\n\nHmmm… I can slow it down. We will wait one week to learn that the problems \nhave been fixed, wow.\n\n<Sigh>.\n\n-- \nFabien.",
"msg_date": "Sat, 19 Jun 2021 00:54:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": "On Sat, Jun 19, 2021 at 9:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> >> It'd sure be nice if seawasp stopped spamming the buildfarm failure log,\n> >> too.\n>\n> > There was a silent API breakage (same API, different behavior, how nice…)\n> > in llvm main that Andres figured out, which will have to be fixed at some\n> > point, so this is reminder that it is still a todo…\n>\n> If it were *our* todo, that would be one thing; but it isn't.\n\nOver on the other thread[1] we learned that this is an API change\naffecting reference counting semantics[2], so unless there is some\ndiscussion somewhere about reverting the LLVM change that I'm unaware\nof, I'm guessing we're going to need to change our code sooner or\nlater. I have a bleeding edge LLVM on my dev machine, and I'm willing\nto try to reproduce the crash and write the trivial patch (that is,\nfigure out the right preprocessor incantation to detect the version or\nfeature, and bump the reference count as appropriate), if Andres\nand/or Fabien aren't already on the case.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGLEy8mgtN7BNp0ooFAjUedDTJj5dME7NxLU-m91b85siA%40mail.gmail.com\n[2] https://github.com/llvm/llvm-project/commit/c8fc5e3ba942057d6c4cdcd1faeae69a28e7b671\n\n\n",
"msg_date": "Sat, 19 Jun 2021 11:52:12 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
},
{
"msg_contents": ">>> There was a silent API breakage (same API, different behavior, how nice…)\n>>> in llvm main that Andres figured out, which will have to be fixed at some\n>>> point, so this is reminder that it is still a todo…\n>>\n>> If it were *our* todo, that would be one thing; but it isn't.\n>\n> Over on the other thread[1] we learned that this is an API change\n> affecting reference counting semantics[2], so unless there is some\n> discussion somewhere about reverting the LLVM change that I'm unaware\n> of, I'm guessing we're going to need to change our code sooner or\n> later.\n\nIndeed, I'm afraid the solution will have to be on pg side.\n\n> I have a bleeding edge LLVM on my dev machine, and I'm willing to try to \n> reproduce the crash and write the trivial patch (that is, figure out the \n> right preprocessor incantation to detect the version or feature, and \n> bump the reference count as appropriate), if Andres and/or Fabien aren't \n> already on the case.\n\nI'm not in the case, I'm only the one running the farm animal which barks \ntoo annoyingly for Tom.\n\n-- \nFabien.",
"msg_date": "Sat, 19 Jun 2021 06:55:41 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Failures with gcd functions with GCC snapshots GCC and -O3 (?)"
}
] |
[
{
"msg_contents": "Hi,\n\nIn one of my testing scenario, i found pg_upgrade is failed for \n'plpgsql_call_handler' handler\n\nSteps to reproduce - ( on any supported version of PG)\n\nPerform initdb ( ./initdb -D d1 ; ./initdb -D d2)\n\nStart d1 cluster(./pg_ctl -D d1 start) , connect to postgres (./psql \npostgres) and create this language\n\npostgres=# CREATE TRUSTED LANGUAGE plspl_sm HANDLER plpgsql_call_handler;\nCREATE LANGUAGE\n\nstop the server (./pg_ctl -D d1 stop)\n\nperform pg_upgrade ( ./pg_upgrade -d d1 -D d2 -b . B .)\n\nwill fail with these message\n\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 825; 2612 16384 PROCEDURAL LANGUAGE plspl_sm edb\npg_restore: error: could not execute query: ERROR: could not open \nextension control file \n\"/home/edb/pg14/pg/edbpsql/share/postgresql/extension/plspl_sm.control\": \nNo such file or directory\nCommand was: CREATE OR REPLACE PROCEDURAL LANGUAGE \"plspl_sm\";\n\nis this expected ?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 15:23:58 +0530",
"msg_from": "tushar <tushar.ahuja@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "pg_upgrade is failed for 'plpgsql_call_handler' handler"
},
{
"msg_contents": "> On 3 Jun 2021, at 11:53, tushar <tushar.ahuja@enterprisedb.com> wrote:\n\n> In one of my testing scenario, i found pg_upgrade is failed for 'plpgsql_call_handler' handle\n\nThis isn't really a pg_upgrade issue but a pg_dump issue. The handler, inline\nnd validator functions will be looked up among the functions loaded into\npg_dump and included in the CREATE LANGUAGE statement. However, iff they are\nin pg_catalog then they wont be found (pg_catalog is excluded in getFuncs) and\na bare CREATE LANGUAGE statement will be emitted. This bare statement will\nthen be interpreted as CREATE EXTENSION.\n\nThis is intentional since the language template work in 8.1, before then\npg_dump would look up support functions in pg_catalog.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 12:54:48 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade is failed for 'plpgsql_call_handler' handler"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 3 Jun 2021, at 11:53, tushar <tushar.ahuja@enterprisedb.com> wrote:\n>> In one of my testing scenario, i found pg_upgrade is failed for 'plpgsql_call_handler' handle\n\n> This is intentional since the language template work in 8.1, before then\n> pg_dump would look up support functions in pg_catalog.\n\nI don't see any particular need to support reaching inside the guts\nof another PL language implementation, as this test case does.\nWe'd be perfectly within our rights to rename plpgsql_call_handler\nas something else; that should be nobody's business except that\nof the plpgsql extension.\n\nBut yeah, the behavior you're seeing here is intended to support\nnormally-packaged languages. pg_dump won't ordinarily dump objects\nin pg_catalog, because it assumes stuff in pg_catalog is to\nbe treated as built-in.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 10:12:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade is failed for 'plpgsql_call_handler' handler"
},
{
"msg_contents": "> On 3 Jun 2021, at 16:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> On 3 Jun 2021, at 11:53, tushar <tushar.ahuja@enterprisedb.com> wrote:\n>>> In one of my testing scenario, i found pg_upgrade is failed for 'plpgsql_call_handler' handle\n> \n>> This is intentional since the language template work in 8.1, before then\n>> pg_dump would look up support functions in pg_catalog.\n> \n> I don't see any particular need to support reaching inside the guts\n> of another PL language implementation, as this test case does.\n> We'd be perfectly within our rights to rename plpgsql_call_handler\n> as something else; that should be nobody's business except that\n> of the plpgsql extension.\n> \n> But yeah, the behavior you're seeing here is intended to support\n> normally-packaged languages. pg_dump won't ordinarily dump objects\n> in pg_catalog, because it assumes stuff in pg_catalog is to\n> be treated as built-in.\n\nAgreed, I don't think there is anything we could/should do here (the lack of\ncomplaints in the archives back that up).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:20:10 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_upgrade is failed for 'plpgsql_call_handler' handler"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like for some of the fsm_set_and_search calls whose return\nvalue is ignored (in fsm_search and RecordPageWithFreeSpace), there's\nno (void). Is it intentional? In the code base, we generally have\n(void) when non-void return value of a function is ignored.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:24:28 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 4:24 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> It looks like for some of the fsm_set_and_search calls whose return\n> value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> no (void). Is it intentional?\n\nBasically, fsm_set_and_search, serve both \"set\" and \"search\", but it\nonly search if the \"minValue\" is > 0. So if the minvalue is passed as\n0 then the return value is ignored intentionally. I can see in both\nplaces where the returned value is ignored the minvalue is passed as\n0.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Jun 2021 16:47:05 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jun 3, 2021 at 4:24 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > It looks like for some of the fsm_set_and_search calls whose return\n> > value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> > no (void). Is it intentional?\n>\n> Basically, fsm_set_and_search, serve both \"set\" and \"search\", but it\n> only search if the \"minValue\" is > 0. So if the minvalue is passed as\n> 0 then the return value is ignored intentionally. I can see in both\n> places where the returned value is ignored the minvalue is passed as\n> 0.\n\nThanks. I know why we are ignoring the return value. I was trying to\nsay, when we ignore (for whatsoever reason it maybe) return value of\nany non-void returning function, we do something like below right?\n\n(void) fsm_set_and_search(rel, addr, slot, new_cat, 0);\n\ninstead of\n\nfsm_set_and_search(rel, addr, slot, new_cat, 0);\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 17:11:42 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 6:54 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> It looks like for some of the fsm_set_and_search calls whose return\n> value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> no (void). Is it intentional? In the code base, we generally have\n> (void) when non-void return value of a function is ignored.\n\nThat's a good practice, +1 for changing that.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 19:51:51 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 5:11 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jun 3, 2021 at 4:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Jun 3, 2021 at 4:24 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > It looks like for some of the fsm_set_and_search calls whose return\n> > > value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> > > no (void). Is it intentional?\n> >\n> > Basically, fsm_set_and_search, serve both \"set\" and \"search\", but it\n> > only search if the \"minValue\" is > 0. So if the minvalue is passed as\n> > 0 then the return value is ignored intentionally. I can see in both\n> > places where the returned value is ignored the minvalue is passed as\n> > 0.\n>\n> Thanks. I know why we are ignoring the return value. I was trying to\n> say, when we ignore (for whatsoever reason it maybe) return value of\n> any non-void returning function, we do something like below right?\n>\n> (void) fsm_set_and_search(rel, addr, slot, new_cat, 0);\n>\n> instead of\n>\n> fsm_set_and_search(rel, addr, slot, new_cat, 0);\n\nOkay, I thought you were asking whether we are ignoring the return\nvalue is intentional or not. Yeah, typecasting the return with void\nis a better practice for ignoring the return value.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 3 Jun 2021 17:27:36 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 5:22 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Jun 3, 2021 at 6:54 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > It looks like for some of the fsm_set_and_search calls whose return\n> > value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> > no (void). Is it intentional? In the code base, we generally have\n> > (void) when non-void return value of a function is ignored.\n>\n> That's a good practice, +1 for changing that.\n\nThanks. PSA v1 patch.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Thu, 3 Jun 2021 18:24:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On 03.06.21 12:54, Bharath Rupireddy wrote:\n> It looks like for some of the fsm_set_and_search calls whose return\n> value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> no (void). Is it intentional? In the code base, we generally have\n> (void) when non-void return value of a function is ignored.\n\nI don't think that is correct. I don't see anyone writing\n\n(void) printf(...);\n\nso this is not a generally applicable strategy.\n\nWe have pg_nodiscard for functions where you explicitly want callers to \ncheck the return value. In all other cases, callers are free to ignore \nreturn values.\n\n\n",
"msg_date": "Thu, 3 Jun 2021 14:57:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 02:57:42PM +0200, Peter Eisentraut wrote:\n> On 03.06.21 12:54, Bharath Rupireddy wrote:\n> > It looks like for some of the fsm_set_and_search calls whose return\n> > value is ignored (in fsm_search and RecordPageWithFreeSpace), there's\n> > no (void). Is it intentional? In the code base, we generally have\n> > (void) when non-void return value of a function is ignored.\n> \n> I don't think that is correct. I don't see anyone writing\n> \n> (void) printf(...);\n\nWe somehow do have a (void) fprint(...) in src/port/getopt.c.\n\n> so this is not a generally applicable strategy.\n> \n> We have pg_nodiscard for functions where you explicitly want callers to\n> check the return value. In all other cases, callers are free to ignore\n> return values.\n\nYes, but we have a lot a examples of functions without pg_nodiscard and callers\nstill explicitly ignoring the results, like fsm_vacuum_page() in the same file.\nIt would be more consistent and make the code slightly more self explanatory.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 12:28:05 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 9:58 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > so this is not a generally applicable strategy.\n> >\n> > We have pg_nodiscard for functions where you explicitly want callers to\n> > check the return value. In all other cases, callers are free to ignore\n> > return values.\n>\n> Yes, but we have a lot a examples of functions without pg_nodiscard and\ncallers\n> still explicitly ignoring the results, like fsm_vacuum_page() in the same\nfile.\n> It would be more consistent and make the code slightly more self\nexplanatory.\n\nYeah, just for consistency reasons (void) casting can be added to\nfsm_set_and_search when it's return value is ignored.\n\nWith Regards,\nBharath Rupireddy.\n\nOn Fri, Jun 4, 2021 at 9:58 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > so this is not a generally applicable strategy.\n> >\n> > We have pg_nodiscard for functions where you explicitly want callers to\n> > check the return value. In all other cases, callers are free to ignore\n> > return values.\n>\n> Yes, but we have a lot a examples of functions without pg_nodiscard and callers\n> still explicitly ignoring the results, like fsm_vacuum_page() in the same file.\n> It would be more consistent and make the code slightly more self explanatory.\n\nYeah, just for consistency reasons (void) casting can be added to fsm_set_and_search when it's return value is ignored.\n\nWith Regards,\nBharath Rupireddy.",
"msg_date": "Fri, 4 Jun 2021 17:03:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On 04.06.21 06:28, Julien Rouhaud wrote:\n> Yes, but we have a lot a examples of functions without pg_nodiscard and callers\n> still explicitly ignoring the results, like fsm_vacuum_page() in the same file.\n> It would be more consistent and make the code slightly more self explanatory.\n\nI'm not clear how you'd make a guideline out of this, other than, \"it's \nalso done elsewhere\".\n\nIn this case I'd actually split the function in two, one that returns \nvoid and one that always returns a value to be consumed. This \noverloading is a bit confusing.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 22:08:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Sat, Jun 5, 2021 at 1:38 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 04.06.21 06:28, Julien Rouhaud wrote:\n> > Yes, but we have a lot a examples of functions without pg_nodiscard and callers\n> > still explicitly ignoring the results, like fsm_vacuum_page() in the same file.\n> > It would be more consistent and make the code slightly more self explanatory.\n>\n> I'm not clear how you'd make a guideline out of this, other than, \"it's\n> also done elsewhere\".\n\nI proposed to do (void) fsm_set_and_search by looking at lot of other\nplaces (more than few 100) in the code base like (void)\ndefGetBoolean(def) (void) hv_iterinit(obj) (void) set_config_option(\nand so on. I'm not sure whether having consistent code in a few\nhundred places amounts to a standard practice.\n\n> In this case I'd actually split the function in two, one that returns\n> void and one that always returns a value to be consumed. This\n> overloading is a bit confusing.\n\nThanks. I don't want to go in that direction. Instead I choose to\nwithdraw the proposal here and let the fsm_set_and_search function\nusage be as is.\n\nWith Regards,\nBharath Rupireddy.\n\n\n",
"msg_date": "Sat, 5 Jun 2021 11:07:20 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
},
{
"msg_contents": "On Sat, Jun 5, 2021 at 4:08 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 04.06.21 06:28, Julien Rouhaud wrote:\n> > Yes, but we have a lot a examples of functions without pg_nodiscard and callers\n> > still explicitly ignoring the results, like fsm_vacuum_page() in the same file.\n> > It would be more consistent and make the code slightly more self explanatory.\n>\n> I'm not clear how you'd make a guideline out of this, other than, \"it's\n> also done elsewhere\".\n\nWhen it can be confusing, like here?\n\n> In this case I'd actually split the function in two, one that returns\n> void and one that always returns a value to be consumed. This\n> overloading is a bit confusing.\n\nThat would work too, but it may be overkill as it's not a public API.\n\n\n",
"msg_date": "Sat, 5 Jun 2021 15:36:40 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Are we missing (void) when return value of fsm_set_and_search is\n ignored?"
}
] |
[
{
"msg_contents": "I noticed earlier today when working in brin_minmax_multi.c that the\ncopyright year was incorrect. That caused me to wonder if any other\nsource files have the incorrect year.\n\ngit grep -E \"Portions Copyright \\(c\\) ([0-9]{4}-[0-9]{4}|[0-9]{4}),\nPostgreSQL Global Development Group\" | grep -Ev \"2021\"\n\nSeems fairly good at finding them. 14 in total.\n\nThe attached fixes.\n\nI'll push this in the New Zealand morning unless anyone comes up with\na reason why I shouldn't before then.\n\nDavid",
"msg_date": "Fri, 4 Jun 2021 00:16:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "A few source files have the wrong copyright year"
},
{
"msg_contents": "> On 3 Jun 2021, at 14:16, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> git grep -E \"Portions Copyright \\(c\\) ([0-9]{4}-[0-9]{4}|[0-9]{4}),\n> PostgreSQL Global Development Group\" | grep -Ev \"2021\"\n> \n> Seems fairly good at finding them. 14 in total.\n\nsrc/tools/copyright.pl finds these as well as contrib/pageinspect/gistfuncs.c\nwhich also ends the range in 2020, might want to include that one too when\npushing this.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Thu, 3 Jun 2021 14:30:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: A few source files have the wrong copyright year"
},
{
"msg_contents": "On Fri, 4 Jun 2021 at 00:30, Daniel Gustafsson <daniel@yesql.se> wrote:\n> src/tools/copyright.pl finds these as well as contrib/pageinspect/gistfuncs.c\n> which also ends the range in 2020, might want to include that one too when\n> pushing this.\n\nThanks. I wasn't aware of that.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Jun 2021 00:36:06 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A few source files have the wrong copyright year"
},
{
"msg_contents": "On Fri, 4 Jun 2021 at 00:16, David Rowley <dgrowleyml@gmail.com> wrote:\n> I noticed earlier today when working in brin_minmax_multi.c that the\n> copyright year was incorrect. That caused me to wonder if any other\n> source files have the incorrect year.\n\n> The attached fixes.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Jun 2021 12:21:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: A few source files have the wrong copyright year"
},
{
"msg_contents": "On Fri, Jun 04, 2021 at 12:21:16PM +1200, David Rowley wrote:\n> Pushed.\n\nThanks.\n--\nMichael",
"msg_date": "Fri, 4 Jun 2021 09:35:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: A few source files have the wrong copyright year"
}
] |
[
{
"msg_contents": "Hi,\n\nI was checking the GRANT on pg_subscription and noticed that the command is not\ncorrect. There is a comment that says \"All columns of pg_subscription except\nsubconninfo are readable\". However, there are columns that aren't included: oid\nand subsynccommit. It seems an oversight in the commits 6f236e1eb8c and\n887227a1cc8.\n\nThere are monitoring tools and data collectors that aren't using a\nsuperuser to read catalog information (I usually recommend using pg_monitor).\nHence, you cannot join pg_subscription with relations such as\npg_subscription_rel or pg_stat_subscription because column oid has no\ncolumn-level privilege. I'm attaching a patch to fix it (indeed, 2 patches\nbecause of additional columns for v14). We should add instructions in the minor\nversion release notes too.\n\nThis issue was reported by Israel Barth.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 03 Jun 2021 10:41:24 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": true,
"msg_subject": "missing GRANT on pg_subscription columns"
},
{
"msg_contents": "\"Euler Taveira\" <euler@eulerto.com> writes:\n> I was checking the GRANT on pg_subscription and noticed that the command is not\n> correct. There is a comment that says \"All columns of pg_subscription except\n> subconninfo are readable\". However, there are columns that aren't included: oid\n> and subsynccommit. It seems an oversight in the commits 6f236e1eb8c and\n> 887227a1cc8.\n\nUgh.\n\n> There are monitoring tools and data collectors that aren't using a\n> superuser to read catalog information (I usually recommend using pg_monitor).\n> Hence, you cannot join pg_subscription with relations such as\n> pg_subscription_rel or pg_stat_subscription because column oid has no\n> column-level privilege. I'm attaching a patch to fix it (indeed, 2 patches\n> because of additional columns for v14). We should add instructions in the minor\n> version release notes too.\n\nI agree with fixing this in HEAD. But given that this has been wrong\nsince v10 with zero previous complaints, I doubt that it is worth the\ncomplication of trying to do something about it in the back branches.\nMaybe we could just adjust the docs there, instead.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 13:09:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: missing GRANT on pg_subscription columns"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> \"Euler Taveira\" <euler@eulerto.com> writes:\n> > I was checking the GRANT on pg_subscription and noticed that the command is not\n> > correct. There is a comment that says \"All columns of pg_subscription except\n> > subconninfo are readable\". However, there are columns that aren't included: oid\n> > and subsynccommit. It seems an oversight in the commits 6f236e1eb8c and\n> > 887227a1cc8.\n>\n> Ugh.\n>\n> > There are monitoring tools and data collectors that aren't using a\n> > superuser to read catalog information (I usually recommend using pg_monitor).\n> > Hence, you cannot join pg_subscription with relations such as\n> > pg_subscription_rel or pg_stat_subscription because column oid has no\n> > column-level privilege. I'm attaching a patch to fix it (indeed, 2 patches\n> > because of additional columns for v14). We should add instructions in the minor\n> > version release notes too.\n>\n> I agree with fixing this in HEAD. But given that this has been wrong\n> since v10 with zero previous complaints, I doubt that it is worth the\n> complication of trying to do something about it in the back branches.\n> Maybe we could just adjust the docs there, instead.\n>\n\nThis sounds reasonable to me. Euler, can you provide the doc updates\nfor back-branches?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 14:38:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing GRANT on pg_subscription columns"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jun 3, 2021 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > \"Euler Taveira\" <euler@eulerto.com> writes:\n> > > I was checking the GRANT on pg_subscription and noticed that the command is not\n> > > correct. There is a comment that says \"All columns of pg_subscription except\n> > > subconninfo are readable\". However, there are columns that aren't included: oid\n> > > and subsynccommit. It seems an oversight in the commits 6f236e1eb8c and\n> > > 887227a1cc8.\n> >\n> > Ugh.\n> >\n> > > There are monitoring tools and data collectors that aren't using a\n> > > superuser to read catalog information (I usually recommend using pg_monitor).\n> > > Hence, you cannot join pg_subscription with relations such as\n> > > pg_subscription_rel or pg_stat_subscription because column oid has no\n> > > column-level privilege. I'm attaching a patch to fix it (indeed, 2 patches\n> > > because of additional columns for v14). We should add instructions in the minor\n> > > version release notes too.\n> >\n> > I agree with fixing this in HEAD. But given that this has been wrong\n> > since v10 with zero previous complaints, I doubt that it is worth the\n> > complication of trying to do something about it in the back branches.\n> > Maybe we could just adjust the docs there, instead.\n> >\n>\n> This sounds reasonable to me. Euler, can you provide the doc updates\n> for back-branches?\n\nAttached patch has the documentation changes for the back-branches. As\nthere is no specific reason for this, I have just mentioned\n\"Additionally normal users can't access columns oid and\nsubsynccommit.\" The same patch applies till V10 branch.\n\nRegards,\nVignesh",
"msg_date": "Mon, 28 Jun 2021 11:02:44 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing GRANT on pg_subscription columns"
},
{
"msg_contents": "On Mon, Jun 28, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Jun 7, 2021 at 2:38 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jun 3, 2021 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n>\n> Attached patch has the documentation changes for the back-branches. As\n> there is no specific reason for this, I have just mentioned\n> \"Additionally normal users can't access columns oid and\n> subsynccommit.\" The same patch applies till V10 branch.\n>\n\nThanks for the patch. Tom has already pushed the code as part of\ncommit 3590680b85, so I am not sure if it is still valuable to fix\ndocs in back-branches.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 29 Jun 2021 08:19:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: missing GRANT on pg_subscription columns"
}
] |
[
{
"msg_contents": "One problem with unlogged tables is that the application has no way to\ntell if they were reset, or they just happen to be empty.\n\nThis can be a problem with sharding, where you might have different\nshards of an unlogged table on different servers. If one server\ncrashes, you'll be missing only one shard of the data, which may appear\ninconsistent. In that case, you'd like the application (or sharding\nsolution) to be able to detect that one shard was lost, and TRUNCATE\nthose that remain to get back to a reasonable state.\n\nIt would be easy enough for the init fork to have a single page with a\nflag set. That way, when the main fork is replaced with the init fork,\nother code could detect that a reset happened.\n\nWhen detected, depending on a GUC, the behavior could be to auto-\ntruncate it (to get the current silent behavior), or refuse to perform\nthe operation (except an explicit TRUNCATE), or issue a\nwarning/log/notice.\n\nThe biggest challenge would be: when should we detect that the reset\nhas happened? There might be a lot of entry points. Another idea would\nbe to just have a SQL function that the application could call whenever\nit needs to know.\n\nThoughts?\n\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Thu, 03 Jun 2021 13:04:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Make unlogged table resets detectable"
},
{
"msg_contents": "On 03/06/2021 23:04, Jeff Davis wrote:\n> One problem with unlogged tables is that the application has no way to\n> tell if they were reset, or they just happen to be empty.\n> \n> This can be a problem with sharding, where you might have different\n> shards of an unlogged table on different servers. If one server\n> crashes, you'll be missing only one shard of the data, which may appear\n> inconsistent. In that case, you'd like the application (or sharding\n> solution) to be able to detect that one shard was lost, and TRUNCATE\n> those that remain to get back to a reasonable state.\n> \n> It would be easy enough for the init fork to have a single page with a\n> flag set. That way, when the main fork is replaced with the init fork,\n> other code could detect that a reset happened.\n\nI'd suggest using a counter rather than a flag. With a flag, if one \nclient clears the flag to acknowledge that a truncation happened, others \nmight miss it. See also ABA problem.\n\n> When detected, depending on a GUC, the behavior could be to auto-\n> truncate it (to get the current silent behavior), or refuse to perform\n> the operation (except an explicit TRUNCATE), or issue a\n> warning/log/notice.\n\nTRUNCATE isn't quite what happens when an unlogged table is \nre-initialized. It changes the relfilenode, resets stats, and requires a \nmore strict lock. So I don't think repurposing TRUNCATE for \nre-initializing a table is a good idea. There's also potential for a \nrace condition, if two connections see that a table needs \nre-initialization, and issue \"TRUNCATE + INSERT\" concurrently. One of \nthe INSERTs will be lost.\n\nA warning or notice is easy to miss.\n\n> The biggest challenge would be: when should we detect that the reset\n> has happened? There might be a lot of entry points. 
Another idea would\n> be to just have a SQL function that the application could call whenever\n> it needs to know.\n\nYeah, a SQL function to get the current \"reset counter\" would be nice.\n\n- Heikki\n\n\n",
"msg_date": "Fri, 4 Jun 2021 09:42:22 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Fri, 2021-06-04 at 09:42 +0300, Heikki Linnakangas wrote:\n> I'd suggest using a counter rather than a flag. With a flag, if one \n> client clears the flag to acknowledge that a truncation happened,\n> others \n> might miss it. See also ABA problem.\n\nThis feels like it's getting more complex.\n\nStepping back, maybe unlogged tables are the wrong level to solve this\nproblem. We could just have a \"crash counter\" in pg_control that would\nbe incremented every time a crash happened (and all unlogged tables are\nreset). It might be a number or maybe the LSN of the startup checkpoint\nafter the most recent crash.\n\nA SQL function could read the value. Perhaps we'd also have a SQL\nfunction to reset it, but I don't see a use case for it.\n\nThen, it's up to the client to check it against a stored value, and\nclear/repopulate unlogged tables as necessary.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Fri, 04 Jun 2021 17:41:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 8:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Stepping back, maybe unlogged tables are the wrong level to solve this\n> problem. We could just have a \"crash counter\" in pg_control that would\n> be incremented every time a crash happened (and all unlogged tables are\n> reset). It might be a number or maybe the LSN of the startup checkpoint\n> after the most recent crash.\n>\n> A SQL function could read the value. Perhaps we'd also have a SQL\n> function to reset it, but I don't see a use case for it.\n>\n> Then, it's up to the client to check it against a stored value, and\n> clear/repopulate unlogged tables as necessary.\n\nI think this would be useful for a variety of purposes. Both being\nable to know the last time that it happened and being able to know the\nnumber of times that it happened could be useful, depending on the\nscenario. For example, if one of my employer's customers began\ncomplaining about a problem that started happening recently, it would\nbe useful to be able to establish whether there had also been a crash\nrecently, and a timestamp or LSN would help a lot. On the other hand,\nif we had a counter, we'd probably find out some interesting things,\ntoo. Maybe someone would report that the value of the counter was\nsurprisingly large. For example, if a customer's pg_control output\nshowed that the database cluster had performed crash recovery 162438\ntimes, I might have some, err, followup questions.\n\nThis is not a vote for or against any specific proposal; it's just a\ngeneral statement that I support trying to do something in this area,\nand that it feels like anything we do will likely have some value.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 7 Jun 2021 14:34:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Jun 4, 2021 at 8:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Stepping back, maybe unlogged tables are the wrong level to solve this\n>> problem. We could just have a \"crash counter\" in pg_control that would\n>> be incremented every time a crash happened (and all unlogged tables are\n>> reset). It might be a number or maybe the LSN of the startup checkpoint\n>> after the most recent crash.\n\n> I think this would be useful for a variety of purposes. Both being\n> able to know the last time that it happened and being able to know the\n> number of times that it happened could be useful, depending on the\n> scenario.\n\n+1. I'd support recording the time of the last crash recovery, as\nwell as having a counter. I think an LSN would not be as useful\nas a timestamp.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 07 Jun 2021 14:56:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Mon, Jun 07, 2021 at 02:56:57PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Jun 4, 2021 at 8:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> >> Stepping back, maybe unlogged tables are the wrong level to solve this\n> >> problem. We could just have a \"crash counter\" in pg_control that would\n> >> be incremented every time a crash happened (and all unlogged tables are\n> >> reset). It might be a number or maybe the LSN of the startup checkpoint\n> >> after the most recent crash.\n> \n> > I think this would be useful for a variety of purposes. Both being\n> > able to know the last time that it happened and being able to know the\n> > number of times that it happened could be useful, depending on the\n> > scenario.\n> \n> +1. I'd support recording the time of the last crash recovery, as\n> well as having a counter. I think an LSN would not be as useful\n> as a timestamp.\n\n+1\n\nIt's been suggested before ;)\nhttps://www.postgresql.org/message-id/20180228221653.GB32095%40telsasoft.com\n\nPS. I currently monitor for crashes by checking something hacky like:\n| SELECT backend_start - pg_postmaster_start_time() FROM pg_stat_activity ORDER BY 1\n\n\n",
"msg_date": "Mon, 7 Jun 2021 21:58:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Mon, Jun 07, 2021 at 02:56:57PM -0400, Tom Lane wrote:\n> +1. I'd support recording the time of the last crash recovery, as\n> well as having a counter. I think an LSN would not be as useful\n> as a timestamp.\n\nOne could guess a timestamp based on a LSN, no? So I'd like to think\nthe opposite actually: a LSN would be more useful than a timestamp.\n--\nMichael",
"msg_date": "Tue, 8 Jun 2021 12:46:05 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Tue, Jun 08, 2021 at 12:46:05PM +0900, Michael Paquier wrote:\n> On Mon, Jun 07, 2021 at 02:56:57PM -0400, Tom Lane wrote:\n> > +1. I'd support recording the time of the last crash recovery, as\n> > well as having a counter. I think an LSN would not be as useful\n> > as a timestamp.\n> \n> One could guess a timestamp based on a LSN, no? So I'd like to think\n> the opposite actually: a LSN would be more useful than a timestamp.\n\nWouldn't that work only if the LSN is recent enough, depending on the WAL\nactivity?\n\n\n",
"msg_date": "Tue, 8 Jun 2021 12:52:23 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 11:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Jun 07, 2021 at 02:56:57PM -0400, Tom Lane wrote:\n> > +1. I'd support recording the time of the last crash recovery, as\n> > well as having a counter. I think an LSN would not be as useful\n> > as a timestamp.\n>\n> One could guess a timestamp based on a LSN, no? So I'd like to think\n> the opposite actually: a LSN would be more useful than a timestamp.\n\nOne could also guess an LSN based on a timestamp, but I think in\neither case one has to be a pretty good guesser. The rate at which WAL\nis generated is hardly guaranteed to be uniform, and if you're looking\nat a system for the first time you may have no idea what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 8 Jun 2021 09:18:43 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jun 7, 2021 at 11:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Mon, Jun 07, 2021 at 02:56:57PM -0400, Tom Lane wrote:\n>>> +1. I'd support recording the time of the last crash recovery, as\n>>> well as having a counter. I think an LSN would not be as useful\n>>> as a timestamp.\n\n>> One could guess a timestamp based on a LSN, no? So I'd like to think\n>> the opposite actually: a LSN would be more useful than a timestamp.\n\n> One could also guess an LSN based on a timestamp, but I think in\n> either case one has to be a pretty good guesser.\n\nYeah. If there are actually use-cases for knowing both things, then\nwe ought to record both. However, it's not real clear to me why\nLSN would be interesting.\n\nBTW, I spent a bit of time thinking about whether we should\nrecord the timestamp at start or end of crash recovery; my conclusion\nis we should record the latter. It would only make a difference to\npeople who wanted to inspect the value (a) while crash recovery is\nin progress or (b) after a failed crash recovery. In both scenarios,\nyou have other mechanisms to discover the start time of the current\ncrash; while if we overwrite the pg_control field at the start,\nthere's no longer a way to know how long ago the previous crash was.\nSo it seems best not to overwrite the time of the previous crash\nuntil we're up.\n\n(If there is a reason to log LSN, maybe the argument is different\nfor that? Although I'd think that looking at the last checkpoint\nREDO location is sufficient for figuring out where the current\ncrash recovery attempt started.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Jun 2021 12:52:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Tue, 2021-06-08 at 12:52 -0400, Tom Lane wrote:\n> Yeah. If there are actually use-cases for knowing both things, then\n> we ought to record both. However, it's not real clear to me why\n> LSN would be interesting.\n\nLet me expand on my use case: in a sharded environment, how do you\nfigure out if you need to repopulate an UNLOGGED table? For a single\nnode, there's not much risk, because you either have the data or you\ndon't. But in a sharded environment, if one node crashes, you might end\nup with some shards empty and others populated, and that's\ninconsistent.\n\nIf Postgres provides a way to figure out when the last crash happened,\nthen that would give the sharding solution the basic information it\nneeds to figure out if it needs to clear and repopulate the entire\nunlogged table (i.e. all its shards on all nodes).\n\nClearly, the sharding solution would need to do some tracking of its\nown, like recording when the last TRUNCATE happened, to figure out what\nto do. For that tracking, I think using the LSN makes more sense than a\ntimestamp.\n\n> (If there is a reason to log LSN, maybe the argument is different\n> for that? Although I'd think that looking at the last checkpoint\n> REDO location is sufficient for figuring out where the current\n> crash recovery attempt started.)\n\nI came to a similar conclusion for my use case: tracking the LSN at the\nend of the recovery makes more sense.\n\nI attached a patch to track last recovery LSN, time, and total count.\nBut there are a few issues:\n\n1. Do we want a way to reset the counter? If so, should it be done with\npg_resetwal or a superuser SQL function?\n\n2. It would be helpful to also know the last time a promotion happened,\nfor the same reason (e.g. a failover of a single node leading to an\nunlogged table with some empty shards and some populated ones). Should\nalso store the last promotion LSN and time as well? 
Does \"promotion\ncount\" make sense, and should we track that, too?\n\n3. Should we try to track crash information across promotions, or just\nstart them at the initial values when promoted?\n\nRegards,\n\tJeff Davis",
"msg_date": "Tue, 08 Jun 2021 12:28:28 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Tue, 2021-06-08 at 12:52 -0400, Tom Lane wrote:\n>> Yeah. If there are actually use-cases for knowing both things, then\n>> we ought to record both. However, it's not real clear to me why\n>> LSN would be interesting.\n\n> Let me expand on my use case: in a sharded environment, how do you\n> figure out if you need to repopulate an UNLOGGED table?\n\nSince we don't put LSNs into unlogged tables, nor would the different\nshards be likely to have equivalent LSNs, I'm not seeing that LSN is\nremarkably better for this than a timestamp.\n\n> 1. Do we want a way to reset the counter? If so, should it be done with\n> pg_resetwal or a superuser SQL function?\n\nI'd be kind of inclined to say no, short of pg_resetwal, and maybe\nnot then.\n\n> 2. It would be helpful to also know the last time a promotion happened,\n\nI'm not following this either. How do you unpromote a node?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 08 Jun 2021 16:08:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Tue, 2021-06-08 at 16:08 -0400, Tom Lane wrote:\n> Since we don't put LSNs into unlogged tables, nor would the different\n> shards be likely to have equivalent LSNs, I'm not seeing that LSN is\n> remarkably better for this than a timestamp.\n\nIt requires some other bookkeeping on the part of the sharding\nsolution. This is ugly (alternative suggestions welcome), but I think\nit would work:\n\n1. The sharding code would create on each node:\n CREATE UNLOGGED TABLE unlogged_table_status(\n shard_name regclass,\n last_truncate pg_lsn);\n\n2. When you create an unlogged table, each node would do:\n INSERT INTO unlogged_table_status\n VALUES('my_unlogged_shard', pg_current_wal_flush_lsn())\n\n3. When you TRUNCATE an unlogged table, each node would do:\n UPDATE unlogged_table_status\n SET last_truncate=pg_current_wal_flush_lsn()\n WHERE shard_name='my_unlogged_shard'\n\n4. When connecting to a node and accessing a shard of an unlogged table\nfor the first time, test whether the shard has been lost with:\n SELECT\n last_truncate <= (pg_control_recovery()).last_recovery_lsn\n AS shard_was_lost\n FROM unlogged_table_status\n WHERE shard_name='my_unlogged_shard'\n\n5. If the shard was lost, truncate all shards for that table on all\nnodes (and update the unlogged_table_status on all nodes as in #3).\n\nNot exactly straightforward, but better than the current situation. And\nI think it can be made more robust than a timestamp.\n\n> I'd be kind of inclined to say no, short of pg_resetwal, and maybe\n> not then.\n\nAgreed, at least until we find some use case that says otherwise.\n\n> > 2. It would be helpful to also know the last time a promotion\n> > happened,\n> \n> I'm not following this either. How do you unpromote a node?\n\nWhat I meant by \"node\" here is actually a primary+standby pair. 
Let's\nsay each primary+standby pair holds one shard of an unlogged table.\n\nIn this case, a crash followed by restart is equivalent to a primary\nfailing over to a promoted standby -- in either case, the shard is\ngone, but other shards of the same table may be populated on other\nprimaries. We need to detect that the shard is gone and then wipe out\nall the other shards on the healthy primaries.\n\nYou could reasonably say that it's the job of the sharding solution to\nkeep track of these crashes and handle unlogged tables at the time. But\nit's inconvenient to insert more tasks into a sensitive process like\nfailover/recovery. It's preferable to be able to detect the unlogged\ntable problem after the fact and handle it when the systems are all up\nand stable.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 08 Jun 2021 14:29:25 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "Is this patch targetting pg15 ?\nThere's no discussion since June.\n\nLatest at 2021-06-08 21:29:25 by Jeff Davis <pgsql at j-davis.com>\n\n2022-02-02 16:37:58 \tJulien Rouhaud (rjuju) \tClosed in commitfest 2022-01 with status: Moved to next CF\n2021-12-03 06:18:05 \tMichael Paquier (michael-kun) \tClosed in commitfest 2021-11 with status: Moved to next CF\n2021-10-04 16:32:49 \tJaime Casanova (jcasanov) \tClosed in commitfest 2021-09 with status: Moved to next CF\n2021-08-03 02:29:40 \tMasahiko Sawada (masahikosawada) \tClosed in commitfest 2021-07 with status: Moved to next CF\n\n\n",
"msg_date": "Fri, 4 Mar 2022 10:12:27 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
},
{
"msg_contents": "On Fri, Mar 04, 2022 at 10:12:27AM -0600, Justin Pryzby wrote:\n> Is this patch targetting pg15 ?\n> There's no discussion since June.\n> \n> Latest at 2021-06-08 21:29:25 by Jeff Davis <pgsql at j-davis.com>\n\nThis is too long, so let's discard this patch for now.\n--\nMichael",
"msg_date": "Sat, 5 Mar 2022 19:33:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make unlogged table resets detectable"
}
] |
[
{
"msg_contents": "Hi -hackers,\n\nPresented for discussion is a POC for a DELETE CASCADE functionality,\nwhich will allow you one-shot usage of treating existing NO ACTION and\nRESTRICT FK constraints as if they were originally defined as CASCADE\nconstraints. I can't tell you how many times this functionality would have\nbeen useful in the field, and despite the expected answer of \"define your\nconstraints right in the first place\", this is not always an option, nor is\nthe ability to change that easily (or create new constraints that need to\nrevalidate against big tables) always the best option.\n\nThat said, I'm happy to quibble about the specific approach to be taken;\nI've written this based on the most straightforward way I could come up\nwith to accomplish this, but if there are better directions to take to get\nthe equivalent functionality I'm happy to discuss.\n\n From the commit message:\n\nProof of concept of allowing a DELETE statement to override formal FK's\nhandling from RESTRICT/NO\nACTION and treat as CASCADE instead.\n\nSyntax is \"DELETE CASCADE ...\" instead of \"DELETE ... 
CASCADE\" due to\nunresolvable bison conflicts.\n\nSample session:\n\n postgres=# create table foo (id serial primary key, val text);\n CREATE TABLE\n postgres=# create table bar (id serial primary key, foo_id int references\nfoo(id), val text);\n CREATE TABLE\n postgres=# insert into foo (val) values ('a'),('b'),('c');\n INSERT 0 3\n postgres=# insert into bar (foo_id, val) values\n(1,'d'),(1,'e'),(2,'f'),(2,'g');\n INSERT 0 4\n postgres=# select * from foo;\n id | val\n ----+-----\n 1 | a\n 2 | b\n 3 | c\n (3 rows)\n\n postgres=# select * from bar;\n id | foo_id | val\n ----+--------+-----\n 1 | 1 | d\n 2 | 1 | e\n 3 | 2 | f\n 4 | 2 | g\n (4 rows)\n\n postgres=# delete from foo where id = 1;\n ERROR: update or delete on table \"foo\" violates foreign key constraint\n\"bar_foo_id_fkey\" on table \"bar\"\n DETAIL: Key (id)=(1) is still referenced from table \"bar\".\n postgres=# delete cascade from foo where id = 1;\n DELETE 1\n postgres=# select * from foo;\n id | val\n ----+-----\n 2 | b\n 3 | c\n (2 rows)\n\n postgres=# select * from bar;\n id | foo_id | val\n ----+--------+-----\n 3 | 2 | f\n 4 | 2 | g\n (2 rows)\n\n\nBest,\n\nDavid",
"msg_date": "Thu, 3 Jun 2021 15:49:15 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "DELETE CASCADE"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 16:49, David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> Hi -hackers,\n>\n> Presented for discussion is a POC for a DELETE CASCADE functionality,\n> which will allow you one-shot usage of treating existing NO ACTION and\n> RESTRICT FK constraints as if they were originally defined as CASCADE\n> constraints. I can't tell you how many times this functionality would have\n> been useful in the field, and despite the expected answer of \"define your\n> constraints right in the first place\", this is not always an option, nor is\n> the ability to change that easily (or create new constraints that need to\n> revalidate against big tables) always the best option.\n>\n\nI would sometimes find this convenient. There are circumstances where I\ndon't want every DELETE to blunder all over the database deleting stuff,\nbut certain specific DELETEs should take care of the referencing tables.\n\nAn additional syntax to say \"CASCADE TO table1, table2\" would be safer and\nsometimes useful in the case where I know I want to cascade to specific\nother tables but not all (and in particular not to ones I didn't think of\nwhen I wrote the query); I might almost suggest omitting the cascade to all\nsyntax (or maybe have a separate syntax, literally \"CASCADE TO ALL TABLES\"\nor some such).\n\nWhat happens if I don't have delete permission on the referencing table?\nWhen a foreign key reference delete cascades, I can cause records to\ndisappear from a referencing table even if I don't have delete permission\non that table. This feels like it's just supposed to be a convenience that\nreplaces multiple DELETE invocations but one way or the other we need to be\nclear on the behaviour.\n\nSidebar: isn't this inconsistent with trigger behaviour in general? 
When I\nsay \"ON DELETE CASCADE\" what I mean and what I get are the same: whenever\nthe referenced row is deleted, the referencing row also disappears,\nregardless of the identity or permissions of the role running the actual\nDELETE. But any manually implemented trigger runs as the caller; I cannot\nmake the database do something when a table update occurs; I can only make\nthe role doing the table update perform some additional actions.",
"msg_date": "Thu, 3 Jun 2021 17:15:06 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 1:49 PM David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> Presented for discussion is a POC for a DELETE CASCADE functionality,\n> which will allow you one-shot usage of treating existing NO ACTION and\n> RESTRICT FK constraints as if they were originally defined as CASCADE\n> constraints.\n>\n\nON DELETE NO ACTION constraints become ON DELETE CASCADE constraints - ON\nDELETE SET NULL constraints are ignored, and not possible to emulate via\nthis feature.\n\n\n> I can't tell you how many times this functionality would have been\n> useful in the field, and despite the expected answer of \"define your\n> constraints right in the first place\", this is not always an option, nor is\n> the ability to change that easily (or create new constraints that need to\n> revalidate against big tables) always the best option.\n>\n\nOnce...but I agreed.\n\n>\n> That said, I'm happy to quibble about the specific approach to be taken;\n> I've written this based on the most straightforward way I could come up\n> with to accomplish this, but if there are better directions to take to get\n> the equivalent functionality I'm happy to discuss.\n>\n>\nThis behavior should require the same permissions as actually creating an\nON DELETE CASCADE FK on the cascaded-to tables. i.e., Table Owner role\nmembership (the requirement for FK permissions can be assumed by the\npresence of the existing FK constraint and being the table's owner).\n\nHaving the defined FK behaviors be more readily changeable, while not\nmitigating this need, is IMO a more important feature to implement. 
If\nthere is a reason that cannot be implemented (besides no one has bothered\nto take the time) then I would consider that reason to also apply to\nprevent implementing this work-around.\n\nDavid J.",
"msg_date": "Thu, 3 Jun 2021 14:47:55 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 4:48 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Thu, Jun 3, 2021 at 1:49 PM David Christensen <\n> david.christensen@crunchydata.com> wrote:\n>\n>> Presented for discussion is a POC for a DELETE CASCADE functionality,\n>> which will allow you one-shot usage of treating existing NO ACTION and\n>> RESTRICT FK constraints as if they were originally defined as CASCADE\n>> constraints.\n>>\n>\n> ON DELETE NO ACTION constraints become ON DELETE CASCADE constraints - ON\n> DELETE SET NULL constraints are ignored, and not possible to emulate via\n> this feature.\n>\n\nI have not tested this part per se (which clearly I need to expand the\nexisting test suite), but my reasoning here was that ON DELETE SET\nNULL/DEFAULT would still be applied with their defined behaviors (being\nthat we're still calling the underlying RI triggers using SPI) with the\nsame results; the intent of this feature is just to suppress the RESTRICT\naction and cascade the DELETE to all tables (on down the chain) which would\nnormally block this, without having to manually figure all the dependencies\nwhich can be inferred by the database itself.\n\n\n> I can't tell you how many times this functionality would have been\n>> useful in the field, and despite the expected answer of \"define your\n>> constraints right in the first place\", this is not always an option, nor is\n>> the ability to change that easily (or create new constraints that need to\n>> revalidate against big tables) always the best option.\n>>\n>\n> Once...but I agreed.\n>\n\nHeh.\n\n\n> That said, I'm happy to quibble about the specific approach to be taken;\n>> I've written this based on the most straightforward way I could come up\n>> with to accomplish this, but if there are better directions to take to get\n>> the equivalent functionality I'm happy to discuss.\n>>\n>>\n> This behavior should require the same permissions as actually creating an\n> ON DELETE CASCADE FK 
on the cascaded-to tables. i.e., Table Owner role\n> membership (the requirement for FK permissions can be assumed by the\n> presence of the existing FK constraint and being the table's owner).\n>\n\nI'm not sure if this would be overly prohibitive or not, but if you're the\ntable owner this should just work, like you point out. I think this\nrestriction could be fine for the common case, and if there was a way to\nhint if/when this failed to cascade as to the actual reason for the failure\nI'm fine with that part too. (I was assuming that DELETE permission on the\nunderlying tables + existence of FK would be enough in practice, but we\ncould definitely tighten that up.)\n\n\n> Having the defined FK behaviors be more readily changeable, while not\n> mitigating this need, is IMO a more important feature to implement. If\n> there is a reason that cannot be implemented (besides no one has bothered\n> to take the time) then I would consider that reason to also apply to\n> prevent implementing this work-around.\n>\n\nAgreed that this would be a nice feature to have too; no one wants to break\nFK consistency to change things or require a rescan of a valid constraint.",
"msg_date": "Thu, 3 Jun 2021 17:02:36 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
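{
"msg_contents": "[Editor's note] The message above argues that ON DELETE SET NULL constraints would keep their defined behavior since the feature reuses the underlying RI triggers. That baseline behavior can be sketched in a small runnable example; this is a hypothetical illustration using SQLite from Python's standard library rather than PostgreSQL (which the thread concerns), with invented table names.

```python
# Hedged sketch of standard ON DELETE SET NULL behavior, using SQLite as a
# stand-in for PostgreSQL; table names are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE parent (pid INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child (
    cid INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent ON DELETE SET NULL)""")

conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child VALUES (10, 1)")

# Deleting the parent keeps the child row but nulls out its reference.
conn.execute("DELETE FROM parent WHERE pid = 1")
print(conn.execute("SELECT pid FROM child WHERE cid = 10").fetchone()[0])  # None
```
"
},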
{
"msg_contents": "On Thu, Jun 3, 2021 at 4:15 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Thu, 3 Jun 2021 at 16:49, David Christensen <\n> david.christensen@crunchydata.com> wrote:\n>\n>> Hi -hackers,\n>>\n>> Presented for discussion is a POC for a DELETE CASCADE functionality,\n>> which will allow you one-shot usage of treating existing NO ACTION and\n>> RESTRICT FK constraints as if they were originally defined as CASCADE\n>> constraints. I can't tell you how many times this functionality would have\n>> been useful in the field, and despite the expected answer of \"define your\n>> constraints right in the first place\", this is not always an option, nor is\n>> the ability to change that easily (or create new constraints that need to\n>> revalidate against big tables) always the best option.\n>>\n>\n> I would sometimes find this convenient. There are circumstances where I\n> don't want every DELETE to blunder all over the database deleting stuff,\n> but certain specific DELETEs should take care of the referencing tables.\n>\n> An additional syntax to say \"CASCADE TO table1, table2\" would be safer and\n> sometimes useful in the case where I know I want to cascade to specific\n> other tables but not all (and in particular not to ones I didn't think of\n> when I wrote the query); I might almost suggest omitting the cascade to all\n> syntax (or maybe have a separate syntax, literally \"CASCADE TO ALL TABLES\"\n> or some such).\n>\n\nI'm not fond of the syntax requirements for the explicitness here, plus it\nseems like it would complicate the functionality of the patch (which\ncurrently is able to just slightly refactor the RI triggers to account for\na single state variable, rather than do anything smarter than that). 
I do\nunderstand the desire/need for visibility into what would be affected with\nan offhand statement.\n\nWhat happens if I don't have delete permission on the referencing table?\n> When a foreign key reference delete cascades, I can cause records to\n> disappear from a referencing table even if I don't have delete permission\n> on that table. This feels like it's just supposed to be a convenience that\n> replaces multiple DELETE invocations but one way or the other we need to be\n> clear on the behaviour.\n>\n\nDid you test this and find a failure? Because it is literally using all of\nthe same RI proc code/permissions as defined I would expect that it would\njust abort the transaction. (I am working on expanding the test suite for\nthis feature to allow for test cases like this, so keep 'em coming... :-))\n\n\n> Sidebar: isn't this inconsistent with trigger behaviour in general? When I\n> say \"ON DELETE CASCADE\" what I mean and what I get are the same: whenever\n> the referenced row is deleted, the referencing row also disappears,\n> regardless of the identity or permissions of the role running the actual\n> DELETE. But any manually implemented trigger runs as the caller; I cannot\n> make the database do something when a table update occurs; I can only make\n> the role doing the table update perform some additional actions.\n>\n\nHave you found a failure? Because all this is doing is effectively calling\nthe guts of the cascade RI routines, so no differences should occur. If\nnot, I'm not quite clear on your objection; can you clarify?\n\nDavid",
"msg_date": "Thu, 3 Jun 2021 17:08:23 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
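{
"msg_contents": "[Editor's note] The exchange above turns on the difference between a RESTRICT/NO ACTION constraint, which blocks the parent DELETE, and a CASCADE constraint, which propagates it. That distinction can be reproduced in a self-contained sketch; this hypothetical illustration uses SQLite via Python's stdlib rather than PostgreSQL, with invented names.

```python
# Hedged sketch: RESTRICT blocks a parent DELETE, CASCADE propagates it.
# SQLite is used as a stand-in for PostgreSQL; names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

conn.execute("CREATE TABLE parent (pid INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE child_restrict (
    cid INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent ON DELETE RESTRICT)""")
conn.execute("""CREATE TABLE child_cascade (
    cid INTEGER PRIMARY KEY,
    pid INTEGER REFERENCES parent ON DELETE CASCADE)""")

conn.execute("INSERT INTO parent VALUES (1)")
conn.execute("INSERT INTO child_restrict VALUES (10, 1)")

# With a RESTRICT child row present, deleting the parent is blocked.
try:
    conn.execute("DELETE FROM parent WHERE pid = 1")
except sqlite3.IntegrityError as e:
    print("blocked:", e)

# After removing the RESTRICT row, a CASCADE child row disappears with its parent.
conn.execute("DELETE FROM child_restrict WHERE cid = 10")
conn.execute("INSERT INTO child_cascade VALUES (20, 1)")
conn.execute("DELETE FROM parent WHERE pid = 1")
```

The proposed DELETE CASCADE would, in effect, let a single statement treat the first (RESTRICT) case like the second (CASCADE) one for that one DELETE.
"
},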
{
"msg_contents": ">\n> What happens if I don't have delete permission on the referencing table?\n>> When a foreign key reference delete cascades, I can cause records to\n>> disappear from a referencing table even if I don't have delete permission\n>> on that table. This feels like it's just supposed to be a convenience that\n>> replaces multiple DELETE invocations but one way or the other we need to be\n>> clear on the behaviour.\n>>\n>\n> Did you test this and find a failure? Because it is literally using all of\n> the same RI proc code/permissions as defined I would expect that it would\n> just abort the transaction. (I am working on expanding the test suite for\n> this feature to allow for test cases like this, so keep 'em coming... :-))\n>\n\nEnclosed is a basic test script and the corresponding output run through\n`psql -e` (will adapt into part of the regression test, but wanted to get\nthis out there). TL;DR; DELETE CASCADE behaves exactly as if said\nconstraint were defined as a ON DELETE CASCADE FK constraint wrt DELETE\npermission behavior. I do agree in this case, that it makes sense to throw\nan error if we're trying to bypass the RESTRICT behavior and we are not\npart of the table owner role (and since this would be called/checked\nrecursively for each table involved in the graph I think we can count on it\nreporting the appropriate error message in this case).",
"msg_date": "Thu, 3 Jun 2021 17:25:33 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 18:08, David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> On Thu, Jun 3, 2021 at 4:15 PM Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n>\n>>\n>> What happens if I don't have delete permission on the referencing table?\n>> When a foreign key reference delete cascades, I can cause records to\n>> disappear from a referencing table even if I don't have delete permission\n>> on that table. This feels like it's just supposed to be a convenience that\n>> replaces multiple DELETE invocations but one way or the other we need to be\n>> clear on the behaviour.\n>>\n>\n> Did you test this and find a failure? Because it is literally using all of\n> the same RI proc code/permissions as defined I would expect that it would\n> just abort the transaction. (I am working on expanding the test suite for\n> this feature to allow for test cases like this, so keep 'em coming... :-))\n>\n\nI haven't run your patch. I'm just asking because it's a question about\nexactly how the behaviour works that needs to be clearly and intentionally\ndecided (and documented). I think aborting the transaction with a\npermission denied error on the referencing table is probably the right\nbehaviour: it's what you would get if you issued an equivalent delete on\nthe referencing table explicitly. I think of your patch as being a\nconvenience to avoid having to write a separate DELETE for each referencing\ntable. So based on what you say, it sounds like you've already covered this\nissue.\n\nSidebar: isn't this inconsistent with trigger behaviour in general? When I\n>> say \"ON DELETE CASCADE\" what I mean and what I get are the same: whenever\n>> the referenced row is deleted, the referencing row also disappears,\n>> regardless of the identity or permissions of the role running the actual\n>> DELETE. 
But any manually implemented trigger runs as the caller; I cannot\n>> make the database do something when a table update occurs; I can only make\n>> the role doing the table update perform some additional actions.\n>>\n>\n> Have you found a failure? Because all this is doing is effectively\n> calling the guts of the cascade RI routines, so no differences should\n> occur. If not, I'm not quite clear on your objection; can you clarify?\n>\n\nSorry, my sidebar is only tangentially related. In another thread we had a\ndiscussion about triggers, which it turns out execute as the role running\nthe command, not as the owner of the table. For many triggers it doesn't\nmatter, but for many things I can think of that I would want to do with\ntriggers it will only work if the trigger executes as the owner of the\ntable (or trigger, hypothetically…); and there are several common cases\nwhere it makes way more sense to execute as the owner (e.g., triggers to\nmaintain a log table; it doesn't make sense to have to grant permissions on\nthe log table to roles with permissions on the main table, and also allows\nspurious log entries to be made). But here it seems that cascaded actions\ndo execute as a role that is not dependent on who is running the command.\n\nIn short, I probably should have left off the sidebar. It's not an issue\nwith your patch.",
"msg_date": "Thu, 3 Jun 2021 18:26:44 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Thu, 3 Jun 2021 at 18:25, David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> What happens if I don't have delete permission on the referencing table?\n>>> When a foreign key reference delete cascades, I can cause records to\n>>> disappear from a referencing table even if I don't have delete permission\n>>> on that table. This feels like it's just supposed to be a convenience that\n>>> replaces multiple DELETE invocations but one way or the other we need to be\n>>> clear on the behaviour.\n>>>\n>>\n>> Did you test this and find a failure? Because it is literally using all\n>> of the same RI proc code/permissions as defined I would expect that it\n>> would just abort the transaction. (I am working on expanding the test\n>> suite for this feature to allow for test cases like this, so keep 'em\n>> coming... :-))\n>>\n>\n> Enclosed is a basic test script and the corresponding output run through\n> `psql -e` (will adapt into part of the regression test, but wanted to get\n> this out there). TL;DR; DELETE CASCADE behaves exactly as if said\n> constraint were defined as a ON DELETE CASCADE FK constraint wrt DELETE\n> permission behavior. I do agree in this case, that it makes sense to throw\n> an error if we're trying to bypass the RESTRICT behavior and we are not\n> part of the table owner role (and since this would be called/checked\n> recursively for each table involved in the graph I think we can count on it\n> reporting the appropriate error message in this case).\n>\n\nSurely you mean if we don't have DELETE permission on the referencing\ntable? I don't see why we need to be a member of the table owner role.",
"msg_date": "Thu, 3 Jun 2021 18:29:29 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 3:29 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> Surely you mean if we don't have DELETE permission on the referencing\n> table? I don't see why we need to be a member of the table owner role.\n>\n\nI would reverse the question - why does this feature need to allow the more\nbroad DELETE permission instead of just limiting it to the table owner? The\nlatter matches the required permission for the existing cascade feature\nthat this is extending.\n\nDavid J.",
"msg_date": "Thu, 3 Jun 2021 16:53:08 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 03.06.21 23:47, David G. Johnston wrote:\n> This behavior should require the same permissions as actually creating \n> an ON DELETE CASCADE FK on the cascaded-to tables. i.e., Table Owner \n> role membership (the requirement for FK permissions can be assumed by \n> the presence of the existing FK constraint and being the table's owner).\n\nYou can create foreign keys if you have the REFERENCES privilege on the \nprimary key table. That's something this patch doesn't observe \ncorrectly: Normally, the owner of the foreign key table decides the \ncascade action, but with this patch, it's the primary key table owner.\n\n\n",
"msg_date": "Fri, 4 Jun 2021 21:53:05 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 2:53 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.06.21 23:47, David G. Johnston wrote:\n> > This behavior should require the same permissions as actually creating\n> > an ON DELETE CASCADE FK on the cascaded-to tables. i.e., Table Owner\n> > role membership (the requirement for FK permissions can be assumed by\n> > the presence of the existing FK constraint and being the table's owner).\n>\n> You can create foreign keys if you have the REFERENCES privilege on the\n> primary key table. That's something this patch doesn't observe\n> correctly: Normally, the owner of the foreign key table decides the\n> cascade action, but with this patch, it's the primary key table owner.\n>\n\nSo what are the necessary and sufficient conditions to check at this\npoint? The constraint already exists, so what permissions would we need to\ncheck against which table(s) in order to grant this action?",
"msg_date": "Fri, 4 Jun 2021 15:24:31 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Fri, 4 Jun 2021 at 16:24, David Christensen <\ndavid.christensen@crunchydata.com> wrote:\n\n> On Fri, Jun 4, 2021 at 2:53 PM Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>> On 03.06.21 23:47, David G. Johnston wrote:\n>> > This behavior should require the same permissions as actually creating\n>> > an ON DELETE CASCADE FK on the cascaded-to tables. i.e., Table Owner\n>> > role membership (the requirement for FK permissions can be assumed by\n>> > the presence of the existing FK constraint and being the table's owner).\n>>\n>> You can create foreign keys if you have the REFERENCES privilege on the\n>> primary key table. That's something this patch doesn't observe\n>> correctly: Normally, the owner of the foreign key table decides the\n>> cascade action, but with this patch, it's the primary key table owner.\n>>\n>\n> So what are the necessary and sufficient conditions to check at this\n> point? The constraint already exists, so what permissions would we need to\n> check against which table(s) in order to grant this action?\n>\n\nI apologize if I am deeply confused, but say I have this:\n\nCREATE TABLE parent (\n pid int primary key,\n parent_data text\n);\n\nCREATE TABLE child (\n pid int REFERENCES parent,\n cid int,\n PRIMARY KEY (pid, cid),\n child_data text\n);\n\nIt's easy to imagine needing to write:\n\nDELETE FROM child WHERE ...\nDELETE FROM parent WHERE ...\n\n... where the WHERE clauses both work out to the same pid values. It would\nbe nice to be able to say:\n\nDELETE CASCADE FROM parent WHERE ...\n\n... and just skip writing the first DELETE entirely. And what do I mean by\n\"DELETE CASCADE\" if not \"delete the referencing rows from child\"? So to me\nI think I should require DELETE permission on child (and parent) in order\nto execute this DELETE CASCADE. I definitely shouldn't require any\nDDL-related permissions (table owner, REFERENCES, …) because I'm not doing\nDDL - just data changes. 
Sure, it may be implemented by temporarily\ntreating the foreign key references differently, but conceptually I'm just\ndeleting from multiple tables in one command.\n\nI will say I would prefer this syntax:\n\nDELETE FROM parent WHERE ... CASCADE TO child;\n\n(or \"CASCADE TO ALL TABLES\" or some such if I want that)\n\nI don't like the idea of saying \"CASCADE\" and getting a bunch of tables I\ndidn't intend (or which didn't exist when the query was written).",
"msg_date": "Fri, 4 Jun 2021 16:40:41 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Fri, Jun 4, 2021 at 3:40 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> I apologize if I am deeply confused, but say I have this:\n>\n> CREATE TABLE parent (\n> pid int primary key,\n> parent_data text\n> );\n>\n> CREATE TABLE child (\n> pid int REFERENCES parent,\n> cid int,\n> PRIMARY KEY (pid, cid),\n> child_data text\n> );\n>\n> It's easy to imagine needing to write:\n>\n> DELETE FROM child WHERE ...\n> DELETE FROM parent WHERE ...\n>\n> ... where the WHERE clauses both work out to the same pid values. It would\n> be nice to be able to say:\n>\n> DELETE CASCADE FROM parent WHERE ...\n>\n\nThis is entirely the use case and the motivation.\n\n\n> ... and just skip writing the first DELETE entirely. And what do I mean by\n> \"DELETE CASCADE\" if not \"delete the referencing rows from child\"? So to me\n> I think I should require DELETE permission on child (and parent) in order\n> to execute this DELETE CASCADE. I definitely shouldn't require any\n> DDL-related permissions (table owner, REFERENCES, …) because I'm not doing\n> DDL - just data changes. Sure, it may be implemented by temporarily\n> treating the foreign key references differently, but conceptually I'm just\n> deleting from multiple tables in one command.\n>\n\nThis is the part where I'm also running into some conceptual roadblocks\nbetween what is an implementation issue based on the current behavior of\nCASCADE triggers, and what makes sense in terms of a POLA perspective. In\npart, I am having this discussion to flesh out this part of the problem.\n\n\n> I will say I would prefer this syntax:\n>\n> DELETE FROM parent WHERE ... 
CASCADE TO child;\n>\n> (or \"CASCADE TO ALL TABLES\" or some such if I want that)\n>\n> I don't like the idea of saying \"CASCADE\" and getting a bunch of tables I\n> didn't intend (or which didn't exist when the query was written).\n>\n\nA soft -1 from me here, though I understand the rationale here; you would\nbe unable to manually delete these records with the existing constraints if\nthere were a `grandchild` table without first removing those records too.\n(Maybe some method of previewing which relations/FKs would be involved here\nwould be a suitable compromise, but I have no idea what that would look\nlike or how it would work.) (Maybe just NOTICE: DELETE CASCADES to ... for\neach table, and people should know to wrap in a transaction if they don't\nknow what will happen.)\n\nDavid",
"msg_date": "Fri, 4 Jun 2021 15:53:10 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 04.06.21 22:24, David Christensen wrote:\n> So what are the necessary and sufficient conditions to check at this \n> point? The constraint already exists, so what permissions would we need \n> to check against which table(s) in order to grant this action?\n\nI think you would need DELETE privilege on all affected tables.\n\n\n\n",
"msg_date": "Sat, 5 Jun 2021 09:29:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 03.06.21 22:49, David Christensen wrote:\n> Presented for discussion is a POC for a DELETE CASCADE functionality, \n> which will allow you one-shot usage of treating existing NO ACTION and \n> RESTRICT FK constraints as if they were originally defined as CASCADE \n> constraints. I can't tell you how many times this functionality would \n> have been useful in the field, and despite the expected answer of \n> \"define your constraints right in the first place\", this is not always \n> an option, nor is the ability to change that easily (or create new \n> constraints that need to revalidate against big tables) always the best \n> option.\n\nI think, if we think this is useful, the other way around would also be \nuseful: Override a foreign key defined as ON DELETE CASCADE to behave as \nRESTRICT for a particular command.\n\n\n\n",
"msg_date": "Sat, 5 Jun 2021 09:30:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "\n> On Jun 5, 2021, at 2:30 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 03.06.21 22:49, David Christensen wrote:\n>> Presented for discussion is a POC for a DELETE CASCADE functionality, which will allow you one-shot usage of treating existing NO ACTION and RESTRICT FK constraints as if they were originally defined as CASCADE constraints. I can't tell you how many times this functionality would have been useful in the field, and despite the expected answer of \"define your constraints right in the first place\", this is not always an option, nor is the ability to change that easily (or create new constraints that need to revalidate against big tables) always the best option.\n> \n> I think, if we think this is useful, the other way around would also be useful: Override a foreign key defined as ON DELETE CASCADE to behave as RESTRICT for a particular command.\n\nI am not opposed to this, but I am struggling to come up with a use case. Where would this be useful?\n\nDavid\n\n",
"msg_date": "Sat, 5 Jun 2021 07:21:11 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Sat, 5 Jun 2021 at 03:30, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 03.06.21 22:49, David Christensen wrote:\n> > Presented for discussion is a POC for a DELETE CASCADE functionality,\n> > which will allow you one-shot usage of treating existing NO ACTION and\n> > RESTRICT FK constraints as if they were originally defined as CASCADE\n> > constraints. I can't tell you how many times this functionality would\n> > have been useful in the field, and despite the expected answer of\n> > \"define your constraints right in the first place\", this is not always\n> > an option, nor is the ability to change that easily (or create new\n> > constraints that need to revalidate against big tables) always the best\n> > option.\n>\n> I think, if we think this is useful, the other way around would also be\n> useful: Override a foreign key defined as ON DELETE CASCADE to behave as\n> RESTRICT for a particular command.\n>\n\nThis is not as obviously useful as the other, but might conceivably still\nhave applications.\n\nWe would need to be very careful about permissions. This is a substitute\nfor checking whether there are any matching rows in the referring tables\nand throwing an error manually in that case. My immediate reaction is that\nthis should require SELECT permission on the referring tables. Or to be\nmore precise, SELECT permission on the foreign key columns in the referring\ntables.",
"msg_date": "Sat, 5 Jun 2021 08:24:39 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "\n> On Jun 5, 2021, at 2:29 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> On 04.06.21 22:24, David Christensen wrote:\n>> So what are the necessary and sufficient conditions to check at this point? The constraint already exists, so what permissions would we need to check against which table(s) in order to grant this action?\n> \n> I think you would need DELETE privilege on all affected tables.\n\nSo basically where we are dispatching to the CASCADE guts, first check session user’s DELETE permission and throw the normal permissions error if they can’t delete?\n\n\n",
"msg_date": "Sat, 5 Jun 2021 07:25:39 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 05.06.21 14:21, David Christensen wrote:\n> \n>> On Jun 5, 2021, at 2:30 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 03.06.21 22:49, David Christensen wrote:\n>>> Presented for discussion is a POC for a DELETE CASCADE functionality, which will allow you one-shot usage of treating existing NO ACTION and RESTRICT FK constraints as if they were originally defined as CASCADE constraints. I can't tell you how many times this functionality would have been useful in the field, and despite the expected answer of \"define your constraints right in the first place\", this is not always an option, nor is the ability to change that easily (or create new constraints that need to revalidate against big tables) always the best option.\n>>\n>> I think, if we think this is useful, the other way around would also be useful: Override a foreign key defined as ON DELETE CASCADE to behave as RESTRICT for a particular command.\n> \n> I am not opposed to this, but I am struggling to come up with a use case. Where would this be useful?\n\nIf you suspect a primary key row is no longer used, you want to delete \nit, but don't want to accidentally delete it if it's still used.\n\nI sense more complicated concurrency and permission issues, however.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 09:54:00 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 05.06.21 14:25, David Christensen wrote:\n> \n>> On Jun 5, 2021, at 2:29 AM, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 04.06.21 22:24, David Christensen wrote:\n>>> So what are the necessary and sufficient conditions to check at this point? The constraint already exists, so what permissions would we need to check against which table(s) in order to grant this action?\n>>\n>> I think you would need DELETE privilege on all affected tables.\n> \n> So basically where we are dispatching to the CASCADE guts, first check session user’s DELETE permission and throw the normal permissions error if they can’t delete?\n\nActually, you also need appropriate SELECT permissions that correspond \nto the WHERE clause of the DELETE statement.\n\n\n",
"msg_date": "Mon, 7 Jun 2021 09:56:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Mon, Jun 7, 2021 at 2:54 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 05.06.21 14:21, David Christensen wrote:\n> >\n> >> On Jun 5, 2021, at 2:30 AM, Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >> On 03.06.21 22:49, David Christensen wrote:\n> >>> Presented for discussion is a POC for a DELETE CASCADE functionality,\n> which will allow you one-shot usage of treating existing NO ACTION and\n> RESTRICT FK constraints as if they were originally defined as CASCADE\n> constraints. I can't tell you how many times this functionality would have\n> been useful in the field, and despite the expected answer of \"define your\n> constraints right in the first place\", this is not always an option, nor is\n> the ability to change that easily (or create new constraints that need to\n> revalidate against big tables) always the best option.\n> >>\n> >> I think, if we think this is useful, the other way around would also be\n> useful: Override a foreign key defined as ON DELETE CASCADE to behave as\n> RESTRICT for a particular command.\n> >\n> > I am not opposed to this, but I am struggling to come up with a use\n> case. Where would this be useful?\n>\n> If you suspect a primary key row is no longer used, you want to delete\n> it, but don't want to accidentally delete it if it's still used.\n>\n\nOkay, I can see that.\n\n\n> I sense more complicated concurrency and permission issues, however.\n>\n\nAssuming this happens in the same transaction, wouldn't this just work? 
Or\nare you thinking there needs to be some sort of predicate lock to prevent a\nconcurrent add of the referencing record in the FK table?\n\nDavid",
"msg_date": "Tue, 8 Jun 2021 14:25:49 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": ">\n> > So basically where we are dispatching to the CASCADE guts, first check\n> session user’s DELETE permission and throw the normal permissions error if\n> they can’t delete?\n>\n> Actually, you also need appropriate SELECT permissions that correspond\n> to the WHERE clause of the DELETE statement.\n>\n\nSo this would be both a table-level and column level check? (It seems odd\nthat we could configure a policy whereby we could DELETE an arbitrary row\nin the table, but not SELECT which one, but I can see that there could be\ninformation leakage implications here.)\n\nOther permissions-level things to consider, like RLS, or is this part\nautomatic? Do you happen to know offhand another instance in the code\nwhich takes these granular permissions into consideration? Might help\nbootstrap both the understanding and the implementation of this.\n\nThanks,\n\nDavid",
"msg_date": "Tue, 8 Jun 2021 14:29:28 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 08.06.21 21:29, David Christensen wrote:\n> > So basically where we are dispatching to the CASCADE guts, first\n> check session user’s DELETE permission and throw the normal\n> permissions error if they can’t delete?\n> \n> Actually, you also need appropriate SELECT permissions that correspond\n> to the WHERE clause of the DELETE statement.\n> \n> \n> So this would be both a table-level and column level check? (It seems \n> odd that we could configure a policy whereby we could DELETE an \n> arbitrary row in the table, but not SELECT which one, but I can see that \n> there could be information leakage implications here.)\n> \n> Other permissions-level things to consider, like RLS, or is this part \n> automatic? Do you happen to know offhand another instance in the code \n> which takes these granular permissions into consideration? Might help \n> bootstrap both the understanding and the implementation of this.\n\nAll of this permissions checking code already exists in the executor, so \nthe question perhaps isn't so much which details to consider but how to \nmake use of that code. If you convince the trigger actions to run as \nthe invoker of the original DELETE rather than whatever they are doing \nnow, that should all work correctly. How to do that, I don't know right \nnow.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 09:17:52 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On 08.06.21 21:25, David Christensen wrote:\n> I sense more complicated concurrency and permission issues, however.\n> \n> Assuming this happens in the same transaction, wouldn't this just work? \n> Or are you thinking there needs to be some sort of predicate lock to \n> prevent a concurrent add of the referencing record in the FK table?\n\nIt might work, I'm just saying it needs to be thought about carefully. \nIf you have functionality like, delete this if there is no matching \nrecord over there, you need to have the permission to check that and \nneed to make sure it stays that way.\n\n\n",
"msg_date": "Wed, 9 Jun 2021 09:21:45 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Wednesday, June 9, 2021, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n>\n> It might work, I'm just saying it needs to be thought about carefully. If\n> you have functionality like, delete this if there is no matching record\n> over there, you need to have the permission to check that and need to make\n> sure it stays that way.\n>\n>\nWhich I believe the presence of an existing foreign key does quite nicely.\nThus if the executing user is the table owner (group membership usually)\nand a FK already exists, the conditions for the cascade are fulfilled,\nincluding locking I would think, because that FK could have been defined to\njust do it without all this. We are effectively just temporarily changing\nthat aspect of the foreign key - the behavior should be identical to on\ncascade delete.\n\n I require convincing that there is a use case that requires laxer\npermissions. Especially if we can solve the whole changing of the cascade\noption without having to drop the foreign key.\n\nDavid J.",
"msg_date": "Wed, 9 Jun 2021 06:48:00 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Wed, Jun 9, 2021 at 8:48 AM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n> On Wednesday, June 9, 2021, Peter Eisentraut <\n> peter.eisentraut@enterprisedb.com> wrote:\n>\n>>\n>> It might work, I'm just saying it needs to be thought about carefully. If\n>> you have functionality like, delete this if there is no matching record\n>> over there, you need to have the permission to check that and need to make\n>> sure it stays that way.\n>>\n>>\n> Which I believe the presence of an existing foreign key does quite\n> nicely. Thus if the executing user is the table owner (group membership\n> usually) and a FK already exists, the conditions for the cascade are\n> fulfilled, including locking I would think, because that FK could have been\n> defined to just do it without all this. We are effectively just\n> temporarily changing that aspect of the foreign key - the behavior should\n> be identical to on cascade delete.\n>\n\nI think Peter is referring to the DELETE RESTRICT proposed mirror behavior\nin this specific case, not DELETE CASCADE specifically.\n\n\n> I require convincing that there is a use case that requires laxer\n> permissions. Especially if we can solve the whole changing of the cascade\n> option without having to drop the foreign key.\n>\n\nThis was my original feeling as well, though really if I was going to run\nthis operation it would likely already be the database owner or superuser,\nso my desire to make this work in all situations is tempered with my desire\nto just have the basic functionality available at *some* level. :-)\n\nDavid",
"msg_date": "Wed, 9 Jun 2021 09:32:36 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "\nDavid G. Johnston writes:\n\n> Having the defined FK behaviors be more readily changeable, while not\n> mitigating this need, is IMO a more important feature to implement. If\n> there is a reason that cannot be implemented (besides no one has bothered\n> to take the time) then I would consider that reason to also apply to\n> prevent implementing this work-around.\n>\n> David J.\n\nI assume this would look something like:\n\nALTER TABLE foo ALTER CONSTRAINT my_fkey ON UPDATE CASCADE ON DELETE RESTRICT\n\nwith omitted referential_action implying preserving the existing one.\n\nSeems if we were going to tackle this particular problem, there would be two possible approaches\nhere:\n\n1) Change the definitions of the RI_FKey_* constraints for (at least) RI_FKey_*_del() to instead\nshare a single function definition RI_FKey_del() and then pass the constraint type operation\n(restrict, cascade, no action, etc) in as a trigger argument instead of having separate functions for\neach constraint type here. This would then ensure that the dispatch function could both change the\nconstraint just by modifying the trigger arguments, as well as allowing for potential different behavior\ndepending on how the underlying function is called.\n\n2) Keep the existing RI trigger functions, but allow an ALTER CONSTRAINT variant to replace the\ntrigger function to the new desired value, preserving (or transforming, as necessary) the original\narguments.\n\nA few things off-hand:\n\n- pg_trigger locking will be necessary as we change the underlying args for the tables in\n  question. 
This seems unavoidable.\n\n- partitions; do we need to lock them all in both solutions, or can we get away without it in the\n first approach?\n\n- with the first solution you would lose the self-describing name of the trigger functions\n themselves (moving to the trigger args instead); while it is a change in a very long-standing\n behavior/design, it *should* be an implementation detail, and allows things like the explicit\n DELETE [ RESTRICT | CASCADE ] the original patch was pushing for.\n\n- probably more I haven't thought of.\n\nBest,\n\nDavid\n\n\n",
"msg_date": "Wed, 07 Jul 2021 13:16:36 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "[ a couple of random thoughts after quickly scanning this thread ... ]\n\nDavid Christensen <david.christensen@crunchydata.com> writes:\n> I assume this would look something like:\n> ALTER TABLE foo ALTER CONSTRAINT my_fkey ON UPDATE CASCADE ON DELETE RESTRICT\n> with omitted referential_action implying preserving the existing one.\n\nI seem to remember somebody working on exactly that previously, though\nit's evidently not gotten committed. In any case, we already have\n\n\tALTER TABLE ... ALTER CONSTRAINT constraint_name\n\t[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]\n\nwhich has to modify pg_trigger rows, so it's hard to see why it'd\nbe at all difficult to implement this using code similar to that\n(maybe even sharing much of the code).\n\nReturning to the original thought of a DML statement option to temporarily\noverride the referential_action, I wonder why only temporarily-set-CASCADE\nwas considered. It seems to me like there might also be use-cases for\ntemporarily selecting the SET NULL or SET DEFAULT actions.\n\nAnother angle is that if we consider the deferrability properties as\nprecedent, there already is a way to override an FK constraint's\ndeferrability for the duration of a transaction: see SET CONSTRAINTS.\nSo I wonder if maybe the way to treat this is to invent something like\n\n\tSET CONSTRAINTS my_fk_constraint [,...] ON DELETE referential_action\n\nwhich would override the constraints' action for the remainder of the\ntransaction. 
(Permission needs TBD, but probably the same as you\nwould need to create a new FK constraint on the relevant table.)\n\nIn comparison to the original proposal, this'd force you to be explicit\nabout which constraint(s) you intend to override, but TBH I think that's\na good thing.\n\nOne big practical problem, which we've never addressed in the context of\nSET CONSTRAINTS but maybe it's time to tackle, is that the SQL spec\ndefines the syntax like that because it thinks constraint names are\nunique per-schema; thus a possibly-schema-qualified name is sufficient\nID. Of course we say that constraint names are only unique per-table,\nso SET CONSTRAINTS has always had this issue of not being very carefully\ntargeted. I think we could do something like extending the syntax\nto be\n\n\tSET CONSTRAINTS conname [ON tablename] [,...] new_properties\n\nAnyway, just food for thought --- I'm not necessarily set on any\nof this.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 24 Sep 2021 13:00:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
    "msg_contents": "\nTom Lane <tgl@sss.pgh.pa.us> writes:\n\n> [ a couple of random thoughts after quickly scanning this thread ... ]\n>\n> David Christensen <david.christensen@crunchydata.com> writes:\n>> I assume this would look something like:\n>> ALTER TABLE foo ALTER CONSTRAINT my_fkey ON UPDATE CASCADE ON DELETE RESTRICT\n>> with omitted referential_action implying preserving the existing one.\n>\n> I seem to remember somebody working on exactly that previously, though\n> it's evidently not gotten committed. In any case, we already have\n>\n> \tALTER TABLE ... ALTER CONSTRAINT constraint_name\n> \t[ DEFERRABLE | NOT DEFERRABLE ] [ INITIALLY DEFERRED | INITIALLY IMMEDIATE ]\n>\n> which has to modify pg_trigger rows, so it's hard to see why it'd\n> be at all difficult to implement this using code similar to that\n> (maybe even sharing much of the code).\n\nSure; this was my assumption as well.\n\n> Returning to the original thought of a DML statement option to temporarily\n> override the referential_action, I wonder why only temporarily-set-CASCADE\n> was considered. It seems to me like there might also be use-cases for\n> temporarily selecting the SET NULL or SET DEFAULT actions.\n\nAgreed; DELETE CASCADE had been the originating use case, but no reason to limit it to just that\naction. (I'd later expanded it to add RESTRICT as well, but barring implementation issues, probably\nno reason to limit any of referential action.)\n\n> Another angle is that if we consider the deferrability properties as\n> precedent, there already is a way to override an FK constraint's\n> deferrability for the duration of a transaction: see SET CONSTRAINTS.\n> So I wonder if maybe the way to treat this is to invent something like\n>\n> \tSET CONSTRAINTS my_fk_constraint [,...] ON DELETE referential_action\n>\n> which would override the constraints' action for the remainder of the\n> transaction. (Permission needs TBD, but probably the same as you\n> would need to create a new FK constraint on the relevant table.)\n>\n> In comparison to the original proposal, this'd force you to be explicit\n> about which constraint(s) you intend to override, but TBH I think that's\n> a good thing.\n\nI can see the argument for this in terms of being cautious/explicit about what gets removed, however\nthe utility in this particular form was related to being able to *avoid* having to manually figure out\nthe relationship chains and the specific constraints involved. Might there be some sort of middle\nground here?\n\n> One big practical problem, which we've never addressed in the context of\n> SET CONSTRAINTS but maybe it's time to tackle, is that the SQL spec\n> defines the syntax like that because it thinks constraint names are\n> unique per-schema; thus a possibly-schema-qualified name is sufficient\n> ID. Of course we say that constraint names are only unique per-table,\n> so SET CONSTRAINTS has always had this issue of not being very carefully\n> targeted. I think we could do something like extending the syntax\n> to be\n>\n> \tSET CONSTRAINTS conname [ON tablename] [,...] new_properties\n\nThis part seems reasonable. I need to look at how the existing SET CONSTRAINTS is implemented;\nwould be interesting to see how the settings per-table/session are managed, as that would be\nillustrative to additional transient state like this.\n\n> Anyway, just food for thought --- I'm not necessarily set on any\n> of this.\n\nSure thing; I appreciate even a little bit of your attention here.\n\nBest,\n\nDavid\n\n-- \n\n\n",
"msg_date": "Wed, 29 Sep 2021 15:55:22 -0500",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "Hi,\n\nOn Wed, Sep 29, 2021 at 03:55:22PM -0500, David Christensen wrote:\n> \n> I can see the argument for this in terms of being cautious/explicit about what gets removed, however\n> the utility in this particular form was related to being able to *avoid* having to manually figure out\n> the relationship chains and the specific constraints involved. Might there be some sort of middle\n> ground here?\n> [...]\n> > I think we could do something like extending the syntax to be\n> >\n> > \tSET CONSTRAINTS conname [ON tablename] [,...] new_properties\n> \n> This part seems reasonable. I need to look at how the existing SET CONSTRAINTS is implemented;\n> would be interesting to see how the settings per-table/session are managed, as that would be\n> illustrative to additional transient state like this.\n\nThe cfbot reports that this patch doesn't apply anymore:\nhttp://cfbot.cputube.org/patch_36_3195.log\n\n> patching file src/backend/utils/adt/ri_triggers.c\n> Hunk #1 succeeded at 93 (offset 3 lines).\n> Hunk #2 FAILED at 181.\n> Hunk #3 succeeded at 556 (offset 5 lines).\n> Hunk #4 succeeded at 581 (offset 5 lines).\n> Hunk #5 succeeded at 755 (offset 5 lines).\n> Hunk #6 succeeded at 776 (offset 5 lines).\n> 1 out of 6 hunks FAILED -- saving rejects to file src/backend/utils/adt/ri_triggers.c.rej\n\nAre you currently working on a possibly different approach and/or grammar? If\nnot, could you send a rebased patch? In the meantime I will switch the cf\nentry to Waiting on Author.\n\n\n",
"msg_date": "Wed, 12 Jan 2022 16:57:27 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "Hi,\n\nOn Wed, Jan 12, 2022 at 04:57:27PM +0800, Julien Rouhaud wrote:\n> \n> The cfbot reports that this patch doesn't apply anymore:\n> http://cfbot.cputube.org/patch_36_3195.log\n> \n> > patching file src/backend/utils/adt/ri_triggers.c\n> > Hunk #1 succeeded at 93 (offset 3 lines).\n> > Hunk #2 FAILED at 181.\n> > Hunk #3 succeeded at 556 (offset 5 lines).\n> > Hunk #4 succeeded at 581 (offset 5 lines).\n> > Hunk #5 succeeded at 755 (offset 5 lines).\n> > Hunk #6 succeeded at 776 (offset 5 lines).\n> > 1 out of 6 hunks FAILED -- saving rejects to file src/backend/utils/adt/ri_triggers.c.rej\n> \n> Are you currently working on a possibly different approach and/or grammar? If\n> not, could you send a rebased patch? In the meantime I will switch the cf\n> entry to Waiting on Author.\n\nIt's been almost 4 months since your last email, and almost 2 weeks since the\nnotice that this patch doesn't apply anymore. Without update in the next\ncouple of days this patch will be closed as Returned with Feedback per the\ncommitfest rules.\n\n\n",
"msg_date": "Tue, 25 Jan 2022 22:26:53 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "Hi,\n\nOn Tue, Jan 25, 2022 at 10:26:53PM +0800, Julien Rouhaud wrote:\n> \n> It's been almost 4 months since your last email, and almost 2 weeks since the\n> notice that this patch doesn't apply anymore. Without update in the next\n> couple of days this patch will be closed as Returned with Feedback per the\n> commitfest rules.\n\nClosed.\n\n\n",
"msg_date": "Mon, 31 Jan 2022 11:46:44 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "On Sun, Jan 30, 2022 at 9:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Tue, Jan 25, 2022 at 10:26:53PM +0800, Julien Rouhaud wrote:\n> >\n> > It's been almost 4 months since your last email, and almost 2 weeks\n> since the\n> > notice that this patch doesn't apply anymore. Without update in the next\n> > couple of days this patch will be closed as Returned with Feedback per\n> the\n> > commitfest rules.\n>\n> Closed.\n>\n\nSounds good; when I get time to look at this again I will resubmit (if\npeople think the base functionality is worth it, which is still a topic of\ndiscussion).\n\nDavid\n\nOn Sun, Jan 30, 2022 at 9:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:Hi,\n\nOn Tue, Jan 25, 2022 at 10:26:53PM +0800, Julien Rouhaud wrote:\n> \n> It's been almost 4 months since your last email, and almost 2 weeks since the\n> notice that this patch doesn't apply anymore. Without update in the next\n> couple of days this patch will be closed as Returned with Feedback per the\n> commitfest rules.\n\nClosed.Sounds good; when I get time to look at this again I will resubmit (if people think the base functionality is worth it, which is still a topic of discussion).David",
"msg_date": "Mon, 31 Jan 2022 09:14:21 -0600",
"msg_from": "David Christensen <david.christensen@crunchydata.com>",
"msg_from_op": true,
"msg_subject": "Re: DELETE CASCADE"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 31, 2022 at 09:14:21AM -0600, David Christensen wrote:\n> \n> Sounds good; when I get time to look at this again I will resubmit (if\n> people think the base functionality is worth it, which is still a topic of\n> discussion).\n\nYes, please do! Sorry I should have mentioned it, if a patch is closed as\nReturn with Feedback it means that the feature is wanted, which was my\nunderstanding backlogging this thread, so submitting in some later commit fest\nis expected.\n\n\n",
"msg_date": "Tue, 1 Feb 2022 00:30:34 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: DELETE CASCADE"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at write_relcache_init_file() where an attempt is made to\nunlink the tempfilename.\n\nHowever, the return value is not checked.\nIf the tempfilename is not removed (the file exists), I think we should log\na warning and proceed.\n\nPlease comment on the proposed patch.\n\nThanks",
"msg_date": "Thu, 3 Jun 2021 15:44:13 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "checking return value from unlink in write_relcache_init_file"
},
{
"msg_contents": "On Thu, Jun 03, 2021 at 03:44:13PM -0700, Zhihong Yu wrote:\n> Hi,\n> I was looking at write_relcache_init_file() where an attempt is made to\n> unlink the tempfilename.\n> \n> However, the return value is not checked.\n> If the tempfilename is not removed (the file exists), I think we should log\n> a warning and proceed.\n> \n> Please comment on the proposed patch.\n\n- unlink(tempfilename); /* in case it exists w/wrong permissions */\n+ /* in case it exists w/wrong permissions */\n+ if (unlink(tempfilename) && errno != ENOENT)\n+ {\n+ ereport(WARNING,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not unlink relation-cache initialization file \\\"%s\\\": %m\",\n+ tempfilename),\n+ errdetail(\"Continuing anyway, but there's something wrong.\")));\n+ return;\n+ }\n+\n \n fp = AllocateFile(tempfilename, PG_BINARY_W);\n\nThe comment here is instructive: the unlink is in advance of AllocateFile(),\nand if the file exists with wrong permissions, then AllocateFile would itself fail,\nand then issue a warning:\n\n errmsg(\"could not create relation-cache initialization file \\\"%s\\\": %m\",\n tempfilename),\n errdetail(\"Continuing anyway, but there's something wrong.\")));\n\nIn that context, I don't think it's needed to check the return of unlink().\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 3 Jun 2021 17:54:14 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: checking return value from unlink in write_relcache_init_file"
},
{
"msg_contents": "Zhihong Yu <zyu@yugabyte.com> writes:\n> Please comment on the proposed patch.\n\nIf the unlink fails, there's only really a problem if the subsequent\nopen() fails to overwrite the file --- and that stanza is perfectly\ncapable of complaining for itself. So I think the code is fine and\nthere's no need for a separate message about the unlink. Refusing to\nproceed, as you've done here, is strictly worse than what we have.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 18:56:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: checking return value from unlink in write_relcache_init_file"
},
{
    "msg_contents": "On 2021-Jun-03, Tom Lane wrote:\n\n> If the unlink fails, there's only really a problem if the subsequent\n> open() fails to overwrite the file --- and that stanza is perfectly\n> capable of complaining for itself. So I think the code is fine and\n> there's no need for a separate message about the unlink. Refusing to\n> proceed, as you've done here, is strictly worse than what we have.\n\nIt does seem to deserve a comment explaining this.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n",
"msg_date": "Thu, 3 Jun 2021 20:55:57 -0400",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: checking return value from unlink in write_relcache_init_file"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2021-Jun-03, Tom Lane wrote:\n>> If the unlink fails, there's only really a problem if the subsequent\n>> open() fails to overwrite the file --- and that stanza is perfectly\n>> capable of complaining for itself. So I think the code is fine and\n>> there's no need for a separate message about the unlink. Refusing to\n>> proceed, as you've done here, is strictly worse than what we have.\n\n> It does seem to deserve a comment explaining this.\n\nAgreed, the existing comment there is a tad terse.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Jun 2021 21:16:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: checking return value from unlink in write_relcache_init_file"
},
{
"msg_contents": "On Thu, Jun 3, 2021 at 6:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2021-Jun-03, Tom Lane wrote:\n> >> If the unlink fails, there's only really a problem if the subsequent\n> >> open() fails to overwrite the file --- and that stanza is perfectly\n> >> capable of complaining for itself. So I think the code is fine and\n> >> there's no need for a separate message about the unlink. Refusing to\n> >> proceed, as you've done here, is strictly worse than what we have.\n>\n> > It does seem to deserve a comment explaining this.\n>\n> Agreed, the existing comment there is a tad terse.\n>\n> regards, tom lane\n>\nHi,\nHere is the patch with a bit more comment on the unlink() call.\n\nCheers",
"msg_date": "Thu, 3 Jun 2021 18:37:07 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: checking return value from unlink in write_relcache_init_file"
}
] |
[
{
"msg_contents": "Found a few small typos in the docs as per the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Fri, 4 Jun 2021 10:17:25 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "A few random typos in the docs"
},
{
"msg_contents": "On Fri, 4 Jun 2021 at 20:17, Daniel Gustafsson <daniel@yesql.se> wrote:\n> Found a few small typos in the docs as per the attached.\n\nThe patch looks good. These all seem to be new as of PG14.\n\nI can take care of this.\n\nDavid\n\n\n",
"msg_date": "Fri, 4 Jun 2021 23:30:04 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: A few random typos in the docs"
}
] |